\section{Introduction}
\label{sec:intro}
Recently, the E989 experiment at Fermilab has measured the anomalous magnetic moment (AMM) of the muon, $a_\mu$ = $(g - 2)_\mu/2$, finding a discrepancy with respect to the theoretical prediction of the Standard
Model (SM) \cite{Abi:2021gix}
\begin{eqnarray}
a^{\rm FNAL}_\mu = 116 592 040(54) \times 10^{-11}\\
a^{\rm SM}_\mu = 116 591 810(43) \times 10^{-11}.
\end{eqnarray}
When combined with the previous Brookhaven determination
\begin{equation}
a^{\rm BNL}_\mu = 116 592 089(63) \times 10^{-11},
\end{equation}
it leads to an observed $4.2\sigma$ excess of
$\Delta a_\mu = 251(59) \times 10^{-11}$ \footnote{It should, however, be noted that the latest lattice results \cite{Borsanyi:2020mff} predict a larger value of muon $(g-2)$, bringing it closer to the experimental value. Tension of the measured muon $(g-2)$ with global electroweak fits of $e^+ e^- \to$ hadrons data was also reported in \cite{Crivellin:2020zul, Colangelo:2020lcg, Keshavarzi:2020bfy}.}. The theoretical status of the SM calculation of the muon AMM can be found in \cite{Aoyama:2020ynm}. While this anomaly has been known for a long time, since the Brookhaven measurements \cite{Muong-2:2001kxu}, the recent Fermilab measurement has led to several works updating possible theoretical models with the new data, a comprehensive review of which may be found in \cite{Athron:2021iuf}. Earlier reviews on this topic can be found in \cite{Jegerlehner:2009ry, Lindner:2016bgg}.
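The $4.2\sigma$ figure follows from a standard inverse-variance combination of the two measurements. As a quick cross-check of the quoted numbers, the short script below (a sketch; the variable names are ours) reproduces the world average and the significance of the excess.
\begin{verbatim}
import math

# central values and uncertainties in units of 1e-11
a_fnal, s_fnal = 116592040.0, 54.0
a_bnl,  s_bnl  = 116592089.0, 63.0
a_sm,   s_sm   = 116591810.0, 43.0

# inverse-variance weighted experimental average
w1, w2 = 1.0 / s_fnal**2, 1.0 / s_bnl**2
a_exp = (w1 * a_fnal + w2 * a_bnl) / (w1 + w2)
s_exp = (w1 + w2) ** -0.5          # ~41

delta = a_exp - a_sm               # ~251
s_delta = math.hypot(s_exp, s_sm)  # ~59
print(delta, s_delta, delta / s_delta)  # ~4.2 sigma
\end{verbatim}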
In this work, we consider an anomaly-free $U(1)_X$ gauge extension of the SM where the first two generations of charged fermions acquire masses only at the radiative level. While triangle anomalies cancel due to the addition of chiral fermion triplets, giving rise to a type III seesaw origin of light neutrino masses, the new fields introduced for radiative charged fermion masses can also accommodate a dark matter (DM) candidate, provided the lightest of them is neutral and stable. Focusing primarily on the radiative muon mass and the muon AMM, we constrain the model from the requirement of satisfying the muon mass and the latest muon $(g-2)$ data, along with other relevant bounds like the Higgs coupling to muons as measured at the Large Hadron Collider (LHC), the Higgs to diphoton bound as well as direct search bounds on beyond standard model (BSM) particles. We also constrain the model from the requirement of generating the desired DM phenomenology. Radiative charged lepton masses in the context of the AMM have been a topic of interest for many years and several interesting works have already appeared in the literature within supersymmetric \cite{Borzumati:1999sp, Czarnecki:2001pv, Crivellin:2010ty, Thalapillil:2014kya} as well as non-supersymmetric frameworks \cite{Fraser:2014ija, Fraser:2015zed, Calibbi:2020emz, Yin:2021yqy, Chiang:2021pma, Baker:2021yli}. On the other hand, the connection between dark matter and muon $(g-2)$ has also been studied in several earlier works, but with a tree level muon mass \cite{Calibbi:2018rzv, Kawamura:2020qxo, Chen:2020tfr, Jana:2020joi, Kowalska:2017iqv, Kowalska:2020zve, Arcadi:2021cwg, Chowdhury:2021tnm}.
We provide a natural common origin of the muon AMM, the radiative muon mass and dark matter in a sequential $U(1)_X$ gauged model that can also explain light neutrino masses from the type III seesaw. The particle content and the corresponding $U(1)_X$ charge assignments are chosen in such an anomaly-free way that additional global symmetries are not required. The radiative muon mass leads to an anomalous Higgs coupling to the muon which can be probed at the LHC. In spite of having several BSM particles and free parameters, we find the model to be highly constrained once all the relevant bounds are imposed.
This paper is organised as follows. In section \ref{sec:model}, we briefly discuss the model. In section \ref{sec:g-2}, we discuss the possible origin of muon $(g-2)$ in this model followed by discussion of electroweak precision constraints in section \ref{sec:S&T}. We briefly comment upon electric dipole moment and lepton flavour violation constraints in section \ref{sec:edmlfv} followed by discussion of collider constraints in section \ref{sec:lhc}. In section \ref{sec:DM} we discuss DM details and summarise our results in section \ref{sec:conclude}.
\section{The Model}
\label{sec:model}
\begin{center}
\begin{table}
\caption{Fermion content of the minimal model.}
\label{table1}
\begin{tabular}{|c|c|c|}
\hline
Particle & $SU(3)_c \times SU(2)_L \times U(1)_Y$ & $U(1)_X$ \\
\hline
$ (u,d)_L $ & $(3,2,\frac{1}{6})$ & $n_1$ \\
$ u_R $ & $(\bar{3},1,\frac{2}{3})$ & $\frac{1}{4}(7 n_1 -3 n_4)$ \\
$ d_R $ & $(\bar{3},1,-\frac{1}{3})$ & $\frac{1}{4} (n_1 +3 n_4)$ \\
$ (\nu, e)_L $ & $(1,2,-\frac{1}{2})$ & $n_4$ \\
$e_R$ & $(1,1,-1)$ & $\frac{1}{4} (-9 n_1 +5 n_4)$ \\
\hline
$\Sigma_{R} $ & $(1,3,0)$ & $\frac{1}{4}(3n_1+n_4)$ \\
\hline
\end{tabular}
\end{table}
\end{center}
The fermion content of the minimal model is shown in table \ref{table1}. The $U(1)_X$ charges correspond to an anomaly-free combination, with $n_1, n_4$ arbitrary subject to $n_4 \neq -3n_1$. While such an Abelian extension of the standard model was studied before \cite{Ma:2002pf, Barr:2005je, Adhikari:2008uc, Adhikari:2015woo, Bhat:2019yqo} in different contexts, recently the possibility of having a sequential $U(1)_X$ with different quantum numbers for each family was proposed \cite{Ma:2021aag}. As a working example, $n_1=0$ for all three families while $n_4=2, 1, 0$ for the first, second and third families, respectively, were chosen. Now, if a single scalar doublet with zero $U(1)_X$ charge is chosen to be responsible for electroweak symmetry breaking, only the third generation quarks and charged leptons can acquire masses at the renormalisable level\footnote{See \cite{Balakrishna:1987qd, Balakrishna:1988ks, Ma:1990ce, Barr:1989ta, Babu:1988fn, Weinberg:2020zba} for earlier discussions on fermion mass hierarchy through sequential loop suppression.}. The field content of the minimal model with such choices of $n_1, n_4$ is shown in table \ref{table2}. As discussed in \cite{Ma:2021aag}, such a minimal setup leads to tree level third generation charged fermion masses, while the first and second generation masses arise only at the dimension six and dimension five levels, respectively, leading to a natural suppression.
\begin{center}
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Particle & $SU(3)_c \times SU(2)_L \times U(1)_Y$ & $U(1)_X$ \\
\hline
$ (u,d)_L, (c,s)_L, (t,b)_L $ & $(3,2,\frac{1}{6})$ & $0$ \\
$ u_R, c_R, t_R $ & $(\bar{3},1,\frac{2}{3})$ & $-\frac{3}{2}, -\frac{3}{4}, 0$ \\
$ d_R, s_R, b_R $ & $(\bar{3},1,-\frac{1}{3})$ & $\frac{3}{2}, \frac{3}{4}, 0$ \\
$ (\nu_e, e)_L, (\nu_{\mu}, \mu)_L, (\nu_{\tau}, \tau)_L $ & $(1,2,-\frac{1}{2})$ & $2, 1, 0$ \\
$e_R, \mu_R, \tau_R$ & $(1,1,-1)$ & $\frac{5}{2}, \frac{5}{4}, 0$ \\
\hline
$\Sigma^e_{R}, \Sigma^{\mu}_{R}, \Sigma^{\tau}_{R} $ & $(1,3,0)$ & $\frac{1}{2}, \frac{1}{4}, 0$ \\
\hline
$\Phi=(\phi^+, \phi^0)$ & $(1,2,\frac{1}{2})$ & $0$ \\
$\eta_1, \eta_2$ & $(1,1,0)$ & $\frac{1}{4}, \frac{3}{4}$ \\
\hline
\end{tabular}
\caption{Particle content of the minimal model with the chosen $n_1, n_4$.}
\label{table2}
\end{table}
\end{center}
\begin{figure}
\centering
\includegraphics[scale=0.75]{feynman1.pdf}
\caption{One-loop contribution to muon mass.}
\label{fig1}
\end{figure}
Clearly, one can consider additional field content in order to provide a UV complete realisation for such higher dimensional operators for first and second generation masses. For example, muon mass can arise at one-loop level, in scotogenic fashion \cite{Ma:2006km}, after introducing the particles shown in table \ref{table3}. The corresponding one-loop diagram is shown in figure \ref{fig1}.
\begin{center}
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Particle & $SU(3)_c \times SU(2)_L \times U(1)_Y$ & $U(1)_X$ \\
\hline
$N_{L,R} $ & $(1,1,0)$ & $-\frac{1}{4}$ \\
$\zeta = (\zeta^+, \zeta^0)$ & $(1,2,\frac{1}{2})$ & $-\frac{5}{4}$ \\
$\rho$ & $(1,1,1)$ & $ -\frac{3}{2}$ \\
\hline
\end{tabular}
\caption{Particles responsible for scotogenic muon mass.}
\label{table3}
\end{table}
\end{center}
Similarly, additional fields can be introduced to generate the other first and second generation charged fermion masses, as well as Dirac neutrino masses, at the radiative level. Here we focus only on the new physics responsible for the one-loop origin of the muon mass in the context of dark matter, muon $(g-2)$ and LHC constraints. The relevant part of the Lagrangian for the muon mass is given by
\begin{align}
\mathcal{L} \supset -y_{\zeta} \bar{L}_{\mu} \tilde{\zeta} N_R - M_N \bar{N}_L N_R - y_{\rho} \bar{N}_L \rho \mu_R -\lambda \Phi \zeta \eta^{\dagger}_1 \rho^{\dagger} + {\rm h.c.}
\end{align}
As can be seen from the above Lagrangian, the newly introduced fields for the scotogenic muon mass always appear in pairs, i.e., in bilinears of the form $\psi^{\dagger}_i \psi_j$. This is due to the chosen $U(1)_X$ charge assignments of these fields. Therefore, the Lagrangian possesses a global $U(1)_D$ symmetry under which the fields shown in table \ref{table3} can transform non-trivially while the SM fields transform trivially \cite{Ma:2021aag}. As none of the scalar fields in table \ref{table3} acquires any vacuum expectation value (VEV), this symmetry remains unbroken, keeping the lightest particle with a non-trivial $U(1)_D$ charge stable; this particle is hence the DM candidate.
The one-loop muon mass can be estimated as
\begin{equation}
m_{\mu}=\frac{y_{\zeta} y_{\rho}}{16\pi^2} \frac{\lambda v u_1}{2} \frac{M_N}{M_{\chi^+_1} M_{\chi^+_2}} I(x_1, x_2)
\end{equation}
where $M_{\chi^+_1}, M_{\chi^+_2}$ are the physical masses of the scalars in the loop, which can be derived by diagonalising the charged scalar mass matrix given in Appendix \ref{appen1}. Here $v, u_1$ denote the VEVs of the neutral components of the SM Higgs doublet $\Phi$ and of the singlet scalar $\eta_1$, respectively. The physical mass eigenstates arise due to the mixing of $\zeta^+, \rho^+$ by an angle given by
\begin{equation}
\sin{2 \theta_{\rm ch}} = \frac{\lambda v u_1}{M^2_{\chi^+_1}-M^2_{\chi^+_2}}.
\label{eq:mixing1}
\end{equation}
The loop function $I(x_1, x_2)$ is given by
\begin{equation}
I(x_1, x_2)=\frac{\sqrt{x_1 x_2}}{x_1-x_2} \left ( \frac{x_1}{x_1-1} \ln{x_1}- \frac{x_2}{x_2-1} \ln{x_2} \right)
\end{equation}
where $x_{1}=M^2_{\chi^+_1}/M^2_N$, $x_{2}=M^2_{\chi^+_2}/M^2_N$. The effective coupling of the SM Higgs to the muon can be calculated from the same muon mass diagram as
\begin{equation}
Y^{\rm eff}_{\mu} = \frac{\sqrt{2} m_{\mu}}{v} \bigg [ \cos^2{(2\theta_{\rm ch})}+\frac{1}{2} \sin^2{(2\theta_{\rm ch})} \frac{\sqrt{x_1 x_2}}{I(x_1, x_2)} \left ( \frac{I(x_1)}{x_1}+\frac{I(x_2)}{x_2} \right ) \bigg ]
\label{muoncoupling}
\end{equation}
where
$$ I(x)=\frac{x}{x-1}-\frac{x \ln{x}}{(x-1)^2}. $$
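For orientation, the loop functions above are straightforward to evaluate numerically. The sketch below implements $I(x_1,x_2)$, $I(x)$ and the resulting $m_\mu$ and $Y^{\rm eff}_\mu$; the input masses, couplings and the value of $u_1$ are illustrative placeholders, not benchmark values from our scan.
\begin{verbatim}
import numpy as np

def I2(x1, x2):
    # two-argument loop function I(x1, x2)
    return np.sqrt(x1*x2)/(x1 - x2) * (
        x1/(x1 - 1.0)*np.log(x1) - x2/(x2 - 1.0)*np.log(x2))

def I1(x):
    # single-argument loop function I(x)
    return x/(x - 1.0) - x*np.log(x)/(x - 1.0)**2

v, u1 = 246.0, 1000.0                 # VEVs in GeV (u1 illustrative)
MN, M1, M2c = 1200.0, 1500.0, 1700.0  # N and charged scalar masses, GeV
y_zeta = y_rho = 0.7
lam = -0.8

x1, x2 = (M1/MN)**2, (M2c/MN)**2
m_mu = (y_zeta*y_rho/(16*np.pi**2) * lam*v*u1/2
        * MN/(M1*M2c) * I2(x1, x2))
s2 = lam*v*u1/(M1**2 - M2c**2)        # sin(2 theta_ch)
Yeff = (np.sqrt(2)*abs(m_mu)/v * ((1 - s2**2)
        + 0.5*s2**2*np.sqrt(x1*x2)/I2(x1, x2)
        * (I1(x1)/x1 + I1(x2)/x2)))
print(m_mu, Yeff)   # m_mu of order 0.1 GeV for these inputs
\end{verbatim}
The bracketed factor in equation \eqref{muoncoupling} quantifies the deviation of $Y^{\rm eff}_\mu$ from the SM value $\sqrt{2} m_\mu/v$; it reduces to unity as $\theta_{\rm ch}\to 0$.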
For details of the other fermion masses, including neutrinos, one may refer to \cite{Ma:2021aag}. The physical scalar spectrum and the couplings are given in Appendix \ref{appen1}. Clearly, the muon coupling to the SM Higgs gets changed from the usual SM value $\sqrt{2} m_{\mu}/v$ to the one shown in equation \eqref{muoncoupling} above. As can be seen from the full scalar potential of the model given in Appendix \ref{appen1}, in addition to the $\lambda \Phi \zeta \eta^{\dagger}_1 \rho^{\dagger}$ term discussed above, there exist other quartic couplings of the SM Higgs with scalars like $\zeta, \rho$. Since $\zeta, \rho$ also couple to muons, such additional quartic couplings can also lead to an anomalous Higgs coupling to muons without contributing to the muon mass at one loop. However, we have considered such additional quartic couplings to be small, so that the dominant contribution to the anomalous muon coupling to the Higgs arises from the same quartic coupling which also gives rise to the radiative muon mass, as discussed above. This anomalous muon coupling to the SM Higgs can be constrained from LHC observations, as we discuss in one of the upcoming sections. While the singlet scalar $\eta_2$ plays no role in the muon mass generation at one loop, it is required to generate the other fermion masses within a minimal setup, as discussed in \cite{Ma:2021aag}. Denoting the VEV of $\eta_2$ by $u_2$, the mass of the $U(1)_X$ gauge boson after symmetry breaking is $M_{Z_X}=g_X \sqrt{u^2_1+9u^2_2}/4$.
\begin{figure}
\centering
\includegraphics[scale=0.75]{feynman2.pdf}
\caption{One-loop contribution to muon $(g-2)$ from charged scalars.}
\label{fig2}
\end{figure}
\section{Muon Anomalous Magnetic Moment}
\label{sec:g-2}
As mentioned before, there is a $4.2\sigma$ discrepancy between the theoretical prediction and the experimental measurement of the muon AMM, which can potentially be explained by BSM physics. In the $U(1)_X$ gauge model we discuss here, there are two different contributions to muon $(g-2)$: one from charged scalars in the loop and another where the $U(1)_X$ gauge boson runs in the loop. While the contribution from the $U(1)_X$ gauge boson loop is sub-dominant for typical TeV scale masses, the contribution from the charged scalar loop can be enhanced. This is because the same loop particles also give rise to the muon mass, thereby removing the additional loop factor from the muon $(g-2)$ contributions \cite{Fraser:2015zed}. Similar discussions of muon $(g-2)$ in radiative muon mass models have also appeared recently in \cite{Baker:2021yli}. The charged scalar loop contribution to muon $(g-2)$ in our model is shown in figure \ref{fig2}, where $\chi^{+}_{1,2}$ are the mass eigenstates of $\zeta^+, \rho^+$ after symmetry breaking. The corresponding contribution to muon $(g-2)$ is given by \cite{Baker:2021yli}
\begin{align}
\Delta a_{\mu} & =\frac{m^2_{\mu}}{M^2_N} \left ( \frac{x_1 \ln{x_1}}{1-x_1}- \frac{x_2 \ln{x_2}}{1-x_2} \right)^{-1} \bigg [ \frac{3x_1-1}{(1-x_1)^2} -\frac{3x_2-1}{(1-x_2)^2} + \frac{2x^2_1 \ln{x_1}}{(1-x_1)^3}- \frac{2x^2_2 \ln{x_2}}{(1-x_2)^3} \nonumber \\
& + 2 \left ( \frac{1}{1-x_1}-\frac{1}{1-x_2}+ \frac{x_1 \ln{x_1}}{(1-x_1)^2}-\frac{x_2 \ln{x_2}}{(1-x_2)^2} \right ) \bigg ].
\end{align}
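A minimal numerical sketch of this expression is given below; the input masses are illustrative placeholders.
\begin{verbatim}
import numpy as np

def delta_a_mu(m_mu, MN, Mchi1, Mchi2):
    # charged scalar loop contribution to muon (g-2)
    x1, x2 = (Mchi1/MN)**2, (Mchi2/MN)**2
    pref = 1.0/(x1*np.log(x1)/(1 - x1) - x2*np.log(x2)/(1 - x2))
    f = lambda x: (3*x - 1)/(1 - x)**2 + 2*x**2*np.log(x)/(1 - x)**3
    g = lambda x: 1/(1 - x) + x*np.log(x)/(1 - x)**2
    return m_mu**2/MN**2 * pref * (f(x1) - f(x2) + 2*(g(x1) - g(x2)))

print(delta_a_mu(0.1057, 1200.0, 1500.0, 1700.0))
# ~2e-9, the right ballpark for the observed excess
\end{verbatim}
Note the absence of the usual $1/(16\pi^2)$ loop factor: it has effectively been traded for the muon mass itself, which is why this contribution can be sizeable.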
The neutral $U(1)_X$ gauge boson contribution to muon AMM (shown in figure \ref{figX}) can be written as \cite{Brodsky:1967sr, Baek:2008nz, Queiroz:2014zfa}
\begin{equation}
\Delta a_{\mu} = \frac{\alpha_X}{2\pi} \int^1_0 dx \frac{2m^2_{\mu} x^2 (1-x)}{x^2 m^2_{\mu}+(1-x)M^2_{Z_X}} \approx \frac{\alpha_X}{2\pi} \frac{2m^2_{\mu}}{3M^2_{Z_X}}
\end{equation}
where $\alpha_X=g^2_{X}/(4\pi)$. As shown in earlier works \cite{Bauer:2018onh, Borah:2020jzi, Borah:2021jzu, Borah:2021mri, Borah:2021khc}, the only region where such a neutral gauge boson contribution can explain the muon AMM is the sub-GeV mass regime with a corresponding gauge coupling smaller than $10^{-3}$. Since we consider the heavy gauge boson limit, the contribution from such neutral gauge bosons remains suppressed. In fact, since the $U(1)_X$ gauge boson couples to electrons as well, the bounds from low energy experiments related to dark photon searches are likely to rule out the low mass regime completely \cite{Bauer:2018onh}, leaving us with the explanation of the muon AMM from the charged scalar loop only.
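The suppression in the heavy gauge boson limit is easy to verify by comparing the exact integral with its approximation, as in the sketch below (the values of $g_X$ and $M_{Z_X}$ are illustrative).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m_mu = 0.1057              # GeV
MZX, gX = 2500.0, 0.03     # illustrative heavy Z_X
alpha_X = gX**2/(4*np.pi)

integrand = lambda x: (2*m_mu**2*x**2*(1 - x)
                       / (x**2*m_mu**2 + (1 - x)*MZX**2))
exact = alpha_X/(2*np.pi)*quad(integrand, 0.0, 1.0)[0]
approx = alpha_X/(2*np.pi)*2*m_mu**2/(3*MZX**2)
print(exact, approx)  # both ~1e-14, negligible vs ~2.5e-9
\end{verbatim}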
An important observation about muon $(g-2)$ is that if the muon mass originates at tree level, as in the SM, then a loop contribution from a scalar and a fermion is positive if the scalar (fermion) is neutral (charged), but negative if the scalar (fermion) is charged (neutral). However, if the muon mass is generated radiatively at one loop by a scalar and a fermion, as in our model, then the sign reverses. Therefore, even with the charged scalar loop shown in figure \ref{fig2}, we can still explain a positive $\Delta a_{\mu}$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{muon-x.pdf}
\caption{One-loop contribution to muon $(g-2)$ due to the extra $U(1)_X$ gauge boson.}
\label{figX}
\end{figure}
\section{Electroweak Precision Constraints}
\label{sec:S&T}
Another constraint on the model parameters arises from the electroweak precision data (EWPD) encoded in the Peskin-Takeuchi oblique parameters S and T. Due to the presence of the new scalar doublet ($\zeta$) and the charged singlet scalar ($\rho$), these oblique parameters can receive additional contributions. As shown in \cite{Grimus:2008nb, Cao:2017ffm}, the charged singlet scalar ($\rho$) contributes to the S parameter only and does not affect the T parameter at the one loop level. Moreover, the corresponding singlet scalar contribution remains small, well within the error bars. The contributions due to the scalar doublet ($\zeta$) can be written as \cite{Jueid:2020rek}
\begin{align}
S &= \frac{1}{12\pi} \ln \frac{M_{\zeta_R}^2}{M_{\chi_1^+}^2}, \nonumber \\
T &= \frac{1}{16\pi^2 \alpha v^2} F(M_{\chi_1^+}^2,M_{\zeta_R}^2),
\end{align}
where $F(x,y)$ is the loop function and can be expressed as
\begin{align}
F(x,y) =
\left\{
\begin{array}{ll}
\frac{x+y}{2}-\frac{xy}{x-y}\ln \frac{x}{y}, & \hbox{ if } x \neq y;
\\
0, & \hbox{ if } x =y.
\end{array}
\right.
\end{align}
The present best fit values $S=0.02\pm 0.07$ and $T=0.07\pm0.06$ \cite{ParticleDataGroup:2018ovx} can be used to derive constraints on the model parameters, as we discuss in the upcoming sections.
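A sketch of this constraint in code form is given below; the masses are illustrative, and $\alpha$ is taken at the $Z$ pole.
\begin{verbatim}
import numpy as np

alpha, v = 1/127.9, 246.0   # alpha(M_Z) and the SM VEV in GeV

def F(x, y):
    # loop function entering the T parameter
    return 0.0 if np.isclose(x, y) else \
        (x + y)/2 - x*y/(x - y)*np.log(x/y)

def S_T(Mchi1, MzetaR):
    S = np.log(MzetaR**2/Mchi1**2)/(12*np.pi)
    T = F(Mchi1**2, MzetaR**2)/(16*np.pi**2*alpha*v**2)
    return S, T

print(S_T(1500.0, 1450.0))
# compare with S = 0.02 +/- 0.07, T = 0.07 +/- 0.06
\end{verbatim}
As expected, both contributions vanish in the degenerate limit $M_{\chi_1^+} = M_{\zeta_R}$, so the EWPD constraint mainly restricts the mass splitting.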
\section{Electric dipole moment and lepton flavour violation}
\label{sec:edmlfv}
Similar to the anomalous magnetic moment discussed above, the electric dipole moment (EDM) of a lepton is a flavour conserving observable, which measures the coupling of the lepton's spin to an external electric field. In the SM, lepton EDMs are vanishingly small and hence any experimental observation would be a clear sign of BSM physics. While in the SM the EDM of a lepton like the muon arises only at the four loop level, in the present model we can have a muon EDM at the one-loop level itself, via a diagram similar to the one-loop diagrams for muon $(g-2)$. Since the one-loop contribution to the muon EDM can be sizeable, one can constrain the model parameters from the experimental bound \cite{Muong-2:2008ebm}
\begin{equation}
\lvert d_{\mu} \rvert/e < 1.9 \times 10^{-19} \; \text{cm}.
\end{equation}
However, the EDM is a CP violating observable and hence depends upon the CP violating couplings involved in the one-loop process \cite{Borah:2017leo}. Since the rest of our analysis does not rely upon new sources of CP violation, we can tune these phases appropriately to keep the resulting EDM within the experimental limit.
Another relevant flavour observable is charged lepton flavour violation (CLFV), such as $\mu \rightarrow e \gamma$, which can naturally arise in BSM scenarios like radiative mass models. The experimental constraint on this rare decay process, $\text{Br}(\mu \rightarrow e \gamma) < 4.2 \times 10^{-13}$ at $90\%$ confidence level \cite{MEG:2016leq}, can be used to constrain the parameter space of such models. In order to realise such flavour violating decays, the particles in the loop need to couple to different generations of fermions. However, due to the non-universal $U(1)_X$ charges in our model, the fields responsible for the radiative muon mass as well as muon $(g-2)$ do not couple to the other lepton generations. Therefore, such one-loop CLFV processes are absent and hence they do not impose any additional constraints on the parameter space.
\section{Collider Constraints}
\label{sec:lhc}
Collider constraints primarily apply to the SM Higgs decay into muons, as the corresponding effective coupling is changed in such radiative muon mass models. Additional constraints apply to the physical masses of the charged scalars, as well as other particles having electroweak interactions, from direct search bounds. The modification of the Higgs decay into muons relative to the SM is constrained by the measured branching fraction
\begin{equation}
0.8 \times 10^{-4} < {\rm BR} \left(h \rightarrow \mu^+ \mu^- \right) < 4.5 \times 10^{-4}
\end{equation}
as given by the CMS collaboration \cite{Sirunyan:2020two}. A similar bound has been reported by the ATLAS collaboration \cite{Aad:2020xfq} as well.
The Higgs to diphoton rate in the model, including the SM contribution and the new charged scalars $\chi^+_{1,2}$, is given by \cite{Djouadi:2005gi}
\begin{equation}
\Gamma (h \rightarrow \gamma \gamma) = \frac{G_F \alpha^2 m^3_h}{128 \sqrt{2} \pi^3} \bigg \lvert \sum_f N_c Q^2_f A^h_{1/2} (\tau_f) + A^h_1 (\tau_w) + \sum_i g_{hii} Q^2_i A^h_0 (\tau_i) \bigg \rvert^2
\label{eq:hgg1}
\end{equation}
where $G_F$ is the Fermi coupling constant, $\alpha$ is the fine structure constant, $N_c$ is the color factor of the
charged fermion in the loop, $Q_{f,i}$ are the electromagnetic charges of the fermions and scalars in the loop, and $\tau_i
= m^2_h/4m^2_i$ with $i$ running over all charged particles in the loop. The form factors for fermions, vector bosons and scalars are given by
$$ A^h_{1/2} (\tau) = 2[ \tau + (\tau-1)f(\tau) ] \tau^{-2}, $$
$$ A^h_1 (\tau) = -[2\tau^2 + 3\tau + 3(2\tau-1) f(\tau) ] \tau^{-2}, $$
$$ A^h_0 (\tau) = -[ \tau -f(\tau) ] \tau^{-2}. $$
The function $f(\tau)$ is given by
\[ f(\tau) =
\begin{cases}
\text{arcsin}^2 \sqrt{\tau}, & \tau \leq 1 \\
-\frac{1}{4} \left ( \log \frac{1+\sqrt{1-\tau^{-1}}}{1-\sqrt{1-\tau^{-1}}} -i\pi \right )^2, & \tau >1.
\end{cases}
\]
The parameters $g_{hij}$ denote the SM Higgs couplings to the charged scalar pairs $\chi^+_{i} \chi^-_{j}$. They are given by
\[g_{h11}= - \lambda u_1 \cos{\theta_{\rm ch}} \sin{\theta_{\rm ch}}, \; g_{h22}= \lambda u_1 \cos{\theta_{\rm ch}} \sin{\theta_{\rm ch}},\; g_{h12}= \lambda u_1 (\cos^{2}{\theta_{\rm ch}} -\sin^{2}{\theta_{\rm ch}} )\]
where $\theta_{\rm ch}$ is the mixing angle for $\zeta^+$ and $\rho^+$ as given by Eq. \eqref{eq:mixing1}.
The first two couplings are relevant for $h \rightarrow \gamma \gamma$. The first two terms in Eq. \eqref{eq:hgg1} are the SM contributions, while the last term is due to the charged scalars of the extra $U(1)_X$ gauge model. The new contributions to $\Gamma (h \rightarrow \gamma \gamma)$ therefore come from the last term and its interference with the SM terms.
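The form factors and the dominant SM amplitude can be checked with a short script; the sketch below (with illustrative pole masses) reproduces the well-known fact that the $W$ loop dominates over the top loop with opposite sign.
\begin{verbatim}
import numpy as np

def f_tau(t):
    if t <= 1:
        return np.arcsin(np.sqrt(t))**2
    r = np.sqrt(1 - 1/t)
    return -0.25*(np.log((1 + r)/(1 - r)) - 1j*np.pi)**2

def A_half(t): return 2*(t + (t - 1)*f_tau(t))/t**2    # fermion
def A_one(t):  return -(2*t**2 + 3*t + 3*(2*t - 1)*f_tau(t))/t**2  # W
def A_zero(t): return -(t - f_tau(t))/t**2             # charged scalar

mh, mt, mW = 125.0, 173.0, 80.4
amp_sm = 3*(2.0/3)**2*A_half(mh**2/(4*mt**2)) \
         + A_one(mh**2/(4*mW**2))
print(abs(amp_sm))   # ~6.5, dominated by the W loop
\end{verbatim}
The charged scalar contribution then shifts this amplitude through the last term of Eq. \eqref{eq:hgg1}.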
According to the latest CMS results \cite{Sirunyan:2021ybb}, the constraint on the Higgs to diphoton signal strength is $\frac{{\rm BR}(h \rightarrow \gamma \gamma)_{\rm expt}}{{\rm BR}(h \rightarrow \gamma \gamma)_{\rm SM}} = 1.12 \pm 0.09 $. Since the non-SM fraction of the observed rate is $1-{\rm BR}_{\rm SM}/{\rm BR}_{\rm expt}$, the $1\sigma$ range $1.03-1.21$ of the signal strength implies that the new contribution should satisfy the constraint
\begin{equation}
\frac{{\rm BR}(h \rightarrow \gamma \gamma)_{\rm New}}{{\rm BR}(h \rightarrow \gamma \gamma)_{\rm expt}} = 0.0291 \;\; {\rm to}\;\; 0.1735.
\end{equation}
Similarly, collider bounds exist on the neutral gauge boson mass and the corresponding gauge coupling. The limits from LEP II data constrain such an additional gauge sector by imposing a lower bound on the ratio of the new gauge boson mass to the new gauge coupling, $M_{Z_X}/g_X \geq 7$ TeV \cite{Carena:2004xs, Cacciapaglia:2006pk}. The bounds from the ongoing LHC experiments have already surpassed the LEP II bounds. In particular, searches for high mass dilepton resonances have put strict bounds on such an additional gauge sector coupling to all generations of leptons and quarks with couplings similar to the electroweak ones. The latest bounds from the ATLAS experiment \cite{Aaboud:2017buh, Aad:2019fac} and the CMS experiment \cite{Sirunyan:2018exx} at the LHC rule out such gauge boson masses below 4-5 TeV from the analysis of 13 TeV data. Such bounds get weaker if the corresponding gauge couplings are weaker \cite{Aaboud:2017buh} than the electroweak gauge couplings. Also, if the $Z'$ gauge boson couples only to the third generation of leptons, all such collider bounds become much weaker, as explored in the context of DM and collider searches in a recent work \cite{Barman:2019aku}. Similarly, the additional scalar sector can also be constrained from collider data. While there are no dedicated LHC searches for a singlet charged scalar (like $\rho$ in our model) yet, theoretical studies like \cite{Alcaide:2019kdr} show high luminosity LHC sensitivity up to 500 GeV. For an electroweak doublet like $\zeta$, LEP II bounds rule out part of the parameter space below 100 GeV \cite{Lundstrom:2008ai}. At colliders, if such particles are produced, they can decay into DM (missing energy) as well as charged leptons (say, muons). Such leptonic final states with missing energy have been studied in several earlier works \cite{Miao:2010rg, Gustafsson:2012aj, Datta:2016nfz}. As a conservative lower limit, we consider all such BSM scalars to be heavier than 100 GeV in our numerical analysis.
\section{Dark Matter}
\label{sec:DM}
The neutral singlet vector-like fermion $N_{L, R}$ is the dark matter candidate in this model. Although the neutral component of the scalar doublet $\zeta$ could also be a DM candidate, it turns out that the neutral components of $\zeta$ are degenerate, leading to a large $Z$ boson mediated DM-nucleon scattering, ruled out by experiments like XENON1T \cite{Aprile:2018dbl}. The situation is similar to sneutrino DM in the minimal supersymmetric standard model (MSSM) \cite{Arina:2007tm}. This leaves us with the fermion singlet as the only DM candidate. Since it does not interact with any singlet scalar, the DM phenomenology is dictated by its annihilation via the $U(1)_X$ gauge boson only. While for such pure gauge mediated annihilations the relic is likely to be satisfied near the resonance region $M_{\rm DM} \approx M_{Z_X}/2$, for a small mass splitting between the DM and the charged scalars $\chi^+_{1,2}$ one can have interesting coannihilation effects, which depend upon the Yukawa couplings that dictate both the muon mass and $(g-2)$.
The relic abundance of a dark matter particle $\rm DM$, which was in thermal equilibrium at some earlier epoch can be calculated by solving the Boltzmann equation
\begin{equation}
\frac{dn_{\rm DM}}{dt}+3Hn_{\rm DM} = -\langle \sigma v \rangle (n^2_{\rm DM} -(n^{\rm eq}_{\rm DM})^2)
\label{eq:dmbe1}
\end{equation}
where $n_{\rm DM}$ is the number density of the dark matter particle and $n^{\rm eq}_{\rm DM}$ is its number density in thermal equilibrium. $H$ is the Hubble expansion rate of the Universe and $ \langle \sigma v \rangle $ is the thermally averaged annihilation cross section of the dark matter particle. In terms of a partial wave expansion, $ \langle \sigma v \rangle = a +b v^2$. Numerical solution of the Boltzmann equation above gives \cite{Kolb:1990vq,Scherrer:1985zt}
\begin{equation}
\Omega_{\rm DM} h^2 \approx \frac{1.04 \times 10^9 x_F}{M_{\text{Pl}} \sqrt{g_*} (a+3b/x_F)}
\end{equation}
where $x_F = M_{\rm DM}/T_F$, $T_F$ is the freeze-out temperature, $M_{\rm DM}$ is the mass of dark matter, $g_*$ is the number of relativistic degrees of freedom at the time of freeze-out and $M_{\text{Pl}} \approx 2.4\times 10^{18}$ GeV is the Planck mass. Dark matter particles with electroweak scale masses and couplings freeze out at temperatures approximately in the range $x_F \approx 20-30$. More generally, $x_F$ can be calculated from the relation
\begin{equation}
x_F = \ln \frac{0.038\, g M_{\text{Pl}} M_{\rm DM} \langle \sigma v \rangle}{g_*^{1/2}x_F^{1/2}}
\label{xf}
\end{equation}
which can be derived from the equality condition of DM interaction rate $\Gamma = n_{\rm DM} \langle \sigma v \rangle$ with the rate of expansion of the Universe $H \approx g^{1/2}_*\frac{T^2}{M_{Pl}}$.
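A minimal sketch of this freeze-out estimate, assuming illustrative values for $g$, $g_*$, $M_{\rm DM}$ and $\langle \sigma v \rangle$ and using the quantities as defined in the text, is given below; Eq. \eqref{xf} is solved by simple fixed-point iteration.
\begin{verbatim}
import numpy as np

MPl = 2.4e18          # Planck mass in GeV, as quoted in the text
g, g_star = 2.0, 100.0
M_DM = 1000.0         # GeV, illustrative
sv = 3e-9             # GeV^-2, illustrative <sigma v> (a-term)

xF = 20.0             # initial guess
for _ in range(50):   # fixed-point iteration of Eq. (xf)
    xF = np.log(0.038*g*MPl*M_DM*sv/(np.sqrt(g_star)*np.sqrt(xF)))

a, b = sv, 0.0
omega_h2 = 1.04e9*xF/(MPl*np.sqrt(g_star)*(a + 3*b/xF))
print(xF, omega_h2)   # xF ~ 23 for these inputs
\end{verbatim}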
The thermal averaged annihilation cross section $\langle \sigma v \rangle$ used in Boltzmann equation of \eqref{eq:dmbe1} is given by \cite{Gondolo:1990dk}
\begin{equation}
\langle \sigma v \rangle = \frac{1}{8M^4_{\rm DM}T K^2_2(M_{\rm DM}/T)} \int^{\infty}_{4M^2_{\rm DM}}\sigma\, (s-4M^2_{\rm DM})\sqrt{s}\, K_1(\sqrt{s}/T)\, ds
\end{equation}
where $K_i$ is the modified Bessel function of order $i$, $M_{\rm DM}$ is the mass of the dark matter particle and $T$ is the temperature.
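For reference, the thermal average can be evaluated directly with standard quadrature and Bessel function routines; a sketch (with a constant illustrative cross section and a finite upper integration limit capturing the Boltzmann-suppressed tail) is shown below.
\begin{verbatim}
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

def sigma_v_avg(sigma, M, T):
    # thermally averaged cross section; sigma(s) in GeV^-2
    pref = 1.0/(8*M**4*T*kn(2, M/T)**2)
    f = lambda s: sigma(s)*(s - 4*M**2)*np.sqrt(s)*kn(1, np.sqrt(s)/T)
    val, _ = quad(f, 4*M**2, (2*M + 30*T)**2, limit=200)
    return pref*val

print(sigma_v_avg(lambda s: 1e-9, M=1000.0, T=40.0))
# of order sigma * <v_rel> for a constant cross section
\end{verbatim}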
If there exist additional particles with masses close to that of the DM, they can be thermally accessible during the epoch of DM freeze-out. This gives rise to additional channels through which the DM can coannihilate with such particles and produce SM particles in the final states. Such coannihilation effects on the dark matter relic abundance were studied by several authors in \cite{Griest:1990kh, Edsjo:1997bg, Bell:2013wua}. As we will see, after incorporating all relevant constraints there exist regions of parameter space where the DM fermion can have a small mass splitting with the charged scalars, leading to a region of strong coannihilations. Since the corresponding Yukawa couplings are also required to be large to satisfy the other bounds, such coannihilations can in fact lead to a suppressed relic abundance. We use the package \texttt{micrOMEGAs} \cite{Belanger:2013oya} to calculate the DM relic abundance in the most general way and use the \texttt{FeynRules} \cite{Alloul:2013bka} package to prepare the required model files.
\section{Results and Conclusion}
\label{sec:conclude}
Let us now discuss the phenomenological consequences of our model. We consider the constraints coming from the AMM of the muon, the muon mass, the decay of the SM Higgs to $\gamma \gamma$ and $\mu^+ \mu^-$, and finally the relic abundance of DM. The important parameters for these different observables are the following:
$$M_{\chi_1^+},\; M_{\zeta_R},\; M_{\rm DM},\; \theta_{\rm ch},\; y_{\zeta}=y_{\rho},\; \lambda,\; g_X,\; M_{Z_X}.$$
Since all the observables except the DM relic density are independent of $g_X$ and $M_{Z_X}$, we first discuss the role of the remaining parameters. In figure \ref{fig:BP1}, we show the allowed parameter space in the $M_{\rm DM}$ vs $M_{\chi_1^+}$ plane from the muon mass (blue line), muon $(g-2)$ (the brown band), $h\rightarrow \gamma \gamma$ (vertical green band) and
$h\rightarrow \mu^+ \mu^-$ (the grey mesh). We have fixed all the other parameters according to BP-1 shown in table \ref{tab:BP}, and one can clearly see that all these different regions overlap only in a very tiny region of the $M_{\rm DM}$ vs $M_{\chi_1^+}$ plane.
\begin{figure}[h!]
\includegraphics[scale=0.5]{benchmark_1.pdf}
\caption{Common parameter space satisfying the muon mass (blue line), muon $(g-2)$ (the brown band), $h\rightarrow \gamma \gamma$ (vertical green band) and $h\rightarrow \mu^+ \mu^-$ (the grey mesh) for the chosen benchmark BP-1.}
\label{fig:BP1}
\end{figure}
In figure \ref{fig:scan:1}, we show the allowed parameter space in the same $M_{\rm DM}$ vs $M_{\chi_1^+}$ plane, varying all the other parameters as mentioned in table \ref{tab:scan}. In the left panel, the color code represents the mass splitting between $M_{\chi_1^+}$ and $M_{\zeta_R}$, whereas in the right panel the color code shows the variation of the Yukawa coupling $y_{\zeta}$. For simplicity, we have assumed equality of the Yukawa couplings, $y_{\zeta}=y_{\rho}$. Any deviation from this equality is unlikely to bring a substantial change in our results. In spite of the presence of many different parameters, only a very small region of parameter space is allowed by all the above-mentioned constraints. One can also note that quite large values $y_{\zeta} > 0.5$ are required to satisfy all the constraints. Finally, we show the constraints coming from the electroweak precision observables, as discussed in section \ref{sec:S&T}. A very small region of the parameter space is excluded by the EWPD constraints\footnote{Note that we have made a conservative estimate by considering a pure scalar doublet contribution. The full estimate would involve both doublet and singlet scalar contributions with possible interference, a complete calculation of which is beyond the scope of the present work.}, shown as the black and red coloured points in the high mass regime of the charged scalar in figure \ref{fig:scan:1}. While the band consisting of coloured points satisfies all relevant bounds, the upper half of the plane (shaded) is disfavoured as it corresponds to an unstable DM candidate.
\begin{figure}[h!]
\includegraphics[scale=0.5]{mdm_mchi1.pdf}\, \includegraphics[scale=0.5]{mdm_mchi1_yzeta.pdf}
\caption{Allowed parameter space from all relevant constraints satisfying muon mass, muon $(g-2)$, $h\rightarrow \gamma \gamma$ and $h\rightarrow \mu^+ \mu^-$. The color code represents the mass splitting between the $M_{\chi_1^+}$ and $M_{\zeta_R}$ in left panel whereas the color code shows the variation of the Yukawa coupling $y_{\zeta}$ in the right panel.}
\label{fig:scan:1}
\end{figure}
\begin{center}
{\tiny \begin{table}
\begin{tabular}{|p{1.5cm} |p{2.1cm}|p{2.1cm}|p{2.1cm}|p{1.3cm}| p{1.3cm}| p{1.3cm}|p{1.2cm}|p{2.0cm}|}
\hline
\multicolumn{9}{|c|}{Benchmark points} \\
\hline
\ \ \ \ &\ $M_{\chi_1^+}$ (GeV)&\ $M_{\zeta_R}$ (GeV) & $M_{\rm DM}$ (GeV) & $\sin \theta_{\rm ch} $& $y_{\zeta}=y_{\rho}$&\ \ \ \ \ $\lambda$ & $g_X$ & $M_{Z_{X}}$ (GeV)\\
\hline
\ \ \ BP-1 & 200-3000 & $M_{\chi_1^+}-$71.43 & 200-5000 & 0.8741 & 0.6756 & -0.8327 & \ \ \ \ $-$ &\ \ \ \ \ \ \ $-$ \\
\hline
\ \ \ BP-1/2 & $M_{DM}$ + 15 & $M_{DM}$ + 10 & 500-3000 & 0.887 & 0.792 & -0.862 & 0.009 & \ \ \ \ 2813\\
\hline
\ \ \ BP-2/2 & $M_{DM}$ + 105 & $M_{DM}$ + 100 & 500-3000 & 0.887 & 0.792 & -0.862 & 0.009 & \ \ \ \ 2813\\
\hline
\ \ \ BP-3/2 & $M_{DM}$ + 255 & $M_{DM}$ + 250 & 500-3000 & 0.887 & 0.792 & -0.862 & 0.009 & \ \ \ \ 2813\\
\hline
\ \ \ BP-1/3 & $M_{DM}$ + 15 & $M_{DM}$ + 10 & 500-3000 & 0.9156 & 0.644 & -0.282 & 0.038 & \ \ \ \ 2447\\
\hline
\ \ \ BP-2/3 & $M_{DM}$ + 105 & $M_{DM}$ + 100 & 500-3000 & 0.9156 & 0.644 & -0.282 & 0.038 & \ \ \ \ 2447\\
\hline
\ \ \ BP-3/3 & $M_{DM}$ + 605 & $M_{DM}$ + 600 & 500-3000 & 0.9156 & 0.644 & -0.282 & 0.038 & \ \ \ \ 2447 \\
\hline\end{tabular}
\caption{Benchmark points used in numerical analysis.}
\label{tab:BP}
\end{table}}
\end{center}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Parameters & Range \\
\hline
\hline
$\rm{M_{\chi_1^+}}$ & (100 GeV, 5 TeV)\\
$\rm{M_{\chi_1^+}- M_{\zeta_R}}$ & (1 GeV, 500 GeV)\\
$\rm{M_{DM}}$ & (1 GeV, 10 TeV)\\
$\rm{\sin \theta_{\rm ch}}$ & (0.01, 1)\\
$\rm{y_{\zeta}=y_{\rho}}$ & (0.01, $\sqrt{4\pi}$)\\
$\rm{\lambda}$ & ($-1$, $-0.001$)\\
\hline
\end{tabular}
\end{center}
\caption{The parameters of our model and ranges used in the scan leading to figure \ref{fig:scan:1}.}
\label{tab:scan}
\end{table}
\begin{figure}[h!]
\includegraphics[scale=0.45]{line_relic_BP2.pdf}\, \includegraphics[scale=0.45]{line_relic_BP3.pdf}
\caption{The variation of relic abundance of DM as a function of its mass for different benchmark values of other relevant parameters.}
\label{fig:relic}
\end{figure}
\begin{figure}
\includegraphics[scale=0.55]{dm_scan.pdf}
\caption{Parameter space in $g_X$ versus $M_{Z_{X}}$ plane favoured from dark matter phenomenology related to relic abundance and direct detection cross section. Dark matter mass range as well as other parameters correspond to allowed points in figure \ref{fig:scan:1} after incorporating other relevant constraints.}
\label{fig:bound-DM}
\end{figure}
So far, we have not taken into account the constraint coming from the observed relic density of DM. As discussed earlier, the DM particles freeze out from the thermal bath due to the annihilation and coannihilation processes through the new Yukawa as well as gauge interactions. Figure \ref{fig:relic} shows the relic abundance of DM as a function of its mass ($M_{\rm DM}$), with all the other parameters kept fixed according to the benchmark points shown in table \ref{tab:BP}. We have chosen these benchmark points from the allowed region shown in figure \ref{fig:scan:1}, so that all the other constraints are satisfied. The left panel is for a very small $g_X \sim 0.009$, whereas the right panel is for a slightly larger $g_X \sim 0.03$. One can clearly notice the absence (presence) of the $Z_X$ resonance in the left (right) panel due to the smallness (largeness) of the gauge coupling $g_X$. Finally, we show the role of both $g_X$ and $M_{Z_X}$ in figure \ref{fig:bound-DM}. Here, we show the allowed parameter space in the $g_X$ versus $M_{Z_{X}}$ plane, while the other parameters are kept fixed at benchmark points allowed by all possible experimental constraints. We consider the allowed points for DM masses as shown in figure \ref{fig:scan:1} and then vary $(g_X, M_{Z_X})$ randomly in the range shown in figure \ref{fig:bound-DM}. The scattered points in figure \ref{fig:bound-DM} correspond to DM masses (shown in the colour bar) which satisfy the correct relic abundance. The effect of DM annihilation mediated by $Z_X$ is clearly visible for $Z_X$ masses close to the resonance regime, while the points away from resonance satisfy the relic either due to a large gauge coupling $g_X$ or due to coannihilation with the scalars. The grey shaded region in figure \ref{fig:bound-DM} corresponds to the exclusion limits from the LHC searches for heavy resonances decaying into lepton pairs \cite{Aaboud:2017buh, Aad:2019fac, Sirunyan:2018exx}. The brown shaded region corresponds to the LEP bound $M_{Z_X}/g_X \geq 7$ TeV. The most important point to note here is that the LHC 13 TeV data exclude a broad region of the parameter space of our model. In order to implement the LHC bound, we compute the dilepton production cross section at 13 TeV centre of mass energy at the LHC using the package {\tt MADGRAPH}~\cite{Alwall:2014hca}. As the first generation quarks have $U(1)_X$ charges larger than unity in magnitude, we get a stricter bound on the $g_X-M_{Z_X}$ parameter space compared to other universal Abelian extensions like gauged $B-L$. Clearly, the benchmark points shown in the last three rows of table \ref{tab:BP} are already disallowed by the LHC bounds. In fact, only a handful of DM masses from figure \ref{fig:scan:1} survive the LHC bounds, as shown in figure \ref{fig:bound-DM}. Due to the constant DM mass but varying $g_X, M_{Z_X}$, many of these points appear to fall on a line in the allowed region of figure \ref{fig:bound-DM}. With a much bigger scan size, the allowed region can be filled with more points satisfying all relevant constraints. Clearly, all these allowed points correspond to small values of the gauge coupling $g_X$, and hence coannihilation effects play the dominant role in generating the correct DM relic. The mass splitting between the DM and the scalars is in the range of 150-500 GeV, while the corresponding Yukawa couplings are of order unity, leading to efficient coannihilations for DM masses in the 1200-1500 GeV range falling in the allowed region.
We also check that the points allowed by the LHC bounds are consistent with the DM direct detection bounds from the XENON1T experiment \cite{Aprile:2018dbl}.
To summarise, we have studied an Abelian gauge extension of the standard model with a radiative muon mass, leading to an anomalous magnetic moment as well as an anomalous Higgs coupling of the muon with very interesting experimental consequences. While a positive muon $(g-2)$ has been reported recently by the Fermilab experiment, confirming the Brookhaven measurements made much earlier, the anomalous Higgs coupling to the muon can be probed at the LHC. The model also predicts a stable fermion singlet dark matter candidate which runs inside the radiative muon mass loop in scotogenic fashion. Taking into account all the relevant constraints related to the muon mass, its $(g-2)$, the Higgs coupling to muons, the Higgs to diphoton decay, direct search bounds from colliders as well as dark matter phenomenology leads to a tiny region of parameter space that can be probed at future experiments.
\acknowledgments
DB acknowledges the support from Early Career Research Award from the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India (reference number: ECR/2017/001873). DN would like to thank Dr. Najimuddin Khan for fruitful discussions related to collider bounds.
\section{Introduction}
First described by \citet{Baade34} as the ``transition of an ordinary star into a neutron star'', core-collapse supernovae (CCSNe) are the powerful explosions of massive stars that occur at the end of their lives \citep[e.g.,][]{Bethe90}. Upon reaching its maximum mass, the iron core becomes unstable, initiating a collapse to a proto-neutron star (PNS). The shock wave launched at core bounce quickly loses its energy and stalls at a radius of $\sim 150\,\mathrm{km}$. In order to produce an explosion and leave behind a stable neutron star, the shock must recover within a few hundred milliseconds and expel the stellar envelope \citep[e.g.,][]{Oconnor11,Ott11,Ugliano12}. Otherwise, a black hole (BH) forms \citep[e.g.,][]{Nadezhin80,Lovegrove13,Adams17,Kashiyama15}. The details of how this occurs remain unclear, constituting one of the longest-standing open questions in modern astrophysics \citep[see, e.g.,][for recent reviews]{Janka12, Foglizzo15, Burrows13, Mueller16b}.
A key ingredient for producing the explosion is the neutrino emission by the newly-born PNS, which deposits energy behind the shock and establishes a negative entropy gradient that drives vigorous neutrino-driven convection. Together with the standing-accretion shock instability (SASI), these multi-dimensional hydrodynamic effects create favorable conditions for shock revival \citep[e.g.,][]{Herant95, Burrows95, Janka96, Blondin03, Foglizzo06, Yamasaki06,Hanke12, Hanke13, Dolence13, Murphy13, Takiwaki14, Ott13, Abdikamalov15, Radice15, Radice16, Melson15a, Lentz15, Fernandez14, Fernandez15, Cardall15, Bruenn16, Roberts16}. If present, rapid rotation may facilitate explosion via the magneto-rotational mechanism \citep{Burrows07,Moesta14,Moesta15} \citep[see also][]{Takiwaki16,Summa17}.
\citet{Couch13} demonstrated that the perturbations arising from the turbulent convection in Si and O burning shells in CCSN progenitors may help to revive the shock. As the iron core collapses, the perturbations follow the core and accrete towards the center. Due to the converging geometry of the flow, the perturbations amplify significantly during collapse \citep{Kovalenko98,Lai00,Takahashi14}. Further amplification occurs at shock crossing \citep{Abdikamalov16}. Once in the post-shock region, the fluctuations contribute to the non-radial flow in the gain region, creating a more favorable condition for producing explosion \citep{Couch15a, Couch15b, Mueller15, Abdikamalov16, Takahashi16, Burrows16, Radice17}.
\citet{Mueller16} presented a 3D simulation of the last minutes of O shell burning in an $18M_\odot$ progenitor star. Prior to collapse, they observed vigorous convection with a Mach number of $\sim 0.1$ and a dominant angular wave number of $l=2$. A full 3D neutrino-hydrodynamics simulation of this model yielded a strong explosion after the accretion of the O shell through the shock, whereas in a model with artificially suppressed pre-collapse convection, no explosion was observed \citep{Mueller17}. The reduction of the critical (i.e., minimum) neutrino luminosity for producing an explosion due to these perturbations was estimated to be $\sim 20\%$, roughly in agreement with the analytic predictions of \citet{Mueller16}. Recently, \citet{Collins17} investigated the properties of Si and O shell burning in a broad range of presupernova models with ZAMS masses between $9.45M_\odot$ and $35M_\odot$. They found that the progenitor models between $16M_\odot$ and $26M_\odot$ exhibit large scale convective motions with high Mach numbers in the O shells, which are favorable conditions for producing perturbation-aided neutrino-driven explosions \citep{Mueller15}. On the other hand, strong perturbations were rarely observed in the Si shells.
The emerging qualitative picture of how the progenitor asphericities impact the explosion condition is as follows. The convective vorticity waves distort the spherical isodensity surfaces of the progenitor star, creating Eulerian density perturbations at a given radius. When these density and vorticity perturbations encounter and cross the shock, they generate strong buoyancy-driven turbulence in the post-shock region, which helps to trigger an explosion \citep{Mueller15}.
In order to gain a full understanding of how these perturbations affect the explosion dynamics, it is necessary to understand the physics of shock-turbulence interaction, starting at linear order. With this premise, \citet{Abdikamalov16} studied the effect of entropy and vorticity perturbations using a linear perturbation theory known as the {\it linear interaction analysis} (LIA) \citep[e.g.,][]{Ribner53,Mahesh96}. These represent two of the three components of a generic turbulent flow, the third being acoustic waves \citep{Kovasznay53,Chu1958}. They found that the kinetic energy of these fluctuations increases by a factor of $\sim 2$ as they cross the shock. Assuming direct injection of this energy into the post-shock region, they estimated that these perturbations can reduce the critical neutrino luminosity for producing an explosion by $\sim 12\%$. While this is an important finding, the physics of shock-turbulence interaction in CCSNe, even at the linear level, is not yet completely understood. As noted by \citet{Mueller17}, buoyancy plays a dominant role in generating post-shock turbulence. Moreover, the acoustic waves generated by infalling entropy and vorticity perturbations \citep{Kovalenko98, Foglizzo00, Mueller16b} will affect the shock dynamics and the post-shock flow. Finally, the impact of perturbations on the nuclear dissociation rate itself should also be taken into account. These aspects are missing from the analysis of \citet{Abdikamalov16}.
In this work, we investigate the interaction between accretion shocks and turbulent fluctuations in further detail. Our study is based on the solution of the linearized hydrodynamics equations in the post-shock region, which makes it possible to capture the full temporal evolution of the shock-vorticity interaction. The mathematical formalism describing the post-shock perturbation flow is similar to that employed in theoretical works on Richtmyer-Meshkov-type flows \citep{Wouchuk2001,Wouchuk2001b,Cobos2014} and analogous to that used in canonical interactions of non-reactive and reactive shocks with turbulent flows \citep{Wouchuk2009,Huete2017}. This improved formalism allows us to take into account the perturbation of the nuclear dissociation itself, which was not included in \citet{Abdikamalov16}. As demonstrated below, this effect is found to be important for the turbulent kinetic energy amplification factor, with the parametric trends and the asymptotic values being significantly affected.
This is the first in a series of two papers. The current paper is dedicated to the study of the interactions of accretion shocks with vorticity waves, while the second will study the interactions with density perturbations generated due to differential infall. The aim of this series of works is to establish in detail the linear physics of interaction of shocks with hydrodynamic turbulence in CCSNe.
The rest of the paper is organized as follows. Section~\ref{sec:problem} presents the problem formulation and the solution method. In Section~\ref{sec:analysis_wave}, an analysis of the interaction of shock waves with individual vorticity waves is presented, while Section~\ref{sec:analysis_field} focuses on the interaction of shocks with isotropic field of vorticity waves. The base-flow properties for the shock Mach number and the dissociation degree are computed in Section~\ref{sec:varepsilon}. In Section~\ref{sec:discussion}, we discuss the implication of our results on the explosion condition of core-collapse supernovae. Finally, in Section~\ref{sec:conclusion} we present our conclusions.
\section{Problem Formulation}
\label{sec:problem}
\subsection{Perturbation-free flow}
Let us consider an expanding shock wave placed at $r=R_{{\rm shock}}(t)$ that separates the in-falling flow ahead of the shock front, $r>R_{{\rm shock}}$, denoted with subscript 1, from the downstream post-shock flow at $r<R_{{\rm shock}}$, identified with subscript 2 (see Fig. \ref{fig:scheme2D} for clarification). In the thin-shock limit, when the radius of the shock is much larger than the accretion-shock thickness, $R_{{\rm shock}}\gg l$, the variation of the different flow variables across the shock is readily obtained through the radial integration of the conservation equations, yielding
\begin{subequations}
\begin{alignat}{3}
&\rho_1 \left(u_1+\dot{R}_{{\rm shock}}\right) = \rho_2 u_2 \ ,\label{mass0}\\
&p_1 + \rho_1 \left(u_1+\dot{R}_{{\rm shock}}\right)^2 = p_2 + \rho_2 u_2^2\ ,\label{momentum0}\\
&e_1 +\frac{p_1}{\rho_1} +\frac{1}{2} \left(u_1+\dot{R}_{{\rm shock}}\right)^2 = e_2 +\frac{p_2}{\rho_2} + \frac{1}{2} u_2^2\ ,\label{energy0}%
\end{alignat}
\end{subequations}
for the mass, momentum and energy conservation equations, respectively. The symbols $u$, $\rho$, $p$ and $e$ refer to the bulk velocity, density, pressure, and internal energy of the gas, respectively. Notice that, for non-negligible accretion shock thicknesses, the mass equation \eqref{mass0} should include the term involving the divergence of the post-shock expanding gas.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figccsn.pdf}
\caption{Scheme of the accretion shock expanding through the in-falling mass and characteristic scales: shock radius $R_{{\rm shock}}$, shock thickness $l$, and characteristic perturbation wavelength $\lambda_c$.}
\label{fig:scheme2D}
\end{figure}
When the compressed flow, modeled as a perfect gas with the polytropic index $\gamma=4/3$, is affected by nuclear dissociation effects occurring in a thin layer right behind the shock, the variation of the internal energy can be computed as
\begin{equation}
e_1-e_2 =\frac{1}{\gamma-1}\frac{p_1}{\rho_1}-\frac{1}{\gamma-1}\frac{p_2}{\rho_2} + \Delta e_\mathrm{dis}
\end{equation}
with $\gamma$ assumed constant through the interaction process, and $\Delta e_\mathrm{dis}$ referring to the energy per unit mass employed in dissociating the nuclei.
Following \citet{Fernandez2009a,Fernandez2009b}, the nuclear dissociation energy can be scaled with the free-fall speed squared,
\begin{equation}
\Delta e_\mathrm{dis} = \frac{1}{2} \varepsilon \upsilon_\mathrm{FF}^2,
\label{Deltaedis}
\end{equation}
using the dimensionless nuclear dissociation parameter $\varepsilon$. As we show below in Section~\ref{sec:varepsilon}, $\varepsilon$ typically ranges from $0.2$ to $0.4$ in CCSN models. Assuming that the Bernoulli parameter is zero above the shock,
\begin{equation}
\upsilon_\mathrm{FF}^2 = \frac{2 G M}{R_{{\rm shock}}} = \frac{1}{2} u_1^2 + \frac{\gamma}{\gamma-1}\frac{p_1}{\rho_1}\ ,
\end{equation}
where $G$ is the gravitational constant and $M$ is the gravitating mass.
In the stalled-shock regime, $u_1/\dot{R}_{{\rm shock}}\gg1$, the Mach number $M_1 =u_1/a_1$, with $a_1=(\gamma p_1/\rho_1)^{1/2}$ defining the speed of sound upstream, is used to rewrite the nuclear dissociation energy as
\begin{equation}
\frac{\gamma^2-1}{2}\frac{\Delta e_\mathrm{dis}}{a_1^2} = \varepsilon \frac{\gamma+1}{2}\left(1+\frac{\gamma-1}{2}M_1^2\right).
\label{eq:varepsilon}
\end{equation}
Taking $\varepsilon$ and the Mach number as the independent parameters, the values of which may vary within the range established by numerical simulations of CCSNe (see Section~\ref{sec:varepsilon}), the fluid properties behind the shock are expressed in the form
\begin{equation}
C_2 = \frac{\rho_2}{\rho_1} = \frac{u_1}{u_2} = \frac{\left( \gamma + 1\right) M_1^2}{\left( \gamma - \kappa \right) M_1^2 + 1}\ ,
\label{R}
\end{equation}
and
\begin{equation}
P_2 =\frac{p_2}{\rho_1 u_1^2}=\frac{\gamma M_1^2(1+\kappa) +1}{\gamma(\gamma + 1) M_1^2}\ ,
\label{P}
\end{equation}
for post-shock density and pressure. The Mach number of the fluid particles leaving the shock is
\begin{equation}
M_2 = \frac{u_2}{a_2} = \left(\gamma C_2 P_2\right)^{-1/2} = \left[\frac{\left( \gamma - \kappa \right) M_1^2 + 1}{\gamma M_1^2(1+\kappa) +1}\right]^{1/2}\ ,
\label{M2}
\end{equation}
with the function
\begin{equation}
\kappa=\left[(1-M_1^{-2})^2+ \varepsilon(\gamma+1) \left(\gamma-1+2 M_1^{-2}\right)\right]^{1/2}
\label{kappa}
\end{equation}
accounting for the dimensionless endothermic parameter $\varepsilon$. For non-reacting shock waves, $\kappa=1-M_1^{-2}$, thereby reducing Eqs.~\eqref{R}-\eqref{M2} to the well-known Rankine-Hugoniot relationships.
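These jump conditions are compact enough to tabulate directly. The sketch below implements Eqs.~\eqref{R}-\eqref{kappa} and checks that $\varepsilon = 0$ recovers the adiabatic Rankine-Hugoniot values.
\begin{verbatim}
import numpy as np

gamma = 4.0/3.0

def kappa(M1, eps):
    return np.sqrt((1 - M1**-2)**2
                   + eps*(gamma + 1)*(gamma - 1 + 2*M1**-2))

def post_shock(M1, eps):
    k = kappa(M1, eps)
    C2 = (gamma + 1)*M1**2/((gamma - k)*M1**2 + 1)            # Eq. (R)
    P2 = (gamma*M1**2*(1 + k) + 1)/(gamma*(gamma + 1)*M1**2)  # Eq. (P)
    M2 = 1.0/np.sqrt(gamma*C2*P2)                             # Eq. (M2)
    return C2, P2, M2

print(post_shock(5.0, 0.0))  # adiabatic: C2 = (gamma+1)M1^2/((gamma-1)M1^2+2)
print(post_shock(5.0, 0.3))  # stronger compression, lower M2 with dissociation
\end{verbatim}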
The effect of nuclear dissociation on the post-shock flow density and pressure is easily analyzed through Fig. \ref{fig:RH}, with the final values provided by the intersection of the Rayleigh line (for propagation at constant Mach number) and the non-adiabatic Rankine-Hugoniot curve. It is found that, if endothermic transformations take place through the shock wave, higher pressures and densities are required downstream for a shock with the same Mach number. The maximum energy that can be employed in the nuclear dissociation process is found in the extreme limit $1-\varepsilon\ll1$, which provides limiting conditions for the post-shock gas: the post-shock Mach number and temperature tend to zero, and the density tends to infinity. That is, all the kinetic and thermal energy of the in-falling gas is used in dissociating the nuclei. The corresponding Hugoniot curve collapses onto the vertical axis and the finite values of the pressure are given by the intersection with the Rayleigh lines.
\begin{figure}
\includegraphics[width=0.5\textwidth]{figRH.pdf}
\caption{Hugoniot curves and Rayleigh lines for several values of the dissociation energy $\varepsilon = 0,\ 0.2,\ 0.4$ and $0.6$.}
\label{fig:RH}
\end{figure}
Equations \eqref{R} and \eqref{P} are plotted in Fig. \ref{fig:RP} as a function of the Mach number, $M_1$. Both $C_2$ and $P_2$ increase with $\varepsilon$ for the same value of $M_1$. Unlike regular detonations, where the chemical energy release does not depend on the shock intensity since the reaction is self-sustaining, the degree of nuclear dissociation does depend on the upstream Mach number. Thus, the function $\kappa$ approaches the value $\left[1+(\gamma^2-1)\varepsilon\right]^{1/2}$ in the strong-shock limit, $M_1\gg1$, then yielding
\begin{equation}
C_2|_{M_1\gg1} = \frac{\gamma + 1}{ \gamma - \left[1+\left(\gamma^2-1\right)\varepsilon\right]^{1/2}}
\label{RMgg1}
\end{equation}
and
\begin{equation}
P_2|_{M_1\gg1} = \frac{1+ \left[1+\left(\gamma^2-1\right)\varepsilon\right]^{1/2}}{\gamma +1}\ ,
\label{PMgg1}
\end{equation}
for the post-shock density and pressure values, in agreement with Fig. \ref{fig:RP}. The mass-compression ratio $C_2$ is found to diverge, and $P_2$ approaches unity, in the double limit $M_1\gg1$, $1-\varepsilon\ll1$.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figC2.pdf}\\
\includegraphics[width=0.47\textwidth]{figP2.pdf}
\caption{Mass compression ratio $C_2$ (top) and pressure amplification $P_2$ (bottom) as a function of the incident Mach number $M_1$ for $\varepsilon = 0,\ 0.1,\ 0.2,\ 0.3$ and $0.4$.}
\label{fig:RP}
\end{figure}
\subsection{Perturbation problem}
\label{sec:perturb_problem}
The upstream and the downstream linear disturbances can be characterized in terms of acoustic, entropy and vortical modes. The upstream mono-frequency perturbation, written in the in-falling (fresh-gas) reference frame $(x_1,y_1)$, is determined by the divergence-free velocity perturbation wave, namely
\begin{equation}
\begin{aligned}
&\bar{u}_1 \left( x_1, y_1 \right)= \frac{u_1-\langle u_1 \rangle}{ \langle a_2 \rangle}
= \hat{u}_1 \cos\left(k_x x_1 \right)\cos\left(k_y y_1 \right), \\
&\bar{v}_1 \left( x_1, y_1 \right)= \frac{v_1-\langle v_1 \rangle}{ \langle a_2 \rangle}
= \hat{u}_1 \frac{k_x}{k_y} \sin\left(k_x x_1 \right)\sin\left(k_y y_1 \right),
\label{u1v1}%
\end{aligned}
\end{equation}
for the streamwise and crosswise perturbations, respectively. The brackets denote the time-averaged mean value of the flow variable, which is effectively null for the upstream velocity in the stagnant gas reference frame. The dimensionless factor $\hat{u}_1$ stands for the amplitude of the upstream velocity disturbances and $\vec{k}=(k_x,k_y)$ is the upstream wave number vector. The associated non-dimensional vorticity wave, $\bar{\omega}_1 \left( x_1, y_1 \right) =\partial \bar{v}_1 /\partial (k_y x_1) -\partial \bar{u}_1/\partial (k_y y_1)$, is
\begin{equation}
\bar{\omega}_1 \left( x_1, y_1 \right) = \hat{u}_1 \left(1+\frac{k_x^2}{k_y^2}\right) \cos\left(k_x x_1 \right)\sin\left(k_y y_1 \right)\ .
\label{omega1}
\end{equation}
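Both the solenoidal character of the field in Eq. \eqref{u1v1} and the expression \eqref{omega1} can be verified symbolically; a short sketch using \texttt{sympy} is given below.
\begin{verbatim}
import sympy as sp

x1, y1, u1h, kx, ky = sp.symbols('x1 y1 u1h kx ky', positive=True)

u = u1h*sp.cos(kx*x1)*sp.cos(ky*y1)
v = u1h*(kx/ky)*sp.sin(kx*x1)*sp.sin(ky*y1)

# divergence-free: du/dx1 + dv/dy1 = 0
print(sp.simplify(sp.diff(u, x1) + sp.diff(v, y1)))   # -> 0

# vorticity with derivatives scaled by ky, as in the text
omega = sp.diff(v, x1)/ky - sp.diff(u, y1)/ky
target = u1h*(1 + kx**2/ky**2)*sp.cos(kx*x1)*sp.sin(ky*y1)
print(sp.simplify(omega - target))                    # -> 0
\end{verbatim}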
The interaction of the CCSN shock with the upstream shear wave, characterized by the angle $\theta = \tan^{-1}(k_y/k_x)$, is sketched in Fig. \ref{fig:scheme}. As a result of the interaction, the shock ripples and the fluid downstream is correspondingly altered by acoustic and entropic-vorticity waves, the former traveling at the downstream speed of sound $a_2$ and the latter moving with the fluid particles.
\begin{figure}
\includegraphics[width=0.5\textwidth]{figscheme.pdf}
\caption{Scheme of the shock shear-wave interaction in the compressed gas reference frame.}
\label{fig:scheme}
\end{figure}
For the perturbed accretion shock to be seen as a discontinuity front, the characteristic perturbation wavelength $\lambda_c\sim k_y^{-1}$ must be much larger than the accretion-shock thickness $l$, including the dissociation layer within it (see Fig. \ref{fig:scheme2D}). Besides, to treat the base-flow variables as constant, the shock and the in-falling gas must be in a nearly-steady regime, so that the variations of the base-flow properties within the characteristic wavelength can be neglected. These two conditions hold simultaneously for perturbation wavelengths much smaller than the scale over which the expanding shock evolves and much larger than the shock thickness, which define the limits of validity of the model in terms of spatial and temporal scales: $k_y R_{{\rm shock}} \gg 1\gg k_y l$ and $ \dot{R}_{{\rm shock}}\ll a_2 \dot{\xi}_s$, respectively. On the other hand, the planar shock assumption, $k_y R_{{\rm shock}} \gg 1$, is not suitable for perturbations characterized by low mode numbers such as SASI \citep{Blondin03,Fernandez15}. For such modes, spherical geometry is more suitable \citep{Foglizzo09}, as employed by \citet{Takahashi16} to study the influence of pre-collapse perturbations on the hydrodynamic eigenmodes in the gain region.
For the analysis it is convenient to use a reference frame moving with the velocity of the post-shock flow. The solution is to be described in terms of the dimensionless coordinates $x = k_y x_2$ and $y = k_y y_2$ and the dimensionless time $\tau = a_2 k_y t $.
The nondimensional values for pressure, density and velocity perturbations downstream, defined as
\begin{equation}
\begin{aligned}
&\bar{p} = \frac{p-\langle p_2 \rangle}{\gamma \langle p_2 \rangle} \ ,\quad & &\bar{\rho}= \frac{\rho-\langle \rho_2 \rangle}{\langle \rho_2 \rangle}\ , \\
&\bar{u}= \frac{u-\langle u_2 \rangle}{\langle a_2 \rangle} \ ,\quad &
&\bar{v}= \frac{v-\langle v_2 \rangle}{\langle a_2 \rangle}\ ,
\label{perturb}
\end{aligned}
\end{equation}
respectively, are used to write the adiabatic Euler equations governing the post-shock flow. Anticipating that $\bar{p}$ and $\bar{v}$ are always proportional to $\cos(y)$ and $\sin(y)$, respectively, the conservation equations for mass, $x$-momentum, $y$-momentum and energy, namely
\begin{equation}
\begin{aligned}
&\frac{\partial \bar{\rho}}{\partial \tau} +\frac{\partial \bar{u}}{\partial x}+\bar{v}=0 \ ,\quad &
&\frac{\partial \bar{u}}{\partial \tau} +\frac{\partial \bar{p}}{\partial x}=0\ ,\\
&\frac{\partial \bar{v}}{\partial \tau} -\bar{p}=0 \ ,\quad &
&\frac{\partial \bar{p}}{\partial \tau} = \frac{\partial \bar{\rho}}{\partial \tau}\ ,
\label{eqperturb}
\end{aligned}
\end{equation}
respectively, are combined for $\bar{p}$ to yield
\begin{equation}
\frac{\partial^2 \bar{p}}{\partial \tau^2}= \frac{\partial^2 \bar{p}}{\partial x^2}-\bar{p}
\label{sonicwave}
\end{equation}
as the two-dimensional periodically-symmetric wave equation, which governs the perturbation field behind the shock.
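A symbolic check (a short SymPy sketch, not part of the derivation) confirms that a plane-wave ansatz solves \eqref{sonicwave} precisely when the post-shock acoustic dispersion relation $\omega^2 = k^2 + 1$, used later in the text, is satisfied:
\begin{verbatim}
# SymPy check: exp[i(omega*tau - k*x)] satisfies the wave equation
# (sonicwave) iff omega**2 = k**2 + 1.
import sympy as sp

x, tau, omega, k = sp.symbols('x tau omega k')
p = sp.exp(sp.I * (omega * tau - k * x))
residual = sp.diff(p, tau, 2) - sp.diff(p, x, 2) + p
print(sp.simplify(residual / p))   # -> -omega**2 + k**2 + 1
\end{verbatim}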
The problem reduces to that of integrating the linearized Euler equations, or equivalently the wave equation \eqref{sonicwave}, for $\tau\geq0$ within the domain delimited by the leading reflected sonic wave traveling backwards, $x= -\tau$, and the shock front moving upwards, $x= M_2\tau$. One boundary condition is provided by the isolated-shock assumption, which translates into neglecting the effect of acoustic waves reaching the shock front from behind, consistently with the large-radius limit, $R_{{\rm shock}} k_y \gg 1$. The other boundary condition, at the CCSN shock, is determined by the linearized Rankine-Hugoniot relationships,
\begin{subequations}
\begin{alignat}{3}
&\left(C_2-1\right)\dot{\xi}_s =C_2\bar{u}_s-M_2 C_2\bar{\rho}_s-\bar{u}_1 \ , \label{massRH}\\
&\bar{p}_s = 2M_2\left(\bar{u}_s-\bar{u}_1 \right)-M_2^2\bar{\rho}_s\ , \label{xmomRH}\\
&M_1^2 M_2^2\bar{\rho}_s = \Pi_s\bar{p}_s-\Delta_s \left(\dot{\xi}_s-\bar{u}_1\right)\ , \label{eneRH}\\
&\bar{v}_s = M_2 \left(C_2-1\right)\xi_s+\bar{v}_1\ ,\label{tanRH}
\end{alignat}
\end{subequations}
with $\dot{\xi}_s$ denoting the temporal derivative of the dimensionless ripple shock position $\xi_s = k_y\left( x_{1,s}-u_1 t\right)$, as depicted in Fig. \ref{fig:scheme}.
The energy equation \eqref{eneRH}, which involves the functions
\begin{equation}
\Pi_s = \frac{M_1^2\left[1 + M_1^2\left(1-\kappa\right)\right]^2}{\left(M_1^2+1\right)^2-M_1^4\kappa^2}
\label{Pis}
\end{equation}
and
\begin{equation}
\Delta_s = \varepsilon\frac{2 M_2^3 M_1^6\left(\gamma-1 \right) \left[1 + M_1^2\left(1-\kappa\right)\right]}{\left(M_1^2+1\right)^2-M_1^4\kappa^2}\ ,
\label{Deltas}
\end{equation}
distinguishes regular adiabatic shocks from reacting shocks like detonations or nuclear-dissociating shocks.
In the previous work \citep{Abdikamalov16}, the coefficients accompanying the linear perturbations in the linearized energy equation \eqref{eneRH} were the same as those found for a perturbed adiabatic shock ($\Pi_s=1$ and $\Delta_s=0$), although the values of the base-flow properties, namely $M_2$ and $C_2$, were accordingly modified by nuclear-dissociation effects. How the nuclear-dissociation degree is affected by the perturbations, and how that modification ultimately acts upon the downstream flow variables, is incorporated in the present model through the coefficients $\Pi_s$ and $\Delta_s$. In this sense, the present analysis consistently accounts for the effect of $\varepsilon$ in both zero-order and first-order flow variables.
The value of $\Pi_s$ is positive when the dissociation energy is sufficiently low, that is $\kappa<1+M_1^{-2}$. On the other hand, when the dissociation energy is sufficiently high, the value of $\Pi_s$ becomes negative, thereby reversing the relationship between density and pressure perturbations in \eqref{eneRH}. Since the degree of dissociation depends on the shock strength, the term involving the function $\Delta_s$ in \eqref{eneRH} is proportional to the incident Mach number perturbation $\delta M_1 = \left(\dot{\xi}_s-\bar{u}_1 \right)M_1/(M_2 C_2)$. The value of $\Delta_s$ is found to be negative for $\varepsilon > 0$. It is worth commenting that the case of exothermic detonations is significantly different, since the second term on the right-hand side of \eqref{eneRH} vanishes \citep{Huete2013,Huete2017}. This is so because the total heat release, generated by the combustion process behind the shock, does not depend on the shock-intensity perturbation, as it is provided by self-sustained reactions: once the reaction is triggered, it releases all the thermonuclear (or chemical) energy.
Algebraic manipulation of \eqref{massRH}-\eqref{tanRH} is carried out to write the first of the two equations for the shock boundary condition involving $\xi_s$ and $\bar{p}_s$, that is
\begin{equation}
\frac{d \xi_s}{d \tau} = \sigma_a \bar{p}_s+\hat{u}_1\cos\left(\frac{k_x}{k_y}C_2 M_2 \tau\right)\ ,
\label{xis}
\end{equation}
with the factor accompanying the pressure perturbation being
\begin{equation}
\sigma_a = \frac{C_2\left(M_1^2-\Pi_s\right)}{2 M_2 M_1^2 \left(C_2-1\right)+C_2\Delta_s}\ .
\label{As}
\end{equation}
Similarly, the material derivative behind the shock, $\partial/\partial \tau + M_2\,\partial/\partial x$, of the streamwise velocity perturbation $\bar{u}_s= \sigma_b \bar{p}_s+\bar{u}_1$, with
\begin{equation}
\sigma_b = \frac{M_1^2+\Pi_s+\Delta_s \sigma_a}{2 M_2 M_1^2}\ ,
\label{Bs}
\end{equation}
is used to provide
\begin{equation}
\begin{aligned}
&\left(\sigma_b +M_2 \right)\frac{\partial \bar{p}_s}{\partial \tau} +\left.\left(\sigma_b M_2+ 1\right)\frac{\partial \bar{p}}{\partial x}\right|_s=-M_2^2 \left(C_2- 1\right)\xi_s \\
&+\frac{k_x}{k_y}M_2\left(C_2-1\right)\hat{u}_1 \sin\left(\frac{k_x}{k_y}C_2 M_2 \tau\right)
\label{ps}
\end{aligned}
\end{equation}
as the second equation that forms, along with \eqref{xis}, the shock boundary condition for the functions $\xi_s$ and $\bar{p}_s$.
The coefficients $\sigma_a$ and $\sigma_b$ are positive for any combination of the parameters $M_1$ and $\varepsilon$. In the strong-shock limit for $\varepsilon>0$, the value of $\sigma_a$ approaches zero as $\sigma_a|_{M_1\gg1}\sim M_1^{-2}$, while $\sigma_b$ reaches a constant value determined by the inverse of the post-shock Mach number, namely $\sigma_b|_{M_1\gg1}=M_2^{-1}|_{M_1\gg1}$.
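The following minimal sketch evaluates $\sigma_a$ and $\sigma_b$ from \eqref{As} and \eqref{Bs}; the base-flow quantities $M_2$, $C_2$, $\Pi_s$ and $\Delta_s$ must be supplied from the Rankine-Hugoniot solution, and the adiabatic values in the usage example follow the standard $\gamma$-gas relations rather than the dissociating-shock solution of this paper:
\begin{verbatim}
# Shock-boundary coefficients sigma_a (As) and sigma_b (Bs); the
# base-flow inputs come from the Rankine-Hugoniot solution.
def sigma_ab(M1, M2, C2, Pi_s, Delta_s):
    sa = C2 * (M1**2 - Pi_s) / (2 * M2 * M1**2 * (C2 - 1)
                                + C2 * Delta_s)
    sb = (M1**2 + Pi_s + Delta_s * sa) / (2 * M2 * M1**2)
    return sa, sb

# adiabatic check (Pi_s = 1, Delta_s = 0) with standard RH relations
g, M1 = 4.0 / 3.0, 5.0
C2 = (g + 1) * M1**2 / ((g - 1) * M1**2 + 2)
M2 = (((g - 1) * M1**2 + 2) / (2 * g * M1**2 - (g - 1)))**0.5
print(sigma_ab(M1, M2, C2, 1.0, 0.0))   # both positive, as stated
\end{verbatim}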
The initial condition for the shock perturbations follows from the shock being initially planar, so that $\xi_s= \bar{v}_s = 0$. Correspondingly, the initial perturbations of pressure and streamwise velocity must satisfy $\bar{u}_s +\bar{p}_s=0$, as dictated by the first acoustic wave emitted backwards, thereby giving
\begin{equation}
\bar{p}_{s0}= -\frac{1}{\sigma_b+1} \hat{u}_1
\label{ps0}
\end{equation}
for the initial shock pressure perturbation.
\section{Linear interaction analysis with monochromatic vorticity perturbations}
\label{sec:analysis_wave}
\subsection{Shock pressure and corrugation temporal evolution}
The asymptotic behavior of the corrugated shock can be inferred from the Laplace transform expression provided in \eqref{PsLaplace}, with the imaginary poles in the dispersion relationship
\begin{equation}
\left(s\sqrt{s^2+1}+\sigma_b s^2 + \sigma_c\right)\left(s^2+\zeta^2\right)=0\ ,
\label{denominator}
\end{equation}
indicating the possibility of asymptotic harmonic oscillations. The first factor in \eqref{denominator} accounts for the shock response in the absence of continuous perturbations, whereas the second factor refers to the oscillations induced by the non-homogeneous upstream flow. The characteristic dimensionless frequency $\zeta$ is provided in \eqref{zeta}. Notice that the term $\sqrt{s^2+1}$ may change sign if the pole lies on the lower half of the complex plane.
It has been found that the equation $s\sqrt{s^2+1}+\sigma_b s^2 + \sigma_c =0$ has no roots, indicating that shock pressure perturbations decay with time in the absence of continuous excitation. Generally, the perturbations decay in time as $\tau^{-3/2}$, but this decay rate changes for infinitely strong shocks with $\varepsilon=0$, since then $\sigma_b=\sigma_c$, yielding $\tau^{-1/2}$ as the law describing the approach to the permanent solution \citep{Fraley1986}.
We are first interested in the long-time response of the accretion shock to mono-frequency perturbations. As $\sigma_c<\sigma_b$, the shock will oscillate only with the excitation frequency imposed by the upstream perturbations, $\omega_s = C_2 M_2 k_x/k_y$, thereby yielding an asymptotic response qualitatively similar to the one found for adiabatic shock waves \citep{Wouchuk2009}
\begin{equation}
\bar{p}_{s}(\tau\gg 1) = \left \{ \begin{array}{ll}
\mathcal{P}_{lr} \cos\left(\omega_s \tau \right) + \mathcal{P}_{li} \sin\left( \omega_s \tau \right) & ,\zeta \leq 1 \\
\mathcal{P}_{s} \cos\left(\omega_s \tau \right) & ,\zeta \geq 1
\end{array} \right.
\label{pstau}
\end{equation}
except for the coefficients defining the amplitudes, which are provided by Eqs.~\eqref{Plr}-\eqref{Ps} in Appendix~\ref{App1}. As the planar infinitely-thin shock assumption does not introduce any length scale, the shock oscillation period is proportional to the upstream characteristic length. In dimensional variables, the time between pressure peaks is given by $t_{\text{per}}=\lambda_x/(2 \pi a_1 M_1)$.
As in previous LIA works \citep{Wouchuk2009,Huete2013,Huete2017}, the pressure perturbation field splits into two distinct regimes depending on the dimensionless frequency
\begin{equation}
\zeta = \frac{k_x}{k_y}\frac{M_2 C_2}{\sqrt{1-M_2^2}} = \frac{\omega_s}{\sqrt{1-M_2^2}}\ .
\label{zeta}
\end{equation}
In the long-wavelength (low-frequency) regime, $\zeta <1$, the acoustic perturbation right behind the shock is composed of two orthogonal contributions with amplitudes $\mathcal{P}_{lr}$ and $\mathcal{P}_{li}$, respectively. In this range, the amplitude of the pressure disturbances decays exponentially with the distance from the shock front. On the other hand, in the short-wavelength (high-frequency) regime, $\zeta >1$, the acoustic radiation travels in the form of constant-amplitude waves. The critical value $\zeta =1$ then indicates the condition at which stable sonic perturbations downstream move parallel to the shock front in the shock reference frame.
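A one-line classification of the two regimes follows directly from \eqref{zeta}; in this Python sketch the values of $k_x/k_y$, $M_2$ and $C_2$ are illustrative placeholders:
\begin{verbatim}
# Regime classification from eq. (zeta): evanescent pressure field for
# zeta < 1, constant-amplitude acoustic radiation for zeta > 1.
import numpy as np

def zeta(kx_over_ky, M2, C2):
    return kx_over_ky * M2 * C2 / np.sqrt(1.0 - M2**2)

z = zeta(0.5, 0.39, 5.6)
print(z, 'radiating' if z > 1 else 'evanescent')
\end{verbatim}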
As shown in equation~\eqref{Besselr} of Appendix~\ref{App1}, the post-shock pressure perturbation field can be computed as a linear combination of Bessel functions. In particular, right behind the shock, we have
\begin{equation}
\bar{p}_s(\tau)=\sum_{\nu=0}^{\infty}N_{\nu}J_{\nu}\left(r=\tau \sqrt{1-M_2^2}\right) \ ,
\label{psBesseltau}
\end{equation}
with the corresponding coefficients $N_{\nu}$, provided in \eqref{Dm}, being obtained through the Laplace transform \eqref{Laplace} and the isolated-shock boundary condition. The temporal evolution of the shock ripple $\xi_s(\tau)$ is readily obtained through the integration of eq.~\eqref{xis}, whose solution can be expressed in terms of hypergeometric functions, as shown in \eqref{xisbeseel}. Akin to the shock pressure, the asymptotic long-time response is written in terms of harmonic functions, as provided in \eqref{xisasym}.
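In practice, the series \eqref{psBesseltau} is evaluated after truncation; a minimal sketch is given below, where the coefficients $N_\nu$ are placeholders to be supplied from \eqref{Dm} in the appendix:
\begin{verbatim}
# Truncated evaluation of the Bessel series (psBesseltau); the list N
# holds the coefficients N_nu from eq. (Dm), supplied externally.
import numpy as np
from scipy.special import jv

def ps_series(tau, M2, N):
    r = tau * np.sqrt(1.0 - M2**2)
    return sum(N_nu * jv(nu, r) for nu, N_nu in enumerate(N))
\end{verbatim}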
\begin{figure}
\includegraphics[width=0.47\textwidth]{figpstau.pdf} \vspace{2mm}
\\ \includegraphics[width=0.47\textwidth]{figxitau.pdf}
\caption{Shock pressure perturbation (top) and shock ripple amplitude (bottom) as a function of $\tau$ for $M_1=5$, $\zeta=1.2$, and $\varepsilon = 0.4$. Solid: transient evolution \eqref{psBesseltau} and \eqref{xisbeseel}. Dashed: asymptotic long-time expressions \eqref{pstau} and \eqref{xisasym}.}
\label{fig:pstau}
\end{figure}
The functions $\bar{p}_s(\tau)$ and $\xi_s(\tau)$ are computed in Fig. \ref{fig:pstau} as a function of $\tau$ for $M_1=5$, $\zeta=1.2$, and $\varepsilon=0.4$. Both the transient (solid line) and the long-time response (dashed line) are shown. The shock transient evolution is found to agree fairly well with the asymptotic expressions provided in \eqref{pstau} and \eqref{xisasym}, thus confirming that the asymptotic functions can be used to compute the interaction with an isotropic spectrum without significant loss of accuracy.
\subsection{Downstream flow variables}
The spatial distributions of the flow variables, namely pressure, density and velocity, are derived from the shock pressure evolution computed previously. For example, pressure perturbations downstream can be written in terms of Bessel functions as
\begin{equation}
\bar{p}\left(x,\tau\right)=\sum_{\nu=0}^{\infty}N_{\nu}J_{\nu}\left(\sqrt{\tau^2-x^2}\right)e^{-\nu\left[\tanh^{-1}\left(M_2\right)-\tanh^{-1}\left(\frac{x}{\tau}\right)\right]}\ .
\end{equation}
As the asymptotic expression \eqref{pstau} is found to reproduce accurately the shock pressure evolution, the asymptotic long-time response of the shock is employed to compute the post-shock disturbances.
Downstream linear perturbations are conveniently split into entropic-vortical modes, conveyed by the fluid particles, and traveling acoustic modes \citep{Kovasznay53,Chu1958}, namely
\begin{equation}
\begin{aligned}
&\bar{p}(x,\tau)= \bar{p}_a(x,\tau)\ ,&\quad &\bar{\rho}(x,\tau)= \bar{\rho}_a(x,\tau) + \bar{\rho}_e(x) \\
&\bar{u}(x,\tau)= \bar{u}_a(x,\tau)+\bar{u}_r(x)\ ,&\quad &\bar{v}(x,\tau)= \bar{v}_a(x,\tau) + \bar{v}_r(x)\ .\nonumber
\label{kova}
\end{aligned}
\end{equation}
In the absence of diffusive effects, the amplitudes of the entropic-solenoidal perturbations are given by their corresponding values generated right behind the shock, and they are steady in a reference frame co-moving with the fluid particles. Acoustic disturbances, on the other hand, refer to traveling sonic waves that escape from the shock when $\zeta>1$.
The acoustic radiation condition is then determined by $\omega_s>(1-M_2^2)^{1/2}$, a condition that depends on the upstream shear wave, since $\zeta\in\left[0,\infty\right)$ is set by the relative properties of the perturbation field ahead of the shock. Small values of $\zeta$ represent the interaction with upstream vortices highly stretched in the streamwise direction, $\lambda_x\gg\lambda_y$, while the opposite is true for $\zeta\gg 1$. In the latter low mode-number scenario ($\lambda_x\ll\lambda_y$), the problem reduces to the one-dimensional interaction of the shock with radial perturbation waves. Such a stability analysis was developed by \citet{Velikovich2016} for the classical Noh configuration in adiabatic conditions. The asymptotic far-field solution for the acoustic disturbances is also written in terms of harmonic functions, representing stable traveling fronts that occur only when the shock oscillation frequency is sufficiently high, $\zeta > 1$.
Traveling sonic perturbations are functions of $(\omega_{a} \tau - k_{a} x)$, with the frequency $\omega_{a}$ and the wave number $k_{a}$ being determined by the post-shock adiabatic dispersion relationship $\omega_{a}^2=k_{a}^2+1$, and the shock oscillation frequency $\omega_{s}= \omega_{a} - M_2 k_{a}$, yielding
\begin{equation}
\omega_{a}=\frac{\omega_s - M_2\sqrt{\omega_s^2-1+M_2^2}}{1-M_2^2}
\label{omegaa}
\end{equation}
and
\begin{equation}
k_{a}=\frac{\omega_s M_2 - \sqrt{\omega_s^2-1+M_2^2}}{1-M_2^2}\ ,
\label{ka}
\end{equation}
respectively, which depend upon the shock frequency $\omega_s$. It is straightforward to see that $k_a$ can be either negative or positive, the former representing sonic waves propagating downwards in the compressed-gas reference frame, and the latter denoting waves moving upwards, although never catching up with the shock wave, as dictated by the isolated-front boundary condition. The shock oscillation frequency $\omega_s=1$ marks the standing acoustic-wave regime, therefore separating the left-traveling solution, $\omega_s>1$, from the right-traveling regime, $(1-M_2^2)^{1/2}<\omega_s<1$, in the compressed-gas reference frame. When the shock oscillates with two frequencies, sonic fronts may run upstream and downstream simultaneously.
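The relations \eqref{omegaa} and \eqref{ka} are straightforward to verify numerically; the following sketch (with illustrative values of $\omega_s$ and $M_2$) checks both the dispersion relation and the shock-frequency condition:
\begin{verbatim}
# Acoustic frequency (omegaa) and wavenumber (ka), checked against
# omega_a**2 = k_a**2 + 1 and omega_s = omega_a - M2*k_a.
import numpy as np

def acoustic_wave(omega_s, M2):
    root = np.sqrt(omega_s**2 - 1.0 + M2**2)  # real in the radiating regime
    omega_a = (omega_s - M2 * root) / (1.0 - M2**2)
    k_a = (omega_s * M2 - root) / (1.0 - M2**2)
    return omega_a, k_a

wa, ka = acoustic_wave(1.2, 0.39)
print(wa**2 - ka**2 - 1.0, wa - 0.39 * ka - 1.2)   # both ~ 0
\end{verbatim}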
The asymptotic pressure and isentropic density perturbations, far behind the shock, equal
\begin{equation}
\bar{p}(x,\tau)=\bar{\rho}_a(x,\tau)= \mathcal{P}_s \cos\left(\omega_a \tau-k_a x \right)\ ,
\label{pa}
\end{equation}
with $\mathcal{P}_s$ standing for the amplitude of the shock pressure disturbances in the short-wavelength regime. The amplitudes of the associated acoustic-velocity perturbations are proportional to the pressure changes through the functions \eqref{UVa}, provided in Appendix \ref{App1}. The corresponding isentropic temperature variations induced by the acoustic-shock radiation are simply $\bar{T}_a(x,\tau) = \left(\gamma-1\right) \bar{p}(x,\tau)$.
The entropic contribution to the density perturbations $\bar{\rho}_e$ is computed from Rankine-Hugoniot relations~\eqref{massRH}-\eqref{eneRH}, after subtracting the acoustic part. It is readily seen that
\begin{equation}
\bar{\rho}_e(x)=\left(\mathcal{D}-1\right) \bar{p}_s\left(\tau=\frac{x}{M_2}\right)
\label{dene}
\end{equation}
with $\mathcal{D} = \left(2 M_2 \sigma_b -1\right)/M_2^2$ being the amplitude of the density perturbations behind the shock. As easily inferred from Fig.~\ref{fig:RH}, the value of $\mathcal{D}$ is found to be positive, and it reaches a constant value in the strong-shock limit: $\mathcal{D}|_{M_1\gg1}=2 M_2^{-2}$. The corresponding isobaric temperature perturbation, scaled with the base-flow temperature, is $\bar{T}_e(x) = -\bar{\rho}_e(x)= -(\mathcal{D}-1) \bar{p}_s(\tau=x/M_2)$.
Analogously, dimensionless vorticity disturbances are determined by
\begin{equation}
\bar{\omega}(x)=\frac{\partial \bar{v}}{\partial x}-\frac{\partial \bar{u}}{\partial y}=\Omega_2 \bar{p}_s\left(\tau=\frac{x}{M_2}\right) + \Omega_1 \cos\left(\frac{\omega_s}{M_2} x \right)
\label{vort}
\end{equation}
with
\begin{equation}
\Omega_1=C_2\left[1+\left(\frac{k_x}{k_y}\right)^2\right]=C_2\left(1+\frac{1-M_2^2}{C_2^2 M_2^2}\zeta^2\right)
\label{Omega1}
\end{equation}
indicating the contribution resulting from the one-dimensional compression effect, namely the shrinking of the vortices by the overall mass compression ratio, and
\begin{equation}
\Omega_2=\frac{M_2 \left(C_2-1\right) \sigma_a+ \sigma_b M_2 -1}{M_2}
\label{Omega2}
\end{equation}
referring to the contribution induced by shock rippling, proportional to the shock pressure perturbation, which is a genuinely two-dimensional effect.
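For reference, the two factors are evaluated in the short sketch below, with $\sigma_a$ and $\sigma_b$ taken from the shock-boundary coefficients defined in Section~\ref{sec:perturb_problem}:
\begin{verbatim}
# Vorticity-amplification factors (Omega1) and (Omega2).
def omega_factors(zeta, M2, C2, sigma_a, sigma_b):
    Om1 = C2 * (1.0 + (1.0 - M2**2) / (C2**2 * M2**2) * zeta**2)
    Om2 = (M2 * (C2 - 1.0) * sigma_a + sigma_b * M2 - 1.0) / M2
    return Om1, Om2
\end{verbatim}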
The rotational contribution to the velocity disturbances is readily computed through the vorticity field, knowing that rotational perturbations are steady and isobaric in the linear-inviscid approach. The relationships
\begin{equation}
\bar{\omega}(x)=-\frac{\partial^2 \bar{u}_r}{\partial x^2}+\bar{u}_r \ ,\ \bar{v}_r(x)=-\frac{\partial \bar{u}_r}{\partial x}
\label{urot}
\end{equation}
are then employed to write the asymptotic longitudinal and transverse rotational-velocity distributions, provided in eqs.~\eqref{urasym} and \eqref{vrasym}.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figdenrot.pdf}
\caption{Two-dimensional vector field plot for rotational-velocity perturbations superposed to iso-contours of entropic-density disturbances for a shock wave with $M_1=5$, $\varepsilon =0.4$, and $\zeta=1.2$.}
\label{fig:voren}
\end{figure}
The asymptotic expressions for the rotational-velocity and entropic-density perturbations are computed in Fig.~\ref{fig:voren} for the same conditions as in Fig. \ref{fig:pstau}. Velocity perturbations are displayed as a two-dimensional vector field, with the length of the vectors scaled between the maximum and minimum velocity amplitudes, $1.9$ and $0.45$, respectively. The transverse component of the velocity perturbations is found to be much greater than the longitudinal contribution. The spatial frequency modulation, given by $\omega_s/M_2$, is clearly distinguished. The amplitude of the rotational perturbations depends on the incident angle $\theta$, as shown in Fig.~\ref{fig:UV} for $M_1=5$. This dependence is later used to account for the interaction with a whole spectrum of vorticity waves, with $\theta$ ranging from $0$ to $\pi$, upon consideration of the isotropic probability distribution.
Superposed on the vector field, the entropic-density disturbances are displayed as a contour plot in Fig.~\ref{fig:voren}. The centers of the eddies and the peaks of the density field are shifted by $\pi/2$ in the lateral coordinate, as the former are proportional to $\sin(y)$ and the latter to $\cos(y)$. Along the streamwise direction, the peak values of the density and rotational perturbations are in phase for $\zeta>1$, as both periodic distributions are proportional to $\cos{\left(\omega_s/M_2\, x\right)}$. For $\zeta<1$, there exists a spatial shift between the rotational and entropic modes, $\Delta \phi=\phi_r-\phi_e$, given by the contribution of the orthogonal components, $\tan \phi_r =\Omega_2\mathcal{P}_{li} /(\Omega_2\mathcal{P}_{lr}+\Omega_1)$ and $\tan \phi_e =\mathcal{P}_{li} /\mathcal{P}_{lr}$.
\section{Linear interaction analysis with 3D isotropic vorticity perturbations}
\label{sec:analysis_field}
\subsection{Turbulent kinetic energy}
\label{sec:tke}
The three-dimensional upstream flow is assumed to be homogeneous and isotropic. Therefore, the amplitude of the incident shear wave, $\hat{u}_1$, depends exclusively on the wave-number amplitude $|\vec{k}|=k$, as $\vec{k}$ is uniformly distributed over the unit sphere. The three-dimensional problem is conveniently formulated in spherical polar coordinates, so the upstream velocity field is $(\bar{u}_1,\bar{v}_1,\bar{w}_1)=\hat{u}_1 (\sin \theta \sin\varphi,\cos \theta \sin\varphi,\cos\varphi )$ and the associated wave-number vector is $\vec{k}=k (\cos \theta ,-\sin\theta,0 )$. The interaction with the whole spectrum of perturbations is carried out by direct superposition of linear perturbations \citep{Batchelor1953}. The averaged squared upstream velocity perturbations are
\begin{equation}
\langle\bar{u}_1^2\rangle= \int_{k^3} |\bar{u}_1|^2{\rm d}k^3 = \frac{8\pi}{3} \int_0^{\infty}\hat{u}_1^2(k)k^2{\rm d}k\ ,
\label{uw3D}
\end{equation}
\begin{equation}
\langle\bar{v}_1^2\rangle= \langle\bar{w}_1^2\rangle= \int_{k^3} |\bar{v}_1|^2{\rm d}k^3 = \frac{2\pi}{3} \int_0^{\infty}\hat{u}_1^2(k)k^2{\rm d}k
\label{vw3D}
\end{equation}
so the corresponding turbulent kinetic energy (TKE) is computed as
\begin{equation}
\text{TKE}_1 = \frac{1}{2} \left( \langle\bar{u}_1^2 \rangle + \langle \bar{v}_1^2 \rangle + \langle \bar{w}_1^2 \rangle\right)= 2\pi\int_0^{\infty} \hat{u}_1^2(k) k^2 {\rm d}k
\label{TKEo}
\end{equation}
with $\hat{u}_1(k)$, a function of the wave-number magnitude alone, representing the isotropic energy spectrum.
The problem is further simplified by reducing the three-dimensional geometry into an equivalent two-dimensional case that accounts for the effect of vorticity perturbations that are parallel or perpendicular to the shock propagation velocity. After some straightforward algebra, the amplification ratio across the shock wave is
\begin{equation}
K= \frac{\text{TKE}_2}{\text{TKE}_1} = \frac{1}{2} \int_0^{\pi/2} \left(\bar{u}^2 + \bar{v}^2 \right)\sin^3\theta {\rm d}\theta + \frac{1}{2}
\label{K3Dtheta}
\end{equation}
which is conveniently rewritten in terms of the integration variable $\zeta$ as
\begin{equation}
\begin{aligned}
K=\frac{1}{3} \int_0^{\infty} \left(\bar{u}^2 + \bar{v}^2 \right) \text{P}(\zeta) {\rm d}\zeta+ \frac{1}{2}
\label{K3D}
\end{aligned}
\end{equation}
with
\begin{equation}
\text{P}(\zeta) = \frac{3}{2}\frac{M_2^4 C_2^4 \sqrt{1-M_2^2}}{\left[M_2^2 C_2^2 +\zeta^2 \left(1-M_2^2\right)\right]^{5/2}}
\label{pdf}
\end{equation}
standing for the normalized probability-density distribution obeying $\int_0^{\infty}\text{P}(\zeta){\rm d}\zeta=1$. It is readily seen that, although the post-shock turbulence spectrum depends on the upstream energy distribution $\int_0^{\infty}\hat{u}_1^2(k)k^2{\rm d}k$, the kinetic-energy amplification ratio does not, as long as isotropic conditions are considered, namely $\hat{u}_1$ being a function of $k$ alone.
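The claimed normalization of \eqref{pdf} is easy to verify numerically (the values of $M_2$ and $C_2$ below are illustrative placeholders, not outputs of the base-flow solution):
\begin{verbatim}
# Numerical check that the probability density (pdf) integrates to one.
import numpy as np
from scipy.integrate import quad

def P(z, M2, C2):
    return 1.5 * M2**4 * C2**4 * np.sqrt(1.0 - M2**2) / (
        M2**2 * C2**2 + z**2 * (1.0 - M2**2))**2.5

val, err = quad(P, 0.0, np.inf, args=(0.39, 5.6))
print(val)   # -> 1.0
\end{verbatim}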
The amplification ratios for the longitudinal and transverse kinetic energy contributions can be computed with the aid of the probability density distribution. They are conveniently split into rotational and acoustic contributions, yielding
\begin{equation}
\begin{aligned}
L &= L_r + L_a = \int_0^{1}\left[\left(\mathcal{U}_{lr}^r\right)^2+\left(\mathcal{U}_{li}^r\right)^2\right] \text{P}(\zeta) {\rm d}\zeta+\\
&+\int_1^{\infty} \left(\mathcal{U}_{s}^r\right)^2 \text{P}(\zeta) {\rm d}\zeta + \int_1^{\infty} \left(\mathcal{U}^a\right)^2 \text{P}(\zeta) {\rm d}\zeta
\label{L3D}
\end{aligned}
\end{equation}
for the longitudinal part. The variation of the velocity perturbation amplitudes with $\zeta$ is deduced from Fig.~\ref{fig:UV} (for $M_1=5$), knowing that $\zeta$ is inversely proportional to $\tan\theta$, as \eqref{zeta} shows.
Equivalently, the turbulent kinetic energy associated to the transverse contribution is
\begin{equation}
\begin{aligned}
T &= T_r + T_a = \frac{1}{2} \int_0^{1}\left[\left(\mathcal{V}_{lr}^r\right)^2+\left(\mathcal{V}_{li}^r\right)^2\right] \text{P}(\zeta) {\rm d}\zeta+\\
&+ \frac{1}{2}\int_1^{\infty} \left(\mathcal{V}_{s}^r\right)^2 \text{P}(\zeta) {\rm d}\zeta + \frac{1}{2}\int_1^{\infty} \left(\mathcal{V}^a\right)^2 \text{P}(\zeta) {\rm d}\zeta+ \frac{3}{4}\ .
\end{aligned}
\label{T3D}
\end{equation}
The total turbulent kinetic energy, also split into
rotational and acoustic contributions through $K=K_r+K_a$, is computed with the aid of $K_r= (L_r+2 T_r)/3$ and $K_a= (L_a+2 T_a)/3$, or equivalently through $K=\left(L+2T\right)/3$.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figL3D.pdf} \vspace{2mm}
\\ \includegraphics[width=0.47\textwidth]{figT3D.pdf}\vspace{2mm} \\ \includegraphics[width=0.47\textwidth]{figK3D.pdf}
\caption{Longitudinal $L$, transverse $T$ and total $K$ turbulent kinetic energy for $\varepsilon=$ 0, 0.2, and 0.4. The solid lines account for the rotational contribution and the dashed lines show both the rotational and acoustic contributions.}
\label{fig:LTK}
\end{figure}
The variation of the longitudinal, transverse and total contributions to the turbulent kinetic energy is shown in Fig.~\ref{fig:LTK} as a function of $M_1$, for $\varepsilon=$ 0, 0.2, and 0.4. The solid lines show the rotational contribution and the dashed lines include both the rotational and acoustic kinetic energy. In agreement with Fig.~\ref{fig:UV}, the acoustic contribution is found to be greater for the longitudinal part $L$, although sufficiently small to be neglected for any $M_1$ and $\varepsilon$ considered. Although not clearly seen in Fig.~\ref{fig:LTK}, the function $K$ approaches a constant value in the strong-shock limit $M_1\gg1$: 1.8, 7.1, and 9.8 for $\varepsilon=$ 0, 0.2, and 0.4, respectively. On the other hand, the weak-shock limit $M_1-1\ll1$ provides 1, 1.4, and 1.6 for the same conditions. For a fixed value of the incident Mach number, the effect of nuclear dissociation is seen to increase the total kinetic energy. It is found that, for a Mach number close to 3, the total kinetic energy is less sensitive to the dissociation energy, although the longitudinal and transverse contributions are clearly counter-affected, indicating that the post-shock anisotropy is modified by $\varepsilon$. The longitudinal contribution is generally diminished by nuclear dissociation if the Mach number is sufficiently high, a region that covers the scenarios of most interest. It is also found that transverse perturbations are more sensitive to the shock passage, thus yielding a post-shock flow that differs from the ideal one-dimensional configuration.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figLTK2016.pdf}
\caption{Longitudinal $L$, transverse $T$ and total $K$ turbulent kinetic energy for $M_1=5$ as a function of $\varepsilon$. The solid lines represent computations of eqs.~\eqref{L3D}, \eqref{T3D} and \eqref{K3D}, while the dashed lines show the predictions in \protect\cite{Abdikamalov16}.}
\label{fig:LTK1617}
\end{figure}
A direct comparison with the results obtained in \cite{Abdikamalov16} reveals that the dependence of the turbulent kinetic energy on $M_1$ and $\varepsilon$ is affected when endothermic effects are included in the linear perturbation analysis. Although similar trends with increasing $\varepsilon$ are found in both works, the values may differ substantially when the energy employed in dissociating the gas is sufficiently high. For the sake of exemplification, predictions for $L$, $T$ and $K$ are computed in Fig. \ref{fig:LTK1617} by using eqs.~\eqref{L3D}, \eqref{T3D} and \eqref{K3D} (solid) and by recasting the data in \cite{Abdikamalov16} (dashed). The differences become more pronounced with increasing shock strength, reaching $\sim 30\%$ in $K$ for $M_1 = 10$ and $\varepsilon=0.4$.
\subsection{Turbulent Mach number}
It is instructive to relate the pre-shock and post-shock turbulent Mach numbers. It is immediately seen that
\begin{equation}
\langle \delta M_2^2\rangle = -4 M_2 \langle \bar{u}\bar{a} \rangle + \langle \bar{u}^2 \rangle + \langle \bar{v}^2 \rangle + \langle \bar{w}^2 \rangle +3 M_2^2 \langle \bar{a}^2\rangle\ ,
\end{equation}
which can be split into entropic-rotational and acoustic contributions as $\langle \delta M_2^2\rangle=\langle \delta M_1^2 \rangle \left( \Phi_{er}+\Phi_{ac}\right)$, where the functions $\Phi_{er}$ and $\Phi_{ac}$ represent these two contributions. For isotropic turbulence in the upstream flow, the entropic-rotational part reads
\begin{equation}
\begin{aligned}
\Phi_{er} &=\frac{M_2^2C_2^2}{M_1^2}\left[\frac{\langle \bar{u}_r^2 \rangle + \langle \bar{v}_r^2 \rangle + \langle \bar{w}_r^2 \rangle}{3\langle \bar{u}_1^2 \rangle}+ \frac{M_2^2}{4} \frac{\langle \bar{\rho}_e^2 \rangle}{\langle \bar{u}_1^2 \rangle} + \frac{2 M_2}{3} \frac{\langle \bar{u}_r \bar{\rho}_e \rangle}{\langle \bar{u}_1^2 \rangle}\right]\\
&= \frac{M_2^2C_2^2}{M_1^2}\left[K_r + \frac{M_2^2}{4} D_e+
\frac{2 M_2}{3} B_{er}\right]\ ,
\end{aligned}
\end{equation}
while the acoustic contribution can be expressed as
\begin{equation}
\begin{aligned}
\Phi_{ac} &= \frac{M_2^2C_2^2}{M_1^2}\left[\frac{\langle \bar{u}_a^2 \rangle + \langle \bar{v}_a^2 \rangle}{3\langle \bar{u}_1^2 \rangle}+ \frac{M_2^2}{4} \frac{\langle\bar{\rho}_a^2 \rangle}{\langle \bar{u}_1^2 \rangle}- \frac{2M_2}{3} \frac{\langle \bar{u}_a\bar{\rho}_a \rangle}{\langle \bar{u}_1^2 \rangle} \right]\\
&=\frac{M_2^2C_2^2}{M_1^2}\left[K_a + \frac{M_2^2 (\gamma-1)^2}{4} D_a - \frac{2 M_2 (\gamma-1)}{3} B_a\right]\ .
\end{aligned}
\end{equation}
The values of $K_r$, $K_a$, $D_e$, $D_a$, $B_{er}$, and $B_a$ are provided in Eq.~\eqref{K3D} for the kinetic energy, in Eq.~\eqref{D3D} for the average density perturbations, and in Eq.~\eqref{B3D} for the buoyancy correlation. The mean value of the post-shock Mach number includes changes in the velocity field, the density, and the cross-product contribution. As $\bar{v}$ and $\bar{\rho}$ are orthogonal functions, only the longitudinal contribution correlates with density perturbations. The latter are expressed as a function of the shock pressure through $\bar{\rho}_e(x)= (\mathcal{D}-1)\bar{p}_s(\tau=x/M_2)$ for the entropic perturbations, and through $\bar{\rho}_a=\bar{p}_s(\tau=x/M_2)$ for the acoustic part.
The value of $\Phi=\Phi_{er}+\Phi_{ac}$ is computed in Fig.~\ref{fig:lum} as a function of the shock strength $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4. For typical values of these parameters ($0.2 \lesssim \varepsilon \lesssim 0.4$ and $M_1 \gtrsim 5$), $\Phi$ ranges from $\sim 0.3$ to $\sim 0.6$. Similarly to the turbulent kinetic energy in the post-shock region, most of the contribution to $\Phi$ comes from the entropic-rotational part, while the acoustic contribution $\Phi_{ac}$ is found to be negligibly small.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figPhi3D.pdf}
\caption{Variable $\Phi$ as a function of the shock strength $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4.}
\label{fig:lum}
\end{figure}
\subsection{Enstrophy}
The effect of the shock passage on the upstream isotropic vorticity field can be computed with the aid of eqs. \eqref{vort} and \eqref{pdf}. The amplification of the average squared vorticity perturbations, nondimensionalized with $(k a_2)^2$, is written as
\begin{equation}
W = \frac{\langle\bar{\omega}_{x}^2+\bar{\omega}_{y}^2+\bar{\omega}_{z}^2\rangle}{\langle\bar{\omega}_{1,x}^2+\bar{\omega}_{1,y}^2+\bar{\omega}_{1,z}^2\rangle} = \frac{1}{3}+ \frac{2}{3}\frac{\langle \bar{\omega}_{y}^2+\bar{\omega}_{z}^2\rangle}{\langle\bar{\omega}_{1,y}^2+\bar{\omega}_{1,z}^2\rangle}=\frac{1}{3}+ \frac{2}{3}W_{\perp}\ ,
\label{W3D}
\end{equation}
with the factor $1/3$ referring to the invariable component of the vorticity pointing in the streamwise direction, and $W_{\perp}$ being the amplification factor of the averaged squared vorticity perpendicular to the shock propagation velocity. The two-dimensional equivalent factor
\begin{equation}
\begin{aligned}
&W_z = \frac{\langle \bar{\omega}_{z}^2\rangle}{\langle \bar{\omega}_{1,z}^2\rangle}=\int_1^\infty\left(\Omega_1+\Omega_2\mathcal{P}_{s}\right)^2\frac{C_2^2 M_2^2}{C_2^2 M_2^2+(1-M_2^2)\zeta^2} \text{P}(\zeta) {\rm d}\zeta \\
&+ \int_0^1\left[\left(\Omega_1+\Omega_2\mathcal{P}_{lr}\right)^2+\Omega_2^2\mathcal{P}_{li}^2\right]\frac{C_2^2 M_2^2}{C_2^2 M_2^2+(1-M_2^2)\zeta^2} \text{P}(\zeta) {\rm d}\zeta
\label{Wz}
\end{aligned}
\end{equation}
is conveniently employed in computing the perpendicular contribution as $W_{\perp}=(C_2+3W_z)/4$.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figW3D.pdf}
\caption{Mean value of squared vorticity perturbations, $W$, for $\varepsilon=$ 0, 0.2, and 0.4.}
\label{fig:W}
\end{figure}
The so-called enstrophy, $W$, is computed in Fig.~\ref{fig:W} for the same conditions as in Fig.~\ref{fig:LTK}. In consonance with the turbulent kinetic energy, the effect of nuclear dissociation across the shock is found to increase the average vorticity intensity for a fixed value of $M_1$.
When the shock expands at a variable Mach number, the theory still holds if base-flow changes are negligible within the perturbation wavelength. Upstream turbulent flows characterized by short wavelengths meet this constraint. On the other hand, the perturbation wavelengths must remain sufficiently large for the shock to be seen as a pure discontinuity. In that case, the post-shock kinetic energy at any radial locus can be approximated by the one left behind by the expanding shock, whose instantaneous properties $M_1$ and $\varepsilon$ can be computed following the analysis presented in the next section. The values obtained for the downstream kinetic energy and enstrophy can then be used to compute the evolution of the turbulent flow under viscous-dissipative effects. Figures~\ref{fig:LTK} and \ref{fig:W} thus serve as the onset for that post-shock stage, with the subsequent thermalization of the kinetic energy being inferred from the dissipation energy cascade associated with the dominant scales \citep{Mabanta17}.
\section{Nuclear Dissociation Energy and the Pre-shock Mach Number In CCSN models} \label{sec:varepsilon}
\begin{figure}
\includegraphics[width=0.47\textwidth]{e_m_rsh_vs_t_e_s15_h122.pdf}
\includegraphics[width=0.47\textwidth]{e_m_rsh_vs_t_e_s15_h123.pdf}
\includegraphics[width=0.47\textwidth]{e_m_rsh_vs_t_e_s25_h118.pdf}
\caption{{\bf Top panel:} Time evolution of the shock radius (dashed black line) and the nuclear dissociation parameter (solid red line) for non-exploding model $s15$ with heating factor $h=1.2$ (i.e., a group I model). For reference, the horizontal red dashed line shows the $\varepsilon=0.2$ line. {\bf Center panel:} The same as in top panel but for exploding model $s15$ with heating factor $h=1.23$ (i.e., a group II model). {\bf Bottom panel:} The same as in top panel but for model $s25$ with heating factor $h=1.18$ that undergoes strong shock oscillations (i.e., a group III model).}
\label{fig:rsh_e_m}
\end{figure}
\begin{figure}
\includegraphics[width=0.47\textwidth]{e_vs_rsh.pdf}
\caption{The nuclear dissociation parameter $\varepsilon$ as a function of the shock radius for non-exploding (group I) and exploding (group II) models. Each line represents a specific model and the color of each line indicates the time: the blue end of the line corresponds to $10\,\mathrm{ms}$ after bounce, while the red end corresponds to late postbounce time ($t-t_\mathrm{b} \sim 1\,\mathrm{s}$). For shock radii $R_\mathrm{shock}\lesssim 175 \,\mathrm{km}$, $\varepsilon$ scales as $\propto R_\mathrm{shock}$, while for large shock radii, the growth of $\varepsilon$ saturates and remains $\sim 0.5$ until $R_\mathrm{shock}\lesssim 600 \,\mathrm{km}$.}
\label{fig:e_vs_rsh2}
\end{figure}
This section presents the estimates of the nuclear dissociation energy and the pre-shock Mach number from a series of spherically-symmetric CCSN simulations using the {\tt GR1D} code with the leakage/heating scheme \citep{Oconnor10}. Eight \cite{Woosley07} progenitor star models with ZAMS masses of $12M_\odot$, $15M_\odot$, $18M_\odot$, $20M_\odot$, $25M_\odot$, $30M_\odot$, $40M_\odot$, and $70M_\odot$ were considered. Each progenitor model is evolved using several values of the heating parameter. This yields a variety of qualitatively different evolutionary paths for each stellar model, ranging from non-exploding models to rapidly exploding models. Each simulation is named using the following convention: for example, the simulation $s15h1.23$ uses a progenitor model with a ZAMS mass of $15M_\odot$ evolved with a heating factor of $1.23$ \citep[for the definition of the heating factor, see, e.g.,][]{Oconnor10,Ott13}.
Our simulations use the SFHo finite-temperature nuclear EOS of \cite{Steiner13}\footnote{Available at {\tt www.stellarcollapse.org} \citep{Oconnor10}.}, as this EOS employs an accurate treatment of light nuclei. Calculations with the \cite{Lattimer91} EOS with nuclear incompressibility of $K=220$ MeV revealed similar results. Across our computational domain, we use $1000$ logarithmically spaced radial grid points with a central resolution of $0.1\,\mathrm{km}$. The outer boundary is fixed at the radius where the initial density is $2\times10^3 \, \mathrm{g/cm^3}$.
The shock wave dissociates heavy nuclei into light nuclei such as $\alpha$ particles and free nucleons. The SFHo EOS includes the nuclei ${}^2$H, ${}^3$H, ${}^3$He, ${}^4$Li, $\alpha$ particles, and heavy nuclei. Based on the change of the mass fractions of nuclei across the shock, the nuclear dissociation parameter is calculated using formula (\ref{eq:varepsilon2}) derived in Appendix~\ref{App2}. The binding energies of the light nuclei are taken from the \cite{Audi03} database, while those of heavy nuclei are assumed to be equal to that of iron nuclei, i.e., $8.8$ MeV per nucleon. For calculating the dissociation energy at the shock, this is a reasonable assumption, as the binding energies of heavy nuclei in the iron core and Si/O shells differ by at most $\sim 10\%$.
The qualitative behavior of $\varepsilon$ and $M_1$ depends on the overall dynamics of each model. In this respect, all the models considered here can be categorized into three groups: ($i$) non-exploding models, in which the shock radius gradually decreases with time without exhibiting strong radial oscillations (group I), ($ii$) exploding models, in which the shock gradually expands without strong oscillations (group II), and ($iii$) models in which the shock wave exhibits strong oscillations before either transitioning to explosion or failing to explode (group III). In the following, we describe these three model groups separately.
The top panel of Fig.~\ref{fig:rsh_e_m} shows the shock radius (solid black line) and the dissociation parameter $\varepsilon$ (solid red line) as a function of post-bounce time for model $s15h1.22$. This is a non-exploding model, in which the shock gradually recedes without exhibiting strong radial oscillations, i.e., this model belongs to group I. After the initial period of $\sim 50\,\mathrm{ms}$, during which the shock undergoes rapid expansion, the shock stalls until $t-t_\mathrm{b}\sim 100\,\mathrm{ms}$, after which $R_\mathrm{shock}$ starts receding monotonically. The qualitative behavior of $\varepsilon$ is similar to that of $R_\mathrm{shock}$: following the initial period of increase and subsequent stagnation, $\varepsilon$ gradually decreases with time. The dissociation parameter $\varepsilon$ falls below, e.g., $\varepsilon=0.2$ when $R_\mathrm{shock} \lesssim 55\,\mathrm{km}$. Other models of group I exhibit a similar behavior.
The center panel of Fig.~\ref{fig:rsh_e_m} shows the shock radius (dashed black line) and the dissociation parameter $\varepsilon$ (solid red line) as a function of post-bounce time for model $s15h1.23$. This is an exploding model, in which the shock gradually expands without exhibiting strong radial oscillations, i.e., it belongs to group II. In this model, the stalled shock phase lasts until $t-t_\mathrm{b}\sim 200\,\mathrm{ms}$, after which $R_\mathrm{shock}$ slowly increases. In this phase, $R_\mathrm{shock}$ exhibits only weak oscillations with a relative amplitude of a few percent. At $t-t_\mathrm{b}\sim 500\,\mathrm{ms}$, the shock starts rapidly expanding and the model quickly transitions towards explosion. In the early $t-t_\mathrm{b}\lesssim 500\,\mathrm{ms}$ after bounce, the dissociation parameter stays above $0.2$ and oscillates around the value of $\sim 0.5$. However, it rapidly decreases during the explosion phase, once the shock radius becomes $\gtrsim 800\,\mathrm{km}$. Other models of group II exhibit a similar behavior.
It is illuminating to analyze $\varepsilon$ as a function of the shock radius, a plot of which is shown in Fig.~\ref{fig:e_vs_rsh2} for all of our models in groups I and II. Each line in this plot corresponds to one model and the color of a point on this line reflects the time after bounce: the blue end of each line corresponds to $t-t_\mathrm{b}=10\,\mathrm{ms}$, while the red end corresponds to the end of the simulations ($t-t_\mathrm{b}\sim 1 \,\mathrm{s}$). In all non-exploding models (group I), $\varepsilon$ scales as $\propto R_\mathrm{shock}$, with the proportionality coefficient depending on the mass:
\begin{equation}
\varepsilon \sim \frac{2}{3} M_{1.3}^{-1} \left( \frac{R_\mathrm{shock}}{150\,\mathrm{km}} \right).
\label{eq:eps_scaling}
\end{equation}
This relation is qualitatively similar to Eq.~(4) predicted by \citet{Fernandez2009a}. However, as can be seen in Fig.~\ref{fig:e_vs_rsh2}, the $\varepsilon\propto R_\mathrm{shock}$ scaling becomes invalid as soon as the shock radius becomes larger than $\sim 175\,\mathrm{km}$, which occurs in exploding models. In this regime, $\varepsilon$ stops growing with $R_\mathrm{shock}$ and saturates to $\sim 0.5$ for most models.
Figure~\ref{fig:m_vs_rsh} shows the pre-shock Mach number $M_1$ as a function of the shock radius for all of our models in groups I and II. As in Fig.~\ref{fig:e_vs_rsh2}, each line represents a single model and the color of each point on each line represents the post-bounce time. Except for the immediate post-bounce time ($t-t_\mathrm{b}\sim 10-20\,\mathrm{ms}$), $M_1$ depends on $R_\mathrm{shock}$ as
\begin{equation}
M_1 \sim 6.5\times \left( \frac{150\,\mathrm{km}}{R_\mathrm{shock}} \right)^{0.37}
\label{eq:M1_scaling}
\end{equation}
This relation is only approximate; the spread in the values of $M_1$ at a given $R_\mathrm{shock}$ is caused by the fact that different models have somewhat different thermodynamic conditions (e.g., temperature), which leads to different values of the speed of sound, which, in turn, affects the Mach number.
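For convenience, the two fitting relations can be encoded as follows (a sketch; the mass factor $M_{1.3}$ in \eqref{eq:eps_scaling} is passed as an input, and the saturation of $\varepsilon$ at $\sim0.5$ for large radii, read off Fig.~\ref{fig:e_vs_rsh2}, is imposed here as a crude cap):
\begin{verbatim}
# Fitting relations (eq:eps_scaling) and (eq:M1_scaling) for the
# stalled-shock phase; the cap at ~0.5 mimics the observed saturation.
def eps_of_R(R_km, M_13=1.0):
    return min(2.0 / 3.0 / M_13 * (R_km / 150.0), 0.5)

def M1_of_R(R_km):
    return 6.5 * (150.0 / R_km)**0.37

print(eps_of_R(100.0), M1_of_R(100.0))
\end{verbatim}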
Finally, the bottom panel of Fig.~\ref{fig:rsh_e_m} shows the shock radius (solid black line) and the nuclear dissociation parameter $\varepsilon$ (solid red line) as a function of time for model $s25h1.18$. This model exhibits strong radial shock oscillations from $\sim200\,\mathrm{ms}$ till $\sim800\,\mathrm{ms}$ after bounce. During this time, $\varepsilon$ also undergoes strong oscillations with the same frequency as the shock radius. The oscillations in the two quantities are somewhat out of phase. When the increase of $R_\mathrm{shock}$ decelerates, $\varepsilon$ starts decreasing fast, reaching its local minimum just before $R_\mathrm{shock}$ does, and it starts increasing when the shock radius approaches its local minimum. At its minimum, $\varepsilon$ can become as small as $0.1$ for a brief period of time. The frequency of these oscillations is comparable to the frequencies of the infalling perturbations. For this reason, the linear formalism presented in this work is unlikely to be applicable to such models (cf. Section~\ref{sec:perturb_problem}). On the other hand, such oscillations are artificially strong in 1D models. Full 3D simulations are unlikely to exhibit strong oscillations, at least not in the angle-averaged shock radius. However, in the presence of strong SASI oscillations, the shock radius may oscillate along radial directions. In these situations, the values of $\varepsilon$ are likely to exhibit oscillations similar to those of the group III models.
\begin{figure}
\includegraphics[width=0.47\textwidth]{m_vs_rsh.pdf}
\caption{Mach number as a function of the shock radius. The color of each line indicates the corresponding post-bounce time at which the value of the Mach number is extracted. The blue end of the lines corresponds to the early post-bounce time of $t-t_\mathrm{b} = 10 \, \mathrm{ms}$, while the red region corresponds to late post-bounce times ($t-t_\mathrm{b} \sim 1 \, \mathrm{s}$). The dashed black line represents the fitting function (\ref{eq:M1_scaling}) that yields the pre-shock Mach number as a function of the shock radius $R_\mathrm{shock}$ in the stalled-shock phase.}
\label{fig:m_vs_rsh}
\end{figure}
\subsection{Amplification of turbulent kinetic energy as a function of the shock radius}
In addition to analyzing the amplification of turbulent kinetic energy across the shock as a function of the parameters $\varepsilon$ and $M_1$, as was done in Section~\ref{sec:analysis_field}, one can gain additional insight by looking at it as a function of the shock radius $R_\mathrm{shock}$. To this end, equations \eqref{eq:eps_scaling} and \eqref{eq:M1_scaling} allow us to express the nuclear-dissociation degree $\varepsilon$ and the shock strength $M_1$ as functions of the shock radius, $R_\mathrm{shock}$. These expressions are employed to compute $L$, $T$ and $K$ as a function of $R_\mathrm{shock}$ in Fig.~\ref{fig:LTKrs}. Each component of the turbulent kinetic energy appears to depend rather weakly on $R_\mathrm{shock}$. The transverse component increases by a factor of $\sim 3$, while the longitudinal component experiences no significant amplification. The total turbulent kinetic energy amplifies by a factor of $\sim 2$. As dictated by the computations in Fig.~\ref{fig:e_vs_rsh2}, there exist two distinct regions: the zone where $\varepsilon$ is linearly proportional to the shock position ($R_\mathrm{shock}\leq 175$ km) and the region where nuclear dissociation is saturated. For small radii, the strong-shock adiabatic limit applies, as $M_1$ grows proportionally to $R_\mathrm{shock}^{-0.37}$ and $\varepsilon$ approaches zero.
The dashed lines in Fig.~\ref{fig:LTKrs} represent the amplification of the integrated kinetic energy in the region of space confined between the shock and the center through
\begin{equation}
\label{eq:LTK_int}
\begin{pmatrix} \bar L\\ \bar T\\ \bar K \end{pmatrix}
= \frac{3}{R_{\text{shock}}^3}\int_0^{R_{\text{shock}}}\begin{pmatrix} L(r)\\ T(r)\\ K(r) \end{pmatrix}r^2 \rm{d} r\ ,
\end{equation}
provided that the characteristic evolution time of the post-shock turbulent structures due to viscous-diffusive effects is much longer than the shock passage time through the matter up to the distance $R_{\text{shock}}$.
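A direct numerical evaluation of \eqref{eq:LTK_int} is straightforward; in this sketch the profile $K(r)$ is any callable, e.g. the amplification factor built from the fits of the previous subsection:
\begin{verbatim}
# Shell-volume average (eq:LTK_int) of an amplification profile K(r).
from scipy.integrate import quad

def volume_average(K_of_r, R_shock):
    integral, _ = quad(lambda r: K_of_r(r) * r**2, 0.0, R_shock)
    return 3.0 * integral / R_shock**3
\end{verbatim}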
\begin{figure}
\includegraphics[width=0.45\textwidth]{figLTKRshock.pdf}
\caption{Longitudinal $L$, transverse $T$ and total $K$ turbulent kinetic energy for $M_1=5$ as a function of the shock radius. The dashed lines represent the amplification of the integrated kinetic energy in the post-shock region (cf. Eq.~\ref{eq:LTK_int}).}
\label{fig:LTKrs}
\end{figure}
\section{Discussion: Impact on the CCSN Explosion Mechanism}
\label{sec:discussion}
Generally speaking, the pre-shock perturbations in CCSNe consist of different physical modes, including acoustic and entropy waves in addition to the vorticity modes considered in this work. Without including all of these modes, one cannot obtain a rigorous estimate of the impact of perturbations on the explosion condition. However, at linear order and for a uniform mean flow, all these modes evolve independently from each other. Therefore, we can study the effect of the vorticity modes alone in this work; the effect of the other modes will be addressed in future work.
The impact of the perturbations on the explosion condition can be analyzed using the concept of the critical neutrino luminosity, i.e., the minimum neutrino luminosity that is necessary in order to produce an explosion for a given stellar model \citep{Burrows93}. The turbulence behind the supernova shock reduces the critical luminosity, an analytical estimate of which was obtained by \citet{Mueller15}:
\begin{equation}
\label{eq:crit_lum}
L_\mathrm{crit} \propto \left(\dot M M\right)^{3/5}r_\mathrm{gain}^{-2/5}\left(1+\frac{4}{3}\langle \delta M_2^2\rangle \right)^{-3/5},
\end{equation}
where $\delta M_2$ is the turbulent Mach number in the gain region. It comprises two contributions, one coming from neutrino-driven convection and/or SASI, the other stemming from the perturbations crossing the shock. \citet{Mueller15} argue that the impact of the density perturbations generated by the advection of vorticity waves plays the dominant role in driving buoyancy-driven turbulence in the post-shock region. The resulting reduction in the critical luminosity was recently calculated by \citet{Mueller16}:
\begin{equation}
\begin{aligned}
\label{eq:dl}
\frac{\Delta L_\mathrm{crit}}{L_\mathrm{crit}} \simeq - \frac{0.15 \pi}{l \eta_\mathrm{acc}\eta_\mathrm{heat}} \sqrt{\langle \delta M^2_0 \rangle},
\end{aligned}
\end{equation}
where $\delta M_0$ is the turbulent Mach number in the convective nuclear-burning shell prior to collapse, $l$ is the angular wavenumber of the dominant perturbation, and $\eta_\mathrm{heat}$ and $\eta_\mathrm{acc}$ are the efficiencies of neutrino heating and accretion.
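A direct evaluation of \eqref{eq:dl} is sketched below; the default parameter values anticipate the typical choices quoted at the end of this section:
\begin{verbatim}
# Reduction of the critical luminosity, eq. (eq:dl).
import numpy as np

def dL_over_L(dM0_rms, l=2, eta_acc=2.0, eta_heat=0.1):
    return -0.15 * np.pi / (l * eta_acc * eta_heat) * dM0_rms

print(dL_over_L(0.1))   # ~ -0.12 before the entropy-wave enhancement
\end{verbatim}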
Expression (\ref{eq:dl}) is derived under the assumption that the advection of convective perturbations from Si/O shells towards the shock generates density perturbations of order
\begin{equation}
\label{eq:drho2_m2}
\sqrt{\langle \bar\rho^2_2 \rangle} \sim \sqrt{\langle \delta M_0^2 \rangle}
\end{equation}
behind the shock. This estimate does not include the density fluctuations associated with entropy perturbations generated in the post-shock region by the interaction of the shock with vorticity waves. Below, we estimate the impact of these perturbations on the critical luminosity.
\subsection{Density perturbations in the post-shock region}
\label{sec:density_pert}
According to the linearized RH equations \eqref{massRH}-\eqref{eneRH}, the corrugated shock front induces density perturbations in the post-shock gas. Such perturbations are of entropic ($\bar{\rho}_e$) and acoustic ($\bar{\rho}_a$) nature, with the former remaining frozen to the fluid particles in the absence of diffusive effects. For an isotropic field of incoming vorticity perturbations, the average of the squared density changes in the post-shock region can be written as
\begin{equation}
\begin{aligned}
\langle\bar{\rho}^2\rangle = D \int_{0}^\infty\hat{u}_1^2(k)k^2dk
\label{denave}
\end{aligned}
\end{equation}
with the dimensionless pre-spectrum coefficient $D$, split into entropic $D_e$ and acoustic $D_a$ contributions, being computed as
\begin{equation}
\begin{aligned}
D &= D_e + D_a = \left(\mathcal{D}-1\right)^2\int_0^{1}\left(\mathcal{P}_{lr}^2+\mathcal{P}_{li}^2\right) \text{P}(\zeta) {\rm d}\zeta+\\
&+\left(\mathcal{D}-1\right)^2\int_1^{\infty}\mathcal{P}_{s}^2 \, \text{P}(\zeta) {\rm d}\zeta + \int_1^{\infty} \mathcal{P}^2\, \text{P}(\zeta) {\rm d}\zeta\ .
\label{D3D}
\end{aligned}
\end{equation}
The terms involving the factor $\left(\mathcal{D}-1\right)^2$ correspond to the entropic contribution $D_e$, while the last term refers to the acoustic part $D_a$.
Figure~\ref{fig:D} shows the functions $D_e$ and $D_a$ versus $M_1$ for $\varepsilon=0$, $0.2$, and $0.4$. Both $D_e$ and $D_a$ grow with $M_1$ and $\varepsilon$. The acoustic part $D_a$ is at least two orders of magnitude smaller than the entropic part $D_e$ and is thus negligible.
In order to obtain a more intuitive insight, it is useful to express $\langle\bar{\rho}^2\rangle$ as a function of the pre-shock turbulent Mach number. The latter is related to the average upstream velocity perturbations as
\begin{equation}
\begin{aligned}
\langle \delta M_1^2 \rangle=3 \left(\frac{a_2}{a_1}\right)^2\langle \bar{u}_1^2 \rangle=\frac{3 M_1^2}{M_2^2C_2^2}\langle \bar{u}_1^2 \rangle
\end{aligned}
\end{equation}
Combining this with (\ref{uw3D}) and (\ref{denave}), we obtain
\begin{equation}
\begin{aligned}
\langle \bar\rho^2_2 \rangle = \frac{M_2^2C_2^2 D}{8 \pi M^2_1} \langle \delta M_1^2 \rangle = A \langle \delta M_1^2 \rangle
\end{aligned}
\end{equation}
Figure~\ref{fig:drho2dM1} shows the ratio $A=\langle \bar\rho^2_2 \rangle/\langle \delta M_1^2 \rangle$ as a function of the shock strength $M_1$ for $\varepsilon=0$, $0.2$, and $0.4$. For typical values of these parameters ($0.2 \lesssim \varepsilon \lesssim 0.4$ and $M_1 \gtrsim 5$), the ratio ranges from $\simeq\!0.1$ to $\simeq\!0.2$. Accordingly,
\begin{equation}
\label{eq:drho2vsMach1}
\sqrt{\langle \bar\rho^2_2 \rangle} \simeq (0.32 - 0.45) \times \sqrt{\langle \delta M_1^2 \rangle}.
\end{equation}
We can relate the turbulent Mach number $\sqrt{\langle \delta M_1^2 \rangle}$ immediately above the shock to that in the pre-collapse convective shells. During collapse, the Mach number of vorticity waves grows as $\propto r^{(3\gamma-7)/4}$ in the absence of dissipative effects \citep{Kovalenko98,Lai00}. If the convective shell falls from a radius of $\sim 1500\,\mathrm{km}$ to $\sim 200\,\mathrm{km}$, the turbulent Mach number should increase by a factor of $\sim 4.53$. Applying this to scaling (\ref{eq:drho2vsMach1}), we obtain
\begin{equation}
\sqrt{\langle \bar\rho^2_2 \rangle} \simeq (1.45 - 2.04) \times \sqrt{\langle \delta M_0^2 \rangle}.
\label{eq:drho2vsMach12}
\end{equation}
The density perturbations predicted by this relation are significantly larger than those generated by the advection of the vorticity waves, given by (\ref{eq:drho2_m2}). Below, we investigate whether these perturbations contribute to the turbulence in the gain region.
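The numbers in \eqref{eq:drho2vsMach1} and \eqref{eq:drho2vsMach12} follow from a two-line computation (a sketch reproducing the quoted factors):
\begin{verbatim}
# Collapse amplification factor ~4.53 for gamma = 4/3 and a fall from
# 1500 km to 200 km, and the resulting range in (eq:drho2vsMach12).
gamma = 4.0 / 3.0
amp = (200.0 / 1500.0)**((3.0 * gamma - 7.0) / 4.0)
print(amp)                     # ~ 4.53
print(0.32 * amp, 0.45 * amp)  # ~ 1.45, ~ 2.04
\end{verbatim}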
\begin{figure}
\includegraphics[width=0.47\textwidth]{figD3De.pdf} \\
\includegraphics[width=0.47\textwidth]{figD3Da.pdf}
\caption{Variable $D$ as a function of $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4. The upper panel shows the contribution of entropic-rotational perturbations $D_e$ and the lower panel displays the acoustic contribution $D_a$.}
\label{fig:D}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figA3D.pdf}
\caption{Ratio $A=\langle \bar\rho^2_2 \rangle/\langle \delta M_1^2 \rangle$ as a function of the shock strength $M_1$ for $\varepsilon=0$, $0.2$, and $0.4$.}
\label{fig:drho2dM1}
\end{figure}
\subsection{Generation of Turbulence from Density Perturbations}
When density perturbations are immersed in a gravitational field, buoyancy effects may play a significant role in contributing to the turbulent kinetic energy. The kinetic-energy production or consumption can be scaled with $\langle \bar{\rho}\bar{u} \rangle g / a_2$ \citep[see, e.g., Chapter 8.2 of][]{Holton12}, with $\bar{u}$ being the velocity component parallel to the gravity field $g$, which in our case coincides with the direction of the mean flow. Similarly to the pure density perturbations, the correlation of velocity and density disturbances can be expressed as
\begin{equation}
\begin{aligned}
\langle\bar{\rho}\bar{u}\rangle = B \int_{0}^\infty\hat{u}_1^2(k)k^2dk
\label{buoave}
\end{aligned}
\end{equation}
where $B$ is a dimensionless pre-spectrum factor,
\begin{equation}
\begin{aligned}
B &= B_{er} + B_a = \left(\mathcal{D}-1\right)\int_0^{1}\left(\mathcal{U}_{lr}^r\mathcal{P}_{lr}+\mathcal{U}_{li}^r\mathcal{P}_{li}\right)\text{P}(\zeta) {\rm d}\zeta+\\
&+\left(\mathcal{D}-1\right)\int_1^{\infty} \mathcal{U}_{s}^r\mathcal{P}_{s}\,\text{P}(\zeta) {\rm d}\zeta + \int_1^{\infty} \mathcal{U}^a\mathcal{P}\, \text{P}(\zeta) {\rm d}\zeta.
\label{B3D}
\end{aligned}
\end{equation}
The entropic-rotational part comprises the terms proportional to the factor $\mathcal{D}-1$, while the last integral represents the acoustic contribution. For negative values of $\langle \bar{\rho}\bar{u} \rangle$ (i.e., positive velocity-temperature correlation), the density perturbation contributes constructively to the post-shock turbulent kinetic energy. The opposite applies for $\langle \bar{\rho}\bar{u} \rangle>0$.
Figure \ref{fig:DB} shows $B$ as a function of the shock Mach number for $\varepsilon=$ 0, 0.2, and 0.4. Similarly to $D$, the acoustic contribution to $B$ is found to be negligible. The buoyancy correlations are negative, meaning that the density perturbations will increase the final turbulent kinetic energy.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figB3D.pdf}\vspace{2mm}
\caption{Density--velocity correlation $B$ as a function of the shock strength $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4.}
\label{fig:DB}
\end{figure}
In the light of this finding, we can substitute the density fluctuations (\ref{eq:drho2vsMach12}) from entropy waves into the expression for the reduction of the critical luminosity (\ref{eq:dl}) and obtain
\begin{equation}
\begin{aligned}
\label{eq:dlf}
\frac{\Delta L_\mathrm{crit}}{L_\mathrm{crit}} \simeq - (1.45 - 2.04) \times \frac{0.15 \pi}{l \eta_\mathrm{acc}\eta_\mathrm{heat}} \sqrt{\langle \delta M^2_0 \rangle}.
\end{aligned}
\end{equation}
For typical values of $\eta_\mathrm{acc}=2$, $\eta_\mathrm{heat}=0.1$, $\sqrt{\langle \delta M^2_0 \rangle} \sim 0.1$, and $l=2$, we get a $17-24\%$ reduction in the critical luminosity. This roughly agrees with the results of 3D simulations \citep{Mueller17}.
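This estimate can be reproduced with a few lines of Python (a minimal sketch; the variable names are ours):
\begin{verbatim}
import math

# fiducial parameters quoted in the text
eta_acc, eta_heat, l_mode, dM0 = 2.0, 0.1, 2.0, 0.1
prefactor = 0.15 * math.pi / (l_mode * eta_acc * eta_heat)
for amp in (1.45, 2.04):  # amplification range of eq. (eq:drho2vsMach12)
    print(round(amp * prefactor * dM0, 2))  # -> 0.17 and 0.24
\end{verbatim}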
\subsection{Impact of acoustic waves and direct injection of kinetic energy}
In addition to the impact of entropy perturbations on the explosion condition of CCSNe, one can in principle study the role of other effects, such as the acoustic waves generated by the interaction of the shock with vorticity waves and the direct injection of the kinetic energy of vorticity waves into the post-shock region. The reduction of the critical luminosity due to the latter was estimated by \citet{Abdikamalov16}:
\begin{equation}
\label{eq:dldi}
\frac{\Delta L_\mathrm{crit}}{L_\mathrm{crit}} \sim 0.6 \langle \delta M^2_1 \rangle .
\end{equation}
For the same parameters used in estimate (\ref{eq:dlf}), equation (\ref{eq:dldi}) yields $\sim 12\%$ reduction in the critical luminosity. This is smaller than the reduction due to the entropy perturbations calculated above. Hence, the direct injection of turbulent kinetic energy of vorticity waves is expected to play a sub-dominant role, in agreement with the estimate of \citet{Mueller15}.
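Explicitly, with the pre-collapse value $\sqrt{\langle\delta M_0^2\rangle}\sim0.1$ amplified by the factor of $\sim4.53$ derived above,
\begin{equation}
\frac{\Delta L_\mathrm{crit}}{L_\mathrm{crit}} \sim 0.6 \left(4.53\times 0.1\right)^2 \simeq 0.12\ .
\end{equation}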
As we saw above in Sections~\ref{sec:tke} and \ref{sec:density_pert}, the acoustic waves make a negligibly small contribution to the perturbations of velocity and density compared to the contributions of the vorticity and entropy modes in the post-shock region. For this reason, the acoustic waves in the post-shock region are expected to have a negligibly small effect on the explosion condition of CCSNe (see also the discussion in \citealt{Mueller16}).
\section{Conclusions}
\label{sec:conclusion}
The shock-turbulence interplay plays a key role in facilitating core-collapse supernova (CCSN) explosions. In this paper, we studied how vorticity waves from nuclear shell burning affect the shock dynamics once they encounter the shock in the aftermath of stellar core collapse. Our study accounts for the interaction of the shock with intermediate vortical scales, i.e., those whose characteristic length is sufficiently small for the shock to be considered a planar front, yet sufficiently large for the shock to be seen as a discontinuity front. The mathematical formalism is based on the solution of the linearized hydrodynamics equations in the post-shock region \citep{Wouchuk2009,Huete2017}, which captures the full time evolution of the shock-vorticity system at linear order. In particular, this allowed us to take into account the perturbation of the nuclear dissociation itself, which was not included previously \citep{Abdikamalov16}. We demonstrated that this effect plays an important role in shock-turbulence interaction.
When a vorticity wave encounters a shock, it deforms and generates a post-shock field of vorticity, entropy, and acoustic waves. We have analyzed the properties of these fluctuations for a wide range of the parameters of the incoming vorticity waves and mean flow (Sections~\ref{sec:analysis_wave}-\ref{sec:analysis_field}). We have found that, within the limits of validity of the model, the density perturbations in the post-shock region are dominantly of entropic nature, while the contribution of the acoustic waves is negligibly small.
We show that the entropy perturbations in the post-shock region are the dominant factor in generating turbulence in the post-shock flow due to work by buoyancy forces (Section~\ref{sec:discussion}). For typical problem parameters, the amplitude of density perturbations is about $1.45-2.04$ times the turbulent Mach number in the Si/O shell. Following the method proposed by \cite{Mueller16}, we show that this results in a $17-24\%$ reduction in the critical luminosity for producing explosion (cf. Section~\ref{sec:discussion}). This approximately agrees with the results of recent 3D neutrino-hydrodynamics simulations \citep{Mueller17}.
This paper is the first in a series of two papers that aims at establishing the linear physics of interactions of shocks with turbulent flow. In the future, we will study the effect of other perturbation modes that originate from convective shells. Also, the interaction of pre-collapse perturbations with the hydrodynamic instabilities in the post-shock region has to be treated in a more rigorous way, as in, e.g., \citet{Takahashi16}. This will be the subject of future studies.
\section*{Acknowledgements}
We thank B. M\"uller for carefully reading the manuscript and for many valuable comments that significantly improved the manuscript. We also thank T. Foglizzo, M. Hempel and A. L. Velikovich for useful discussions. This work is supported by the Ministry of Science, MEC (ENE2015-65852-C2-2-R) and Fundaci\'on Iberdrola (BINV-ua37crdy), Spain (for C. Huete), by ORAU grant at Nazarbayev University (for E. Abdikamalov), by Max-Planck/Princeton Center (MPPC) for Plasma Physics (NSF PHY-1144374) and a Schmidt Fellowship (for D. Radice). The computations were performed at the NULITS Linux cluster at Nazarbayev University. We thank S. Bubin for his contribution to set up the cluster.
\section{Introduction}
First described by \citet{Baade34} as the "transition of an ordinary star into a neutron star", core-collapse supernovae (CCSNe) are the powerful explosions of massive stars that occur at the end of their lives \citep[e.g.,][]{Bethe90}. Upon reaching its maximum mass, the iron core becomes unstable, initiating a collapse to a proto-neutron star (PNS). The shock wave launched at core bounce quickly loses its energy and stalls at a radius of $\sim 150\,\mathrm{km}$. In order to produce an explosion and leave behind a stable neutron star, the shock must be revived within a few hundred milliseconds and expel the stellar envelope \citep[e.g.,][]{Oconnor11,Ott11,Ugliano12}. Otherwise, a black hole (BH) forms \citep[e.g.,][]{Nadezhin80,Lovegrove13,Adams17,Kashiyama15}. The details of how this occurs remain unclear, constituting one of the longest-standing open questions in modern astrophysics \citep[see, e.g.,][for recent reviews]{Janka12, Foglizzo15, Burrows13, Mueller16b}.
A key ingredient for producing the explosion is the neutrino emission by the newly-born PNS, which deposits energy behind the shock and establishes a negative entropy gradient that drives vigorous neutrino-driven convection. Together with the standing-accretion shock instability (SASI), these multi-dimensional hydrodynamic effects create favorable conditions for shock revival \citep[e.g.,][]{Herant95, Burrows95, Janka96, Blondin03, Foglizzo06, Yamasaki06,Hanke12, Hanke13, Dolence13, Murphy13, Takiwaki14, Ott13, Abdikamalov15, Radice15, Radice16, Melson15a, Lentz15, Fernandez14, Fernandez15, Cardall15, Bruenn16, Roberts16}. If present, rapid rotation may facilitate explosion via the magneto-rotational mechanism \citep{Burrows07,Moesta14,Moesta15} \citep[see also][]{Takiwaki16,Summa17}.
\citet{Couch13} demonstrated that the perturbations arising from the turbulent convection in Si and O burning shells in CCSN progenitors may help to revive the shock. As the iron core collapses, the perturbations follow the core and accrete towards the center. Due to the converging geometry of the flow, the perturbations amplify significantly during collapse \citep{Kovalenko98,Lai00,Takahashi14}. Further amplification occurs at shock crossing \citep{Abdikamalov16}. Once in the post-shock region, the fluctuations contribute to the non-radial flow in the gain region, creating a more favorable condition for producing explosion \citep{Couch15a, Couch15b, Mueller15, Abdikamalov16, Takahashi16, Burrows16, Radice17}.
\citet{Mueller16} presented a 3D simulation of the last minutes of O shell burning in an $18M_\odot$ progenitor star. Prior to collapse, they observed vigorous convection with a Mach number of $\sim 0.1$ and a dominant angular wave number of $l=2$. A full 3D neutrino-hydrodynamics simulation of this model yielded a strong explosion after the accretion of the O shell through the shock, whereas in a model with artificially suppressed pre-collapse convection, no explosion was observed \citep{Mueller17}. The reduction of the critical (i.e., minimum) neutrino luminosity for producing explosion due to these perturbations was estimated to be $\sim 20\%$, which is roughly in agreement with the analytic predictions of \citet{Mueller16}. Recently, \citet{Collins17} investigated the properties of Si and O shell burning in a broad range of presupernova models with ZAMS masses between $9.45M_\odot$ and $35M_\odot$. They found that the progenitor models between $16M_\odot$ and $26M_\odot$ exhibit large scale convective motions with high Mach numbers in the O shells, which are favorable conditions for producing perturbation-aided neutrino-driven explosions \citep{Mueller15}. On the other hand, strong perturbations were rarely observed in the Si shells.
The emerging qualitative picture of how the progenitor asphericities impact the explosion condition is as follows. The convective vorticity waves distort the spherical isodensity surfaces of the progenitor star, creating Eulerian density perturbations at a given radius. When these density and vorticity perturbations encounter and cross the shock, they generate strong buoyancy-driven turbulence in the post-shock region, which helps to trigger an explosion \citep{Mueller15}.
In order to gain a full understanding of how these perturbations affect the explosion dynamics, it is necessary to understand the physics of shock-turbulence interaction, starting at linear order. With this premise, \citet{Abdikamalov16} studied the effect of entropy and vorticity perturbations using a linear perturbation theory known as the {\it linear interaction analysis} (LIA) \citep[e.g.,][]{Ribner53,Mahesh96}. These represent two of the three components of a generic turbulent flow, the third being acoustic waves \citep{Kovasznay53,Chu1958}. They found that the kinetic energy of these fluctuations increases by a factor of $\sim 2$ as they cross the shock. Assuming direct injection of this energy into the post-shock region, they estimated that these perturbations can reduce the critical neutrino luminosity for producing explosion by $\sim 12\%$. While this is an important finding, the physics of shock-turbulence interaction in CCSNe, even at the linear level, is not yet completely understood. As noted by \citet{Mueller17}, buoyancy plays a dominant role in generating post-shock turbulence. Moreover, the acoustic waves generated by infalling entropy and vorticity perturbations \citep{Kovalenko98, Foglizzo00, Mueller16b} will affect the shock dynamics and the post-shock flow. Finally, the impact of perturbations on the nuclear dissociation rate itself should also be taken into account. These aspects are missing from the analysis of \citet{Abdikamalov16}.
In this work, we investigate the interaction between accretion shocks and turbulent fluctuations in further detail. Our study is based on the solution of the linearized hydrodynamics equations in the post-shock region, which permits capturing the full temporal evolution of the shock-vorticity interaction. The mathematical formalism describing the post-shock perturbation flow is similar to that employed in theoretical works on Richtmyer-Meshkov-type flows \citep{Wouchuk2001,Wouchuk2001b,Cobos2014} and analogous to that used in canonical interactions of non-reactive and reactive shocks with turbulent flows \citep{Wouchuk2009,Huete2017}. This improved formalism allows us to take into account the perturbation of the nuclear dissociation itself, which was not included in \citet{Abdikamalov16}. As demonstrated below, this effect is found to be important for the turbulent kinetic energy amplification factor, with both the parametric trends and the asymptotic values being significantly affected.
This is the first in a series of two papers. The current paper is dedicated to the study of the interactions of accretion shocks with vorticity waves, while the second will study the interactions with density perturbations generated due to differential infall. The aim of this series of works is to establish in detail the linear physics of interaction of shocks with hydrodynamic turbulence in CCSNe.
The rest of the paper is organized as follows. Section~\ref{sec:problem} presents the problem formulation and the solution method. In Section~\ref{sec:analysis_wave}, an analysis of the interaction of shock waves with individual vorticity waves is presented, while Section~\ref{sec:analysis_field} focuses on the interaction of shocks with isotropic field of vorticity waves. The base-flow properties for the shock Mach number and the dissociation degree are computed in Section~\ref{sec:varepsilon}. In Section~\ref{sec:discussion}, we discuss the implication of our results on the explosion condition of core-collapse supernovae. Finally, in Section~\ref{sec:conclusion} we present our conclusions.
\section{Problem Formulation}
\label{sec:problem}
\subsection{Perturbation-free flow}
Let us consider an expanding shock wave placed at $r=R_{{\rm shock}}(t)$ that separates the in-falling flow ahead of the shock front, $r>R_{{\rm shock}}$, denoted with subscript 1, from the downstream post-shock flow, identified with subscript 2, in $r<R_{{\rm shock}}$ (see Fig. \ref{fig:scheme2D} for clarification). In the thin-shock limit, when the radius of the shock is much larger than the accretion-shock thickness, $R_{{\rm shock}}\gg l$, the variation of the different flow variables across the shock is readily obtained through the radial integration of the conservation equations, yielding
\begin{subequations}
\begin{alignat}{3}
&\rho_1 \left(u_1+\dot{R}_{{\rm shock}}\right) = \rho_2 u_2 \ ,\label{mass0}\\
&p_1 + \rho_1 \left(u_1+\dot{R}_{{\rm shock}}\right)^2 = p_2 + \rho_2 u_2^2\ ,\label{momentum0}\\
&e_1 +\frac{p_1}{\rho_1} +\frac{1}{2} \left(u_1+\dot{R}_{{\rm shock}}\right)^2 = e_2 +\frac{p_2}{\rho_2} + \frac{1}{2} u_2^2\ ,\label{energy0}%
\end{alignat}
\end{subequations}
for the mass, momentum and energy conservation equations, respectively. The symbols $u$, $\rho$, $p$ and $e$ refer to the bulk velocity, density, pressure, and internal energy of the gas, respectively. Notice that, for non-negligible accretion shock thicknesses, the mass equation \eqref{mass0} should include the term involving the divergence of the post-shock expanding gas.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figccsn.pdf}
\caption{Scheme of the accretion shock expanding through the in-falling mass and characteristic scales: shock radius $R_{{\rm shock}}$, shock thickness $l$, and characteristic perturbation wavelength $\lambda_c$.}
\label{fig:scheme2D}
\end{figure}
When the compressed flow, modeled as a perfect gas with the polytropic index $\gamma=4/3$, is affected by nuclear dissociation effects occurring in a thin layer right behind the shock, the variation of the internal energy can be computed as
\begin{equation}
e_1-e_2 =\frac{1}{\gamma-1}\frac{p_1}{\rho_1}-\frac{1}{\gamma-1}\frac{p_2}{\rho_2} + \Delta e_\mathrm{dis}
\end{equation}
with $\gamma$ assumed constant through the interaction process, and $\Delta e_\mathrm{dis}$ referring to the energy per unit mass employed in dissociating the nuclei.
Following \citet{Fernandez2009a,Fernandez2009b}, the nuclear dissociation energy can be scaled with the free-fall speed squared,
\begin{equation}
\Delta e_\mathrm{dis} = \frac{1}{2} \varepsilon \upsilon_\mathrm{FF}^2,
\label{Deltaedis}
\end{equation}
using the dimensionless nuclear dissociation parameter $\varepsilon$. As we show below in Section~\ref{sec:varepsilon}, $\varepsilon$ typically ranges from $0.2$ to $0.4$ in CCSN models. Assuming that the Bernoulli parameter is zero above the shock,
\begin{equation}
\upsilon_\mathrm{FF}^2 = \frac{2 G M}{R_{{\rm shock}}} = \frac{1}{2} u_1^2 + \frac{\gamma}{\gamma-1}\frac{p_1}{\rho_1}\ ,
\end{equation}
where $G$ is the gravitational constant and $M$ is the gravitating mass.
In the stalled-shock regime, $u_1/\dot{R}_{{\rm shock}}\gg1$, the Mach number $M_1 =u_1/a_1$, with $a_1=(\gamma p_1/\rho_1)^{1/2}$ defining the speed of sound upstream, is used to rewrite the nuclear dissociation energy as
\begin{equation}
\frac{\gamma^2-1}{2}\frac{\Delta e_\mathrm{dis}}{a_1^2} = \varepsilon \frac{\gamma+1}{2}\left(1+\frac{\gamma-1}{2}M_1^2\right).
\label{eq:varepsilon}
\end{equation}
Taking $\varepsilon$ and the Mach number as the independent parameters, the values of which may vary within the range established from numerical simulations of CCSNe (see Section~\ref{sec:varepsilon}), the fluid properties behind the shock are expressed in the form
\begin{equation}
C_2 = \frac{\rho_2}{\rho_1} = \frac{u_1}{u_2} = \frac{\left( \gamma + 1\right) M_1^2}{\left( \gamma - \kappa \right) M_1^2 + 1}\ ,
\label{R}
\end{equation}
and
\begin{equation}
P_2 =\frac{p_2}{\rho_1 u_1^2}=\frac{\gamma M_1^2(1+\kappa) +1}{\gamma(\gamma + 1) M_1^2}\ ,
\label{P}
\end{equation}
for post-shock density and pressure. The Mach number of the fluid particles leaving the shock is
\begin{equation}
M_2 = \frac{u_2}{a_2} = \left(\gamma C_2 P_2\right)^{-1/2} = \left[\frac{\left( \gamma - \kappa \right) M_1^2 + 1}{\gamma M_1^2(1+\kappa) +1}\right]^{1/2}\ ,
\label{M2}
\end{equation}
with the function
\begin{equation}
\kappa=\left[(1-M_1^{-2})^2+ \varepsilon(\gamma+1) \left(\gamma-1+2 M_1^{-2}\right)\right]^{1/2}
\label{kappa}
\end{equation}
accounting for the dimensionless endothermic parameter $\varepsilon$. For non-reacting shock waves, $\kappa=1-M_1^{-2}$, thereby reducing Eqs.~\eqref{R}-\eqref{M2} to the well-known regular Rankine-Hugoniot relationships.
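For illustration, the jump conditions \eqref{R}--\eqref{M2} can be evaluated with a short Python script (a minimal sketch; the helper name \texttt{post\_shock\_state} is ours, and the adiabatic limit $\varepsilon=0$ serves as a sanity check against the standard Rankine-Hugoniot values):
\begin{verbatim}
import numpy as np

gamma = 4.0 / 3.0  # polytropic index adopted throughout

def post_shock_state(M1, eps):
    """Return (C2, P2, M2) from eqs. (R), (P), (M2) and (kappa)."""
    kappa = np.sqrt((1.0 - M1**-2)**2
                    + eps * (gamma + 1.0) * (gamma - 1.0 + 2.0 * M1**-2))
    C2 = (gamma + 1.0) * M1**2 / ((gamma - kappa) * M1**2 + 1.0)
    P2 = (gamma * M1**2 * (1.0 + kappa) + 1.0) \
        / (gamma * (gamma + 1.0) * M1**2)
    M2 = 1.0 / np.sqrt(gamma * C2 * P2)
    return C2, P2, M2

print(post_shock_state(5.0, 0.0))  # adiabatic: C2 ~ 5.65, standard RH
print(post_shock_state(5.0, 0.4))  # dissociation: larger C2, smaller M2
\end{verbatim}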
The effect of nuclear dissociation on the post-shock flow density and pressure is easily analyzed through Fig. \ref{fig:RH}, with the final values provided by the intersection of the Rayleigh line (for a constant Mach number propagation) and the non-adiabatic Rankine-Hugoniot curve. It is found that higher pressures and densities are required downstream to sustain a shock with the same Mach number if endothermic transformations take place through the shock wave. The maximum energy that can be employed in the nuclear dissociation process is found in the extreme limit $1-\varepsilon\ll1$, which provides limiting conditions for the post-shock gas: the post-shock Mach number and temperature tend to zero, and the density tends to infinity. That is, all the kinetic and thermal energy of the in-falling gas is used in dissociating the nuclei. The corresponding Hugoniot curve collapses into the vertical axis and the finite values of pressure are given by the intersection with the Rayleigh lines.
\begin{figure}
\includegraphics[width=0.5\textwidth]{figRH.pdf}
\caption{Hugoniot curves and Rayleigh lines for several values of the dissociation energy $\varepsilon = 0,\ 0.2,\ 0.4$ and $0.6$.}
\label{fig:RH}
\end{figure}
Equations \eqref{R} and \eqref{P} are computed in Fig. \ref{fig:RP} as a function of the Mach number, $M_1$. Both $C_2$ and $P_2$ increase with $\varepsilon$ for the same value of $M_1$. Unlike regular detonations, where the chemical energy release does not depend on the shock intensity since the reaction is self-sustaining, the nuclear-dissociation degree does depend on the upstream Mach number. Thus, the function $\kappa$ approaches the value $\left[1+(\gamma^2-1)\varepsilon\right]^{1/2}$ in the strong-shock limit, $M_1\gg1$, thereby yielding
\begin{equation}
C_2|_{M_1\gg1} = \frac{\gamma + 1}{ \gamma - \left[1+\left(\gamma^2-1\right)\varepsilon\right]^{1/2}}
\label{RMgg1}
\end{equation}
and
\begin{equation}
P_2|_{M_1\gg1} = \frac{1+ \left[1+\left(\gamma^2-1\right)\varepsilon\right]^{1/2}}{\gamma +1}\ ,
\label{PMgg1}
\end{equation}
for the post-shock density and pressure values, in agreement with Fig. \ref{fig:RP}. The mass-compression ratio $C_2$ is found to diverge, and $P_2$ approaches unity in the double limit $M_1\gg1$, $1-\varepsilon\ll1$.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figC2.pdf}\\
\includegraphics[width=0.47\textwidth]{figP2.pdf}
\caption{Mass compression ratio $C_2$ (top) and pressure amplification $P_2$ (bottom) as a function of the incident Mach number $M_1$ for $\varepsilon = 0,\ 0.1,\ 0.2,\ 0.3$ and $0.4$.}
\label{fig:RP}
\end{figure}
\subsection{Perturbation problem}
\label{sec:perturb_problem}
The upstream and the downstream linear disturbances can be characterized in terms of acoustic, entropy and vortical modes. In the in-falling gas reference frame $(x_1,y_1)$, the upstream mono-frequency perturbation is determined by the divergence-free velocity perturbation wave, namely
\begin{equation}
\begin{aligned}
&\bar{u}_1 \left( x_1, y_1 \right)= \frac{u_1-\langle u_1 \rangle}{ \langle a_2 \rangle}
= \hat{u}_1 \cos\left(k_x x_1 \right)\cos\left(k_y y_1 \right), \\
&\bar{v}_1 \left( x_1, y_1 \right)= \frac{v_1-\langle v_1 \rangle}{ \langle a_2 \rangle}
= \hat{u}_1 \frac{k_x}{k_y} \sin\left(k_x x_1 \right)\sin\left(k_y y_1 \right),
\label{u1v1}%
\end{aligned}
\end{equation}
for the streamwise and crosswise perturbations, respectively. The brackets denote the time-averaged mean value of the flow variable, which is effectively null for the upstream velocity in the stagnant gas reference frame. The dimensionless factor $\hat{u}_1$ stands for the amplitude of the upstream velocity disturbances and $\vec{k}=(k_x,k_y)$ is the upstream wave number vector. The associated non-dimensional vorticity wave, $\bar{\omega}_1 \left( x_1, y_1 \right) =\partial \bar{v}_1 /\partial(k_y x_1) -\partial \bar{u}_1/\partial(k_y y_1)$, is
\begin{equation}
\bar{\omega}_1 \left( x_1, y_1 \right) = \hat{u}_1 \left(1+\frac{k_x^2}{k_y^2}\right) \cos\left(k_x x_1 \right)\sin\left(k_y y_1 \right)\ .
\label{omega1}
\end{equation}
The interaction of the CCSN shock with the upstream shear wave, characterized by the angle $\theta = \tan^{-1}(k_y/k_x)$, is sketched in Fig. \ref{fig:scheme}. As a result of the interaction, the shock ripples and the fluid downstream is correspondingly altered with acoustic and entropic-vorticity waves, the former traveling at the speed of sound downstream $a_2$ and the latter moving with the fluid particles.
\begin{figure}
\includegraphics[width=0.5\textwidth]{figscheme.pdf}
\caption{Scheme of the shock shear-wave interaction in the compressed gas reference frame.}
\label{fig:scheme}
\end{figure}
For the perturbed accretion shock to be seen as a discontinuity front, the characteristic perturbation wavelength $\lambda_c\sim k_y^{-1}$ must be much larger than the accretion-shock thickness $l$, including the dissociation layer in it (see Fig. \ref{fig:scheme2D}). Besides, to treat the base-flow variables as constant, the shock and the in-falling gas must be in the nearly-steady regime, so that the variations of the base-flow properties within the characteristic wavelength can be neglected. These two conditions hold simultaneously for perturbation wavelengths much smaller than the scale over which the expanding shock evolves and much larger than the shock thickness, which define the limits of validity of the model in terms of spatial and temporal scales: $k_y R_{{\rm shock}} \gg 1\gg k_y l$ and $ \dot{R}_{{\rm shock}}\ll a_2 \dot{\xi}_s$, respectively. On the other hand, the planar shock assumption, $k_y R_{{\rm shock}} \gg 1$, is not suitable for perturbations characterized by low-mode numbers such as SASI \citep{Blondin03,Fernandez15}. For such modes, spherical geometry is more suitable \citep{Foglizzo09}, which has been employed by \citet{Takahashi16} to study the influence of pre-collapse perturbations on the hydrodynamic eigenmodes in the gain region.
For the analysis it is convenient to use a reference frame moving with the velocity of the post-shock flow. The solution is to be described in terms of the dimensionless coordinates $x = k_y x_2$ and $y = k_y y_2$ and the dimensionless time $\tau = a_2 k_y t $.
The nondimensional values for pressure, density and velocity perturbations downstream, defined as
\begin{equation}
\begin{aligned}
&\bar{p} = \frac{p-\langle p_2 \rangle}{\gamma \langle p_2 \rangle} \ ,\quad & &\bar{\rho}= \frac{\rho-\langle \rho_2 \rangle}{\langle \rho_2 \rangle}\ , \\
&\bar{u}= \frac{u-\langle u_2 \rangle}{\langle a_2 \rangle} \ ,\quad &
&\bar{v}= \frac{v-\langle v_2 \rangle}{\langle a_2 \rangle}\ ,
\label{perturb}
\end{aligned}
\end{equation}
respectively, are used to write the adiabatic Euler equations governing the post-shock flow. Anticipating that $\bar{p}$ and $\bar{v}$ are always proportional to $\cos(y)$ and $\sin(y)$, respectively, the conservation equations for mass, $x$-momentum, $y$-momentum and energy, namely
\begin{equation}
\begin{aligned}
&\frac{\partial \bar{\rho}}{\partial \tau} +\frac{\partial \bar{u}}{\partial x}+\bar{v}=0 \ ,\quad &
&\frac{\partial \bar{u}}{\partial \tau} +\frac{\partial \bar{p}}{\partial x}=0\ ,\\
&\frac{\partial \bar{v}}{\partial \tau} -\bar{p}=0 \ ,\quad &
&\frac{\partial \bar{p}}{\partial \tau} = \frac{\partial \bar{\rho}}{\partial \tau}\ ,
\label{eqperturb}
\end{aligned}
\end{equation}
respectively, are combined for $\bar{p}$ to yield
\begin{equation}
\frac{\partial^2 \bar{p}}{\partial \tau^2}= \frac{\partial^2 \bar{p}}{\partial x^2}-\bar{p}
\label{sonicwave}
\end{equation}
as the two-dimensional periodically-symmetric wave equation, which governs the perturbation field behind the shock.
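For clarity, the combination proceeds by differentiating the mass equation with respect to $\tau$, using the energy equation to trade $\bar{\rho}$ for $\bar{p}$, and inserting the two momentum equations:
\begin{equation}
\frac{\partial^2 \bar{p}}{\partial \tau^2}=\frac{\partial^2 \bar{\rho}}{\partial \tau^2}
=-\frac{\partial}{\partial x}\left(\frac{\partial \bar{u}}{\partial \tau}\right)-\frac{\partial \bar{v}}{\partial \tau}
=\frac{\partial^2 \bar{p}}{\partial x^2}-\bar{p}\ ,
\end{equation}
where $\partial\bar{u}/\partial\tau=-\partial\bar{p}/\partial x$ and $\partial\bar{v}/\partial\tau=\bar{p}$ have been used in the last step.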
The problem reduces to that of integrating the linearized Euler equations, or equivalently the wave equation \eqref{sonicwave}, for $\tau\geq0$ and within the domain delimited by the leading reflected sonic wave traveling backwards, $x= -\tau$ and the shock front moving upwards $x= M_2\tau$. One boundary condition is provided by the isolated-shock assumption, which translates into not considering the effect of the acoustic waves reaching the shock front from behind, in consonance with the large radius limit, $R_{{\rm shock}} k_y \gg 1$. On the other hand, the boundary condition at the CCSN shock is determined by the linearized Rankine-Hugoniot relationships,
\begin{subequations}
\begin{alignat}{3}
&\left(C_2-1\right)\dot{\xi}_s =C_2\bar{u}_s-M_2 C_2\bar{\rho}_s-\bar{u}_1 \ , \label{massRH}\\
&\bar{p}_s = 2M_2\left(\bar{u}_s-\bar{u}_1 \right)-M_2^2\bar{\rho}_s\ , \label{xmomRH}\\
&M_1^2 M_2^2\bar{\rho}_s = \Pi_s\bar{p}_s-\Delta_s \left(\dot{\xi}_s-\bar{u}_1\right)\ ,
\label{eneRH}\\
&\bar{v}_s = M_2 \left(C_2-1\right)\xi_s+\bar{v}_1\ ,\label{tanRH}
\end{alignat}
\end{subequations}
with $\dot{\xi}_s$ denoting the temporal derivative of the dimensionless shock ripple position $\xi_s = k_y\left( x_{1,s}-u_1 t\right)$, as depicted in Fig. \ref{fig:scheme}.
The energy equation \eqref{eneRH}, which involves the functions
\begin{equation}
\Pi_s = \frac{M_1^2\left[1 + M_1^2\left(1-\kappa\right)\right]^2}{\left(M_1^2+1\right)^2-M_1^4\kappa^2}
\label{Pis}
\end{equation}
and
\begin{equation}
\Delta_s = \varepsilon\frac{2 M_2^3 M_1^6\left(\gamma-1 \right) \left[1 + M_1^2\left(1-\kappa\right)\right]}{\left(M_1^2+1\right)^2-M_1^4\kappa^2}\ ,
\label{Deltas}
\end{equation}
distinguishes regular adiabatic shocks from reacting shocks like detonations or nuclear-dissociating shocks.
In the previous work \citep{Abdikamalov16}, the coefficients accompanying the linear perturbations in the linear energy equation \eqref{eneRH} were the same as those found in a perturbed adiabatic shock ($\Pi_s=1$ and $\Delta_s=0$), although the values of the base-flow properties, namely $M_2$ and $C_2$, were accordingly modified by nuclear dissociation effects. How the nuclear-dissociation degree is affected by the perturbations, and how that modification ultimately acts upon the downstream flow variables, is incorporated in this model through the coefficients $\Pi_s$ and $\Delta_s$. In this sense, the present analysis consistently accounts for the effect of $\varepsilon$ in both zero-order and first-order flow variables.
The value of $\Pi_s$ is positive when the dissociation energy is sufficiently low, that is $\kappa<1+M_1^{-2}$. On the other hand, when the dissociation energy is sufficiently high, the value of $\Pi_s$ becomes negative, thereby reversing the relationship between density and pressure perturbations in \eqref{eneRH}. Since the degree of dissociation depends on the shock strength, the term involving the function $\Delta_s$ in \eqref{eneRH} is proportional to the incident Mach number perturbation $\delta M_1 = \left(\dot{\xi}_s-\bar{u}_1 \right)M_1/(M_2 C_2)$. The value of $\Delta_s$ is found to be positive for $\varepsilon > 0$, as follows directly from \eqref{Deltas} upon noting that $1+M_1^2(1-\kappa)$ is a factor of the denominator. It is worth commenting that the case of exothermic detonations is significantly different, since the second term on the right-hand side of \eqref{eneRH} vanishes \citep{Huete2013,Huete2017}. This is so because the total heat release, generated by the combustion process behind the shock, does not depend on the shock intensity perturbation, as it is provided by self-sustained reactions. Once the reaction is triggered, it releases all the thermonuclear (or chemical) energy.
Algebraic manipulation of \eqref{massRH}-\eqref{tanRH} is carried out to write one of the two equations for the shock boundary condition involving $\xi_s$ and $\bar{p}_s$, that is
\begin{equation}
\frac{d \xi_s}{d \tau} = \sigma_a \bar{p}_s+\hat{u}_1\cos\left(\frac{k_x}{k_y}C_2 M_2 \tau\right)\ ,
\label{xis}
\end{equation}
with the factor accompanying the pressure perturbation being
\begin{equation}
\sigma_a = \frac{C_2\left(M_1^2-\Pi_s\right)}{2 M_2 M_1^2 \left(C_2-1\right)+C_2\Delta_s}\ .
\label{As}
\end{equation}
Similarly, the material derivative behind the shock, $\partial/(\partial \tau) + M_2\partial/(\partial x)$, of the streamwise velocity perturbation $\bar{u}_s= \sigma_b \bar{p}_s+\bar{u}_1$, with
\begin{equation}
\sigma_b = \frac{M_1^2+\Pi_s+\Delta_s \sigma_a}{2 M_2 M_1^2}\ ,
\label{Bs}
\end{equation}
is used to provide
\begin{equation}
\begin{aligned}
&\left(\sigma_b +M_2 \right)\frac{\partial \bar{p}_s}{\partial \tau} +\left.\left(\sigma_b M_2+ 1\right)\frac{\partial \bar{p}}{\partial x}\right|_s=-M_2^2 \left(C_2- 1\right)\xi_s \\
&+\frac{k_x}{k_y}M_2\left(C_2-1\right)\hat{u}_1 \sin\left(\frac{k_x}{k_y}C_2 M_2 \tau\right)
\label{ps}
\end{aligned}
\end{equation}
as the second equation that, together with \eqref{xis}, completes the shock boundary condition for the functions $\xi_s$ and $\bar{p}_s$.
The coefficients $\sigma_a$ and $\sigma_b$ are positive for any combination of parameters $M_1$ and $\varepsilon$. In the strong shock limit for $\varepsilon>0$, the value of $\sigma_a$ approaches zero as $\sigma_a|_{M_1\gg1}\sim M_1^{-2}$, while $\sigma_b$ reaches a constant value determined by the inverse of the post-shock Mach number, namely $\sigma_b|_{M_1\gg1}=M_2^{-1}|_{M_1\gg1}$.
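These trends are easy to verify numerically; a minimal sketch (function names are ours; the jump conditions of the earlier sketch are inlined for self-containedness) reads
\begin{verbatim}
import numpy as np

gamma = 4.0 / 3.0

def sigma_coeffs(M1, eps):
    """sigma_a, sigma_b of eqs. (As) and (Bs), plus M2 for reference."""
    kappa = np.sqrt((1.0 - M1**-2)**2
                    + eps * (gamma + 1.0) * (gamma - 1.0 + 2.0 * M1**-2))
    C2 = (gamma + 1.0) * M1**2 / ((gamma - kappa) * M1**2 + 1.0)
    P2 = (gamma * M1**2 * (1.0 + kappa) + 1.0) \
        / (gamma * (gamma + 1.0) * M1**2)
    M2 = 1.0 / np.sqrt(gamma * C2 * P2)
    den = (M1**2 + 1.0)**2 - M1**4 * kappa**2
    Pi_s = M1**2 * (1.0 + M1**2 * (1.0 - kappa))**2 / den
    Delta_s = 2.0 * eps * M2**3 * M1**6 * (gamma - 1.0) \
        * (1.0 + M1**2 * (1.0 - kappa)) / den
    sigma_a = C2 * (M1**2 - Pi_s) \
        / (2.0 * M2 * M1**2 * (C2 - 1.0) + C2 * Delta_s)
    sigma_b = (M1**2 + Pi_s + Delta_s * sigma_a) / (2.0 * M2 * M1**2)
    return sigma_a, sigma_b, M2

for M1 in (10.0, 30.0, 100.0):       # strong-shock trend at eps = 0.4
    sa, sb, M2 = sigma_coeffs(M1, eps=0.4)
    print(M1, sa, sb, 1.0 / M2)      # sigma_a -> 0, sigma_b -> 1/M2
\end{verbatim}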
The initial condition of the shock perturbations is determined by knowing that the shock is initially planar, so that $\xi_s= \bar{v}_s = 0$. Correspondingly, the initial perturbations of pressure and streamwise velocity must satisfy $\bar{u}_s +\bar{p}_s=0$, as dictated by the first acoustic wave emitted backwards; combined with $\bar{u}_s= \sigma_b \bar{p}_s+\bar{u}_1$ evaluated at $\tau=0$, this gives
\begin{equation}
\bar{p}_{s0}= -\frac{1}{\sigma_b+1} \hat{u}_1
\label{ps0}
\end{equation}
for the initial shock pressure perturbation.
\section{Linear interaction analysis with monochromatic vorticity perturbations}
\label{sec:analysis_wave}
\subsection{Shock pressure and corrugation temporal evolution}
The asymptotic behavior of the corrugated shock can be inferred from the Laplace transform expression provided in \eqref{PsLaplace}, with the imaginary poles in the dispersion relationship
\begin{equation}
\left(s\sqrt{s^2+1}+\sigma_b s^2 + \sigma_c\right)\left(s^2+\zeta^2\right)=0\ ,
\label{denominator}
\end{equation}
indicating the possibility of asymptotic harmonic oscillations. The left-hand product in \eqref{denominator} accounts for the shock response in the absence of continuous perturbations, whereas the right-hand product refers to the induced oscillations from the non-homogeneous upstream flow. The characteristic dimensionless frequency $\zeta$ is provided in \eqref{zeta}. Notice that the term $\sqrt{s^2+1}$ may change sign if the pole lies in the lower half of the complex plane.
It has been found that the equation $s\sqrt{s^2+1}+\sigma_b s^2 + \sigma_c =0$ has no roots, indicating that shock pressure perturbations decay with time in the absence of continuous excitation. Generally, the perturbations decay in time like $\tau^{-3/2}$. This decay rate changes, however, for infinitely strong shocks with $\varepsilon=0$, since $\sigma_b=\sigma_c$, yielding $\tau^{-1/2}$ as the law describing the approach to the permanent solution \citep{Fraley1986}.
We are first interested in the long-time response of the accretion shock to mono-frequency perturbations. As $\sigma_c<\sigma_b$, the shock will oscillate only with the excitation frequency coming from upstream perturbations, $\omega_s = C_2 M_2 k_x/k_y$, thereby yielding an asymptotic response qualitatively similar to the one found for adiabatic shock waves \citep{Wouchuk2009}
\begin{equation}
\bar{p}_{s}(\tau\gg 1) = \left \{ \begin{array}{ll}
\mathcal{P}_{lr} \cos\left(\omega_s \tau \right) + \mathcal{P}_{li} \sin\left( \omega_s \tau \right) & ,\zeta \leq 1 \\
\mathcal{P}_{s} \cos\left(\omega_s \tau \right) & ,\zeta \geq 1
\end{array} \right.
\label{pstau}
\end{equation}
except for the coefficients defining the amplitudes, which are provided by Eqs.~\eqref{Plr}-\eqref{Ps} in Appendix~\ref{App1}. As the planar infinitely-thin assumption does not provide any length scale, the shock oscillation period will be proportional to the upstream characteristic length. In dimensional variables, the time between pressure peaks is given by $t_{\text{per}}=\lambda_x/(2 \pi a_1 M_1)$.
As in previous LIA works \citep{Wouchuk2009,Huete2013,Huete2017}, the pressure perturbation field splits into two distinguished regimes depending on the dimensionless frequency
\begin{equation}
\zeta = \frac{k_x}{k_y}\frac{M_2 C_2}{\sqrt{1-M_2^2}} = \frac{\omega_s}{\sqrt{1-M_2^2}}\ .
\label{zeta}
\end{equation}
In the long-wavelength (low-frequency) regime, $\zeta <1$, the acoustic perturbation right behind the shock is composed of two orthogonal contributions with amplitudes $\mathcal{P}_{lr}$ and $\mathcal{P}_{li}$, respectively. In this range, the amplitude of the pressure disturbances exponentially decays with the distance from the shock front. On the other hand, in the short-wavelength (high-frequency) regime, $\zeta >1$, the acoustic radiation travels in the form of constant-amplitude waves. The critical value $\zeta =1$ then indicates the condition at which stable sonic perturbations downstream move parallel to the shock front in the shock reference frame.
As shown in equation~\eqref{Besselr} of Appendix~\ref{App1}, the post-shock pressure perturbation field can be computed as a linear combination of Bessel functions. In particular, right behind the shock, we have
\begin{equation}
\bar{p}_s(\tau)=\sum_{\nu=0}^{\infty}N_{\nu}J_{\nu}\left(r=\tau \sqrt{1-M_2^2}\right) \ ,
\label{psBesseltau}
\end{equation}
with the corresponding coefficients for $N_{\nu}$, provided in \eqref{Dm}, being obtained through the Laplace transform \eqref{Laplace} and the isolated-shock boundary condition. The temporal evolution of the shock ripple $\xi_s(\tau)$ is readily obtained through the integration of eq.~\eqref{xis}, whose solution can be expressed in terms of hypergeometric functions, as shown in \eqref{xisbeseel}. Akin to the shock pressure, the asymptotic long-time response is written in terms of harmonic functions, as provided in \eqref{xisasym}.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figpstau.pdf} \vspace{2mm}
\\ \includegraphics[width=0.47\textwidth]{figxitau.pdf}
\caption{Shock pressure perturbation (top) and shock ripple amplitude (bottom) as a function of $\tau$ for $M_1=5$, $\zeta=1.2$, and for $\varepsilon = 0.4$. Solid: transient evolution \eqref{psBesseltau} and \eqref{xisbeseel}. Dashed: asymptotic long-time equations \eqref{pstau} and \eqref{xisasym}.}
\label{fig:pstau}
\end{figure}
The functions $\bar{p}_s(\tau)$ and $\xi_s(\tau)$ are computed in Fig. \ref{fig:pstau} as a function of $\tau$ for $M_1=5$, $\zeta=1.2$, and $\varepsilon=0.4$. Both the transient (solid line) and the long-time response (dashed line) are shown. The shock transient evolution is found to agree fairly well with the asymptotic expressions provided in \eqref{pstau} and \eqref{xisasym}, thereby confirming that the asymptotic functions can be used to compute the interaction with an isotropic spectrum without significant loss of accuracy.
\subsection{Downstream flow variables}
The spatial distribution of the flow variables, namely pressure, density and velocity, is derived from the shock pressure evolution computed previously. For example, pressure perturbations downstream can be written in terms of Bessel functions as
\begin{equation}
\bar{p}\left(x,\tau\right)=\sum_{\nu=0}^{\infty}N_{\nu}J_{\nu}\left(\sqrt{\tau^2-x^2}\right)e^{-\nu\left[\tanh^{-1}\left(M_2\right)-\tanh^{-1}\left(\frac{x}{\tau}\right)\right]}\ .
\end{equation}
As the asymptotic expression \eqref{pstau} is found to reproduce accurately the shock pressure evolution, the asymptotic long-time response of the shock is employed to compute the post-shock disturbances.
Downstream linear perturbations are conveniently split into entropic-vortical, conveyed by the fluid particles, and traveling acoustic modes \citep{Kovasznay53,Chu1958}, namely
\begin{equation}
\begin{aligned}
&\bar{p}(x,\tau)= \bar{p}_a(x,\tau)\ ,&\quad &\bar{\rho}(x,\tau)= \bar{\rho}_a(x,\tau) + \bar{\rho}_e(x) \\
&\bar{u}(x,\tau)= \bar{u}_a(x,\tau)+\bar{u}_r(x)\ ,&\quad &\bar{v}(x,\tau)= \bar{v}_a(x,\tau) + \bar{v}_r(x)\ .\nonumber
\label{kova}
\end{aligned}
\end{equation}
In the absence of diffusive effects, the amplitudes of the entropic-solenoidal perturbations are given by their corresponding values generated right behind the shock, and they are steady in a reference frame co-moving with the fluid particles. Acoustic disturbances, on the other hand, refer to traveling sonic waves that escape from the shock when $\zeta>1$.
The acoustic radiation condition is then determined by $\omega_s>(1-M_2^2)^{1/2}$, a condition that depends on the upstream shear wave, since $\zeta\in\left[0,\infty\right)$ depends on the relative properties of the perturbation field ahead of the shock. Small values of $\zeta$ represent the interaction with upstream vortices highly stretched in the streamwise direction, $\lambda_x\gg\lambda_y$, while the opposite is true for $\zeta\gg 1$. In the latter low mode-number scenario ($\lambda_x\ll\lambda_y$), the problem reduces to the one-dimensional interaction of the shock with radial perturbation waves. Such stability analysis has been developed by \citet{Velikovich2016} for the classical Noh's configuration in adiabatic conditions. The asymptotic far-field solution for the acoustic disturbances is also written in terms of harmonic functions, representing stable traveling fronts that occur only when the shock oscillation frequency is sufficiently high, $\zeta > 1$.
Traveling sonic perturbations are functions of $(\omega_{a} \tau - k_{a} x)$, with the frequency $\omega_{a}$ and the wave number $k_{a}$ being determined by the post-shock adiabatic dispersion relationship $\omega_{a}^2=k_{a}^2+1$, and the shock oscillation frequency $\omega_{s}= \omega_{a} - M_2 k_{a}$, yielding
\begin{equation}
\omega_{a}=\frac{\omega_s - M_2\sqrt{\omega_s^2-1+M_2^2}}{1-M_2^2}
\label{omegaa}
\end{equation}
and
\begin{equation}
k_{a}=\frac{\omega_s M_2 - \sqrt{\omega_s^2-1+M_2^2}}{1-M_2^2}\ ,
\label{ka}
\end{equation}
respectively, which depend upon the shock frequency $\omega_s$. It is straightforward to see that $k_a$ can be either negative or positive, the former representing sonic waves propagating downwards in the compressed gas reference frame, and the latter denoting waves moving upwards, although never catching up with the shock wave, as dictated by the isolated-front boundary condition. The shock oscillation frequency $\omega_s=1$ marks the standing acoustic wave regime, therefore separating the left-traveling solution $\omega_s>1$ from the right-traveling regime $(1-M_2^2)^{1/2}<\omega_s<1$ in the compressed gas reference frame. When the shock oscillates with two frequencies, sonic fronts may run upstream and downstream simultaneously.
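Equations \eqref{omegaa} and \eqref{ka} can be checked against their defining relations with a few lines of Python (a minimal sketch; the function name is ours):
\begin{verbatim}
import math

def acoustic_branch(omega_s, M2):
    """omega_a and k_a from eqs. (omegaa)-(ka), radiating regime only."""
    root = math.sqrt(omega_s**2 - 1.0 + M2**2)
    omega_a = (omega_s - M2 * root) / (1.0 - M2**2)
    k_a = (omega_s * M2 - root) / (1.0 - M2**2)
    return omega_a, k_a

omega_s, M2 = 1.5, 0.4                    # illustrative values
omega_a, k_a = acoustic_branch(omega_s, M2)
assert abs(omega_a**2 - (k_a**2 + 1.0)) < 1e-12   # dispersion relation
assert abs(omega_a - M2 * k_a - omega_s) < 1e-12  # shock-frequency match
print(k_a)  # negative here: the wave propagates away from the shock
\end{verbatim}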
The asymptotic pressure and isentropic density perturbations, far behind the shock, equal
\begin{equation}
\bar{p}(x,\tau)=\bar{\rho}_a(x,\tau)= \mathcal{P}_s \cos\left(\omega_a \tau-k_a x \right)\ ,
\label{pa}
\end{equation}
with $\mathcal{P}_s$ standing for the amplitude of the shock pressure disturbances in the short-wavelength regime. The amplitudes of the associated acoustic-velocity perturbations are proportional to the pressure changes through the functions \eqref{UVa}, provided in Appendix \ref{App1}. The corresponding isentropic temperature variations induced by the acoustic-shock radiation are simply $\bar{T}_a(x,\tau) = \left(\gamma-1\right) \bar{p}(x,\tau)$.
The entropic contribution to the density perturbations $\bar{\rho}_e$ is computed from Rankine-Hugoniot relations~\eqref{massRH}-\eqref{eneRH}, after subtracting the acoustic part. It is readily seen that
\begin{equation}
\bar{\rho}_e(x)=\left(\mathcal{D}-1\right) \bar{p}_s\left(\tau=\frac{x}{M_2}\right)
\label{dene}
\end{equation}
with $\mathcal{D} = \left(2 M_2 \sigma_b -1\right)/M_2^2$ being the amplitude of the density perturbations behind the shock. As easily inferred from Fig.~\ref{fig:RH}, the value of $\mathcal{D}$ is found to be positive, and it reaches a constant value in the strong-shock limit, $\mathcal{D}|_{M_1\gg1}=M_2^{-2}$, as follows from $\sigma_b|_{M_1\gg1}=M_2^{-1}$. The corresponding isobaric temperature perturbation, scaled with the base-flow temperature, is the function $\bar{T}_e(x) = -\bar{\rho}_e(x)= -(\mathcal{D}-1) \bar{p}_s(\tau=x/M_2)$.
Analogously, dimensionless vorticity disturbances are determined by
\begin{equation}
\bar{\omega}(x)=\frac{\partial \bar{v}}{\partial x}-\frac{\partial \bar{u}}{\partial y}=\Omega_2 \bar{p}_s\left(\tau=\frac{x}{M_2}\right) + \Omega_1 \cos\left(\frac{\omega_s}{M_2} x \right)
\label{vort}
\end{equation}
with
\begin{equation}
\Omega_1=C_2\left[1+\left(\frac{k_x}{k_y}\right)^2\right]=C_2\left(1+\frac{1-M_2^2}{C_2^2 M_2^2}\zeta^2\right)
\label{Omega1}
\end{equation}
indicating the contribution resulting from the one-dimensional compression effect, i.e., the shrinking of the vortices by the overall mass compression ratio, and
\begin{equation}
\Omega_2=\frac{M_2 \left(C_2-1\right) \sigma_a+ \sigma_b M_2 -1}{M_2}
\label{Omega2}
\end{equation}
referring to the contribution induced by the shock rippling, proportional to the shock pressure perturbations, a two-dimensional effect.
The rotational contribution for the velocity disturbances is readily computed through the vorticity field, by knowing that rotational perturbations are steady and isobaric in the linear-inviscid approach. The relationships
\begin{equation}
\bar{\omega}(x)=-\frac{\partial^2 \bar{u}_r}{\partial x^2}+\bar{u}_r \ ,\ \bar{v}_r(x)=-\frac{\partial \bar{u}_r}{\partial x}
\label{urot}
\end{equation}
are then employed to write the asymptotic longitudinal and transverse rotational-velocity distributions, provided in eqs.~\eqref{urasym} and \eqref{vrasym}.
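To make the procedure concrete, for a harmonic vorticity contribution $\bar{\omega}=\Omega\cos(qx)$ with $q=\omega_s/M_2$, eqs.~\eqref{urot} admit the particular solution
\begin{equation}
\bar{u}_r(x)=\frac{\Omega}{1+q^2}\cos\left(qx\right)\ ,\qquad \bar{v}_r(x)=\frac{\Omega\, q}{1+q^2}\sin\left(qx\right)\ ,
\end{equation}
with $\Omega$ denoting the corresponding vorticity amplitude; the complete expressions, including the shock-ripple contribution, are those given in eqs.~\eqref{urasym} and \eqref{vrasym}.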
\begin{figure}
\includegraphics[width=0.47\textwidth]{figdenrot.pdf}
\caption{Two-dimensional vector field plot for rotational-velocity perturbations superposed to iso-contours of entropic-density disturbances for a shock wave with $M_1=5$, $\varepsilon =0.4$, and $\zeta=1.2$.}
\label{fig:voren}
\end{figure}
The asymptotic expressions for the rotational-velocity and entropic-density perturbations are computed in Fig.~\ref{fig:voren} for the same conditions as in Fig. \ref{fig:pstau}. Velocity perturbations are displayed in a two-dimensional vector field, with the length of the vectors being scaled between the minimum and maximum velocity amplitudes, $0.45$ and $1.9$, respectively. The transverse component of the velocity perturbations is found to be much greater than the longitudinal contribution. The spatial frequency modulation, given by $\omega_s/M_2$, is clearly distinguished. The amplitude of the rotational perturbations depends on the incident angle $\theta$, as shown in Fig.~\ref{fig:UV} for $M_1=5$. This dependence is later used to account for the interaction with a whole spectrum of vorticity waves, with $\theta$ ranging from $0$ to $\pi$, upon consideration of the isotropic probability distribution.
Superposed on the vector field, the entropic-density disturbances are displayed in a contour plot in Fig.~\ref{fig:voren}. The centers of the eddies and the peaks of the density field are shifted by $\pi/2$ in the lateral coordinate, as the former are proportional to $\sin(y)$ and the latter to $\cos(y)$. Along the streamwise direction, the peak values of density and rotational perturbations are in phase for $\zeta>1$, as both periodic distributions are proportional to $\cos{\left(\omega_s/M_2\, x\right)}$. There exists a spatial shift between the rotational and entropic modes, $\Delta \phi=\phi_r-\phi_e$, for $\zeta<1$, given by the contribution of the orthogonal components, $\tan \phi_r =\Omega_2\mathcal{P}_{li} /(\Omega_2\mathcal{P}_{lr}+\Omega_1)$ and $\tan \phi_e =\mathcal{P}_{li} /\mathcal{P}_{lr}$.
\section{Linear interaction analysis with 3D isotropic vorticity perturbations}
\label{sec:analysis_field}
\subsection{Turbulent kinetic energy}
\label{sec:tke}
The three-dimensional upstream flow is assumed to be homogeneous and isotropic. Therefore, the amplitude of the incident shear wave $\hat{u}_1$ depends exclusively on the wave-number amplitude $|\vec{k}|=k$ as $\vec{k}$ is uniformly distributed over the unit sphere. The three-dimensional problem is conveniently formulated in spherical polar coordinates, so the upstream velocity field $(\bar{u}_1,\bar{v}_1,\bar{w}_1)=\hat{u}_1 (\sin \theta \sin\varphi,\cos \theta \sin\varphi,\cos\varphi )$ and the associated wave-number vector is $\vec{k}=k (\cos \theta ,-\sin\theta,0 )$. The interaction with the whole spectrum of perturbations is carried out by direct superposition of linear perturbations \citep{Batchelor1953}. The average upstream velocity perturbation is
\begin{equation}
\langle\bar{u}_1^2\rangle= \int_{k^3} |\bar{u}_1|^2{\rm d}k^3 = \frac{8\pi}{3} \int_0^{\infty}\hat{u}_1^2(k)k^2{\rm d}k\ ,
\label{uw3D}
\end{equation}
\begin{equation}
\langle\bar{v}_1^2\rangle= \langle\bar{w}_1^2\rangle= \int_{k^3} |\bar{v}_1|^2{\rm d}k^3 = \frac{2\pi}{3} \int_0^{\infty}\hat{u}_1^2(k)k^2{\rm d}k
\label{vw3D}
\end{equation}
so the corresponding turbulent kinetic energy (TKE) is computed as
\begin{equation}
\text{TKE}_1 = \frac{1}{2} \left( \langle\bar{u}_1^2 \rangle + \langle \bar{v}_1^2 \rangle + \langle \bar{w}_1^2 \rangle\right)= 2\pi\int_0^{\infty} \hat{u}_1^2(k) k^2 {\rm d}k
\label{TKEo}
\end{equation}
with $\hat{u}_1(k)$ representing the isotropic energy spectrum, a function of the wave-number modulus $k$ only.
The problem is further simplified by reducing the three-dimensional geometry into an equivalent two-dimensional case that accounts for the effect of vorticity perturbations that are parallel or perpendicular to the shock propagation velocity. After some straightforward algebra, the amplification ratio across the shock wave is
\begin{equation}
K= \frac{\text{TKE}_2}{\text{TKE}_1} = \frac{1}{2} \int_0^{\pi/2} \left(\bar{u}^2 + \bar{v}^2 \right)\sin^3\theta {\rm d}\theta + \frac{1}{2}
\label{K3Dtheta}
\end{equation}
which is conveniently rewritten in terms of the integration variable $\zeta$ as
\begin{equation}
\begin{aligned}
K=\frac{1}{3} \int_0^{\infty} \left(\bar{u}^2 + \bar{v}^2 \right) \text{P}(\zeta) {\rm d}\zeta+ \frac{1}{2}
\label{K3D}
\end{aligned}
\end{equation}
with
\begin{equation}
\text{P}(\zeta) = \frac{3}{2}\frac{M_2^4 C_2^4 \sqrt{1-M_2^2}}{\left[M_2^2 C_2^2 +\zeta^2 \left(1-M_2^2\right)\right]^{5/2}}
\label{pdf}
\end{equation}
standing for the normalized probability-density distribution obeying $\int_0^{\infty}\text{P}(\zeta){\rm d}\zeta=1$. It is readily seen that, although the post-shock turbulence spectrum depends on the upstream energy distribution $\int_0^{\infty}\hat{u}_1^2(k)k^2{\rm d}k$, the kinetic energy amplification ratio does not, as long as isotropic conditions are considered, i.e., $\hat{u}_1$ is a function of $k$ only.
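The normalization of \eqref{pdf} is easily verified numerically (a minimal sketch with illustrative post-shock values; names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def P_zeta(z, M2, C2):
    """Probability density of eq. (pdf)."""
    a2 = (M2 * C2)**2
    return 1.5 * a2**2 * np.sqrt(1.0 - M2**2) \
        / (a2 + z**2 * (1.0 - M2**2))**2.5

M2, C2 = 0.28, 10.1   # e.g. the post-shock state for M1 = 5, eps = 0.4
print(quad(P_zeta, 0.0, np.inf, args=(M2, C2))[0])   # -> 1.0
\end{verbatim}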
The amplification ratios for the longitudinal and transverse kinetic energy contributions can be computed with the aid of the probability density distribution. They are conveniently split into rotational and acoustic contributions, yielding
\begin{equation}
\begin{aligned}
L &= L_r + L_a = \int_0^{1}\left[\left(\mathcal{U}_{lr}^r\right)^2+\left(\mathcal{U}_{li}^r\right)^2\right] \text{P}(\zeta) {\rm d}\zeta+\\
&+\int_1^{\infty} \left(\mathcal{U}_{s}^r\right)^2 \text{P}(\zeta) {\rm d}\zeta + \int_1^{\infty} \left(\mathcal{U}^a\right)^2 \text{P}(\zeta) {\rm d}\zeta
\label{L3D}
\end{aligned}
\end{equation}
for the longitudinal part. The variation of the velocity perturbation amplitudes with $\zeta$ is deduced from Fig.~\ref{fig:UV} (for $M_1=5$), knowing that $\zeta$ is inversely proportional to $\tan\theta$, as \eqref{zeta} dictates.
Equivalently, the turbulent kinetic energy associated with the transverse contribution is
\begin{equation}
\begin{aligned}
T &= T_r + T_a = \frac{1}{2} \int_0^{1}\left[\left(\mathcal{V}_{lr}^r\right)^2+\left(\mathcal{V}_{li}^r\right)^2\right] \text{P}(\zeta) {\rm d}\zeta+\\
&+ \frac{1}{2}\int_1^{\infty} \left(\mathcal{V}_{s}^r\right)^2 \text{P}(\zeta) {\rm d}\zeta + \frac{1}{2}\int_1^{\infty} \left(\mathcal{V}^a\right)^2 \text{P}(\zeta) {\rm d}\zeta+ \frac{3}{4}\ .
\end{aligned}
\label{T3D}
\end{equation}
The total turbulent kinetic energy, also split into
rotational and acoustic contributions through $K=K_r+K_a$, is computed with the aid of $K_r= (L_r+2 T_r)/3$ and $K_a= (L_a+2 T_a)/3$, or equivalently through $K=\left(L+2T\right)/3$.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figL3D.pdf} \vspace{2mm}
\\ \includegraphics[width=0.47\textwidth]{figT3D.pdf}\vspace{2mm} \\ \includegraphics[width=0.47\textwidth]{figK3D.pdf}
\caption{Longitudinal $L$, transverse $T$ and total $K$ turbulent kinetic energy for $\varepsilon=$ 0, 0.2, and 0.4. The solid lines account for rotational contribution and the dashed lines show rotational and acoustic contributions.}
\label{fig:LTK}
\end{figure}
The variation of the longitudinal, transverse and total contributions to the turbulent kinetic energy is shown in Fig.~\ref{fig:LTK} as a function of $M_1$, for $\varepsilon=$ 0, 0.2, and 0.4. The solid lines show the rotational contribution and the dashed lines include the contribution of both rotational and acoustic kinetic energy. In agreement with Fig.~\ref{fig:UV}, the acoustic contribution is found to be greater for the longitudinal part $L$, although sufficiently small to be neglected for any $M_1$ and $\varepsilon$ considered. Although not clearly seen in Fig.~\ref{fig:LTK}, the function $K$ approaches a constant value in the strong shock limit $M_1\gg1$; the limiting values are 1.8, 7.1, and 9.8 for $\varepsilon=$ 0, 0.2, and 0.4, respectively. On the other hand, the weak shock limit $M_1-1\ll1$ provides 1, 1.4, and 1.6 for the same conditions. For a fixed value of the incident Mach number, the effect of nuclear dissociation is seen to increase the total kinetic energy. It is found that, for a Mach number close to 3, the total kinetic energy is less sensitive to the dissociation energy, although the longitudinal and transverse contributions are clearly affected in opposite directions, indicating that the post-shock anisotropy is modified by $\varepsilon$. The longitudinal contribution is generally diminished by nuclear dissociation if the Mach number is sufficiently high, a region that covers the scenarios of most interest. It is also found that transverse perturbations are more sensitive to the shock passage, thereby forming a post-shock flow that differs from the ideal 1D configuration.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figLTK2016.pdf}
\caption{Longitudinal $L$, transverse $T$ and total $K$ turbulent kinetic energy for $M_1=5$ as a function of $\varepsilon$. The solid lines represent computations of eqs.~\eqref{L3D}, \eqref{T3D} and \eqref{K3D}, while the dashed lines show the predictions in \protect\cite{Abdikamalov16}.}
\label{fig:LTK1617}
\end{figure}
A direct comparison with the results obtained in \cite{Abdikamalov16} reveals that the dependence of the turbulent kinetic energy on $M_1$ and $\varepsilon$ is affected when endothermic effects are included in the linear perturbation analysis. Although similar trends with increasing $\varepsilon$ are found in both works, the values may differ substantially when the energy employed in dissociating the gas is sufficiently high. For the sake of exemplification, predictions for $L$, $T$ and $K$ are computed in Fig. \ref{fig:LTK1617} by using eqs.~\eqref{L3D}, \eqref{T3D} and \eqref{K3D} (solid) and by recasting the data in \cite{Abdikamalov16} (dashed). The differences become more pronounced with increasing shock strength, reaching $\sim 30\%$ in $K$ for $M_1 = 10$ and $\varepsilon=0.4$.
\subsection{Turbulent Mach number}
It is instructive to relate the pre-shock and post-shock turbulent Mach numbers. It is straightforward to see that
\begin{equation}
\langle \delta M_2^2\rangle = -4 M_2 \langle \bar{u}\bar{a} \rangle + \langle \bar{u}^2 \rangle + \langle \bar{v}^2 \rangle + \langle \bar{w}^2 \rangle +3 M_2^2 \langle \bar{a}^2\rangle\ ,
\end{equation}
which can be split into entropic-rotational and acoustic contributions as $\langle \delta M_2^2\rangle=\langle \delta M_1^2 \rangle \left( \Phi_{er}+\Phi_{ac}\right)$, where the functions $\Phi_{er}$ and $\Phi_{ac}$ represent these two contributions. For isotropic turbulence in the upstream flow, the entropic-rotational part reads
\begin{equation}
\begin{aligned}
\Phi_{er} &=\frac{M_2^2C_2^2}{M_1^2}\left[\frac{\langle \bar{u}_r^2 \rangle + \langle \bar{v}_r^2 \rangle + \langle \bar{w}_r^2 \rangle}{3\langle \bar{u}_1^2 \rangle}+ \frac{M_2^2}{4} \frac{\langle \bar{\rho}_e^2 \rangle}{\langle \bar{u}_1^2 \rangle} + \frac{2 M_2}{3} \frac{\langle \bar{u}_r \bar{\rho}_e \rangle}{\langle \bar{u}_1^2 \rangle}\right]\\
&= \frac{M_2^2C_2^2}{M_1^2}\left[K_r + \frac{M_2^2}{4} D_e+
\frac{2 M_2}{3} B_{er}\right]\ ,
\end{aligned}
\end{equation}
while the acoustic contribution can be expressed as
\begin{equation}
\begin{aligned}
\Phi_{ac} &= \frac{M_2^2C_2^2}{M_1^2}\left[\frac{\langle \bar{u}_a^2 \rangle + \langle \bar{v}_a^2 \rangle}{3\langle \bar{u}_1^2 \rangle}+ \frac{M_2^2}{4} \frac{\langle\bar{\rho}_a^2 \rangle}{\langle \bar{u}_1^2 \rangle}- \frac{2M_2}{3} \frac{\langle \bar{u}_a\bar{\rho}_a \rangle}{\langle \bar{u}_1^2 \rangle} \right]\\
&=\frac{M_2^2C_2^2}{M_1^2}\left[K_a + \frac{M_2^2 (\gamma-1)^2}{4} D_a - \frac{2 M_2 (\gamma-1)}{3} B_a\right]\ .
\end{aligned}
\end{equation}
The values of $K_r$, $K_a$, $D_e$, $D_a$, $B_{er}$, and $B_a$ are provided in Eq.~\eqref{K3D} for the kinetic energy, in Eq.~\eqref{D3D} for the average density perturbations, and in Eq.~\eqref{B3D} for the buoyancy correlation. The mean value of the post-shock Mach number includes changes in the velocity field, changes in the density, and the cross-product contribution. As $\bar{v}$ and $\bar{\rho}$ are orthogonal functions, only the longitudinal contribution correlates with the density perturbations. The latter are expressed as a function of the shock pressure through $\bar{\rho}_e(x)= (\mathcal{D}-1)\bar{p}_s(\tau=x/M_2)$ for the entropic perturbations, and through $\bar{\rho}_a=\bar{p}_s(\tau=x/M_2)$ for the acoustic part.
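For orientation, the following minimal sketch (in Python) assembles the split $\langle \delta M_2^2\rangle=\langle \delta M_1^2 \rangle(\Phi_{er}+\Phi_{ac})$ numerically, assuming the coefficients $K_r$, $K_a$, $D_e$, $D_a$, $B_{er}$ and $B_a$ have already been evaluated from the quadratures referenced above; the helper names and the default $\gamma=4/3$ are illustrative assumptions, not part of our production code.
\begin{verbatim}
def phi_er(M1, M2, C2, Kr, De, Ber):
    # entropic-rotational contribution to <dM2^2>/<dM1^2>
    pref = (M2*C2/M1)**2
    return pref*(Kr + 0.25*M2**2*De + (2.0*M2/3.0)*Ber)

def phi_ac(M1, M2, C2, Ka, Da, Ba, gamma=4.0/3.0):
    # acoustic contribution, with the (gamma-1) factors as in the text
    pref = (M2*C2/M1)**2
    return pref*(Ka + 0.25*(M2*(gamma - 1.0))**2*Da
                 - (2.0*M2*(gamma - 1.0)/3.0)*Ba)
\end{verbatim}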
The value of $\Phi=\Phi_{er}+\Phi_{ac}$ is computed in Fig.~\ref{fig:lum} as a function of the shock strength $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4. For typical values of these parameters ($0.2 \lesssim \varepsilon \lesssim 0.4$ and $M_1 \gtrsim 5$), $\Phi$ ranges from $\sim 0.3$ to $\sim 0.6$. Similarly to the turbulent kinetic energy in the post-shock region, most of the contribution to $\Phi$ comes from the entropic-rotational part, while the acoustic contribution $\Phi_{ac}$ is found to be negligibly small.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figPhi3D.pdf}
\caption{Variable $\Phi$ as a function of the shock strength $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4.}
\label{fig:lum}
\end{figure}
\subsection{Enstrophy}
The effect of the shock passage on the upstream isotropic vorticity field can be computed with the aid of eqs. \eqref{vort} and \eqref{pdf}. The amplification of the average squared vorticity perturbations, nondimensionalized with $(k a_2)^2$, is written as
\begin{equation}
W = \frac{\langle\bar{\omega}_{x}^2+\bar{\omega}_{y}^2+\bar{\omega}_{z}^2\rangle}{\langle\bar{\omega}_{1,x}^2+\bar{\omega}_{1,y}^2+\bar{\omega}_{1,z}^2\rangle} = \frac{1}{3}+ \frac{2}{3}\frac{\langle \bar{\omega}_{y}^2+\bar{\omega}_{z}^2\rangle}{\langle\bar{\omega}_{1,y}^2+\bar{\omega}_{1,z}^2\rangle}=\frac{1}{3}+ \frac{2}{3}W_{\perp}\ ,
\label{W3D}
\end{equation}
with the factor $1/3$ referring to the invariable component of the vorticity pointing in the streamwise direction, and $W_{\perp}$ being the amplification factor of the averaged squared vorticity perpendicular to the shock propagation velocity. The two-dimensional equivalent factor
\begin{equation}
\begin{aligned}
&W_z = \frac{\langle \bar{\omega}_{z}^2\rangle}{\langle \bar{\omega}_{1,z}^2\rangle}=\int_1^\infty\left(\Omega_1+\Omega_2\mathcal{P}_{s}\right)^2\frac{C_2^2 M_2^2}{C_2^2 M_2^2+(1-M_2^2)\zeta^2} \text{P}(\zeta) {\rm d}\zeta \\
&+ \int_0^1\left[\left(\Omega_1+\Omega_2\mathcal{P}_{lr}\right)^2+\Omega_2^2\mathcal{P}_{li}^2\right]\frac{C_2^2 M_2^2}{C_2^2 M_2^2+(1-M_2^2)\zeta^2} \text{P}(\zeta) {\rm d}\zeta
\label{Wz}
\end{aligned}
\end{equation}
is conveniently employed in computing the perpendicular contribution as $W_{\perp}=(C_2+3W_z)/4$.
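A minimal numerical sketch of eqs.~\eqref{W3D} and \eqref{Wz}, assuming the two bracketed integrands (including the probability density $\text{P}(\zeta)$) are supplied as callables built from the expressions given earlier in the paper; the helper names are hypothetical:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def enstrophy_W(kernel_short, kernel_long, C2):
    # kernel_short(zeta): integrand of Eq. (Wz) on 0 < zeta < 1
    # kernel_long(zeta):  integrand of Eq. (Wz) on zeta > 1
    Wz = quad(kernel_short, 0.0, 1.0)[0] + quad(kernel_long, 1.0, np.inf)[0]
    W_perp = (C2 + 3.0*Wz)/4.0        # perpendicular amplification factor
    return 1.0/3.0 + 2.0/3.0*W_perp   # Eq. (W3D)
\end{verbatim}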
\begin{figure}
\includegraphics[width=0.47\textwidth]{figW3D.pdf}
\caption{Mean value of squared vorticity perturbations, $W$, for $\varepsilon=$ 0, 0.2, and 0.4.}
\label{fig:W}
\end{figure}
The so-called enstrophy, $W$, is computed in Fig.~\ref{fig:W} for the same conditions as in Fig.~\ref{fig:LTK}. In consonance with the turbulent kinetic energy, the effect of nuclear dissociation across the shock is found to increase the average vorticity intensity for a fixed value of $M_1$.
When the shock expands at a variable Mach number, the theory still holds if base-flow changes are negligible within the perturbation wavelength. Upstream turbulent flows characterized by short wavelengths will meet this restriction. On the other hand, perturbations must be sufficiently large for the shock to be seen as a pure discontinuity. In that case, the post-shock kinetic energy at any radial locus can be approximated by the one left by the expanding shock, whose instantaneous properties $M_1$ and $\varepsilon$ can be computed following the analysis presented in the next section. The values obtained for the downstream kinetic energy and enstrophy can then be used to compute the evolution of the turbulent flow under viscous-dissipative effects. Thus, Figs.~\ref{fig:LTK} and \ref{fig:W} serve as the onset for this post-shock stage, with the subsequent thermalization of the kinetic energy being inferred from the dissipation energy cascade associated with the dominant scales \citep{Mabanta17}.
\section{Nuclear Dissociation Energy and the Pre-shock Mach Number in CCSN Models} \label{sec:varepsilon}
\begin{figure}
\includegraphics[width=0.47\textwidth]{e_m_rsh_vs_t_e_s15_h122.pdf}
\includegraphics[width=0.47\textwidth]{e_m_rsh_vs_t_e_s15_h123.pdf}
\includegraphics[width=0.47\textwidth]{e_m_rsh_vs_t_e_s25_h118.pdf}
\caption{{\bf Top panel:} Time evolution of the shock radius (dashed black line) and the nuclear dissociation parameter (solid red line) for non-exploding model $s15$ with heating factor $h=1.2$ (i.e., a group I model). For reference, the horizontal red dashed line shows the $\varepsilon=0.2$ line. {\bf Center panel:} The same as in top panel but for exploding model $s15$ with heating factor $h=1.23$ (i.e., a group II model). {\bf Bottom panel:} The same as in top panel but for model $s25$ with heating factor $h=1.18$ that undergoes strong shock oscillations (i.e., a group III model).}
\label{fig:rsh_e_m}
\end{figure}
\begin{figure}
\includegraphics[width=0.47\textwidth]{e_vs_rsh.pdf}
\caption{The nuclear dissociation parameter $\varepsilon$ as a function of the shock radius for non-exploding (group I) and exploding (group II) models. Each line represents a specific model and the color of each line indicates the time: the blue end of the line corresponds to $10\,\mathrm{ms}$ after bounce, while the red end corresponds to late postbounce time ($t-t_\mathrm{b} \sim 1\,\mathrm{s}$). For shock radii $R_\mathrm{shock}\lesssim 175 \,\mathrm{km}$, $\varepsilon$ scales as $\propto R_\mathrm{shock}$, while for larger shock radii, the growth of $\varepsilon$ saturates and remains $\sim 0.5$ up to $R_\mathrm{shock}\sim 600 \,\mathrm{km}$.}
\label{fig:e_vs_rsh2}
\end{figure}
This section presents the estimates of the nuclear dissociation energy and the pre-shock Mach number from a series of spherically-symmetric CCSN simulations using the {\tt GR1D} code with the leakage/heating scheme \citep{Oconnor10}. Eight progenitor star models from \cite{Woosley07} with ZAMS masses of $12M_\odot$, $15M_\odot$, $18M_\odot$, $20M_\odot$, $25M_\odot$, $30M_\odot$, $40M_\odot$, and $70M_\odot$ were considered. Each progenitor model is evolved using several values of the heating parameter. This yields a variety of qualitatively different evolutionary paths for each stellar model, ranging from non-exploding models to rapidly exploding models. Each simulation is named using the following convention: for example, the simulation $s15h1.23$ uses a progenitor model with a ZAMS mass of $15M_\odot$ evolved with a heating factor of $1.23$ \citep[for the definition of the heating factor, see, e.g.,][]{Oconnor10,Ott13}.
Our simulations use the SFHo finite-temperature nuclear EOS of \cite{Steiner13}\footnote{Available at {\tt www.stellarcollapse.org} \citep{Oconnor10}.} as this EOS employs an accurate treatment of light nuclei. Calculations with the \cite{Lattimer91} EOS with nuclear incompressibility of $K=220$ MeV revealed similar results. Across our computational domain, we use a logarithmic radial grid of $1000$ zones with a central resolution of $0.1\,\mathrm{km}$. The outer boundary is fixed at the radius where the initial density is $2\times10^3 \, \mathrm{g/cm^3}$.
The shock wave dissociates heavy nuclei into light nuclei such as $\alpha$ particles and free nucleons. The SFHo EOS includes the nuclei ${}^2$H, ${}^3$H, ${}^3$He, ${}^4$Li, $\alpha$ particles, and heavy nuclei. Based on the change of the mass fractions of nuclei across the shock, the nuclear dissociation parameter is calculated using formula (\ref{eq:varepsilon2}) derived in Appendix~\ref{App2}. The binding energies of the light nuclei are taken from the \cite{Audi03} database, while those of heavy nuclei are assumed to be equal to that of iron nuclei, i.e., $8.8$ MeV per nucleon. For calculating the dissociation energy at the shock, this is a reasonable assumption, as the binding energies of heavy nuclei in the iron core and Si/O shells differ by at most $\sim 10\%$.
The qualitative behaviors of $\varepsilon$ and $M_1$ depend on the overall dynamics of each model. In this respect, all the models considered here can be categorized into three groups: ($i$) non-exploding models, in which the shock radius gradually decreases with time without exhibiting strong radial oscillations (group I), ($ii$) exploding models, in which the shock gradually expands without strong oscillations (group II), and ($iii$) models in which the shock wave exhibits strong oscillations before either transitioning to explosion or failing to explode (group III). In the following, we describe these three model groups separately.
The top panel of Fig.~\ref{fig:rsh_e_m} shows the shock radius (dashed black line) and the dissociation parameter $\varepsilon$ (solid red line) as a function of post-bounce time for model $s15h1.22$. This is a non-exploding model, in which the shock gradually recedes without exhibiting strong radial oscillations, i.e., this model belongs to group I. After the initial period of $\sim 50\,\mathrm{ms}$, during which the shock undergoes rapid expansion, the shock stalls until $t-t_\mathrm{b}\sim 100\,\mathrm{ms}$, after which $R_\mathrm{shock}$ starts receding monotonically. The qualitative behavior of $\varepsilon$ is similar to that of $R_\mathrm{shock}$: following the initial period of increase and subsequent stagnation, $\varepsilon$ gradually decreases with time. The dissociation parameter $\varepsilon$ falls below, e.g., $\varepsilon=0.2$ when $R_\mathrm{shock} \lesssim 55\,\mathrm{km}$. Other models of group I exhibit a similar behavior.
The center panel of Fig.~\ref{fig:rsh_e_m} shows the shock radius (dashed black line) and the dissociation parameter $\varepsilon$ (solid red line) as a function of post-bounce time for model $s15h1.23$. This is an exploding model, in which the shock gradually expands without exhibiting strong radial oscillations, i.e., it belongs to group II. In this model, the stalled shock phase lasts until $t-t_\mathrm{b}\sim 200\,\mathrm{ms}$, after which $R_\mathrm{shock}$ slowly increases. In this phase, $R_\mathrm{shock}$ exhibits only weak oscillations with a relative amplitude of a few percent. At $t-t_\mathrm{b}\sim 500\,\mathrm{ms}$, the shock starts rapidly expanding and the model quickly transitions towards explosion. At early times, $t-t_\mathrm{b}\lesssim 500\,\mathrm{ms}$ after bounce, the dissociation parameter stays above $0.2$ and oscillates around the value of $\sim 0.5$. However, it rapidly decreases during the explosion phase, once the shock radius becomes $\gtrsim 800\,\mathrm{km}$. Other models of group II exhibit a similar behavior.
It is illuminating to analyze $\varepsilon$ as a function of shock radius, a plot of which is shown in Fig.~\ref{fig:e_vs_rsh2} for all of our models in groups I and II. Each line in this plot corresponds to one model and the color of a point on this line reflects the time after bounce: the blue end of each line corresponds to $t-t_\mathrm{b}=10\,\mathrm{ms}$, while the red part corresponds to the end of the simulations ($t-t_\mathrm{b}\sim 1 \,\mathrm{s}$). In all non-exploding models (group I), $\varepsilon$ scales as $\propto R_\mathrm{shock}$, with the proportionality depending on mass:
\begin{equation}
\varepsilon \sim \frac{2}{3} M_{1.3}^{-1} \left( \frac{R_\mathrm{shock}}{150\,\mathrm{km}} \right).
\label{eq:eps_scaling}
\end{equation}
This relation is qualitatively similar to Eq.~(4) predicted by \citet{Fernandez2009a}. However, as can be seen in Fig.~\ref{fig:e_vs_rsh2}, the $\varepsilon\propto R_\mathrm{shock}$ scaling becomes invalid as soon as the shock radius becomes larger than $\sim 175\,\mathrm{km}$, which occurs in exploding models. In this regime, $\varepsilon$ stops growing with $R_\mathrm{shock}$ and saturates to $\sim 0.5$ for most models.
Figure~\ref{fig:m_vs_rsh} shows the pre-shock Mach number $M_1$ as a function of shock radius for all of our models in groups I and II. As in Fig.~\ref{fig:e_vs_rsh2}, each line represents a single model and the color of each point on each line represents the post-bounce time. Except at the immediate post-bounce time ($t-t_\mathrm{b}\sim 10-20\,\mathrm{ms}$), $M_1$ depends on $R_\mathrm{shock}$ as
\begin{equation}
M_1 \sim 6.5\times \left( \frac{150\,\mathrm{km}}{R_\mathrm{shock}} \right)^{0.37}.
\label{eq:M1_scaling}
\end{equation}
This relation is only approximate, and the spread of the values of $M_1$ at a given $R_\mathrm{shock}$ is caused by the fact that different models have somewhat different thermodynamic conditions (e.g., temperature), which lead to different values of the speed of sound and, in turn, affect the Mach number.
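The two fits are simple enough to be sketched directly; in the snippet below, $M_{1.3}$ is read as the proto-neutron-star mass in units of $1.3\,M_\odot$ (our interpretation of the notation), and the saturation of $\varepsilon$ at $\sim0.5$ is imposed by hand following Fig.~\ref{fig:e_vs_rsh2}:
\begin{verbatim}
def epsilon_of_rshock(r_km, m13=1.0):
    # Eq. (eps_scaling), capped at the ~0.5 saturation seen for large radii
    return min((2.0/3.0)/m13*(r_km/150.0), 0.5)

def mach_of_rshock(r_km):
    # Eq. (M1_scaling), valid in the stalled-shock phase
    return 6.5*(150.0/r_km)**0.37

print(epsilon_of_rshock(100.0))  # ~0.44
print(mach_of_rshock(100.0))     # ~7.6
\end{verbatim}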
Finally, the bottom panel of Fig.~\ref{fig:rsh_e_m} shows the shock radius (dashed black line) and the nuclear dissociation parameter $\varepsilon$ (solid red line) as a function of time for model $s25h1.18$. This model exhibits strong radial shock oscillations from $\sim200\,\mathrm{ms}$ till $\sim800\,\mathrm{ms}$ after bounce. During this time, $\varepsilon$ also undergoes strong oscillations with the same frequency as the shock radius. The oscillations in the two quantities are somewhat out of phase. When the increase of $R_\mathrm{shock}$ is decelerating, $\varepsilon$ starts decreasing fast, reaching its local minimum just before $R_\mathrm{shock}$ does. It starts increasing when the shock radius is approaching its local minimum. At its minimum, $\varepsilon$ can become as small as $0.1$ for a brief period of time. The frequency of these oscillations is comparable to the frequencies of the infalling perturbations. For this reason, the linear formalism presented in this work is unlikely to be applicable to such models (cf. Section~\ref{sec:perturb_problem}). On the other hand, such oscillations are artificially strong in 1D models. Full 3D simulations are unlikely to exhibit strong oscillations, at least not in the angle-averaged shock radius. However, in the presence of strong SASI oscillations, the shock radius may oscillate along radial directions. In these situations, the values of $\varepsilon$ are likely to exhibit oscillations similar to those in group III models.
\begin{figure}
\includegraphics[width=0.47\textwidth]{m_vs_rsh.pdf}
\caption{Mach number as a function of the shock radius. The color of each line indicates the corresponding post-bounce time at which the value of the Mach number is extracted. The blue end of the lines corresponds to the early post-bounce time of $t-t_\mathrm{b} = 10 \, \mathrm{ms}$, while the red region corresponds to late post-bounce time ($t-t_\mathrm{b} \sim 1 \, \mathrm{s}$). The dashed black line represents the fitting function (\ref{eq:M1_scaling}) that yields the values of the pre-shock Mach number as a function of the shock radius $R_\mathrm{shock}$ in the stalled shock phase.}
\label{fig:m_vs_rsh}
\end{figure}
\subsection{Amplification of turbulent kinetic energy as a function of the shock radius}
In addition to analyzing the amplification of turbulent kinetic energy across the shock as a function of the parameters $\varepsilon$ and $M_1$, as was done in Section~\ref{sec:analysis_field}, one can get additional insight by looking at it as a function of the shock radius $R_\mathrm{shock}$. To this end, equations \eqref{eq:eps_scaling} and \eqref{eq:M1_scaling} allow us to express the nuclear dissociation degree $\varepsilon$ and the shock strength $M_1$ as functions of the shock radius, $R_\mathrm{shock}$. These expressions are employed to compute $L$, $T$ and $K$ as functions of $R_\mathrm{shock}$ in Fig.~\ref{fig:LTKrs}. Each component of the turbulent kinetic energy appears to depend rather weakly on $R_\mathrm{shock}$. The transverse component increases by a factor of $\sim 3$, while the longitudinal component experiences no significant amplification. The total turbulent kinetic energy is amplified by a factor of $\sim 2$. As shown in Fig.~\ref{fig:e_vs_rsh2}, there exist two distinct regions: the zone where $\varepsilon$ is linearly proportional to the shock position ($R_s\leq$175 km) and the region where nuclear dissociation is saturated. For small radii, the strong-shock adiabatic limit applies, as $M_1$ grows proportionally to $R_s^{-0.37}$ and $\varepsilon$ approaches zero.
The dashed lines in Fig.~\ref{fig:LTKrs} represent the amplification of the integrated kinetic energy in the region of space confined between the shock and the center through
\begin{equation}
\label{eq:LTK_int}
\begin{pmatrix} \bar L\\ \bar T\\ \bar K \end{pmatrix}
= \frac{3}{R_{\text{shock}}^3}\int_0^{R_{\text{shock}}}\begin{pmatrix} L(r)\\ T(r)\\ K(r) \end{pmatrix}r^2 \rm{d} r\ ,
\end{equation}
provided that the characteristic time over which the post-shock turbulent structures evolve due to viscous-diffusive effects is much longer than the shock passage time through the matter to the distance $R_{\text{shock}}$.
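Numerically, the volume average in Eq.~\eqref{eq:LTK_int} amounts to a one-dimensional quadrature; a sketch, assuming a callable $K(r)$ built by composing the amplification factor of Section~\ref{sec:analysis_field} with the fits of Section~\ref{sec:varepsilon} (the helper name is hypothetical):
\begin{verbatim}
from scipy.integrate import quad

def volume_averaged(K_of_r, r_shock):
    # (3/R^3) * int_0^R K(r) r^2 dr, cf. Eq. (eq:LTK_int)
    integral, _ = quad(lambda r: K_of_r(r)*r**2, 0.0, r_shock)
    return 3.0*integral/r_shock**3
\end{verbatim}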
\begin{figure}
\includegraphics[width=0.45\textwidth]{figLTKRshock.pdf}
\caption{Longitudinal $L$, transverse $T$ and total $K$ turbulent kinetic energy for $M_1=5$ as a function of the shock radius. The dashed lines represent the amplification of the integrated kinetic energy in the post-shock region (cf. Eq.~\ref{eq:LTK_int}).}
\label{fig:LTKrs}
\end{figure}
\section{Discussion: Impact on the CCSN Explosion Mechanism}
\label{sec:discussion}
Generally speaking, the pre-shock perturbations in CCSNe consist of different physical modes, including acoustic and entropy waves in addition to the vorticity modes considered in this work. Without including all of these modes, one cannot obtain a rigorous estimate of the impact of perturbations on the explosion condition. However, at linear order and for a uniform mean flow, all these modes evolve independently of each other. Therefore, we can study the effect of the vorticity modes alone in this work. The effect of the other modes will be addressed in future work.
The impact of the perturbations on the explosion condition can be analyzed using the concept of the critical neutrino luminosity, i.e., the minimum neutrino luminosity that is necessary in order to produce an explosion for a given stellar model \citep{Burrows93}. The turbulence behind the supernova shock reduces the critical luminosity, an analytical estimate of which was obtained by \citet{Mueller15}:
\begin{equation}
\label{eq:crit_lum}
L_\mathrm{crit} \propto \left(\dot M M\right)^{3/5}r_\mathrm{gain}^{-2/5}\left(1+\frac{4}{3}\langle \delta M_2^2\rangle \right)^{-3/5},
\end{equation}
where $\delta M_2$ is the turbulent Mach number in the gain region. It is comprised of two contributions, one coming from neutrino-driven convection and/or SASI, another stemming from the perturbations crossing the shock. \citet{Mueller15} argue that the impact of the density perturbations generated by the advection of vorticity waves plays the dominant role in driving buoyancy-driven turbulence in the post-shock region. The resulting reduction in the critical luminosity was recently calculated by \citet{Mueller16}:
\begin{equation}
\begin{aligned}
\label{eq:dl}
\frac{\Delta L_\mathrm{crit}}{L_\mathrm{crit}} \simeq - \frac{0.15 \pi}{l \eta_\mathrm{acc}\eta_\mathrm{heat}} \sqrt{\langle \delta M^2_0 \rangle},
\end{aligned}
\end{equation}
where $\delta M_0$ is the turbulent Mach number in the convective nuclear burning shell prior to collapse, $l$ is the angular wavenumber of the dominant perturbation, $\eta_\mathrm{heat}$ and $\eta_\mathrm{acc}$ are the efficiencies of neutrino heating and accretion.
Expression (\ref{eq:dl}) is derived under the assumption that the advection of convective perturbations from Si/O shells towards the shock generates density perturbations of order
\begin{equation}
\label{eq:drho2_m2}
\sqrt{\langle \bar\rho^2_2 \rangle} \sim \sqrt{\langle \delta M_0^2 \rangle}
\end{equation}
behind the shock. This estimate does not include the density fluctuations associated with entropy perturbations generated in the post-shock region by the interaction of the shock with vorticity waves. Below, we estimate the impact of these perturbations on the critical luminosity.
\subsection{Density perturbations in the post-shock region}
\label{sec:density_pert}
According to the linearized RH equations, \eqref{massRH}-\eqref{eneRH}, the corrugated shock front induces density perturbations in the post-shock gas. Such perturbations are of entropic ($\hat{\rho}_e$) and acoustic ($\hat{\rho}_a$) nature, with the former remaining frozen to the fluid particles in the absence of diffusive effects. For an isotropic field of incoming vorticity perturbations, the average of the squared density changes in the post-shock region can be written as
\begin{equation}
\begin{aligned}
\langle\bar{\rho}^2\rangle = D \int_{0}^\infty\hat{u}_1^2(k)k^2dk
\label{denave}
\end{aligned}
\end{equation}
with the dimensionless pre-spectrum coefficient $D$, split into entropic $D_e$ and acoustic $D_a$ contributions, computed as
\begin{equation}
\begin{aligned}
D &= D_e + D_a = \left(\mathcal{D}-1\right)^2\int_0^{1}\left(\mathcal{P}_{lr}^2+\mathcal{P}_{li}^2\right) \text{P}(\zeta) {\rm d}\zeta+\\
&+\left(\mathcal{D}-1\right)^2\int_1^{\infty}\mathcal{P}_{s}^2 \, \text{P}(\zeta) {\rm d}\zeta + \int_1^{\infty} \mathcal{P}^2\, \text{P}(\zeta) {\rm d}\zeta\ .
\label{D3D}
\end{aligned}
\end{equation}
The terms involving the factor $\left(\mathcal{D}-1\right)^2$ correspond to the entropic contribution $D_e$, while the last term refers to the acoustic part $D_a$.
Figure~\ref{fig:D} shows the functions $D_e$ and $D_a$ versus $M_1$ for $\varepsilon=0$, $0.2$, and $0.4$. Both $D_e$ and $D_a$ grow with $M_1$ and $\varepsilon$. The acoustic part $D_a$ is at least two orders of magnitude smaller than the entropic part $D_e$ and is thus negligible.
In order to obtain a more intuitive insight, it is useful to express $\langle\bar{\rho}^2\rangle$ as a function of the pre-shock turbulent Mach number. The latter is related to the average upstream velocity perturbations as
\begin{equation}
\begin{aligned}
\langle \delta M_1^2 \rangle=3 \left(\frac{a_2}{a_1}\right)^2\langle \bar{u}_1^2 \rangle=\frac{3 M_1^2}{M_2^2C_2^2}\langle \bar{u}_1^2 \rangle\ .
\end{aligned}
\end{equation}
Combining this with (\ref{uw3D}) and (\ref{denave}), we obtain
\begin{equation}
\begin{aligned}
\langle \bar\rho^2_2 \rangle = \frac{M_2^2C_2^2 D}{8 \pi M^2_1} \langle \delta M_1^2 \rangle = A \langle \delta M_1^2 \rangle\ .
\end{aligned}
\end{equation}
Figure~\ref{fig:drho2dM1} shows the ratio $A=\langle \bar\rho^2_2 \rangle/\langle \delta M_1^2 \rangle$ as a function of the shock strength $M_1$ for $\varepsilon=0$, $0.2$, and $0.4$. For typical values of these parameters ($0.2 \lesssim \varepsilon \lesssim 0.4$ and $M_1 \gtrsim 5$), the ratio $\langle \bar\rho^2_2 \rangle/\langle \delta M_1^2 \rangle$ ranges from $\simeq\!0.1$ to $\simeq\!0.2$. Accordingly,
\begin{equation}
\label{eq:drho2vsMach1}
\sqrt{\langle \bar\rho^2_2 \rangle} \simeq (0.32 - 0.45) \times \sqrt{\langle \delta M_1^2 \rangle}.
\end{equation}
We can relate the turbulent Mach number $\sqrt{\langle \delta M_1^2 \rangle}$ immediately above the shock to that in the pre-collapse convective shells. During collapse, the Mach number of vorticity waves grows as $\propto r^{(3\gamma-7)/4}$ in the absence of dissipative effects \citep{Kovalenko98,Lai00}. If the convective shell falls from a radius of $\sim 1500\,\mathrm{km}$ to $\sim 200\,\mathrm{km}$, the turbulent Mach number should increase by a factor of $\sim 4.53$. Applying this to scaling (\ref{eq:drho2vsMach1}), we obtain
\begin{equation}
\sqrt{\langle \bar\rho^2_2 \rangle} \simeq (1.45 - 2.04) \times \sqrt{\langle \delta M_0^2 \rangle}.
\label{eq:drho2vsMach12}
\end{equation}
The density perturbations predicted by this relation are significantly larger than those generated by the advection of the vorticity waves given by (\ref{eq:drho2_m2}). Below, we investigate whether these perturbations contribute to the turbulence in the gain region.
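The numerical factors quoted above follow from a one-line computation (a sketch; $\gamma=4/3$ and the $1500\,\mathrm{km}\rightarrow200\,\mathrm{km}$ infall are the values used in the text):
\begin{verbatim}
gamma = 4.0/3.0
growth = (200.0/1500.0)**((3.0*gamma - 7.0)/4.0)  # Mach growth during infall
print(round(growth, 2))                              # 4.53
print(round(0.32*growth, 2), round(0.45*growth, 2))  # 1.45, 2.04
\end{verbatim}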
\begin{figure}
\includegraphics[width=0.47\textwidth]{figD3De.pdf} \\
\includegraphics[width=0.47\textwidth]{figD3Da.pdf}
\caption{Variable $D$ as a function of $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4. The upper panel shows the contribution of entropic-rotational perturbations $D_e$ and the lower panel displays the acoustic contribution $D_a$.}
\label{fig:D}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figA3D.pdf}
\caption{Ratio $A=\langle \bar\rho^2_2 \rangle/\langle \delta M_1^2 \rangle$ as a function of the shock strength $M_1$ for $\varepsilon=0$, $0.2$, and $0.4$.}
\label{fig:drho2dM1}
\end{figure}
\subsection{Generation of Turbulence from Density Perturbations}
When density perturbations are immersed in a gravitational field, buoyancy effects may play a significant role in contributing to the turbulent kinetic energy. The kinetic energy production or consumption can be scaled with $\langle \bar{\rho}\bar{u} \rangle g / a_2$ \citep[see, e.g., Chapter 8.2 of][]{Holton12}, with $\bar{u}$ being the velocity component parallel to the gravity field $g$, which in our case coincides with the direction of the mean flow. As was done for the density perturbations, the correlation of the velocity and density disturbances can be expressed as
\begin{equation}
\begin{aligned}
\langle\bar{\rho}\bar{u}\rangle = B \int_{0}^\infty\hat{u}_1^2(k)k^2dk
\label{buoave}
\end{aligned}
\end{equation}
where $B$ is a dimensionless pre-spectrum factor,
\begin{equation}
\begin{aligned}
B &= B_{er} + B_a = \left(\mathcal{D}-1\right)\int_0^{1}\left(\mathcal{U}_{lr}^r\mathcal{P}_{lr}+\mathcal{U}_{li}^r\mathcal{P}_{li}\right)\text{P}(\zeta) {\rm d}\zeta+\\
&+\left(\mathcal{D}-1\right)\int_1^{\infty} \mathcal{U}_{s}^r\mathcal{P}_{s}\,\text{P}(\zeta) {\rm d}\zeta + \int_1^{\infty} \mathcal{U}^a\mathcal{P}\, \text{P}(\zeta) {\rm d}\zeta.
\label{B3D}
\end{aligned}
\end{equation}
The entropic-rotational part $B_{er}$ consists of the terms proportional to the factor $\mathcal{D}-1$, while the last integral represents the acoustic contribution $B_a$. For negative values of $\langle \bar{\rho}\bar{u} \rangle$ (i.e., positive velocity-temperature correlation), the density perturbation contributes constructively to the post-shock turbulent kinetic energy. The contrary applies for $\langle \bar{\rho}\bar{u} \rangle>0$.
Figure \ref{fig:DB} shows $B$ as a function of the shock Mach number for $\varepsilon=$ 0, 0.2, and 0.4. Similarly to $D$, the acoustic contribution to $B$ is found to be negligible. The buoyancy correlation is negative, meaning that the density perturbations will increase the value of the final turbulent kinetic energy.
\begin{figure}
\includegraphics[width=0.47\textwidth]{figB3D.pdf}\vspace{2mm}
\caption{Correlated density-velocity $B$ as a function of shock strength $M_1$ for $\varepsilon=$ 0, 0.2, and 0.4.}
\label{fig:DB}
\end{figure}
In the light of this finding, we can substitute the density fluctuations (\ref{eq:drho2vsMach12}) from entropy waves into the expression for the reduction of the critical luminosity (\ref{eq:dl}) and obtain
\begin{equation}
\begin{aligned}
\label{eq:dlf}
\frac{\Delta L_\mathrm{crit}}{L_\mathrm{crit}} \simeq - (1.45 - 2.04) \times \frac{0.15 \pi}{l \eta_\mathrm{acc}\eta_\mathrm{heat}} \sqrt{\langle \delta M^2_0 \rangle}.
\end{aligned}
\end{equation}
For typical values of $\eta_\mathrm{acc}=2$, $\eta_\mathrm{heat}=0.1$, $\sqrt{\langle \delta M^2_0 \rangle} \sim 0.1$, and $l=2$, we get a $17-24\%$ reduction in the critical luminosity. This roughly agrees with the results of 3D simulations \citep{Mueller17}.
\subsection{Impact of acoustic waves and direct injection of kinetic energy}
In addition to the impact of entropy perturbations on the explosion condition of CCSNe, one can in principle study the role of other effects, such as the acoustic waves generated by the interaction of the shock with vorticity waves and the direct injection of the kinetic energy of vorticity waves into the post-shock region. The reduction of the critical luminosity due to the latter was estimated by \citet{Abdikamalov16}:
\begin{equation}
\label{eq:dldi}
\frac{\Delta L_\mathrm{crit}}{L_\mathrm{crit}} \sim 0.6 \langle \delta M^2_1 \rangle .
\end{equation}
For the same parameters used for estimate (\ref{eq:dlf}), equation (\ref{eq:dldi}) yields a $\sim 12\%$ reduction in the critical luminosity. This is smaller than that due to the entropy perturbations calculated above. Hence, the direct injection of the turbulent kinetic energy of vorticity waves is expected to play a sub-dominant role, in agreement with the estimate of \citet{Mueller15}.
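Both estimates can be reproduced with a few lines of arithmetic (a sketch, using the parameter values quoted above):
\begin{verbatim}
import math

eta_acc, eta_heat, l, dM0 = 2.0, 0.1, 2, 0.1
base = 0.15*math.pi/(l*eta_acc*eta_heat)*dM0  # Eq. (eq:dl) without prefactor
print(1.45*base, 2.04*base)  # ~0.17-0.24: entropy waves, Eq. (eq:dlf)

dM1 = 4.53*dM0               # pre-shock turbulent Mach number after infall
print(0.6*dM1**2)            # ~0.12: direct injection, Eq. (eq:dldi)
\end{verbatim}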
As we saw above in Sections~\ref{sec:tke} and \ref{sec:density_pert}, the acoustic waves make a negligibly small contribution to the perturbations of velocity and density compared to the contributions of the vorticity and entropy modes in the post-shock region. For this reason, the acoustic waves in the post-shock region are expected to have a negligibly small effect on the explosion condition of CCSNe (see also the discussion in \cite{Mueller16}).
\section{Conclusions}
\label{sec:conclusion}
The shock-turbulence interplay plays a key role in facilitating core-collapse supernova (CCSN) explosions. In this paper, we studied how vorticity waves from nuclear shell burning affect the shock dynamics once they encounter it in the aftermath of stellar core collapse. Our study accounts for the interaction of the shock with intermediate vortical scales, i.e., those whose characteristic length is sufficiently small for the shock to be considered a planar front, yet sufficiently large for the shock to be seen as a discontinuity. The mathematical formalism is based on the solution of the linearized hydrodynamics equations in the post-shock region \citep{Wouchuk2009,Huete2017}, which captures the full time evolution of the shock-vorticity system at linear order. In particular, this allowed us to take into account the perturbation of the nuclear dissociation itself, which was not included previously \citep{Abdikamalov16}. We demonstrated that this effect plays an important role in shock-turbulence interaction.
When a vorticity wave encounters a shock, it deforms and generates a post-shock field of vorticity, entropy, and acoustic waves. We have analyzed the properties of these fluctuations for a wide range of the parameters of the incoming vorticity waves and mean flow (Sections~\ref{sec:analysis_wave}-\ref{sec:analysis_field}). We have found that, within the limits of validity of the model, the density perturbations in the post-shock region are dominantly of entropic nature, while the contribution of the acoustic waves is negligibly small.
We show that the entropy perturbations in the post-shock region are the dominant factor in generating turbulence in the post-shock flow through the work done by buoyancy forces (Section~\ref{sec:discussion}). For typical parameters, the amplitude of the density perturbations is about $1.45-2.04$ times the turbulent Mach number in the Si/O shell. Following the method proposed by \cite{Mueller16}, we show that this results in a $17-24\%$ reduction in the critical luminosity for producing an explosion (cf. Section~\ref{sec:discussion}). This approximately agrees with the results of recent 3D neutrino-hydrodynamics simulations \citep{Mueller17}.
This paper is the first in a series of two papers that aims at establishing the linear physics of the interaction of shocks with turbulent flow. In the second paper, we will study the effect of the other perturbation modes that originate from convective shells. Also, the interaction of pre-collapse perturbations with the hydrodynamic instabilities in the post-shock region has to be treated in a more rigorous way, as in, e.g., \citet{Takahashi16}. This will be the subject of future studies.
\section*{Acknowledgements}
We thank B. M\"uller for carefully reading the manuscript and for many valuable comments that significantly improved the manuscript. We also thank T. Foglizzo, M. Hempel and A. L. Velikovich for useful discussions. This work is supported by the Ministry of Science, MEC (ENE2015-65852-C2-2-R) and Fundaci\'on Iberdrola (BINV-ua37crdy), Spain (for C. Huete), by ORAU grant at Nazarbayev University (for E. Abdikamalov), by Max-Planck/Princeton Center (MPPC) for Plasma Physics (NSF PHY-1144374) and a Schmidt Fellowship (for D. Radice). The computations were performed at the NULITS Linux cluster at Nazarbayev University. We thank S. Bubin for his contribution to set up the cluster.
\section{Introduction}
The advent of the LHC era has led to renewed interest in extensions of the
Standard Model which have a strongly-coupled symmetry-breaking (Higgs) sector.
The most promising of these theories are the so-called Technicolor theories
\cite{Weinberg:1979bn,Susskind:1978ms}. These are QCD-like theories with
massless fermions (techni-quarks) in which the Goldstone bosons of
spontaneously-broken chiral symmetry (techni-pions) play the r\^{o}le of the
Higgs field, giving masses to the $W$ and $Z$. Such models need to be extended
in order to also give masses to the quarks and leptons. Phenomenological
difficulties with such models, such as flavour-changing neutral currents which
are too large, can be avoided if the fermion content of a candidate (extended)
Technicolor theory is such that the running gauge coupling constant evolves
very slowly. Such theories are referred to as Walking Technicolor models
\cite{Holdom:1981rm,Yamawaki:1985zg,Akiba:1985rr,Appelquist:1986an}.
Let us consider a Yang-Mills gauge theory with $N_f$ fermions in a specified
(not-too-large) representation of the ``colour'' group. The evolution of the
coupling constant $g$ in such a theory is described by the Callan-Symanzik
beta function $\beta(g)$ defined by
\begin{equation}
\beta(g) = \mu{\partial g \over \partial \mu} =
- \beta_0{g^3 \over (4\pi)^2} - \beta_1{g^5 \over (4\pi)^4} - \cdots
\end{equation}
where $\mu$ is the momentum scale at which the running coupling constant
$g(\mu)$ is defined. $\beta_0$,$\beta_1$,... are given by perturbation theory.
For $N_f$ sufficiently small, $\beta_0$ and $\beta_1$ are both positive,
the theory is asymptotically free and confining, and chiral symmetry is
spontaneously broken. There exists some value of $N_f$ above which $\beta_0$
(and $\beta_1$) are negative and asymptotic freedom is lost. Between these
two regimes is a range of $N_f$ over which $\beta_0$ is positive and $\beta_1$
is negative. In this range the theory remains asymptotically free, but if
this 2-loop beta function describes the physics, it has a second zero which
is an infrared (IR) fixed point and the theory is a conformal (unparticle)
field theory. There is, however, another possibility. If the coupling becomes
strong enough that a chiral condensate forms before this would-be IR fixed
point can be reached, the fermions will become less effective at screening
technicolor. $\beta$ will then start to decrease again, and the theory will be
confining in the infrared. There will, however, be a range of $\mu$ over which
$\beta$ is small and $g$ evolves only slowly, i.e. it walks. Since the
formation of a chiral condensate which spontaneously breaks chiral symmetry is
a non-perturbative process, the boundary between conformal and walking
behaviour cannot be determined perturbatively. Lattice gauge theory simulations
can enable one to decide between these two options for a theory with a
specified gauge group, fermion technicolor representation and $N_f$.
For $SU(N)$ gauge theories, the most promising candidates for walking behaviour
have fermions in the fundamental, adjoint, symmetric 2-index tensor or
antisymmetric 2-index tensor representations of the gauge group. A good summary
of what is known is given in reference \cite{Dietrich:2006cm}. Estimates of
the value of $N_f$ below which a gauge theory walks and above which it is
conformal have been made using various methods, none of which can be
guaranteed to capture the full non-perturbative behaviour of QCD-like theories
\cite{Appelquist:1988yc,Sannino:2004qp,Poppitz:2009uq,Armoni:2009jn,
Ryttov:2007cx,Antipin:2009wr}. This has led people to use lattice gauge theory
simulations to study this boundary. There have been a number of simulations of
QCD with $N_f$ fundamental quarks, with $N_f$ large enough that conformal or
walking behaviour might be expected
\cite{Kogut:1985pp,Fukugita:1987mb,Ohta:1991zi,Kim:1992pk,Brown:1992fz,
Iwasaki:1991mr,Iwasaki:2003de,Deuzeman:2008sc,Deuzeman:2009mh,
Appelquist:2009ty,Appelquist:2007hu,Jin:2008rc,Jin:2009mc,Fodor:2009wk,
Fodor:2009ff,Yamada:2009nt}. While progress has been made, the boundary value
for $N_f$ is still uncertain. Some simulations have been made of $SU(2)$ gauge
theory with 2 Dirac flavours of adjoint fermions
\cite{Catterall:2007yx,Catterall:2008qk,Catterall:2009sb,DelDebbio:2008zf,
DelDebbio:2009fd,Bursa:2009we,Hietanen:2008mr,Hietanen:2009az}. While early
indications are that this is a conformal field theory, there is still a great
deal of uncertainty.
For QCD with colour-sextet (symmetric tensor) quarks, $\beta_1$ changes sign
at $N_f=1\frac{28}{125}$, and asymptotic freedom is lost at
$N_f=3\frac{3}{10}$. Hence only $N_f=2$ and $N_f=3$ lie in the domain of
interest. $N_f=3$ is just below the value where asymptotic freedom is lost and
is thus expected to be conformal. This leaves $N_f=2$ as the most likely
candidate for walking behaviour. DeGrand, Shamir and Svetitsky have simulated
lattice QCD with 2 flavours of colour-sextet Wilson quarks
\cite{Shamir:2008pb,DeGrand:2008kx,DeGrand:2009hu}. Their initial results
suggest that this is a conformal field theory. Fodor et al. have performed
some preliminary simulations of lattice QCD with 2 sextet quarks using
domain-wall quarks \cite{Fodor:2008hm}.
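The $N_f$ thresholds quoted above follow directly from the standard two-loop coefficients $\beta_0=\frac{11}{3}C_A-\frac{4}{3}T_RN_f$ and $\beta_1=\frac{34}{3}C_A^2-\left(\frac{20}{3}C_A+4C_F\right)T_RN_f$, with $C_A=3$, $T_R=5/2$ and $C_F=10/3$ for the colour-sextet representation of $SU(3)$; a short exact-arithmetic cross-check (an illustrative snippet):
\begin{verbatim}
from fractions import Fraction as F

CA, TR, CF = F(3), F(5, 2), F(10, 3)  # SU(3), colour-sextet fermions
beta0 = lambda nf: F(11, 3)*CA - F(4, 3)*TR*nf
beta1 = lambda nf: F(34, 3)*CA**2 - (F(20, 3)*CA + 4*CF)*TR*nf

print(F(11, 3)*CA/(F(4, 3)*TR))                  # 33/10: beta_0 changes sign
print(F(34, 3)*CA**2/((F(20, 3)*CA + 4*CF)*TR))  # 153/125 = 1 + 28/125
print(beta0(2) > 0, beta1(2) < 0)                # True True for N_f = 2
\end{verbatim}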
We are performing simulations of lattice QCD with 2 colour-sextet staggered
quarks. Staggered quarks have the advantage over Wilson quarks in having a
simple chiral order parameter, and no parameter tuning is needed to find the
chiral limit. We are performing simulations at finite temperatures, where the
deconfinement and chiral-symmetry restoration temperatures can be used as
measures of the scales of confinement and chiral-symmetry breaking
respectively. By varying the number of time slices $N_t$ we can determine
whether these transitions scale as finite temperature transitions or whether
they are bulk transitions. Preliminary results of these simulations were
presented at Lattice 2009 \cite{Sinclair:2009ec}.
Our simulations indicate that, whereas for fundamental quarks the deconfinement
and chiral-symmetry restoration transitions appear coincident, for
colour-sextet quarks these transitions are well separated. Chiral-symmetry
restoration occurs at a much weaker coupling than deconfinement. This differs
from what is seen with Wilson quarks by DeGrand, Shamir and Svetitsky
\cite{DeGrand:2008kx} where the two transitions appear coincident. Such a
separation has been reported in earlier simulations with colour-adjoint quarks
\cite{Karsch:1998qj,Engels:2005te}, and in $SU(2)$ gauge theory with
colour-adjoint quarks \cite{Kogut:1985xa}. Both transitions move to
significantly weaker couplings when $N_t$ is increased from $4$ to $6$, which
is what would be expected for finite temperature transitions governed by
asymptotic freedom. This in turn favours the walking scenario.
In the deconfined region, just above the deconfinement transition, we find 3
states, where the Wilson Line (Polyakov Loop) is oriented in the directions of
the 3 cube roots of unity, similar to what occurs for quenched QCD or QCD with
adjoint quarks. For $N_t=4$ only the state with a real positive Polyakov Loop
appears stable. The other two states, while long-lived, appear to be only
metastable. For $N_t=6$, all 3 states appear to be stable. Between the
deconfinement and chiral-symmetry restoration transitions there is another
transition where the 2 states with complex Polyakov Loops disorder into a
state with a real, negative Polyakov Loop. Machtey and Svetitsky have argued
that such additional states where the Polyakov Loop has arguments $\pm 2\pi/3$
and $\pi$ are to be expected, and have presented evidence for their existence
and metastability in simulations using Wilson quarks \cite{Machtey:2009wu}.
In section~2 we discuss our simulations. Our results are described in section~3.
Section~4 presents discussions and conclusions and identifies directions for
ongoing and future investigation.
\section{Lattice simulations with sextet quarks}
For our simulations we use the simple Wilson gauge action
\begin{equation}
S_g=\beta \sum_\Box \left[1-\frac{1}{3}{\rm Re}({\rm Tr}UUUU)\right].
\end{equation}
The fermion action is based on the unimproved staggered-quark action written
formally as
\begin{equation}
S_f=\sum_{sites}\left[\sum_{f=1}^{N_f/4}\psi_f^\dagger[D\!\!\!\!/+m]\psi_f
\right]
\end{equation}
where $D\!\!\!\!/ = \sum_\mu \eta_\mu D_\mu$ with
\begin{equation}
D_\mu \psi(x) = \frac{1}{2}[U^{(6)}_\mu(x)\psi(x+\hat{\mu})-
U^{(6)\dagger}_\mu(x-\hat{\mu})\psi(x-\hat{\mu})].
\end{equation}
To allow tuning the number of flavours to values of $N_f$ which are not
multiples of 4, and to use a positive-definite operator for the transition to
pseudofermions, this is replaced with
\begin{equation}
S_f=\sum_{sites}\chi^\dagger\{[D\!\!\!\!/+m][-D\!\!\!\!/+m]\}^{N_f/8}\chi.
\end{equation}
We use the RHMC algorithm for our simulations in which the fractional powers
of the positive-definite Dirac operator are approximated to any desired accuracy
by a rational approximation, and each trajectory is subjected to a global
Metropolis accept/reject step, thus removing all dependence on the updating
increment.
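The operator in the pseudofermion action is manifestly positive definite: since the massless staggered $D\!\!\!\!/$ is antihermitian, $[D\!\!\!\!/+m][-D\!\!\!\!/+m]=m^2-D\!\!\!\!/^2=m^2+D\!\!\!\!/^{\,\dagger}D\!\!\!\!/$, whose spectrum is bounded below by $m^2$. A toy numerical illustration of this (not our production code), with a random antihermitian matrix standing in for $D\!\!\!\!/$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 48
A = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
D = 0.5*(A - A.conj().T)          # antihermitian stand-in for D-slash
m = 0.01
M = m**2*np.eye(n) - D @ D        # [D+m][-D+m] = m^2 - D^2
eig = np.linalg.eigvalsh(M)       # M is hermitian
print(eig.min() >= m**2 - 1e-12)  # True: spectrum bounded below by m^2
\end{verbatim}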
We perform simulations on $8^3 \times 4$, $12^3 \times 4$ and
$12^3 \times 6$ lattices, over a range of values of $\beta=6/g^2$ which covers
all transitions. For each lattice size we perform runs at quark mass $m=0.005$,
$m=0.01$ and $m=0.02$ in lattice units, to allow us to access the chiral limit.
Away from the transitions our run lengths are typically 10,000 length-1
trajectories for a given $m$ and $\beta$. Near deconfinement transitions run
lengths of 50,000 to 200,000 trajectories are used for each $\beta$ and $m$.
Some runs of 50,000 trajectories have also been used near the transitions from
negative Polyakov Loop states to complex Polyakov Loop states.
We create deconfined states with positive Polyakov Loops by starting
a run at $\beta=7.0$ (weak coupling) from a completely ordered state (all
$U$s equal to the identity matrix). The configurations from these runs are
used to start runs at progressively smaller $\beta$s (and masses). The states
with negative Polyakov Loops are obtained by starting a run at $\beta=7.0$
with all $U$s equal to the identity matrix, except for the timelike $U$s on
a single time slice, which are set to the matrix ${\rm diag}(1,-1,-1)$.
The triplet Wilson Line (Polyakov Loop) is used to identify the position
of the deconfinement transition. The chiral-symmetry restoring phase transition
occurs at that $\beta$ above which the chiral condensate
($\langle\bar{\psi}\psi\rangle$) vanishes in the chiral ($m \rightarrow 0$)
limit. Since the chiral extrapolation is difficult to perform with the masses
we use, we estimate the position of the chiral transition from the positions
of the peaks in the chiral susceptibilities $\chi_{\bar{\psi}\psi}$ as
functions of mass.
\begin{equation}
\chi_{\bar{\psi}\psi} = V\left[\langle(\bar{\psi}\psi)^2\rangle
- \langle\bar{\psi}\psi\rangle^2\right]
\label{eqn:chi}
\end{equation}
where the $\bar{\psi}\psi$s in this formula are lattice averaged quantities
and $V$ is the space-time volume of the lattice. Since we use stochastic
estimators for $\bar{\psi}\psi$, we obtain an unbiased estimator for this
quantity by using several independent estimates for each configuration (5, in
fact). Our estimate of $(\bar{\psi}\psi)^2$ is then given by the average of
the (10) estimates which are `off diagonal' in the noise.
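A minimal sketch of this unbiased estimator (an illustrative analysis snippet, not our production code): with $n$ independent noise estimates per configuration, the average of the off-diagonal products $e_ie_j$ ($i\neq j$) is an unbiased estimate of $(\bar{\psi}\psi)^2$; here $n=5$ gives the 10 unordered pairs mentioned above.
\begin{verbatim}
import numpy as np

def chiral_susceptibility(est, volume):
    # est[c, i]: i-th stochastic estimate of pbp on configuration c
    n = est.shape[1]
    s, s2 = est.sum(axis=1), (est**2).sum(axis=1)
    pbp2 = (s**2 - s2)/(n*(n - 1))  # mean of off-diagonal products
    pbp = est.mean(axis=1)
    return volume*(pbp2.mean() - pbp.mean()**2)
\end{verbatim}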
\section{Results from simulations}
\subsection{$N_t=4$}
We perform simulations at a selection of $\beta$ values in the range
$5.0 \le \beta \le 7.0$ on $8^3 \times 4$ and $12^3 \times 4$ lattices.
For each of the 3 chosen masses ($0.005$, $0.01$, $0.02$) the values of the
Wilson Line and chiral condensate on the $8^3 \times 4$ and $12^3 \times 4$
lattices are so close that we conclude that finite size effects are negligible.
Figure~\ref{fig:wil-psi_12x4} shows the Wilson Line and chiral condensates
\begin{figure}[hb]
\epsfxsize=6in
\epsffile{wil-psi_12x4.ps}
\caption{Wilson line (Polyakov Loop) and $\langle\bar{\psi}\psi\rangle$
as functions of $\beta$ on a $12^3 \times 4$ lattice.}
\label{fig:wil-psi_12x4}
\end{figure}
as functions of $\beta$ for each of the 3 masses ($0.005$, $0.01$, $0.02$) on
a $12^3 \times 4$ lattice. In the deconfined phase, we have included only
those states where the Wilson Line is real and positive, noting that runs
which start in a state with a positive Wilson Line remain in this state for
the duration of the run. We have not included the results for the $8^3 \times
4$ lattice, since at the resolution of figure~\ref{fig:wil-psi_12x4}
the points for the 2 lattice sizes would be virtually indistinguishable.
It is clear that the Wilson Line is very small below $\beta \approx 5.4$,
and rises rapidly shortly after this value for all 3 quark masses. This is
taken as a signal for the deconfinement transition. It is also clear that
chiral symmetry remains broken well beyond this point. Thus the deconfinement
and chiral-symmetry restoration transitions are far apart, unlike what is
observed for Wilson quarks, where they appear to be coincident
\cite{DeGrand:2008kx}.
\begin{figure}[htb]
\epsfxsize=5.0in
\epsffile{b542m02_wilson_hist.ps}
\caption{Histogram of the Wilson Line at $\beta=5.42$, $m=0.02$ on a
$12^3 \times 4$ lattice.}
\label{fig:wilhist-5.42}
\end{figure}
Figure~\ref{fig:wilhist-5.42} shows a histogram of the Wilson Line at
$\beta=5.42$, $m=0.02$, showing a clear two-state signal, suggesting that
this $\beta$ is very close to the transition. The separation of the two states
suggests that this transition is a first-order phase transition. We conclude
that at $m=0.02$ the deconfinement transition is at $\beta=\beta_d=5.420(5)$.
For $m=0.01$, two-state signals are seen at $\beta=5.411$ and $\beta=5.412$
leading to an estimate $\beta_d=5.4115(5)$. Finally we note that for $m=0.005$,
$\beta=5.4$ appears to lie below the transition while $\beta=5.41$ appears to
be above the transition leading to an estimate $\beta_d=5.405(5)$. Thus the
mass dependence of the deconfinement $\beta$, $\beta_d$, is very weak.
Now let us turn to the chiral transition. Because this is only expected
to be a phase transition at $m=0$, and the crossover becomes smoother as the
quark mass is increased, it is difficult to determine its position directly
from the chiral condensate at the masses we use. This is made more difficult
since it is clear from the measured condensates as functions of mass that the
mass dependence is far from linear, making extrapolation to $m=0$
difficult, if not impossible. We therefore use the peak in the chiral
susceptibility defined in equation~\ref{eqn:chi} as an estimate of the position
of the crossover for finite mass. This is shown in figure~\ref{fig:chi4} for
the two smallest masses.
\begin{figure}[htb]
\epsfxsize=5.0in
\epsffile{chipbp_12x4.ps}
\caption{Chiral susceptibilities $\chi_{\bar{\psi}\psi}$ as functions of $\beta$
on a $12^3 \times 4$ lattice for $m=0.005,0.01$, for a $\beta$ range which
includes the chiral transition.}
\label{fig:chi4}
\end{figure}
From these graphs we estimate that the transition occurs at $\beta=6.30(5)$
for both masses. We thus estimate that the phase transition at $m=0$ occurs
at $\beta=\beta_\chi=6.3(1)$. Note that the spacing of the $\beta$s in this
range is too large to allow us to use Ferrenberg-Swendsen reweighting to
determine the positions of these transitions with more resolution. (The
distributions of values of the plaquette action for adjacent $\beta$s do not
overlap in this region.)
\begin{figure}[htb]
\epsffile{b545m02a_argwl_12x4.ps}
\caption{Time evolution of the argument of the Wilson Line (Polyakov Loop)
at $m=0.02$, $\beta=5.45$ showing a decay of a state with a complex Wilson
Line (argument close to $2\pi/3$) to a state with a real positive Wilson
Line (argument close to $0$).}
\label{fig:meta}
\end{figure}
At each of our two larger masses, we have performed a series of runs starting
from $\beta=7.0$ with a negative Wilson Line. From $\beta=7.0$ down to
$\beta=6.0$, the Wilson Lines remain negative over the length of the 10,000
trajectory run for each $(\beta,m)$. By $\beta=5.8$, these states have
transitioned to states in which the Wilson Line is oriented in the direction of
one of the complex cube roots of unity. Hence we deduce that there is a
transition at $\beta \approx 5.9$. More discussion of this transition is to be
found in the $N_t=6$ subsection. On the $12^3 \times 4$ lattice with $m=0.02$
these states with complex Wilson Lines persist down to $\beta=5.48$, and
appear stable over at least 50,000 trajectories. For $m=0.01$ these persist
down to $\beta=5.46$. For $\beta \le 5.46$ at $m=0.02$ or $\beta \le 5.45$ at
$m=0.01$ but above the deconfinement transition, these states with complex
Wilson Lines appear to be metastable and eventually decay into states with
positive Wilson Lines. Figure~\ref{fig:meta} shows an example of such
metastability in our $12^3 \times 4$ simulations. As is to be expected, this
metastability starts at larger $\beta$ values on an $8^3 \times 4$ lattice. We
have observed no cases where configurations with positive Wilson Lines evolve
to configurations with complex Wilson Lines for $\beta$ values above the
deconfinement transition.
\subsection{$N_t=6$}
We perform simulations on a $12^3 \times 6$ lattice at quark masses $m=0.005$,
$m=0.01$ and $m=0.02$ for values of $\beta=6/g^2$ in the range
$5.0 \le \beta \le 7.2$. We perform two series of runs starting at $\beta=7.0$,
for $m=0.01,0.02$. The first starts with the Wilson Line real and positive, and
the second with the Wilson Line negative. For the lowest quark mass $m=0.005$
we only perform one set of runs starting with a positive Wilson Line at
$\beta=7.0$.
\begin{figure}[htb]
\epsffile{b558m02_wlc_12x6.ps}
\epsffile{b558m02_argwl_12x6.ps}
\caption{a) Scatterplot of Wilson Lines for $m=0.02$ and $\beta=5.58$ on a
$12^3 \times 6$ lattice showing the 3-state signal. \newline
b) `Time' evolution of the argument(phase) of the Wilson Lines for one of the
2 runs included in part (a).}
\label{fig:Z3}
\end{figure}
Above the deconfinement transition we again see a 3-state signal where the
Wilson Lines orient themselves in the directions of one of the 3 cube roots of
unity. A scatterplot showing such a 3-state signal is shown in
figure~\ref{fig:Z3}a. Unlike the $N_t=4$ case, there is no sign of
metastability and transitions between all 3 states are seen over the duration
of our runs, up to $\beta$ values far enough above the deconfinement transition
that the mean relaxation time between tunnelings exceeds the lengths of our
runs (50,000 trajectories). Figure~\ref{fig:Z3}b shows an example of such
tunnelings. Thus to make meaningful measurements of the Wilson Line, we need
to separate these 3 states. This we do by binning the Wilson Lines measured at
the end of each trajectory according to their phase $\phi$ into bins
$-\pi < \phi < -\pi/3$, $-\pi/3 < \phi < \pi/3$, $\pi/3 < \phi < \pi$. To
increase our statistics, we use symmetry to include the complex conjugates of
the Wilson Lines in the first of these bins, with the Wilson Lines in the last
of these bins. Other observables are binned according to the values of the
corresponding Wilson Lines.
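A sketch of this binning procedure (an illustrative snippet):
\begin{verbatim}
import numpy as np

def z3_bins(wl):               # wl: complex Wilson Lines, one per trajectory
    phi = np.angle(wl)         # phase in (-pi, pi]
    positive = wl[np.abs(phi) < np.pi/3]
    upper = wl[phi >= np.pi/3]
    lower = np.conj(wl[phi <= -np.pi/3])  # fold onto upper bin by conjugation
    return positive, np.concatenate([upper, lower])
\end{verbatim}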
\begin{figure}[htb]
\epsfxsize=4.0in
\epsffile{rwil-psi_12x6.ps}
\epsfxsize=4.0in
\epsffile{cwil-psi_12x6.ps}
\caption{a) Wilson Line and chiral condensate for the state with a real positive
Wilson Line as functions of $\beta$ for each of the 3 masses on a
$12^3 \times 6$ lattice. \newline
b) Magnitude of the Wilson Line and chiral condensate for the states
with a complex or a real negative Wilson Line as functions of $\beta$ for each
of the 3 masses on a $12^3 \times 6$ lattice.}
\label{fig:wil-psi_4x6}
\end{figure}
Figure~\ref{fig:wil-psi_4x6}a shows the Wilson Lines (Polyakov Loops) and
chiral condensates as functions of $\beta$ for all 3 masses from our simulations
on a $12^3 \times 6$ lattice for the states with positive Wilson Lines
($-\pi/3 < \phi < \pi/3$). Figure~\ref{fig:wil-psi_4x6}b shows the magnitudes
of the average Wilson Lines and the chiral condensates for the states with
complex or negative Wilson Lines. The deconfinement transition manifests itself
as a rapid increase in the Wilson Line, which is clearly seen at a $\beta$ just
above $5.5$. The fact that the $Z_3$ centre symmetry is broken manifests
itself in the difference between the magnitudes of the Wilson Lines in the
positive Wilson Line and complex Wilson Line states. The rapid change in the
magnitude of the complex/negative Wilson line between $\beta=6.2$ and
$\beta=6.6$ marks the transition between a state whose Wilson Line is oriented
in the direction of one of the complex cube roots of unity and one where the
Wilson Line is real and negative. The chiral transition, above which
$\langle\bar{\psi}\psi\rangle$ vanishes in the chiral ($m \rightarrow 0$)
limit, clearly occurs at a $\beta$ appreciably larger than $\beta_d$.
\begin{figure}[htb]
\epsfxsize=3.2in
\epsffile{ABSwilson_hist005_12x6.ps}
\epsfxsize=3.2in
\epsffile{ABSwilson_hist01_12x6.ps}
\epsfxsize=3.2in
\epsffile{ABSwilson_hist02_12x6.ps}
\caption{Histograms of the magnitudes of Wilson Lines for $\beta$ values
bracketing the deconfinement transition on a $12^3 \times 6$ lattice for
a) $m=0.005$, b) $m=0.01$, c) $m=0.02$.}
\label{fig:wilhist}
\end{figure}
To determine the position of the deconfinement transition we examine histograms
of the magnitudes of the Wilson Line close to the transition for each mass.
Although the explicit breaking of the $Z_3$ centre symmetry means that the
magnitudes of the positive and complex Wilson Lines are not identical, they
are close enough in the vicinity of the deconfinement transition that this
fact can be ignored. Such histograms are presented in figure~\ref{fig:wilhist}.
For the lower 2 masses the histogram for each beta represents 50,000
trajectories. At $m=0.02$ the histograms for $\beta=5.55$ and $\beta=5.57$
represent 100,000 trajectories each, while that at $\beta=5.56$ is for 200,000
trajectories. In each case the histogram peaks at a low value below the
transition and an appreciably higher value above the transition. Very close to
the transition the peak of the histogram is relatively flat, with some hint of
a double peaked structure. From these graphs we estimate that the transition
$\beta$s ($\beta_d$) are $\beta_d(m=0.005)=5.545(5)$,
$\beta_d(m=0.01)=5.550(5)$ and $\beta_d(m=0.02)=5.560(5)$, respectively. As in
the $N_t=4$ case, the mass dependence is weak.
\begin{figure}[htb]
\epsfxsize=5.0in
\epsffile{chipbp_12x6.ps}
\caption{Chiral susceptibilities $\chi_{\bar{\psi}\psi}$ as functions of $\beta$
on a $12^3 \times 6$ lattice for $m=0.005,0.01$, for a $\beta$ range which
includes the chiral transition.}
\label{fig:chi6}
\end{figure}
Again we estimate the position of the chiral-symmetry restoration transition by
examining the peaks in the chiral susceptibilities for $m=0.005,0.01$. As for
$N_t=4$ we obtain estimates of the chiral condensate from 5 independent noise
vectors, which yields an unbiased estimate for the chiral susceptibility. These
chiral susceptibilities are plotted versus $\beta$ in figure~\ref{fig:chi6} for
each of these two lowest masses. Again, the $\beta$s we use in this region are
too sparse to allow the use of Ferrenberg-Swendsen reweighting to interpolate
between them. Since the peak of the susceptibility plot for each mass is at
$\beta=6.60(5)$, our estimate for the position of the chiral transition at
$m=0$ is $\beta=\beta_\chi=6.6(1)$. Note that this estimate is
for the states with positive real Wilson Lines only.
\begin{figure}[htb]
\epsffile{arg_wl_12x6m02.b64m02a.ps}
\epsffile{arg_wl_12x6m02.b66m02a.ps}
\caption{The `time' evolution of the argument(phase) of the Wilson Line for
$m=0.02$ on a $12^3 \times 6$ lattice, a) for $\beta=6.4$ and b) for
$\beta=6.6$. The horizontal lines are at $2\pi/3$, $\pi$ and $4\pi/3$.}
\label{fig:neg.to.complex}
\end{figure}
Starting at $\beta=7.0$, $m=0.02$ with a real negative Wilson Line, we find
that this state is stable for runs of up to 50,000 trajectories for $\beta$s
down to $\beta=6.6$. Decreasing this to $\beta=6.4$, we find that the system
has transitioned to a state with a Wilson Line oriented in the direction of one
of the complex cube roots of unity. Figure~\ref{fig:neg.to.complex} shows
the time evolution of the phases of the Wilson Lines at $\beta=6.4$ and
$\beta=6.6$ for $m=0.02$. We see that the way the transition proceeds is that
the fluctuations in the phase become large as $\beta$ approaches the transition
from above until eventually they reach $2\pi/3$ and $4\pi/3$($-2\pi/3$). When
this occurs the system spends appreciably more time at these two values than at
intermediate values. We can then consider the system as being in a state with
its Wilson Line oriented in the direction of one of the two complex cube roots
of unity. On the finite lattice it tunnels between these 2 states. Approaching
this transition from below we could describe it as the disordering of the 2
states with Wilson Lines in the directions of the complex cube roots of unity,
as suggested by Machtey and Svetitsky. The position of this transition for
$m=0.02$ is at $\beta \approx 6.5$, while for $m=0.01$ it occurs at $\beta
\approx 6.4$.
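(The assignment of each trajectory to a $Z_3$ orientation, implicit in the above
description, can be sketched as follows; \texttt{phase} stands for a hypothetical
NumPy array of measured Wilson Line phases in $[0,2\pi)$.)
\begin{verbatim}
import numpy as np

def sector_fractions(phase):
    # Assign each measurement to the nearest of the reference phases
    # 0, 2*pi/3, pi, 4*pi/3 (distances taken on the circle) and return
    # the fraction of 'time' spent in each sector.
    refs = np.array([0.0, 2 * np.pi / 3, np.pi, 4 * np.pi / 3])
    d = np.abs(phase[:, None] - refs[None, :])
    d = np.minimum(d, 2 * np.pi - d)
    return np.bincount(d.argmin(axis=1), minlength=4) / len(phase)
\end{verbatim}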
\section{Discussion and conclusions}
We simulated QCD with two flavours of staggered quarks at finite temperatures
using the RHMC method. Our simulations were performed on $8^3 \times 4$,
$12^3 \times 4$ and $12^3 \times 6$ lattices with 3 quark masses, to allow
for chiral extrapolations. We find widely separated deconfinement and
chiral-symmetry restoration transitions. Both the deconfinement and chiral
transitions move to significantly lower couplings as $N_t$ is increased from
4 to 6, which is the expected behaviour for finite temperature transitions in
an asymptotically free theory. This suggests that the theory is confining
with spontaneously-broken chiral symmetry, while being under the control of
the weak-coupling asymptotically-free ultraviolet fixed point, i.e. that it
walks.
The deconfinement transition occurs at $\beta=\beta_d$ where for $N_t=4$,
$\beta_d(m=0.02)=5.420(5)$, $\beta_d(m=0.01)=5.4115(5)$ and
$\beta_d(m=0.005)=5.405(5)$ while for $N_t=6$, $\beta_d(m=0.02)=5.560(5)$,
$\beta_d(m=0.01)=5.550(5)$ and $\beta_d(m=0.005)=5.545(5)$. The chiral-symmetry
restoration transition occurs at $\beta=\beta_\chi$, where $\beta_\chi=6.3(1)$
at $N_t=4$ and $\beta_\chi=6.6(1)$ at $N_t=6$.
The large separation of the deconfinement and chiral transitions indicates that
the enhanced attraction between quark-antiquark pairs over that for fundamental
quarks due to the larger quadratic Casimir operator for the sextet
representation ($10/3$ versus $4/3$), causes a chiral condensate to form at a
distance much shorter than the scale of confinement. This effectively removes
the quarks from consideration at longer distances where the theory will behave
more like a pure glue (quenched) theory. This is presumably why, in the
deconfined phase, we see a three state system, where the Wilson Lines align in
the directions of one of the cube roots of unity, a relic of the now-broken
$Z_3$ symmetry. The breaking of $Z_3$ is seen in the difference in magnitudes
of the real positive Wilson Lines versus those with phases close to $\pm
2\pi/3$. At $N_t=4$, the states with complex Wilson Lines are only metastable,
while at $N_t=6$ all 3 states appear to be stable. The existence of states
with all 3 $Z_3$ Wilson Line orientations has been predicted by Machtey and
Svetitsky \cite{Machtey:2009wu} who observed them in their simulations with 2
flavours of colour-sextet Wilson quarks. They also observed the metastability
of the states with complex Wilson Lines. Earlier simulations by DeGrand,
Shamir and Svetitsky had reported deconfined states with Wilson Lines oriented
in the directions of the complex cube roots of unity \cite{DeGrand:2008kx}.
More work needs to be done to determine whether the chiral-symmetry restoring
transition of these complex/negative Wilson Line states is coincident with
that of the state with a positive Wilson Line. In addition we would like to
know whether this chiral transition shows the scaling properties of the
expected $O(2)/O(4)$ universality class. The fact that this transition occurs
at relatively weak coupling should help us in this endeavour.
We have also observed an additional transition. At $\beta \approx 5.9$ on an
$N_t=4$ lattice with $m=0.02$ or $m=0.01$, the states with complex Wilson Lines
disorder to a state with a negative Wilson Line. Such a transition is also
observed for $N_t=6$ at $\beta \approx 6.5$ for $m=0.02$ and
$\beta \approx 6.4$ for $m=0.01$. The existence of states with negative Wilson
Lines and of such a transition is predicted and observed by Machtey and
Svetitsky. Such a transition would be expected to be either a second-order
phase transition in the universality class of the 3-dimensional Ising model,
or a first-order phase transition. The large increase in the $\beta$ for this
transition between $N_t=4$ and $N_t=6$ makes us suspect that this is a lattice
artifact. We also note that by $N_t=6$, it is close to the chiral transition
and could well merge with it at larger $N_t$. Since the negative Wilson Line
state (phase $\pi$) comes from disordering the two states with phases $\pm
2\pi/3$ (a fact also predicted by Machtey and Svetitsky), the magnitude of the
Wilson Line above the transition is approximately half what it is below the
transition. Just below the transition, the magnitude of Wilson Line in the
complex Wilson Line states is approximately $2/3$ that of the positive Wilson
Line state, so that after the transition the magnitude of the negative Wilson
Line is approximately $1/3$ of that of the positive Wilson Line. This would
suggest that the transition might be associated with the symmetry breaking
$SU(3) \rightarrow SU(2) \times U(1)$.
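As a rough consistency check of the magnitude counting above (assuming the two
complex orientations are populated with equal weight), averaging two Wilson
Lines of magnitude $w$ and phases $\pm 2\pi/3$ gives
\begin{displaymath}
\frac{1}{2}\left(w\,e^{2\pi i/3}+w\,e^{-2\pi i/3}\right)=w\cos(2\pi/3)=-\frac{w}{2} ,
\end{displaymath}
so with $w \approx 2/3$ of the positive Wilson Line magnitude, the negative
Wilson Line comes out at $\approx 1/3$ of it, as stated.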
We need to understand why we see well-separated deconfinement and chiral
transitions with staggered fermions, whereas DeGrand, Shamir and Svetitsky
observed these transitions to be coincident with Wilson fermions. Of course,
since we are far from the weak-coupling limit, different fermion actions do
not have to have the same physics. This is especially true if we happen to be
in the strong-coupling domain, beyond the infrared fixed point of a conformal
field theory. Of course, the observation from our simulations that the coupling
at each of the transitions is decreasing as the lattice spacing is decreasing
would appear to exclude this possibility, since the $\beta$ function becomes
positive above this fixed point. However, it has recently been suggested that
there could be two non-trivial fixed points \cite{Kaplan:2009kr} in such
theories. In this case, if we are beyond the second non-trivial fixed point
(which would be an ultraviolet fixed point), the coupling would decrease at
short distances. Of course, if we are beyond the region where the ultraviolet
behaviour of the theory is controlled by asymptotic freedom, drawing any
conclusions is pure speculation.
To better understand our results, and to help determine whether the theory is
indeed walking, rather than conformal as indicated by the work of DeGrand,
Shamir and Svetitsky using Wilson fermions, we are now extending our
simulations to $N_t=8$, where finite lattice spacing effects should be
reduced, to see if the deconfinement and chiral-symmetry restoration
transitions remain consistent with being finite-temperature transitions.
We also plan to perform simulations at zero temperature, measuring
the chiral condensate, the hadron spectrum, $f_\pi$, etc., to test other
aspects of the theory which should help us determine whether this theory is
conformal or walking. In addition we will try to determine the running of a
suitably-defined renormalized coupling. Having two different spatial lattice
sizes at $N_t=4$ showed us that finite size effects are small. We need a
second spatial lattice size for $N_t=6$. We have learned that the authors of
reference \cite{Fodor:2008hm} are now starting simulations of this theory
using improved staggered fermions, which should help resolve these issues
\cite{KN}.
We are now performing simulations of lattice QCD with 3 flavours of
staggered quarks at finite temperature. Since this theory is almost certainly
conformal, it is interesting to determine whether its behaviour is
qualitatively different from that of the $N_f=2$ case, and whether our
simulations can see this conformality.
\section*{Acknowledgements}
DKS is supported in part by the U.S. Department of Energy, Division of High
Energy Physics, Contract DE-AC02-06CH11357, and in part by the
Argonne/University of Chicago Joint Theory Institute. JBK is supported in part
by NSF grant NSF PHY03-04252. These simulations were performed on the Cray XT4,
Franklin at NERSC under an ERCAP allocation, and on the Cray XT5, Kraken at
NICS under an LRAC/TRAC allocation.
DKS thanks J.~Kuti, D.~Nogradi and F.~Sannino for helpful discussions.
\section{Introduction}
\label{intro}
The Hyperspherical Harmonics (HH) basis set has been extensively used
to describe bound states and scattering processes
in $A=3,4$ systems~\cite{report}. One of the main reasons of using the
HH basis resides in its flexibility, however it suffers from a large
degeneracy, which, up to now, has prevented a systematic use with realistic
potentials for $A\ge 4$ systems.
The authors have recently proposed an approach \cite{gatto1,gatto2,gatto3} in which the
HH basis is not symmetrized. In particular, in Ref.~\cite{gatto3} the nonsymmetrized
basis has been used to describe systems up to $A=6$ particles. It was shown that the
large degeneracy of the HH basis can be tackled by noticing that
the Hamiltonian can be expressed as an algebraic
combination of sparse matrices, and the diagonalization procedure was implemented by
means of an iterative diagonalization, where only the action of the Hamiltonian on
a vector is required.
In this work we continue the study of the $A=6$ system, interacting {\it via} a
two-body Volkov potential acting in $s$-wave,
analyzing how the introduction of the Coulomb interaction breaks the original
permutation symmetry $S_6$, in the case of two or three protons.
\section{The method}
\label{sec:1}
Following Ref.~\cite{gatto1}, we introduce the Jacobi
coordinates, $\mathbf x_1,\dots,\mathbf x_N$, with $N=A-1$,
and we ``adapt'' the coordinates to the particle pair $(i,j)$
defining $\mathbf x_N=\mathbf r_i-\mathbf r_j$, where $\mathbf r_i$ and $\mathbf
r_j$ are the Cartesian coordinates of particles $i$ and $j$.
From the Jacobi coordinates, we introduce the hyperspherical coordinates, that
is, a hyper-radius $\rho$ and $3N-1$ hyperangles $\Omega_N^{ij} = (\hat x_1,
\dots, \hat x_N, \phi_2, \dots, \phi_N) $.
The HH functions with fixed angular momentum $LM$, ${\mathcal
Y}^{LM}_{[K]}(\Omega_N^{ij})$, are defined as the eigenfunctions of the grand angular
momentum. They are labelled by the grand angular quantum number $K$
and a set of $3N-1$ quantum numbers indicated by $[K]$. In Ref.~\cite{gatto3}
it was shown that the potential energy, at fixed values of the hyper-radius, can be written as
\begin{equation}
\sum_{ij} V_{ij}(\rho)=\sum_{ij}
[{\cal B}^{LM}_{ij}]^t\, V_{12}(\rho)\,{\cal B}^{LM}_{ij} \,,
\label{eq:vpot}
\end{equation}
where the potential matrix
$[V_{12}(\rho)]_{[K'][K]}= \langle{\cal Y}^{LM}_{[K']}(\Omega^{12}_N)|V(1,2)|{\cal
Y}^{LM}_{[K]}(\Omega^{12}_N)\rangle$ is a sparse matrix and the matrix
${\cal B}^{LM}_{ij}$ is given by a product of sparse matrices
corresponding to the transformation of the HH vector defined on $\Omega_N^{ij}$ to
the one defined on
$\Omega_N^{12}$. The potential energy matrix is obtained after integrating on $\rho$ using
a Laguerre basis. Furthermore, the kinetic energy is diagonal in the HH basis,
and this allows one to write the full Hamiltonian as an algebraic combination of sparse
matrices. The diagonalization of the Hamiltonian is obtained {\it via} an
iterative scheme, where only the action of the Hamiltonian on a vector is
required.
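As an illustration of this matrix-free strategy, a minimal sketch (not our
production code) could look as follows; here \texttt{T}, \texttt{V12} and the
entries of \texttt{B\_list} are hypothetical sparse matrices standing for the
diagonal kinetic term and for the factors appearing in eq.~(\ref{eq:vpot}).
\begin{verbatim}
from scipy.sparse.linalg import LinearOperator, eigsh

def hamiltonian_operator(T, V12, B_list):
    # T: (diagonal) kinetic-energy matrix; V12: potential matrix on the
    # pair (1,2); B_list: transformation matrices B_ij, all sparse (CSR).
    # Only the action H v is ever formed; H itself is never assembled.
    def matvec(v):
        w = T @ v
        for B in B_list:                  # sum over particle pairs (i,j)
            w += B.T @ (V12 @ (B @ v))    # [B_ij]^t V_12 B_ij v
        return w
    return LinearOperator(T.shape, matvec=matvec, dtype=T.dtype)

# Lowest eigenvalue via iterative (Lanczos-type) diagonalization:
# E0 = eigsh(hamiltonian_operator(T, V12, B_list), k=1, which='SA',
#            return_eigenvectors=False)
\end{verbatim}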
\section{Results}
For our numerical application, we chose a Volkov potential
\begin{equation}
V(r)=V_R \,{\rm e}^{-r^2/R^2_1} + V_A\, {\rm e}^{-r^2/R^2_2} \,,
\end{equation}
with $V_R=144.86$ MeV, $R_1=0.82$ fm, $V_A=-83.34$ MeV, and $R_2=1.6$ fm, which
only acts in $s$-wave, and with the mass such that $\hbar^2/m =
41.47~\text{MeV\,fm}^{2}$. We have calculated the binding energy for a system
of $A=6$ particles, and we repeated our calculations for the same system with
Coulomb interaction between two particles, {\it i.e.} a model of
$\,^6\text{He}$, and between three particles, {\it i.e.} $\,^6\text{Li}$.
In Table~\ref{tab} we show the results for the $L=0$ state. Without the
Coulomb interaction the symmetry
group is $S_6$ and to antisymmetrize the wave function, taking also
into account the spin and isospin degree of freedoms, the eigenvalue must belong
to the irreducible representation $[\mathbf 4\,\mathbf 2]$. We want to stress
the fact that, using the nonsymmetrized basis,
the eigenvectors belong to all of the irreducible representations of $S_6$, and
in particular that of interest.
When we add the Coulomb interaction between two particles, the symmetry is broken as
$S_6 \rightarrow S_2\otimes S_4$, and the original level, having degeneracy
9, is split into 4 sub-levels
\begin{equation}
[\mathbf 4\, \mathbf 2] \rightarrow [\mathbf 1^2]\otimes[\mathbf 3\, \mathbf
1] + [\mathbf 2]\otimes[\mathbf 4] +
[\mathbf 2]\otimes [\mathbf 3\,\mathbf 1] + [\mathbf 2]\otimes[\mathbf 2^2] \,,
\label{}
\end{equation}
where only $[\mathbf 2]\otimes[\mathbf 2^2]$ is physical, as it is the only one
that can be antisymmetrized with respect to the four neutrons using the spin degree
of freedom, and describes the ground state of $\,^6\text{He}$. When the Coulomb
interaction is extended to three particles, the symmetry breaking is
$S_6 \rightarrow S_3\otimes S_3$, and the split reads
\begin{equation}
[\mathbf 4\, \mathbf 2] \rightarrow [\mathbf 2\,\mathbf 1]\otimes[\mathbf 3] +
[\mathbf 2\,\mathbf 1]\otimes[\mathbf 2\,\mathbf 1] +
[\mathbf 3]\otimes [\mathbf 3] + [\mathbf 3]\otimes[\mathbf 2\,\mathbf 1] \,,
\label{}
\end{equation}
with the only physical state, describing $\,^6\text{Li}$, being $[\mathbf
2\,\mathbf 1]\otimes[\mathbf 2\,\mathbf 1]$.
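As a simple consistency check, the dimensions match in both cases: with
$\dim[\mathbf 4\,\mathbf 2]_{S_6}=9$, $\dim[\mathbf 4]_{S_4}=1$,
$\dim[\mathbf 3\,\mathbf 1]_{S_4}=3$, $\dim[\mathbf 2^2]_{S_4}=2$ and
$\dim[\mathbf 2\,\mathbf 1]_{S_3}=2$, the first decomposition gives
$1\cdot 3+1\cdot 1+1\cdot 3+1\cdot 2=9$ and the second
$2\cdot 1+2\cdot 2+1\cdot 1+1\cdot 2=9$.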
We have shown the power and the flexibility of the
nonsymmetrized HH approach. In particular we were able to include basis states up
to $K=22$ for a six-body system (corresponding to a basis set of
$38\,798\,760$ elements using 11 Laguerre polynomials). Furthermore,
using a symmetry-adapted Lanczos algorithm, we were able to trace the
irreducible representation of the eigenvector of interest and to select the
corresponding eigenvalue.
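A schematic version of the symmetry-adapted start vector is sketched below
(illustrative only; \texttt{apply\_perm} is a hypothetical routine applying a
permutation $g$ of the group to an HH-basis vector). Since the Hamiltonian
commutes with every permutation, Lanczos iterations started from the projected
vector remain inside the chosen irreducible representation, which is what
allows the corresponding eigenvalue to be selected.
\begin{verbatim}
import numpy as np

def project_irrep(v, group, characters, dim_irrep, apply_perm):
    # Character projector P = (d/|G|) * sum_g chi(g) U(g) applied to v.
    w = np.zeros_like(v)
    for g, chi in zip(group, characters):
        w += chi * apply_perm(g, v)
    return (dim_irrep / len(group)) * w
\end{verbatim}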
\begin{table}
\caption{Binding energies calculated with Volkov's potential in $s$-wave, for
$A=6$ particles and $L=0$, with and without the Coulomb interaction, using 11
Laguerre polynomials.}
\label{tab}
\begin{center}
\begin{tabular*}{0.9\linewidth}{@{\extracolsep{\fill}}c c c c c}
\hline
\hline
$K_{\text{max}}$ \rule{0pt}{12pt} & $N_{\text{HH}}$ & $E_0$ (MeV) &
$E_{[\,^6\text{He}]}$ (MeV) & $E_{[\,^6\text{Li}]}$ (MeV) \\
& & [\bf{4}\,\bf{2}] & $[\mathbf{2}]\otimes[\mathbf{2}^2]$ &
$[\mathbf{2\,1}] \otimes [\mathbf{2\,1}]$ \\
\hline \\
2 & 15 & 24.793 & 24.064 & 22.974 \\
4 & 120 & 28.791 & 28.016 & 26.988 \\
6 & 680 & 30.723 & 29.935 & 28.947 \\
8 & 3045 & 31.645 & 30.851 & 29.889 \\
10 & 11427 & 32.244 & 31.446 & 30.496 \\
12 & 37310 & 32.708 & 31.908 & 30.964 \\
14 & 108810 & 33.075 & 32.275 & 31.334 \\
16 & 288990 & 33.358 & 32.558 & 31.620 \\
18 & 709410 & 33.561 & 32.762 & 31.827 \\
20 & 1628328 &33.710 & 32.912 & 31.980 \\
22 & 3527160 &33.814 & 33.016 & 32.087 \\
\hline
\end{tabular*}
\end{center}
\end{table}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{ \mu}{ \mu}
\newcommand{ \lambda}{ \lambda}
\newcommand{ C_0 }{ C_0 }
\newcommand{ \varphi}{ \varphi}
\newcommand{ \kappa}{ \kappa}
\newcommand{ \varepsilon}{ \varepsilon}
\newcommand{$\spadesuit$}{$\spadesuit$}
\newcommand{$\bullet$}{$\bullet$}
\newcommand{{\mathrm{IR}}}{{\mathrm{IR}}}
\newcommand{{\mathrm{UV}}}{{\mathrm{UV}}}
\newcommand{{\mathrm{ml}}}{{\mathrm{ml}}}
\newcommand{{\mathrm{ms}}}{{\mathrm{ms}}}
\newcommand{{\mathrm{n.s.}}}{{\mathrm{n.s.}}}
\newcommand{{\mathrm{eff}}}{{\mathrm{eff}}}
\newcommand{\til}[1]{\tilde{#1}}
\def\PRL#1#2#3{{\sl Phys. Rev. Lett.} {\bf#1} (#2) #3}
\def\NPB#1#2#3{{\sl Nucl. Phys.} {\bf B#1} (#2) #3}
\def\NPBFS#1#2#3#4{{\sl Nucl. Phys.} {\bf B#2} [FS#1] (#3) #4}
\def\CMP#1#2#3{{\sl Commun. Math. Phys.} {\bf #1} (#2) #3}
\def\PRD#1#2#3{{\sl Phys. Rev.} {\bf D#1} (#2) #3}
\def\PRB#1#2#3{{\sl Phys. Rev.} {\bf B#1} (#2) #3}
\def\PLA#1#2#3{{\sl Phys. Lett.} {\bf #1A} (#2) #3}
\def\PLB#1#2#3{{\sl Phys. Lett.} {\bf #1B} (#2) #3}
\def\JMP#1#2#3{{\sl J. Math. Phys.} {\bf #1} (#2) #3}
\def\PTP#1#2#3{{\sl Prog. Theor. Phys.} {\bf #1} (#2) #3}
\def\SPTP#1#2#3{{\sl Suppl. Prog. Theor. Phys.} {\bf #1} (#2) #3}
\def\AoP#1#2#3{{\sl Ann. of Phys.} {\bf #1} (#2) #3}
\def\PNAS#1#2#3{{\sl Proc. Natl. Acad. Sci. USA} {\bf #1} (#2) #3}
\def\RMP#1#2#3{{\sl Rev. Mod. Phys.} {\bf #1} (#2) #3}
\def\PR#1#2#3{{\sl Phys. Reports} {\bf #1} (#2) #3}
\def\AoM#1#2#3{{\sl Ann. of Math.} {\bf #1} (#2) #3}
\def\UMN#1#2#3{{\sl Usp. Mat. Nauk} {\bf #1} (#2) #3}
\def\FAP#1#2#3{{\sl Funkt. Anal. Prilozheniya} {\bf #1} (#2) #3}
\def\FAaIA#1#2#3{{\sl Functional Analysis and Its Application} {\bf #1} (#2)
#3}
\def\BAMS#1#2#3{{\sl Bull. Am. Math. Soc.} {\bf #1} (#2) #3}
\def\TAMS#1#2#3{{\sl Trans. Am. Math. Soc.} {\bf #1} (#2) #3}
\def\InvM#1#2#3{{\sl Invent. Math.} {\bf #1} (#2) #3}
\def\LMP#1#2#3{{\sl Letters in Math. Phys.} {\bf #1} (#2) #3}
\def\IJMPA#1#2#3{{\sl Int. J. Mod. Phys.} {\bf A#1} (#2) #3}
\def\AdM#1#2#3{{\sl Advances in Math.} {\bf #1} (#2) #3}
\def\RMaP#1#2#3{{\sl Reports on Math. Phys.} {\bf #1} (#2) #3}
\def\IJM#1#2#3{{\sl Ill. J. Math.} {\bf #1} (#2) #3}
\def\APP#1#2#3{{\sl Acta Phys. Polon.} {\bf #1} (#2) #3}
\def\TMP#1#2#3{{\sl Theor. Mat. Phys.} {\bf #1} (#2) #3}
\def\JPA#1#2#3{{\sl J. Physics} {\bf A#1} (#2) #3}
\def\JSM#1#2#3{{\sl J. Soviet Math.} {\bf #1} (#2) #3}
\def\MPLA#1#2#3{{\sl Mod. Phys. Lett.} {\bf A#1} (#2) #3}
\def\JETP#1#2#3{{\sl Sov. Phys. JETP} {\bf #1} (#2) #3}
\def\JETPL#1#2#3{{\sl Sov. Phys. JETP Lett.} {\bf #1} (#2) #3}
\def\PHSA#1#2#3{{\sl Physica} {\bf A#1} (#2) #3}
\def\PHSD#1#2#3{{\sl Physica} {\bf D#1} (#2) #3}
\begin{titlepage}
\vspace*{-2 cm}
\noindent
\begin{flushright}
\end{flushright}
\vskip 1 cm
\begin{center}
{\Large\bf Self-Duality from New Massive Gravity Holography} \vglue 1 true cm
{U. Camara dS}$^{*}$\footnote {e-mail: [email protected]}, {C.P. Constantinidis}$^{*}$\footnote {e-mail: [email protected]}, { A.L. Alves Lima}$^{*}$\footnote {e-mail: [email protected]} and { G.M.Sotkov}$^{*}$\footnote {e-mail: [email protected], [email protected]}\\
\vspace{1 cm}
${}^*\;${\footnotesize Departamento de F\'\i sica - CCE\\
Universidade Federal de Espirito Santo\\
29075-900, Vitoria - ES, Brazil}\\
\vspace{5 cm}
\end{center}
\normalsize
\vskip 0.5cm
\begin{center}
{ {\bf ABSTRACT}}\\
\end{center}
\vspace{0.5cm}
The holographic renormalization group (RG) flows in certain self-dual two dimensional QFT's models are studied. They are constructed as holographic duals to specific New Massive 3d Gravity (NMG) models coupled to scalar matter with ``partially self-dual'' superpotentials. The standard holographic RG constructions allow us to derive the exact form of their $\beta$- functions in terms of the corresponding NMG's domain walls solutions. By imposing invariance of the free energy, the central function and of the anomalous dimensions under specific matter field's duality transformation, we have found the conditions on the superpotentials of two different NMG's models, such that their dual 2d QFT's are related by a simple strong-weak coupling transformation.
\vspace{0.5 cm}
KEYWORDS: New Massive Gravity, Holographic RG Flows, 2d phase transitions, strong-weak coupling duality
\end{titlepage}
\tableofcontents
\setcounter{equation}{0}
\section{Introduction}
In the absence of small parameters, the concepts and methods of the strong-weak coupling duality \cite{seiberg,seibit} are known to be the main tool for the description of relevant physical phenomena, and for the derivation of non-perturbative strong coupling results.
In all the known examples of self-dual (supersymmetric) QFT$_d$'s (with $d=2,3,4$), this duality is realized as an inversion transformation or, more generally, as fractional linear transformations of the couplings belonging to certain discrete subgroups of SL(2,C),
which leave invariant the corresponding partition functions \cite{seiberg,seibit,haro,cardy-itz}. The gauge/gravity duality \cite{malda,witt,maldadu,ooug} on the other hand, together with the holographic Renormalization Group (RG) \cite{VVB,rg}, establish an equivalence relation between certain limits of the (semi-)classical $d$-dimensional gravity models and the strong-coupling regime of $(d-1)$-dimensional gauge theories. According to the off-critical holographic RG version of the AdS/CFT correspondence, the QFT$_{d-1}$'s dual to certain asymptotically AdS$_d$ geometries of domain wall (DW's) type \cite{rg} may involve --- together with the original gauge strong coupling --- a few other relevant or/and marginal couplings. These models can be also realized as certain conformal perturbations around a given CFT, thus defining non-conformal theories called pCFT's \cite{x, cardy}. We are interested in the specific holographic features of such dual, non-conformal QFT's, in the case when they belong to the family of the self-dual theories w.r.t. one (or a few) of these couplings. More precisely, we shall address the question of how can one derive the ``holographic gravitational'' counterparts of certain duality symmetries, such as the above mentioned inversions and fractional linear transformations. We consider the 3-dimensional New Massive Gravity (NMG) model \cite{1}, coupled to scalar self-interacting matter, and will look for the specific restrictions to be imposed on the form of the matter superpotential in order to ensure the strong-weak coupling self-duality of the corresponding two-dimensional pCFT$_2$, constructed by the methods of the NMG holography \cite{sinha,nmg,oldholo,iran}.
The recent progress in the understanding of the 't Hooft limits ($N, \, k \rightarrow \infty$ but finite $\frac{N}{N+k}$) of certain cosets of SU$(N)_k$ WZW models (as for example the $W_N$ minimal models) as an appropriate higher spin extension of the 3d Einstein gravity \cite{gaber, gab, minmod} has renewed the interest in the identification of appropriate limits of the most famous family of CFT$_2$'s --- the BPZ and the Liouville minimal models \cite{bpz} --- as holographic duals of certain extended 3d gravity models \cite{sinha, nmg, oldholo, iran}. There exists an indication that the holographic description of these CFT$_2$'s in the case of relatively large, but \emph{finite central charges}, can be achieved by considering the quantum 3d gravity contributions beyond the (semi-)classical one \cite{gab}, and/or certain ``higher curvature'' extensions of the Einstein gravity, including powers of the curvature and of the Ricci tensor at the classical level. The simplest model of such an extended 3d gravity is given by the following action, called New Massive Gravity
\footnote{One may consider the new $R^2$-type terms as one loop counter-terms appearing in the perturbative quantization of 3d Einstein gravity with $\Lambda<0$.}
\cite{1}:
\begin{eqnarray}
&& S_{{\mathrm{NMG}}}(g_{\mu\nu};\kappa,\Lambda)= \frac{1}{\kappa^2}\int d^3x\sqrt{-g}\left[\epsilon R+ \frac{1}{m^2} \Big(R_{\mu\nu}R^{\mu\nu}-\frac{3}{8}R^2\Big)-2\Lambda \right],\label{acao} \\
&& \kappa^2=16\pi G,\;\; \epsilon=\pm1. \nonumber
\end{eqnarray}
At the linearised level, it describes a massive graviton with two polarizations. As it was shown by Bergshoeff, Hohm and Townsend (BHT) \cite{1}, the above model turns out to be \emph{unitary} consistent (ghost free) for both choices, $\epsilon=\pm1$, of the ``right'' and ``wrong'' signs of the $R$-term, under certain restrictions on the values of the cosmological constant $\Lambda=-m^2\lambda$, as for example\cite{more}:
\begin{equation}
-1\le\lambda<0 \; , \quad\quad \epsilon = -1 \, ,\quad \;\; m^2<0 \; . \label{bhtun}
\end{equation}
in the case of the negative $\lambda$ \emph{BHT-unitary window}.
An important feature of the central charges of the CFT$_2$'s dual to these NMG models is the presence of a particular $m^2$-dependent term \cite{more,china}:
\begin{eqnarray}
c_{nmg} =\frac{3\epsilon L}{2l_{pl}}\left(1+\frac{L_{gr}^2}{L^2}\right) \; , \quad L_{gr}^2=\frac{1}{2\epsilon m^2}\gg l_{pl}^2 \; , \quad \Lambda_{{\mathrm{eff}}}=-\frac{1}{L^2}=-2m^2(\epsilon\pm\sqrt{1+\lambda}). \label{ch}
\end{eqnarray}
Compared to the standard 3d Einstein gravity case, which is recovered in the $m^2\rightarrow\infty$ limit, the above central charges yield a remarkable new \emph{self-duality} property: $c_{nmg} (L)=c_{nmg} \left( L_{gr}^2 / L \right)$, coinciding with the well known ``$b$ to $1/b$'' duality of the (exact, non-perturbative) central charges $c^{\pm}(b)=1 \pm 6(b\pm\frac{1}{b})^2$ of the Liouville ($c^+$) and of the BPZ ($c^-$) minimal models\cite{bpz,azz}. It is then natural to expect that appropriate perturbations of these CFT's give rise to certain strong-weak coupling self-dual non-conformal $pCFT_2$'s we are interested in. Although the proper identification of the CFT$_2$'s dual to the NMG model (\ref{acao}) is not yet fully understood, the \emph{off-critical} AdS/CFT methods based on the DW's solutions of the NMG model coupled to a massive self-interacting scalar field with an action \cite{nmg,pos}:
\begin{eqnarray}
&& S_{{\mathrm{NMGm}}}(g_{\mu\nu},\sigma;\kappa,\Lambda) = \frac{1}{\kappa^2} \int d^3x\sqrt{-g} \left\{ \epsilon R+ \frac{1}{m^2} {\cal K}-\kappa^2 \left(\frac{1}{2} |\vec{\nabla}\sigma|^2+V(\sigma)\right)\right\} ; \label{acaoo}\\
&& {\cal{K}} =R_{\mu\nu}R^{\mu\nu}-\frac{3}{8}R^2, \quad \Lambda=-\frac{\kappa^2}{2}V(\sigma^*),\quad V'(\sigma^*)= 0 ; \nonumber
\end{eqnarray}
as well as the NMG holographic RG results related to them \cite{nmg,oldholo}, provide the necessary tools for the selection of the conditions on the NMG-matter interactions which lead to such \emph{self-dual} pCFT$_2$'s.
Our main result consists in the explicit construction of the duality transformations between \emph{pairs of 3d NMG-matter models} (\ref{acaoo}), whose holographic 2d images represent specific strong-weak coupling transformations which keep invariant the free energy, the corresponding $C$-function and the anomalous dimensions of their pCFT$_2$ duals. We also derive the explicit form of a partially self-dual matter superpotential (with all the vacua within the negative BHT-unitary window (\ref{bhtun})) giving rise to a holographic, self-dual, pCFT$_2$ model, presenting both strong- and weak-coupling phases and critical points. The practical importance of the concept of \emph{partial self-duality}, introduced in Sect.\ref{Examples of dual and self-dual}., is that it provides an efficient method for the identification of such holographic pCFT$_2$'s of a given exact $\beta$-function, by comparing the results concerning its weak-coupling phases with the standard and well known perturbative CFT$_2$'s calculations around a given (weak-coupling) critical point \cite{cardy, x, fat, gms}.
\setcounter{equation}{0}
\section{NMG holography}
The models involved in the ``boundary'' QFT$_2$'s part of the \emph{off-critical} AdS$_3$/CFT$_2$ correspondence \cite{gub} are usually identified as certain CFT$_{2}$'s, perturbed by marginal and/or relevant operators that break the conformal symmetry down to its Poincar\'e subgroup:
\begin{eqnarray}
S_{pCFT_2}^{ren}(\sigma)=S_{CFT_2}^{{\mathrm{UV}}}+\sigma(L_*)\int \! d^2x \; \Phi_{\sigma}(x^i) . \label{eq28}
\end{eqnarray}
The scale-radial duality \cite{VVB,rg} allows one to further identify the ``running'' coupling constant $\sigma(L_*)$ of the pCFT$_2$ with the scalar field $\sigma(z)$, and the RG scale $L_*$ with the scale factor $e^{\varphi(z)}$ of the DW's solutions of the bulk gravity coupled to scalar matter, as follows:
\begin{eqnarray}
ds^2=dz^2+e^{\varphi(z)}(dx^2-dt^2),\quad\quad
\sigma(x^i,z)\equiv\sigma(z),\quad \ L_*=l_{pl}e^{-\varphi/2} . \label{intro1}
\end{eqnarray}
The main ingredients of the NMG holography -- the NMG's vacua and DW's solutions, the values of the central charges of the conjectured dual $CFT_2$'s and the holographic RG flows -- were extensively studied by different methods \cite{1,2,3,nmg,oldholo,8}. As is well known from
the example of Einstein gravity \cite{6}, the properties and the proper existence of the holographic RG flows in its 2d dual QFT$_2$ strongly depend on the form of the bulk matter interactions. If they permit DW's solutions relating two unitary NMG vacua of different $\lambda_A$, then we might have massless RG flows in the dual pCFT$_2$. The explicit construction of all the DW's solutions of the corresponding second order system of equations:
\begin{eqnarray}
&&\ddot{\sigma}+\dot{\sigma}\dot{\varphi}-V'(\sigma)=0\nonumber\\
&&\ddot{\varphi}\Big(1-\frac{\dot{\varphi}^2}{8\epsilon m^2}\Big)+\frac{1}{2}\dot{\varphi}^2\Big(1-\frac{\dot{\varphi}^2}{16\epsilon m^2}\Big)+\epsilon\kappa^2\Big(\frac{1}{2}\dot{\sigma}^2+V(\sigma)\Big)=0\nonumber\\
&&\dot{\varphi}^2\Big(1-\frac{\dot{\varphi}^2}{16\epsilon m^2}\Big)+\epsilon\kappa^2(-\dot{\sigma}^2+2V(\sigma))=0\label{eq4}
\end{eqnarray}
is a rather difficult problem, and in general it requires the use of numerical methods. However, one particular class of such solutions which are ``stable'' and exact can be obtained by introducing an auxiliary function $W(\sigma)$, called superpotential, which allows one to reduce the corresponding DW's gravity-matter equations to a specific BPS-like $I^{st}$ order system \cite{3,nmg}:
\begin{eqnarray}
\kappa^2V(\sigma)&=&2(W')^2\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big)^2-2\epsilon\kappa^2 W^2\Big(1-\frac{\kappa^2 W^2}{4\epsilon m^2}\Big),\nonumber\\
\dot{\varphi}&=&-2\epsilon\kappa W, \quad \dot{\sigma}=\frac{2}{\kappa}W'\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big) \, , \label{sis}
\end{eqnarray}
where $W'(\sigma)=dW/d\sigma$, $\dot{\sigma}= d\sigma / dz$ etc.
This provides the explicit form of qualitatively new DW's relating ``old" and ``new" purely NMG vacua, as well as of the corresponding pCFT$_2$ model's $\beta$-function \cite{nmg}.
Given the form of the superpotential $W(\sigma)$ and the $I^{st}$ order system (\ref{sis}) --- which describes the radial evolution of the NMG's scale factor and of the scalar field $\sigma(z)$ ---, the scale-radial identifications (\ref{intro1}) provide the explicit form of the $\beta$-function of the conjectured dual QFT$_2$ \cite{VVB,rg}:
\begin{eqnarray}
\frac{d\sigma}{dl}=-\beta(\sigma)=\frac{2\epsilon}{\kappa^2}\frac{W'(\sigma)}{W(\sigma)}\bigg(1-\frac{W^2(\sigma)\kappa^2}{2\epsilon m^2}\bigg),\quad\quad l=\ln L_* \; . \label{rg}
\end{eqnarray}
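For completeness, (\ref{rg}) follows directly from the $I^{st}$ order system
(\ref{sis}) and the identifications (\ref{intro1}): since
$l=\ln L_* = \ln l_{pl}-\varphi/2$, one has
\begin{eqnarray*}
\frac{d\sigma}{dl}=-\frac{2\dot{\sigma}}{\dot{\varphi}}
=-2\,\frac{\frac{2}{\kappa}W'\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big)}{-2\epsilon\kappa W}
=\frac{2\epsilon}{\kappa^2}\frac{W'}{W}\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big) ,
\end{eqnarray*}
where we have used $\epsilon^2=1$.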
The admissible constant solutions $\sigma^*_{A}$ of the above RG equation (\ref{rg}) are defined by the zeros of the $\beta$-function, and they indeed coincide with the AdS$_3$-type vacua solutions of the NMG-matter models. The variety of different \emph{non-constant} solutions $\sigma_{ij}=\sigma(l;\sigma^*_{A_i},\sigma^*_{A_j})$,
representing the way the coupling constant $\sigma(l)$ of the dual QFT$_2$ runs (with the RG scale $L_*$ increasing) between two consecutive critical points (i.e. for $j=i+1$), describes the RG flows (and the phase transitions) that occur in the QFT$_2$.
Let us briefly recall how one can extract the information about the critical properties of such QFT$_2$ models from eq.(\ref{rg}) and the way the CFT$_2$ data is related to the asymptotic behaviour of the NMG's domain wall solutions \cite{nmg}, or equivalently to the shape of the matter potential $V(\sigma)$.
\subsection {QFT$_2$ critical behaviour}
The zeros $\sigma^*_{A}$ of the $\beta$-function determine a set of \emph{critical points} in the coupling space, where the corresponding QFT$_2$ becomes conformally invariant and the phase transitions of second or infinite order take place. The nature of the observed changes in the behaviour of the thermodynamical (TD) potentials and certain correlation functions in the neighbourhood of each critical point $\sigma_A^*$ does depend on the multiplicity $n_A$ of these zeros. In the case of simple zeros, we have $y(\sigma^*_{A}) = - d\beta / d\sigma |_{\sigma = \sigma^*_{A}} \neq 0$ and hence $\beta(\sigma) \approx -y(\sigma^*_{A})(\sigma- \sigma_A^*)$. The corresponding \emph{second order phase transitions} are characterized by the scaling laws and the critical exponents of their TD potentials, as for example $y_A = 1/\nu _A$, related to the singular part (s.p.) of the reduced free energy per unit 2d volume $F^A_s = e^{2l}$ and to the correlation length $\xi_{A}=e^{-l}$:
\begin{eqnarray}
F^A_s(\sigma)\approx \left(\sigma- \sigma_A^*\right)^{\frac{2}{y_{A}}}, \quad\quad\quad \xi_A \approx(\sigma - \sigma_A^*)^{-\frac{1}{y_A}} ,
\label{sl}
\end{eqnarray}
at the neighbourhood of $\sigma_A^*$. Once the $\beta$-function (\ref{rg}) is given \footnote{Conjectured as in the case of the holographic RG or perturbatively calculated from the explicit form of the pCFT$_2$ action (\ref{eq28}).}, the above ``near-critical forms" of $F^A_s(\sigma)$ and $\xi_A$ can be easily derived from the following RG equations:
\begin{eqnarray}
\beta(\sigma)\frac{dF_s(\sigma)}{d\sigma} + 2F_s(\sigma)=0,\quad\quad \quad \beta(\sigma)\frac{d\xi(\sigma)}{d\sigma} =\xi(\sigma), \label{fs}
\end{eqnarray}
which determine the scaling properties of the TD potentials, etc. under infinitesimal RG transformations (see for example \cite{cardy}).
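To make the link with (\ref{sl}) explicit: integrating the first of
eqs.(\ref{fs}) gives $F_s(\sigma)=\exp\big(-2\int^{\sigma}dx/\beta(x)\big)$ and,
near a simple zero where $\beta(x)\approx -y_A(x-\sigma_A^*)$,
\begin{eqnarray*}
-2\int^{\sigma}\frac{dx}{\beta(x)}\approx\frac{2}{y_A}\ln(\sigma-\sigma_A^*)
\quad\Longrightarrow\quad F^A_s\sim(\sigma-\sigma_A^*)^{\frac{2}{y_A}} ,
\end{eqnarray*}
while the second equation similarly reproduces
$\xi_A\sim(\sigma-\sigma_A^*)^{-1/y_A}$.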
If one divides the coupling space $\sigma\in R$ into intervals $p_{k, \, k+1} = (\sigma_*^k, \sigma_*^{k+1})$ limited by vacua $\sigma_*$, then each interval will correspond to a different phase. Two such consecutive phases share the same UV critical point $\sigma_{\mathrm{UV}}^k$, where a second order phase transition, driven by a relevant operator $\Phi_\sigma$, may occur. The nature of this phase transition does depend on the properties of the neighbours, i.e. on whether $\sigma_*^{k \pm 1} = \sigma_{\mathrm{IR}} , \, \sigma_s , \, \infty$, which also determine the features of the considered QFT$_2$ phase: massive, massless, etc.
An efficient method for the analytic description of these QFT$_2$'s phase transitions is given by the conformal perturbation theory $pCFT_{2}(\sigma^k_{{\mathrm{UV}}})$, based on the action (\ref{eq28}) and on the knowledge of the exact correlation functions of $\Phi_{\sigma}$, once the $CFT_{2}(\sigma^k_{{\mathrm{UV}}})$ is known and the relevant operator $\Phi_{\sigma}$ is appropriately chosen \cite{x}.
In the case of integrable perturbations of $\Phi_{13}$-type\footnote{These, for unitary models, are known to be the only consistent one coupling perturbations.} for Virasoro and Liouville (minimal) models (or of $\Phi_{adj}$-type for, say, $W_N$ m.m.s) \cite{x,fat,gms} the calculations involving conformal OPEs :
\begin{eqnarray}
\Phi(1)\Phi(2)\approx I+C_{\Phi\Phi\Phi}\Phi(2)+...\label{ope}
\end{eqnarray}
allow us to derive the $\beta$-function to the lowest orders in perturbation theory around the critical point:
\begin{eqnarray}
\beta(\sigma)\approx -y_{13}(\sigma- \sigma_A^*)+C_{\Phi\Phi\Phi}(\sigma- \sigma_A^*)^2 +...\label{pertrg}
\end{eqnarray}
It is well known that the phase structure of such pCFT$_2$ is of massless-to-massive $(\sigma^{{\mathrm{IR}}},\sigma^{{\mathrm{UV}}},\infty)$ type \cite{x}.
\subsection{On the NMG$_3$/QFT$_2$ correspondence}
We begin our short $NMG_3/pCFT_2$ dictionary by recalling one specific ``NMG feature'' \cite{nmg} --- the existence of two types of distinct \emph{critical} points: the usual \emph{type} (a) vacua, given by $W'(\sigma_{a}^*)=0$, and therefore representing the extrema of $W(\sigma)$; and the ``new'' vacua of \emph{type} (b), given by the real solutions of the equations $W^2(\sigma_{b}^*) = 2\epsilon m^2 / \kappa^2$, which exist only in the case when $\epsilon m^2 >0$. Both types of vacuum are extrema of the matter potential, $V'(\sigma_A^*)=0$, for
\begin{eqnarray}
\kappa^2V'(\sigma)=4W'\Big(1-\frac{W^2\kappa^2}{2\epsilon m^2}\Big)\omega(\sigma) ,
\end{eqnarray}
but there are other extrema of $V(\sigma)$, given by the real constant solutions of the algebraic equation:
\begin{eqnarray*}
\omega(\sigma^{*})=W''\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big)-\frac{\kappa^2}{\epsilon m^2}(W')^2W-\epsilon\kappa^2W=0 ,
\end{eqnarray*}
which \emph{do not} represent (vacuum) solutions of the $I^{st}$ order eqs. (\ref{sis}). We will fix our attention, in what follows, on the vacua of type (a) and (b). As one can see from eqs.(\ref{eq4}), such vacua are defined by $\dot{\sigma}=0$ and $\dot{\varphi}=-2\epsilon \kappa W(\sigma_{A}^*)= {\mathrm{const}}$. It is then evident that they both present the geometry of an AdS$_3$ vacuum $(\sigma_A^*,\Lambda^A_{{\mathrm{eff}}})$ of the NMG model:
\begin{eqnarray*}
ds^2=dz^2+e^{-2\epsilon\sqrt{|\Lambda_{{\mathrm{eff}}}^A|}z}(dx^2-dt^2), \quad\quad A=a,b \; .
\end{eqnarray*}
As usually, the corresponding effective cosmological constants $\Lambda^A_{{\mathrm{eff}}}$ are realised as the vacuum values of the 3d scalar curvature $R(\varphi)$, which, for the considered DW's and vacua solutions (\ref{intro1}), is given by
\begin{eqnarray}
R=-2\ddot{\varphi}-\frac{3}{2}\dot{\varphi}^2 = 8\epsilon(W')^2\left(1-\frac{\kappa^2W^2}{2\epsilon m^2}\right)-6\kappa^2W^2 \; ; \label{curvature}
\end{eqnarray}
hence at a vacuum $\sigma^*_A$ we have $R_{vac} = -6\kappa^2W^2(\sigma_{A}^*) = 6\Lambda_{{\mathrm{eff}}}^A= - 6 / L_A^2$. Notice that the NMG vacua of $W(\sigma^*) = 0$ have the geometry of \emph{flat} Euclidean E$_3$ or Minkowski M$_3$ space.
The critical exponents also play a crucial part in the asymptotic behaviour of the matter field $\sigma(z)$. In the non-degenerate case we have
\begin{eqnarray}
\sigma(z)\stackrel{z\rightarrow\infty}{\approx}\sigma_{A}^* - \sigma_{A}^{0}e^{-
y_A\sqrt{|\Lambda^A_{{\mathrm{eff}}}|}z}, \quad
\Delta_A=2-y_A=1+\sqrt{1-\frac{m_{\sigma}^2(A)}{\Lambda_{{\mathrm{eff}}}^A}}, \quad \ m_{\sigma}^2=V''(\sigma_A^*) \; . \label{asymp}
\end{eqnarray}
Thus $y_A\neq 0$ provide the boundary conditions (b.c.'s) for the corresponding DW's solutions of the NMG model (see ref.\cite{nmg}), as one can easily verify by considering the near-boundary/horizon approximation of eqs.(\ref{sis}): $\dot{\sigma}=-\epsilon \kappa \beta(\sigma)W(\sigma)\approx y_A (\sigma - \sigma_A^*)\epsilon \kappa W(\sigma_A^*)$, and taking into account the identification $\kappa W_A= - \epsilon / L_A$.
As expected, the quantities characterizing the CFT$_2$'s present in the asymptotic limits of these QFT$_2$'s can also be described by the geometric properties of the associated NMG-matter model, and written in terms of the superpotential. Let us first consider the critical exponents $y_{A}=y(\sigma_{A}^{*})=-\beta'(\sigma_{A}^{*})$ in the case when all the critical points have multiplicities $n_A=1$, i.e. both $\sigma_{a}^*$ and $\sigma_{b}^*$ are first order zeros of $\beta(\sigma)$, and $W(\sigma_{A}^*)\neq 0$\cite{nmg,oldholo}:
\begin{eqnarray}
y_{a}=y(\sigma_{a}^{*})=\frac{2\epsilon W''_{a}}{\kappa^2W_{a}}\Big(1-\frac{\kappa^2W_{a}^2}{2\epsilon m^2}\Big),\quad
y_b=y(\sigma_{b}^{*})=-\frac{4\epsilon(W'_{b})^2}{\kappa^2W_{b}^2},\quad \ W_{b}^2=\frac{2\epsilon m^2}{\kappa^2} . \label{sdim}
\end{eqnarray}
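As a quick check of the second of these formulas: at a type (b) vacuum the
bracket in (\ref{rg}) vanishes, so only its derivative contributes to
$\beta'(\sigma_{b}^{*})$, and
\begin{eqnarray*}
y_b=-\beta'(\sigma_{b}^{*})
=\frac{2\epsilon}{\kappa^2}\frac{W'_{b}}{W_{b}}\,
\frac{d}{d\sigma}\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big)\Big|_{\sigma_{b}^{*}}
=-\frac{2(W'_{b})^2}{m^2}=-\frac{4\epsilon(W'_{b})^2}{\kappa^2W_{b}^2} ,
\end{eqnarray*}
where the last step uses $W_{b}^2=2\epsilon m^2/\kappa^2$.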
The structure constants $C^{A}_{\Phi\Phi\Phi}=\frac{1}{2}\beta^{''}(\sigma^*_{A})$ can be calculated from eqs.(\ref{rg}):
\begin{eqnarray}
C^a_{\Phi\Phi\Phi}=- \frac{\epsilon W_a^{'''}}{\kappa^2 W_a}\bigg(1-\frac{W_a^2\kappa^2}{2\epsilon m^2}\bigg),\quad
C^b_{\Phi\Phi\Phi}=\frac{2\epsilon}{\kappa^2}\Bigg(3\frac{W_b^{''}W_b^{'}}{W_b^2}- \frac{(W_b^{'})^3}{W_b^3}\Bigg) . \label{str}
\end{eqnarray}
By definition, their $CFT_{2}(\sigma^k_{{\mathrm{UV}}})$ counterparts represent the ratio of the constants of 3-point and 2-point functions of the perturbing field $\Phi_{\sigma}$ \cite{bpz}. Finally, the Zamolodchikov's $C$-function can be written in terms of the superpotential as follows\cite{x}:
\begin{equation}
C(\sigma)= \frac{-3}{2G\kappa W(\sigma)}\bigg(1+\frac{\kappa^2W^2(\sigma)}{2\epsilon m^2}\bigg) \; . \label{cf}
\end{equation}
This particular form is derived in refs. \cite{8,sinha,nmg} by the Brown-Henneaux asymptotic method \cite{9}.
The central charge at a vacuum $c_A = C(\sigma_A^*)$ can therefore be evaluated when the superpotential is given.
It is important to note a specific feature of the NMG-induced 2d models, namely that the CFT$_2$'s describing all the type (b) critical points have equal central charges $c_{b}= 3L_{gr}/l_{pl}$ (with $L_{gr}^2= 1 / 2\epsilon m^2$)\footnote{corresponding to the lower bound $\lambda_b=-1$ of the negative BHT-unitary window(\ref{bhtun})}, while the type (a) central charges $c_a = (3\epsilon L_a/2l_{pl})\left(1+ L_{gr}^2/L_a^2 \right)$ are parametrized by the corresponding critical values of the superpotential: $W^2(\sigma^*_a)= 1 / \kappa^2 L_a^2$.
\setcounter{equation}{0}
\section{Strong-weak coupling duality} \label{Duality}
Motivated by the eventual existence of holographic self-dual pCFT$_2$'s \footnote{representing a few critical points and having massive and massless phases}, we address the problem of the properties of the pairs of their dual 3d NMG-matter models and of the nature of the ``duality'' transformations relating their superpotentials.
\subsection{Pairs of dual NMG models}
Given a NMG model coupled to a scalar field $\sigma$ of superpotential $W(\sigma)$ (\ref{acaoo}), we define its \textit{dual} as a specific NMG model, whose scalar field $\til \sigma$ and superpotential $\til W(\til\sigma)$ fulfil the following two conditions:
\begin{equation}
\bullet \;\;\;\; \varphi(\sigma) = \til \varphi(\til\sigma)\; ,
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \bullet \;\;\;\; W(\sigma) = \frac{1}{ \kappa^2 L_{gr}^2 \til W(\til\sigma)} \label{def dual}
\end{equation}
We also impose an additional requirement that all the critical points $\sigma_k$ and $\til\sigma_k$ of the pair of superpotentials, i.e. $W'(\sigma_k)=0=\til W'(\til\sigma_k)$, correspond to true AdS$_3$ vacua: $W(\sigma_k)\neq 0$ and $\til W(\til\sigma_k)\neq 0$. It is natural to expect that the above pairs of dual $NMG_3$ models are mapped by the $AdS/CFT$ correspondence rules into certain pairs of dual (or self-dual) $CFT_2$ models. The particular form of the NMG's matter superpotential transformations (\ref{def dual}) is chosen in such a way that the coupling space duality transformations between the corresponding pairs of dual $CFT_2$'s, induced by eqs.(\ref{def dual}), preserve the form of the central charges at the critical points, the form of the central function (\ref{cf}) and the corresponding s.p. of their free energy.
The first requirement, i.e. the invariance of the scale factor of the NMG's domain walls, ensures the desired invariance of the singular part of the reduced free energy $F^A_s (\sigma)= e^{- \varphi(\sigma)}$ of their dual pCFT$_2$ models. It is equivalent to the condition $l(\sigma) = \til l(\til\sigma)$ of the invariance of the QFT$_2$ scales under such a transformation. We next recall that the central charge associated with a vacuum $\sigma_A$ has the form:
\begin{equation}
c_A = \frac{3 \epsilon L_A}{2l_{pl}} \, \left[ 1 + \frac{L_{gr}^2}{L_A^2} \right] , \label{central charge duality}
\end{equation}
where $L_A = \pm 1 / \kappa W(\sigma_A)$ -- with the sign chosen in order to make $L_A$ positive --, and also that $L_{gr}^2 = 1/2m^2 \epsilon > 0$ denotes the radius of the type (b) vacua (cf. eq. (\ref{sis})). Then the second condition in (\ref{def dual}) implies
\begin{equation}
\til L_A = \frac{L_{gr}^2}{L_A} . \label{dual scales}
\end{equation}
Therefore the above NMG's superpotential transformation ensures the invariance of the $CFT_2$'s central charges\footnote{and of the corresponding pCFT$_2$'s C-function (\ref{cf}) as well} and it has the same form as the well known central charges duality properties of the Liouville and minimal models, namely $c(L_A) = c(L_{gr}^2/L_A)$. Notice that
this is a direct consequence of the curvature quadratic terms in the action (\ref{acaoo}), which generates the specific form of the central charge (\ref{central charge duality}), which \emph{is not present} in EH gravity. The transformation $\sigma \to \til\sigma$ maps the AdS$_3$ spaces of large radii (and small cosmological constants) to certain ''dual'' AdS$_3$ spaces of small radii (and large cosmological constants)\footnote{Note that if the types (a) and (b) vacua coincide, then all the scales remain invariant: $L_a = L_{gr} = \til L_a$.}, but the corresponding dual $CFT_2$'s share \emph{equal} central charges.
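Explicitly, writing (\ref{central charge duality}) as
$c_A=\frac{3\epsilon}{2l_{pl}}\big(L_A+\frac{L_{gr}^2}{L_A}\big)$, the
substitution (\ref{dual scales}) gives the one-line check
\begin{eqnarray*}
\til c_A=\frac{3\epsilon}{2l_{pl}}\Big(\til L_A+\frac{L_{gr}^2}{\til L_A}\Big)
=\frac{3\epsilon}{2l_{pl}}\Big(\frac{L_{gr}^2}{L_A}+L_A\Big)=c_A \; .
\end{eqnarray*}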
An important (implicit) element of the above introduced concept of pairs of dual NMG's (and the corresponding pairs of dual pCFT$_2$'s) is that the mapping is always between the vacua of the same kind, i.e. $\sigma_a \to \til\sigma_a$ and $\sigma_b \to \til\sigma_b$. This must be proved, however. We first note that, according to the condition $ \varphi (\sigma) = \til \varphi (\til\sigma)$, the transformation of the $\beta$-function $\beta(\sigma) = - d\sigma/d l$, with $l = - \varphi/2$, is given by:
\begin{equation}
\beta(\sigma) = \frac{d \sigma}{d \til\sigma} \til\beta(\til\sigma) . \label{dual beta}
\end{equation}
The next step is to calculate the derivative $d \sigma / d \til\sigma$ in terms of the corresponding superpotentials, by substituting the explicit form (\ref{rg}) of the both $\beta$-functions into eq.(\ref{dual beta}), and then taking into account eqs.(\ref{def dual}) to eliminate $\til W$ :
\begin{equation}
\frac{d \til\sigma}{d \sigma} = - \frac{1}{ \kappa L_{gr}\, W(\sigma)} \; . \label{dutrans}
\end{equation}
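In more detail: substituting $\til W=(\kappa^2L_{gr}^2W)^{-1}$ into (\ref{rg})
written for $\til\beta(\til\sigma)$, one finds
$\frac{\til W'}{\til W}=-\frac{W'}{W}\frac{d\sigma}{d\til\sigma}$ and
$1-\kappa^2L_{gr}^2\til W^2=-\frac{1-\kappa^2L_{gr}^2W^2}{\kappa^2L_{gr}^2W^2}$,
so that $\til\beta=\frac{d\sigma}{d\til\sigma}\,\frac{\beta}{\kappa^2L_{gr}^2W^2}$;
comparing with (\ref{dual beta}) then yields
$\big(\frac{d\sigma}{d\til\sigma}\big)^2=\kappa^2L_{gr}^2W^2$, of which
(\ref{dutrans}) is the negative root.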
Due to the additional requirements $W(\sigma_k)\neq 0$ and $\til W(\til\sigma_k)\neq 0$ it is not singular at the critical points, and therefore the zeros of $\beta(\sigma)$ are also zeros of $\til\beta(\til\sigma)$ and vice-versa. Hence the vacua of one theory are also vacua of its dual, and the transformation (\ref{def dual}) maps vacua into vacua. As a consequence we find the explicit form of the NMG's scalar fields duality transformation as follows:
\begin{equation}
\til\sigma(\sigma) = - \frac{1}{ \kappa L_{gr}} \, \int^\sigma \! \frac{d x}{W(x)} + {\mathrm{constant}} . \label{int s til}
\end{equation}
The above properties confirm the fact that the type (a) NMG -vacua are mapped into the type (a) vacua of the dual NMG model $\sigma_a \rightarrow \til\sigma_a$, as one can see from the identity $d \til W(\til\sigma) / d \til\sigma = ( \kappa L_{gr} \, W(\sigma))^{-1} d W(\sigma) /d \sigma$. The type (b) vacua remain invariant under the duality transformation, since their defining equation $1 - \kappa^2 L_{gr}^2 W^2(\sigma) = 0$ is mapped by (\ref{def dual}) into itself.
Taking into account the explicit form of the $I^{st}$ order eqs.(\ref{sis}) for the pairs of dual NMG models, it is not difficult to derive the relation between the dual ``radial'' coordinates $\til z(z)$:
\begin{equation}
\til z(z) = \kappa^2 L_{gr}^2 \int^z \! d x \; W^2(x) + {\mathrm{constant}} ,
\end{equation}
or, in terms of $\sigma$ we get
\begin{equation}
\til z(\sigma) = \frac{ \kappa^2 L_{gr}^2}{2} \int^\sigma \! \; \frac{ W^2(x) \, dx}{W'(x) \left[ 1 - \kappa^2 L_{gr}^2 W^2(x) \right]} + {\mathrm{constant}}.
\end{equation}
It remains to demonstrate one of the most important properties of the duality transformations (\ref{def dual}): namely, that they keep invariant the critical exponents $y_A$, $A = a, b$ given by (\ref{sdim}). Starting from their definitions $y_A = - d \beta / d \sigma \mid_{\sigma_A}$ at the corresponding critical points $\sigma_A$, and further using eq.(\ref{dual beta}) and the fact that the vacua are the zeros of these $\beta$-functions, we find
$$ y_A = - \frac{d}{d \sigma} \beta (\sigma) \mid_{\sigma_A} = - \left\{ \frac{d \til\sigma}{d\sigma} \, \frac{d}{d \til\sigma} \left[ \frac{d\sigma}{d\til\sigma} \, \til\beta(\til\sigma) \right] \right\}_{\til\sigma_A} = - \left\{ \frac{d \til\sigma}{d\sigma} \frac{d\sigma}{d\til\sigma} \, \frac{d}{d \til\sigma} \til\beta(\til\sigma) \right\}_{\til\sigma_A} = - \frac{d}{d \til\sigma} \til\beta(\til\sigma) \mid_{\til\sigma_A} $$
Thus we can conclude that indeed $y_A = \til y_A$.
Let us summarize the main features of the duality transformations (\ref{def dual}) between two specific NMG-matter models whose superpotentials are ``inversely proportional'': their matter potentials are different, but they have an equal number of vacua, such that the pairs of dual type (a) vacua represent $AdS_3$ spaces of different radii, inversely proportional to each other, while their type (b) vacua coincide. The most relevant characteristics of the corresponding pairs of dual $CFT_2$ models (and of the pairs of pCFT$_2$'s as well) are: (1) they have different holographic $\beta$-functions, whose type (a) critical points take different values (one in the weak-coupling and the other in the strong-coupling region), but still identical central charges and central functions; (2) the critical exponents $y_A = \til y_A$ remain invariant under such duality transformations; and (3) their s.p. free energies are identical by construction. It remains to answer the important question concerning the explicit construction of relatively simple and physically interesting pairs of such dual NMG models, and to describe the nature of the phase transitions and of the different phases of the corresponding pairs of dual pCFT$_2$'s, whose exact holographic $\beta$-functions are related by eqs.(\ref{dual beta}).
\subsection{Examples of dual and self-dual NMG models} \label{Examples of dual and self-dual}
In order to illustrate how the concept of NMG duality transformations (\ref{def dual}) introduced above can be realized in practice, we consider a few representative simple examples of pairs of dual NMG models. An important problem addressed in this subsection concerns one particular class of duality transformations $\sigma=\sigma(\til\sigma)$ that, together with the definitions (\ref{def dual}) and (\ref{int s til}), satisfy a new ``self-duality'' condition: namely, when substituted in the second of eqs.(\ref{def dual}) they give rise to very special self-dual superpotentials:
\begin{equation}
\bullet \;\; \text{self-duality}:\;\; W(g_k,\sigma)=\til W(g_k,\til\sigma),\;\;\;\;\;\;\;\;\;\bullet\;\;\text{partial self-duality}:\;\; W(g_k,\sigma)=\til W(\til g_k,\til\sigma),
\end{equation}
where the parameters $g_k$ and $\til g_k$ determine the coupling constants and the masses in the corresponding $NMG_3$ matter potentials $V(g_k,\sigma)$ and $\til V(\til g_k,\til\sigma)$. In both cases the shapes of the pairs of duals NMG superpotentals are coinciding, but in the second case the particular ``partial self-duality'' transformations are mapping the NMG-matter couplings $\til g_k=\til g_k(g_k)$ as well. The particular examples analysed in this section are all chosen to provide a kind of ''strong-to-weak couplings'' duality transformations $\sigma=\sigma(\til\sigma)$ between the corresponding pairs of dual pCFT$_2$'s.
\subsubsection{Self-duality} \label{self-duality}
Consider the following quadratic superpotential:
\begin{equation}
W(\sigma) = B \sigma^2 \; , \;\; B > 0 .\label{linear}
\end{equation}
We assume that there exists at least one type (b) vacuum, i.e. $m^2\epsilon > 0$, which is the fixed point of the transformation (\ref{def dual}). Because of the $Z_2$ symmetry of the superpotential, we can consider $\sigma > 0$ only. There is no type (a) vacuum for such superpotentials: the vacuum at $\sigma_M = 0$ is of zero cosmological constant, i.e. it represents a Minkowski vacuum. The exact form of the scale factor is easily derived by solving the corresponding $I^{st}$ order system (\ref{sis}), and it determines a particular asymptotically AdS$_3$ (or H$_3$ in the Euclidean case) geometry with a naked singularity at $\sigma \rightarrow \infty$ \cite{nmg,oldholo}. The eqs.(\ref{def dual}) and (\ref{int s til}) applied to the quadratic $W$ (\ref{linear}) provide the explicit form of the NMG duality transformation:
\begin{equation}
\til \sigma = \frac{1}{ \kappa L_{gr} B \, \sigma} ,\;\;\;\;\;\;\;\;\;\;\; \; \til W(\til\sigma) = 1 / \kappa^2 L_{gr}^2 B \sigma^2 = B \til\sigma^2, \label{sdu}
\end{equation}
where the constant of integration has been chosen to be zero. Therefore the dual superpotential has exactly the same shape as the original one and coinciding parameters $B=\til B$, which determine the coupling constants in the corresponding matter potentials $V(\sigma)$ and $\til V(\til\sigma)$. The critical points, however, are ``interchanged'' in the dual model: the original Minkowski vacuum is mapped into the dual naked singularity and the original naked singularity is mapped into the dual Minkowski vacuum.
\subsubsection{Partial self-duality} \label{partial self-duality}
The simplest example of partially self-dual NMG -models is given by the following hyperbolic superpotential:
\begin{equation}
W(\sigma) = B \, \sinh(D \sigma) \, \;\;\quad B > 0.
\end{equation}
It does not lead to a physically interesting self-dual pCFT$_2$, due to the fact that, similarly to the quadratic superpotential model considered at the beginning of this section, it has only one type (b) vacuum at $\sigma_b = D^{-1} \sinh^{-1} \{ (B \kappa L_{gr})^{-1} \}$, naked singularities at $\sigma \rightarrow \pm \infty$, and no type (a) vacua \footnote{Although there is no problem with the geometry, the $\beta$-function diverges at $\sigma = 0$, so the holographic description is not well defined at this point.}.
\begin{equation}
\cosh( D \sigma ) = \coth \left( \kappa L_{gr} B D \til\sigma \right) . \label{sh s e til}
\end{equation}
By substituting it in the defining equation (\ref{def dual}), we deduce the following form of the dual superpotential:
$$\til W(\til\sigma) = \til B \sinh (-\til D \til\sigma) , \;\; {\mathrm{with}} \;\; \til B = \frac{1}{ \kappa^2 L_{gr}^2 B} , \;\; \til D = \kappa L_{gr} BD. $$
Therefore the original duality transformation (\ref{def dual}), in the case of the hyperbolic superpotential, leaves its shape invariant but changes its parameters. True self-duality is achieved for a specific ``critical'' value of $B$, namely $B = 1 / \kappa L_{gr}$.
\vspace{0.5cm}
We next consider another example of partially self-dual superpotential:
\begin{equation}
W(\sigma) = \left[ B (\sigma - \sigma_a)^2 + D \right]^{3/2} , \;\; D > 0 . \label{W dmn}
\end{equation}
that gives rise to an interesting strong-weak coupling self-dual pCFT$_2$, representing dual massive and massless phases, and also a few self-dual $CFT_2$'s describing its type (a) and (b) vacua. The type (a) vacuum is placed at the critical value $\sigma = \sigma_a$ with $\kappa L_a = D^{-3/2}$, and its type (b) vacua at
\begin{equation}
\sigma_b^{\pm} = \sigma_a \pm \sqrt{\frac{1}{B ( \kappa L_a)^{2/3}} \left[ \left(\frac{L_a}{L_{gr}} \right)^{2/3} - 1 \right]} . \label{sigma b gmn}
\end{equation}
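The origin of (\ref{sigma b gmn}) is the type (b) condition $\kappa^2L_{gr}^2W^2(\sigma_b)=1$, i.e. $B(\sigma_b-\sigma_a)^2+D=(\kappa L_{gr})^{-2/3}$, combined with $D=(\kappa L_a)^{-2/3}$, which follows from $\kappa L_a=D^{-3/2}$.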
Their number depends on whether $\sigma_b^\pm$ are real. Thus, the existence and the number of the type (b) vacua are determined by the sign of $B$ and by the value of the ratio $L_a/L_{gr}$. Notice that, if $B < 0$, there are Minkowski vacua at
\begin{equation}
\sigma_M^\pm = \sigma_a \pm \sqrt{\frac{D}{|B|}} ,
\end{equation}
allowing the relation (\ref{sigma b gmn}) to be written as
\begin{equation}
\frac{(\sigma_b - \sigma_a)^2}{(\sigma_M - \sigma_a)^2} = 1 - \left( \frac{L_a}{L_{gr}} \right)^{2/3} ,
\end{equation}
which is valid for $B < 0$ only. Since for $B < 0$ we have $L_a < L_{gr}$, the relation above shows that $0 < (\sigma_b - \sigma_a) / (\sigma_M - \sigma_a) < 1$, i.e. the Minkowski vacua are farther from $\sigma_a$ than the type (b) vacua.
We complete our description of the vacua structure of the NMG model with superpotential (\ref{W dmn}) by listing all the possible different sets of allowed vacua, depending on the signs and the values of the parameters of this superpotential (see fig.1). In all the cases there exists one type (a) vacuum. With regard to the other vacua, we have:
\begin{itemize}
\item (I) : $L_a > L_{gr}$
\subitem I.a . $B > 0$ : There are vacua of type (b);
\subitem I.b . $B < 0$ : There are Minkowski vacua;
\item (II) : $L_a < L_{gr}$
\subitem II.a . $B > 0$ : There are no Minkowski nor type (b) vacua;
\subitem II.b . $B < 0$ : There are both Minkowski and type (b) vacua;
\item (III) (critical case) : $L_a = L_{gr}$
\subitem III.a . $B > 0$ : The only vacuum is $\sigma_{a} = \sigma_b$;
\subitem III.b . $B < 0$ : There are Minkowski vacua as well as $\sigma_a = \sigma_b$.
\end{itemize}
The explicit form of the duality transformation (\ref{int s til}) specific for the considered superpotential (\ref{W dmn}) is given by:
\begin{equation}
\til\sigma - \til\sigma_a = \frac{L_a}{L_{gr}} \, \frac{\sigma - \sigma_a}{\sqrt{1 + \frac{B}{D}(\sigma - \sigma_a)^2}} , \label{til sigma gmn}
\end{equation}
where the (arbitrary) integration constant is denoted by $\til\sigma_a$. It determines the position of the (a) type vacua dual to the original type (a) one, i.e. we have $\sigma_a \rightarrow \til\sigma_a$ under the duality transformation (\ref{til sigma gmn}). This arbitrariness can (and will) be used to fix one of the (b) vacua $\sigma^\pm_b$ as a fixed point of the duality transformation.
Substituting eq. (\ref{til sigma gmn}) into (\ref{def dual}), one derives the form of the dual superpotential
\begin{equation}
\til W(\til\sigma) = \left[ \til B(\til\sigma - \til\sigma_a)^2 + \til D \right]^{3/2} ,\quad \til B = - ( L_{gr} / L_a )^{2/3} \, B \; ,\;\;\quad \til D = \frac{1}{( \kappa L_{gr})^{4/3} D} . \label{dual param. gmn}
\end{equation}
The last equation for $\til D$ was to be expected, since it reflects only the fact that $L_a = L_{gr}^2 / \til L_a$ (recall that $\kappa L_a = D^{-3/2}$). The difference of sign between the dual superpotentials is not important\footnote{The global sign of the superpotential (or its dual) is relevant only in the identification $W(\sigma_A) = \pm 1 / \kappa L_A$, where the sign must be chosen in order to make $L_A$ positive.}, and $\til W(\til\sigma)$ has the same vacua structure as its dual, which is described by the cases $(I.a)$, etc. above, but now with the ``tilde'' quantities $\til B$, $\til L_a$, etc. Since the transformation (\ref{dual param. gmn}) changes the sign of $B$, i.e. $B/\til B < 0$, we establish the duality equivalence between the following models :
$$ {\mathrm{(I.a)}} \Leftrightarrow {\mathrm{(II.b)}} \; ; \;\;\; {\mathrm{(I.b)}} \Leftrightarrow {\mathrm{(II.a)}} \; ; \;\;\; {\mathrm{(III.a)}} \Leftrightarrow {\mathrm{(III.b)}}$$
as one can see in fig.2. The most interesting case is the first one, so we will analyse it in more detail. It corresponds to $L_a > L_{gr}$ and $B > 0$, thus $\til L_a < L_{gr}$ and $\til B < 0$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{diagr_3_2}
\begin{quotation}
\caption[ed]{\small Graphic representation of the admissible vacua structure and of the expected RG flows for $\epsilon=-1$ and $m^2<0$.}
\label{fig:1}
\end{quotation}
\end{figure}
The vacua structure compiled in the cases (I) to (III) above is not complete without the information about the stability (UV versus IR) of the corresponding vacua, according to the sign of the critical exponents $y_A$ given by (\ref{sdim}). We have
\begin{eqnarray}
y_{a} = - 6 B \frac{L_a^{2/3}}{ \kappa^{4/3}} \, \left[ 1 - \left( \frac{L_{gr}}{L_a} \right)^2 \right] ; \;\;\; y_b = 4 B \frac{L_{gr}^{2/3}}{ \kappa^2} \left[ 1 - \left( \frac{L_{gr}}{L_a} \right)^{2/3} \right] .
\end{eqnarray}
and therefore in the cases (I.a) and (II.b), where $y_a < 0$, the type (a) vacuum is an IR critical point. The type (b) vacua have $y_b > 0$ and hence correspond to UV critical points. In the cases (I.b) and (II.a) the sign of $y_a$ is reversed, i.e. $y_a > 0$, and now the type (a) vacua represent the UV critical points.
The type (b) vacua $\sigma^\pm_b$ are mapped into the type (b) vacua $\til \sigma_B^\pm$ of the dual theory through eq. (\ref{til sigma gmn}):
\begin{equation}
\til\sigma_b^\pm - \til\sigma_a = \left( \frac{L_a}{L_{gr}} \right)^{2/3} (\sigma_b^\pm - \sigma_a) . \label{til s b dmn}
\end{equation}
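This follows by evaluating (\ref{til sigma gmn}) at $\sigma = \sigma_b^\pm$: using eq. (\ref{sigma b gmn}) together with $D(\kappa L_a)^{2/3} = 1$, one finds
$$1 + \frac{B}{D} (\sigma_b^\pm - \sigma_a)^2 = 1 + \left[ \left( \frac{L_a}{L_{gr}} \right)^{2/3} - 1 \right] = \left( \frac{L_a}{L_{gr}} \right)^{2/3} ,$$
so the prefactor $L_a/L_{gr}$ in (\ref{til sigma gmn}) is effectively reduced to $(L_a/L_{gr})^{2/3}$.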
As said before, the constant of integration $\til\sigma_a$ can be chosen in order to set one of the type (b) vacua as a fixed point of the duality transformation, namely $\sigma_b^- = \til\sigma_b^-$. Thus we must have
\begin{equation}
\til\sigma_a = \left( \frac{L_a}{L_{gr}} \right)^{2/3} \sigma_a - \left[ \left( \frac{L_a}{L_{gr}} \right)^{2/3} - 1 \right] \sigma_b^- .
\end{equation}
On the other hand, the constant $\sigma_a$ is also arbitrary, since it can be changed by a translation of $\sigma$. Hence we can further adjust it in order to put the fixed point $\sigma_b^-$ at the origin. By taking
\begin{equation}
\sigma_a = \sqrt{ \frac{1}{B ( \kappa L_a)^{2/3}} \, \left[ \left( \frac{L_a}{L_{gr}} \right)^{2/3} - 1 \right] } , \label{sigma a fixed}
\end{equation}
we get $\sigma_b^- = 0$, and also that $\sigma_b^+ = 2 \sigma_a$ (cf. eq. (\ref{sigma b gmn})). An important consequence of this choice is that the values of the corresponding critical couplings $\til\sigma_a$ of the dual model
\begin{equation}
\til\sigma_a = \left( \frac{L_a}{L_{gr}} \right)^{2/3} \sigma_a ,
\end{equation}
are greater than $\sigma_a$, i.e. $\til\sigma_a > \sigma_a$ in the cases (I.a) and (I.b), when we have $L_a > L_{gr}$. Therefore in the asymptotic regime of very large scales $L_a \gg L_{gr}$, we realize that the weak coupling critical point $\sigma_a - \sigma_b^- = \sigma_a \ll 1$ is mapped to the strong coupling regime of the dual model, since now we have $\til\sigma_a - \til\sigma_b^- = \til\sigma_a \gg 1$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{dual_3_2}
\begin{quotation}
\caption[ed]{\small Symbolic diagram demonstrating the duality between different phases in the case $L_a>L_b$ and $B>0$.}
\label{fig:2}
\end{quotation}
\end{figure}
In order to find the scale factor, we integrate the $\beta$-function equation, obtaining
$$ \varphi(\sigma) = - \frac{ \kappa^2}{3 \epsilon B} \int \! \frac{B(\sigma-\sigma_a)^2 + D}{(\sigma - \sigma_a) \left\{ 1 - \kappa^2 L_{gr}^2 \left[ B (\sigma - \sigma_a)^2 + D \right]^3 \right\}} \, d\sigma + {\mathrm{const}} .$$
With the substitution $g(\sigma) = B(\sigma-\sigma_a)^2 + D$, we get
\begin{eqnarray}
& \varphi(\sigma) = \frac{1}{6 \epsilon B L_{gr}^2} \sum_{i=0}^3 A_i \log(g - g_i) + \varphi_{\infty}, \nonumber\\
& e^{ \varphi(\sigma)} = e^{ \varphi_\infty} \, \prod_{i=0}^3 |g - g_i|^{x_i},\quad x_i = A_i / 6 \epsilon B L_{gr}^2\nonumber
\end{eqnarray}
where
\begin{eqnarray}
&g_1 = \kappa^{-2/3} L_{gr}^{-2/3} ,\quad g_2 = g_3^* = -( \kappa L_{gr})^{-2/3} \left(\frac{1 + i \sqrt{3}}{2} \right),\quad A_0 \equiv A_a = \frac{- \kappa^{4/3} L_{gr}^2L_a^{-2/3}}{ 1- (L_{gr} /L_a)^{2} },\nonumber\\
&g_0 \equiv g_a = D, \quad A_1 = \frac{ \kappa^{4/3} L_{gr}^{4/3}}{3 \left[ 1- (L_{gr}/L_a)^{2/3} \right]}, \quad A_2 = A_3^* = \frac{ \kappa^{4/3} L_{gr}^{4/3} \, (i - \sqrt{3})}{3 \left[2 i + \left( i + \sqrt{3} \right) (L_{gr}/L_a)^{2/3} \right]}. \label{A23}
\end{eqnarray}
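The coefficients above arise as the residues of a standard partial fraction decomposition. Sketching the computation: with $u = \sigma - \sigma_a$ and $g = B u^2 + D$ one has $du/u = dg/2(g-D)$, while $1 - \kappa^2 L_{gr}^2 g^3 = -\kappa^2 L_{gr}^2 \prod_{i=1}^3 (g - g_i)$, so that
$$\varphi(\sigma) = \frac{1}{6 \epsilon B L_{gr}^2} \int \! \frac{g \, dg}{\prod_{i=0}^3 (g - g_i)} + {\mathrm{const}} , \quad\quad A_i = \frac{g_i}{\prod_{j \neq i} (g_i - g_j)} ;$$
for instance $A_0 = D/(D^3 - (\kappa L_{gr})^{-2})$ reproduces the expression for $A_a$ quoted in (\ref{A23}).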
Notice that although both the exponents $x_{2,3}$ and the roots $g_{2,3}$ are complex, the last two factors in the product for $e^{\varphi(\sigma)}$ are real, and so is the expression for the scale factor. Also notice that $\sum_i x_i = 0$; hence if $\sigma \to \infty$ we have $e^ \varphi \to e^{ \varphi_\infty}$, allowing for the possibility of a naked singularity. This property also allows us to rewrite the scale factor more explicitly as
\begin{eqnarray}
e^{ \varphi(\sigma)} = e^{ \varphi_{\infty}} \, (\sigma - \sigma_a)^{2 x_a} \, (\sigma - \sigma_b^+)^{x_1} (\sigma - \sigma_b^-)^{x_1} \prod_{i=2}^3 \left[ (\sigma - \sigma_a)^2 + (D - g_i) / B \right]^{x_i} , \label{scale factor dmn}
\end{eqnarray}
where $\sigma_b^- = 0$ and $\sigma_b^+ = 2 \sigma_a$, with the constant $\sigma_a$ being given by (\ref{sigma a fixed}). The singular behaviour of the scale factor (and hence of the correlation length) near the vacua (i.e. the critical points) is now evident.
It is worthwhile to mention that the particular (degenerate) case $D=0$ leads to the superpotential:
\begin{eqnarray}
W(\sigma)=E(\sigma-\sigma_M)^3,\quad\quad \ E>0,
\end{eqnarray}
which has different vacua structure and in fact it is \emph{not any more partially self-dual}. The duality transformation in this case has the form:
\begin{eqnarray}
\sigma-\sigma_M=\sqrt{\frac{1}{2L_{gr}E}}\frac{1}{\sqrt{\tilde\sigma_M-\tilde\sigma}}, \ \quad\quad \tilde\sigma_M>\tilde\sigma,\label{aaaa}
\end{eqnarray}
which allows us to deduce the explicit form of the corresponding dual superpotential:
\begin{eqnarray}
\tilde W(\tilde\sigma)=\tilde E\left(\tilde\sigma_M-\tilde\sigma\right)^{3/2},\quad \;\;\;\;\;\;\tilde E=2^{3/2}\left(\frac{E}{L_{gr}}\right)^{1/3}.\nonumber
\end{eqnarray}
Evidently we have an example of a duality transformation that preserves \emph{neither} the shape of the superpotential nor the values of its parameters.
Let us also mention that the examples we have studied in this subsection do not exhaust all the possible partially self-dual superpotentials with one or two type (a) vacua. Another physically interesting example of pairs of NMG's is given by the following periodic superpotential:
\begin{equation}
W(\sigma) = B \left[ D - \cos (\alpha \sigma) \right] , \quad\quad B < 0 , \label{W cos}
\end{equation}
whose vacua structure (with \emph{two} type (a) vacua), duality properties and also certain features of the phases of its dual $pCFT_2$ are described in the Appendix below.
The construction of examples of self-dual and partially self-dual NMG's based on superpotentials having more than \emph{two type (a) non-degenerate} critical points, together with the explicit forms (\ref{til sigma gmn}) of the corresponding duality transformations, involves a relatively large number of W-parameters (i.e. the matter field couplings $g_k$, such as $B$, $D$, etc.). It represents a rather complicated open problem and requires a better understanding of the group properties of the coupling $\sigma$-transformations and of the group structure behind our definition of
partial self-duality, as well as further investigation of the group-theoretical nature of the W-parameter transformations $\til g_k=\til g_k(g_k)$, see (\ref{dual param. gmn}).
\subsection{Unitary consistency of duality}
An important test for the physical consistency of the pairs of dual vacua, i.e. pair of AdS$_3$'s of dual radii $L_a$ and $\til L_a=L^2_{gr}/L_a$, is the verification of whether and under what conditions (if any) they both belong to the same BHT negative unitary window (\ref{bhtun}).
Let us first briefly recall the content of the BHT unitarity conditions for the NMG models \cite{1,more}. Remember that a negative value of $m^2_{\sigma}(A)$ for scalar fields (tachyons) in AdS$_3$ backgrounds, which appears in the dimensions of the relevant operators, does not cause problems when the Breitenlohner-Freedman (BF) condition \cite{BF},
\begin{eqnarray}
\Lambda_{{\mathrm{eff}}}^A\le m_{\sigma}^2(A), \label{A3}
\end{eqnarray}
is satisfied. The unitarity of the purely gravitational sector of the NMG model (\ref{acao}) requires that the following Bergshoeff-Hohm-Townsend (BHT) conditions \cite{1,more}:
\begin{eqnarray}
m^2\left(\Lambda_{{\mathrm{eff}}}^A-2\epsilon m^2\right)>0,\quad\quad
\Lambda_{{\mathrm{eff}}}^A\le M_{gr}^2(A)=-\epsilon m^2+\frac{1}{2}\Lambda_{{\mathrm{eff}}}^A, \label{bht}
\end{eqnarray}
hold. For each vacuum $\sigma_A$ the BHT parameter $ \lambda_A$ can be expressed as follows:
\begin{eqnarray}
&& \lambda_a= \lambda(\sigma^*_a)=\frac{ \kappa^2}{2m^2}V(\sigma^*_a)=\frac{L^2_{gr}}{L^2_a}\big(\frac{L^2_{gr}}{L^2_a}-2\big), \label{launit}
\end{eqnarray}
which for the type $(b)$ vacuum, i.e. $W_{\pm}^2=\frac{2\epsilon m^2}{ \kappa^2}$, reproduces the lower bound $ \lambda=-1$ of BHT-condition (\ref{bhtun}).
In order to derive the unitarity restrictions on the generic type $(a)$ vacuum we introduce the notation $q=\frac{\Lambda_{eff}^{a}}{\Lambda_{eff}^{b}}=\frac{ \kappa^2 W_{*}^2}{2\epsilon m^2}$. Then we have $ \lambda_{(a,b)}=q(q-2)\equiv \lambda(q)$,
which makes evident that $ \lambda(q)= \lambda(2-q)$. Therefore the $ \lambda_a$ values for which the unitarity condition (\ref{bhtun}) is satisfied impose restrictions on the allowed $L_a$ values:
\begin{eqnarray}
0\leq \frac{L^2_{gr}}{L^2_a} \le 2, \quad\quad \epsilon=-1, \quad\quad m^2<0 \label{bhtlaa}
\end{eqnarray}
and consequently on the central charges (\ref{ch}) of the corresponding CFT's. The type (b) NMG vacua are known to be always unitary \cite{nmg}, with $ \lambda=-1$, and whether they represent UV or IR critical points of the dual pCFT$_2$ depends on the sign factor only: UV for $\epsilon=-1$, since we have $y_b>0$, and IR for $\epsilon=1$. The properties of the type (a) critical points (UV or IR) depend on both the sign of $\epsilon$ and the particular form of the matter superpotential, as one can see from eq.(\ref{sdim}). The unitarity of the NMG-matter model is still an open problem, and it requires further analysis of the linear fluctuations around the DW's relating, say, two unitary BHT-vacua from the negative \emph{BHT-unitary window}: $-1\leq \lambda<0$, $\epsilon=-1$, $m^2<0$. We are however obliged to require that at least all the NMG-matter model's vacua are BHT-unitary.
In the context of the NMG duality transformations, when applied to the critical points $\til L_a = \frac{L_{gr}^2}{L_a}$, we impose an additional condition, namely that the ``dual'' scales $\til L_a$ and $L_a$ both belong to the same (negative) BHT unitary window (\ref{bhtun}). Taking into account eqs.(\ref{bhtlaa}) and (\ref{dual scales}), we conclude that the NMG duality (\ref{def dual}) is compatible with the NMG unitarity only when the following conditions are fulfilled :
\begin{eqnarray}
\frac{L_{gr}}{\sqrt{2}} \leq \til L_a \leq L_{gr} \leq L_a \leq L_{gr} \sqrt{2} \label{unidual}
\end{eqnarray}
Hence, when $L_a$ and its dual scale $\til L_a$ both belong to the finite interval $(L_{gr}/\sqrt{2}, L_{gr} \sqrt{2})$, they describe dual pairs of unitary NMG vacua.
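The derivation of (\ref{unidual}) is elementary: $\lambda(q)=q(q-2)$ automatically satisfies $\lambda \geq -1$, while $\lambda<0$ requires $0<q=L_{gr}^2/L_a^2<2$, i.e. $L_a > L_{gr}/\sqrt{2}$. Imposing the same condition on the dual scale $\til L_a = L_{gr}^2/L_a$ gives $L_a < L_{gr}\sqrt{2}$, and with the (conventional) labelling $L_a \geq L_{gr} \geq \til L_a$ one arrives at the chain of inequalities (\ref{unidual}).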
\subsection{On the group properties of partial self-duality}
One of the main features of the strong-weak coupling duality is that in the self-dual 2-,3- and 4-dimensional (supersymmetric) QFT's, it is always realized as an inversion transformation and more generally as fractional linear transformations belonging to certain (discrete)\footnote{i.e. of $SL(2,Z)$ as for example in the cases of models having discrete spectrum of energies or/and charges- electric and magnetic etc.}
subgroups of $SL(2,C)$ \cite{seiberg,seibit,cardy-itz}. It is therefore important to verify whether these well known properties of the QFT's duality (or certain limits of them) take place in the particular examples of $pCFT_2$'s duals to (pairs of) NMG models with appropriately chosen superpotentials (\ref{def dual}). The question about the gravitational $d=3$ NMG meaning of the $d=2$ conditions of strong-weak coupling duality symmetries in the considered two dimensional $pCFT$'s is also addressed.
Let us recall that the requirements on the NMG's superpotentials (\ref{def dual}), that select holographic self-dual $pCFT_2$'s, have been introduced by extending the ``critical'' duality transformation $\til L_A = \frac{L_{gr}^2}{L_A}$ (at each critical point $\sigma^*_A$) to its ``off-critical'' equivalent (\ref{def dual}). As a consequence we have deduced the explicit form (\ref{int s til}) of the corresponding coupling transformations $\til \sigma =\til \sigma (\sigma)$, which keep invariant the central charges, the central functions and the s.p. of the reduced free energy, but in principle change the form of the exact holographic $\beta$-functions, according to eqs.(\ref{dual beta}). Notice that the $L_A$ transformation (and the $W$ one as well) represents a particular $G_L\in GL(2,R)$ transformation, i.e.
\begin{equation}
\til L_A=\frac{aL_A+b}{cL_A+d},\quad\quad G=\left( \begin{array}{cc}
a & b \\
c & d
\end{array} \right),\quad\quad G_L=\left( \begin{array}{cc}
0 & L^2_{gr} \\
1 & 0
\end{array} \right),\quad \quad G^{-1}_L=\left( \begin{array}{cc}
0 & 1 \\
\frac{1}{L^2_{gr}} & 0
\end{array} \right) \label{fraclin}
\end{equation}
By introducing their dimensionless counterparts, say $l_A=L_A/L_{gr}$, we indeed recover the well known standard large-small radii $Z_2$ inversion transformation:
\begin{equation}
\til l_A= 1/l_A, \quad\quad\quad \text{i.e.} \quad G_I=\left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right)=G^{-1}_I,
\end{equation}
such that the large $l_A\gg 1$ (i.e. $L_A\gg L_{gr}$) are mapped to the very small $\til l_A\ll 1$ ones (in the $L_{gr}$ units of length).
We next consider the problem of the similarities and the differences between the group properties of the particular strong-weak coupling $pCFT_2$ duality transformations (\ref{sdu}) and (\ref{til sigma gmn}), present in the specific examples (with one type (a) vacuum) of self-dual (SD) and partially self-dual (PSD) models studied in Sect.3.2.
\textit{Self-dual models.} The corresponding SD coupling transformation is almost identical to the $L_A$ one:
\begin{equation}
\til \sigma=\frac{\sigma^2_+}{\sigma} ,\quad\quad \sigma_+ = 1 / \sqrt{ \kappa L_{gr}B} \quad\quad \text{with} \quad\quad \sigma_+ = \til \sigma_+ ,\label{sdtr}
\end{equation}
which takes the standard inversion form $\til u_{sd}=1/ u_{sd}$ for the rescaled coupling $u_{sd}=\frac{\sigma}{\sigma_+}$. Notice that the strong couplings $\sigma\gg \sigma_+$ are mapped to the weak ones $\til \sigma\ll \sigma_+$. An important feature of this self-dual model is that the above SD transformations leave invariant \footnote{together with the free energy, central function and the anomalous dimensions} the RG equation:
\begin{eqnarray}
\frac{du_{sd}}{dl} = \frac{4\epsilon}{\kappa^2 u_{sd}}\big(1-u^4_{sd}\big)=-\beta_{sd}(u_{sd}), \label{rgsd}
\end{eqnarray}
and the form of the corresponding exact $\beta$-function, i.e. we have $\beta_{sd}(u)=\beta_{sd}(\til u)$ as well. It is important to mention that this $\beta_{sd}$-invariance property is indeed consistent with the general covariance requirement (\ref{dual beta}). It reflects the very particular form of our SD superpotential and of the related $\beta_{sd}$.
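The invariance can be checked directly: under $\til u_{sd} = 1/u_{sd}$ one has
$$\frac{d\til u_{sd}}{dl} = -\frac{1}{u_{sd}^2} \, \frac{du_{sd}}{dl} = \frac{4\epsilon}{\kappa^2} \, \frac{u_{sd}^4 - 1}{u_{sd}^3} = \frac{4\epsilon}{\kappa^2 \til u_{sd}} \big( 1 - \til u_{sd}^4 \big),$$
i.e. the dual coupling obeys the RG equation (\ref{rgsd}) with the same $\beta_{sd}$.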
\textit{Partially self-dual models.} Let us consider the PSD transformation (\ref{til sigma gmn}) for the square of the coupling $u=(\sigma-\sigma_a)^2$, i.e.\footnote{ we simultaneously rescale the $B$ and $D$ parameters in such a way that the equivalent rescaling of the superpotential $w= \kappa L_{gr} W$ leads to the standard inversion form of the duality condition (\ref{def dual}): $\til w(\til \sigma)=1/w(\sigma)$.}:
\begin{equation}
\tilde u = \left(\frac{1}{d^2}\right) \frac{u}{d+bu} , \quad\quad u>0, \quad d=( \kappa L_{gr})^{2/3}D, \quad\quad b =( \kappa L_{gr})^{2/3}B,\quad d>0, \quad b>0 .\label{psdconf}
\end{equation}
It is then evident that it represents a \emph{two-parameter subgroup} of the (general) fractional linear transformations $G_{psd}(d,b)\in GL(2,R)$:
\begin{eqnarray}
& G_{psd}=\left( \begin{array}{cc}
1/d^2 & 0 \\
b & d
\end{array} \right)=\left( \begin{array}{cc}
1/d^2 & 0 \\
0 & d
\end{array} \right)\left( \begin{array}{cc}
1 & 0 \\
b/d & 1
\end{array} \right) ; \; G^{-1}_{psd}=\left( \begin{array}{cc}
d^2 & 0 \\
-bd & 1/d
\end{array} \right) =\left( \begin{array}{cc}
1/\til d^2 & 0 \\
\til b & \til d
\end{array} \right), \nonumber\\ \label{psdgroup}
\end{eqnarray}
composed as a semi-direct product of one specific ``dilatation'', with ${\mathrm{Det}} \, G_{dil}=1/d$, and the special conformal transformation\footnote{remember that one can always realize the special conformal transformation as a product of three consecutive transformations: inversion, translation by $b/d$ and one more inversion.} of parameter $b/d$, with the well known group laws: $d_3=d_1d_2$ and $b_3=b_2d_1+b_1/d^2_2$. Notice that, differently from the SD transformation (i.e. the simple inversion), the inverse element $G^{-1}_{psd}$ in the PSD case \emph{does not coincide} with $G_{psd}$. It instead provides a group-theoretical meaning of the duality transformations (\ref{dual param. gmn}) for the parameters of the superpotential, which according to our general duality formula (\ref{int s til}) parametrize the group of the duality transformations: $d \til\sigma/d \sigma = - \kappa L_{gr} \, \til W(\til \sigma)$. Hence the parametric form of the partially self-dual superpotential (\ref{W dmn}) is determined by the PSD duality group elements $G_{psd}(b,d)\in GL(2,R)$.
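The quoted group laws can be verified by the explicit matrix product:
$$G_{psd}(b_1,d_1) \, G_{psd}(b_2,d_2)=\left( \begin{array}{cc} 1/d_1^2 & 0 \\ b_1 & d_1 \end{array} \right) \left( \begin{array}{cc} 1/d_2^2 & 0 \\ b_2 & d_2 \end{array} \right)=\left( \begin{array}{cc} 1/(d_1 d_2)^2 & 0 \\ b_1/d_2^2 + d_1 b_2 & d_1 d_2 \end{array} \right),$$
i.e. indeed $d_3=d_1 d_2$ and $b_3=b_2 d_1 + b_1/d_2^2$.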
Thus, our particular choice of the PSD superpotential (\ref{W dmn}) introduces certain group structure on the space of W-parameters, representing the set of couplings in the potential $V(\sigma,G_{pds})$ of the 3d matter field of the NMG-matter model. The superpotential $\til W(\til \sigma, G^{-1}_{psd})$ of the second member of the dual pair of NMG models is then parametrized by the corresponding inverse elements $G^{-1}(b,d)$. This is in fact the NMG \emph{gravitational counterpart} of the $d=2$ QFT's self-duality requirements. It is also in the origin of the important property of the partially self-dual models, namely that the $\beta-$ functions of such pairs of models have the same form, i.e. the PSD transformations (\ref{til sigma gmn}) and (\ref{psdconf}) are keeping invariant the form of the corresponding RG equation:
\begin{eqnarray}
\frac{d\sigma}{dl} = \frac{6\epsilon B (\sigma-\sigma_a)}{\kappa^2 (B(\sigma-\sigma_a)^2+D)}\big(1- \kappa^2 L^2_{gr}(B(\sigma-\sigma_a)^2+D)^3\big)=-\beta_{psd}(\sigma;\sigma_a,B,D), \label{rgsd}
\end{eqnarray}
but with its W-parameters $\sigma_a, B,D$ replaced by their duals: $\til \sigma_a$, $\til B$ and $\til D$. Thus, the RG's equation of the dual $pCFT_2$ has the expected form $d\til \sigma/dl=-\beta_{psd}(\til \sigma;\til \sigma_a,\til B,\til D)$. The ''slight'' difference between the \emph{invariance conditions} of the RG equations and of forms of the $\beta$-functions of the considered SD and PSD models has its origin in the different group properties of their coupling transformations (\ref{sdu}) and (\ref{til sigma gmn}).
The specific ``fractional-linear'' form of the PSD transformation (\ref{psdconf}) requires further investigation of the problem of whether strong couplings $u$ are mapped to weak ones $\til u$ for all the values of the parameters $b$ and $d$. Let us first note one particular feature of our PSD transformation (\ref{psdconf}), namely that $ u(0)=0=\til u(0)$ and $\til u(u \to \infty)=1/bd^2$, and therefore it maps the positive semi-axis $u\in (0,\infty)=R_+$ to the finite interval $\til u \in (0,1/bd^2)$. It is then clear that in order to transform the large values of $u$ (and of $\sigma$ as well) into the small ones of $\til u$, and vice-versa, we have to impose the following restriction on the values of the parameters $b$ and $d$:
\begin{equation}
bd^2\gg 1 \quad\quad \text{or equivalently} \quad\quad BD^2\gg \frac{1}{ \kappa^2 L^2_{gr}}=2|m^2|/ \kappa^2 . \label{strwe}
\end{equation}
\noindent
\textit{Symmetries of RG equations vs. Duality.} As we have shown, the ``duality invariance'' of the RG equations and of the form of the holographic $\beta$-functions turns out to be one of the main features specific to the class of the SD and PSD models only\footnote{ in the case of generic duality transformations (\ref{def dual}) and (\ref{dutrans}), the pairs of dual $\beta$-functions are related by eq.(\ref{dual beta}) and the corresponding RG equations do not remain invariant.}. It is important however to mention that the SD and PSD duality transformations do \emph{not exhaust} all the symmetries of the RG equation. In fact one can find more symmetries of the corresponding RG equations, that preserve \emph{neither} the central function nor the free energy. Therefore the invariance of RG equations under a kind of strong-weak coupling transformations \emph{can't be considered} as a definition of (partial) self-duality of the $pCFT_2$'s under investigation. We shall give an example of such ``additional'' symmetries of the RG eq.(\ref{rgsd}) for the SD model. Let us first rewrite it (for $\epsilon=-1$) in the following equivalent form:
\begin{eqnarray}
\frac{dg}{dl}=g^2-a^2, \quad\quad g=8B^2L^2_{gr}\sigma^2,\quad\quad a=\frac{8BL_{gr}}{ \kappa}.\label{eq.g}
\end{eqnarray}
Apart from the already discussed duality symmetry $\til g =\frac{a^2}{g}$, it is also invariant under the specific fractional linear transformations
\begin{eqnarray}
g(l)\rightarrow g'(l)=\frac{\cosh(a\gamma)g(l)-a \sinh(a \gamma)}{-\frac{\sinh(a\gamma)}{a}g(l)+\cosh(a\gamma)},\label{uly}
\end{eqnarray}
where $\gamma\in R$ is an arbitrary real parameter. These transformations\footnote{notice that the corresponding transformations of the original ``coupling variable'' $\sigma=\frac{2\sqrt{2g}}{a\kappa}$ also form an $SO(1,1)$ group} can be recognized as an $SO(1,1)$ subgroup of $SO(2,1)$. In spite of the fact that for certain restrictions on the parameters $a$ and $\gamma$ they map strong to weak couplings, they \emph{do not} keep invariant the corresponding central functions, anomalous dimensions and free energy, and therefore do not represent duality transformations at all.
It is worthwhile to mention that the above eq.(\ref{eq.g}) also appears as the RG equation of two rather different models: (a) the RG eq. of the $pCFT_2$ dual to the NMG model with linear superpotential (see ref.\cite{nmg}); (b) the well known one-loop RG equation with the perturbative $\beta$-function given by (\ref{pertrg}), specific for the perturbations by the so called $\Phi_{13}$ relevant operators of the minimal $CFT_2$'s \cite{cardy,x}. In both cases, however, neither its inversion symmetry $\til g =\frac{a^2}{g}$ nor the above considered $SO(1,1)$ symmetry (\ref{uly}) acts as a proper strong-weak coupling duality.
\textit{Few comments and relevant open questions:}
$\bullet$ The two simple examples of self-dual and partially self-dual superpotentials, which generate very specific (limits of) duality groups and give rise to self-dual pCFT$_2$'s, do not represent all the possible (partial) self-duality transformations. One could consider, for example, a simple three-parameter quadratic superpotential, which turns out to generate (within the NMG context considered in this section) more general $SL(2,R)$ duality transformations.
$\bullet$ The most interesting cases of explicit realizations of the self-duality in the mentioned 2d and 4d QFT models (see for example \cite{seiberg,seibit,cardy-itz}), which have an $SL(2,Z)$ (sub)group as duality symmetry, are known to involve a complex valued coupling constant (or equivalently two real couplings). In the case of the considered NMG-matter models, this would correspond to specific superpotentials describing the interactions of two scalar matter fields. The problem of the generalization of the concept of NMG duality (\ref{def dual}) to the case of complex fields, based on an appropriate I-st order system of DW's equations, and of the corresponding construction of the two $\beta$-functions in terms of these superpotentials, is under investigation.
\setcounter{equation}{0}
\section{Holographic RG flows and self-duality} \label{Holographic RG flows}
The off-critical NMG$_3$/QFT$_2$ conjecture, based on the holographic RG eqs.(\ref{rg}), is a natural generalization of the standard ($m^2\rightarrow \infty$) holographic RG \cite{VVB,rg}. Let us recall its content: there exists a family of QFT$_2$'s such that their near-critical behaviour and phase structure admit a non-perturbative geometrical description in terms of DW solutions of the NMG-matter model (\ref{acaoo}) with an appropriately chosen superpotential $W(\sigma)$. The first part of this statement concerns the identification of the NMG vacua $(\sigma^*_A,L_A,y_A)$ with the critical $CFT_2$-data of the dual QFT$_2$ as we have done in Sect. 2 above. Its second part is about the explicit relation between the set of ``consecutive'' DW solutions
$$DW_{k,k+1}=\Big(\sigma(z),e^{\varphi(z)};z\in R \quad|\quad\sigma^*_k,L_k\rightarrow \sigma^*_{k+1},L_{k+1}\Big),\;\;\;\;\sigma \in R,$$
and all the $QFT_2$ phases $p^{ml}_{k,k+1}=(\sigma^*_{k}(IR),\sigma^*_{k+1}(UV))$, described by the behaviour of the coupling constant $\sigma_{k,k+1}(l)$ and of the s.p. of the free energy $F_s(\sigma)\approx e^{-\varphi(\sigma)}$. In what follows, our attention is
concentrated on the properties of the pairs of neighbouring DW's with a common boundary ($\sigma^*_{{\mathrm{UV}}},L_{{\mathrm{UV}}},y_{{\mathrm{UV}}}$) that have different (IR)-horizon b.c.'s, say for example $(\sigma^*_{{\mathrm{IR}}},\sigma^*_{{\mathrm{UV}}})$ and $(\sigma^*_{{\mathrm{UV}}},\infty)$. They represent the main ingredient in the description of the phase transitions and of the nature of the holographic RG flows \cite{nmg,oldholo}.
\subsection{The phases of the self-dual superpotential}
Let us recall which of the solutions of the RG eqs. (\ref{rg}) and (\ref{fs}) -- defined within a given interval, say $\sigma\in (\sigma_{+},\infty)$ or $\sigma\in (\sigma_{-},\sigma_s)$, etc. -- can be identified as describing particular \emph{massive RG flows} in the related QFT$_2$. The main requirement is that the running coupling $|\sigma(l)-\sigma_{+}|$ reaches its maximal value at a \emph{finite} RG distance, for example $\sigma(L_{max})=\infty$ or $\sigma(L_{max})=\sigma_{max}=|\sigma_s - \sigma_+ |$, etc., which imposes that the correlation length, say $\xi(\infty)=\xi_{max}= 1/M_s$, always has a finite maximal value. Then its inverse defines the smallest mass gap in the energy spectrum, and as a consequence of eqs. (\ref{fs}) the corresponding 2-point correlation function manifests an \emph{exponential decay} $e^{-M_{ms}|x_{12}|}$, typical for the IR limit of the propagator of a free massive particle. This behaviour has to be compared to the one corresponding to the \emph{massless RG flows}, where the maximal distance $|\sigma_{{\mathrm{IR}}}-\sigma_{{\mathrm{UV}}}|$ from the starting (at $L_*=0$) UV critical point is reached for $L_{max}=\infty$, i.e. $\xi(L_{max})= \infty$, and therefore no mass gap exists, since $M^2=0$. As a result, the correlation functions at an IR critical point have power-like (scale invariant) behaviour.
Examples of such massive phases are found in the self-dual superpotential $W=B\sigma^2$. Taking $B>0$, we have two massive phases $p_{{\mathrm{flat}}}^{\mathrm{ms}}=(0,\sigma_{+})$ and $p_{{\mathrm{n.s.}}}^{\mathrm{ms}} =(\sigma_{+},\infty)$, described holographically by two DW's, one of $E_3/AdS_3$ type and the other of AdS$_3$/n.s. type, with a common boundary at the type (b) vacuum $\sigma_+ = 1 / \sqrt{ \kappa L_{gr}B}$. We consider here only positive values of $\sigma$ because of the $Z_2$ symmetry of the superpotential. The massive nature of these phases can be read off from the correlation length $\xi(\sigma)$, which can be found through the corresponding RG equation:
\begin{eqnarray}
\frac{d\sigma}{dl}=-\beta_{qp}(\sigma)= \frac{4\epsilon}{\kappa^2\sigma}\big(1-\kappa^2L^2_{gr}B^2\sigma^4\big) . \label{qpot}
\end{eqnarray}
It has as solution $\sigma^2(l)=\sigma^2_{+} \, \coth(l_0-\frac{y_{+}l}{2})$, leading to
\begin{eqnarray}
e^{-l}\approx\xi(\sigma) = \left[ \frac{(\sigma^{2}/\sigma_+^2) + 1}{(\sigma^{2}/\sigma_+^2) - 1} \right]^{\frac{1}{y_{+}}} \left[ \frac{(\sigma^{2}_0 / \sigma_+^2) - 1}{(\sigma^{2}_0 / \sigma_+^2) + 1} \right]^{\frac{1}{y_{+}}},\quad y_{+}=-\frac{16\epsilon BL_{gr}}{\kappa}. \label{flatsol}
\end{eqnarray}
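A quick way to obtain this solution (and then (\ref{flatsol})) is to pass to the variable $s = \sigma^2/\sigma_+^2$, in terms of which eq. (\ref{qpot}) becomes
$$\frac{ds}{dl} = \frac{2\sigma}{\sigma_+^2} \, \frac{d\sigma}{dl} = \frac{8\epsilon B L_{gr}}{\kappa} \, (1 - s^2) = -\frac{y_{+}}{2} \, (1 - s^2),$$
solved by $s(l) = \coth(l_0 - \frac{y_{+} l}{2})$; inverting $\coth$ and exponentiating then yields (\ref{flatsol}), with $\sigma_0 = \sigma(l=0)$ fixing the integration constant $l_0$.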
The expression (\ref{flatsol}) is singular at $\sigma_+$, and the divergence (for $\epsilon = -1$) of the scale factor shows that it is a UV vacuum. On the other hand, the correlation length takes \emph{finite} values both at the singular point $\sigma_s = 0$, which is a flat vacuum in the weak-coupling ``massive-flat'' phase, and at $\sigma \to \infty$, in the standard strong-coupling massive phase. It is easy to calculate the corresponding mass gaps, say
$$ M_{{\mathrm{n.s.}}}(\sigma_0)=1/\xi(\infty)= \big(\kappa L_{gr}B\sigma^{2}_0-1\big)^{\frac{1}{y_{+}}}\big(\kappa L_{gr}B\sigma^{2}_0+1\big)^{-\frac{1}{y_{+}}} , $$
thus confirming the massive nature of $p_{{\mathrm{flat}}}^{\mathrm{ms}} = (0,\sigma_{+})$ and $p_{{\mathrm{n.s.}}}^{\mathrm{ms}} = (\sigma_{+},\infty )$.
The duality transformation here is known from Sect.\ref{self-duality} to be $\til \sigma = \sigma_+^2/ \sigma$, leaving the superpotential invariant: $\til W(\til\sigma) = B \til\sigma^2$, as well as the vacuum $\sigma_+ = \til\sigma_+$. But the singular points are ``exchanged'' through $\til\sigma (\sigma_s = 0) = \infty$ and $\til\sigma_s (\sigma \to \infty) = 0$, and so there is a correspondence between the two massive phases with strong and weak coupling: $M_{{\mathrm{n.s.}}}(\sigma_0)=M_{{\mathrm{flat}}}(\til\sigma_0)$.
\subsection{Phase transitions and partial self-duality}
The phase structure of the partially self-dual superpotential $W(\sigma) = [B (\sigma -\sigma_a)^2 + D]^{3/2}$, studied in Sect.3.2., depends on the range of the values of the parameters $B$ and $L_a^{-1} = \kappa D^{3/2}$, as shown in fig.1. In the case (I.a), corresponding to $L_a > L_{gr}$ and
$B> 0$, we have an IR critical point at $\sigma_a$ and two UV critical points at $\sigma_b^\pm$. There are four DW's, which describe the four different phases of the corresponding dual $QFT_2$ in this region of the parameter space:
\begin{eqnarray}
p^{\mathrm{ms}}_{{\mathrm{n.s.}}} = (-\infty \; , \; \sigma_b^-) \; ; \;\;\;\; p^{\mathrm{ml}}_{{\mathrm{UV}}/{\mathrm{IR}}} = (\sigma_b^- \; ,\; \sigma_a) \; ; \;\;\;\; p^{\mathrm{ml}}_{{\mathrm{IR}}/{\mathrm{UV}}} = (\sigma_a \; , \; \sigma_b^+) \; ; \;\;\;\; p^{\mathrm{ms}}_{\mathrm{n.s.}} = (\sigma_b^+ \; , \; \infty) .
\end{eqnarray}
The nature -- massive (ms) or massless (ml) -- of the phases can be easily read off from the analytic properties of the scale factor (\ref{scale factor dmn}), which determines the correlation length of the dual $pCFT_2$:
\begin{equation}
\xi(\sigma) \approx \left( \frac{\sigma_0 - \sigma_a}{\sigma - \sigma_a} \right)^{\frac{1}{y_a}} \left( \frac{\sigma_0 - \sigma_b^+}{\sigma - \sigma_b^+} \right)^{\frac{1}{y_+}} \left( \frac{\sigma_0 - \sigma_b^-}{\sigma - \sigma_b^-} \right)^{\frac{1}{y_-}} \prod_{i=2}^3 \left[ \frac{(\sigma_0 - \sigma_a)^2 + (D - g_i) / B}{(\sigma - \sigma_a)^2 + (D - g_i) / B} \right]^{- x_i} . \label{correl length dmn}
\end{equation}
The critical exponents are given by eqs.(\ref{A23}). They also satisfy the remarkable NMG ``resonance'' condition
$$\frac{1}{y_a}+\frac{1}{y_+}+\frac{1}{y_-}=\sum_{i=2}^{3}x_i,$$
that turns out to hold for all the QFT models obtained by $NMG_3$ holography \cite{nmg}. The ``initial condition'' $\sigma_0 \equiv \sigma|_{l = 0}$ of the RG rescaling can be further fixed by requiring that $L_*^{(0)} \approx 1$.
As we have shown in Sect.\ref{Examples of dual and self-dual}, for $\epsilon = -1$ we have $y_a < 0$ and consequently $\xi(\sigma_a) \to 0$; therefore $\sigma_a$ is an IR critical point, while for the (b) type critical points $y_\pm > 0$, hence $\xi(\sigma_b^\pm) \to \infty$ and $\sigma_b^\pm$ are UV critical points. Notice that the finite values of $\xi(\sigma)$ when $\sigma \to \pm \infty$ and, as a consequence, the existence and properties of the massive phase are due to the above mentioned NMG resonance condition, i.e. the fact that the sum of the critical exponents $\nu_k$ (of all the critical points) vanishes. The corresponding values of the mass gaps for the massive phases can be evaluated at these limits $\sigma \to \pm \infty$, which correspond to naked singularities in the NMG-geometry. For example, the strong-coupling massive phase $p^{\mathrm{ms}}_{\mathrm{n.s.}} = (\sigma_b^+ \; , \; \infty)$ is characterized by the asymptotic value of the correlation length (\ref{correl length dmn}), which determines the smallest mass in the dual model:
\begin{equation}
M_{({\mathrm{ms}})} \approx \xi^{-1}|_{\sigma \to \infty} = \left( \sigma_0 - \sigma_a \right)^{\frac{1}{y_a}} \left( \sigma_0 - \sigma_b^+\right)^{\frac{1}{y_+}} \left( \sigma_0 - \sigma_b^- \right)^{\frac{1}{y_-}} \prod_{i=2}^3 \left[ (\sigma_0 - \sigma_a)^2 +(D - g_i)/B \right]^{-x_i}
\end{equation}
We next describe the duality between the strong- and weak-coupling phases of the considered partially self-dual pCFT$_2$ model, i.e. how the duality transformation (\ref{til sigma gmn})-(\ref{dual param. gmn}) effectively maps the phases of this model. As we have demonstrated in Sect.3.2., the phases \emph{dual} to the above considered (I.a) case are those of the (II.b) model (see fig.2):
\begin{eqnarray}
p^{\mathrm{ms}}_{flat} = (\til\sigma_M^- , \til\sigma_b^-) \; ; \;\;\;\; p^{\mathrm{ml}}_{{\mathrm{UV}}/{\mathrm{IR}}} = (\til\sigma_b^- ,\til \sigma_a) \; ; \;\;\;\; p^{\mathrm{ml}}_{{\mathrm{IR}}/{\mathrm{UV}}} = (\til\sigma_a , \til \sigma_b^+) \; ; \;\;\;\; p^{\mathrm{ms}}_{flat} = (\til\sigma_b^+ , \til \sigma_M^+) .
\end{eqnarray}
i.e. of our original partially self-dual model, but now with a different range of the values of the parameters: $\til B < 0$ and $\til L_a < L_{gr}$. The correlation length $\til\xi(\til\sigma)$ has the same form (\ref{correl length dmn}) as above, but with the parameters exchanged by the duality according to eq.(\ref{dual param. gmn}). Notice that although $B$ changes its sign, the critical exponents do not, since the ratio $L_{gr}/\til L_a$ is now greater than unity. Recall that the corresponding ``dual massive'' phases correspond to non-singular E$_3$/AdS$_3$ DW solutions, with a mass gap given by
\begin{eqnarray}
& \til M_{\mathrm{ms}} \approx \til \xi^{-1} |_{\til \sigma \to \til \sigma_M^+} = \left( \frac{\til\sigma_M^+ - \til\sigma_a}{\til\sigma_0 - \til\sigma_a} \right)^{- \frac{1}{\til y_a}} \left( \frac{\til\sigma_M^+ - \til\sigma_b^+}{\til\sigma_0 - \til\sigma_b^+} \right)^{-\frac{1}{\til y_+}} \times \nonumber \\
& \quad\quad\quad\quad\quad\quad \times
\left( \frac{\til\sigma_M^+ - \til\sigma_b^-}{\til\sigma_0 - \til\sigma_b^-} \right)^{-\frac{1}{\til y_-}} \prod_{i=2}^3 \left[ \frac{(\til\sigma_M^+ - \til\sigma_a)^2 + (\til D - g_i) / \til B}{(\til \sigma_0 - \til\sigma_a)^2 + (\til D - g_i) / \til B} \right]^{\til x_i} . \nonumber
\end{eqnarray}
while in the (I.a) case they are related to the singular $AdS_3/n.s.$ DW's, interpolating between one $AdS_3$ vacuum and a naked singularity. The large values of the formerly unbounded coupling $\sigma$ are now mapped to the (small) finite values of $\til \sigma$ in the neighbourhood of the Minkowski vacua\footnote{ Notice that such vacua of $W(\sigma_M)=0$ do \emph{not} represent
``conformal critical'' points, but instead define a particular massive phase \cite{nmg},\cite {oldholo}.}. We can conclude that in the dual theory the ``infinitely strong'' couplings are mapped into finite values, both however corresponding to massive phases: hence the strong coupling massive phase is mapped to a certain ``dual'' weak coupling massive phase. The dual massless phases, on the other hand, are ``stretched'' by the duality transformation, as one can see from eq.(\ref{til s b dmn}): the interval $(\til\sigma_a, \til\sigma_b^\pm)$ is ``longer'' than its dual, for $L_a/L_{gr} > 1$.
Similar statements are valid for all the other pairs of dual models described in Sect.3.2.:
$$ {\mathrm{(I.a)}} \Leftrightarrow {\mathrm{(II.b)}} \; ; \;\;\; {\mathrm{(I.b)}} \Leftrightarrow {\mathrm{(II.a)}} \; ; \;\;\; {\mathrm{(III.a)}} \Leftrightarrow {\mathrm{(III.b)}}$$
Let us mention that the behaviour of the correlation length and the properties of the marginally degenerate cases $(III.a)$ and $(III.b)$, that in fact describe a pair of dual models with an infinite order phase transition at the critical point $\sigma_{a} = \sigma_b$ and having two massive phases, are quite similar to the ones of the NMG model of quadratic superpotential, studied in ref.\cite {oldholo}.
Few comments are now in order:
(a) the phase structure, the corresponding RG flows and the duality relations between different phases of our second example of a partially self-dual ``periodic'' superpotential (\ref{W cos}) (introduced in App.~A) are rather similar to the ones we have described in this subsection;
(b) the holographic RG flows in the pCFT$_2$ model dual to the NMG of quadratic superpotential
\begin{equation}
W(\sigma) = B (\sigma -\sigma_a)^2 + D ,\;\;\;\;\;\;\; D\neq 0
\end{equation}
can be easily found by applying the methods developed in Sect.3.1. and by using the results of refs. \cite{nmg}, \cite{oldholo}. Although (for $D\neq 0$) it is neither self-dual (as in the $D=0$ case) nor partially self-dual, it possesses a rich and interesting phase structure \cite{nmg,oldholo}. It is worthwhile to also mention the well known fact that it represents the near-critical behaviour of an arbitrary (even) superpotential.
\setcounter{equation}{0}
\section{Discussion}
The holographic RG methods, when applied to the NMG-matter models with appropriate superpotentials, provide important critical (about certain CFT$_2$'s) and off-critical (of the corresponding pCFT$_2$'s) data, which can be used for their identification with the already known perturbative and exact CFT$_2$ and pCFT$_2$ results \cite{x, bpz, fat,gms}. It is worthwhile to recall once more that all the information about the holographic RG flows and phase transitions in the QFT$_{2}$'s dual to the NMG model (\ref{acaoo}) is not sufficient for the complete identification of the pCFT$_2$ dual to a given NMG-matter model. One has to further consider the difficult problem of the construction of the off-critical correlation functions of 2d fields dual to the 3d matter scalar, by studying the linear fluctuations of the metrics and of the scalar field around the DW solutions \cite{gub, rg, japa, 8}. The real problem with the verification of the validity of the off-critical $(a)AdS_3/pCFT_2$ conjecture consists however in the comparison of the \emph{strong-coupling holographic} results, based on the exact $\beta$-functions, with the known \emph{perturbative}, near-critical calculations of the corresponding 2d models \cite{azz, cardy, x, fat, gms}. The construction of a particular class of strong-weak coupling self-dual pCFT$_2$ models, i.e. the holographic duals of selected pairs of NMG-matter models with partially self-dual superpotentials, described in Sects.\ref{Duality} and \ref{Holographic RG flows}, represents an important exception. In this case it becomes possible to compare the holographic non-perturbative results with the ones obtained by the conformal perturbation theory \cite{x,cardy}.
Another important problem concerning the $(a)$AdS$_3$/pCFT$_2$ correspondence, in the particular case of the NMG model (\ref{acaoo}), is related to the \emph{negative values} of the central charges (\ref{ch}) for $\epsilon=-1$ and $m^2<0$. These are usually interpreted as \emph{non-unitary} CFT$_2$'s. Let us assume that all these CFT$_2$'s, without any extra symmetries present, are described by the representations of two commuting Virasoro algebras, characterized by their central charges $c_L = c_R = c$, and the set of scaling dimensions and spins \cite{bpz}. In all the cases when $ c<0$, the corresponding CFT$_2$'s contain primary fields (states) of negative dimensions (and negative norms), and hence they represent non-unitary QFT$_2$'s \footnote{Some of them turn out to describe interesting 2d statistical models, as for example the one of central charge $c=- 22 / 5$, known as Lee-Yang edge singularity \cite{muss}.}. As it is well known, in the interval $0<c<1$ there exists an infinite series of ``minimal" \emph{unitary quantum models} corresponding to
$c^-_{{\mathrm{quant}}}(p)= 1 - 6 Q_p^2$, with $Q_p= \sqrt{\frac{p+1}{p}} -\sqrt{\frac{p}{p+1}}$ and $ p=3,4,5,...$, while
the models with $c > 25$ give rise to unitary representations used in the quantization of the Liouville model \cite{azz}: $c_{+}(b) = 1 + 6(b+\frac{1}{b})^2$, where the parameter $b$ is related to the Liouville coupling constant. On the other hand, the derivation of the Brown-Henneaux \cite{9} central charge formula $c=\frac{3L}{2G}$, as well as its NMG generalizations (\ref{ch}), are based on the ``Dirac quantization'' of the classical Poisson brackets of the Virasoro algebra, and on further identifying the classical central charge $c_{{\mathrm{class}}}$ for $L\gg l_{pl}$ with the ``quantum'' central charge $c_{{\mathrm{quant}}}$ of the ``dual'' boundary CFT$_2$. The well known fact, coming from the standard procedure of the Liouville model \cite{azz} and of the ``minimal'' model quantizations \cite{fat}, is that this classical central charge receives quantum corrections, i.e. starting from $c_{{\mathrm{class}}}^{\pm} = \pm 6b^2$ we get their ``corrected'', exact values $c_{{\mathrm{quant}}}^{\pm}=1\pm 6(b\pm\frac{1}{b})^2$.
In the classical limit $\hbar\rightarrow 0$ one obtains $c_{{\mathrm{quant}}}^-\rightarrow c_{\mathrm{class}}^-\approx-\infty$, i.e. the corresponding classical (and semiclassical) central charges are very big, \emph{negative} numbers \cite{fat}. Similarly, for the limits of the central charges of the Liouville's model \cite{azz}, we have $c_{\mathrm{class}}^+\approx\infty$. Hence the classical (and semi-classical) large negative central charges are a common feature of all the $c^-_{\mathrm{quant}} <1$ models and of their supersymmetric $N=1$ extensions. It is therefore important to bear in mind that given the values of the (semi-)classical limits of the central charges of certain class of CFT$_2$'s, further investigations of the limiting properties of the anomalous dimensions of the primary fields are also needed, in order to conclude whether such 2d CFT's belong to the non-unitary ($c^-_{\mathrm{quant}}<0 $) case, or else to the interval $0<c^-_{\mathrm{quant}}<1$, where unitary models are known to exist.
Our final comment concerns the eventual higher dimensional $d>3$ generalizations of the duality concepts and of the specific examples we have considered in the present paper. It should be stressed that the presence of the $R^2$ terms (specific for the NMG gravity) and the knowledge of the corresponding I-st order system of eqs.(\ref{sis}) were \emph{essential} in the derivation of our 3d NMG duality conditions (\ref{def dual}). Due to the specific form of the NMG central function, it is clear that the pure EH action coupled to scalar matter, and the corresponding dual pCFT$_{d-1}$, do not provide examples of dual and self-dual models (even in the 3-dimensional case); at least not in the context proposed in Sect.\ref{Duality} above. Therefore one has to look for appropriate higher dimensional ``higher curvature'' gravitational actions of Lovelock type, as for example the ones containing the Gauss-Bonnet term and/or specific combinations of cubic or quartic powers of the curvature tensors similar to the actions of Quasi-Topological gravities \cite{myers, oliva, mann}. As in the case of 3d NMG models studied in the present paper, the main ingredients of such holographic duality constructions are again the explicit forms of the corresponding $a$- and $c$-central functions, of the exact $\beta$-functions and of the holographic free energy. There exist many indications of how one can formulate an appropriate generalization of the considered NMG duality conditions in certain higher dimensional models, for which the holographic RG methods, based on the DW solutions and on the first order system of equations \cite{lovedw, loverg}, are well established. Our preliminary results \cite{sdual} provide convincing arguments that the NMG-like duality conditions (\ref{def dual}) can be realised only in a very particular class of higher dimensional gravity models: For $d=4$, i.e. for the construction of self-dual pCFT$_3$'s, the appropriate model allowing such partial self-dualities is the $d=4$ \emph{cubic} Quasi-Topological gravity \cite{oliva, lovedw, loverg}; while for the $d=5$ case it turns out to be the recently constructed \emph{quartic} Quasi-Topological Gravity, with the linear and the quartic terms only \cite{mann}.
\section{Appendix. Partially self-dual NMG's with periodic superpotential}
\label{App}
The vacua structure of the following superpotential
\begin{equation}
W(\sigma) = B \left[ D - \cos (\alpha \sigma) \right] , \quad\quad B < 0
\end{equation}
consists of two type (a) vacua at $\sigma_a^{(0)} = 0$ and $\sigma_a^{(\alpha)} = \pi /\alpha$, and a few type (b) ones (within the interval $\sigma\in (0, \pi/\alpha)$). We can further rewrite the parameters $B$ and $D$ in the equivalent form
\begin{equation}
B = - \frac{L_0 - L_\alpha}{2 \kappa L_0 L_\alpha} \; , \;\; D = \frac{L_0 + L_\alpha}{L_0 - L_\alpha} ,
\end{equation}
by introducing an obvious notation for the vacua scales $L_{0, \alpha}$.
The condition $B < 0$, i.e. $L_0 > L_\alpha$, implies that $D > 1$, hence Minkowski vacua or Janus-type geometries are excluded.
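Indeed, with this parametrization the values of the superpotential at the two type (a) vacua reproduce the corresponding scales:
$$W(\sigma_a^{(0)}) = B(D-1) = -\frac{1}{\kappa L_0} , \quad\quad W(\sigma_a^{(\alpha)}) = B(D+1) = -\frac{1}{\kappa L_\alpha} ,$$
in accordance with the identification $|W(\sigma_A)| = 1/\kappa L_A$.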
Using (\ref{int s til}), we have
\begin{equation}
\tan \left[ \frac{B \alpha \kappa L_{gr} \sqrt{ D^2 - 1} \, \til\sigma}{2} \right] = \sqrt{\frac{D+1}{D-1}} \, \tan \left[ \frac{\alpha \sigma}{2} \right] . \label{s til cos}
\end{equation}
This gives:
\begin{equation}
\til W(\til\sigma) = \til B \left[ \til D - \cos (\til \alpha \til\sigma) \right] ,
\end{equation}
where
\begin{equation}
\til B = - \frac{1}{ \kappa^2 L_{gr}^2 B ( D^2 - 1)} \; , \;\; \til D = - D \; , \;\; \til\alpha = B \kappa L_{gr} \sqrt{D^2 - 1} \, \alpha . \label{parameters cos dual}
\end{equation}
Thus, we see that the considered case $B < 0$, $D > 1$ is dual to the other case: $\til B > 0$, $\til D < -1$. We can integrate the scale factor to find
\begin{equation}
e^{ \varphi (\sigma)} = e^{ \varphi_0} (1 + \cos \alpha\sigma)^{x_1} \, (1 - \cos \alpha\sigma)^{x_2} \, |\delta_+ - \cos \alpha \sigma |^{x_3} \, |\delta_- - \cos \alpha \sigma |^{x_4} , \label{scale factor cos}
\end{equation}
where
\begin{eqnarray}
&& x_1 = - \frac{L_0 L_\alpha^2}{2 \alpha^2 \left[ L_0 - L_\alpha \right] \left[ L_\alpha^2 - L_{gr}^2 \right] } , \;\; x_2 = \frac{L_\alpha L_0^2}{2 \alpha^2 \left[ L_0 - L_\alpha \right] \left[ L_0^2 - L_{gr}^2 \right] } , \\ \label{x1 cos}
&& x_3 = \frac{L_0 L_\alpha }{4 \alpha^2 \left[ L_{gr} + L_\alpha \right] \left[ L_0 + L_{gr} \right] } , \;\; x_4 = \frac{L_0 L_\alpha }{4 \alpha^2 \left[ L_0 - L_{gr} \right] \left[ L_\alpha - L_{gr} \right] } , \\
&& \delta_\pm = \frac{1}{\left( L_0 - L_\alpha \right)} \left[ L_0 + L_\alpha \pm 2 \frac{L_0 L_\alpha}{L_{gr}} \right] . \label{delta cos}
\end{eqnarray}
The condition for the existence of a DW solution connecting two type (a) vacua, i.e. the condition for the absence of singularities of the scale factor for $\sigma \in (\sigma_a^{(0)} , \sigma_a^{(\alpha)} )$, is that $\delta_+ > 1$ and $\delta_- < -1$, implying $L_{0,\alpha} > L_{gr}$, thus
$$0 < L_{gr} < L_\alpha < L_0 . $$
In this case, we have a DW connecting a boundary at $\sigma = 0$ and a horizon at $\sigma = \pi/\alpha$.
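To verify the stated implication, note that $\delta_- < -1$ reads $L_0 + L_\alpha - 2 L_0 L_\alpha / L_{gr} < -(L_0 - L_\alpha)$, i.e. $2 L_0 < 2 L_0 L_\alpha / L_{gr}$, which is equivalent to $L_\alpha > L_{gr}$; the condition $\delta_+ > 1$ is then satisfied automatically for positive scales.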
The description of its phase structure, the nature of the phase transitions as well as the duality relations between the different phases (for different ``dual'' values of the superpotential parameters), following the methods developed in Sect.3.2.2. and Sect.4.2, is straightforward.
\section{Scrap / Things That Weren't Needed}
\subsubsection{A Node-Weighted Version of The FRT Cutting Scheme}
In order to give our deterministic construction we must unpack the black box of the \citet{fakcharoenphol2004tight} cutting scheme.
The \citet{fakcharoenphol2004tight} cutting scheme, given a metric $(V, d)$ where $d(u,v) \geq 1$ for all distinct $u,v \in V$, produces a hierarchical decomposition $\mathcal{H} = \{\mathcal{P}_0, \ldots, \mathcal{P}_h\}$ as follows. We first pick a uniformly random permutation $\pi$ on $V$ and a uniformly random value $\beta \in [\frac{1}{2}, 1)$. We let the radius for level $i$ be $r_i := 2^{i-1} \cdot \beta$.
We let $\mathcal{P}_h$ be the trivial partition containing all vertices of $V$. Next, we construct $\mathcal{P}_{i}$ by refining $\mathcal{P}_{i+1}$; in particular we divide each part $P_{i+1} \in \mathcal{P}_{i+1}$ into additional parts as follows. Each $v \in P_{i+1}$ is assigned to the first vertex $u$ in $\pi$ for which $v \in B(u, r_i)$. Notice that $u$ need not be in $P_{i+1}$. Let $C_u$ be all vertices in $P_{i+1}$ which are assigned to $u$ and add to $\mathcal{P}_i$ all $C_u$ which are non-empty.
One can easily verify that the resulting partitions indeed form a hierarchical decomposition.
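For concreteness, here is a minimal randomized sketch of this cutting scheme in Python (the interface, a point list \texttt{V} with a callable metric \texttt{d}, is our own choice for illustration and not part of the original scheme):
\begin{verbatim}
import math
import random

def frt_decomposition(V, d):
    # Sketch of the FRT cutting scheme described above; assumes d(u, v) >= 1
    # for distinct u, v and d(v, v) = 0.
    diam = max(d(u, v) for u in V for v in V)
    h = max(1, math.ceil(math.log2(diam)) + 1)
    pi = random.sample(V, len(V))      # uniformly random permutation
    beta = random.uniform(0.5, 1.0)    # uniformly random beta in [1/2, 1)
    partitions = {h: [list(V)]}        # P_h is the trivial partition
    for i in range(h - 1, -1, -1):
        r_i = 2 ** (i - 1) * beta      # radius for level i
        new_parts = []
        for part in partitions[i + 1]:
            clusters = {}
            for v in part:
                # assign v to the first u in pi with v in B(u, r_i);
                # v itself always qualifies, so the search terminates
                u = next(w for w in pi if d(w, v) <= r_i)
                clusters.setdefault(u, []).append(v)
            new_parts.extend(clusters.values())
        partitions[i] = new_parts
    return partitions
\end{verbatim}
Note that, exactly as in the description above, the center \texttt{u} assigned to \texttt{v} need not lie in the part being refined.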
We slightly modify the FRT cutting scheme to account for node weights. In particular, consider the \citet{fakcharoenphol2004tight} cutting scheme as described above but where a node's position in $\pi$ is chosen proportional to its weight in some probability distribution $p$ over $V$. In particular, $\pi$ can be thought of as being iteratively constructed as follows: Suppose $\pi$ is an ordering on nodes $V' \subseteq V$; then, letting $\bar{V} = V \setminus V'$, $v \in \bar{V}$ is selected as the next node in $\pi$ with probability $p_v / \sum_{u \in \bar{V}} p_u$; we repeat this process until $\pi$ is an ordering on all nodes in $V$ and therefore a permutation on $V$.
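A minimal sketch of this weighted sampling procedure (assuming, for illustration, that the distribution is given as a dictionary \texttt{p} mapping each node to its probability mass):
\begin{verbatim}
import random

def weighted_permutation(V, p):
    # Iteratively build pi: each next node is drawn with probability
    # proportional to its weight among the not-yet-chosen nodes.
    remaining = list(V)
    pi = []
    while remaining:
        total = sum(p[v] for v in remaining)
        r = random.uniform(0.0, total)
        acc = 0.0
        for idx, v in enumerate(remaining):
            acc += p[v]
            if r <= acc:
                pi.append(remaining.pop(idx))
                break
        else:
            # guard against floating-point drift
            pi.append(remaining.pop())
    return pi
\end{verbatim}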
\begin{lemma}\label{lem:permOrder}
Given the node-weighted random permutation, $\pi$, computed as above, the probability that node $u \in V'$ precedes all other nodes in $V' \subseteq V$ in $\pi$ is $p_u / p(V')$, where $p(U) := \sum_{w \in U} p_w$.
\end{lemma}
\begin{proof}
Condition on the step of the iterative construction at which the first node of $V'$ is selected, and let $\bar{V} \supseteq V'$ be the set of nodes remaining just before this step. Conditioned on the selected node lying in $V'$, each $w \in V'$ is selected with probability $(p_w / p(\bar{V})) / (p(V') / p(\bar{V})) = p_w / p(V')$, independently of $\bar{V}$. Hence $u$ is the first node of $V'$ to appear in $\pi$ with probability exactly $p_u / p(V')$.
\end{proof}
Moreover, whereas the original \citet{fakcharoenphol2004tight} makes use of the fact that $H_n := \sum_{i=1}^n \frac{1}{i} \leq O(\log n)$, because we are dealing with node weights we will have to prove a slight generalization of this fact which takes node weights into account. In particular we prove the following lemma which implies $H_n \leq O(\log n)$ when all $p^{(i)}$ are equal.
\begin{lemma}\label{lem:harmGen}
Let $p^{(1)}, \ldots, p^{(n)}$ be real numbers where $p^{(i)} > 0$, $\sum_i p^{(i)} = 1$ and $p^{(1)} \geq \frac{1}{\text{poly}(n)}$. Then
\begin{align*}
\sum_{i=1}^n \frac{p^{(i)}}{\sum_{j \leq i}p^{(j)}} \leq O(\log n).
\end{align*}
\end{lemma}
\begin{proof}
Let $t_i := \sum_{j \leq i}p^{(j)}$ be the total mass contained in the first $i$ reals; i.e.\ we would like to show $\sum_{i=1}^n \frac{p^{(i)}}{t_i} \leq O(\log n)$. We know $t_n = 1$ and initially $t_1 \geq \frac{1}{\text{poly}(n)}$. Our strategy will be to show that a large term in $\frac{p^{(i)}}{\sum_{j \leq i}p^{(j)}}$ causes a large multiplicative increase from $t_{i-1}$ to $t_i$ which, given the fact that $t_n = 1$, cannot happen too often.
More formally, we have the following identity
\begin{align*}
t_i = t_{i-1} + p^{(i)} = t_{i-1} \left(1 + \frac{p^{(i)}}{t_{i-1}} \right).
\end{align*}
Rearranging this we find that
\begin{align}\label{eq:ratioOfTotal}
\frac{t_i}{t_{i-1}} = \left(1 + \frac{p^{(i)}}{t_{i-1}} \right).
\end{align}
Using a telescoping product and $t_n = 1$, we have that
\begin{align*}
1 = t_n = t_1 \cdot \frac{t_2}{t_1} \cdot \frac{t_3}{t_2} \ldots \frac{t_n}{t_{n-1}}
\end{align*}
which when combined with \Cref{eq:ratioOfTotal} and the fact that $t_1 = p^{(1)} \geq \frac{1}{n^c}$ for some constant $c>0$ gives
\begin{align*}
\frac{1}{t_1} &= \prod_{i=2}^n \left[ 1 + \frac{p^{(i)}}{t_{i-1}} \right] \\
c \cdot \log n &\geq \sum_{i=2}^n \left[ \log\left(1 + \frac{p^{(i)}}{t_{i-1}}\right)\right]
\end{align*}
Using the fact that $\log(1+x) \geq \frac{x}{2}$ for $x \in [0, 2]$, we have $\sum_{i \in T} \frac{p^{(i)}}{t_{i-1}} \leq 2c \cdot \log n$, where $T := \{ i \geq 2 : p^{(i)}/t_{i-1} \leq 2 \}$ is the set of \emph{tame} indices. For every non-tame index $i$ we have $t_i = t_{i-1} + p^{(i)} > 3 t_{i-1}$, so $t$ at least triples; since $t_1 \geq \frac{1}{n^c}$ and $t_n = 1$, there are at most $\log_3 (n^c) = O(\log n)$ non-tame indices, and each contributes $\frac{p^{(i)}}{t_i} = \frac{t_i - t_{i-1}}{t_i} < 1$. Finally, using the fact that $t_{i-1} \leq t_{i}$ for the tame indices, we have
\begin{align*}
\sum_{i=1}^n \frac{p^{(i)}}{t_i} &= 1 + \sum_{i \in T} \frac{p^{(i)}}{t_i} + \sum_{i \geq 2,\, i \notin T} \frac{p^{(i)}}{t_i}\\
& \leq 1 + \sum_{i \in T} \frac{p^{(i)}}{t_{i-1}} + O(\log n)\\
& \leq 1 + 2c \cdot \log n + O(\log n) = O(\log n),
\end{align*}
as desired.
\end{proof}
\subsubsection{Derandomizing via Multiplicative Weights}
As discussed above, our goal is to derandomize \Cref{lem:FRTIsPadded} while taking node weights into account.
We are now ready to formalize our node-weighted derandomization.
\begin{lemma}\label{lem:derandPadding}
There is a deterministic algorithm which, given a metric $(V, d)$ and a distribution $p_v$ over nodes, returns a hierarchical decomposition $\mathcal{H}$ in which at least a $.95$ fraction of nodes (by weight) are $\Omega\left(\frac{1}{\log n}\right)$-padded; i.e.
\begin{align*}
\sum_v p_v \cdot \mathbb{I}\left(\text{$v$ is $\Omega\left(\frac{1}{\log n}\right)$-padded in $\mathcal{H}$}\right)\geq .95.
\end{align*}
\end{lemma}
\begin{proof}
The analysis will be similar to that of \citet{fakcharoenphol2004tight} and \citet{gupta2006oblivious} with slight modifications to account for node weights; in particular, we will make use of \Cref{lem:harmGen}.
Select $\pi$ according to the node-weighted process described above and then run the \citet{fakcharoenphol2004tight} cutting scheme as above to get a hierarchical decomposition $\mathcal{H}$.
We first claim that this process pads a large fraction of nodes by weight. In particular, we claim that
\begin{align}
\E_{\pi, \beta}\left[\sum_v p_v \cdot \mathbb{I}(\text{$v$ is $\alpha$-padded in $\mathcal{H}$})\right] \geq .95
\end{align}
where $\alpha = \frac{c'}{\log n}$ for a constant $c' > 0$ to be chosen later.
Fix a node $v$. We will show that for each $i$, the ball $B_i := B(v, \alpha 2^i)$ is cut with sufficiently small probability and then show, by a union bound, that for a fixed $v$ the probability that $B_i$ is cut for some $i$ is at most $.05$.
Say that node $u$ \emph{protects} $B_i$ if its ball at level $i$ contains $B_i$, i.e.\ if $r_i \geq d(u,v) + 2^i \alpha$. Say that $u$ \emph{threatens} $B_i$ if its ball at level $i$ intersects $B_i$ but does not contain it, i.e.\ $d(u,v) - \alpha 2^i < r_i < d(u,v) + 2^i \alpha$. Finally, say that $u$ \emph{cuts} $B_i$ if it threatens $B_i$ and is the first node in $\pi$ to threaten or protect $B_i$. Clearly if $B_i$ is not cut by any node for all $i$ then $v$ will be $\alpha$-padded.
In order for $B_i$ to be cut by $u$ it must be the case that $u$ threatens $B_i$ and no node before $u$ in $\pi$ threatens or protects $B_i$. By how we choose $r_i$, $u$ threatens $B_i$ if
\begin{align}
d(u,v) - 2^i \alpha < \beta \cdot 2^{i-1} < d(u,v) + 2^i \alpha
\end{align}
and since $\beta \cdot 2^{i-1}$ is distributed uniformly in $[2^{i-2}, 2^{i-1})$, this happens with probability at most $2^{i+1}\alpha/2^{i-2} = 8\alpha$.
In order for $u$ to be the first node to threaten or protect $B_i$, it certainly must be the case that every node which is closer to $v$ than $u$ appears after $u$ in $\pi$ (since every such node either threatens or protects $B_i$). Letting $N_v(u) := \{w \in V : d(w, v) \leq d(u,v) \}$ be all nodes as close to $v$ as $u$, by \Cref{lem:permOrder} we have that this happens with probability $p_u / p(N_v(u))$.
Lastly, a node which is too far or too close to $v$ cannot cut $B_i$. In particular, a node $u$ can only cut $B_i$ if
\begin{align*}
2^{i-2} - 2^i \alpha \leq d(u,v) \leq 2^{i-1} + 2^i \alpha.
\end{align*}
We let $C_i := \{u : 2^{i-2} - 2^i \alpha \leq d(u,v) \leq 2^{i-1} + 2^i\alpha \}$ be all such nodes which might cut $B_i$.
Thus, we have that the probability that $B_i$ is cut is at most
\begin{align*}
\sum_{u \in C_i} \Pr(\text{$u$ precedes all $w \in N_v(u)$ in $\pi$ where $w \neq u$}) \cdot \Pr(\text{$u$ threatens $B_i$}) \leq \sum_{u \in C_i} \frac{p_u}{p(N_v(u))} \cdot 8 \alpha.
\end{align*}
Thus, by a union bound the probability that some $B_i$ centered around $v$ for some $i$ is cut is at most
\begin{align*}
8 \alpha \sum_i \sum_{u \in C_i} \frac{p_u}{p(N_v(u))}.
\end{align*}
Next, we claim that each $u$ occurs in at most $3$ of the $C_i$. Indeed, $u \in C_i$ exactly when $2^{i-2}(1 - 4\alpha) \leq d(u,v) \leq 2^{i-2}(2 + 4\alpha)$, i.e.\ when $2^{i-2}$ lies in the interval $\left[\frac{d(u,v)}{2+4\alpha}, \frac{d(u,v)}{1-4\alpha}\right]$. Since $\alpha \leq \frac{1}{8}$ for $n$ sufficiently large, the ratio of this interval's endpoints is $\frac{2+4\alpha}{1-4\alpha} \leq 5 < 8$, so the interval contains at most $3$ powers of $2$ and hence $u \in C_i$ for at most $3$ values of $i$.
Thus, letting $p^{(l)} = p_u$ where $u$ is the $l$th closest node to $v$ we have that the probability that some $B_i$ centered around $v$ is cut is at most
\begin{align*}
24 \alpha \sum_l \frac{p^{(l)}}{\sum_{j \leq l}p^{(j)}}.
\end{align*}
Applying \Cref{lem:harmGen} we know $\sum_l \frac{p^{(l)}}{\sum_{j \leq l}p^{(j)}} \leq c \cdot \log n$ for some constant $c > 0$ (the hypothesis $p^{(1)} \geq \frac{1}{\text{poly}(n)}$ of \Cref{lem:harmGen} is satisfied since every $p_v \geq \frac{1}{\text{poly}(n)}$ by assumption) and so we conclude that this is at most
\begin{align*}
24 c \cdot \alpha \cdot \log n = 24 c \cdot c'
\end{align*}
which for $c'$ sufficiently small is at most $.05$.
We now use the method of conditional expectation to derandomize this process; we only sketch this standard argument. The analysis above shows that the exactly-computable quantity
\begin{align*}
\Phi := \sum_v p_v \sum_i \sum_{u \in C_i} \Pr(\text{$u$ precedes all other $w \in N_v(u)$ in $\pi$}) \cdot \Pr(\text{$u$ threatens $B_i$})
\end{align*}
is a pessimistic estimator: it upper bounds the expected weighted fraction of nodes which fail to be $\alpha$-padded and satisfies $\Phi \leq .05$. The only randomness is in $\beta$ and $\pi$. The events ``$u$ threatens $B_i$'' change only at the polynomially-many critical values of $\beta$ of the form $(d(u,v) \pm 2^i \alpha)/2^{i-1}$, so we may enumerate the intervals between consecutive critical values and deterministically fix $\beta$ to lie in an interval on which the conditional value of $\Phi$ is at most its expectation. With $\beta$ fixed, we construct $\pi$ one node at a time, always choosing the next node so that the conditional expectation of $\Phi$ does not increase; these conditional expectations are again exactly computable since, given a prefix of $\pi$, each event ``$u$ precedes all other $w \in N_v(u)$'' is either already determined or, by \Cref{lem:permOrder}, still has probability $p_u / p(N_v(u))$. The resulting deterministic choices of $\beta$ and $\pi$ yield a hierarchical decomposition in which the weighted fraction of nodes which are not $\Omega(\frac{1}{\log n})$-padded is at most $.05$.
\end{proof}
Using the above node-weighted derandomization lemma gives our deterministic repetition HST construction. In particular, we run the following multiplicative-weights-type algorithm with $\epsilon = .01$ and set the number of iterations as $T:=4 \ln n / \epsilon^2$; a minimal code sketch follows the listing. In the following we let $p_v^{(t)} := w^{(t)}_v / \sum_u w_u^{(t)}$ be the proportional share of $v$'s weight in iteration $t$. (Note that every weight satisfies $w_v^{(t)} \geq e^{-\epsilon T} = n^{-4/\epsilon}$ while $\sum_u w_u^{(t)} \leq n$, so $p_v^{(t)} \geq \frac{1}{\text{poly}(n)}$ for constant $\epsilon$, as required by \Cref{lem:derandPadding}.)
\begin{enumerate}
\item Uniformly set the initial weights: $w_v^{(1)}=1$ for all $v \in V$.
\item For $t \in [T]$:
\begin{enumerate}
\item Run the algorithm given in \Cref{lem:derandPadding} using distribution $p^{(t)}$ and let $\mathcal{H}_t$ be the resulting hierarchical decomposition.
\item \textbf{Set mistakes:} For each vertex $v$ which is $\Omega(\frac{1}{\log n})$-padded in $\mathcal{H}_t$ let $m_v^{(t)} = 1$. Let $m_v^{(t)} = 0$ for all other $v$.
\item \textbf{Update weights:} for all $v \in V$, let $w_v^{(t+1)} \gets \exp(-\epsilon m_v^{(t)}) \cdot w_v^{(t)}$.
\end{enumerate}
\item Return $(\mathcal{H}_t)_t$.
\end{enumerate}
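For concreteness, the loop above can be sketched in Python as follows; this is illustrative only, and \texttt{run\_derand\_padding} is a placeholder for the algorithm of \Cref{lem:derandPadding} which we do not implement here:
\begin{verbatim}
import math

def repetition_trees(V, run_derand_padding, eps=0.01):
    # run_derand_padding(p) must return the set of nodes which are
    # Omega(1/log n)-padded in the decomposition it builds for p.
    n = len(V)
    T = math.ceil(4 * math.log(n) / eps ** 2)
    w = {v: 1.0 for v in V}
    decompositions = []
    for _ in range(T):
        total = sum(w.values())
        p = {v: w[v] / total for v in V}
        padded = run_derand_padding(p)
        decompositions.append(padded)
        for v in padded:  # m_v^{(t)} = 1 exactly for the padded nodes
            w[v] *= math.exp(-eps)
    return decompositions
\end{verbatim}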
We state a well-known fact regarding multiplicative weights in our notation. Readers familiar with multiplicative weights may recognize this as the fact that the expected performance of multiplicative weights over logarithmically-many rounds is competitive with the best expert.
\begin{lemma}[\cite{arora2012multiplicative}]\label{lem:MWAvg}
The above algorithm guarantees that for any $v \in V$ we have
\begin{align*}
\frac{1}{T} \sum_{t \leq T} p^{(t)} \cdot m^{(t)} \leq \epsilon + \frac{1}{T} \sum_{t \leq T} m_v^{(t)}
\end{align*}
where $p^{(t)} \cdot m^{(t)} := \sum_v p^{(t)}_v m_v^{(t)}$ is the usual inner product.
\end{lemma}
Using this fact we conclude that the above algorithm gives a $(\log n, \log n)$-repetition HST.
\begin{lemma}
There is a deterministic polynomial time algorithm which returns a $(\log n, \log n)$-repetition HST.
\end{lemma}
\begin{proof}
We let the repetition HST's trees be the trees corresponding to $(\mathcal{H}_t)_t$ and we let the good nodes of the tree corresponding to $\mathcal{H}_t$ be the nodes which are $\Omega(\frac{1}{\log n})$-padded in $\mathcal{H}_t$.
Each tree corresponding to each $\mathcal{H}_t$ is a tree-embedding by construction and by \Cref{lem:padGivesDist} the distances between all good nodes are preserved up to an $O(\log n)$ stretch as required. Since $T:=4 \ln n / \epsilon^2$ we know that $T = O(\log n)$.
We need only argue, then, that each node is good in at least a $.9$ fraction of the $T$ total $\mathcal{H}_t$. Consider a fixed node $v$ and let $f_v = \frac{1}{T} \sum_{t \leq T} \mathbb{I}(\text{$v$ is $\Omega(\frac{1}{\log n})$-padded in $\mathcal{H}_t$})$ be the fraction of the trees in which $v$ is good. By \Cref{lem:MWAvg} we know that
\begin{align}\label{eq:mwguar}
\frac{1}{T} \sum_{t \leq T} p^{(t)} \cdot m^{(t)} \leq \epsilon + \frac{1}{T} \sum_{t \leq T} m_v^{(t)}
\end{align}
By definition of $m_v^{(t)}$ we have that the right hand side of \Cref{eq:mwguar} is $\epsilon + f_v$. On the other hand, by how we set $m^{(t)}$, the left hand side of \Cref{eq:mwguar} is $\frac{1}{T}\sum_t\sum_v p_v^{(t)} \cdot \mathbb{I}(\text{$v$ is $\Omega(\frac{1}{\log n})$-padded in $\mathcal{H}_t$})$ which by \Cref{lem:derandPadding} is at least $.95$. Combining these facts we have $.95 \leq \epsilon + f_v$ and so by our choice of $\epsilon$ we know $.9 \leq f_v$.
\end{proof}
\section{Introduction}
Probabilistic embeddings of general metrics into distributions over trees are among the most versatile tools in combinatorial and network optimization. The beauty and utility of these tree embeddings come from the fact that their application is often simple, yet extremely powerful. Indeed, when modeling a network with lengths, costs, or capacities as a weighted graph, these embeddings often allow one to pretend that the graph is a tree. A common template for countless network design algorithms is to (1) embed the input weighted graph $G$ into a randomly sampled tree $T$ that approximately preserves the weight structure of $G$; (2) solve the input problem on $T$;
and (3) project the solution on $T$ back into $G$.
A long and celebrated line of work \cite{karp19892k,alon1995graph,bartal1996probabilistic,fakcharoenphol2004tight} culminated in the embedding of Fakcharoenphol, Rao and Talwar \cite{fakcharoenphol2004tight}---henceforth the ``FRT embedding''---which showed that any weighted graph on $n$ nodes can be embedded into a distribution over weighted trees in a way that $O(\log n)$-approximately preserves distances in expectation. Together with the above template this reduces many graph problems to much easier problems on trees at the cost of an $O(\log n)$ approximation factor. This has led to a myriad of approximation, online, and dynamic algorithms with poly-logarithmic approximations and competitive ratios for NP-hard problems such as $k$-server \cite{bansal2011polylogarithmic}, metrical task systems \cite{bartal1997polylog}, group Steiner tree and group Steiner forest \cite{alon2006general,naor2011online, garg2000polylogarithmic}, buy-at-bulk network design \cite{awerbuch1997buy} and (oblivious) routing \cite{racke2002minimizing}. For many of these problems tree embeddings are the only known way of obtaining such algorithms on general graphs.
However, probabilistic tree embeddings have one drawback: Algorithms based on them naturally require randomization and their approximation guarantees only hold in expectation. For approximation algorithms---i.e., in the offline setting---there
are derandomization tools, such as the FRT derandomizations given in \cite{charikar1998approximating,fakcharoenphol2004tight}, to overcome these issues. These derandomization results are so general that essentially any offline algorithm based on tree embeddings can be transformed into a deterministic algorithm with matching approximation guarantees (with only a moderate increase in running time). Unfortunately, these strategies are not applicable to online or dynamic settings where an adversary progressively reveals the input. Indeed, to our knowledge, all online and dynamic algorithms that use FRT are randomized (e.g.\ \cite{guo2020facility,gupta2019permutation,alon2006general,fiat2003better,bartal1997polylog,naor2011online,englert2017reordering,englert2007reordering}).
This overwhelming evidence in the literature is driven by a well-known and fundamental barrier to the use of probabilistic tree embeddings in deterministic online and dynamic algorithms. Worse still, this barrier prevents such algorithms from working against all but the weakest type of adversary. In particular, designing an online or dynamic algorithm which is robust to an oblivious adversary (which fixes all requests in advance, independently of the algorithm's randomness) is often much easier than designing an algorithm which is robust to an adaptive adversary (which chooses the next request based on the algorithm's current solution). As the actions of a deterministic algorithm can be fully predicted, this distinction only holds for randomized algorithms---any deterministic algorithm always has to work against an adaptive adversary. For these reasons, many online and dynamic algorithms have exponentially worse competitive ratios in the deterministic or adaptive adversary setting than in the oblivious adversary setting. This is independent of computational complexity considerations.
The above barrier results from a repeatedly recognized and seemingly unavoidable phenomenon which prevents online algorithms built on FRT from working against adaptive adversaries. Specifically, there are graphs where every tree embedding must have many node pairs with polynomially-stretched distances \cite{bartal1996probabilistic}. There is nothing that prevents an adversary then from learning through the online algorithm's responses which tree was sampled and then tailoring the remainder of the online instance to pairs of nodes that have highly stretched distances. The exact same phenomenon occurs in the dynamic setting; see, for example, \citet{guo2020facility} and \citet{gupta2019permutation} for dynamic algorithms with expected cost guarantees that only hold against oblivious adversaries because they are based on FRT. In summary, online and dynamic algorithms that use probabilistic tree embeddings seem inherently randomized and seem to necessarily only work against adversaries oblivious to this randomness.
Similar, albeit not identical,\footnote{We remark that, unlike the online and dynamic setting, the barrier to obtaining demand-robust algorithms which work against the ``adaptive adversary'' implicit in the setting is merely computational and thus seems potentially less inherent.} issues also arise in other settings, most notably demand-robust optimization. The demand-robust model is a well-studied model of optimization under uncertainty \cite{dhamdhere2005pay,hershkowitz2018prepare,feige2007robust,gupta2015robust,gupta2010thresholded,golovin2006pay} in which an algorithm first buys a partial solution given a large collection of potential problem instances. An ``adaptive adversary'' then chooses which of the potential instances must be solved and the algorithm must extend its partial solution to solve the selected instance at inflated costs. The adversary is adaptive in the sense that it chooses the final instance with full knowledge of the algorithm's partial solution. To thwart an algorithm which reduces a demand-robust problem to its tree version via a sampled FRT tree, the adversary can present a collection of potential instances which for every tree $T$ in the FRT distribution contains an instance for which $T$ is an arbitrarily bad approximation and then always choose the worst-case problem instance.
The fact that there do not exist any demand-robust algorithms which use FRT despite this setting having received considerable attention seems at least partially due to the issues pointed out here.
Overall it seems fair to say that prior to this work tree embeddings seemed fundamentally incapable of enabling adaptive-adversary-robust and deterministic algorithms in several well-studied settings.
\subsection{Our Contributions}
We provide a conceptually new type of metric embedding---the copy tree embedding---which is deterministic and therefore also adaptive-adversary-robust.
Specifically, we show that any weighted graph $G$ can be deterministically embedded into a single weighted tree with a small number of copies for each vertex. Any subgraph of $G$ will project onto this tree in a connectivity and approximate-cost preserving way.
To precisely define our embeddings we define a copy mapping $\phi$ which maps a vertex $v$ to its copies.
\begin{definition}[Copy Mapping]
Given vertex sets $V$ and $V'$ we say $\phi : V \to 2^{V'}$ is a copy mapping if every node has at least one copy (i.e.\ $|\phi(v)| \geq 1$ for all $v \in V$), copies are disjoint (i.e.\ $\phi(v) \cap \phi(u) = \emptyset$ for $u \neq v$) and every node in $V'$ is a copy of some node (i.e. for every $v' \in V'$ there is some $v \in V$ where $v' \in \phi(v)$). For $v' \in V'$, we use the shorthand $\phi^{-1}(v')$ to stand for the unique $v \in V$ such that $v' \in \phi(v)$.
\end{definition}
A copy tree embedding for a weighted graph $G$ now simply consists of a tree $T$ on copies of vertices of $G$ with one distinguished root and two mappings $\pi_{G \to T}$ and $\pi_{T \to G}$ which map subsets of edges from $G$ to $T$ and from $T$ to $G$ in a way that preserves connectivity and approximately preserves costs. We say that \emph{two vertex subsets $U, W$ are connected} in a graph if there is a $u \in U$ and $w \in W$ such that $u$ and $w$ are connected. We also say that a mapping $\pi : 2^E \to 2^{E'}$ is \emph{monotone} if for every $A \subseteq B$ we have that $\pi(A) \subseteq \pi(B)$. A rooted tree $T = (V, E, w)$ is \emph{well-separated} if for every edge $e$ and every child edge $e'$ of $e$ in $T$ we have $w(e') \leq \frac{1}{2}w(e)$.
\begin{definition}[$\alpha$-Approximate Copy Tree Embedding with Copy Number $\chi$]\label{dfn:repTree}
Let $G = (V, E, w)$ be a weighted graph with some distinguished root $r \in V$. An $\alpha$-approximate copy tree embedding with copy number $\chi$ consists of a weighted rooted tree $T = (V', E',w')$, a copy mapping $\phi : V \to 2^{V'}$ and edge mapping functions $\pi_{G \to T} : 2^E \to 2^{E'}$ and $\pi_{T \to G} : 2^{E'} \to 2^{E}$, where $\pi_{T \to G}$ is monotone, such that:
\begin{enumerate}
\item \textbf{Connectivity Preservation:} For all $F \subseteq E$ and $u,v \in V$ if $u, v$ are connected by $F$, then $\phi(u), \phi(v) \subseteq V'$ are connected by $\pi_{G \to T}(F)$. Symmetrically, for all $F' \subseteq E'$ and $u', v' \in V'$ if $u'$ and $v'$ are connected by $F'$ then $\phi^{-1}(u')$ and $\phi^{-1}(v')$ are connected by $\pi_{T \to G}(F')$.
\item \textbf{$\alpha$-Cost Preservation}: For any $F \subseteq E$ we have $w'(\pi_{G \to T}(F)) \leq \alpha \cdot w(F)$ and for any $F' \subseteq E'$ we have $w(\pi_{T \to G}(F')) \leq w'(F')$.
\item \textbf{Copy Number:} $|\phi(v)| \leq \chi$ for all $v \in V$ and $\phi(r) = \{r'\}$ where $r'$ is the root of $T$.
\end{enumerate}
A copy tree embedding is efficient if $T$, $\phi$, and $\pi_{T \to G}$ are deterministically poly-time computable and well-separated if $T$ is well-separated.
\end{definition}
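To fix ideas, such an embedding could be represented programmatically as follows; this Python sketch is purely illustrative and all names in it are ours rather than part of the formal definition:
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Set, Tuple

Edge = Tuple[str, str]
EdgeSet = FrozenSet[Edge]

@dataclass
class CopyTreeEmbedding:
    tree_weights: Dict[Edge, float]  # w' on the tree edges E'
    root: str                        # r', the unique copy of r
    phi: Dict[str, Set[str]]         # copy mapping V -> 2^{V'}
    pi_graph_to_tree: Callable[[EdgeSet], EdgeSet]
    pi_tree_to_graph: Callable[[EdgeSet], EdgeSet]
\end{verbatim}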
We emphasize that, whereas standard tree embeddings guarantee costs are preserved in expectation, our copy tree embeddings preserve costs deterministically. Also notice that for efficient copy tree embeddings we do not require that $\pi_{G \to T}$ is efficiently computable; this is because $\pi_{G \to T}$ will be used in our analyses but not in any of our algorithms.
We first give two copy tree embedding constructions which trade off between the number of copies and cost preservation. Both constructions are based on the idea of merging appropriately chosen tree embeddings as pictured in \Cref{fig:constrPart} and \Cref{fig:constrFRT} where we color nodes according to the node whose copy they are.
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 100mm 0mm 60mm, clip]{./figures/embedPart1.pdf}
\caption{Graph $G$.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 150mm 0mm 60mm, clip]{./figures/embedPart2.pdf}
\caption{Compute partial tree embeddings.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 150mm 0mm 60mm, clip]{./figures/embedPart3.pdf}
\caption{Merge trees.}
\end{subfigure}
\hfill
\caption{Illustration of our first construction where we merge $O(\log n)$ partial tree embeddings.}\label{fig:constrPart}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 100mm 0mm 60mm, clip]{./figures/embedFRT1.pdf}
\caption{Graph $G$.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 150mm 0mm 60mm, clip]{./figures/embedFRT2.pdf}
\caption{Enumerate FRT support.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 150mm 0mm 60mm, clip]{./figures/embedFRT3.pdf}
\caption{Merge trees.}
\end{subfigure}
\hfill
\caption{Illustration of our second construction where we merge the $O(n \log n)$ trees in the FRT support.}\label{fig:constrFRT}
\end{figure}
\textbf{Construction 1: Merging Partial Tree Embeddings (\Cref{sec:partTree})}.
The cornerstone of our first construction is the idea of merging embeddings which give good \emph{deterministic} distance preservation. If our goal is to embed the entire input metric into a tree this is impossible. However, it is possible to embed a random constant fraction of nodes in an input metric into a tree in a way that deterministically preserves distances of the embedded nodes; an embedding which we call a ``partial tree embedding'' (see also \citet{gupta2006oblivious,haeupler2020tree}). We then use the method of conditional expectation to derandomize a node-weighted version of this random process and apply this derandomization $O(\log n)$ times, down-weighting nodes as they are embedded. The result of this process is $O(\log n)$ partial tree embeddings where a multiplicative-weights-type argument shows that each node appears in a constant fraction of these embeddings. Merging these $O(\log n)$ embeddings gives our copy tree while an Euler-tour-type proof shows that subgraphs of the input graph can be mapped to our copy tree in a cost and connectivity-preserving fashion. The following theorem summarizes our first construction.
\begin{restatable}{theorem}{repTree}\label{thm:repTreeConst}
There is a poly-time deterministic algorithm which given any weighted graph $G = (V, E, w)$ and root $r \in V$ computes an efficient and well-separated $O(\log^2n)$-approximate copy tree embedding with copy number $O(\log n)$.
\end{restatable}
\textbf{Construction 2: Merging FRT Support (\Cref{sec:FRTSup}).}
Our second construction follows from a known fact that the size of the support of the FRT distribution can be made $O(n \log n)$ and this support can be computed deterministically in poly-time \cite{charikar1998approximating}. Merging each tree in this support at the root and some simple probabilistic method arguments give a copy tree embedding that is $O(\log n)$-cost preserving but with an $O(n \log n)$ copy number. The next theorem summarizes this construction.
\begin{restatable}{theorem}{frtSupp}\label{thm:frtSupp}
There is a poly-time deterministic algorithm which given any weighted graph $G = (V, E, w)$ and root $r \in V$ computes an efficient and well-separated $O(\log n)$-approximate copy tree embedding with copy number $O(n \log n)$.
\end{restatable}
While our second construction achieves a slightly better cost bound than our first construction, it has the significant downside of a linear copy number. Notably, this linear copy number makes our second construction unsuitable for some applications, including, for example, our second application as described below. Moreover, our first construction also has several desirable properties which our second does not which we expect might be useful for future applications. These include: (1) $\pi_{G \to T}$ is monotone (in addition to $\pi_{T \to G}$ being monotone as stipulated by \Cref{dfn:repTree}); (2) if $u$ and $v$ are connected by $F \subseteq E$ then $\Omega(\log n)$ vertices of $\phi(u)$ are connected to $\Omega(\log n)$ vertices of $\phi(v)$ in $\pi_{G \to T}(F)$ (as opposed to just one vertex of $\phi(u)$ and one vertex of $\phi(v)$ as in \Cref{dfn:repTree}) and; (3) if $u$ is connected to $r$ by $F \subseteq E$ then every vertex in $\phi(u)$ is connected to $\phi(r)$ in $\pi_{G \to T}(F)$ (as opposed to just one vertex of $\phi(u)$ as in \Cref{dfn:repTree}).
We next apply our constructions to obtain new results for several online and demand-robust connectivity problems whose history we briefly summarize now. Group Steiner tree and group Steiner forest are two well-studied generalizations of set cover and Steiner tree. In the group Steiner tree problem, we are given a weighted graph $G=(V,E,w)$ and groups $g_1, \ldots, g_k \subseteq V$ and must return a subgraph of $G$ of minimum weight which contains at least one vertex from each group. The group Steiner forest problem generalizes group Steiner tree. Here, we are given $A_i, B_i \subseteq V$ pairs and for each $i$ we must connect some vertex from $A_i$ to some vertex in $B_i$. \citet{alon2006general} and \citet{naor2011online} each gave a poly-log approximation for online group Steiner tree and forest respectively but both of these approximation guarantees are randomized and only hold against oblivious adversaries because they rely on FRT. Indeed, \citet{alon2006general} posed the existence of a deterministic poly-log approximation for online group Steiner tree as an open question which has since been restated several times \cite{buchbinder2009design,bienkowski2020nearly}. Similarly, while demand-robust minimum spanning tree and special cases of demand-robust Steiner tree have received considerable attention \cite{dhamdhere2005pay,khandekar2008two,kasperski2011approximability}, there are no known poly-log approximations for demand-robust Steiner tree, group Steiner tree or group Steiner forest.
\textbf{Application 1: Reducing Deterministic Online Group Problems to Tree Case (\Cref{sec:detOGST}).}
In our first application we demonstrate that our copy tree embeddings reduce solving online group Steiner tree and forest deterministically on a general graph to the case of solving it on a tree. In particular, we show that a deterministic poly-log approximation for online group Steiner tree and forest on a tree graph gives a deterministic poly-log approximation on general graphs, thereby reducing the aforementioned open question of \citet{alon2006general} to its tree case.
\begin{theorem}\label{thm:GSTAndFor}
If there exists an $\alpha$-competitive poly-time deterministic algorithm for group Steiner tree (resp. group Steiner forest) on well-separated trees then there exists an $O(\log n \cdot \alpha)$-competitive poly-time deterministic algorithm for group Steiner tree (resp. group Steiner forest) on general graphs.
\end{theorem}
Group Steiner tree has the notable property that mapping it onto a copy tree embedding simply results in another instance of the group Steiner tree problem, this time on a tree (our application 2 shows that this is not always the case). Therefore, this result is nearly immediate from either of the above constructions. In particular, if we have an instance of group Steiner tree on a general graph with groups $\{g_i\}_i$ then we can solve group Steiner tree on our embedding with groups $\{g_i'\}_i$ where $g_i' := \bigcup_{v \in g_i} \phi(v)$ and our root is the one copy of $r$, say $r'$. The connectivity properties of our mappings guarantee that a feasible solution for one of these problems is a feasible solution for the other when projected: if $g_i$ is connected to $r$ by $F$ then $g_i'$ is connected to $r'$ by $\pi_{G\to T}(F)$ and if $g_i'$ is connected to $r'$ by $F'$ then $g_i$ is connected to $r$ by $\pi_{T \to G}(F')$. Moreover, the cost preservation of $\pi_{G \to T}$ applied to the optimal solution on the input graph shows that our problem on the embedding has a cheap solution while the cost preservation of $\pi_{T \to G}$ allows us to map our solution on the embedding back to the input graph without increasing its cost. Lastly, the monotonicity of $\pi_{T \to G}$ guarantees that the resulting online algorithm only adds and never attempts to remove edges from its solution in $G$.
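To make the reduction concrete, the group translation is a one-liner; in the following illustrative Python sketch, \texttt{phi} is the copy mapping $\phi$:
\begin{verbatim}
def lift_groups(groups, phi):
    # g_i' is the union of the copies of the vertices of g_i.
    return [set().union(*(phi[v] for v in g)) for g in groups]
\end{verbatim}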
\textbf{Application 2: Deterministic Online Partial Group Steiner Tree (\Cref{sec:onPGST}).} We next introduce a new group connectivity problem---the online partial group Steiner tree problem. Partial group Steiner tree is group Steiner tree but where we must connect at least half of the vertices in each group to the root. As we discuss in \Cref{sec:onPGST}, partial group Steiner tree generalizes group Steiner tree. However, unlike group Steiner tree it admits a natural bicriteria relaxation: instead of connecting $\frac{1}{2}$ of the nodes in each group we could require that our algorithm only connects, say, $\frac{(1-\epsilon)}{2}$ of all nodes in each group for some $\epsilon > 0$. Thus, this result can be seen as showing that there is indeed a deterministic poly-log competitive algorithm for online group Steiner tree---as posed in the above open question of \citet{alon2006general}---\emph{provided the algorithm can be bicriteria} in the relevant sense. More formally, we obtain a deterministic poly-log bicriteria approximation for this problem which connects at least $\frac{1-\epsilon}{2}$ of the nodes in each group (notated ``$(1-\epsilon)$-connection competitive'' below) by using our copy tree embeddings and a ``water-filling'' algorithm to solve the tree case.
\begin{restatable}{theorem}{partGST}
There is a deterministic poly-time algorithm for online partial group Steiner tree which given any $\epsilon >0$ is $O\left(\frac{\log ^ 3 n}{\epsilon} \right)$-cost-competitive and $(1-\epsilon)$-connection competitive.
\end{restatable}
As we later observe, providing a deterministic poly-log-competitive algorithm for online partial group Steiner tree with any constant bicriteria relaxation is strictly harder than providing a deterministic poly-log-competitive algorithm for online (non-group) Steiner tree. Thus, this result also generalizes the fact that a deterministic poly-log approximation is known for online (non-group) Steiner tree \cite{imase1991dynamic}. Additionally, as a corollary we obtain the first non-trivial deterministic approximation algorithm for online group Steiner tree---albeit one with a linear dependence on the maximum group size.\footnote{We explicitly note here that this bicriteria guarantee does not yield a solution to the open problem of \cite{alon2006general} of finding a poly-log deterministic approximation to the online group Steiner tree problem.} As mentioned above, our approach for this problem requires that we use a copy tree with a poly-log copy number, thereby requiring that we use our first rather than our second construction.
We next adapt and apply our embeddings in the demand-robust setting.
\textbf{Application 3: Demand-Robust Steiner Problems (\Cref{sec:DRGSTF}).} We begin by generalizing copy tree embeddings to demand-robust copy tree embeddings. Roughly, these are copy tree embeddings which simultaneously work well for every possible demand-robust scenario. We then adapt our analysis from our previous constructions to show that these copy tree embeddings exist. Lastly, we apply demand-robust copy tree embeddings to give poly-log approximations for the demand-robust versions of several Steiner problems---Steiner forest, group Steiner tree and group Steiner forest---for which, prior to this work, nearly nothing was known. In particular, the only non-trivial algorithms known for demand-robust Steiner problems prior to this work are an algorithm for Steiner tree \cite{dhamdhere2005pay} and an algorithm for demand-robust Steiner forest \emph{on trees} with exponential scenarios \cite{feige2007robust} (which is, in general, incomparable to the usual demand-robust setting). To show these results, we apply our demand-robust copy tree embeddings to reduce these problems to their tree case. Thus, we also give our results on trees which are themselves non-trivial.
\begin{restatable}{theorem}{DRSTT}\label{thm:demand-robust-steiner-tree-algo} There is a randomized poly-time $O(\log^2 n)$-approximation algorithm for the demand-robust group Steiner tree problem on weighted trees.
\end{restatable} \vspace{-\baselineskip}
\begin{restatable}{theorem}{DRSFT}\label{thm:demand-robust-steiner-forest-algo}
There is a randomized poly-time $O(D \cdot \log^3 n)$-approximation algorithm for the demand-robust group Steiner forest problem on weighted trees of depth $D$.
\end{restatable}
\begin{theorem}\label{thm:demand-robust-steiner-tree-algo-on-general-graph}
There is a randomized poly-time $O(\log^4 n)$-approximation algorithm for the demand-robust group Steiner tree problem on weighted graphs.
\end{theorem}
\begin{theorem}\label{thm:demand-robust-steiner-forest-algo-on-general-graph}
There is a randomized poly-time $O(\log^6 n)$-approximation for the demand-robust group Steiner forest problem on weighted graphs with polynomially-bounded aspect ratio.
\end{theorem}
Demand-robust group Steiner forest generalizes demand-robust Steiner forest and prior to this work no poly-log approximations were known for demand-robust Steiner forest; thus the above result gives the first poly-log approximation for demand-robust Steiner forest. We solve the tree case of the above problems by observing a connection between demand-robust and online algorithms. In particular, we exploit the fact that for certain online rounding schemes a demand-robust problem can be seen as an online problem with two time steps provided certain natural properties are met. Notably, these properties will be met for these problems \emph{on trees}. Thus, we emphasize that going through the copy tree embedding is crucial for our application---a more direct approach of using online rounding schemes on the general problem does not seem to yield useful results.
\textbf{Further Applications.} Lastly, we note that copy tree embeddings were integral to a follow-up work of the same set of authors~\cite{haeupler2020tree}, in which we gave the first poly-log approximations for the hop-constrained version of many classic network design problems, including hop-constrained Steiner forest \cite{agrawal1995trees}, group Steiner tree and buy-at-bulk network design~\cite{awerbuch1997buy}.
\section{Additional Related Work}
We survey some additional work before moving on to our results.
\subsection{Group Steiner Tree and Group Steiner Forest}
The group Steiner tree problem was introduced by \citet{reich1989beyond} as an important problem in VLSI design. \citet{garg2000polylogarithmic} gave the first randomized poly-log approximation for offline group Steiner tree using linear program rounding. \citet{charikar1998rounding} derandomized this result and \citet{chekuri2006greedy} showed that a greedy algorithm achieves similar results. \citet{demaine2009node} gave improved algorithms for group Steiner tree on planar graphs.
As earlier mentioned, \citet{alon2006general} gave the first randomized poly-logarithmic algorithm for \emph{online} group Steiner tree which works against oblivious adversaries and posed the existence of a deterministic poly-log approximation as an open question. Very recently \citet{bienkowski2020nearly} made exciting progress towards this open question by giving a poly-log deterministic approximation for online non-metric facility location---which is equivalent to online group Steiner tree on trees of depth $2$. We complement this result by narrowing the remaining gap on this question ``from the other end'' by showing that the tree case is all that needs to be considered. The authors also note that they believe that their methods could be used to give a deterministic poly-log-competitive algorithm for group Steiner tree on trees which, when combined with our own results, would settle this open question.
\citet{alon2006general} introduced the group Steiner \emph{forest} problem to study online network formation. \citet{chekuri2011set} gave the first poly-log approximation algorithm for offline group Steiner forest and posed the existence of a poly-log-competitive online algorithm as an open question. \citet{naor2011online} answered this question in the affirmative by showing that a randomized algorithm which works against oblivious adversaries exists but presently no adaptive-adversary-robust or deterministic poly-log-competitive online algorithm is known.
We note some nuances regarding necessary assumptions on the power of online algorithms for group Steiner tree and forest with an adaptive adversary. \citet{alon2003online} observed that online set cover has no sub-polynomial-competitive algorithm against an adaptive adversary if the set system is not known beforehand. On the other hand, the same work showed how to give a poly-log-competitive algorithm for online set cover if the algorithm knows all possible elements the adaptive adversary might reveal (where the poly-log is poly-logarithmic in the total number of possible revealed elements). Set cover can easily be reduced to group Steiner tree on a tree where edges correspond to sets and elements correspond to leaves of the tree. Consequently, formulating any poly-log-competitive and adaptive-adversary-robust or deterministic algorithm for group Steiner tree requires that the algorithm knows all possible groups the adversary might reveal and that the number of possible groups is polynomially-bounded. As group Steiner tree is a special case of group Steiner forest, an analogous fact holds for group Steiner forest; namely all possible $(A_i, B_i)$ pairs that the adaptive adversary might reveal must be known beforehand to the algorithm for a poly-log competitive ratio and the number of such pairs must be polynomially-bounded.
\subsection{Tree Embedding Variants}
Our embeddings are similar in spirit to Ramsey trees and Ramsey tree covers \cite{mendel2006ramsey,naor2012scale,blelloch2016efficient,abraham2018ramsey,bartal2019covering}. Specifically, it is known that for every metric $(V,d)$ and $k$ there is some subset $S \subseteq V$ of size at least $n^{1-1/k}$ which embeds into a tree---a so-called Ramsey tree---with distortion $O(k)$ \cite{mendel2006ramsey}. Recursively applying (a slight strengthening of) this fact shows that there exist collections of Ramsey trees---so-called Ramsey tree covers---where each vertex $v$ has some ``home tree'' in which the distances to $v$ are preserved. A concurrent work of \citet{filtser2021clan} employed this machinery to devise ``clan embeddings'' where the trees of a Ramsey tree cover are merged and---like in our work---each vertex is mapped to its copies. This line of work has led to many applications in metric-type problems such as compact routing schemes. However, the guarantees of Ramsey tree covers and the embeddings built on them are insufficient for the connectivity problems in which we are interested in a slightly subtle way. We are interested in preserving the costs of entire subgraphs which, roughly speaking, requires that pairwise distances be preserved in \emph{every} tree that we merge. For this reason our copy tree embedding construction will use much of the machinery of the ``well-padded tree covers'' of \citet{gupta2006oblivious} which (implicitly) give exactly this guarantee rather than Ramsey-tree-type machinery.
Another recent work of \citet{barta2020online} was also concerned with tree embeddings for (not necessarily deterministic) online algorithms. This work designed tree embeddings that yield online algorithms for network design problems whose competitive ratios are poly-logarithmic in the number of relevant terminals, as opposed to the total number of nodes, $n$.
Lastly, we note that there has been considerable work on extending the power of tree embeddings to a variety of other settings including tree embeddings for planar graphs \cite{konjevod2001approximating}, dynamic tree embeddings \cite{forster2020dynamic,chechik2020dynamic}, distributed tree embeddings \cite{khan2012efficient} and tree embeddings where the resulting tree is a subgraph of the input graph \cite{alon1995graph,elkin2008lower,abraham2008nearly,koutis2011nearly,abraham2012using}.
\section{Graph Notation And Assumptions}
Throughout this paper we will work with weighted graphs of the form $G = (V, E, w)$ where $V$ and $E$ are the vertex and edge sets of $G$ and $w : E \to \mathbb{R}_{\ge 1}$ gives the weight of edges. We typically assume that $n := |V|$ is the number of nodes and write $[n] = \{ 1, 2, \ldots, n \}$. We will also use $V(G)$, $E(G)$ and $w_G$ to stand for the vertex set, edge set and weight function of $G$. Similarly, we will use $w_e$ to stand for $w(e)$ where convenient. For a subset of edges $F \subseteq E$, we use the notation $w(F) := \sum_{e \in F} w_G(e)$. We use $d_G : V \times V \to \mathbb{R}_{\geq 0}$ to give the shortest path metric according to $w$. We will talk about the diameter of a metric $(V,d)$ which is $\max_{u,v \in V} d(u,v)$; we notate the diameter with $D$. We use $B(v, x) := \{u \in V : d(v, u) \leq x\}$ to stand for the closed ball of $v$ of radius $x$ in metric $(V,d)$, and write $B_G(v, x)$ if $(V,d)$ is the shortest path metric of $G$ and we need to disambiguate which graph we are taking balls with respect to. We will sometimes identify a graph with the metric which it induces.
Notice that we have assumed that edge weights are non-zero and at least $1$. This will be without loss of generality since for our purposes any $0$-weight edges may be contracted and rescaling of edge weights ensures that the minimum edge weight is at least $1$.
\section{Copy Tree Embedding Constructions}\label{sec:partTree}
In this section we give our two constructions of copy tree embeddings. We begin by giving our first copy tree embedding construction based on merging partial tree embeddings.
\repTree*
If it were possible to give a single tree embedding which simultaneously preserved all distances between all nodes then we could simply take such a tree embedding as our copy tree embedding. However, such a tree embedding is, in general, impossible. The key insight we use to overcome this issue is that one can approximately preserve distances in a \emph{deterministic} way if one only embeds a constant fraction of all nodes in the input metric; we call such an embedding a partial tree embedding. Combining $O(\log n)$ such partial tree embeddings will give our construction.
In more detail, in \Cref{sec:padHDtoRHST} we show that an appropriate collection of $O(\log n)$ ``padded hierarchical decompositions'' gives $O(\log n)$ partial tree embeddings such that every node is embedded in a constant fraction of them. Next, we show that such a collection of partial tree embeddings indeed gives us a copy tree embedding as in \Cref{thm:repTreeConst}; the main observation that this reduction relies on is the constant congestion induced by Euler tours which will allow us to project from our input graph to our partial tree embeddings in a cost and connectivity-preserving fashion. Thus, our goal after this point is to compute an appropriate collection of padded hierarchical decompositions.
In \Cref{sec:detpHD} we proceed to show how to compute the required collection of padded hierarchical decompositions. Our construction of hierarchical decompositions will make use of the FRT cutting scheme and paddedness properties of it previously observed by \citet{gupta2006oblivious}. To this end, we provide a novel derandomization of a node-weighted version of the FRT cutting scheme by combining the powerful multiplicative weights methodology~\cite{arora2012multiplicative} together with the classic method of conditional expectation and pessimistic estimators.
\subsection{From Padded Hierarchical Decompositions to Copy Tree Embeddings}\label{sec:padHDtoRHST}
\citet{gupta2006oblivious} introduced the idea of padded hierarchical decompositions which we illustrate in \Cref{fig:HDAndPadded}.
\begin{definition}
A hierarchical decomposition $\mathcal{H}$ of a metric $(V,d)$ of diameter $D$ is a sequence of partitions $\mathcal{P}_0, \ldots, \mathcal{P}_h$ of $V$ where $h = \Theta(\log D)$ and:
\begin{enumerate}
\item The partition $\mathcal{P}_h$ is one part containing all of $V$;
\item Each part in $\mathcal{P}_i$ has diameter at most $2^i$;
\item $\mathcal{P}_{i}$ is a refinement of $\mathcal{P}_{i+1}$; that is, every part in $\mathcal{P}_{i}$ is contained in some part of $\mathcal{P}_{i+1}$.
\end{enumerate}
\end{definition}
Notice that each part of $\mathcal{P}_0$ is a singleton node by our assumption that edge weights are at least $1$ (we assume that the constant in the theta notation of $h = \Theta(\log D)$ is sufficiently large).
\begin{definition}[$\alpha$-Padded Node]
For some $\alpha \le 1$, a node $v$ is $\alpha$-padded in hierarchical decomposition $\mathcal{P}_0, \ldots, \mathcal{P}_h$ if for all $i \in [0, h]$ the ball $B(v, \alpha \cdot 2^i)$ is contained in some part of $\mathcal{P}_i$.
\end{definition}
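This definition is directly checkable by brute force; in the following illustrative Python sketch, \texttt{partitions[i]} maps each node to an identifier of its part in $\mathcal{P}_i$ and \texttt{d} is the metric:
\begin{verbatim}
def is_alpha_padded(v, alpha, partitions, d, V):
    for i, part_of in enumerate(partitions):
        ball = {u for u in V if d(v, u) <= alpha * 2 ** i}
        if len({part_of[u] for u in ball}) > 1:
            return False  # B(v, alpha * 2^i) crosses a part of P_i
    return True
\end{verbatim}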
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 20mm 0mm 20mm, clip]{./figures/HD.pdf}
\caption{Hierarchical decomposition.}\label{fig:HDEG}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0mm 20mm 0mm 20mm, clip]{./figures/HDPad.pdf}
\caption{Why node on left is padded and node on right is not.}
\end{subfigure}
\hfill
\caption{Illustration of a hierarchical decomposition $\mathcal{H}$ with $h=4$ with $n=7$. Each part in each $\mathcal{P}_i \in \mathcal{H}$ is colored according to $i$; singleton parts not pictured. We give $\alpha$-padded nodes in green and all other nodes in red where we illustrate why the node on the far left is $\alpha$-padded and the node on the far right is not by drawing $B(v, \alpha \cdot 2^i)$ for $i \geq 1$ in colors according to $i$ for these two nodes.}\label{fig:HDAndPadded}
\end{figure}
The main result we show in this section is how to use a collection of padded hierarchical decompositions to construct a copy tree embedding.
\begin{restatable}{lemma}{RTEFromHDs}\label{lem:RTEFromHDs}
Let $\{\mathcal{H}_i\}_{i=1}^{k}$ be a collection of hierarchical decompositions of weighted graph $G = (V, E , w)$ such that every $v$ is $\alpha$-padded in at least $.9k$ decompositions. Then, there is a poly-time deterministic algorithm which, given $\{\mathcal{H}_i\}_{i=1}^{k}$ and a root $r \in V$, returns an efficient and well-separated $O(\frac{k}{\alpha})$-approximate copy tree embedding with copy number $k$.
\end{restatable}
\subsubsection{From Padded Hierarchical Decompositions to Partial Tree Embeddings}
We now formalize the notion of a partial tree embedding.
\begin{definition}[Partial Tree Embedding]
A $\gamma$-partial tree embedding of metric $(V, d)$ is a well-separated weighted tree $T = (V', E', w)$ where:
\begin{enumerate}
\item \textbf{Partial Embedding:} $V' \subseteq V$;
\item \textbf{Worst-Case Distance Preservation} For any $u,v \in V'$ we have $d(u,v) \leq d_T(u,v) \leq \gamma \cdot d(u,v)$.
\end{enumerate}
\end{definition}
In the remainder of this section we show how good padded hierarchical decompositions deterministically give good partial tree embeddings.
The reason padded decompositions will be useful for us is that---as we prove in the following lemma---all distances between padded nodes are well-preserved.\footnote{This fact seems to be implicit in \citet{gupta2006oblivious} but is never explicitly proven.} Given a hierarchical decomposition $\mathcal{H}$ we let $T_\mathcal{H}$ be the natural well-separated tree corresponding to $\mathcal{H}$. In particular, a hierarchical decomposition $\mathcal{H}$ naturally corresponds to a well-separated tree which has a node for each part and an edge of weight $2^{i+1}$ between a part in $\mathcal{P}_i$ and a part in $\mathcal{P}_{i+1}$ if the latter contains the former. In \Cref{fig:HDtoTreeEG} we illustrate the well-separated tree corresponding to the hierarchical decomposition in \Cref{fig:HDEG}. We will slightly abuse notation and identify each singleton set in such a tree with its one constituent vertex.
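Concretely, one might build $T_\mathcal{H}$ as follows; in this illustrative Python sketch, \texttt{partitions[i]} is the list of parts of $\mathcal{P}_i$, each given as a \texttt{frozenset} of nodes:
\begin{verbatim}
def decomposition_tree(partitions):
    # One tree node per (level, part); an edge of weight 2^(i+1)
    # joins each part of P_i to the part of P_{i+1} containing it.
    edges = {}
    for i in range(len(partitions) - 1):
        for part in partitions[i]:
            parent = next(q for q in partitions[i + 1] if part <= q)
            edges[((i, part), (i + 1, parent))] = 2 ** (i + 1)
    return edges
\end{verbatim}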
\begin{lemma}\label{lem:padGivesDist}
If nodes $u, v$ are $\alpha$-padded in a hierarchical decomposition $\mathcal{H}$ then $d(u,v) \leq d_{T_\mathcal{H}}(u,v) \leq O\left(\frac{1}{\alpha}\cdot d(u,v)\right)$.
\end{lemma}
\begin{proof}
Let $T_\mathcal{H}$ be the well-separated tree corresponding to $\mathcal{H}$. Let $w$ be the least common ancestor of $u$ and $v$ in $T_\mathcal{H}$ and let $l$ be the height of $w$ in $T_\mathcal{H}$. By the definition of $T_\mathcal{H}$, the distance between $u$ and $v$ in $T_\mathcal{H}$ is $d_{T_{\mathcal{H}}}(u,v) = 2 \cdot \sum_{i=1}^{l} 2^i = 2^{l+2} - 4$ and so, since $l \geq 1$ (recall that the parts of $\mathcal{P}_0$ are singletons), we have
\begin{align}\label{eq:lcadist}
2^{l+1} \leq d_{T_\mathcal{H}}(u,v) \leq 2^{l+2}.
\end{align}
We next prove that $d_{T_\mathcal{H}}(u, v) \leq O(\frac{1}{\alpha} \cdot d(u,v))$. Notice that for $j = \lceil \log (d(u,v)/\alpha) \rceil$ we know that $B(v, \alpha \cdot 2^j)$ contains $u$ since for this $j$ it holds that $\alpha \cdot 2^j \geq d(u, v)$. Since $\mathcal{H}$ is $\alpha$-padded it follows that $B(v, \alpha \cdot 2^j)$ is contained in some part of $\mathcal{P}_j$; but it then follows that the least common ancestor of $u$ and $v$ is at height at most $j$ and so $l \leq \lceil \log (d(u,v)/\alpha) \rceil$. Combining this with the upper bound in \Cref{eq:lcadist} we have
\begin{align*}
d_{T_\mathcal{H}}(u,v) &\leq 2^{l+2}\\
&\leq 2^{\lceil \log (d(u,v)/\alpha) \rceil + 2}\\
& \leq O\left(\frac{1}{\alpha} \cdot d(u,v) \right)
\end{align*}
We now prove that $d(u,v) \leq d_{T_\mathcal{H}}(u,v)$. Since the diameter of each part in $\mathcal{P}_i$ is at most $2^i$ we know that the least common ancestor of $u$ and $v$ in $T$ corresponds to a part with diameter at most $2^l$. However, since the least common ancestor of $u$ and $v$ corresponds to a part which contains both $u$ and $v$, we must have $d(u,v) \leq 2^l \leq 2^{l+1}$. Combining this with the lower bound in \Cref{eq:lcadist} we have $d(u,v) \leq d_{T_\mathcal{H}}(u,v)$ as desired.
\end{proof}
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=150mm 200mm 200mm 45mm, clip]{./figures/HDCon1.pdf}
\caption{Tree corresponding to \Cref{fig:HDEG} hierarchical decomposition.}\label{fig:HDtoTreeEG}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=150mm 200mm 200mm 45mm, clip]{./figures/HDCon2.pdf}
\caption{Contract to ensure $r$ is root of resulting tree.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth,trim=150mm 200mm 200mm 45mm, clip]{./figures/HDCon3.pdf}
\caption{Multiply weights by $4$ and contract non-$\alpha$-padded vertices.}
\end{subfigure}
\hfill
\caption{How to turn a hierarchical decomposition into a partial tree embedding. We color nodes from the input metric in green if they are padded and red otherwise. Remaining nodes colored according to their corresponding hierarchical decomposition part. $r$ is the node on the far left of the tree.}\label{fig:HDContract}
\end{figure}
We show how to turn a hierarchical decomposition into a partial tree embedding in the next lemma which we illustrate in \Cref{fig:HDContract}.
\begin{lemma}\label{lem:decompToPartialHST}
Given a hierarchical decomposition $\mathcal{H}$ on metric $(V,d)$ and root $r \in V$ which is $\alpha$-padded in $\mathcal{H}$, one can compute in deterministic poly-time an $O(\frac{1}{\alpha})$-partial tree embedding $T=(V', E')$ with root $r$ where $V' := \{v \in V : \text{$v$ is $\alpha$-padded in $\mathcal{H}$}\}$.
\end{lemma}
\begin{proof}
Let $T_{\mathcal{H}}$ be the well-separated tree which corresponds to $\mathcal{H}$ as described above.
We construct $T$ from $T_{\mathcal{H}}$ using \Cref{lem:padGivesDist} and a trick of \citet{konjevod2001approximating}. Let $V'$ be all leaves of $T_{\mathcal{H}}$ whose corresponding nodes are $\alpha$-padded in $\mathcal{H}$. Next, contract the path from $r$ to the root of $T_{\mathcal{H}}$ and identify the resulting node with $r$. Then, delete from $T_{\mathcal{H}}$ all sub-trees which do not contain a node in $V'$; in the resulting tree every node is either in $V'$ or the ancestor of a node in $V'$. Next, while there exists a node $v$ such that its parent $u$ is not in $V'$ we contract $\{v,u\}$ into one node and identify the resulting node with $v$. Lastly, we multiply the weight of every edge by $4$ and return the result as $T = (V', E', w)$ where $w$ is the weight function of $T_{\mathcal{H}}$ scaled by $4$.
Clearly, the vertex set of $T$ will be $V'$. Moreover, $T$ is well-separated since $T_{\mathcal{H}}$ was well-separated and $r$ will be the root of $T$ by construction.
We now use an analysis of \citet{konjevod2001approximating} to show that for any pair of vertices $u,v \in V'$ we have
\begin{align}\label{eq:raviFRTTrick}
d_{T_{\mathcal{H}}}(u,v) \leq d_T(u,v) \leq 4 \cdot d_{T_{\mathcal{H}}}(u,v).
\end{align}
The upper bound is immediate from the fact that we only contract edges and then multiply all edge weights by $4$. To see the lower bound---$d_{T_{\mathcal{H}}}(u,v) \leq d_T(u,v)$---notice that if $u$ and $v$ have a least common ancestor $a$ at height $l$ in $T_{\mathcal{H}}$, then $d_{T_{\mathcal{H}}}(u, v) = 2^{l+2} - 4$. However, the closest $u$ and $v$ can be in $T$ is if (without loss of generality) $u$ is identified with $a$ and (without loss of generality) $v$ is a child of $u$ in $T$; the length of this edge is the length of a child edge of $a$ in $T_{\mathcal{H}}$ times four which is $2^{l+2}$. Thus $d_{T_{\mathcal{H}}}(u, v) = 2^{l+2} - 4 \leq 2^{l+2} = d_T(u,v)$.
Finally, we conclude by applying \Cref{lem:padGivesDist}. In particular, it remains to show $d(u,v) \leq d_T(u,v) \leq O(\frac{1}{\alpha} \cdot d(u,v))$ but this is immediate by combining \Cref{lem:padGivesDist} and \Cref{eq:raviFRTTrick}.
\end{proof}
\subsubsection{From Partial Tree Embeddings to Copy Tree Embeddings}
We now describe how partial tree embeddings satisfy useful connectivity properties and then use these properties to construct a copy tree embedding from a collection of good partial tree embeddings.
The following two lemmas demonstrate how to map to and from partial tree embeddings in a way that preserves cost and connectivity.
\begin{lemma}[Graph $\to$ Partial Tree Projection]\label{lem:conPresGrapTree}
Let $G = (V, E, w_G)$ be a weighted graph and let $T = (V', E', w_T)$ be a $\gamma$-partial tree embedding of (the metric induced by) $G$.
There exists a deterministic, poly-time computable function $\pi : 2^{E} \to 2^{E'}$ such that for all sets of edges $F \subseteq E$ the following holds:
\begin{enumerate}
\item \textbf{Connectivity Preservation:} If $u, v \in V'$ are connected by $F$ in $G$, then they are connected in $\pi(F)$ in $T$;
\item \textbf{Cost Preservation:} $w_T(\pi(F)) \leq O(\gamma) \cdot w_G(F)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first simplify $F$ by noticing it is sufficient to prove the claim on every connected component in isolation. Furthermore, we can assume without loss of generality that $F$ is a tree since taking a spanning tree of $F$ can only decrease $w_G(F)$ and appropriately maintains connectivity. Finally, we delete every leaf that is not in $V'$, which decreases $w_G(F)$ and maintains connectivities in $V'$.
We define $\pi(F)$ to be the unique minimal subtree of $T$ which contains all nodes of $V'$ that are incident to an edge in $F$. By transitivity of connectedness, we know that if $u,v \in V'$ are connected in $F$ then they must also be connected in $\pi(F)$. Also, note that $\pi$ is trivially deterministic poly-time computable.
It remains to argue the $\gamma$-cost preservation property. Double the edges of $F$; we call this multigraph $2F$. Since the degree of every vertex in $2F$ is even, we know that $2F$ has an Euler tour. Using this tour we can partition $2F$ into a set $\mathcal{P}$ of paths where each path connects two nodes in $V'$ and the paths in $\mathcal{P}$ are multiedge-disjoint. Therefore, we have that $2 w_G(F) = \sum_{P \in \mathcal{P}} w_G(P)$.
For each path $P \in \mathcal{P}$ in the tour between nodes $u, v \in V'$, we say that $P$ \textbf{covers} all edges in $T$ between $u$ and $v$ and let $P'$ be the path in $T$ between $u$ and $v$. We note that every edge in $\pi(F)$ is covered by at least one path, hence $w_T(\pi(F)) \le \sum_{P \in \mathcal{P}} w_T(P')$.
For every path in $G$ connecting two nodes $u, v \in V'$ the distance-preservation properties of $\gamma$-partial tree embeddings implies that $w_T(P') \le O(\gamma) \cdot w_G(P)$. Hence we have that $w_T(\pi(F)) \le \sum_{P \in \mathcal{P}} w_T(P') \le O(\gamma) \cdot \sum_{P \in \mathcal{P}} w_G(P) \le O(\gamma) \cdot w_G(F)$ as required.
\end{proof}
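For concreteness, the Euler-tour splitting used in the proof above can be realized as follows; this illustrative Python sketch assumes the closed walk over $2F$ is given as a vertex sequence which does not repeat its start at the end and which visits at least one node of $V'$:
\begin{verbatim}
def split_tour_into_paths(tour, embedded):
    # Rotate the closed walk to start at a node of V', then cut it
    # at every visit to V'; the pieces are multiedge-disjoint walks
    # whose endpoints both lie in V'.
    start = next(i for i, v in enumerate(tour) if v in embedded)
    walk = tour[start:] + tour[:start] + [tour[start]]
    paths, current = [], [walk[0]]
    for v in walk[1:]:
        current.append(v)
        if v in embedded:
            paths.append(current)
            current = [v]
    return paths
\end{verbatim}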
We now show how to project in the reverse direction.
\begin{lemma}[Partial Tree $\to$ Graph Projection]\label{lem:conPresTreeGraph}
Let $G = (V, E, w_G)$ be a weighted graph and let $T = (V', E', w_T)$ be a $\gamma$-partial tree embedding of (the metric induced by) $G$.
There exists a deterministic, poly-time computable function $\imath : 2^{E'} \to 2^{E}$ such that for all sets of edges $F' \subseteq E'$ the following holds:
\begin{enumerate}
\item \textbf{Connectivity Preservation:} If $u, v \in V'$ are connected by $F'$ in $T$, then they are connected by $\imath(F)$ in $G$;
\item \textbf{Cost Preservation:} $w_G(\imath(F')) \le w_T(F')$.
\end{enumerate}
\end{lemma}
\begin{proof}
For an edge $e' \in E'$, connecting $u, v \in V'$, we define $\imath(\{e'\})$ as some shortest path between $u$ and $v$ in $G$. Note that this implies that $w_G(\imath(\{e'\})) \le w_T(e')$ by the properties of a partial tree embedding. We extend $\imath$ to $F' \subseteq E'$ by defining $\imath(F') := \bigcup_{e' \in F'} \imath(\{e'\})$. Notice that $\imath$ is indeed deterministic, poly-time computable and is connectivity preserving by the transitivity of connectivity.
We now verify the cost preservation of $\imath$: we have that $w_G(\imath(F')) = w_G(\bigcup_{e' \in F'} \imath(\{e'\})) \le \sum_{e' \in F'} w_G(\imath(\{e'\})) \le \sum_{e' \in F'} w_T(e') = w_T(F')$.
\end{proof}
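The reverse projection is even simpler. A sketch in the same style, under the assumption that the graph is a \texttt{networkx} graph with a \texttt{weight} edge attribute, replaces each tree edge by an arbitrary shortest path:
\begin{verbatim}
import networkx as nx

# Sketch of the tree-to-graph projection iota (illustrative only). G is a
# networkx.Graph with edge attribute 'weight'; F_prime is an iterable of
# tree edges (u, v) whose endpoints are vertices of G.

def project_to_graph(G, F_prime):
    edges = set()
    for (u, v) in F_prime:
        # an arbitrary shortest u-v path; its weight is at most w_T({u,v})
        path = nx.shortest_path(G, u, v, weight="weight")
        edges |= {frozenset(e) for e in zip(path, path[1:])}
    return edges
\end{verbatim}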
Using these two lemmas we can conclude our proof of \Cref{lem:RTEFromHDs}, which we restate here.
\RTEFromHDs*
\begin{proof}
Our embedding is obtained by combining the above lemmas in the natural way.
Specifically, we first apply \Cref{lem:decompToPartialHST} to all decompositions in $\{\mathcal{H}_i\}_{i=1}^{k}$ in which $r$ is $\alpha$-padded to get back $O(\frac{1}{\alpha})$-partial tree embeddings $\{T_i\}_{i}$ where $V(T_i) = \{ v : v \text{ is $\alpha$-padded in } \mathcal{H}_i\}$. Next we apply \Cref{lem:conPresGrapTree} and \Cref{lem:conPresTreeGraph} to each $T_i$ to get back mapping functions $\pi_i$ and $\imath_i$ respectively.
We now describe our $O(\frac{k}{\alpha})$-approximate copy tree embedding $(T, \phi, \pi_{G \to T}, \pi_{T \to G})$. We let $T$ be the tree resulting from taking all trees in $\{T_i\}_i$ and then identifying all copies of $r$ as the same vertex. Similarly, we let $\phi(v)$ be the set of all copies of $v$ in $T$ in the natural way. Next we let $\pi_{G \to T}(F)$ be $\bigcup_i \pi_i(F)$ where $\pi_i$ is projected onto $T$ in the natural way. We let $\pi_{T \to G}(F') := \bigcup_i \imath_i(F')$ be defined analogously.
Since each vertex (including $r$) is $\alpha$-padded in at least a $.9$ fraction of all the decompositions, by the pigeonhole principle any pair $u, v$ connected by $F$ in $G$ must, together with $r$, be padded in some common $\mathcal{H}_i$; hence $u$ and $v$ appear in the corresponding $T_i$ and are connected in $\pi_i(F)$, and so some pair of corresponding copies is connected by $\pi_{G \to T}(F)$. An analogous result holds for $\pi_{T \to G}$. The remaining properties of our embedding are immediate from the above cited lemmas.
\end{proof}
\subsection{Deterministically Constructing Padded Hierarchical Decompositions}\label{sec:detpHD}
In the previous section we reduced computing good copy tree embeddings to computing good hierarchical decompositions. The existence of good hierarchical decompositions is immediate from prior work of \citet{gupta2006oblivious} and FRT.
\begin{lemma}[\citet{gupta2006oblivious}]\label{lem:FRTIsPadded}
Let $\mathcal{H}$ be the hierarchical decomposition resulting from a tree drawn from the \citet{fakcharoenphol2004tight} cutting scheme. Then, every vertex is $\Omega(\frac{1}{\log n})$-padded with constant probability in $\mathcal{H}$.
\end{lemma}
A simple Chernoff and union bound argument then shows that $O(\log n)$ draws give a collection of hierarchical decompositions in which every vertex is $\Omega(\frac{1}{\log n})$-padded in a constant fraction of the decompositions \emph{with high probability}, i.e.\ with probability at least $1-\frac{1}{\text{poly} (n)}$.
However, we are ultimately interested in a deterministic algorithm which is robust to adaptive adversaries and so we must derandomize the above with high probability result. We proceed to do so in this section.
To our knowledge, prior derandomizations of this cutting scheme---see, e.g. \citet{chekuri2006approximation} or \citet{fakcharoenphol2004tight}---do not provide sufficiently strong guarantees for our purposes.
We also note that the authors of \citet{gupta2006oblivious} claim to give a deterministic algorithm for computing hierarchical decompositions in a forthcoming journal version of their paper but said journal version never seems to have been published.
\subsubsection{Derandomization Intuition}
The intuition behind our derandomization is as follows. A single draw from the FRT cutting scheme guarantees that each node is $\Omega(1 / \log n)$-padded with constant probability. If we could derandomize this result then we could produce one hierarchical decomposition in which at least a $.99$ fraction of all nodes are $\Omega(1 / \log n)$-padded. Indeed, as we will see, standard derandomization techniques---the method of pessimistic estimators and conditional expectation---will allow us to do exactly this. However, since we must produce a collection of hierarchical decompositions in which every node is padded in a large fraction of all the decompositions, it is not clear how to handle the remaining $.01$ fraction of nodes. One might simply rerun the aforementioned derandomization on the remaining $.01$ fraction of nodes, then on the remaining $.001$ fraction and so on logarithmically-many times; however, it is easy to see that in the resulting collection of decompositions, while every node is padded in some decomposition, no node is necessarily padded in a large fraction of all the decompositions.
Rather, we would like to repeatedly run our derandomization on all nodes but in a way that takes into account which nodes are already padded in a large fraction of the decompositions we have produced so far. In particular, if a node is already padded in most of the decompositions produced so far, we need not worry about producing further decompositions in which this node is padded. Thus, we would like to derandomize in a way that makes such a node less likely to be padded in the remaining decompositions we produce while making nodes which have so far been padded in few decompositions more likely to be padded.
To accomplish this, we will formulate and then derandomize a \emph{node-weighted} version of \Cref{lem:FRTIsPadded}; this, in turn, will allow us to down-weight nodes which are padded in a large fraction of the decompositions we have so far produced when we run our derandomization; a multiplicative-weights-type analysis will then allow us to conclude our deterministic construction.
\subsubsection{The FRT Cutting Scheme}
In order to give our deterministic construction we must unpack the black box of the FRT cutting scheme.
The \citet{fakcharoenphol2004tight} cutting scheme, given a metric $(V, d)$ where $d(u,v) \geq 1$ for all distinct $u,v \in V$, produces a hierarchical decomposition $\mathcal{H} = \{\mathcal{P}_0, \ldots, \mathcal{P}_h\}$ as follows. We first pick a uniformly random permutation $\pi$ on $V$ and a uniformly random value $\beta \in [\frac{1}{2}, 1)$. We let the radius for level $i$ be $r_i := 2^{i-1} \cdot \beta$.
We let $\mathcal{P}_h$ be the trivial partition containing all vertices of $V$ with $h = O(\log \max_{u,v} d(u,v))$. Next, we construct $\mathcal{P}_{i}$ by refining $\mathcal{P}_{i+1}$; in particular we divide each part $P_{i+1} \in \mathcal{P}_{i+1}$ into additional parts as follows. Each $v \in P_{i+1}$ is assigned to the first vertex $u$ in $\pi$ for which $v \in B(u, r_i)$. Notice that $u$ need not be in $P_{i+1}$. Let $C_u$ be all vertices in $P_{i+1}$ which are assigned to $u$ and add to $\mathcal{P}_i$ all $C_u$ which are non-empty. Notice that here $C_u$ really depends on $i$; we suppress this dependence in our notation for cleanliness of presentation.
One can easily verify that the resulting partitions indeed form a hierarchical decomposition.
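To make the scheme concrete, below is a minimal Python sketch of one draw. It is an illustration only (not an optimized implementation), under the assumption that the metric is given as a dictionary \texttt{d} with \texttt{d[(u, v)]} the distance between \texttt{u} and \texttt{v} and \texttt{d[(v, v)] = 0}.
\begin{verbatim}
import math
import random

# Sketch of one draw from the FRT cutting scheme (illustrative only).
# Distances between distinct points are assumed to be at least 1.

def frt_decomposition(V, d):
    """Return partitions [P_0, ..., P_h] with P_i refining P_{i+1}."""
    pi = list(V)
    random.shuffle(pi)                   # uniformly random permutation
    beta = random.uniform(0.5, 1.0)      # uniformly random beta in [1/2, 1)
    diam = max(d[(u, v)] for u in V for v in V if u != v)
    h = math.ceil(math.log2(diam)) + 1   # h = O(log max-distance)
    partitions = [None] * (h + 1)
    partitions[h] = [set(V)]             # trivial top-level partition
    for i in range(h - 1, -1, -1):
        r = beta * 2 ** (i - 1)          # radius r_i for level i
        new_parts = []
        for part in partitions[i + 1]:
            clusters = {}
            for v in part:
                # assign v to the first u in pi with v in B(u, r_i);
                # note that u need not lie in the current part
                u = next(u for u in pi if d[(u, v)] <= r)
                clusters.setdefault(u, set()).add(v)
            new_parts.extend(clusters.values())
        partitions[i] = new_parts
    return partitions
\end{verbatim}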
\subsubsection{Derandomizing via Multiplicative Weights and Pessimistic Estimators}
As discussed above, our goal is to derandomize \Cref{lem:FRTIsPadded} while taking node weights into account. Suppose we have a distribution $\{p_v\}_v$ over the vertices of $V$; intuitively, this distribution measures how important each vertex is with regard to being $\alpha$-padded. Then by \Cref{lem:FRTIsPadded} and linearity of expectation we have
\begin{align*}
\E_{\pi, \beta} \left[\sum_v p_v \cdot \mathbb{I}\left(\text{$v$ is $\Omega\left(\frac{1}{\log n}\right)$-padded in $\mathcal{H}$}\right) \right] &= \sum_v p_v \cdot \Pr_{\pi, \beta} \left(\text{$v$ is $\Omega\left(\frac{1}{\log n}\right)$-padded in $\mathcal{H}$}\right)\\ &\geq .99.
\end{align*}
where $\mathbb{I}$ is the indicator function.
Thus, our goal will be to gradually fix the randomness of $\pi$ and $\beta$ until we have found a way to deterministically set $\beta$ and $\pi$ so that at least a $.95$ fraction of nodes (weighted by the $p_v$s) are $\Omega(\frac{1}{\log n})$-padded. That is, we aim to use the method of conditional expectation. We will treat a permutation $\pi$ as an ordering of the elements of $V$; e.g.\ $(v_2, v_1, v_3)$ is a permutation of $V = \{v_1, v_2, v_3\}$. Now, suppose we have fixed a prefix $\pi_P$ of $\pi$ which orders nodes $P \subseteq V$ and among the remaining nodes $\bar{P} := V \setminus P$ we choose the remaining suffix $\pi_{\bar{P}}$ uniformly at random. That is, $\pi = \pi_P \odot \pi_{\bar{P}}$ where $\pi_P$ is fixed, $\pi_{\bar{P}}$ is a uniformly random permutation of $\bar{P}$ and $\odot$ denotes concatenation. Notice that it follows that every vertex of $P$ precedes every vertex of $\bar{P}$ in $\pi$.
Let $\mathcal{H}(\pi_P, \beta)$ be the hierarchical decomposition returned when we run the FRT cutting scheme as above with the input value of $\beta$ and with $\pi$ chosen as $\pi = \pi_P \odot \pi_{\bar{P}}$. Notice that provided $P \neq V$ we have that $\mathcal{H}$ is randomly generated. Let $f(\pi_P, \beta) := \sum_v p_v \cdot \Pr_{\pi_{\bar{P}}} \left(\text{$v$ is $\Omega\left(\frac{1}{\log n}\right)$-padded in $\mathcal{H}(\pi_{P}, \beta)$}\right)$ be the expected fraction of $\Omega(\frac{1}{\log n})$-padded nodes by weight in $\mathcal{H}(\pi_P, \beta)$. We now show that there is a so-called ``pessimistic estimator'' $\hat{f}$ of $f$.
\begin{lemma}\label{lem:pessEst}
There is a function $\hat{f}$ such that
\begin{enumerate}
\item \textbf{Good start:} There is some deterministically poly-time computable set $R \subseteq \mathbb{R}$ such that for some $\beta \in R$ we have $\hat{f}(\pi_\emptyset, \beta) \geq .95$.
\end{enumerate}
and for any $P \subseteq V$, $\pi_P$ and $\beta$
\begin{enumerate} \setcounter{enumi}{1}
\item \textbf{Computable:} $\hat{f}(\pi_P, \beta)$ is computable in deterministic poly-time;
\item \textbf{Monotone:} $\hat{f}(\pi_P, \beta) \leq \hat{f}(\pi_{P \cup \{v\}}, \beta)$ for some $v \in \bar{P}$;
\item \textbf{Pessimistic:} $\hat{f}(\pi_P, \beta) \leq f(\pi_P, \beta)$ for all $\pi_P$ and $\beta$.
\end{enumerate}
\end{lemma}
\begin{proof}
We will use an analysis similar to \citet{gupta2006oblivious} but which accounts for the fixed prefix $\pi_P$ of our permutation, demonstrates the above properties of our pessimistic estimator and which guarantees that $R$ is computable in deterministic, poly-time.
We begin by defining $\hat{f}$. Fix a $\pi_P$ and $\beta$ and let $\alpha = \Omega(\frac{1}{\log n})$.
For node $v$, let $B_{i, v} := B(v, \alpha 2^i)$. Say that node $u$ \emph{protects} $B_{i,v}$ if its ball at level $i$ contains $B_{i,v}$, i.e.\ if $r_i \geq d(u,v) + 2^i \alpha$. Say that $u$ \emph{threatens} $B_{i,v}$ if its ball at level $i$ intersects $B_{i,v}$ but does not contain it, i.e.\ $d(u,v) - \alpha 2^i < r_i < d(u,v) + 2^i \alpha$. Finally, say that $u$ \emph{cuts} $B_{i,v}$ if it threatens $B_{i,v}$ and is the first node in $\pi$ to threaten or protect $B_{i,v}$. Notice that if $B_{i,v}$ is not cut by any node for all $i$ then $v$ will be $\alpha$-padded.
In order for $B_{i,v}$ to be cut by $u$ it must be the case that $u$ threatens $B_{i,v}$ and no node before $u$ in $\pi$ threatens or protects $B_{i,v}$. By how we choose $r_i$, $u$ threatens $B_{i,v}$ if
\begin{align}\label{eq:threaten}
d(u,v) - 2^i \alpha < \beta \cdot 2^{i-1} < d(u,v) + 2^i \alpha.
\end{align}
In order for $u$ to be the first node to threaten or protect $B_{i,v}$, it certainly must be the case that every node which is closer to $v$ than $u$ appears after $u$ in $\pi$ (since every such node either threatens or protects $B_{i,v}$). Thus, we let $N_{v}(u) := \{w : d(w,v) \leq d(u,v)\}$ be all nodes which are nearer to $v$ than $u$.
Lastly, a node which is too far or too close to $v$ cannot cut $B_{i,v}$. In particular, a node $u$ can only cut $B_{i,v}$ if
\begin{align}\label{eq:cutSet}
2^{i-2} - 2^i \alpha \leq d(u,v) \leq 2^{i-1} + 2^i \alpha.
\end{align}
We let $C_{i,v} := \{u : 2^{i-2} - 2^i \alpha \leq d(u,v) \leq 2^{i-1} + 2^i\alpha \}$ be all such nodes which might cut $B_{i,v}$.
It follows that $B_{i,v}$ is cut only if there exists some $u \in C_{i,v}$ which both threatens $B_{i,v}$ and precedes all $w \in N_v(u) \setminus \{u\}$ in $\pi$. Thus, we define $\hat{f}$ as follows
\begin{align*}
\hat{f}(\pi_P, \beta) := 1 - \sum_{v,i} p_v \sum_{u \in C_{i,v}}\Pr_{\pi_{\bar{P}}}(\text{$u$ precedes all $w \in N_v(u) \setminus \{u\}$ in $\pi$}) \cdot \mathbb{I}(\text{$u$ threatens $B_{i,v}$}).
\end{align*}
where, again, $\mathbb{I}$ is the indicator function. We now verify properties (2)-(4).
\begin{enumerate}\setcounter{enumi}{1}
\item Computable: Clearly $C_{i,v}$ is deterministically computable in poly-time since we need only check if \Cref{eq:cutSet} holds for each vertex. Similarly $\mathbb{I}(\text{$u$ threatens $B_{i,v}$})$ for each $u \in C_{i,v}$ can be computed by checking if \Cref{eq:threaten} holds. We can deterministically compute $\Pr_{\pi_{\bar{P}}}(\text{$u$ precedes all $w \in N_v(u) \setminus \{u\}$ in $\pi$})$ for each $u \in C_{i,v}$ as follows: if $u \in P$ and $u$ precedes all $w \in N_v(u) \cap P$ in $\pi_P$ then this probability is $1$; if some $w \in N_v(u) \setminus \{u\}$ is guaranteed to precede $u$---that is, $w$ precedes $u$ in $\pi_P$, or $w \in P$ but $u \notin P$---then this probability is $0$; otherwise $P \cap N_v(u) = \emptyset$, meaning the order of all nodes in $N_v(u)$ is set by $\pi_{\bar{P}}$; in this case $u$ precedes all nodes in $N_v(u)\setminus \{u\}$ with probability exactly $\frac{1}{|N_v(u)|}$.
\item Monotonicity is immediate by an averaging argument: in particular, $\hat{f}(\pi_P, \beta)$ is just an expectation taken over the randomness of $\pi_{\bar{P}}$ and so there must be some way to fix an element of $P$ to achieve the expectation.
\item Pessimism is immediate from the above discussion; in particular, as discussed above a ball $B_{i,v}$ is cut only if there is some $u \in C_{i,v}$ which threatens $B_{i,v}$ and which precedes all $w$ in $N_v(u) \setminus \{u\}$ in $\pi$; it follows by a union bound that $v$ fails to be $\alpha$-padded with probability at most
\begin{align*}
\sum_{i} \sum_{u \in C_{i,v}}\Pr_{\pi_{\bar{P}}}(\text{$u$ precedes all $w \in N_v(u) \setminus \{u\}$ in $\pi$}) \cdot \mathbb{I}(\text{$u$ threatens $B_{i,v}$}).
\end{align*}
\end{enumerate}
Finally, we conclude property (1): that there is some $\beta \in R$ where $R$ is computable in deterministic poly-time and $\hat{f}(\pi_\emptyset, \beta) \geq .95$. Consider drawing a $\beta \in [\frac{1}{2}, 1)$ as in the FRT cutting scheme; we will argue that $\E_\beta \left[\hat{f}(\pi_\emptyset, \beta) \right] \geq .95$ and so there must be some $\beta$ for which $\hat{f}(\pi_\emptyset, \beta) \geq .95$.
Letting $\pi$ be a uniformly random permutation, we have
\begin{align*}
\E_{\beta}\left[\hat{f}(\pi_\emptyset, \beta) \right] = 1- \sum_{v,i} p_v \sum_{u \in C_{i,v}}\Pr_{\pi}(\text{$u$ precedes all $w \in N_v(u) \setminus \{u\}$ in $\pi$}) \cdot \Pr_\beta(\text{$u$ threatens $B_{i,v}$}).
\end{align*}
If $u$ is the $s$th closest node to $v$ then we have that $\Pr_{\pi}(\text{$u$ precedes all $w \in N_v(u) \setminus \{u\}$ in $\pi$}) = \frac{1}{s}$. Moreover, $u$ threatens $B_{i,v}$ only if \Cref{eq:threaten} holds and since $\beta \cdot 2^{i-1}$ is distributed uniformly in $[2^{i-2}, 2^{i-1})$, this happens with probability at most $2^{i+1}\alpha/2^{i-2} = 8\alpha$. Next, we claim that for a fixed $v$, each $u$ occurs in at most $3$ of the $C_{i,v}$. In particular, notice that if $u$ is in $C_{i,v}$ and $C_{i', v}$ with $i \geq i'$ then we know that $2^{i-2}-2^i \alpha \leq d(u,v) \leq 2^{i'-1} + 2^{i'}\alpha$ which for $\alpha \leq \frac{1}{8}$ (which we may assume since $\alpha = \Omega(\frac{1}{\log n})$) implies $i < i' + 3$. Combining these facts with the fact that $H_n := \sum_{i=1}^n \frac{1}{i} \leq O(\log n)$ we get
\begin{align*}
\E_{\beta}\left[\hat{f}(\pi_\emptyset, \beta) \right] \geq 1 - O(\alpha \log n).
\end{align*}
and since $\alpha = \Omega(\frac{1}{\log n})$, by fixing the constant in the $\Omega(\frac{1}{\log n})$ to be sufficiently small we have $\E_{\beta}\left[\hat{f}(\pi_\emptyset, \beta) \right] \geq .95$ as desired.
Lastly, we define $R$ and argue that there must be some $\beta\in R$ such that $\hat{f}(\pi_\emptyset, \beta) \geq .95$. In particular, notice that since $\E_{\beta}\left[\hat{f}(\pi_\emptyset, \beta) \right] \geq .95$, it suffices to argue that there are polynomially-many efficiently computable intervals which partition $[\frac{1}{2}, 1)$ such that any $\beta_1$ and $\beta_2$ in the same interval satisfy $\hat{f}(\pi_\emptyset, \beta_1) = \hat{f}(\pi_\emptyset, \beta_2)$; letting $R$ take an arbitrary element from each such interval will give the desired result.
Notice that $\hat{f}(\pi_\emptyset, \beta_1) \neq \hat{f}(\pi_\emptyset, \beta_2)$ only if there is some $i,v$ and $u$ such that $u$ threatens $B_{i,v}$ with $\beta$ set to $\beta_1$ but does not threaten $B_{i,v}$ with $\beta$ set to $\beta_2$. By definition of what it means to threaten, we have
\begin{align*}
d(u,v) - 2^i \alpha < \beta_1 \cdot 2^{i-1} < d(u,v) + 2^i \alpha
\end{align*}
but either $d(u,v) - 2^i \alpha \geq \beta_2 \cdot 2^{i-1}$ or $\beta_2 \cdot 2^{i-1} \geq d(u,v) + 2^i \alpha$. We then have either
\begin{align}\label{eq:changeOne}
\beta_2 \leq d(u,v)\cdot2^{1-i} - 2\alpha < \beta_1
\end{align}
or
\begin{align}\label{eq:changeTwo}
\beta_1 < d(u,v) \cdot 2^{1-i} + 2 \alpha \leq \beta_2.
\end{align}
With Equations \ref{eq:changeOne} and \ref{eq:changeTwo} in mind, we define $R_l := \{ d(u,v) \cdot 2^{1-i} + 2 \alpha: u, v \in V, i \in [h] \}$ to be all the lower thresholds of when a change in $\beta$ affects $\hat{f}$ and define $R_u := \{ d(u,v)\cdot2^{1-i} - 2\alpha: u, v \in V, i \in [h] \}$ to be all such upper thresholds. Let $t^{(l)}$ be the $l$th largest element of $(R_l \cup R_u) \cap [\frac{1}{2}, 1)$ and let $R$ consist of one arbitrary element from the interval between $t^{(l)}$ and $t^{(l+1)}$ for $l \geq 0$ where the interval includes $t^{(l)}$ only if $t^{(l)} \in R_l$ and $t^{(l+1)}$ only if $t^{(l+1)} \in R_u$; $t^{(0)} = \frac{1}{2}$ is always included and $t^{(|R|)} = 1$ is never included. By the above discussion every $\beta_1$ and $\beta_2$ which are in the same interval satisfy $\hat{f}(\pi_\emptyset, \beta_1) = \hat{f}(\pi_\emptyset, \beta_2)$; moreover, these intervals partition $[\frac{1}{2}, 1)$ by construction.
We know $|R| = \text{poly}(n)$ since $h \leq O(\log n)$ by our assumption that $\max_{u,v}d(u,v)$ is $\text{poly}(n)$ and there are $n^2$ pairs $u,v$. Clearly $R$ is computable in deterministic poly-time. Thus, by the above discussion $R$ must contain some $\beta$ such that $\hat{f}(\pi_\emptyset, \beta) \geq .95$.
\end{proof}
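The estimator is also straightforward to evaluate directly from its definition. The following Python sketch (illustrative only, using the same metric representation as in the cutting-scheme sketch above and our own hypothetical names) computes $\hat{f}(\pi_P, \beta)$, handling the three cases of the precedence probability exactly as in the computability argument.
\begin{verbatim}
# Sketch of the pessimistic estimator f_hat (illustrative only). V is the
# vertex set, d the metric as a dict, p a dict of vertex weights summing
# to 1, pi_P the fixed prefix as a list, and h the number of levels.

def f_hat(V, d, p, pi_P, beta, alpha, h):
    pos = {u: idx for idx, u in enumerate(pi_P)}
    total = 0.0
    for v in V:
        for i in range(1, h + 1):
            r = beta * 2 ** (i - 1)
            for u in V:
                # u can cut B_{i,v} only if it lies in the annulus C_{i,v}
                if not (2 ** (i - 2) - 2 ** i * alpha <= d[(u, v)]
                        <= 2 ** (i - 1) + 2 ** i * alpha):
                    continue
                # indicator that u threatens B_{i,v}
                if not (d[(u, v)] - 2 ** i * alpha < r
                        < d[(u, v)] + 2 ** i * alpha):
                    continue
                # Pr over the random suffix that u precedes N_v(u) \ {u}
                N = [w for w in V if d[(w, v)] <= d[(u, v)]]
                fixed = [w for w in N if w in pos]
                if u in pos:
                    prob = 1.0 if all(pos[u] <= pos[w] for w in fixed) else 0.0
                elif fixed:
                    prob = 0.0           # a nearer node is fixed before u
                else:
                    prob = 1.0 / len(N)  # order fully decided by the suffix
                total += p[v] * prob
    return 1.0 - total
\end{verbatim}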
We now formalize our node-weighted derandomization.
\begin{lemma}\label{lem:derandPadding}
There is a deterministic algorithm which given metric $(V, d)$ and a distribution $\{p_v\}_v$ over nodes returns a hierarchical decomposition $\mathcal{H}$ in which at least a $.95$ fraction of nodes are $\Omega(\frac{1}{\log n})$-padded by weight; i.e.
\begin{align*}
\sum_v p_v \cdot \mathbb{I}\left(\text{$v$ is $\Omega\left(\frac{1}{\log n}\right)$-padded in $\mathcal{H}$}\right)\geq .95.
\end{align*}
\end{lemma}
\begin{proof}
Our derandomization algorithm is as follows. First, choose the $\beta \in R$ which maximizes $\hat{f}(\pi_\emptyset, \beta)$. Call this $\beta^*$. Next, initially let $P = \emptyset$ and repeat the following until $P = V$: for each $v \in \bar{P}$ we compute $\hat{f}(\pi_{P \cup \{v\}}, \beta^*)$; we add to $P$ whichever $v$ maximizes $\hat{f}(\pi_{P \cup \{v\}}, \beta^*)$. Lastly, we return $\mathcal{H}(\pi_V, \beta^*)$.
By \Cref{lem:pessEst} we know that $\beta^*$ will satisfy $\hat{f}(\pi_\emptyset, \beta^*) \geq .95$. Moreover, since $\hat{f}$ is monotone by \Cref{lem:pessEst} we know that the $\pi_V$ we choose will satisfy $\hat{f}(\pi_V, \beta^*) \geq .95$. Lastly, since $\hat{f}$ is pessimistic, it follows that $f(\pi_V, \beta^*) \geq \hat{f}(\pi_V, \beta^*) \geq .95$ and so $\mathcal{H}(\pi_V, \beta^*)$ is padded on a $.95$ fraction of nodes by weight as desired.
The deterministic polynomial runtime of our algorithm is immediate from the deterministic poly-time computability of $\hat{f}$ and the fact that $R$ is computable in deterministic poly-time.
\end{proof}
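In code, this derandomization is a short greedy loop over conditional expectations; below is a minimal sketch reusing the hypothetical \texttt{f\_hat} from above.
\begin{verbatim}
# Sketch of the greedy derandomization (illustrative only). R is the
# polynomial-size set of candidate beta values from the previous lemma.

def derandomized_decomposition(V, d, p, alpha, h, R):
    beta_star = max(R, key=lambda b: f_hat(V, d, p, [], b, alpha, h))
    prefix, remaining = [], set(V)
    while remaining:
        # extend the prefix by whichever vertex keeps the estimator
        # largest; monotonicity guarantees it never drops below .95
        v = max(remaining, key=lambda u: f_hat(V, d, p, prefix + [u],
                                               beta_star, alpha, h))
        prefix.append(v)
        remaining.remove(v)
    return prefix, beta_star  # run the FRT scheme with this pi and beta
\end{verbatim}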
Using the above node-weighted derandomization lemma gives our deterministic copy tree embedding construction. In particular, we run the following multiplicative-weights-type algorithm with $\epsilon = .01$ and set the number of iterations as $\tau:=4 \ln n / \epsilon^2$. In the following we let $p_v^{(t)} := w^{(t)}_v / \sum_{u} w_u^{(t)}$ be the proportional share of $v$'s weight in iteration $t$.
\begin{enumerate}
\item Uniformly set the initial weights: $w_v^{(1)}=1$ for all $v \in V$.
\item For $t \in [\tau]$:
\begin{enumerate}
\item Run the algorithm given in \Cref{lem:derandPadding} using distribution $p^{(t)}$ and let $\mathcal{H}_t$ be the resulting hierarchical decomposition.
\item \textbf{Set mistakes:} For each vertex $v$ which is $\Omega(\frac{1}{\log n})$-padded in $\mathcal{H}_t$ let $m_v^{(t)} = 1$. Let $m_v^{(t)} = 0$ for all other $v$.
\item \textbf{Update weights:} for all $v \in V$, let $w_v^{(t+1)} \gets \exp(-\epsilon m_v^{(t)}) \cdot w_v^{(t)}$.
\end{enumerate}
\item Return $(\mathcal{H}_t)_{t=1}^\tau$.
\end{enumerate}
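For concreteness, the loop above can be sketched in Python as follows. Here \texttt{derandomized\_decomposition} is the hypothetical helper from the previous section, \texttt{run\_frt} denotes a deterministic run of the cutting scheme with fixed $\pi$ and $\beta$, and \texttt{is\_padded} is assumed to test whether a vertex is $\Omega(1/\log n)$-padded; this is a sketch, not a definitive implementation.
\begin{verbatim}
import math

# Sketch of the multiplicative-weights loop (illustrative only).

def build_decompositions(V, d, alpha, h, R, run_frt, is_padded, eps=0.01):
    n = len(V)
    tau = math.ceil(4 * math.log(n) / eps ** 2)   # Theta(log n) rounds
    w = {v: 1.0 for v in V}
    decompositions = []
    for _ in range(tau):
        total = sum(w.values())
        p = {v: w[v] / total for v in V}          # normalized weights
        prefix, beta = derandomized_decomposition(V, d, p, alpha, h, R)
        H = run_frt(V, d, prefix, beta)           # deterministic FRT run
        decompositions.append(H)
        for v in V:
            if is_padded(v, H):                   # m_v = 1: down-weight v
                w[v] *= math.exp(-eps)
    return decompositions
\end{verbatim}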
We state a well-known fact regarding multiplicative weights in our notation. Readers familiar with multiplicative weights may recognize this as the fact that the expected performance of multiplicative weights over logarithmically-many rounds is competitive with every expert.
\begin{lemma}[\citet{arora2012multiplicative}]\label{lem:MWAvg}
The above algorithm guarantees that for any $v \in V$ we have
\begin{align*}
\frac{1}{\tau} \sum_{t \leq \tau} p^{(t)} \cdot m^{(t)} \leq \epsilon + \frac{1}{\tau} \sum_{t \leq \tau} m_v^{(t)}
\end{align*}
where $p^{(t)} \cdot m^{(t)} := \sum_v p^{(t)}_v m_v^{(t)}$ is the usual inner product.
\end{lemma}
Using this fact we conclude that we are able to produce a good set of hierarchical decompositions.
\begin{lemma}\label{lem:HDConstr}
The above algorithm returns a collection of hierarchical decompositions $\{\mathcal{H}_t\}_{t=1}^\tau$ where $\tau = \Theta(\log n)$ and every vertex is $\Omega(1 / \log n)$-padded in at least $.9\tau$ of the decompositions.
\end{lemma}
\begin{proof}
Since $\tau:=4 \ln n / \epsilon^2$ we know that $\tau = \Theta(\log n)$.
We need only argue, then, that each node is padded in at least a $.9$ fraction of the $\tau$ total $\mathcal{H}_t$. Let \[f_v := \frac{1}{\tau} \sum_{t \leq \tau} \mathbb{I}\left(\text{$v$ is $\Omega\left(\frac{1}{\log n}\right)$-padded in $\mathcal{H}_t$}\right)\] be the fraction of the decompositions in which $v$ is padded. Consider a fixed node $v$. By \Cref{lem:MWAvg} we know that
\begin{align}\label{eq:mwguar}
\frac{1}{\tau} \sum_{t \leq \tau} p^{(t)} \cdot m^{(t)} \leq \epsilon + \frac{1}{\tau} \sum_{t \leq \tau} m_v^{(t)}
\end{align}
By definition of $m_v^{(t)}$ we have that the right hand side of \Cref{eq:mwguar} is $\epsilon + f_v$. On the other hand, by how we set $m^{(t)}$, the left hand side of \Cref{eq:mwguar} is $\frac{1}{\tau}\sum_t\sum_v p_v^{(t)} \cdot \mathbb{I}(\text{$v$ is $\Omega(\frac{1}{\log n})$-padded in $\mathcal{H}$})$ which by \Cref{lem:derandPadding} is at least $.95$. Combining these facts we have $.95 \leq \epsilon + f_v$ and so by our choice of $\epsilon$ we know $.9 \leq f_v$ as desired.
\end{proof}
Combining \Cref{lem:HDConstr} with \Cref{lem:RTEFromHDs} gives \Cref{thm:repTreeConst}.
\subsection{Construction 2: Merging FRT Support}\label{sec:FRTSup}
In this section we observe that the support of the FRT distribution can be merged to produce copy tree embeddings with cost stretch $O(\log n)$ and copy number $O(n \log n)$. In particular, we rely on the known fact that one can make the size of the support of the FRT distribution $O(n \log n)$ and compute said support in deterministic poly-time, as summarized in the following theorem.
\begin{theorem}[\cite{charikar1998approximating,fakcharoenphol2004tight,konjevod2001approximating}]\label{thm:charikFRTSup}
Given a weighted graph $G = (V,E, w)$ and root $r \in V$, there exists a distribution $\mathcal{D}$ being supported over $O(n \log n)$ well-separated weighted trees on $V$ rooted at $r$ where for any $u,v \in V$ we have $\E_{T \sim \mathcal{D}}[d_T(u,v)] \leq O(\log n \cdot d_G(u, v))$ and for every $T$ in the support of $\mathcal{D}$ we have $d_G(u,v) \leq d_T(u, v)$. Also, (the support and probabilities of) $\mathcal{D}$ can be computed in deterministic poly-time.
\end{theorem}
Merging the trees of this distribution, along with some simple probabilistic-method arguments, gives a copy tree embedding with the desired properties.
\frtSupp*
\begin{proof}
Let $T_1, \ldots, T_k$ with $k = O(n \log n)$ be the trees in the support of the distribution $\mathcal{D}$ as guaranteed by \Cref{thm:charikFRTSup}. Then, we let $T$ be the result of identifying each copy of $r$ as the same vertex in each $T_i$ (but not identifying copies of other vertices in $V$ as the same vertex); that is, $|V(T)| = k \cdot n - (k-1)$. $T$'s weight function is inherited from each $T_i$ in the natural way. Similarly, we let $\phi(v)$ be the set containing each copy of $v$ in each of the $T_i$. It is easy to verify that $\phi$ is indeed a copy mapping. Also, note that $\phi(v)$ is computable in deterministic poly-time, our copy number is $O(n \log n)$ by construction and that $T$ is well-separated since each $T_i$ is well-separated.
We next specify $\pi_{G \to T}(F)$ for a fixed $F$. For tree $T_i$, let $T_i' \subseteq T_i$ be the subgraph of $T_i$ which contains the unique tree path between $u$ and $v$ iff $\{u, v\} \in F$. By \Cref{thm:charikFRTSup} we know that $\E_{T_i \sim \mathcal{D}}[w_{T_i}(T_i')] \leq O(\log n \cdot w_G(F))$ and so there must be some $j$ such that $w_{T_j}(T_j') \leq O(\log n \cdot w_G(F))$. Thus, we let $\pi_{G \to T}(F) := T_j'$. We argue that $\pi_{G \to T}$ satisfies the stated connectivity properties. In particular, notice that by construction we have that if $u$ and $v$ are connected in $F$ then they will have some copy connected in $\pi_{G \to T}(F)$: if $u$ and $v$ are connected in $F$ by path $(v_1, v_2, \ldots)$ then the path in $T_j$ which connects the copy of $v_l$ and the copy of $v_{l+1}$ is contained in $\pi_{G \to T}(F)$ and the concatenation of these paths for all $l$ connects the copies of $u$ and $v$ contained in $T_j$. Moreover, notice that $\pi_{G \to T}(F)$ satisfies the required cost preservation properties since $w_T(\pi_{G \to T}(F)) = w_{T_j}(T_j') \leq O(\log n \cdot w_G(F))$ by construction.
Lastly, we specify $\pi_{T \to G}(F')$. We let $\pi_{T \to G}(F')$ be the graph induced by $\{P_{uv} : \{u',v' \} \in F' \}$ where $P_{uv}$ is an arbitrary shortest path in $G$ between $u$ and $v$ and $u'$ and $v'$ are copies of $u$ and $v$. We first verify the required connectivity preservation properties: if $u'$ and $v'$ are connected in $F'$ by path $(v_1', v_2' \ldots)$ then we know that $v_l$ and $v_{l+1}$ will be connected in $\pi_{T \to G}(F')$ for every $l$ by $P_{v_{l}v_{l+1}}$ where $v_i'$ is some copy of $v_i$. Thus, $u$ and $v$ will be connected in $\pi_{T \to G}(F')$. We next verify the required cost-preservation properties. By \Cref{thm:charikFRTSup} we have for every $i$ that $w_{T_i}(e') \geq w_G(P_{uv})$ for each $e' = \{u', v'\} \in T_i$. Thus, $w_T(F') = \sum_{e' \in F'} w_T(e') \geq \sum_{\{u',v'\} \in F'} w_G(P_{uv}) \geq w_G(\pi_{T \to G}(F'))$ where we have again used $u$ and $v$ to stand for $\phi^{-1}(u')$ and $\phi^{-1}(v')$ respectively. Lastly, we note that $\pi_{T \to G}(F')$ is trivially computable in deterministic poly-time.
\end{proof}
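The merging step itself is mechanical. As an illustration (with hypothetical names and representations), one can build $T$ and $\phi$ from the support trees as follows, relabeling each vertex by the index of the tree it comes from and identifying the copies of $r$.
\begin{verbatim}
# Sketch of merging the FRT support into a copy tree (illustrative only).
# Each support tree is given as a dict {(u, v): weight} of its edges.

def merge_frt_support(trees, r):
    T_edges, phi = {}, {}
    for i, Ti in enumerate(trees):
        def label(v, i=i):
            return r if v == r else (i, v)   # copies of r are identified
        for (u, v), wt in Ti.items():
            T_edges[(label(u), label(v))] = wt
            for x in (u, v):
                phi.setdefault(x, set()).add(label(x))
    return T_edges, phi   # copy number is the number of support trees
\end{verbatim}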
\section{Deterministic Online Group Steiner Tree/Forest Reductions}\label{sec:detOGST}
In this section we prove that the guarantees of our copy tree embeddings are sufficient to generalize any deterministic algorithm for online group Steiner tree on trees to general graphs, thereby reducing an open question posed by \citet{alon2006general} to its tree case. We show that a similar result holds for the online group Steiner forest problem which generalizes online group Steiner tree.
In general, mapping an instance of a problem $P$ onto an equivalent instance $I'$ on the copy tree embedding often results in an instance $I'$ which is not an instance of the same problem $P$. However, the group Steiner tree (resp., forest) problem has the notable property that mapping it onto a copy tree embedding simply results in another instance of the group Steiner tree (resp., forest) problem, this time on a tree. This property, albeit somewhat hidden in the proof, is the main reason why copy tree embeddings are well suited for these two problems.
Because past work on group Steiner tree and group Steiner forest has stated runtimes and approximation guarantees as functions of the maximum group size and the number of groups rather than just $n$---see e.g.\ \cite{garg2000polylogarithmic,barta2020online}---we will give our results in the same generality with respect to these parameters.
\subsection{Deterministic Online Group Steiner Tree}\label{sec:det-online-group-steiner-tree}
We begin with our results for online group Steiner tree.
\textbf{Offline Group Steiner Tree:} In the group Steiner Tree problem we are given a weighted graph $G = (V, E, w)$ as well as pairwise disjoint groups $g_1, g_2, \ldots, g_k \subseteq V$ and root $r \in V$. We let $N:= \max_i |g_i|$ be the maximum group size. Our goal is to find a (connected) tree $T$ rooted at $r$ which is a subgraph of $G$ and satisfies $T \cap g_i \neq \emptyset$ for every $i$. We wish to minimize our cost, $w(T) := \sum_{e \in E(T)} w(e)$.\footnote{The assumption that the tree is rooted in group Steiner tree is without loss of generality as we may always brute-force search over a root. Similarly, the assumption that all groups are pairwise disjoint is without loss of generality since if $v$ is in groups $\{g_1, g_2, \ldots \}$ then we can remove $v$ from all groups and add vertices $v_1, v_2, \ldots$ to $G$ which are connected only to $v$ so that $v_i \in g_i$ and $w((v, v_i)) = 0$ for all $i$.}
\textbf{Online Group Steiner Tree:} Online group Steiner tree is the same as offline group Steiner tree but where our solution need not be a tree and groups are revealed in time steps $t = 1, 2, \ldots$. That is, in time step $t$ an adversary reveals a new group $g_t$ and the algorithm must maintain a solution $T_t$ where: (1) $T_{t-1} \subseteq T_{t}$; (2) $T_t$ is feasible for the group Steiner tree problem on groups $g_1, \ldots g_t$ and; (3) $T_t$ is competitive with the optimal offline solution for this problem where the competitive ratio of our algorithm is $\max_t w(T_t)/ \mathrm{OPT}_t$ where $\mathrm{OPT}_t$ is the cost of the optimal offline group Steiner tree solution on the first $t$ groups. Here, we will let $k$ be the number of possible groups revealed by the adversary.
\begin{theorem}\label{thm:detGST}
If there exists:
\begin{enumerate}
\item A poly-time deterministic algorithm to compute an efficient, well-separated $\alpha$-approximate copy tree embedding with copy number $\chi$ and;
\item A poly-time $f(n, N, k)$-competitive deterministic algorithm for online group Steiner tree on well-separated trees
\end{enumerate}
then there exists an $(\alpha \cdot f(\chi n, \chi N, k))$-competitive deterministic algorithm for online group Steiner tree (on general graphs).
\end{theorem}
\begin{proof}
We will use our copy tree embedding to produce a single tree on which we must solve deterministic online group Steiner tree.
In particular, consider an instance of online group Steiner tree on weighted graph $G = (V, E, w)$ with root $r$. Then, we first compute a copy tree embedding $(T, \phi, \pi_{G \to T}, \pi_{T \to G})$ deterministically with respect to $G$ and $r$, as is possible by assumption. Next, given an instance $I_t$ of group Steiner tree on $G$ with groups $g_1, \ldots g_t$, we let $I_t'$ be the instance of group Steiner tree on $T$ with groups $\phi(g_1), \ldots \phi(g_t)$ and root $r' := \phi(r)$ where we have used the notation $\phi(g_i) := \bigcup_{v \in g_i} \phi(v)$. If the adversary requires that we solve instance $I_t$ in time step $t$, then we require that our deterministic algorithm for online group Steiner tree on trees solve $I_t'$ in time step $t$, and we let $H_t'$ be the solution returned by our algorithm for $I_t'$. Lastly, we return as our solution for $I_t$ in time step $t$ the set $H_t := \pi_{T \to G}(H_t')$.
Let us verify that the resulting algorithm is indeed feasible and of the appropriate cost.
First, we have that $H_t \subseteq H_{t+1}$ for every $t$ since $H_t' \subseteq H_{t+1}'$ because our algorithm for trees returns a feasible solution for its online problem and $\pi_{T \to G}$ is monotone by definition of a copy tree embedding. Moreover, we claim that $H_t$ connects at least one vertex from each $g_i$ to $r$ for every $i \leq t$. To see this, notice that $H_t'$ connects at least one vertex from $\phi(g_i)$ to $r' = \phi(r)$ since it is a feasible solution for $I_t'$, and so $H_t'$ connects at least one copy of a vertex in $g_i$ to $r'$; by the connectivity preservation properties of a copy tree embedding it follows that at least one vertex from $g_i$ is connected to $r$ in $H_t$. Thus, our solution is indeed feasible in each time step.
Next, we verify the cost of our solution. Let $\mathrm{OPT}_t'$ be the cost of the optimal solution to $I_t'$ and let $n'$ and $N'$ be the number of vertices and maximum size of a group in $I_t'$ for any $t$. By our assumption on the cost of the algorithm we run on $T$ and since $n' \leq \chi n$ and $N' \leq \chi N $ by definition of copy number, we know that
\begin{align*}
w_T(H_t') \leq {\mathrm{OPT}}_t' \cdot f(n', N', k) = {\mathrm{OPT}}_t' \cdot f(\chi n, \chi N, k).
\end{align*}
Next, let $H^*_t$ be the optimal solution to $I_t$. We claim that $\pi_{G \to T}(H^*_t)$ is feasible for $I_t'$. This follows because $H^*_t$ connects a vertex from $g_1, \ldots, g_t$ to $r$ and so by the connectivity preservation property of copy tree embeddings we know that some vertex from each of $\phi(g_1), \ldots, \phi(g_t)$ is connected to $r' = \phi(r)$. Applying this feasibility of $\pi_{G \to T}(H_t^*)$ and the cost preservation property of our copy tree embedding, it follows that $\mathrm{OPT}_t' \leq w_T(\pi_{G \to T}(H_t^*)) \leq \alpha \cdot w_G(H_t^*) = \alpha \cdot \mathrm{OPT}_t$.
Similarly, we know by the cost preservation property of our copy tree embedding that $w_G(\pi_{T \to G}(H_t')) \leq w_T(H_t')$. Combining these observations we have
\begin{align*}
w_G(\pi_{T \to G}(H_t')) \leq w_T(H_t') \leq {\mathrm{OPT}}_t' \cdot f(\chi n, \chi N, k) \leq {\mathrm{OPT}}_t \cdot \alpha \cdot f(\chi n, \chi N, k),
\end{align*}
thereby showing that our solution is within the required cost bound.
\end{proof}
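The reduction in this proof is a thin online wrapper around the tree algorithm; the following Python sketch makes this concrete (all names here are hypothetical; \texttt{tree\_algo.add\_group} is assumed to return the tree algorithm's monotone solution after each arrival).
\begin{verbatim}
# Sketch of the online reduction (illustrative only). phi maps a vertex
# to its set of copies and pi_T_to_G maps a tree edge set back to G.

def online_gst_on_graphs(tree_algo, phi, pi_T_to_G, groups_stream):
    solution = set()
    for g in groups_stream:                     # adversary reveals g_t
        g_copies = set().union(*(phi(v) for v in g))
        H_tree = tree_algo.add_group(g_copies)  # solve I_t' on the tree
        solution |= pi_T_to_G(H_tree)           # monotone by definition
        yield solution
\end{verbatim}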
Plugging in our first construction (\Cref{thm:repTreeConst}) or our second construction (\Cref{thm:frtSupp}) of a copy tree embedding immediately gives the following corollary.
\begin{corollary}\label{cor:GST}
If there is an $f(n, N, k)$-competitive deterministic algorithm for online group Steiner tree on well-separated trees then there are $O(\log n \cdot f(O(n^2 \log n), O(n N), k ))$ and $O(\log ^ 2 n \cdot f(O(n \log n), O(N \log n), k))$-competitive deterministic algorithms for online group Steiner tree (on general graphs).
\end{corollary}
\subsection{Deterministic Online Group Steiner Forest}\label{sec:det-online-group-steiner-forest}
In this section we show a black-box reduction from the poly-log-approximate online deterministic group Steiner forest in a general graph $G$ to poly-log-approximate online deterministic group Steiner forest when the underlying graph is a tree. A formal definition of the problem follows.
\textbf{Offline Group Steiner Forest:} In the group Steiner forest problem we are given a weighted graph $G = (V, E, w)$ as well as pairs of subsets of nodes $(A_1, B_1), (A_2, B_2), \ldots, (A_k, B_k)$ where $A_i, B_i \subseteq V$. Our goal is to find a forest $F$ which is a subgraph of $G$ and in which for each $i$ there is an $a_i \in A_i$ and $b_i \in B_i$ such that $a_i$ and $b_i$ are connected in $F$. We wish to minimize our cost, $w(F) := \sum_{e \in E(F)} w(e)$. We let $N := \max_i \max(|A_i|, |B_i|)$ be the maximum subset size.
\textbf{Online Group Steiner Forest:} Online group Steiner forest is the same as group Steiner forest but each pair $(A_t, B_t)$ is revealed at time step $t = 1,2, \ldots$ by an adversary and in each time step $t$ we must maintain a forest $F_t$ which is feasible for pairs $(A_1, B_1), \ldots (A_t, B_t)$ so that $F_{t-1} \subseteq F_t$. The competitive ratio of an online algorithm with solution $\{F_t\}_t$ is $\max_t w(F_t) / \mathrm{OPT}_t$ where $\mathrm{OPT}_t$ is the optimal offline solution for the group Steiner forest problem we must solve in time step $t$. For the online problem let $k$ be the number of possible pairs revealed by the adversary.
Note that group Steiner forest directly generalizes group Steiner tree since a tree instance on a weighted graph $G$ with root $r \in V(G)$ can be reduced to an equivalent forest instance on the same graph $G$ by mapping each group $g$ to the pair $(\{r\}, g)$. This reduction is valid in both the offline and online settings (as well as in the demand-robust setting defined later).
We now show that a deterministic algorithm for online group Steiner forest on trees gives a deterministic algorithm for online group Steiner forest on general graphs up to small losses. These results and the corresponding proofs will be quite similar to those of the previous section so we defer a full proof to the appendix.
\begin{restatable}{theorem}{onGSF}\label{thm:onGSF}
If there exists:
\begin{enumerate}
\item A poly-time deterministic algorithm to compute an efficient, well-separated $\alpha$-approximate copy tree embedding with copy number $\chi$ and;
\item A poly-time $f(n, N, k)$-competitive deterministic algorithm for online group Steiner forest on well-separated trees
\end{enumerate}
then there exists an $(\alpha \cdot f(\chi n, \chi N, k))$-competitive deterministic algorithm for online group Steiner forest (on general graphs).
\end{restatable}
\begin{proof}[Proof Sketch]
The properties of a copy tree embedding show that an instance of group Steiner forest on a tree exactly map to an instance of group Steiner forest on our copy tree. In particular, if we must connect $(A_i, B_i)$ in the general graph then we can just connect $(\bigcup_{v \in A_i} \phi(v), \bigcup_{v \in B_i}\phi(v))$ on our copy tree and map back the solution with $\pi_{T \to G}$. The full proof is available in \Cref{sec:defProof}.
\end{proof}
Plugging in our first construction (\Cref{thm:repTreeConst}) or our second construction (\Cref{thm:frtSupp}) of a copy tree embedding immediately gives the following corollary.
\begin{corollary}\label{cor:GSF}
If there is an $f(n, N, k)$-competitive deterministic algorithm for online group Steiner forest on well-separated trees then there are $O(\log n \cdot f(O(n^2 \log n), O(n N), k ))$ and $O(\log ^ 2 n \cdot f(O(n \log n), O(N \log n), k ))$-competitive deterministic algorithms for online group Steiner forest (on general graphs).
\end{corollary}
Lastly, we note that \Cref{thm:GSTAndFor} follows immediately from \Cref{cor:GST} and \Cref{cor:GSF}.
\section{Online Partial Group Steiner Tree}\label{sec:onPGST}
In this section we give a deterministic bicriteria algorithm for the online partial group Steiner tree problem which is the same as online group Steiner tree but where we must connect at least $\frac{1}{2}$ of all vertices from each group to the root. The algorithm is bicriteria in the sense that it relaxes both the $1/2$-connectivity guarantee and the cost.
As mentioned in the introduction, this problem generalizes group Steiner tree. In particular, we can reduce an instance of group Steiner tree on weighted graph $G = (V, E, w)$ with groups $\{g_i\}_i$ and root $r$ to an instance of partial group Steiner tree as follows. For each group $g_i$ we add $|g_i|-1$ new vertices with an edge of cost $0$ attached to $r$. Our partial group Steiner tree problem will be on the resulting graph with root $r$ and groups $\{g_i'\}_i$ where $g_i'$ consists of $g_i$ along with its corresponding $|g_i|-1$ dummy nodes. Any partial group Steiner tree solution on the resulting graph will connect at least one vertex from each $g_i$ to $r$. Conversely, by connecting all of the dummy nodes we added to our graph to $r$ by their cost $0$ edges, it is easy to see that a solution for group Steiner tree on the input graph exactly corresponds to a solution for our partial group Steiner tree instance.\footnote{As a minor technical caveat: we have assumed that edge weights are at least $1$ throughout this paper; it is easy to see that by scaling weights up by a polynomial factor and then using weight $1$ edges instead of weight $0$ edges this reduction still works.}
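This reduction is easy to state in code; the following sketch (using \texttt{networkx} and hypothetical names) pads each group with zero-cost dummy leaves at the root, so that connecting half of a padded group is equivalent to connecting one vertex of the original group.
\begin{verbatim}
import networkx as nx

# Sketch of the reduction from group Steiner tree to partial group
# Steiner tree (illustrative only). G is a networkx.Graph with edge
# attribute 'weight'; groups is a list of vertex sets; r is the root.

def gst_to_partial_gst(G, groups, r):
    H = G.copy()
    padded_groups = []
    for i, g in enumerate(groups):
        dummies = ["dummy_%d_%d" % (i, j) for j in range(len(g) - 1)]
        for u in dummies:
            H.add_edge(r, u, weight=0)   # zero-cost leaf at the root
        padded_groups.append(set(g) | set(dummies))
    return H, padded_groups
\end{verbatim}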
Moreover, it is also easy to see that any deterministic bicriteria algorithm for online partial group Steiner tree also gives a poly-log-competitive deterministic (unicriteria) algorithm for online (non-group) Steiner tree. In particular, given an instance of Steiner tree on weighted graph $G = (V, E, w)$ with root $r$ where we must connect terminals $A \subseteq V$ to $r$, it suffices to solve the partial group Steiner tree problem where each vertex in $A$ is in a singleton group with any constant bicriteria relaxation. This is because connecting any $c > 0$ fraction of each group to $r$ will connect at least one vertex to $r$ by the integrality of the number of connected vertices. Thus, our result generalizes the fact that deterministic poly-log approximations are known for online (non-group) Steiner tree \cite{imase1991dynamic}. However, we do note that our (deterministic) poly-log-approximate bicriteria online partial group Steiner tree algorithm does not imply there is a (deterministic) poly-log-approximate online (non-partial) group Steiner tree algorithm (due to the nature of the bicriteria guarantee).
Mapping the online partial group Steiner tree problem on a copy tree embedding yields a problem that is slightly different than the original one (unlike, e.g., group Steiner tree). Our result will, therefore, be for a problem which generalizes partial group Steiner tree: we give a deterministic $\tilde{O}(\max_i \frac{|g_i|}{f_i \cdot \epsilon})$ bicriteria approximation for what we call the $f$-partial group Steiner tree problem which requires connecting at least $f_i$ vertices from group $g_i$ to the root; our bicriteria algorithm will connect at least $f_i \cdot (1 - \epsilon)$ vertices from each group for any specified input $\epsilon > 0$. It will be convenient for us to consider this problem as opposed to partial group Steiner tree since group Steiner tree is just $f$-partial group Steiner tree with $f_i = 1$ for all $i$. Thus, as an immediate corollary of our algorithm we will be able to give a deterministic algorithm for online group Steiner tree with a competitive ratio that is linear in the maximum group size.
\textbf{Offline $f$-Partial Group Steiner:} In the $f$-partial group Steiner Tree problem we are given a weighted graph $G = (V, E, w)$ as well as pairwise disjoint groups $g_1, g_2, \ldots, g_k \subseteq V$, desired connected vertices $1 \leq f_i \leq |g_i|$ for each group $g_i$ and root $r \in V$.
Our goal is to find a tree $T$ rooted at $r$ which is a subgraph of $G$ and satisfies $|T \cap g_i| \geq f_i$ for every $i$. We wish to minimize our cost, $w(T) := \sum_{e \in E(T)} w(e)$.\footnote{As with group Steiner tree the assumption that the tree is rooted and that the groups are pairwise disjoint is without loss of generality.}
\textbf{Online $f$-Partial Group Steiner:} Online $f$-partial group Steiner tree is the same as offline partial group Steiner tree but where our solution need not be a tree and groups are revealed in time steps $t = 1, 2, \ldots$. That is, in time step $t$ an adversary reveals a new group $g_t$ and the algorithm must maintain a solution $T_t$ where: (1) $T_{t-1} \subseteq T_{t}$; (2) $T_t$ is feasible for the (offline) $f$-partial group Steiner tree problem on groups $g_1, \ldots g_t$ and; (3) $T_t$ is cost-competitive with the optimal offline solution for this problem where the cost-competitive ratio of our algorithm is $\max_t w(T_t)/ \mathrm{OPT}_t$ where $\mathrm{OPT}_t$ is the cost of the optimal offline $f$-partial group Steiner tree solution on the first $t$ groups. We will give a bicriteria approximation for online $f$-partial group Steiner tree; thus we say that an online solution is $\rho$-connection-competitive if for each $t$ we have $|T_t \cap g_i| \geq (f_i \cdot \rho)$ for every $i \leq t$.
We note that the partial group Steiner tree problem as mentioned above is simply the special case of $f$-partial group Steiner tree where $f_i = \frac{|g_i|}{2}$ for every $i$.
\subsection{Online $f$-Partial Group Steiner Tree on a Tree}
We begin by giving a bicriteria deterministic online algorithm for $f$-partial group Steiner tree on trees based on a ``water-filling'' approach. Informally, in iteration $t$ each unconnected vertex in each group will grow the solution towards the root at an equal rate until at least $f_i \cdot (1 - \epsilon)$ vertices in $g_t$ are connected to $r$.
\subsubsection{Problem}
More formally we will solve a problem which is a slight generalization of $f$-partial group Steiner tree on trees. We solve this problem on a tree rather than just $f$-partial group Steiner tree on a tree because, unlike group Steiner tree, the ``groupified'' version of $f$-partial group Steiner tree is not necessarily another instance of $f$-partial group Steiner tree. Roughly, instead of groups we now have groups of groups, hence we call this problem $2$-level $f$-partial group Steiner tree.
\textbf{Offline $2$-Level $f$-Partial Group Steiner Tree}: In $2$-level $f$-partial group Steiner tree we are given a weighted graph $G = (V, E, w)$, root $r \in V$ and groups of groups $\mathcal{G}_1, \ldots \mathcal{G}_k$ where $\mathcal{G}_i$ consists of groups $\{g_1^{(i)}, \ldots g_{k_i}^{(i)}\}$ where each $g_j^{(i)} \subseteq V$. We are also given connectivity requirements $f_1, \ldots, f_k$. Our goal is to compute a minimum-weight tree $T$ containing $r$ where for each $i \leq k$ we have $|\{g_j^{(i)} : g_j^{(i)} \cap T \neq \emptyset\}| \geq f_i$. We let $n_i := |\{v : \exists j \text{ s.t.\ } v \in g_j^{(i)}\}|$. Notice that $f$-partial group Steiner tree is just 2-level $f$-partial group Steiner tree where each $g_j^{(i)}$ is a singleton set.
\textbf{Online $2$-Level $f$-Partial Group Steiner Tree}: Online $2$-level $f$-Partial Group Steiner tree is the same as the offline problem but where $\mathcal{G}_t$ is revealed in time step $t$ by an adversary. In particular, for each time step $t$ we must maintain a solution $T_t$ where: (1) $T_{t-1} \subseteq T_{t}$ for all $t$; (2) $T_t$ is feasible for the (offline) 2-level $f$-partial group Steiner tree problem on $\mathcal{G}_1, \ldots, \mathcal{G}_t$ with connectivity requirements $f_1, \ldots, f_t$ and; (3) $T_t$ is cost-competitive with the optimal offline solution for this problem where the cost-competitive ratio of our algorithm is $\max_t w(T_t)/ \mathrm{OPT}_t$ where $\mathrm{OPT}_t$ is the cost of the optimal offline 2-level $f$-partial group Steiner tree solution on the first $t$ groups of groups.
We will give a bicriteria approximation for online 2-level $f$-partial group Steiner tree on trees; thus we say that an online solution is $\rho$-connection-competitive if for each $t$ we have $|\{g_j^{(i)} : g_j^{(i)} \cap T \neq \emptyset\}| \geq \rho \cdot f_i$ for every $i \leq t$.
\subsubsection{Algorithm}
We now formally describe our algorithm for 2-level $f$-partial group Steiner tree on weighted tree $T = (V, E, w)$ given an $\epsilon > 0$. We will maintain a fractional variable $0 \leq x_e \leq w_e$ for each edge indicating the extent to which we buy $e$ where our $x_e$s will be monotonically increasing as our algorithm runs. Say that an edge $e$ is saturated if $x_e = w_e$.
Let us describe how we update our solution in the $t$th time step. Let $T_t$ be the connected component of all saturated edges containing $r$. Then, we repeat the following until $|\{g_j^{(t)} : g_j^{(t)} \cap T_t \neq \emptyset\}| \geq f_t \cdot (1-\epsilon)$. Let $\mathcal{G}_t' := \{g_j^{(t)} \in \mathcal{G}_t: g_j^{(t)} \cap T_t = \emptyset\}$ be all groups in $\mathcal{G}_t$ not yet connected and let $g_t' := \bigcup_{S \in \mathcal{G}_t'}S$ be all vertices in a group which have not yet been connected to $r$. We say that $e$ is on the frontier for $v \in g_t'$ if it is the first edge on the path from $v$ to $r$ which is not saturated. Similarly, let $r_e$ be the number of vertices in $g_t'$ for which $e$ is on the frontier for $v$. Then, for each edge $e$ we increase $x_e$ by $r_e \cdot \delta$ where $\delta = \min_e (w_e-x_e)/r_e$. Our solution in the $t$th time step is $T_t$ once $|\{g_j^{(t)} : g_j^{(t)} \cap T_t \neq \emptyset\}| \geq (1-\epsilon) \cdot f_t$.
We illustrate one iteration of this algorithm in \Cref{fig:waterFill}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth,trim=120mm 100mm 120mm 10mm, clip]{./figures/water1.pdf}
\caption{Graph $T$.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth,trim=120mm 100mm 120mm 10mm, clip]{./figures/water2.pdf}
\caption{$\mathcal{G}_1$ arrives.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth,trim=120mm 100mm 120mm 10mm, clip]{./figures/water3.pdf}
\caption{``Fill water.''}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth,trim=120mm 100mm 120mm 10mm, clip]{./figures/water4.pdf}
\caption{Choose solution.}
\end{subfigure}
\hfill
\caption{The solution our algorithm gives after one group of groups, $\mathcal{G}_1$, is revealed, where $f_1 = 2$. Nodes in groups of $\mathcal{G}_1$ are outlined in green and colored according to the group of $\mathcal{G}_1$ which contains them. Saturated edges are given in blue and edges with $0 < x_e < w_e$ are annotated with ``$x_e/w_e$''. All other edges are labeled by $w_e$.}\label{fig:waterFill}
\end{figure}
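A single water-filling time step can be sketched in Python as follows (illustrative only; the tree is represented by parent pointers toward $r$, and edges are keyed by their child endpoint).
\begin{verbatim}
# Sketch of one water-filling time step (illustrative only). parent maps
# each non-root vertex to its parent; w[v] and x[v] are the weight and
# current fractional value of the edge (v, parent[v]); groups are the
# groups of G_t and f_t is the connectivity requirement.

def water_fill_step(r, parent, w, x, groups, f_t, eps):
    def connected(v):
        while v != r:                 # v is connected iff every edge on
            if x[v] < w[v]:           # its path to r is saturated
                return False
            v = parent[v]
        return True

    def satisfied():
        return sum(1 for g in groups
                   if any(connected(v) for v in g)) >= (1 - eps) * f_t

    while not satisfied():
        load = {}                     # load[e]: number of vertices whose
        for g in groups:              # frontier edge is e
            if any(connected(v) for v in g):
                continue              # group already connected; skip it
            for v in g:
                u = v
                while u != r and x[u] >= w[u]:
                    u = parent[u]     # walk up past saturated edges
                load[u] = load.get(u, 0) + 1
        delta = min((w[e] - x[e]) / load[e] for e in load)
        for e in load:
            x[e] += load[e] * delta   # at least one new edge saturates
    # saturated edges; T_t is the component of these edges containing r
    return {v for v in x if x[v] >= w[v]}
\end{verbatim}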
\subsubsection{Analysis}
We proceed to analyze the above algorithm and give its properties.
\begin{theorem}\label{thm:partialGSTOnTrees}
There is a deterministic poly-time algorithm for online $2$-level $f$-partial group Steiner tree on trees which is $\frac{1}{\epsilon} \cdot (\max_i \frac{n_i}{f_i})$-cost-competitive and $(1-\epsilon)$-connection-competitive.
\end{theorem}
\begin{proof}
We begin by verifying that our algorithm returns a monotonically increasing and $(1-\epsilon)$-connection-competitive solution. First, notice that our solution is monotonically increasing since our $x_e$s are monotonically increasing and our solution only includes saturated edges. To see that our solution is $(1-\epsilon)$-connection-competitive notice that at least one new edge becomes saturated from each update to the $x_e$s (namely $\argmin_e (w_e-x_e)/r_e$) and since if all edges are saturated then $T_t = T$ which clearly satisfies $|\{g_j^{(t)} : g_j^{(t)} \cap T_t \neq \emptyset\}| \geq (1-\epsilon) \cdot f_t$, this process will eventually halt with a $(1-\epsilon)$-connection-competitive solution in the $t$th iteration. For the same reason our algorithm is deterministic poly-time.
It remains to argue that our solution is $\frac{1}{\epsilon} \cdot (\max_i \frac{n_i}{f_i})$-cost-competitive. We will argue that we can uniquely charge each unit of increase of our $x_e$s to an appropriate cost portion of the optimal solution. Fix an iteration $t$. Next, let $\delta^{(i,j)}$ for $i \leq t$ be the value of $\delta$ in the $i$th iteration the $j$th time we increase the value of our $x_e$s. Similarly, let $\delta_x^{(i,j)}$ be the increase in $\sum_e x_e$ when we do so and let $\delta_y^{(i,j)}$ be the increase in $\sum_{e \in T_t^*}x_e$ where $T_t^*$ is the optimal offline solution to the $2$-level $f$-partial group Steiner problem we must solve in the $t$th iteration. Lastly, let $y := \sum_{i \leq t} \sum_j \delta_y^{(i,j)}$ be the value of $\sum_{e \in T_t^*}x_e$ at the end of the $t$th iteration; clearly we have $y \leq \mathrm{OPT}_t$. We claim that it suffices to show that for each $i \leq t$ and each $j$ that $\delta_x^{(i,j)} \leq \frac{1}{\epsilon} \delta_y^{(i,j)} \frac{n_i}{f_i}$ since it would follow that at the end of iteration $t$ we have that
\begin{align*}
w(T_t) \leq \sum_e x_e = \sum_{i \leq t} \sum_j \delta_x^{(i,j)} \leq \frac{1}{\epsilon} \sum_{i \leq t} \sum_j \frac{n_i}{f_i} \delta_y^{(i,j)} \leq \frac{1}{\epsilon} \left(\max_i \frac{n_i}{f_i} \right) y \leq \frac{1}{\epsilon} \left(\max_i \frac{n_i}{f_i}\right) {\mathrm{OPT}}_t.
\end{align*}
We proceed to show that $\delta_x^{(i,j)} \leq \frac{1}{\epsilon} \delta_y^{(i,j)} \frac{n_i}{f_i}$ for each $i \leq t$ and $j$. We fix an $i$ and $j$ and for cleanliness of notation we will drop the dependence on $i$ and $j$ in our $\delta$s henceforth.
First, notice that we have that
\begin{align}\label{eq:xBound}
\delta_x \leq n_i \cdot \delta
\end{align}
since each vertex $v \in g_i'$ is uniquely responsible for up to a $\delta$ increase on $x_e$ where $e$ is the edge on $v$'s frontier, and $|g_i'| \leq n_i$.
On the other hand, notice that if a group in $\mathcal{G}_i$ is connected to $r$ by $T_t^*$ but is not yet connected by $T_i$ then such a group uniquely contributes at least $\delta$ to $\delta_y$. Since $T_t^*$ connects at least $f_i$ groups in $\mathcal{G}_i$ to $r$ but at the moment of our increase $T_i$ connects at most $(1-\epsilon) \cdot f_i$, there are at least $\epsilon \cdot f_i$ such groups in $\mathcal{G}_i$ which are connected to $r$ by $T_t^*$ but not by $T_i$. Thus, we have that
\begin{align}\label{eq:yBound}
\delta_y \geq \epsilon \cdot f_i \cdot \delta.
\end{align}
Combining Equations \ref{eq:xBound} and \ref{eq:yBound} shows $\delta_x \leq \frac{1}{\epsilon} \delta_y \frac{n_i}{f_i}$ as required.
\end{proof}
\subsection{Online $f$-Partial Group Steiner Tree on General Graphs}
Next, we apply our first construction to give an algorithm for $f$-partial group Steiner tree on general graphs. Crucially, the following result relies on a single copy tree embedding with poly-logarithmic copy number, making our second construction unsuitable for this problem.
\begin{theorem}\label{thm:fPart}
There is a deterministic poly-time algorithm for online $f$-partial group Steiner tree (on general graphs) which is $O(\frac{\log ^ 3 n}{\epsilon} \cdot \max_i \frac{|g_i|}{f_i})$-cost-competitive and $(1-\epsilon)$-connection-competitive.
\end{theorem}
\begin{proof}
We will use our copy tree embedding to produce a single tree on which we must deterministically solve online $2$-level partial group Steiner tree. We will then apply the algorithm from \Cref{thm:partialGSTOnTrees} to solve online $2$-level partial group Steiner tree on this tree.
More formally, consider an instance of online partial group Steiner tree on weighted graph $G = (V, E, w)$ with root $r$. Then, we first compute a copy tree embedding $(T, \phi, \pi_{G \to T}, \pi_{T \to G})$ deterministically with respect to $G$ and $r$ as in \Cref{thm:repTreeConst} with cost approximation $O(\log ^ 2 n)$ and copy number $O(\log n)$. Next, given our instance $I_t$ of partial group Steiner tree on $G$ with groups $g_1, \ldots g_t$ and connection requirements $f_1, \ldots, f_t$ we let $I_t'$ be the instance of $2$-level partial group Steiner tree on $T$ with groups of groups $\mathcal{G}_1, \ldots \mathcal{G}_t$ where $\mathcal{G}_i = \{\phi(v) : v \in g_i\}$, connection requirements $f_1, \ldots, f_t$ and root $\phi(r)$. Then if the adversary has required that we solve instance $I_t$ in time step $t$, then we require that the algorithm in \Cref{thm:partialGSTOnTrees} solves $I_t'$ in time step $t$ and we let $H_t'$ be the solution returned by our algorithm for $I_t'$. Lastly, we return as our solution for $I_t$ in time step $t$ the set $H_t := \pi_{T \to G}(H_t')$.
Let us verify that the resulting algorithm is indeed feasible (i.e.\ monotone and $(1-\epsilon)$-connection-competitive) and of the appropriate cost.
First, we have that $H_t \subseteq H_{t+1}$ for every $t$ since $H_t' \subseteq H_{t+1}'$ because our algorithm for trees returns a feasible solution for its online problem and $\pi_{T \to G}$ is monotone by definition of a copy tree embedding. Moreover, we claim that $H_t$ connects at least $(1-\epsilon)\cdot f_i$ vertices from $g_i$ to $r$ for $i \leq t$ and every $t$. To see this, notice that there are at least $(1-\epsilon)\cdot f_i$ groups from $\mathcal{G}_i$ containing a vertex connected to $r$ by $H_t'$. Since each such group consists of the copies of a distinct vertex, by the connectivity preservation properties of a copy tree it follows that $H_t$ connects at least $(1-\epsilon)\cdot f_i$ vertices from $g_i$ to $r$.
Next, we verify the cost of our solution. Let $\mathrm{OPT}_t'$ be the cost of the optimal solution to $I_t'$. Notice that since our copy number is $O(\log n)$, it follows that $n_i \leq O(\log n \cdot |g_i|)$. Thus, by the guarantees of \Cref{thm:partialGSTOnTrees} we have
\begin{align}\label{eq:optPBnd}
w_T(H_t') \leq \frac{1}{\epsilon} \cdot \left(\max_i \frac{n_i}{f_i} \right) {\mathrm{OPT}}_t' \leq O\left(\frac{\log n}{\epsilon}\right) \cdot \left(\max_i \frac{|g_i|}{f_i} \right) {\mathrm{OPT}}_t'.
\end{align}
Next, we bound $\mathrm{OPT}_t'$. Let $H^*_t$ be the optimal solution to $I_t$. We claim that $\pi_{G \to T}(H^*_t)$ is feasible for $I_t'$. This follows because $H^*_t$ connects at least $f_i$ vertices from $g_i$ to $r$ for $i \leq t$ and so by the connectivity preservation property of copy tree embeddings we know that there are at least $f_i$ groups in $\mathcal{G}_i$ with a vertex connected to $r$ by $\pi_{G \to T}(H_t^*)$. Thus, combining this with the $O(\log^ 2 n)$ cost preservation of our copy tree embedding we have
\begin{align}\label{eq:optPSBound}
{\mathrm{OPT}}_t' \leq w_T(\pi_{G \to T}(H^*_t)) \leq O(\log ^ 2 n) \cdot w_G(H^*_t).
\end{align}
Lastly, by the cost preservation property of our copy tree embedding we have that $w_G(H_t) \leq w_T(H_t')$ which when combined with Equations \ref{eq:optPBnd} and \ref{eq:optPSBound} gives
\begin{align*}
w_G(H_t) \leq O\left(\frac{\log ^ 3 n}{\epsilon} \cdot \max_i \frac{|g_i|}{f_i} \right) \cdot w_G(H_t^*),
\end{align*}
thereby showing that our solution is within the required cost bound.
\end{proof}
As a consequence of the above result we have a poly-log bicriteria deterministic approximation algorithm for online partial group Steiner tree; we restate the relevant theorem below.
\partGST*
Group Steiner tree is exactly $f$-partial group Steiner tree where $f_i = 1$, in which case $\max_i \frac{|g_i|}{f_i} \leq N$ where again $N$ is the maximum size of a group. Moreover, since any solution can only connect an integral number of vertices from each group, it follows that a $\frac{1}{2}$-connection-competitive solution for partial group Steiner tree where $f_i = 1$ (i.e.\ for group Steiner tree) connects at least one vertex from each group. Thus, as a corollary of the above result we have the following deterministic algorithm for online group Steiner tree.\footnote{We note that one can use an aforementioned property of our first construction---that if $u$ is connected to $r$ by $F \subseteq E$ then every vertex in $\phi(u)$ is connected to $\phi(r)$ in $\pi_{G \to T}(F)$---to reduce the $O(\log ^ 3 n)$s in this section to $O(\log ^ 2 n)$s. In particular, if one were to use this property then when we map the solution to our $f$-partial group Steiner tree problem on $G$ to our copy tree embedding, the resulting solution will connect at least $f_i$ groups in $\mathcal{G}_i$ at least $\Theta (\log n)$ times. It follows that when we run our water filling algorithm, each time it increases $\sum_e x_e$ by $1$ we know that it covers at least $\Omega(\log n)$ units of the optimal solution by weight rather than $1$ unit of the optimal solution as in the current analysis.}
\begin{corollary}
There is an $O(N \log ^ 3 n)$-competitive deterministic algorithm for online group Steiner tree where $N := \max_i |g_i|$ is the maximum group size.
\end{corollary}
\section{Demand-Robust Group Steiner Tree/Forest}\label{sec:DRGSTF}
In this section, we give a poly-log-approximate algorithm for the demand-robust versions of the group Steiner tree and group Steiner forest problems. The high-level approach will be to find a black-box reduction from the problem on a general graph to a problem on a tree, and then to solve the tree problem. However, the properties that the copy tree embedding need to ensure in this setting are slightly different, hence we will define and introduce a new, demand-robust copy tree embedding, in \Cref{def:demand-robust-copy-tree}.
On a general note, the demand-robust setting provides a robust counterpart to classic optimization problems like (group) Steiner tree, minimum cut, shortest path, etc. In this setting, instead of a single input, one is given a set of scenarios $\mathcal{S} = \{ S_1, \ldots, S_m \}$, where each scenario $S_i$ corresponds to a classic input to the problem. The goal is to ``prepare'' for the worst-case scenario in $\mathcal{S}$ by buying a ``first-stage solution'' $X_0$ at a discount before one knows which scenario is realized. After committing to $X_0$, the realized scenario $S_i$ is revealed and one needs to extend $X_0$ with a ``second-stage solution'' $X_i$ (where the cost of $X_i$ is inflated by a factor $\sigma_i \ge 1$) such that $X_0 \cup X_i$ satisfies scenario $S_i$. We want to minimize the total cost (of both the first-stage and the second-stage solution) in case of a realization of the worst-case scenario.
We first give formal descriptions of the demand-robust group Steiner tree and group Steiner forest problems. Note that the formal descriptions of the offline versions were given in \Cref{sec:det-online-group-steiner-tree} and \Cref{sec:det-online-group-steiner-forest}, respectively.
\textbf{Demand-robust versions of the group Steiner tree/forest problem:} Let $G = (V, E, w)$ be a weighted graph with a distinguished node $r \in V$ called the root where the weight $w(e)$ is the ``first-stage cost'' of an edge $e$. We are given a set of scenarios $\mathcal{S} = \{ S_1, \ldots, S_m \}$ with $m := |\mathcal{S}| \le \text{poly}(n)$ where:
\begin{enumerate}
\item In the group Steiner tree problem, a scenario $S_i$ consists of a set of groups $g_{i,1}, g_{i,2}, \ldots, g_{i, k(i)}$, with $g_{i, j} \subseteq V$, and an inflation factor $\sigma_i \ge 1$. We assume $k(i) \le \text{poly}(n)$.
\item In the group Steiner forest problem, a scenario $S_i$ consists of a set of pairs $(A_{i,1}, B_{i, 1}), (A_{i,2}, B_{i, 2}), \ldots, \allowbreak (A_{i, k(i)}, B_{i, k(i)})$, with $A_{i, j}, B_{i, j} \subseteq V$, and an inflation factor $\sigma_i \ge 1$. We assume $k(i) \le \text{poly}(n)$.
\end{enumerate}
We wish to buy the (optimal) set of first-stage edges $X_0 \subseteq E$ in order to minimize the cost of the worst-case scenario being realized. The cost of scenario $S_i$ being realized is the smallest value $w(X_0) + \sigma_i \cdot w(X_i)$ over all sets of edges $X_i \subseteq E$ such that $X_0 \cup X_i$ is a valid solution to the offline version of the problem for scenario $i$ (e.g., in the group Steiner tree problem, $X_0 \cup X_i$ connects at least one node $v \in g_{i, j}$ to the root $r$ for each group $g_{i, j}$ in scenario $i$).
An alternative way to define the demand-robust version of the above problems is to say that we want to find subsets $X_0, X_1, \ldots, X_m$ which minimize $\max_{i=1}^m w(X_0) + \sigma_i \cdot w(X_i)$ such that $\forall 1 \le i \le m, X_0 \cup X_i$ satisfies scenario $S_i$ for the offline version. Let $\mathrm{OPT} := \max_{i=1}^m w(X_0) + \sigma_i \cdot w(X_i)$ be the cost of the optimal solution.
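For concreteness, evaluating this objective for candidate sets is a direct max-over-scenarios computation; the following short Python transcription (with illustrative names) makes the definition explicit.
\begin{verbatim}
# Demand-robust objective: max_i  w(X_0) + sigma_i * w(X_i).
def robust_cost(w, X0, second_stage, sigmas):
    base = sum(w[e] for e in X0)
    return max(base + s * sum(w[e] for e in Xi)
               for Xi, s in zip(second_stage, sigmas))

# Example: two scenarios with inflation factors 2 and 3.
w = {'a': 1.0, 'b': 2.0, 'c': 4.0}
print(robust_cost(w, {'a'}, [{'b'}, {'c'}], [2.0, 3.0]))  # max(5, 13) = 13.0
\end{verbatim}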
\subsection{Demand-Robust Copy Tree Embeddings}
We now introduce the demand-robust copy tree embedding and prove its existence. One notable difference between this embedding (which is appropriate for the demand-robust setting) and the copy tree embedding of \Cref{dfn:repTree} is that the forward- and backward-mapping function map tuples of subgraphs to tuples of subgraphs (of equal length). This is because the first- and second-stage solutions must be mapped in a coordinated fashion, a requirement that was not necessary in the previous settings.
\begin{definition}\label{def:demand-robust-copy-tree}
Let $G = (V, E, w)$ be a weighted graph with some distinguished root $r \in V$. An $\alpha$-approximate demand-robust copy tree embedding $\mathcal{C} = (T, \phi, \pi_{G \to T}, \pi_{T \to G})$ consists of a weighted rooted tree $T = (V', E', w')$ with root $r'$, a copy mapping $\phi : V \to 2^{V'}$ with $\phi(r) = \{r'\}$, and edge mapping functions $\pi_{G \to T}$ and $\pi_{T \to G}$ that map tuples of subgraphs (of any length $m$) to equal-length tuples of subgraphs.
The ``forward-mapping function'' $\pi_{G \to T}$ maps at most $m \le \text{poly}(n)$ subgraphs (more precisely, subsets of $E$), namely $X_0, X_1, \ldots, X_m$, to subsets of $E'$, namely $X'_0, X'_1, \ldots, X'_m$ such that the following always holds:
\begin{enumerate}
\item \textbf{Demand-robust Connectivity Preservation}: For all $1 \le i \le m$, and all $u, v \in V$ that are connected via $X_0 \cup X_i$, we have that $\phi(u)$ and $\phi(v)$ are connected via $X'_0 \cup X'_i$.
\item \textbf{Cost Preservation}: For every $0 \le i \le m$ we have that $w'(X'_i) \le \alpha \cdot w(X_i)$.
\end{enumerate}
The ``backward-mapping function'' $\pi_{T \to G}$ maps $m \le \text{poly}(n)$ subsets of $E'$, namely $X'_0, X'_1, \ldots, X'_m$, to subsets of $E$, namely $X_0, X_1, \ldots, X_m$ such that the following always holds:
\begin{enumerate}
\item \textbf{Demand-Robust Connectivity Preservation}: For all $1 \le i \le m$, and all $u, v \in V'$ that are connected via $X'_0 \cup X'_i$, we have that $\phi^{-1}(u)$ and $\phi^{-1}(v)$ are connected via $X_0 \cup X_i$.
\item \textbf{Cost Preservation}: For every $0 \le i \le m$ we have that $w(X_i) \le w'(X'_i)$.
\end{enumerate}
A copy tree embedding is efficient if $T$, $\phi$, and $\pi_{T \to G}$ are all poly-time computable, and well-separated if $T$ is well-separated.
\end{definition}
Comparing the above with \Cref{dfn:repTree}, we note that an $\alpha$-approximate demand-robust copy tree embedding is also an $\alpha$-approximate copy tree embedding. However, the converse might not hold---for example, the ``merging FRT support construction'' as defined in \Cref{sec:FRTSup} (in particular, where the mapping function $\pi_{G \to T}$ simply embeds a subgraph into the cheapest tree) is not a $\log^{O(1)} n$-approximate demand-robust copy tree embedding. However, changing the forward mapping function of the FRT support construction, we are able to obtain the following guarantees.
\begin{theorem}\label{thm:demand-robust-copy-tree}
There is a poly-time deterministic algorithm which given any weighted graph $G = (V, E, w)$ and root $r \in V$ computes an efficient and well-separated $O(\log^2n)$-approximate demand-robust copy tree embedding.
\end{theorem}
\begin{proof}
We show that the ``merging FRT support construction'' (same as \Cref{sec:FRTSup}, which we reintroduce here for convenience) also suffices for the demand-robust setting. We let $T_1, T_2, \ldots, T_{q}$ be the trees in the support of the FRT distribution guaranteed by \Cref{thm:charikFRTSup}. Then, we let $T$ be the result of identifying each copy of $r$ as the same vertex in each $T_i$ (but not identifying copies of other vertices in $V$ as the same vertex). $T$'s weight function $w'_T$ is inherited from each $T_i$ in the natural way. Similarly, we let $\phi(v)$ be the set containing each copy of $v$ in each of the $T_i$. It is easy to verify that $\phi$ is indeed a copy mapping. Also, note that $\phi(v)$ is computable in deterministic poly-time.
We now describe $\pi_{G \to T}$. Let $X_0, X_1, \ldots, X_m \subseteq E$ be a tuple of subgraphs of $E$. We use the probabilistic method to show there exists a tuple of subsets $X'_0, X'_1, \ldots, X'_m \subseteq E' := E(T)$ which satisfy the above properties. Note that the overall construction will still be deterministic as we only need to show the existence of $\pi_{G \to T}$ (e.g., we do not need to be able to efficiently compute $\pi_{G\to T}$).
Independently sample $k := O(\log m) = O(\log n)$ random FRT trees, namely, $T'_1, \ldots, T'_{k}$ and let $w_{T'_i}$ be their corresponding weights. In each $T'_i$ let $T'_i(X_0)$ be the unique forest (subgraph of $T'_i$) which has the same connected components as $X_0$. Finally, we set $X'_0 := \bigcup_{i=1}^k T'_i(X_0)$. Due to the properties of FRT, we have that $\E[w_{T'_i}( X_0 )] \le O(\log n) \cdot w_G(X_0)$, hence $w'_T(X'_0) \le k \cdot O(\log n) \cdot w_G(X_0) = O(\log^2 n) \cdot w_G(X_0)$ with at least constant probability.
We now fix a subset $X_i$. For each $j \in [k]$ we have that $w_{T'_j}( X_i ) \le O(\log n) \cdot w_G(X_i)$ with at least constant probability, hence with probability at least $1 - \exp(-O(k)) = 1 - n^{-O(1)}$ there exists some $j(i) \in [k]$ where the property holds. Assuming this is the case, we set $X'_i := T'_{j(i)}(X_i)$. Applying a union bound over all subgraphs $X'_i$ for $i \in \{0, 1, \ldots, m\}$, we conclude all of the above properties are satisfied with at least constant probability, hence via the probabilistic method at least one such mapping exists. By construction, the forward mapping satisfies the cost preservation properties with $\alpha = O(\log^2 n)$. Furthermore, if two nodes $u, v \in V(G)$ are connected in $X_0 \cup X_i$, then they are connected in $T'_{j(i)}(X_0) \cup T'_{j(i)}(X_i) \subseteq X'_0 \cup X'_i$---consider an edge either in $e \in X_0$ or in $e \in X_i$; in the former case the endpoints of the edge are connected in $T'_{j(i)}(X_0)$ and in the latter they are connected in $T'_{j(i)}(X_i)$.
Lastly, we specify $\pi_{T \to G}$. While the original definition acts on a tuple $( X'_i )_{i=1}^m$ of subsets of $E'$, we specify its action on a single subset $\pi_{T \to G}(F')$ and then apply this function to all elements of the tuple, i.e., $X_i := \pi_{T \to G}(X'_i)$ for all $i$. We let $\pi_{T \to G}(F')$ be $\bigcup_{(u', v') \in F'} P_{uv}$ where $P_{uv}$ is an arbitrary shortest path in $G$ between $u$ and $v$ and $u'$ and $v'$ are copies of $u$ and $v$. We first verify the cost preservation: for every $F' \subseteq E'$ we have $w_G(\pi_{T \to G}(F')) \le \sum_{(u', v') \in F'} w_G(P_{uv}) \le \sum_{(u', v') \in F'} w'_T(u', v') = w'(F')$, where the second inequality holds because distances in FRT trees dominate distances in $G$. This proves the cost preservation.
Next, we verify the demand-robust connectivity preservation: for each edge $e \in X'_0 \cup X'_i$, its endpoints are connected either via $X_0 = \pi_{T\to G}(X'_0)$ (if $e \in X'_0$), or via $X_i = \pi_{T\to G}(X'_i)$ (if $e \in X'_i$), hence if two nodes are connected via $X'_0 \cup X'_i$, then they are connected via $X_0 \cup X_i$. It is easy to check that $T$, $\phi$, and $\pi_{T \to G}$ can all be constructed in deterministic poly-time.
\end{proof}
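The existence argument above is constructive given samples from the FRT distribution. The following Python sketch mirrors it: the forward map unions the induced forests of $X_0$ over the sampled trees and routes each $X_i$ through its cheapest sampled tree, while the backward map replaces tree edges by shortest paths. The sampler interface (pairs of a weight function and an induced-forest operator) and the `copy_of` table are assumptions of this sketch, not specified by the paper.
\begin{verbatim}
import networkx as nx  # used only for shortest paths in G

def forward_map(frt_samples, X_list):
    # Forward mapping from the proof: X'_0 is the union of the induced
    # forests of X_0 over all k samples; each X_i uses its cheapest sample.
    # frt_samples: list of (tree_weight_fn, induced_forest_fn) pairs.
    X0, rest = X_list[0], X_list[1:]
    X0_image = set()
    for _, induced in frt_samples:
        X0_image |= induced(X0)
    images = [X0_image]
    for Xi in rest:
        _, induced = min(frt_samples,
                         key=lambda s: s[0](s[1](Xi)))
        images.append(induced(Xi))
    return images

def backward_map(G, copy_of, Xp_list):
    # pi_{T->G}: replace each tree edge (u', v') by a shortest u-v path
    # in G, where u = copy_of[u'] and v = copy_of[v'].
    out = []
    for Xp in Xp_list:
        F = set()
        for up, vp in Xp:
            path = nx.shortest_path(G, copy_of[up], copy_of[vp],
                                    weight='weight')
            F |= set(zip(path, path[1:]))
        out.append(F)
    return out
\end{verbatim}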
We also remark that the merging partial tree embedding construction can be made into a demand-robust embedding of a smaller size. However, this approach seems more complicated and yields the same cost approximation, hence we do not present it here.
\subsection{Reducing from General Graphs to Trees}
In this section we show how to map the demand-robust group Steiner tree and forest problems on a general graph to an equivalent problem on a demand-robust copy tree embedding with a poly-log loss in the approximation factor. We formally describe the mapping and then proceed to prove its properties.
\textbf{Mapping to a copy tree embedding}. We describe how to map an instance $I = (G, r, \mathcal{S})$ of the demand-robust group Steiner tree/forest to a copy tree embedding $\mathcal{C} = (T, \phi, \pi_{G \to T}, \pi_{T \to G})$. We define an instance $I' = (G', r', \mathcal{S}')$ where $G' := T$ with $r'$ being the root of $T$. We set $\mathcal{S}' \gets \mathcal{S}$ with the following changes applied:
\begin{enumerate}
\item In the group Steiner tree problem, each group $g \in S_i \in \mathcal{S}$ is changed to $g' := \bigcup_{v \in g} \phi(v)$. In other words, each node $v$ in a group is replaced by all of its copies $\phi(v)$ in the copy tree embedding.
\item In the group Steiner forest problem, each pair $(A, B) \in S_i \in \mathcal{S}$ is changed to $(\bigcup_{a \in A} \phi(a), \bigcup_{b \in B} \phi(b))$.
\end{enumerate}
Note that the demand-robust group Steiner tree/forest instance maps to another instance of the same problem (e.g., a group Steiner tree problem maps to a group Steiner tree problem).
We remind the reader that the group Steiner forest problem directly generalizes the group Steiner tree problem---given a group Steiner tree problem on $G$ with groups $(g_i)$ we can reduce it to an equivalent group Steiner forest problem on the same graph $G$ and root $r$, where each group $g$ is mapped to the pair $(\{r\}, g)$.
Comparing the mapping to the copy-tree-embedding with the above reduction, a natural question arises whether one should apply the reduction before or after applying the mapping to the copy tree embedding. However, one can easily check that there is no difference---these two transformations ``commute''.
The following lemma illustrates why such a mapping definition is appropriate and it shows the utility of \Cref{def:demand-robust-copy-tree}.
\begin{lemma}\label{lemma:map-group-steiner-to-trees}
Suppose that an instance $I$ of the demand-robust group Steiner tree (resp., forest) problem maps to a demand-robust group Steiner tree (resp., forest) instance $I'$ via an $\alpha$-approximate demand-robust copy tree embedding $\mathcal{C}$. Then:
\begin{enumerate}
\item If $X_0, X_1, \ldots, X_{m}$ ($X_i \subseteq E(G)$) is a feasible solution for $I$ of cost $\mathrm{OPT}$, then $( X'_i )_{i=0}^{m} := \pi_{G \to T}\left( (X_i)_{i=0}^{m} \right)$ is a feasible solution to $I'$ with cost at most $\alpha \cdot \mathrm{OPT}$. \label{subclaim:graph-to-tree}
\item If $X'_0, X'_1, \ldots, X'_{m}$ ($X'_i \subseteq E(T)$) is a feasible solution for $I'$ of cost $\mathrm{ALG}$, then $( X_i )_{i=0}^{m} := \pi_{T \to G}\left( (X_i')_{i=0}^{m} \right)$ is a feasible solution to $I$ with cost at most $\mathrm{ALG}$. \label{subclaim:tree-to-graph}
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove (\ref{subclaim:graph-to-tree}). It is sufficient to prove the result for the forest problem---take the tree instance on $G$ with a feasible solution $X$ of cost $\mathrm{OPT}$, reduce it to an equivalent forest instance, map it to $\mathcal{C}$ and, applying the forest claim, conclude there is a feasible solution $X'$ of value at most $\alpha \cdot \mathrm{OPT}$. By commutativity, $X'$ is also a feasible solution for the reduction of the original tree instance to the mapping to $\mathcal{C}$, hence is a feasible solution (of cost at most $\alpha \cdot \mathrm{OPT}$) for the mapping of the original problem to $\mathcal{C}$, proving the claim.
We now prove (\ref{subclaim:graph-to-tree}) for the forest problem. Fix a scenario $S_i \in \mathcal{S}$. By feasibility, for each pair $(A, B) \in S_i$ in the original instance, there exist $a \in A$ and $b \in B$ which are connected via $X_0 \cup X_i$. Therefore, by the demand-robust connectivity preservation, there exist $a' \in \phi(a)$ and $b' \in \phi(b)$ that are connected via $X'_0 \cup X'_i$. In other words, the set of vertices $\bigcup_{a \in A} \phi(a)$ is connected to the set of vertices $\bigcup_{b \in B} \phi(b)$ via $X'_0 \cup X'_i$, hence the solution is feasible for $I'$.
Finally, we analyze the cost. By the cost preservation property, we have that $w'(X'_i) \le \alpha \cdot w(X_i)$, hence the cost is:
\begin{align*}
w'(X'_0) + \max_{1 \le i \le m} \sigma_i \cdot w'(X'_i) \le \alpha \cdot \left( w(X_0) + \max_{1 \le i \le m} \sigma_i \cdot w(X_i) \right) \le \alpha \cdot \mathrm{OPT} .
\end{align*}
Next, we prove (\ref{subclaim:tree-to-graph}). It is sufficient to prove the result for the forest problem---take the tree problem on $G$, map it to $\mathcal{C}$, then reduce to a forest problem and obtain a feasible solution $X'$ of cost $\mathrm{ALG}$. By commutativity and assuming the claim for the forest problem, $X$ is a feasible solution to the reduction of the original tree instance to a forest instance. Hence, $X$ is a feasible solution (of cost at most $\mathrm{ALG}$) to the original tree instance.
We now prove (\ref{subclaim:tree-to-graph}) for the forest problem. Fix a scenario $S_i \in \mathcal{S}$. By feasibility, for each pair $(A, B) \in S_i$ in the original instance, the set of vertices $\bigcup_{a \in A} \phi(a)$ is connected to the set of vertices $\bigcup_{b \in B} \phi(b)$ via $X'_0 \cup X'_i$. Therefore, there exist $a' \in \phi(a), a \in A$ and $b' \in \phi(b), b \in B$ such that $a', b'$ are connected via $X'_0 \cup X'_i$. By the demand-robust connectivity preservation, we have that $a = \phi^{-1}(a')$ and $b = \phi^{-1}(b')$ are connected via $X_0 \cup X_i$, hence the solution is feasible for $I$.
Finally, we analyze the cost. By the cost preservation property, we have that $w(X_i) \le w'(X'_i)$, hence the cost is:
\begin{align*}
w_G(X_0) + \max_{1 \le i \le m} \sigma_i \cdot w_G(X_i) \le w'(X'_0) + \max_{1 \le i \le m} \sigma_i \cdot w'(X'_i) \le \mathrm{ALG}.
\end{align*}
\end{proof}
\subsection{Demand-Robust Group Steiner Tree When $G$ is a Tree}
\label{sec:demand-robust-group-steiner-tree-on-a-tree}
In this section we give a poly-log-approximation algorithm for the demand-robust group Steiner tree problem when the underlying graph $G$ is a weighted and rooted tree. The main result of the section follows.
\DRSTT*
We note that combining \Cref{thm:demand-robust-steiner-tree-algo} with the mapping of \Cref{lemma:map-group-steiner-to-trees} and the demand-robust copy tree embedding construction (\Cref{thm:demand-robust-copy-tree}) immediately yields a randomized $O(\log^4 n)$-competitive poly-time algorithm for demand-robust group Steiner tree on general graphs, namely \Cref{thm:demand-robust-steiner-tree-algo-on-general-graph}.
The rest of this section is dedicated to proving \Cref{thm:demand-robust-steiner-tree-algo}. The general outline of our proofs is as follows.
\begin{enumerate}
\item We prove an important structural property of the first-stage solution that allows us to conclude that there exists a first-stage solution that is a rooted subtree of $G$ (i.e., it is connected and contains the root of $G$).
\item We write the linear program that fractionally relaxes the demand-robust group Steiner tree problem.
\item We show how to utilize the randomized rounding for the online group Steiner tree problem of \cite{alon2006general} to construct a demand-robust solution. We remark that a more naive attempt at utilizing the randomized rounding techniques on a general graph (i.e., without transferring the problem to a demand-robust copy tree embedding) would not yield a poly-logarithmic approximation ratio---we crucially use the fact that $G$ is a tree to make the randomized rounding work.
\end{enumerate}
First, we prove an important structural property of the first-stage solution, first proved in \cite{dhamdhere2005pay}: there exists a 2-approximate first-stage solution that is a union of minimal feasible solutions for a subset of scenarios. For the demand-robust group Steiner tree problem, we say that $M_i \subseteq E$ is a minimal feasible solution to the scenario $S_i$ if no proper subset $M'_i \subsetneq M_i$ is feasible for the scenario (i.e., there exists at least one group in $S_i$ that is not connected to the root via $M'_i$).
\begin{lemma}[Adapted from \cite{dhamdhere2005pay}]\label{lemma:first-stage-structure}
In the demand-robust group Steiner tree problem on the graph $G = (V, E)$, there exists a first-stage solution $X_0 \subseteq E$ which can be extended to a solution of (worst-case realization) cost $2 \cdot \mathrm{OPT}$ and which has the following structure. There exists a subset $I \subseteq \{1,2,\ldots,m\}$ and a set $\{ M_i \}_{i \in I}$, where $M_i$ is some minimal feasible solution (i.e., no proper subset is feasible) to the scenario $S_i$, such that $X_0 = \bigcup_{i \in I} M_i$.
\end{lemma}
The proof of this result is directly argued via the proof of Lemma 4.1 in Section 4.1 of \cite{dhamdhere2005pay}. However, our claim requires slightly weaker structural properties compared to \cite{dhamdhere2005pay}---it stipulates that the first-stage solution $X_0$ is a union of minimal feasible solutions instead of being the minimal solution for a particular instance. The proof remains unchanged: every time the \textsc{if} condition in (2b) is true (as given in \cite{dhamdhere2005pay}), we add $I \gets I \cup \{i\}$ and observe that $M_i := X^*_{0i} \cup X_i^*$ is a minimal feasible solution for scenario $i$. By construction, $X_0 = \bigcup_{i \in I} M_i$ and, as argued in the proof, the cost of $X_0$ is at most $2 \cdot \mathrm{OPT}$.
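A minimal feasible solution can be obtained from any feasible one by greedy pruning; the following naive, quadratic-time Python sketch (using networkx for connectivity checks) illustrates this operation, though it is not the procedure of \cite{dhamdhere2005pay}.
\begin{verbatim}
import networkx as nx

def minimal_feasible(G, root, groups, solution_edges):
    # Prune a feasible group Steiner solution to a set-minimal one:
    # repeatedly drop any edge whose removal keeps every group connected
    # to the root. Naive sketch; feasibility is re-checked from scratch.
    def feasible(edges):
        H = G.edge_subgraph(edges)
        if root not in H:
            return all(not g for g in groups)
        reach = nx.node_connected_component(H, root)
        return all(any(v in reach for v in g) for g in groups)

    M = set(solution_edges)
    changed = True
    while changed:
        changed = False
        for e in list(M):
            if feasible(M - {e}):
                M.remove(e)
                changed = True
    return M
\end{verbatim}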
\paragraph{Relaxation $\mathrm{LP}_{GST}$.} We now give the linear program for a tree $G = (V, E, w)$ with a root $r \in V$ that relaxes the original problem. We say that a vector $x \in \mathbb{R}^E$ is decreasing on root-leaf paths if for every $e \in E$ not incident to the root $r$ and its parent edge $\mathrm{parent}(e)$ we have $x_{\mathrm{parent}(e)} \ge x_e$---this condition is required by the randomized online rounding technique and can be argued to be a valid constraint due to \Cref{lemma:first-stage-structure}. The LP jointly optimizes over the first-stage solution $\{ x_{0, e} \}_e$ and second-stage parts of the solution $\{ x_{i, e} \}_{i \in [m], e \in E}$ while ensuring (1) the first-stage solution is decreasing on root-leaf paths, and (2) that the maximum flow between the root and each group $g_{i, j}$ (in scenario $i$) is at least $1$ when using $x_0 + x_i$ as edge capacities. We formally write out the linear program $\mathrm{LP}_{GST}$.
\begin{figure}[H]
\centering
\begin{align*}
\min & \quad z \\
\text{such that} \\
\forall i \in [m] & \quad \sum_{e \in E} w(e) \left[ x_{0, e} + \sigma_i \cdot x_{i, e} \right ] \le z \\
\forall i \in [m], \forall j \in [k(i)] & \quad \maxflow(x_0 + x_i, \{r\}, g_{i, j}) \ge 1 \\
\forall e \in E & \quad \text{if $e$ is not incident to $r$, then } x_{0, \mathrm{parent}(e)} \ge x_{0, e} \\
\forall i \in \{0\} \cup [m], \forall e \in E & \quad x_{i, e} \ge 0
\end{align*}
\caption{Linear program $\mathrm{LP}_{GST}$}
\end{figure}
In the linear program we introduced the notation $\maxflow(x, A, B)$ where $x \in \mathbb{R}_{\ge 0}^E$, $A \subseteq V, B \subseteq V$, which corresponds to the maximum flow between the set $A$ and the set $B$ when the capacity of each edge $e$ is set to $x_e$. The maximum flow between two sets $A$, $B$ is defined as the flow between the super-source $a$ and super-sink $b$ when a new virtual node $a$ is connected to all nodes in $A$ with infinite capacity and analogously for $b$.
The condition that this maximum flow using capacities $x_0 + x_i$ is at least $1$ can be expressed as a linear program with a polynomial number of variables and constraints, hence $\mathrm{LP}_{GST}$ can be solved in poly-time.
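To make the formulation concrete, the following Python sketch builds $\mathrm{LP}_{GST}$ with the PuLP modeling library, writing each max-flow requirement as explicit flow-conservation constraints on the tree. Nodes are identified with their parent edges, the dictionary of children is assumed to map every node to its (possibly empty) list of children, and the root is assumed not to belong to any group; this is an illustrative transcription, not an optimized implementation.
\begin{verbatim}
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

def solve_lp_gst(children, w, root, scenarios, sigmas):
    # Node v != root stands for the edge (parent(v), v) of weight w[v];
    # scenarios[i] is a list of groups (sets of nodes), sigmas[i] >= 1.
    m = len(scenarios)
    nodes = [v for v in children if v != root]
    prob = LpProblem("LP_GST", LpMinimize)
    z = LpVariable("z", lowBound=0)
    prob += z                                     # objective: minimize z
    x = {i: {v: LpVariable(f"x_{i}_{v}", lowBound=0, upBound=1)
             for v in nodes} for i in range(m + 1)}
    for i in range(1, m + 1):                     # per-scenario cost <= z
        prob += lpSum(w[v] * (x[0][v] + sigmas[i - 1] * x[i][v])
                      for v in nodes) <= z
    for v in nodes:                               # x_0 decreasing on paths
        for c in children[v]:
            prob += x[0][v] >= x[0][c]
    for i, groups in enumerate(scenarios, start=1):
        for j, g in enumerate(groups):            # unit flow root -> g
            f = {v: LpVariable(f"f_{i}_{j}_{v}", lowBound=0) for v in nodes}
            d = {v: LpVariable(f"d_{i}_{j}_{v}", lowBound=0) for v in g}
            for v in nodes:
                prob += f[v] <= x[0][v] + x[i][v]             # capacity
                prob += f[v] == lpSum(f[c] for c in children[v]) \
                                + (d[v] if v in g else 0)     # conservation
            prob += lpSum(f[c] for c in children[root]) >= 1
    prob.solve()
    return z.value(), x
\end{verbatim}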
Let $z^*$ be the optimal cost of the linear program. We argue that the LP is a relaxation of the original problem (with a factor-$2$ loss), i.e., $z^* \le 2 \mathrm{OPT}$. Let $X_0^*$ be the first-stage solution that satisfies the stipulations of \Cref{lemma:first-stage-structure}, hence $w(X_0^*) \le 2 \mathrm{OPT}$. The solution $X_0^*$ is decreasing on root-leaf paths since each minimal feasible solution on a tree is a rooted subtree (hence its indicator vector is decreasing on root-leaf paths), and the same holds for their union. The flow and positivity properties are trivially satisfied by any feasible integral solution. Therefore, $z^* \le w(X_0^*) \le 2 \mathrm{OPT}$.
\paragraph{Rounding the LP.} We use the online algorithm for the group Steiner tree problem on trees from \citet{alon2006general}. Intuitively, given a sequence of fractional solutions $y_1, y_2, \ldots$, where each $y_i \in [0,1]^E$ represents the extent to which the edges in $E$ are bought and satisfies some simple monotonicity properties, the algorithm maintains a sequence of non-decreasing integral solutions $F_1, F_2, \ldots$ where $F_i \subseteq E$ such that (1) the cost of the integral solution is competitive with the cost of the fractional solution, and (2) the integral solution satisfies the same set of constraints as the fractional solution. The result is formalized as follows.
\begin{lemma}[\cite{alon2006general}]
\label{thm:online-group-steiner-on-trees}
Let $G = (V, E, w)$ be a weighted tree with a distinguished root $r \in V$. There exists a polynomial-time randomized algorithm which accepts a sequence of vectors $y_0, y_1, \ldots, y_T \in [0, 1]^E$ where each $y_i$ is decreasing on root-leaf paths for $i \in \{0, \ldots, T\}$ and $y_i(e) \le y_{i+1}(e)$ for all $i \in \{0, \ldots, T-1\}, e \in E$. For each $i \in \{0, \ldots, T\}$, upon receiving the vector $y_i$, the algorithm outputs a set $F_i \subseteq E$ which includes the previous output (i.e., $F_{i-1} \subseteq F_i$ if $i \ge 1$) and (1) $\Pr[e \in F_i] = y_i(e)$ for each $e \in E$, and (2) for each $i$ and every set $g \subseteq V$, if $\maxflow(y_i, \{r\}, g) \ge 1$, then $F_i$ connects some node of $g$ to the root with probability at least $\Omega(1 / \log n)$.
\end{lemma}
This algorithm is explicitly explained in Section 4.2 of \cite{alon2006general}. Property (1) is argued via Lemma 10 and Property (2) matches Lemma 12.
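At the heart of this rounding is threshold rounding along root-leaf paths, with marginals preserved by telescoping conditional probabilities. The static Python sketch below illustrates these marginals ($\Pr[e \in F] = y(e)$ whenever $y$ is decreasing on root-leaf paths); the extra bookkeeping that keeps $F$ monotone as $y$ grows online is omitted here, so this is only an illustration, not the algorithm of \cite{alon2006general} itself.
\begin{verbatim}
import random

def threshold_round(edges, parent, y):
    # Static threshold-rounding sketch on a tree. `edges` lists edge ids
    # in root-to-leaf (topological) order; parent[e] is e's parent edge
    # or None; y[e] in [0,1] is decreasing on root-leaf paths. Including
    # e with probability y[e]/y[parent(e)] conditioned on its parent
    # being included telescopes to Pr[e in F] = y[e].
    U = {e: random.random() for e in edges}   # thresholds, drawn once
    F = set()
    for e in edges:                           # parents processed first
        p = parent[e]
        y_parent = y[p] if p is not None else 1.0
        if (p is None or p in F) and y_parent > 0 \
                and U[e] <= y[e] / y_parent:
            F.add(e)
    return F
\end{verbatim}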
Using the online rounding scheme of \Cref{thm:online-group-steiner-on-trees}, we show how to round $\mathrm{LP}_{GST}$ to obtain an (integral) demand-robust solution.
\begin{lemma}\label{lemma:rounding-lp-sol}
Consider a demand-robust group Steiner tree problem on a weighted rooted tree $G = (V, E, w)$. Given a feasible solution $x$ to $\mathrm{LP}_{GST}$ with objective value $z$, there exists a polynomial-time randomized algorithm that outputs $X_0 \subseteq E, \ldots, X_m \subseteq E$ such that $w(X_0) + \sigma_i \cdot w(X_i) \le O(\log^2 n) \cdot z$ for all $i \in [m]$, and each group $g_{i, j}$ is connected to the root via $X_0 \cup X_i$ with probability at least $1 - n^{- O(1) }$ (both $O$-constants can be jointly increased).
\end{lemma}
\begin{proof}
We run $C \cdot \log^2 n$ ($C > 0$ is a sufficiently large constant) independent copies of the algorithm described in \Cref{thm:online-group-steiner-on-trees} and continuously output the union of the copies' output. We set $y_0 := x_0$ and note that $x_0$ is valid, since it is decreasing on root-leaf paths due to the constraint in $\mathrm{LP}_{GST}$. We output (the union of all the copies) as the first stage solution $X_0$. We remember the state of the algorithm copies and perform the following for each scenario $i \in [m]$ (reverting the state upon completion).
Suppose now that some scenario $S_i \in \mathcal{S}$ is realized. We set $y_1 := x^*_0 + x^*_i$, hence clearly $y_0 \le y_1$. Furthermore, we can assume without loss of generality that $y_1$ is decreasing on root-leaf paths since otherwise we can lower the value of any violating edge value $(y_1)_e$ without decreasing the maximum flow to any group $g \subseteq V$; clearly, the value will not fall below $(y_0)_e$. Therefore, we can feed $y_1$ to all the algorithms and recover as $X_i$ the union over the copies of the newly added edges; this will be our second-stage solution.
We argue that this solution $X_0, X_1, \ldots, X_m$ is feasible. We remark here that $X_0$ only depends on $y_0$, and that $X_0 \cup X_i$ is exactly the union of the copies' outputs after being fed $y_1$. Furthermore, the probability that a single copy does not satisfy a group is $1 - 1 / O(\log n) \le \exp(- 1 / O(\log n) )$. Therefore, we can conclude via the independence of our algorithm copies' randomness and a union bound that every group is satisfied by at least one copy of the algorithm with probability at least $1 - \text{poly}(n) \cdot \exp(- 1 / O(\log n) \cdot C \log^2 n ) \ge 1 - n^{-C' }$ (where $C' = O(1)$ can be made arbitrary by increasing $C = O(1)$).
Finally, we argue our cost bound. Let $z$ be the objective value of $x$ and let $(F_0, F_1, \ldots, F_m)$ be the output of a fixed copy of the algorithm; note that $F_0 \subseteq F_i$, $\Pr[e \in F_0] = x_{0, e}$ and $\Pr[e \in F_i] \le x_{0, e} + x_{i, e}$, hence $\Pr[e \in F_i \setminus F_0] \le x_{i, e}$. For each $i \in [m]$ we have:
\begin{align*}
\E[ w(F_0) + \sigma_i\cdot w(F_i \setminus F_0) ] & = \sum_{e \in E} w(e)\left( \Pr[e \in F_0] + \sigma_i \cdot \Pr[e \in F_i \setminus F_0] \right) \\
& \le \sum_{e \in E} w(e)\left( x_{0, e} + \sigma_i \cdot x_{i, e} \right ) \le z .
\end{align*}
Therefore, we have $\E[ w(X_0) + \sigma_i\cdot w(X_i) ] \le C \cdot \log^2 n \cdot z = O(\log^2 n) \cdot z$, bounding the cost.
\end{proof}
We conclude with our proof of \Cref{thm:demand-robust-steiner-tree-algo}.
\begin{proofof}{\Cref{thm:demand-robust-steiner-tree-algo}}
Let $x^*$ represent the optimal solution to $\mathrm{LP}_{GST}$. We apply \Cref{lemma:rounding-lp-sol} on $x^*$; the described poly-time algorithm outputs a feasible (integral) solution $X_0, X_1, \ldots, X_m$ such that $X_0 \cup X_i$ connects each group $g_{i, j}$ to the root with probability at least $1 - n^{-O(1)}$. Since there are at most $m \le \text{poly}(n)$ scenarios, and each scenario has at most $\text{poly}(n)$ groups, we can conclude via a union bound that the solution is feasible with probability at least $1 - \text{poly}(n) \cdot n^{-O(1)} \ge 1 - n^{-100}$. The cost bound follows by combining the $O(\log^2 n) \cdot z$ guarantee of \Cref{lemma:rounding-lp-sol} with $z^* \le 2 \cdot \mathrm{OPT}$.%
\end{proofof}
\subsection{Demand-Robust Group Steiner Forest When $G$ is a Tree}
In this section we give a poly-log-approximation algorithm for the demand-robust group Steiner forest problem when the underlying graph $G$ is a weighted and rooted tree. The main result of the section follows.
\DRSFT*
We note that combining \Cref{thm:demand-robust-steiner-forest-algo} with the mapping of \Cref{lemma:map-group-steiner-to-trees} and the demand-robust copy tree embedding construction (\Cref{thm:demand-robust-copy-tree}) immediately yields a randomized $O(\log^6 n)$-competitive poly-time algorithm for demand-robust group Steiner forest on general graphs when the aspect ratio is polynomial, namely \Cref{thm:demand-robust-steiner-forest-algo-on-general-graph}. Note that here we used the fact that for graphs with polynomial aspect ratio the depth of the FRT trees can be assumed to be $D = O(\log n)$. The rest of this section is dedicated to proving \Cref{thm:demand-robust-steiner-forest-algo}.
We proceed in a similar way to the demand-robust group Steiner tree on a tree: first write a linear programming relaxation and then utilize the online rounding scheme for the group Steiner forest problem (presented in \cite{naor2011online}) to obtain a demand-robust solution. Again, we remark that using the randomized rounding scheme in a more naive way (without going through the demand-robust copy tree embedding) does not immediately yield poly-logarithmic approximation ratios.
\paragraph{Relaxation $\mathrm{LP}_{GSF}$.} We write a somewhat more complicated linear programming relaxation than we did in the demand-robust group Steiner tree case. Remember that $G$ is a rooted tree. We make $D+1$ copies, $G_0, G_1, \ldots, G_D$, of the tree $G$. Next, in the $\ell$-th copy $G_\ell$ we delete all nodes whose depth is less than $\ell$ (e.g., for $\ell = 0$ we copy $G$ and for $\ell = D$ the graph is a set of isolated nodes). Note that $G_\ell$ is a forest; let $\mathcal{T}_\ell$ be the set of (maximal) trees in $G_\ell$. For each edge $e \in E(G_\ell)$ in a copy $G_{\ell}$ we introduce first-stage and second-stage variables $x_{\ell, i, e}$ for $\ell \in \{0, 1, \ldots, D\}$ and $i \in \{0, 1, \ldots, m\}$. Similarly as in the group Steiner tree case, we require that the first-stage solution is root-leaf decreasing in order for the online rounding scheme to work. Lastly, over $\ell, i, j$ (same range as before) and for $T \in \mathcal{T}_\ell$ we introduce a ``flow variable'' $f_{\ell, T, i, j}$ which corresponds to the amount of flow that can be routed via $x_{\ell, 0} + x_{\ell, i}$ between the root of $T$ and the nodes in $A_{i, j}$ and $B_{i, j}$ (we want the same amount of flow to be routable to both of them). The linear program requires that the total amount of flow $f$ across all the trees in $\bigcup_{\ell=0}^D \mathcal{T}_\ell$ is at least $1$.
\begin{figure}[H]
\centering
\begin{align*}
\min & \quad z \\
\text{such that} \\
\forall i \in [m] & \quad \sum_{\ell=0}^D \sum_{e \in E(G_\ell)} w(e) \left[ x_{\ell, 0, e} + \sigma_i \cdot x_{\ell, i, e} \right ] \le z \\
\forall \ell \in \{0, \ldots, D\}, i \in [m], \forall j \in [k(i)], \forall T \in \mathcal{T}_\ell & \quad \maxflow(x_{\ell, 0} + x_{\ell, i}, \{\text{root of } T\}, A_{i, j}) \ge f_{\ell, T, i, j} \\
\forall \ell \in \{0, \ldots, D\}, i \in [m], \forall j \in [k(i)], \forall T \in \mathcal{T}_\ell & \quad \maxflow(x_{\ell, 0} + x_{\ell, i}, \{\text{root of } T\}, B_{i, j}) \ge f_{\ell, T, i, j} \\
\forall i \in [m], \forall j \in [k(i)] & \quad \sum_{\ell=0}^D \sum_{T \in \mathcal{T}_\ell} f_{\ell, T, i, j} \ge 1 \\
\forall \ell \in \{0, \ldots, D\}, \forall e \in E(G_\ell) & \quad \text{if $e$ is not at the top of its tree in $G_\ell$, then } x_{\ell, 0, \mathrm{parent}(e)} \ge x_{\ell, 0, e} \\
\forall \ell \in \{0, \ldots, D\}, \forall i \in \{0\} \cup [m], \forall e \in E(G_\ell) & \quad x_{\ell, i, e} \ge 0 \\
\forall \ell \in \{0, \ldots, D\}, \forall i \in [m], \forall j \in [k(i)], \forall T \in \mathcal{T}_\ell & \quad f_{\ell, T, i, j} \ge 0
\end{align*}
\caption{Linear program $\mathrm{LP}_{GSF}$}
\end{figure}
The condition that this maximum flow using capacities $x_{\ell, 0} + x_{\ell, i}$ is at least $f_{\ell, T, i, j}$ can be expressed as a linear program with a polynomial number of variables and constraints, hence $\mathrm{LP}_{GSF}$ can be solved in poly-time.
We now argue that $\mathrm{LP}_{GSF}$ relaxes the original problem (up to a factor of $O(D)$ loss). To this end we introduce some notation. Let $p$ be a simple path in $G$ and consider the highest (closest to the root) node $x \in V(p)$ it passes through. We say that $p$ \textbf{peaks at node $x$}. The high-level idea is that we can consider the optimal integral solution and, for each pair $(A_{i, j}, B_{i, j})$, observe the path that connects a node in $A_{i, j}$ with a node in $B_{i, j}$. If this path peaks at node $x$, we assign this pair to the tree in $\mathcal{T}_{\mathrm{depth}(x)}$ whose root is exactly $x$. Then, by applying the structural \Cref{lemma:first-stage-structure} on each tree in $\bigcup_{\ell=0}^D \mathcal{T}_\ell$, we can conclude that there is a root-leaf decreasing integral solution that serves the pairs assigned to each tree; this integral solution induces a feasible solution to $\mathrm{LP}_{GSF}$ of comparable cost, making the LP a relaxation.
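The peak of the tree path between two vertices is simply their lowest common ancestor, so the assignment step above only needs depths and parent pointers; a naive $O(\mathrm{depth})$ Python sketch follows.
\begin{verbatim}
def peak_node(u, v, parent, depth):
    # Highest node on the tree path between u and v (their LCA).
    while depth[u] > depth[v]:
        u = parent[u]
    while depth[v] > depth[u]:
        v = parent[v]
    while u != v:
        u, v = parent[u], parent[v]
    return u

# The pair (A_ij, B_ij) connected by a path from a to b is then assigned
# to the tree of T_{depth[x]} rooted at x = peak_node(a, b, parent, depth).
\end{verbatim}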
\begin{lemma}\label{lemma:lp-gsf-relaxation}
Let $z^*$ be the optimal objective value of $\mathrm{LP}_{GSF}$ with respect to some demand-robust group Steiner forest problem with optimal value $\mathrm{OPT}$ on an underlying tree with depth $D$. Then $z^* \le O(D) \cdot \mathrm{OPT}$.
\end{lemma}
\begin{proof}
Let $X_0^*, X^*_1, \ldots, X^*_m$ be the optimal first-stage and second-stage solutions (as defined on $G$). We define $X^*_{\ell, T, i}$ for $\ell \in \{0, 1, \ldots, D\}, i \in \{0, 1, \ldots, m\}, T \in \mathcal{T}_\ell$ as a natural extension of $X_i^*$ to $T$: if $e' \in E(T)$ is copied from $e \in E(G)$, then $e' \in X^*_{\ell, T, i} \iff e \in X^*_i$. Therefore, since each edge is copied at most $D+1$ times, for all $i \in [m]$ we have that $\sum_{\ell=0}^D \sum_{T \in \mathcal{T}_\ell} w(X^*_{\ell, T, 0}) + \sigma_i \cdot w(X^*_{\ell, T, i}) \le (D+1) \cdot \mathrm{OPT}$.
Let $p$ be the path connecting (some node in) $A_{i,j}$ to (some node in) $B_{i,j}$ in the optimal solution. Suppose that $p$ peaks at node $x$, let $\ell$ be the depth (in $G$) of $x$, and let $T$ be the maximal tree in $G_{\ell}$ whose root is at $x$. Since $p$ peaks at $x$, both $A_{i,j}$ and $B_{i,j}$ are connected via $p$ to $x$, the root of $T$, so we \textbf{assign} the ``groups'' $A_{i,j}$ and $B_{i,j}$ to $T$ (both $A_{i,j}$ and $B_{i,j}$ are considered stand-alone groups, i.e., we forget that they were paired beforehand). Clearly, since the optimal solution is feasible, each (element of a) pair is assigned to exactly one tree.
Fix a particular (maximal) tree $T$ in $\bigcup_{\ell=0}^D G_{\ell}$ and consider the set $\mathcal{P}_{T}$ of groups \textbf{assigned} to $T$. Grouping the groups by their originating scenario, we can rewrite $\mathcal{P}_{T}$ as $\mathcal{P}'_{T} := ( \mathcal{P}_{T, i} )_{i=1}^m$ where $\mathcal{P}_{T, i}$ is the set of groups from $\mathcal{P}_{T}$ that originated from scenario $i$. Finally, we note that $( X^*_{\ell, T, i} )_{i=0}^m$ is a feasible solution to the demand-robust group Steiner \textbf{tree} problem with scenarios $\mathcal{P}'_{T}$.
Applying \Cref{lemma:first-stage-structure} on each such tree $T$, there exists a (first-stage and second-stage) solution $( X'_{\ell, T, i} )_{i=0}^m$ such that for all $\ell, T, i$, we have (i) $w(X'_{\ell, T, i}) \le 2 \cdot w(X^*_{\ell, T, i})$, (ii) the first-stage solution $X'_{\ell, T, 0}$ is a subtree of $T$ with coinciding roots, (iii) $( X'_{\ell, T, i} )_i$ is a feasible solution to $\mathcal{P}'_{T}$ (i.e., for each pair $(A_{i,j}, B_{i,j})$ assigned to $T$, $X'_{\ell, T, 0} \cup X'_{\ell, T, i}$ connects $A_{i,j}$ to the root of $T$ as well as $B_{i,j}$).
We now define $x_{\ell, i, e} := 1$ if $e \in X'_{\ell, T, i}$ for the unique tree $T \in \mathcal{T}_\ell$ such that $e \in E(T)$, and $0$ otherwise. Furthermore, if groups $A_{i, j}$ and $B_{i, j}$ are assigned to a tree $T \in \mathcal{T}_\ell$, we set $f_{\ell, T, i, j} := 1$, and we set the flow variables of all other trees to $0$. We argue that $(x, f)$ is a feasible solution to the linear program $\mathrm{LP}_{GSF}$.
Property (ii) of $X'$ ensures that $x_{\ell, 0}$ is decreasing on all root-leaf paths of each tree in $G_{\ell}$. Finally, from property (iii) we conclude that the maximum flow constraints with values $f_{\ell, T, i, j}$ are also satisfied, hence $(x, f)$ is a feasible solution. The objective bound then follows from condition (i); for all $i \in [m]$ we have that:
\begin{align*}
z^* & \le \sum_{\ell=0}^D \sum_{T \in \mathcal{T}_{\ell}} \sum_{e \in E(T)} w(e) \left[ x_{\ell, 0, e} + \sigma_i \cdot x_{\ell, i, e} \right ] \\
& = \sum_{\ell=0}^D \sum_{T \in \mathcal{T}_{\ell}} w(X'_{\ell, T, 0}) + \sigma_i \cdot w(X'_{\ell, T, i}) \\
& \le 2 \sum_{\ell=0}^D \sum_{T \in \mathcal{T}_{\ell}} w(X^*_{\ell, T, 0}) + \sigma_i \cdot w(X^*_{\ell, T, i}) \\
& \le O(D) \cdot \mathrm{OPT}.\qedhere
\end{align*}
\end{proof}
We now present the randomized online rounding scheme from \cite{naor2011online} which enables us to round $\mathrm{LP}_{GSF}$ into a demand-robust solution.
\begin{lemma}[\cite{naor2011online}]
\label{thm:online-pair-group-forest}
Let $G = (V, E, w)$ be a forest, namely a collection of (maximal) rooted trees $G_1, G_2, \ldots, G_m$ with roots $r_1, \ldots, r_m$. There exists a polynomial-time randomized algorithm which accepts a sequence of vectors $y_0, y_1, \ldots, y_T \in [0, 1]^E$ where each $y_i$ is decreasing on root-leaf paths for $i \in \{0, \ldots, T\}$ and $y_i(e) \le y_{i+1}(e)$ for all $i \in \{0, \ldots, T-1\}, e \in E$. For each $i \in \{0, \ldots, T\}$, upon receiving the vector $y_i$, the algorithm outputs a set $F_i \subseteq E$ which includes the previous output (i.e., $F_{i-1} \subseteq F_i$ if $i \ge 1$) and such that (1) $\Pr[e \in F_i] = y_i(e)$ for each $e \in E$, and (2) for each $i$ and each pair $(A, B)$ where $A \subseteq V, B \subseteq V$, if $\sum_{j=1}^m \min(\maxflow(y_i, r_j, A), \maxflow(y_i, r_j, B)) \ge 1$, then with probability $\Omega(1/\log^2 n)$ there is a root $r_j$ that is connected to both a node in $A$ and a node in $B$ via $F_i$.
\end{lemma}
The algorithm is implicitly explained in Section 3 of \cite{naor2011online}. Their description talks about an online rounding algorithm for the group Steiner forest problem on a tree $G$. The algorithm accepts an increasing sequence of vectors $y_0, \ldots, y_T \in [0,1]^{E(G)}$ and proceeds by splitting $G$ into a forest $\bigcup_{\ell=0}^D \mathcal{T}_\ell$ and providing the guarantees specified in this claim. The guarantees are proven in Lemma 6 of the paper.
Finally, we combine the relaxation with the LP rounding to prove the main result of this section.
\begin{proofof}{\Cref{thm:demand-robust-steiner-forest-algo}}
Let $(x^*, f^*)$ be the optimal LP solution of the demand-robust Steiner forest problem with respect to scenarios $\mathcal{S}$ and let $z^* \le O(D) \cdot \mathrm{OPT}$ be the objective value (\Cref{lemma:lp-gsf-relaxation}).
\textbf{Splitting $G$ into a forest $G'$.} Given a tree $G$, we construct a forest $G'$ composed of $\bigsqcup_{\ell=0}^D \mathcal{T}_\ell$ (i.e., each tree in $\mathcal{T}_\ell$ will be included as a component in $G'$). Note that for each $i \in \{0, \ldots, m\}$ the input $x_i$ can be naturally understood as a real vector indexed over the set $E(G')$.
Furthermore, an edge $e \in E(G)$ \textbf{corresponds} to possibly multiple (but at most $O(D)$) edges in $E(G')$, whereas an edge $e'\in E(G')$ corresponds to a unique edge $e \in E(G)$. Therefore, we define a projection $\pi_{G' \to G} : 2^{E(G')} \to 2^{E(G)}$ which maps an edge $e' \in E(G')$ to its corresponding edge $e = \pi_{G' \to G}(\{e'\})$, and we extend this to subgraphs $F' \subseteq E(G')$ via $\pi_{G' \to G}(F') = \bigcup_{e' \in E(F')} \pi_{G' \to G}(\{e'\})$.
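Both the splitting and the projection are mechanical; the following Python sketch (which, as an assumption of the sketch, identifies each edge of $G$ with its child endpoint) constructs the edge set of $G'$ together with $\pi_{G' \to G}$.
\begin{verbatim}
def split_into_forest(depth, D):
    # Build G' = G_0 u ... u G_D, where G_l keeps only nodes of depth
    # >= l. The edge into node v (of depth depth[v]) survives in G_l iff
    # depth[v] > l; its copy in level l is (l, v).
    edges_Gp, proj = set(), {}
    for l in range(D + 1):
        for v, dv in depth.items():
            if dv > l:
                edges_Gp.add((l, v))
                proj[(l, v)] = v        # pi_{G' -> G} on single edges
    return edges_Gp, proj

def project(proj, F_prime):
    # Extend pi_{G' -> G} to subgraphs F' of G'.
    return {proj[e] for e in F_prime}
\end{verbatim}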
\textbf{Constructing the solution.} We set $y_0 := x_0$ and apply \Cref{thm:online-pair-group-forest} on $G'$ to obtain the integral first-stage $X_0$. Note that $x_0$ is decreasing on root-leaf paths due to a constraint in $\mathrm{LP}_{GSF}$.
The second-stage solutions $(X_1, \ldots, X_m)$ are obtained by saving the state of the algorithm and performing the following for each scenario $i \in [m]$ (reverting the state upon completion). In case some scenario $S_i \in \mathcal{S}$ is realized, we set $y_1 := x^*_0 + x^*_i$, hence clearly $y_0 \le y_1$. Furthermore, we can assume without loss of generality that $y_1$ is decreasing on root-leaf paths since otherwise we can lower the value of any violating edge value $(y_1)_e$ without decreasing the maximum flow to any subset of $V$; clearly, the value will not fall below $(y_0)_e$. Therefore, $y_1$ is valid and can be fed to all the algorithms, recovering $X_i \subseteq E(G')$.
\textbf{Analysis.} The cost analysis is straightforward: $\E[ w(X_0) + \sigma_i w(X_i) ] \le z^* \le O(D) \cdot \mathrm{OPT}$.
By construction of $\mathrm{LP}_{GSF}$, for each pair $(A_{i, j}, B_{i, j})$ the fractional solution $x_0 + x_i \in \mathbb{R}_{\ge 0}^{E(G')}$ yields a flow of at least $1$ across all of $\bigcup_\ell \mathcal{T}_\ell$, or equivalently, $G'$. Therefore, by \Cref{thm:online-pair-group-forest}, with probability $\Omega(1/\log^2 n)$ some node in $A_{i, j}$ and some node in $B_{i,j}$ will be connected to the same root of a tree in $G'$ via $X_0 \cup X_i \subseteq E(G')$. Furthermore, by construction of $\pi_{G' \to G}$, this implies that (with the same probability) some node in $A_{i, j}$ and some node in $B_{i, j}$ are connected in $G$ via $\pi_{G' \to G}(X_0 \cup X_i)$. We can run $O(\log^3 n)$ independent copies to recover the result with high probability (at least $1 - 1/n^{100}$) and have the cumulative cost be $O(D \cdot \log^3 n) \cdot \mathrm{OPT}$.
\end{proofof}
\section{Conclusion and Future Work}
Online and dynamic algorithms built on probabilistic tree embeddings seem inherently randomized and necessarily not robust to adaptive adversaries. In this work we gave an alternative to probabilistic tree embeddings---the copy tree embedding---which is better suited to deterministic and adaptive-adversary-robust algorithms. We illustrated this by giving several new results in online and demand-robust algorithms, including a reduction of deterministic online group Steiner tree and group Steiner forest to their tree cases, a bicriteria deterministic algorithm for online partial group Steiner tree and new algorithms for demand-robust Steiner forest, group Steiner tree and group Steiner forest.
As a conceptual contribution we believe that copy tree embeddings will prove to be useful far beyond the selected algorithmic problems covered in this paper. We conclude by providing just some directions for such future works.
As mentioned earlier, \citet{bienkowski2020nearly} recently gave a deterministic algorithm for online non-metric facility location---which is equivalent to online group Steiner tree on trees of depth $2$---with a poly-log-competitive ratio and stated that they expect their techniques will extend to online group Steiner tree on trees. A very exciting direction for future work would thus be to extend these techniques to general depth trees which, when combined with our reduction to the tree case, would prove the existence of a deterministic poly-log-competitive algorithm for online group Steiner tree, settling the open question of \citet{alon2006general}.
While our focus has been on two specific constructions, it would be interesting to prove lower bounds on copy tree embedding parameters, such as, more rigorously characterizing the tradeoffs between the number of copies and the cost approximation factor. One should also consider the possibility of improved constructions. For example: Is it possible to get a logarithmic approximation with few copies, maybe even a constant number of copies? It is easy to see that with an exponential number of copies---one for each possible subgraph---a perfect cost approximation factor of one is possible. Can one show that a sub-logarithmic distortion is impossible with a polynomial number of copies? We currently do not even have a proof that excludes a constant cost approximation factor with a constant copy number.
Furthermore, while this paper focused on online group Steiner problems, there are many other online and dynamic algorithms where copy tree embeddings might be able to give deterministic and adaptive-adversary-robust solutions for general graphs. Several such works are: \citet{englert2007reordering} and \citet{englert2017reordering} give an algorithm for the reordering buffer problem; \citet{guo2020facility} recently gave a dynamic algorithm for facility location; \citet{gupta2019permutation} gives an algorithm for fully dynamic metric matching. All these works feature a deterministic algorithm which works against adaptive adversaries in trees but then use FRT to obtain a randomized algorithm for general graph, which unsurprisingly only works against oblivious adversaries. The work on the reordering buffer problem seems especially promising since the algorithm for trees is quite similar in spirit to our water-filling algorithm for partial group Steiner tree. We believe that the natural generalization of this water-filling algorithm to copy tree embeddings should work and generalize the deterministic algorithm from trees to general graphs. While there has been follow-up work on this problem which does not use FRT for this problem \cite{kohler2017reordering} this would still improve the known bounds for this problem for some parameter settings.
Lastly, a recent work of \citet{barta2020online} gave online embeddings for network design with logarithmic approximation guarantees in the number of terminals rather than $n$. It would be exciting to marry these ideas with the ones presented here to get the best of both worlds: a deterministic online copy tree embedding with distortion as a function of the number of terminals.
\bibliographystyle{plainnat}
\label{Introduction}
Soon after the discovery of Gamma-Ray Bursts (GRBs),
it was realized that some bursts are repeating.
The localization of these objects, called
Soft Gamma-Ray Repeaters (SGRs), showed that they
are located in the local group
(Cline et al. 1980; Evans et al. 1980; Mazets \& Golenetskii 1981),
and their flare energy release ranges from
$\sim10^{39}$ to $10^{46}$~erg.
In quiescence, SGRs (and also the related
class of Anomalous X-ray Pulsars; AXPs)
are detected as faint X-ray sources
with luminosities in the range $\sim10^{33}$ to $10^{36}$~erg~s$^{-1}$.
Their X-ray light curves
are modulated with periodicities of the order of 10~s,
and period derivatives of the order of $10^{-10}$~s~s$^{-1}$.
These properties suggest that SGRs are young neutron stars
with ultra-strong magnetic fields ($\gtorder10^{14}$~G).
Contrary to ``normal'' neutron stars (i.e., radio pulsars),
whose energy reservoir is rotational,
SGR's source of energy is most probably magnetic.
The basic properties of SGRs are well explained by the popular
magnetar model (Duncan \& Thompson 1992; Paczynski 1992).
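For concreteness, the quoted periods and period derivatives translate into such field strengths via the standard magnetic-dipole spin-down estimate, $B \approx 3.2\times10^{19}\sqrt{P \dot{P}}$~G (assuming a canonical neutron-star moment of inertia and radius); the short Python snippet below reproduces the order of magnitude.
\begin{verbatim}
import math

# Dipole spin-down field estimate for typical SGR/AXP timing values.
P, Pdot = 10.0, 1e-10                 # period [s], period derivative
B = 3.2e19 * math.sqrt(P * Pdot)      # [Gauss]
print(f"B ~ {B:.1e} G")               # ~ 1.0e+15 G, i.e., > 10^14 G
\end{verbatim}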
Known SGRs and AXPs are associated with star forming regions
(for a review see Gaensler et al. 2001).
Moreover, some of them may be associated
with supernova remnants
(Cline et al. 1982; Kulkarni \& Lingenfelter 1994;
Hurley et al. 1999;
Woods et al. 1999).
However, Levan et al. (2006) suggested a formation
channel for magnetars in old stellar populations.
Unfortunately, only four SGRs are known to date, all in the
local group (see Woods \& Thompson 2006 for a recent review),
of which three reside in the Milky-Way and one in the
Large Magellanic Cloud.
The small number of known SGRs severely hinders our ability to study
their origin,
environments (e.g., Gaensler et al. 2001),
and rate of luminous flares
(Palmer et al. 2005; Popov \& Stern 2006; Ofek 2007a).
However, the strongest SGR flares can be detected in nearby
galaxies (e.g., Duncan 2001; Eichler 2002).
Discovery of extragalactic SGRs is
an exciting possibility that will
enable us to enlarge the sample
of known objects in this class.
Unfortunately, extragalactic SGR flares have proven
hard to recognize, and their observed rate
of gamma-ray flares
is probably of the order of several percent
of the short-duration GRB observed rate
(e.g., Lazzati et al. 2005; Nakar et al. 2006; Ofek 2007a).
To date, only a small number of extragalactic SGR candidates
are known:
in M81 (Ofek et al. 2006; Frederiks et al. 2007b), in NGC\,6946 (Crider 2006),
and in M74 (Ofek 2007a).
Unfortunately, each of these candidates has been observed to flare only once.
Moreover, because of the limited positional accuracy
of most current gamma-ray telescopes,
they have astrometric uncertainties of
hundreds of square arcminutes or more.
This positional accuracy is too poor to allow environmental studies.
Furthermore,
given the relatively large positional uncertainty,
it is possible that some of these
candidates are due to a chance coincidence.
Discovery of extragalactic SGRs will increase our statistical sample
of such objects and
with accurate positions it will be possible to study
their environments.
In particular, it may reveal a new population of
SGRs that are not bound to star forming
regions (e.g., Levan et al. 2006).
\subsection{GRB~070201}
\label{GRB070201}
In this paper we discuss an extragalactic
SGR giant flare candidate associated with the nearby galaxy M31.
At UTC 2007 Feb 1, 15:23:10.780, an intense short-hard GRB
with $\sim0.2\,$s duration was detected by the
Inter-Planetary Network (IPN; e.g. Hurley et al. 1999).
The burst was detected by Konus-{\it Wind},
{\it INTEGRAL}-SPI-ACS,
{\it Swift}-BAT\footnote{The burst was outside the BAT coded field of view. Therefore it was~not localized by {\it Swift}-BAT.}
(Golenetskii et al. 2007a),
and {\it Messenger},
while {\it Suzaku} and {\it RHESSI} were not able to observe the burst due to
Earth occultation, and {\it Odyssey} was not able to observe
it due to Mars occultation.
Early on, Perley \& Bloom (2007) noticed that the
preliminary IPN annulus crosses the Andromeda galaxy
(see also: Ofek 2007b; Golenetskii et al. 2007b).
Later on, with the analysis of the {\it Messenger}
data (Hurley et al. 2007), and re-analysis
of the data (Mazets et al. 2007),
the error region shrunk to a $0.124$~deg$^{2}$
quadrilateral that intersects the M31 galaxy.
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{f1.eps}}
\caption{The Konus-{\it Wind} gamma-ray light curve of GRB\,070201 (solid heavy line),
compared with the light curve of the 2004 December 27 SGR giant flare
(dotted line), and the Konus-{\it Wind} light curve of GRB\,051103
(dashed-dotted line; Ofek et al. 2006; Frederiks et al. 2007b).
The light curve of the 2004 December 27 SGR giant flare is based
on a digitization of Figure~1 in Terasawa et al. (2005),
while the light curve of GRB~070201 is based on a digitization of
the 18-1160 keV-band light curve from the Konus-{\it Wind} website.
The fluxes of the different bursts are scaled such that they have the same
peak flux.
\label{GRB070201_LC}}
\end{figure}
\begin{figure*}
\centerline{\includegraphics[width=18cm]{f2_lr.eps}}
\caption{{\it GALEX} Near-UV image,
obtained on UTC 2003 September 5 (1940~s exposure),
of the region of the error quadrilateral
of GRB\,070201 intersecting
with M31. The solid red lines mark the
revised error quadrilateral (Mazets et al. 2007)
and the blue lines show the original error quadrilateral
(Hurley et al. 2007),
the red circles show the position of X-ray sources
detected during the 2007 {\it XMM} observations, the yellow boxes
mark the {\it XMM} X-ray sources detected in 2002,
the blue diamonds mark the position of X-ray sources
listed in the {\it ROSAT}-PSPC catalog of M31 (Supper et al. 2001), and
the cyan crosses mark the position of known supernova (SN) remnants in M31
(Magnier et al. 1995).
The size of the markers (in arcsec) of the X-ray sources corresponds to
their flux ($F$) in the 0.2--10~keV band using the relation:
$10+10\times(15+\log_{10}{F [{\rm erg}~{\rm s}^{-1}~{\rm cm}^{-2}]})$.
More than one symbol of the same type at
almost the same position corresponds to detection
of the same source in the overlap regions between images
taken during the same year.
The ROTSE-IIIb observations cover the entire error region south of
the yellow line.
\label{GALEX_X}}
\end{figure*}
The burst had the highest peak count rate of any
GRB observed by Konus-{\it Wind} in 12 years of operation
(excluding Galactic SGRs).
The GRB fluence in the Konus-{\it Wind} $20\,$keV--$1.2\,$MeV band was
$2.00_{-0.26}^{+0.10}\times10^{-5}\,$erg\,cm$^{-2}$,
and its peak flux on two-milliseconds time scale was
$1.61_{-0.50}^{+0.29}\times10^{-3}\,$erg\,cm$^{-2}$\,s$^{-1}$
($90\%$~confidence; Mazets et al. 2007).
The light curve, shown in Figure~\ref{GRB070201_LC} (solid line; Mazets et al. 2007),
had a ``bumpy'' rise with a time scale of $20$~milliseconds
and two leading peaks with durations
of a few milliseconds, while
the decaying tail had a time scale of about $0.1\,$s
(see discussion in Mazets et al. 2007).
Golenetskii et al. (2007b) and Mazets et al. (2007) found that
the spectrum of GRB\,070201 is well fitted by a power-law
with an exponential cutoff,
$dN/dE \sim E^{-\alpha}\exp{[-E(2-\alpha)/E_{p}]}$,
where $E$ is the energy.
They also found that,
in the first $64\,$milliseconds,
the best fit parameters
are $\alpha=0.52_{-0.15}^{+0.13}$, and $E_{p}=360_{-38}^{+44}\,$keV
($90\%$ confidence; $\chi^{2}/dof=32/35$),
while the best fit parameters for the time integrated spectrum
are $\alpha=0.98_{-0.11}^{+0.10}$, and $E_{p}=296_{-32}^{+38}\,$keV
($\chi^{2}/dof=40/40$).
Like GRB~070201,
the spectrum of the 2004 December 27
giant flare of SGR\,1806$-$20, at peak, is not consistent
with a black-body spectrum, but is well fitted by
a power-law with an exponential cutoff model,
with $\alpha=0.73_{-0.64}^{+0.47}$ ($\chi^{2}/dof=10.6/12$;
Frederiks et al. 2007a).
Abbott et al. (2007b) analyzed the available
Laser Interferometer Gravitational-Wave Observatory
(LIGO; Abbott et al. 2007a) data,
collected within 180~s of the time of GRB~070201.
They did~not find any gravitational wave source
coincident with this GRB.
Using these observations they ruled
out, at the $99\%$ confidence level, a compact binary
(i.e., black-holes or neutron stars) merger
origin for this GRB with progenitor
masses in the ranges:
$1~{\rm M}_{\odot} < M_{1} < 3~{\rm M}_{\odot}$ and
$1~{\rm M}_{\odot} < M_{2} < 40~{\rm M}_{\odot}$,
and with a distance below 3.5~Mpc.
In this paper, we present the case of GRB\,070201 as a possible SGR
giant flare in the nearby galaxy M31.
In \S\ref{Obs} we present
our search for a visible light transient associated with
this GRB.
We examine archival X-ray and UV
images of the IPN error quadrilateral (\S\ref{Archive}),
and we look for possible candidates for
pulsating X-ray sources that could be SGRs within
M31 (\S\ref{Xray}). In \S\ref{Sim} we quantify
the probability to detect X-ray pulsations of
an SGR or AXP in M31, and finally we discuss
the nature of GRB~070201 in \S\ref{Disc}.
\section{Observations}
\label{Obs}
Optical images of the Andromeda galaxy were obtained nightly by the
0.45-m ROTSE-IIIb telescope as part of
the Texas Supernova Search (Quimby 2006).
Routine unfiltered images covering the GRB error quadrilateral
south of $\delta=+42^{\circ}08'57''$
(i.e., the southern $42\%$ of the error box, including the
intersection with the
spiral arms) were taken on
UTC 2007 Feb 02.0821, 10.6 hours after the GRB trigger.
We performed PSF-matched image subtraction of
the data using a modified version of the
Supernova Cosmology Project's search code (Perlmutter et al. 1999).
After subtracting off a reference template constructed from 37
ROTSE-IIIb images obtained between 2005
July and 2006 June, we find no new objects in the southern part of
the error box covered to a 5-$\sigma$
limiting magnitude of 17.15 (calibrated against the USNO-B1.0 R2 magnitude).
Assuming a distance to M31 of 770~kpc (e.g., Ribas et al. 2005),
and correcting for Galactic extinction in this direction
(Schlegel, Finkbeiner, \& Davis 1998; Cardelli, Clayton, \& Mathis 1989),
this corresponds to an absolute
magnitude limit of $-7.4$.
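
The arithmetic behind this limit is easy to reproduce with a few lines
of Python (a short sketch; the extinction value below is an assumed,
illustrative $R$-band correction, since the exact number depends on the
adopted dust map and $R_{V}$):
\begin{verbatim}
import math

m_lim = 17.15                       # ROTSE-IIIb 5-sigma limiting magnitude
d_pc = 770.0e3                      # distance to M31, pc
mu = 5.0 * math.log10(d_pc / 10.0)  # distance modulus, ~24.4 mag
A_R = 0.17                          # assumed R-band Galactic extinction

print("M_lim = %.1f" % (m_lim - mu - A_R))   # about -7.4
\end{verbatim}
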
\section{Archival data}
\label{Archive}
The intersection region
of the error quadrilateral of GRB\,070201 with M31
has been observed by several facilities, including
{\it ROSAT}, {\it GALEX} and {\it XMM}-Newton.
{\it XMM} observed this field on several epochs,
listed in Table~\ref{Tab-ObsLog}.
Analysis of the 2002 {\it XMM}-Newton data
was presented in Pietsch, Freyberg, \& Haberl (2005).
Interestingly, the last {\it XMM} observation of the field was obtained
about four weeks prior to the GRB trigger, as part of the
M31 {\it XMM}-Newton X-ray survey (Stiele et al. 2007).
Source extraction from the 2007 {\it XMM} images
is presented in \S\ref{Xray}, while a complete
catalog and analysis of the 2007 {\it XMM} M31 observations
will be presented in Stiele et al. (2008, in preparation).
\begin{deluxetable}{ccccc}
\tablecolumns{5}
\tablewidth{0pt}
\tablecaption{Log of {\it XMM}-Newton observations}
\tablehead{
\colhead{Date} &
\colhead{Exp.} &
\colhead{R.A.} &
\colhead{Dec.} &
\colhead{PA} \\
\colhead{} &
\colhead{ks} &
\colhead{deg} &
\colhead{deg} &
\colhead{deg}
}
\startdata
2002-01-26.7 & 4 & 11.33543 & $+$41.93236 & 237.28 \\
2002-01-27.0 & 54 & 11.36929 & $+$41.92389 & 237.24 \\
2007-01-02.9 & 54 & 11.46008 & $+$41.51242 & 251.68 \\
2007-01-04.2 & 52 & 11.23317 & $+$42.14294 & 250.50 \\
2007-01-04.9 & 62 & 11.69142 & $+$41.88250 & 250.71 \\
2007-01-06.2 & 55 & 11.36483 & $+$41.91969 & 249.36 \\
\enddata
\tablecomments{{L}ist of {\it XMM}-Newton observations of the error
quadrilateral of GRB~070201. PA is the position angle of the
{\it XMM}-Newton instruments.}
\label{Tab-ObsLog}
\end{deluxetable}
In Figure~\ref{GALEX_X} we present the {\it GALEX} Near-UV image
of the region of the error quadrilateral intersecting
M31. In this Figure, we show: the
refined (red lines; Mazets et al. 2007)
and original (blue lines; Hurley et al. 2007)
IPN error quadrilateral;
the {\it ROSAT} PSPC sources (blue diamonds; Supper et al. 2001);
the {\it XMM} sources detected in 2002 (yellow boxes; Pietsch et al. 2005);
the {\it XMM} sources detected in 2007 (red circles);
and known and candidate SN remnants (Magnier et al. 1995).
The size of the markers of the X-ray sources corresponds to
their flux.
We note that there is some overlap between
the {\it XMM} observations.
Therefore, more than one symbol of the same type in
almost the same position corresponds to a detection
of the same source in the overlap regions between images
taken during the same year.
The ROTSE-IIIb observations cover the error quadrilateral
south of the yellow line.
Several X-ray sources in the field of GRB\,070201
(Fig.~\ref{GALEX_X})
show long timescale variability
between 2002 and 2007.
Since several types of astrophysical X-ray sources
are known to vary,
this information by itself is not very useful for
the identification of an SGR X-ray counterpart in this field.
However, an SGR or an AXP may reveal itself as a pulsating
X-ray source with periodicity around 10~s.
In the next section we describe a search for
such X-ray variable sources.
A thorough variability analysis of X-ray sources
in the entire M31 galaxy will be presented in
Stiele et al. (2008, in preparation).
\section{Search for short-period X-ray variable sources in the error quadrilateral}
\label{Xray}
All known SGRs/AXPs exhibit X-ray pulsations
with periodicities in the range of 2~s to 12~s.
Therefore,
it may be possible to identify such objects in M31
by looking for X-ray variable sources with periods
in the range of $\sim$1 to 20~s.
To look for such sources,
we inspected the pipeline-processed event files
of the four fields observed in 2007 (Table~\ref{Tab-ObsLog}),
and removed time intervals during which
particle events caused the event rate
in the detector to flare by more than two standard deviations
above the mean rate.
We then created images
of the 0.2--12~keV events, binned to $4$~arcsec resolution.
For the purpose of identifying point sources, standard data
selection was applied to make the images,
to remove events near the edges of
detector chips and bad pixels, and
to reject events that were likely to be cosmic rays (pattern $>4$
for the PN, and $>12$ for the MOS detectors).
We then generated matching exposure maps, and
we searched for point sources using the routine {\tt ewavelet},
separately for each detector.
We extracted events for each source from within
the radius defined by {\tt ewavelet}, which was
$\approx15$~arcsec.
This radius contains about $50\%$ of the photons for each source.
The arrival times of the photons
were transformed to the solar system barycenter using the tool {\tt barycen}.
Finally, we searched for periodicities
in the extracted time tagged photons
using discrete fast
Fourier transforms.
The time series were padded so that the number of points
in each transform was a power of 2.
This provides a frequency resolution slightly finer than
$1/t_{\rm exp}$, where $t_{\rm exp}$ is the exposure time.
The maximum frequency considered was the Nyquist frequency of
the 13.6~Hz PN detector sampling rate, and
the lowest frequency searched was $10^{-4}$~Hz.
We found no signals stronger than 19.15 times
the mean of the power-spectrum noise.
This cutoff power was selected
such that the
probability of a
single source surpassing this threshold,
in one or more of the $\sim10^{6}$ tested frequencies,
is $\sim1\%$ (over the entire FFT-tested frequency range).
Limiting ourselves to the 1~s to 20~s periodicity range,
this limit corresponds to a false alarm probability
of $\sim0.05\%$.
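
For orientation, the order of magnitude of this threshold can be
estimated analytically: for mean-normalized powers the noise is
approximately exponentially distributed, so the threshold for a given
false alarm probability follows from a one-line calculation in Python
(a schematic estimate only; the quoted value of 19.15 was derived from
the actual noise statistics of the data):
\begin{verbatim}
import math

n_freq, fap = 1.0e6, 0.01    # trial frequencies, per-source FAP
# require n_freq * exp(-z) = fap for exponential noise powers
z = math.log(n_freq / fap)
print("threshold ~ %.1f x mean power" % z)        # ~18.4

# the 1-20 s band (0.05-1 Hz) is a small fraction of the full
# 1e-4 to 6.8 Hz search range, reducing the band FAP accordingly
print("band FAP ~ %.1e" % (fap * (1.0 - 0.05) / 6.8))
\end{verbatim}
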
In total we searched for periodicity among 149
X-ray sources within the original error quadrilateral
(Hurley et al. 2007; blue lines in Fig.~\ref{GALEX_X}),
which are listed in Table~\ref{Tab-src}.
We did~not find any periodic variable among the {\it XMM} X-ray
sources.
\begin{deluxetable}{llllll}
\tablecolumns{6}
\tablewidth{0pt}
\tablecaption{{L}ist of X-ray sources for which periodicity was searched}
\tablehead{
\colhead{Name} &
\colhead{RA (J2000)} &
\colhead{Dec (J2000)} &
\colhead{r\tablenotemark{a}} &
\colhead{Counts} &
\colhead{Obs/Det\tablenotemark{b}} \\
\colhead{} &
\colhead{deg} &
\colhead{deg} &
\colhead{$''$} &
\colhead{} &
\colhead{}
}
\startdata
004603.5$+$414623 & 11.51444 & 41.77310 & 18.1 & 42274 & 201/PN \\
004617.7$+$414258 & 11.57362 & 41.71622 & 29.5 & 24500 & 201/PN \\
004618.7$+$414354 & 11.57812 & 41.73170 & 14.1 & 19633 & 201/PN \\
004624.6$+$414414 & 11.60240 & 41.73723 & 18.3 & 22009 & 201/PN \\
004625.6$+$414159 & 11.60687 & 41.69995 & 13.8 & 9300 & 201/PN \\
\enddata
\tablenotetext{a}{Aperture radius in which source counts were extracted.}
\tablenotetext{b}{{L}ast three digits of the {\it XMM} observation ID (starts with 0402561) followed by the detector name (M1, M2 or PN).}
\tablecomments{First five entries of the Table. The Table in its entirety is available via the electronic version.}
\label{Tab-src}
\end{deluxetable}
\section{Is it possible to detect the modulated X-ray emission of SGRs in M31?}
\label{Sim}
The quiescent X-ray luminosity of known AXPs/SGRs ranges from
$10^{33}$ to $10^{36}$~erg~s$^{-1}$.
At the distance of M31 ($770$~kpc, e.g., Ribas et al. 2005),
these correspond to fluxes
of $10^{-17}$ to $10^{-14}$~erg~s$^{-1}$~cm$^{-2}$
in the $2$--$10$~keV range.
Given these flux levels, we discuss here
the chances to detect the modulated X-ray light curves
of SGRs/AXPs in M31 as a function of
the flux of the X-ray source and its
light curve shape (i.e., the fraction of flux within
a pulse).
Specifically, we would like to answer the following question:
what is the probability of detecting an SGR or an AXP,
based on its periodic X-ray signal,
in the Andromeda galaxy?
In order to answer this question we perform the
simulations described below.
In our simulations we assumed a 75~ks exposure
with the {\it XMM}-Newton fully depleted PN CCD,
which roughly corresponds to a 50~ks integration
with all the European Photon Imaging Camera (EPIC) CCDs.
Our simulated time-tagged X-ray light curves consist
of the background expected for an {\it XMM} observation
and a periodic signal.
The periodic light curve consists of a non-variable part
and photons clumped in periodic pulses.
In all the simulated photon-tagged light curves
the periodicity was set to exactly 10~s,
and the width of the periodic pulse
was $20\%$ of the period (i.e., 2~s).
We controlled the ``shape'' of the light curve by adjusting
the fraction of photons
within a pulse (hereafter ``pulse fraction'').
We simulated light curves in a dense grid
of count rates and pulse fractions.
The count rates were set between $10^{-4}$ and $5\times10^{-2}$
counts per second (along 100 logarithmically spaced grid points),
and the pulse fractions in the range
$0.21$ to $0.81$ (61 linearly spaced grid points).
In each grid point we simulated 100 photon-tagged
light curves, and for each light curve
we calculated the power spectrum,
and checked if the 10~s period
signal is stronger than
19.15 times
the mean of the power-spectrum noise.
We note that this threshold was used in the search for
X-ray variable sources described in \S\ref{Xray}.
Finally, in each grid point
we calculated the probability to recover
the periodic signal with a power exceeding that threshold,
which corresponds to a false alarm probability of
about $1\%$.
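
A stripped-down Python version of a single grid point of this Monte
Carlo is sketched below (illustrative only; the count rate and pulse
fraction are arbitrary grid values, and among other simplifications the
sketch ignores the EPIC-PN background level, which the actual
simulations must include):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
t_exp, period, duty = 75.0e3, 10.0, 0.2    # s
rate, pulse_frac = 5.0e-3, 0.6             # counts/s, pulse fraction
threshold = 19.15                          # x mean noise power

def detect_once():
    t = rng.random(rng.poisson(rate * t_exp)) * t_exp
    pulsed = rng.random(t.size) < pulse_frac
    # move "pulsed" photons into a window of width duty*period
    t[pulsed] = (np.floor(t[pulsed] / period) * period
                 + rng.random(pulsed.sum()) * duty * period)
    dt = 1.0 / 13.6                        # PN sampling
    lc, _ = np.histogram(t, bins=int(t_exp / dt), range=(0.0, t_exp))
    n_pad = 2 ** int(np.ceil(np.log2(lc.size)))  # pad to a power of 2
    power = np.abs(np.fft.rfft(lc - lc.mean(), n_pad)) ** 2
    power /= power[1:].mean()
    freq = np.fft.rfftfreq(n_pad, dt)
    k = np.argmin(np.abs(freq - 1.0 / period))
    return power[k - 1:k + 2].max() > threshold

print("P(detect):", np.mean([detect_once() for _ in range(100)]))
\end{verbatim}
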
Figure~\ref{AXP_DetSim} presents the result of these
simulations.
The contours show the probability to detect
the X-ray periodicity with a
false alarm probability of $1\%$ per source
(assuming that in each source $10^{6}$ independent
frequencies are tested),
as a function of the two free parameters.
The lower X-axis shows the observed count rate
(and the luminosity at the distance of M31,
on the upper X-axis),
and the Y-axis marks the fraction of energy
within a pulse whose width is $20\%$ of the period
of the light curve.
On the right-hand Y-axis we show the rms pulsed fraction,
$f_{rms}$, defined in Woods \& Thompson (2006; Table 14.2).
Next we compared these simulations
with the actual
properties of known AXPs and SGRs.
For each one of the 11 AXPs and SGRs
listed in Woods \& Thompson (2006),
for which the luminosity and rms pulse fraction ($f_{rms}$)
are known, we calculated their count rates (or range
of count rates in case they are variable).
We converted the luminosity of the AXPs/SGRs to count rates using the
PIMMS web tool\footnote{http://cxc.harvard.edu/toolkit/pimms.jsp},
and assumed a neutral hydrogen column density of $10^{21}$~cm$^{-2}$,
in the direction of M31 (Dickey \& Lockman 1990; Kalberla et al. 2005),
and that the distance to M31 is 770~kpc (e.g., Ribas et al. 2005).
Furthermore, we assumed that the X-ray spectrum of each SGR/AXP is described
only\footnote{Note that some of these objects have more complicated spectra.} by
a power-law and we adopted the measured power-law indices
for each one of these sources (Woods \& Thompson 2006).
The location of the known SGRs/AXPs in the pulse fraction
vs. X-ray luminosity (in the $2-10$~keV
range\footnote{The simulation assumes the observations are conducted
in the 0.2--10~keV band. For compatibility with
Woods \& Thompson (2006), we present the luminosity
in the $2-10$~keV band.}) space are presented in
Fig.~\ref{AXP_DetSim}
as circles (or lines to indicate a range).
The X-ray emission about one month
before and several months after an SGR giant flare
is known to be higher than ``normal''.
This may elevate the probability to detect
X-ray emission from extragalactic SGR giant flares
in the {\it XMM} M31 images taken four weeks prior to the burst.
For example, the X-ray flux of SGR~1900$+$14 was about
1.5 times higher than normal, starting about one month prior
to the SGR giant flare of 1998 Aug 27,
and remained elevated for about a year after the flare.
In the case of the
December 27, 2004 giant flare,
the X-ray emission from SGR~1806$-$20
was about two times brighter than
its typical quiescent emission about one month prior to the burst.
In Fig.~\ref{AXP_DetSim},
we mark the elevated X-ray luminosities of SGR~1806$-$20
and SGR~1900$+$14 by stars.
Based on this plot we estimate that
the probability to detect a pulsating
X-ray source associated with an AXP/SGR
in M31, using the 50~ks {\it XMM}-Newton image we analyzed,
is $\sim10\%$ (per SGR/AXP).
We note, however, that for a 2~Ms exposure using {\it XMM},
the probability to detect an AXP/SGR in
M31 increases to $\sim50\%$.
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{f3.eps}}
\caption{The probability (contours) to detect,
with $1\%$ false alarm probability,
each one of the Galactic AXPs/SGRs (in quiescent state)
if placed in the Andromeda
galaxy, as a function of the source count rate
(lower X-axis) or luminosity (upper X-axis),
and the fraction of energy within
a $20\%$ (of period) width pulse (left Y-axis).
Translation of the energy within the pulse
to rms pulse fraction, $f_{rms}$
(for definition see Woods \& Thompson [2006; Table 14.2]), is shown
on the right-hand Y-axis.
The locations of known AXPs/SGRs are shown
as circles or lines (if variable).
The simulations assume the {\it XMM}-PN is observing
the targets for 75~ks which is roughly
equivalent to the 50~ks {\it XMM} observations we analyzed.
The properties of the SGRs/AXPs
(i.e., luminosity range, spectral shape, and rms pulse fraction)
were adopted from Woods \& Thompson (2006).
To account for the observed elevated X-ray luminosity of SGRs
about one month prior to giant flares,
we increased the maximum quiescent luminosities
of SGR~1900$+$14 and SGR~1806$-$20 by factors of 1.5 and 2, respectively
(Woods et al. 2001; 2007).
These elevated luminosities are marked as stars
in the Figure.
\label{AXP_DetSim}}
\end{figure}
\section{Discussion}
\label{Disc}
In the following we discuss the
energetics, spectral and temporal properties of GRB~070201 (\S\ref{disc-en}).
Given the properties of this event we discuss its nature in \S\ref{disc-nat}.
\subsection{Energetics, spectrum and light curve}
\label{disc-en}
The IPN error quadrilateral of the bright
GRB\,070201 includes the outskirts of
the nearby (770~kpc)
galaxy M31.
If indeed GRB\,070201 originated in M31,
the isotropic energy release from this burst,
$(1.41_{-0.18}^{+0.07})\times10^{45}$~erg,
is of the same order of magnitude as that emitted by SGR giant flares.
For comparison, the isotropic energy release of the 1979 March 5
SGR~0526$-$66 flare was $>6\times10^{44}\,$erg (Mazets et al. 1979), that
of the 1998 August 27 flare from SGR~1900$+$14
was $2\times10^{44}\,$erg
(Mazets et al. 1999),
while the energy release from the 2004 December 27
giant flare from SGR~1806$-$20 was as high as
$(1-4)\times10^{46}\,$erg
(Hurley et al. 2005; Palmer et al. 2005; Cameron et al. 2005).
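
For the M31 distance, the isotropic energy quoted above follows
directly from the Konus-{\it Wind} fluence (a quick numerical check in
Python):
\begin{verbatim}
import math

S = 2.00e-5               # fluence, erg/cm^2 (20 keV - 1.2 MeV)
d = 770.0e3 * 3.086e18    # 770 kpc in cm
print("E_iso = %.2e erg" % (4.0 * math.pi * d**2 * S))  # ~1.4e45
\end{verbatim}
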
In the context of the magnetar model for SGR giant flares
(Thompson \& Duncan 1995; 1996)
we expect the fireball to be optically
thick and therefore to produce a quasi-thermal spectrum.
As discussed in \S\ref{GRB070201},
the gamma-ray spectrum
of GRB~070201 (Golenetskii et al. 2007b), at peak luminosity,
as well as that of the SGR~1806$-$20
2004 Dec 27 giant flare (Frederiks et al. 2007a),
are not well described by a black-body spectrum.
However, this does not necessarily mean that
the spectrum of the burst is not a modified
thermal spectrum.
A simple consistency test for the SGR hypothesis
is to assume the spectrum is quasi-thermal;
we would then expect the black-body radius of the
emission region to be on the order of
the radius of a neutron star.
By approximating the gamma-ray spectrum of
SGR flares by a black-body spectrum,
one can derive a rough black-body radius for the
bursting source.
GRB~070201 had a peak luminosity
(on a 2~ms time scale) of $1.14_{-0.35}^{+0.20}\times10^{47}$~erg~s$^{-1}$,
and a peak energy of the observed
gamma-ray spectrum that corresponds
to a black-body temperature of $\sim1.6\times10^{9}$~K.
Using the distance of M31,
we find a black-body radius of $60\pm40\,$km.
This radius is roughly consistent with
the sizes derived for other SGR giant flares
(e.g., Hurley et al. 2005; Ofek et al. 2006).
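
The numbers entering this estimate are easily reproduced with the
Stefan-Boltzmann law (a rough sketch; the quoted uncertainty follows
from propagating the errors on the peak luminosity and $E_{p}$):
\begin{verbatim}
import math

L = 1.14e47        # erg/s, peak luminosity on a 2 ms time scale
T = 1.6e9          # K, effective black-body temperature
sigma = 5.67e-5    # erg cm^-2 s^-1 K^-4

# L = 4 pi R^2 sigma T^4
R = math.sqrt(L / (4.0 * math.pi * sigma * T**4))
print("R_bb ~ %.0f km" % (R / 1.0e5))   # ~50 km
\end{verbatim}
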
The temporal behavior of the gamma-ray emission from GRB~070201
(Fig.~\ref{GRB070201_LC}; see also Mazets et al. 2007)
is somewhat different from that of the
2004 December 27, SGR~1806$-$20 giant flare
(e.g., Hurley et al. 2005; Palmer et al. 2005; Terasawa et al. 2005).
In GRB~070201, the rise to maximum flux
is interrupted by two secondary peaks,
and the total rise time is somewhat longer
than in the case of the 2004 December 27 event.
Moreover, it seems that the light curve of GRB~070201
is more variable than the typical SGR giant flare light curves.
Such variability is consistent with that seen in
the case of cosmological short-duration hard-spectrum GRBs
(e.g., Nakar \& Piran 2002; for a recent review see Nakar 2007).
However, our knowledge about SGR giant flare
light curves is based on a very small sample of events.
\subsection{The nature of GRB~070201}
\label{disc-nat}
Given the short duration of this GRB
and its spatial association with M31,
there is a possibility that this burst
is an SGR flare in M31.
Estimating the probability for a chance coincidence
is susceptible to the pitfalls of {\it a-posteriori} statistics.
Keeping this in mind,
a rough estimate of the chance coincidence probability
is given by the sum of the area of M31 and the
error quadrilateral of GRB\,070201
(about $2$~deg$^{2}$), multiplied by the
number of short-hard GRBs detected by Konus-{\it Wind}
in the last 15\,yrs
($\sim30$; Ofek 2007a), and divided by the area of the
celestial sphere.
This rough chance coincidence probability is about $0.2\%$.
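Explicitly:
\begin{verbatim}
area, n_bursts, sky = 2.0, 30, 41253.0   # deg^2, bursts, deg^2
print("P_chance ~ %.2f%%" % (100.0 * area * n_bursts / sky))
# ~0.15%, i.e. of order 0.2%
\end{verbatim}
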
Therefore, we suggest that the simplest explanation
is that GRB\,070201 is indeed related to M31,
and that it was an SGR giant flare.
This is supported by the fact that, like other
known SGRs (Gaensler et al. 2001),
the GRB\,070201 error box is spatially associated
with star forming regions in M31 (Fig.~\ref{GALEX_X}).
We note that, if located in M31,
the energy of this event ($\sim10^{45}$~erg)
is too large for other kinds of known ``Galactic GRBs''
(e.g., Kasliwal et al. 2007).
Moreover, Abbott et al. (2007b) searched for
a gravitational wave signal coincident
with the time of this burst using
the Laser Interferometer Gravitational-Wave Observatory
(LIGO).
The lack of a signal argues against a compact
object merger (neutron stars/black-holes)
in M31, while it is consistent with this event
being an SGR giant flare in the Andromeda galaxy.
Finally, we note that instruments like {\it Swift}-BAT
(Gehrels et al. 2004),
and {\it GLAST}-GBM (Band et al. 2004),
will be able to detect fainter bursts, with energies
of $\sim10^{42}$~erg, from the Andromeda galaxy.
Such bursts are several orders of magnitude
more common than $\sim10^{45}$~erg events.
Therefore, with appropriate fast response
X-ray follow up observations of GRBs with
error regions that include nearby galaxies,
it may be possible to detect the afterglows
of such extragalactic SGR flares.
To summarize,
we do~not identify a visible light afterglow
associated with GRB\,070201.
Furthermore, we did~not find any periodic X-ray source
in archival {\it XMM} images of the intersection of
the error quadrilateral of GRB~070201 with M31.
We show
that the probability to detect a pulsating
X-ray source associated with an AXP/SGR
in M31, in the available {\it XMM} data,
is $\sim10\%$.
Therefore, the fact that we did not find
an X-ray pulsating source within the error quadrilateral
does not rule out the possibility that GRB~070201
is an SGR giant flare in M31.
\acknowledgments
This work is supported in part by grants from NSF and NASA.
HS acknowledges support from the Bundesministerium f\"ur Wirtschaft und
Technologie/Deutsches Zentrum f\"ur Luft- und Raumfahrt
(BMWI/DLR, FKZ 50 OR 0405).
MMK acknowledges the Moore Foundation for
the George Ellery Hale Fellowship.
The quasi-particle concept serves as a camouflage of interaction
in many-body systems. The aim is to explain a macroscopic system
in equilibrium as an ensemble of noninteracting and countable
units called ``quasi-particles''. Non-equilibrium may then be
described by a kinetic theory for these quasi-particles.
In this article the interaction-free theory is occupation number
statistics which yields the entropy as a functional of the
quasi-particle distribution function $f$. Interaction between
gas atoms is taken care of by modelling the elementary cell
volume and by suitable constraints for $f$.
Our theory reproduces the exact quantum mechanical density
corrections to the distribution function and to equilibrium
thermodynamics. The only partial success of related attempts
(e.g. \cite{RaS76}) stems from an insufficient handling of the
combinatorial entropy, i.e. the neglect of van der Waals
blocking.
\section{One-particle distribution function}
The one-particle distribution function $f(\vek{p}_1,\vek{r},t)$
of gas kinetics enables the calculation of densities (in
position space) for macroscopic quantities by taking moments
over momentum space. We insist the two simplest ones be
the number density $n$
\begin{eqnarray}
\label{E-1-1}
n(\vek{r},t) = \int \mbox{d}^3 p_1\; f(\vek{p}_1,\vek{r},t)
\end{eqnarray}
and the density of linear momentum $\vek{\pi}$
\begin{eqnarray}
\label{E-1-2}
\vek{\pi}(\vek{r},t) = \int \mbox{d}^3 p_1\;
\vek{p}_1\; f(\vek{p}_1,\vek{r},t)
\ .
\end{eqnarray}
Any justification of gas kinetics within the frame of a more
fundamental theory starts with a definition of $f$ which one is
free to choose, provided the interpretation of the moments
\fmref{E-1-1} and \fmref{E-1-2} is correct. Our quantum
statistical ansatz reads
\begin{eqnarray}
\label{E-1-3}
f(\vek{p}_1,\vek{r},t)
:=
f_W(\vek{p}_1,\vek{r},t)
+
\Psi(\vek{p}_1,\vek{r},t)
\ ,
\end{eqnarray}
where $f_W$ is the Wigner distribution function
\begin{eqnarray}
\label{E-1-4}
f_W(\vek{p}_1,\vek{r},t)
=
\left( 2 \pi \hbar\right)^{-3}
\int \mbox{d}^3 r^\prime
\erw{\psi^\dagger(\vek{r}+\frac{\vek{r}^\prime}{2},t)
\psi(\vek{r}-\frac{\vek{r}^\prime}{2},t)}
\exp\left\{\frac{i}{\hbar}\;\vek{p}_1\cdot\vek{r}^\prime\right\}
\end{eqnarray}
and $\Psi$ is a bilinear functional thereof
\begin{eqnarray}
\label{E-1-5}
\hspace*{-10mm}&&\Psi(\vek{p}_1,\vek{r},t) :=\\
\hspace*{-10mm}&&
\int \mbox{d}^3 p_2 \mbox{d}^3 p_1^\prime \mbox{d}^3 p_2^\prime\;
E(\vek{p} , \vek{p}^\prime)
\delta\left(\vek{P}-\vek{P}^\prime\right)
\left[
f_W(\vek{p}_1,\vek{r},t) f_W(\vek{p}_2,\vek{r},t)
-
f_W(\vek{p}_1^\prime,\vek{r},t) f_W(\vek{p}_2^\prime,\vek{r},t)
\right]
\nonumber \\
\hspace*{-10mm}&&
+
\int \mbox{d}^3 p_2 \mbox{d}^3 p_1^\prime \mbox{d}^3 p_2^\prime\;
O(\vek{p} , \vek{p}^\prime)
\delta\left(\vek{P}-\vek{P}^\prime\right)
\left[
f_W(\vek{p}_1,\vek{r},t) f_W(\vek{p}_2,\vek{r},t)
+
f_W(\vek{p}_1^\prime,\vek{r},t) f_W(\vek{p}_2^\prime,\vek{r},t)
\right]
\nonumber
\ ,
\end{eqnarray}
with $\vek{P}, \vek{P}^\prime$ and $\vek{p}, \vek{p}^\prime$
denoting centre-of-mass and relative momenta throughout the
article
\begin{eqnarray}
\label{E-1-10}
\vek{P} = \vek{p}_1 + \vek{p}_2
\ , \qquad
\vek{p} = {\frac{1}{2}}\;(\vek{p}_1 - \vek{p}_2 )
\ .
\end{eqnarray}
The even kernel
\begin{eqnarray}
\label{E-1-6}
E(\vek{p} , \vek{p}^\prime)
=
E(\vek{p}^\prime , \vek{p})
=
E(-\vek{p} , -\vek{p}^\prime)
\end{eqnarray}
and the odd kernel
\begin{eqnarray}
\label{E-1-7}
O(\vek{p} , \vek{p}^\prime)
=
-O(\vek{p}^\prime , \vek{p})
=
O(-\vek{p} , -\vek{p}^\prime)
\end{eqnarray}
remain to be specified. The structure of $\Psi$ is reminiscent of
Boltzmann's collision integral, although there is no
$\delta$-function for the kinetic energies of relative motion
$E_p$, $E_{p^\prime}$ which are defined by
\begin{eqnarray}
\label{E-1-11}
E_p = \frac{\vek{p}^2}{2 m_{rel}} = \frac{\vek{p}^2}{m}
\ .
\end{eqnarray}
In any case equations \fmref{E-1-1} and \fmref{E-1-2} hold:
\begin{eqnarray}
\label{E-1-8}
n(\vek{r},t)
=
\erw{\psi^\dagger(\vek{r},t)\psi(\vek{r},t)}
=
\int \mbox{d}^3 p_1\; f(\vek{p}_1,\vek{r},t)
=
\int \mbox{d}^3 p_1\; f_W(\vek{p}_1,\vek{r},t)
\end{eqnarray}
and
\begin{eqnarray}
\label{E-1-9}
\vek{\pi}(\vek{r},t)
=
\int \mbox{d}^3 p_1\; \vek{p}_1 \; f(\vek{p}_1,\vek{r},t)
=
\int \mbox{d}^3 p_1\; \vek{p}_1 \; f_W(\vek{p}_1,\vek{r},t)
\ .
\end{eqnarray}
There is not only a quantum statistical motivation to the
definition \fmref{E-1-3}, but it also allows a quasi-particle
interpretation.
\section{Quantum statistical background}
Explicit expressions for the kernels $E$ and $O$ follow from the
theory of Kadanoff and Baym \cite{KaB62}, from which in the spatially
homogeneous case and in ${\mathcal T}$-matrix approximation the
kinetic equation
\begin{eqnarray}
\label{E-2-1}
\partial_t (f_W + \Psi) = J_{B}[f_W]
\end{eqnarray}
is obtained \cite{Bae69,Bae84} with $J_{B}[f_W]$ as Boltzmann's
collision integral depending on $f_W$. Here $\Psi$ is just
the functional \fmref{E-1-5} with the kernels
\begin{eqnarray}
\label{E-2-2}
O(\vek{p} , \vek{p}^\prime)
&=&
\frac{1}{4}(2\pi\hbar)^3
{\mathcal P}(E_p - E_{p^\prime})
\\
&&
\left\{
|\bra{\vek{p}}{\mathcal T}_{\pm}(E_{{p}^\prime}+i\epsilon)
\ket{\vek{p}^\prime}|^2
-
|\bra{\vek{p}^\prime}{\mathcal T}_{\pm}(E_{{p}}+i\epsilon)
\ket{\vek{p}}|^2
\right\}
\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{E-2-3}
E(\vek{p} , \vek{p}^\prime)
=&&
\pi (2\pi\hbar)^3
\delta(E_p - E_{p^\prime})
\\
&&
\Im{
\bra{\vek{p}}{\mathcal T}_{\pm}(E_{{p}^\prime}+i\epsilon)
\ket{\vek{p}^\prime}^*
\bra{\vek{p}}{\mathcal T}_{\pm}^\prime(E_{{p}^\prime}+i\epsilon)
\ket{\vek{p}^\prime}
}
\nonumber \\
&+&
\frac{1}{4}(2\pi\hbar)^3
{\mathcal P}^\prime(E_p - E_{p^\prime})
\nonumber
\\
&&
\left\{
|\bra{\vek{p}}{\mathcal T}_{\pm}(E_{p^\prime}+i\epsilon)
\ket{\vek{p}^\prime}|^2
+
|\bra{\vek{p}^\prime}{\mathcal T}_{\pm}(E_{p}+i\epsilon)
\ket{\vek{p}}|^2
\right\}
\nonumber
\ .
\end{eqnarray}
The ${\mathcal T}$-matrix occurring here is the properly
symmetrized momentum representation of the two-particle operator
\begin{eqnarray}
\label{E-2-4}
{\mathcal T}(z)
=
V - V \frac{1}{H-z} V
\ , \qquad
{\mathcal T}^\prime(z) = \dd{}{z} {\mathcal T}(z)
\ ,
\end{eqnarray}
with $H=H_{kin}+V$ being the Hamiltonian of relative motion.
${\mathcal P}$ is the principal value distribution and
${\mathcal P}^\prime$ its derivative. The upper sign in
${\mathcal T}_{\pm}$ (and elsewhere) refers to bosons and the
lower sign to fermions.
In equilibrium, quantum statistical mechanics yields the density
expansion for $f=f_W+\Psi$, which up to second order reads
\begin{eqnarray}
\label{E-2-6}
f_{eq}(\vek{p}_1)
=
\frac{n}{(2\pi m \kappa T)^{3/2}}
e^{-\frac{\vek{p}_1^2}{2 m \kappa T}}
\left(
1 + n (2 B(T) + \phi(\vek{p}_1))
\right)
\ ,
\end{eqnarray}
with $\kappa$ being Boltzmann's constant and
\begin{eqnarray}
\label{E-2-7}
\phi(\vek{p}_1)
&=&
\pm
\lambda^3(T)\; e^{-\frac{\vek{p}_1^2}{2 m \kappa T}}
-
\lambda^3(T)\; \int \mbox{d}^3 p_2\;
e^{-\frac{\vek{p}_2^2}{2 m \kappa T}}
\Big\{
\frac{\tilde{F}({p})}{\kappa T}
+\tilde{G}({p})
\Big\}
\nonumber
\ .
\end{eqnarray}
Here we have introduced the thermal wavelength
\begin{eqnarray}
\label{E-2-8}
\lambda
=
\frac{2\pi\hbar}{\sqrt{2\pi m \kappa T}}
\ .
\end{eqnarray}
The quantities $\tilde{F}$ and $\tilde{G}$ as well as the second
virial coefficient $B(T)$ are given as
functionals of the ${\mathcal T}$-matrix:
\begin{eqnarray}
\label{E-2-8-1}
\tilde{F}({p})
=
{\frac{1}{2}}\; (2\pi\hbar)^3\;
\Re{\bra{\vek{p}}{\mathcal T}_{\pm}(E_{{p}}+i\epsilon)
\ket{\vek{p}}}
\ ,
\end{eqnarray}
\begin{eqnarray}
\label{E-2-8-2}
\tilde{G}({p})
=
\frac{\pi}{2} (2\pi\hbar)^3\;
\int \mbox{d}^3 q\;
&&
\delta\left(E_{{p}} - E_{{q}} \right)
\\
&\times&
\Im{
\bra{\vek{p}}{\mathcal T}_{\pm}(E_{{q}}+i\epsilon)
\ket{\vek{q}}
\bra{\vek{q}}{\mathcal T}_{\pm}^\prime(E_{{q}}+i\epsilon)
\ket{\vek{p}}^*
}
\ ,
\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{E-2-9}
B(T) &=& B_0(T) + B_1(T) + B_2(T)
\\[2mm]
B_0(T) &=& \mp 2^{-5/2}\; \lambda^3
\ , \quad
B_1(T) = \frac{\SmallMean{\tilde{F}}}{\kappa T}
\ , \quad
B_2(T) = \SmallMean{\tilde{G}}
\nonumber
\ ,
\end{eqnarray}
where $\SmallMean{\cdot}$ denotes the thermal average, e.g.
\begin{eqnarray}
\label{E-2-9-1}
\SmallMean{\tilde{F}}
&=&
\frac{\int\mbox{d}^3 p\; \exp\{-\frac{E_p}{\kappa T}\} \tilde{F}(p)}
{\int\mbox{d}^3 p\; \exp\{-\frac{E_p}{\kappa T}\} }
\ .
\end{eqnarray}
These formulae are exact if the two-particle interaction does
not allow any bound states. $B_1$ essentially accounts for
long-range attraction and $B_2$ for hard repulsion, this
correspondence being most striking in the van der Waals limit
(cf. section \xref{SecvdW}).
\section{Quasi-particle picture for equilibrium}
We just work out the usual idea:
\begin{enumerate}
\item The entropy density $s$ is represented as a functional of the
one-particle distribution function. This is an outcome of
occupation number statistics (combinatorial entropy)
\begin{eqnarray}
\label{E-3-1}
s = -\kappa \int \mbox{d}^3 p_1 \;
\Big[
&&
f(\vek{p}_1) \ln\big(v_{el}(\vek{p}_1)f(\vek{p}_1)\big)
\\
&&\mp
\left(
\frac{1}{v_{el}(\vek{p}_1)}\pm f(\vek{p}_1)
\right)
\ln\big(1\pm v_{el}(\vek{p}_1)f(\vek{p}_1)\big)
\Big]
\nonumber
\end{eqnarray}
with $v_{el}$ as the volume of an elementary cell in
six-dimensional $\mu$-space, $v_{el}$ accommodating one
single-particle quantum state. Eq. \fmref{E-3-1} is well known as
a standard result for non-interacting particles. However, the
choice of $v_{el}$ and the constraints for $f$ may provide a
camouflage of interaction.
\item For equilibrium the distribution function $f$ is the one
which minimizes $s$ subject to appropriate constraints.
\item The constraints are a given number density $n$,
\begin{eqnarray}
\label{E-3-2}
n = \int \mbox{d}^3 p_1\; f(\vek{p}_1)
\ ,
\end{eqnarray}
and a given energy density $u$,
\begin{eqnarray}
\label{E-3-3}
u = \int \mbox{d}^3 p_1\; \varepsilon(\vek{p}_1)\; f(\vek{p}_1)
\ .
\end{eqnarray}
\end{enumerate}
The quasi-particle interpretation is now introduced by way of
ansatz (eqs. \fmref{E-3-4}, \fmref{E-3-5}), its aim being to
account for interaction effects in lowest order of the
density. Strong repulsion reduces the freely accessible volume
for gas particles. Because of this effect, called ``van der
Waals blocking'', more than just $(2\pi\hbar)^3$ is needed as an
elementary cell volume
\begin{eqnarray}
\label{E-3-4}
v_{el}(\vek{p}_1) =
(2\pi\hbar)^3
\left[1 +
\int \mbox{d}^3 p_2\;
G\left(
\left|
\vek{p}
\right|
\right)
\; f(\vek{p}_2)
\right]
\ .
\end{eqnarray}
Also, an interacting gas particle carries with it a correlation
cloud giving rise to an interactive contribution which changes
the kinetic energy of a particle into the energy of a
quasi-particle
\begin{eqnarray}
\label{E-3-5}
\varepsilon(\vek{p}_1) =
\frac{\vek{p}_1^2}{2 m}
+
\int \mbox{d}^3 p_2\;
F\left(
\left|
\vek{p}
\right|
\right)
\; f(\vek{p}_2)
\ .
\end{eqnarray}
For our variational problem the functions $G$ and $F$ are
considered as given though, for the time being, unknown. They
will be determined afterwards by comparing the equilibrium
solution $f=f_{eq}$ with the corresponding expression from
many-body quantum theory. The resulting thermodynamics then
serves as a further touchstone of the quasi-particle
interpretation.
The solution of the variational problem is obviously equivalent
to
\begin{eqnarray}
\label{E-3-6}
\left(
\frac{\delta s}{\delta f(\vek{p})}
\right)_{f=f_{eq}}
=
\frac{1}{\vartheta}
\left(
\frac{\delta u}{\delta f(\vek{p})}
-
\alpha
\frac{\delta n}{\delta f(\vek{p})}
\right)_{f=f_{eq}}
\end{eqnarray}
with $\vartheta$ and $\alpha$ as Lagrange parameters due to the
constraints for $f$. Comparison with the thermodynamic identity
for the entropy $S$ at constant volume
\begin{eqnarray}
\label{E-3-7}
\mbox{d} S
=
\frac{1}{T}
\left(
\mbox{d} U - \mu \mbox{d} N
\right)
\end{eqnarray}
reveals that $\vartheta$ means the temperature, $\vartheta=T$,
and $\alpha$ means the chemical potential,
$\alpha=\mu$. According to \fmref{E-3-6} $f=f_{eq}$ is
equivalently determined by the fixed-point equation
\begin{eqnarray}
\label{E-3-8}
f(\vek{p}_1) =
\frac{\zeta}{v_{el}(\vek{p}_1)}\;
\frac{\exp\left\{-\beta\left(\frac{\vek{p}_1^2}{2 m}+K(\vek{p}_1)
\right)\right\}}
{1\mp \zeta \exp\left\{-\beta\left(\frac{\vek{p}_1^2}{2 m}+K(\vek{p}_1)
\right)\right\}}
\end{eqnarray}
with
\begin{eqnarray}
\label{E-3-9}
\beta = \frac{1}{\kappa T}
\ , \qquad
\zeta = \exp\left\{\beta\mu\right\}
\end{eqnarray}
and
\begin{eqnarray}
\label{E-3-10}
K(\vek{p}_1) =
\int \mbox{d}^3 p_2\;
&\Bigg\{&
2 f(\vek{p}_2)
F\left(
\left|
\vek{p}
\right|
\right)
\\
&&
\pm
\frac{(2\pi\hbar)^3}{\beta v_{el}^2(\vek{p}_2)}\;
\ln\big(1\pm v_{el}(\vek{p}_2)f(\vek{p}_2)\big)
G\left(
\left|
\vek{p}
\right|
\right)
\Bigg\}
\nonumber
\ .
\end{eqnarray}
\section{Lowest-order density corrections}
The fugacity $\zeta$ can be given as a power series in $n$
\begin{eqnarray}
\label{E-4-1}
\zeta = n \lambda^3
\left(
1 + 2 n B(T) + \cdots
\right)
\end{eqnarray}
and the fixed-point equation \fmref{E-3-8} may be iterated
starting off with a Maxwellian normalized to $n$. This yields a
density expansion according to which -- apart from third and
higher order contributions -- one regains eq. \fmref{E-2-6} for
$f$, but now with
\begin{eqnarray}
\label{E-4-2}
\phi(\vek{p}_1)
&=&
\pm
\lambda^3\; e^{-\frac{\beta}{2} E_{{p}_1}}
\\
&&
-
\frac{2 \lambda^3}{(2\pi\hbar)^3}
\; \int \mbox{d}^3 p_2\;
e^{-\frac{\beta}{2} E_{{p}_2}}
\left\{
\beta
F\left(
\left|
\vek{p}
\right|
\right)
+
G\left(
\left|
\vek{p}
\right|
\right)
\right\}
\ .
\nonumber
\end{eqnarray}
Therefore, the quasi-particle picture independently explains
eq. \fmref{E-2-6}, if and only if
\begin{eqnarray}
\label{E-4-3}
F({p})
=
\tilde{F}({p})
\end{eqnarray}
and
\begin{eqnarray}
\label{E-4-4}
G({p})
=
\tilde{G}({p})
\ .
\end{eqnarray}
Having determined $f$ up to second order in $n$, we deduce the
entropy with its lowest order density corrections from
eq. \fmref{E-3-1}:
\begin{eqnarray}
\label{E-4-5}
s = n \kappa
\left\{
\frac{5}{2} - \ln\left( n \lambda^3 \right)
- n \left[ B(T) + T B^\prime(T) \right]
\right\}
\ ,
\end{eqnarray}
with $B(T)$ defined by eqs. \fmref{E-2-9}.
Analogously evaluating eq. \fmref{E-3-3}, we obtain the two
leading contributions to the energy density:
\begin{eqnarray}
\label{E-4-6}
u = n \kappa T
\left\{
\frac{3}{2} - n T B^\prime(T)
\right\}
\ .
\end{eqnarray}
The last two results imply the pressure equation of state
\begin{eqnarray}
\label{E-4-8}
p = n
\left(
\pp{u}{n}
\right)_{s/n}
- u
=
n \kappa T
\left\{
1 + n B(T)
\right\}
\ .
\end{eqnarray}
The exact density corrections, i.e.\ virial contributions,
have thus been obtained.
\section{Physical meaning of the quasi-particles}
One may imagine a quasi-particle to be a gas particle together
with its surrounding correlation cloud which is described by
the radial distribution function $g(r,T)$. This interpretation
suggests itself because the virial correction to the energy of
the ideal gas (eq. \fmref{E-4-6}) is mainly determined by $g(r,T)$ in
an obvious way. The interpretation is immediately
evident in the classical case where
\begin{eqnarray}
\label{E-5-A}
B(T)
&=&
B_{cl}(T)
=
-{\frac{1}{2}}\; \int \mbox{d}^3 r \;(g_0(r,T)-1)
\end{eqnarray}
with
\begin{eqnarray}
\label{E-5-B}
g_0(r,T)
&=&
g_{0,cl}(r,T)
=
\exp\left\{-\frac{V(r)}{\kappa T} \right\}
\end{eqnarray}
as the first term of the density expansion
\begin{eqnarray}
\label{E-5-C}
g(r,T)
&=&
g_0(r,T) + n g_1(r,T) + n^2 g_2(r,T) + \cdots
\ .
\end{eqnarray}
Therefore
\begin{eqnarray}
\label{E-5-D}
n^2 \kappa T
\;
T B_{cl}^\prime(T)
&=&
\frac{n^2}{2}\int\mbox{d}^3 r \; g_{0,cl}(r,T) V(r)
\end{eqnarray}
is the classical virial correction to the internal energy.
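For a given pair potential, eqs. \fmref{E-5-A} and \fmref{E-5-B}
reduce to a one-dimensional quadrature. A minimal numerical sketch in
Python, for the Lennard-Jones parameters of section \xref{SecvdW}
(a purely classical estimate; for helium at low temperatures quantum
corrections are of course essential):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

V0, sigma = 10.22, 2.56        # V_0/kappa in K, sigma in Angstrom

def V(r):                      # Lennard-Jones potential (in K)
    return 4.0 * V0 * ((sigma / r)**12 - (sigma / r)**6)

def B_cl(T):                   # eqs. (E-5-A), (E-5-B), in Angstrom^3
    f = lambda r: (np.exp(-V(r) / T) - 1.0) * r**2
    return -2.0 * np.pi * quad(f, 1.0e-2, 50.0 * sigma, limit=200)[0]

for T in (5.0, 10.0, 50.0):
    print("T = %5.1f K   B_cl = %9.1f Angstrom^3" % (T, B_cl(T)))
\end{verbatim}
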
In general, however, for a homogeneous system
\begin{eqnarray}
\label{E-5-E}
g(r,T)
&=&
\frac{1}{n^2}
\SmallMean{\psi^\dagger(\vek{r}^\prime)\,\psi^\dagger(\vek{r}^\prime+\vek{r})\,
\psi(\vek{r}^\prime+\vek{r})\,\psi(\vek{r}^\prime)}
\\
&=&
\frac{2}{n}
\frac{\delta f_{free}}{\delta V(r)}
\end{eqnarray}
with $f_{free}$ being the free energy per particle, whence
\begin{eqnarray}
\label{E-5-F}
g_0(r,T)
&=&
2 \kappa T \frac{\delta B(T)}{\delta V(r)}
\ .
\end{eqnarray}
Then quantum mechanically \cite{BaG70,Bae84}
\begin{eqnarray}
\label{E-5-G}
\int\mbox{d}^3 r \; g_{0}(r,T) V(r)
&=&
2^{3/2} \lambda^3 \int\mbox{d}^3 p\;
e^{-\beta E_p}\; {}^{(-)}\bra{\vek{p}}V_{\pm}\ket{\vek{p}}^{(-)}
\ ,
\end{eqnarray}
where the scattering eigenstates $\ket{\vek{p}}^{(-)}$ satisfy the
Lippmann Schwinger equation which reads in momentum
representation
\begin{eqnarray}
\label{E-5-H}
\braket{\vek{p}^\prime}{\vek{p}}^{(-)}
&=&
\delta(\vek{p}^\prime - \vek{p})
- (E_{p^\prime}-E_p-i\epsilon)
\bra{\vek{p}^\prime}V\ket{\vek{p}}^{(-)}
\ .
\end{eqnarray}
The virial correction to the energy density (eqs. \fmref{E-3-3},
\fmref{E-4-6}) is made up of three terms
\begin{eqnarray}
\label{E-5-I}
n^2 \kappa T
\;
T B^\prime(T)
&=&
\int\mbox{d}^3 p_1\;
\frac{\vek{p}_1^2}{2 m} \Psi_M(\vek{p}_1)
+
\int\mbox{d}^3 p_1\;\mbox{d}^3 p_2\;
f_M(\vek{p}_1)\;f_M(\vek{p}_2)\;F(p)
\\
&&
+
\int\mbox{d}^3 p_1\;
\frac{\vek{p}_1^2}{2 m} n^2 f_{W,2}(\vek{p}_1)
\nonumber
\ .
\end{eqnarray}
Here $f_M$ is the Maxwellian normalized to $n$ and $\Psi_M$ is
our functional \fmref{E-1-5} with $f_W$ replaced by
$f_M$. $f_{W,2}$ denotes the second term in the density
expansion of the Wigner function
\begin{eqnarray}
\label{E-5-J}
f_{W}(\vek{p}_1)
&=&
f_{M}(\vek{p}_1) + n^2 f_{W,2}(\vek{p}_1) + \cdots
\ .
\end{eqnarray}
Looking more closely at eq. \fmref{E-5-I} and taking account of
\fmref{E-5-G} one can show that
\begin{eqnarray}
\label{E-5-K}
\int\mbox{d}^3 p_1\;
\frac{\vek{p}_1^2}{2 m} \Psi_M(\vek{p}_1)
+
\int\mbox{d}^3 p_1\;\mbox{d}^3 p_2\;
f_M(\vek{p}_1)\;f_M(\vek{p}_2)\;F(p)
&=&
\int\mbox{d}^3 r \; g_{0}(r,T) V(r)
\ .
\end{eqnarray}
Therefore, in the classical limit the last term in
eq. \fmref{E-5-I} must vanish.
\section{Classical van der Waals approximation}
\label{SecvdW}
Considering distinguishable particles, one has to neglect
quantum statistical contributions. This means
\begin{eqnarray}
\label{E-5-1}
B_0 = 0
\ .
\end{eqnarray}
Then with \fmref{E-2-9}, \fmref{E-4-3} and \fmref{E-4-4}
\begin{eqnarray}
\label{E-5-2}
B(T)
&=&
\frac{\SmallMean{F}}{\kappa T}
+ \SmallMean{G}
\end{eqnarray}
holds and suggests a comparison with the van der
Waals version of the second virial coefficient
\begin{eqnarray}
\label{E-5-4}
B_{vdW}(T)
&=&
- \frac{a}{\kappa T}
+ b
\ ,
\end{eqnarray}
which is readily obtained from the model equation of state
\begin{eqnarray}
\label{E-5-5}
\left(p + n^2 a \right)
\left(1 - n b \right)
=
n \kappa T
\end{eqnarray}
and the corresponding density expansion
\begin{eqnarray}
\label{E-5-6}
p
&=&
n \kappa T \left(1 + n \left[ b - \frac{a}{\kappa T}\right]
\right) + o(n^3)
\ .
\end{eqnarray}
The van der Waals limit therefore obviously means
\begin{eqnarray}
\label{E-5-7}
\SmallMean{F}
&=&
\mbox{const}
=
- a
\qquad\mbox{and}
\qquad
\SmallMean{G}
=
\mbox{const}
=
b
\ .
\end{eqnarray}
This model assumption is actually quite reasonable as we are
going to demonstrate for \element{4}{He} atoms interacting via a
Lennard-Jones potential lacking bound states \cite{The90}:
\begin{eqnarray}
\label{E-5-9}
V(r)
&=&
4 V_0
\left[
\left(
\frac{\sigma}{r}
\right)^{12}
-
\left(
\frac{\sigma}{r}
\right)^{6}
\right]
\ ;\quad
V_0 = 10.22\,\mbox{K} \; \kappa
\quad , \quad
\sigma = 2.56\,\mbox{\AA}
\ .
\end{eqnarray}
Because the interaction is radially symmetric the following
relations between the (anti-) symmetrized ${\mathcal T}$-matrix, the scattering
amplitude $f_{\pm}(p,\theta)$ and phase shifts $\delta_l(p)$
\begin{eqnarray}
\label{E-5-11}
f_{\pm}(p,\theta)
&=&
- \pi^2 m \hbar
\bra{\vek{p}}{\mathcal T}_{\pm}(E_{{p}}+i\epsilon)\ket{\vek{q}}
\ , \
|\vek{p}|=|\vek{q}|\ ,\ \vek{p}\cdot\vek{q}=|\vek{p}|
|\vek{q}| \cos(\theta)
\\
f_{\pm}(p,\theta)
&=&
\frac{\hbar}{p}
\sum_l{}^\prime\; (2l+1)\; \mbox{e}^{i\, \delta_l(p)}\; \sin(\delta_l(p))\; P_l(\cos(\theta))
\end{eqnarray}
may be used (see e.g. \cite{Bau67}), where the summation runs
over even $l$ for bosons and odd $l$ for fermions. $F(p)$ and
$G(p)$ (compare to eqs. \fmref{E-2-8-1} and \fmref{E-2-8-2}) can
then be expressed in terms of phase shifts which is compatible
with the Beth Uhlenbeck result for $B(T)$ \cite{BeU36}
\begin{eqnarray}
\label{E-5-10}
F(p)
&=&
-\frac{4 \pi \hbar^2}{m} f_{\vek{p}}(0)
=
-\frac{4 \pi \hbar^2}{m}
\frac{\hbar}{2 p}
\sum_l{}^\prime\; (2l + 1) \sin\left[ 2 \delta_l(p) \right]
\\
G(p)
&=&
-\hbar \int\mbox{d}\Omega\; \mbox{Im}
\left[ (f_{\vek{p}}(\theta))^* \pp{}{p} f_{\vek{p}}(\theta)
\right]
=
-{4 \pi \hbar}
\frac{\hbar^2}{p^2}
\sum_l{}^\prime\; (2l + 1) \sin^2\left[ \delta_l(p) \right]
\pp{\delta_l}{p}
\ .
\nonumber
\end{eqnarray}
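Once the phase shifts are available (from a numerical solution of the
radial Schr\"odinger equation, which we do not reproduce here), the
evaluation of eqs. \fmref{E-5-10} and of the thermal average
\fmref{E-2-9-1} takes only a few lines of Python. The sketch below
uses a toy s-wave hard-sphere phase shift, $\delta_0 = -pa/\hbar$,
purely as a stand-in for the solver output:
\begin{verbatim}
import numpy as np

hbar, m, a = 1.0, 1.0, 1.0      # units with hbar = m = 1, toy radius a

def delta_l(l, p):              # stand-in; replace with solver output
    return -p * a / hbar if l == 0 else 0.0

def F(p, lmax=20):              # first line of (E-5-10), even l (bosons)
    s = sum((2*l + 1) * np.sin(2.0 * delta_l(l, p))
            for l in range(0, lmax, 2))
    return -(4.0 * np.pi * hbar**2 / m) * (hbar / (2.0 * p)) * s

def G(p, lmax=20, dp=1.0e-5):   # second line, numerical d(delta)/dp
    s = sum((2*l + 1) * np.sin(delta_l(l, p))**2
            * (delta_l(l, p + dp) - delta_l(l, p - dp)) / (2.0 * dp)
            for l in range(0, lmax, 2))
    return -4.0 * np.pi * hbar * (hbar / p)**2 * s

def thermal_average(fn, T, pmax=10.0, n=2000):   # eq. (E-2-9-1)
    p = np.linspace(1.0e-3, pmax, n)
    w = p**2 * np.exp(-p**2 / (m * T))           # E_p = p^2/m, kappa = 1
    return (w * np.array([fn(q) for q in p])).sum() / w.sum()

print(thermal_average(F, 1.0), thermal_average(G, 1.0))
\end{verbatim}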
\begin{figure}[ht!]
\unitlength1mm
\begin{center}
\begin{picture}(150,55)
\put(-5,0){\epsfig{file=F-6-1.eps,width=70mm}}
\put(75,0){\epsfig{file=F-6-2.eps,width=75mm}}
\end{picture}
\end{center}
\mycaption{$F$ and $G$ as functions of $k\sigma$ (thick solid
lines) and thermal weight functions
$w\propto p^2 \exp\left\{ -E_p/(\kappa T)\right\}$ for two
temperatures.}{F-6-1}
\end{figure}
Figure \xref{F-6-1} nicely shows that over a large range of
temperatures $\SmallMean{F}$ and $\SmallMean{G}$ may be
regarded as nearly constant, i.e.\ independent of temperature. In
this case the single particle energy, the elementary cell
volume and the mean energy are easily determined as
\begin{eqnarray}
\label{E-5-12}
\varepsilon(\vek{p}_1) =
\frac{\vek{p}_1^2}{2 m}
- a n
\ , \quad
v_{el}(\vek{p}_1) =
(2\pi\hbar)^3
\left[1 + b n \right]
\ , \quad
u = n \kappa T
\left\{
\frac{3}{2} - \frac{n a}{\kappa T}
\right\}
\ .
\end{eqnarray}
In recent years, Quantum Information Theory has played the role of a melting pot for various branches of physics. In the context of high energy theory, a motivation to understand the application of complexity to quantum field theory arises from attempts to apply the AdS/CFT duality in certain black hole settings. In particular, it is notoriously difficult to probe physics behind the horizon of a black hole. It has been observed that although the entanglement entropy of an eternal AdS black hole saturates as it thermalizes \cite{Hartman}, the size of the Einstein-Rosen bridge continues to increase with time.
Motivated by this observation,
Susskind et~al.~\cite{Susskind,Susskind1,Susskind2,Susskind3,Susskind4,Susskind5,Susskind6} have proposed new probes on the gravity side for the inner region beyond the black hole horizon.
One probe is given by the volume of a maximal codimension-one bulk surface extending to the boundary of AdS spacetime \cite{Susskind,Susskind1,Susskind2,Susskind3,Susskind4}. There is a second proposal, where the probe is the action defined on the Wheeler-DeWitt (WDW) patch \cite{Susskind5,Susskind6}.
Both of these quantities have the potential to probe physics behind the horizon. It is conjectured that these two objects are dual to the so-called ``complexity'' of the dual field theory state. For this reason these proposals are known as the CV (complexity = volume) \cite{Susskind3} and CA (complexity = action) \cite{Susskind5} conjectures, respectively. These conjectures have opened up a completely new line of research that relates high energy theory and condensed matter physics with quantum information theory at the centre, e.g.\ \cite{Couch,Swingle:2017zcd,Bhattacharyya:2018wym,Ham, Bolognesi:2018ion}\footnote{This list is by no means complete. Interested readers are requested to check the citations and references of these papers.}.
The holographic proposals mentioned above connect a probe on the gravity side with a concept in quantum information theory called quantum complexity \cite{watrous}. More specifically we will be focusing on circuit complexity. Circuit complexity is the minimum number of unitary operators (also known as quantum gates) that are required to construct the desired target state from a suitable reference state.
For Gaussian states, this can either be computed by working directly with the wavefunctions in the position basis \cite{MyersCC,MyersCCa,MyersCC1} or using a covariance matrix \cite{MyersCC2,MyersCC3,MyersCC4,MyersCC5,MyersCC6,MyersCC7}.
In both these cases, the quantum complexity is typically computed using a geometric technique pioneered by Nielsen \cite{NL1,NL2,NL3}. Alternatively, it has also been proposed that the quantum complexity might be computed using Fubini-Study distance \cite{MyersCC8}.
It has been shown in \cite{AB} (especially in the context of certain time evolution) that, out of all these methods, the quantum complexity computed directly from the wavefunction might be the most sensitive to the underlying physics.
Over the past few years, circuit complexity has enjoyed a wide range of applications.
For instance, quantum complexity is a possible diagnostic for quantum chaos, and is now considered an integral part of the web of diagnostics for quantum chaos \cite{qchaospre, qchaosprea,qchaos,qchaos1,qchaos2,qchaos3,qchaos4}. It was highlighted in \cite{qchaos1} that circuit complexity can provide essential information (such as the scrambling time, Lyapunov exponent, etc.) about a quantum chaotic system.
In \cite{qchaos1}, an inverted harmonic oscillator model was used to establish the chaotic features of complexity and compared them with the information one can obtain from the out-of-time-order correlators. The time scale when the complexity starts to grow was identified as the scrambling time and the slope of the linear portion behaves as the Lyapunov exponent.
In this paper we use this in the field of cosmology. More explicitly, we apply the notion of circuit complexity to scalar cosmological perturbations on an expanding
Friedmann-Lema\^{\i}tre-Robertson-Walker
background. Scalar perturbations on an expanding background can naturally be described with the formalism of squeezed quantum states: when a mode exits the horizon it becomes highly squeezed, while a mode inside the horizon has its squeezing ``frozen in''\cite{Grishchuk,Albrecht}.
We will choose the ground state while the mode is inside the horizon as our reference state, and study complexity for a target state consisting of the time-evolved cosmological perturbation on the expanding background. For simplicity we consider a simple model
consisting of a period of de Sitter (dS) expansion followed by radiation-dominated expansion, as a proxy for inflation followed by reheating.
This approach gives us interesting behaviors for the complexity of cosmological perturbations at different epochs. We find that during dS expansion, the complexity is proportional to the number of e-folds for a super-horizon mode. This exponential growth, as in \cite{qchaos1}, suggests that during the de Sitter regime the complexity grows as in an unstable (chaotic) system. Moreover, one can also identify the scrambling time scale for this chaotic regime and the Lyapunov exponent. During the subsequent radiation phase the Universe de-complexifies, even though the squeezing of the perturbation continues, and eventually the complexity ``freezes in'' once the mode re-enters the horizon.
The organization of the paper is as follows. In Section \ref{sec:Inverted} we will use the inverted harmonic oscillator model to get insights about our approach and to establish our tools and techniques. In Section \ref{sec:SqueezedCosmo} we review the cosmological scalar perturbations and the origin of the squeezed states and the various solutions. In Section \ref{sec:Complexity} we discuss the complexity for this squeezed states and discuss the evolution of complexity and its implications. We conclude with a discussion and future directions.
\section{Inverted Harmonic Oscillator}
\label{sec:Inverted}
To begin, we will introduce the main techniques and concepts used throughout the paper through the example of the inverted harmonic oscillator.
Since a super-horizon scalar cosmological perturbation behaves like an inverted harmonic oscillator at large scales, the intuition we develop here will be useful for our later analysis.
The inverted harmonic oscillator is defined by a Hamiltonian with a ``wrong sign'' of the restoring force (with unit mass) \cite{Barton}:
\begin{eqnarray}
\hat H = \frac{1}{2} \hat p^2 - \frac{1}{2} k^2 \hat x^2.
\label{InvertHpq}
\end{eqnarray}
Using the raising and lowering operators based on the non-inverted harmonic oscillator
\begin{eqnarray}
\hat x = \frac{1}{\sqrt{2k}} \left(\hat a^\dagger + \hat a\right), \hspace{.4in} \hat p = i \sqrt{\frac{k}{2}} \left(\hat a^\dagger - \hat a\right)\, ,
\end{eqnarray}
the inverted Hamiltonian (\ref{InvertHpq}) becomes
\begin{eqnarray}
\hat H = - \frac{k}{2} \left(\hat a^2 + \hat a^\dagger{}^2\right)\, .
\label{InvertH}
\end{eqnarray}
If the system starts in the ``vacuum state'' annihilated by the lowering operator
\begin{eqnarray}
\hat a |0\rangle = 0\, ,
\end{eqnarray}
then it will naturally evolve into a squeezed state at later times.
In particular, the unitary evolution $\hat {\mathcal U}$ of a state can be parameterized as \cite{Grishchuk,Albrecht}
\begin{eqnarray}
\hat {\mathcal U} = \hat {\mathcal S}(r,\phi) \hat {\mathcal R}(\theta)\, .
\end{eqnarray}
where $\hat {\mathcal R}$ is the ``rotation operator,'' defined as
\begin{eqnarray}
\hat {\mathcal R}(\theta) \equiv {\rm exp}\left[-i\theta(t) (\hat a^\dagger \hat a + \hat a \hat a^\dagger)\right]
\end{eqnarray}
in terms of the rotation parameter $\theta(t)$,
and $\hat {\mathcal S}(r,\phi)$ is the ``squeezing operator,'' defined as
\begin{eqnarray}
\hat {\mathcal S}(r,\phi) \equiv {\rm exp} \left[\frac{r(t)}{2}\left(e^{-2i\phi}\hat a^2 - e^{2i\phi} \hat a^\dagger{}^2\right)\right]
\end{eqnarray}
in terms of the squeezing parameter $r(t)$ and squeezing angle $\phi(t)$.
In what follows, the rotation operator and rotation parameter will not play an important role, so we will drop them from our subsequent analysis.
The action of the rotation operator produces an irrelevant phase; however, the action of the squeezing operator results in a single mode squeezed vacuum state \cite{book}:
\begin{eqnarray}
|\Psi(t)\rangle = \hat {\mathcal S}(r,\phi) |0\rangle = \frac{1}{\sqrt{\cosh r}}\sum_{n=0}^\infty (-1)^n\ e^{-2in\phi} \tanh^n r\ \frac{\sqrt{(2n)!}}{2^n n!}\ |2n\rangle\, .
\label{InvertedPsi}
\end{eqnarray}
To understand the importance of the squeezing angle and squeezing parameter, consider the combinations
\begin{eqnarray}
\hat q_+ &\equiv & \hat p \sin \phi + k\ \hat x \cos \phi\, ; \\
\hat q_- &\equiv & \hat p \cos\phi - k\ \hat x \sin \phi\, .
\end{eqnarray}
The uncertainties in these new variables are \cite{Albrecht}
\begin{eqnarray}
\label{InvertedSqueeze1}
\Delta q_+^2 &=& \langle \Psi(t)|\hat q_+^2 |\Psi(t) \rangle = \frac{1}{2} e^{-2r}\, ; \\
\Delta q_-^2 &=& \langle \Psi(t)|\hat q_-^2 |\Psi(t) \rangle = \frac{1}{2} e^{2r}\, .
\label{InvertedSqueeze2}
\end{eqnarray}
This clearly shows the origin of the term ``squeezed states'': the wavefunction $|\Psi(t)\rangle$ is squeezed with a small uncertainty in the $\hat q_+$ direction, with a correspondingly large uncertainty in the $\hat q_-$ direction, so that the uncertainty relation is still saturated $\Delta q_+ \Delta q_- = 1/2$.
The squeezing angle $\phi$ determines the angle in phase space at which
the squeezing occurs.
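These variances are straightforward to confirm numerically in a
truncated Fock basis; a short Python sketch (the truncation $N$ must be
large enough for the chosen squeezing $r$, and we set $k=1$):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, k, r, phi = 60, 1.0, 1.2, 0.7
a = np.diag(np.sqrt(np.arange(1, N)), 1)    # lowering operator
ad = a.conj().T

# squeezing operator acting on the Fock vacuum |0>
S = expm(0.5 * r * (np.exp(-2j*phi) * (a @ a)
                    - np.exp(2j*phi) * (ad @ ad)))
psi = S[:, 0]

x = (ad + a) / np.sqrt(2.0 * k)
p = 1j * np.sqrt(k / 2.0) * (ad - a)
qp = p * np.sin(phi) + k * x * np.cos(phi)
qm = p * np.cos(phi) - k * x * np.sin(phi)
for q, expect in ((qp, 0.5*np.exp(-2*r)), (qm, 0.5*np.exp(2*r))):
    var = np.real(psi.conj() @ (q @ q) @ psi)
    print("%.5f  (expected %.5f)" % (var, expect))
\end{verbatim}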
It is straightforward to insert (\ref{InvertedPsi}) into the Schr\"odinger equation
\begin{eqnarray}
i \frac{d}{dt} |\Psi(t)\rangle = \hat H |\Psi(t) \rangle
\end{eqnarray}
to obtain the squeezing equations of motion
\begin{eqnarray}
\dot r &=& k \sin( 2\phi)\, ; \nonumber \\
\dot \phi &=& k \coth(2r) \cos(2\phi)\, . \label{InvertedEOM}
\end{eqnarray}
It is easy to see that these equations have a solution in which the squeezing grows with time along a constant squeeze angle
\begin{eqnarray}
r(t) = k t, \hspace{.4in} \phi(t) = \pi/4\, .
\end{eqnarray}
Thus, as expected, the vacuum $|0\rangle$ evolves into a highly squeezed state along a direction that is an equal mixture of the $\hat x$ and $\hat p$ directions.
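This attractor behavior is simple to verify by integrating
(\ref{InvertedEOM}) numerically (a Python sketch; note that $\coth(2r)$
is singular at $r=0$, so we start from a small nonzero squeezing):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0

def rhs(t, y):
    r, phi = y
    return [k * np.sin(2*phi), k / np.tanh(2*r) * np.cos(2*phi)]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0e-3, 0.1],
                rtol=1.0e-8, dense_output=True)
t = np.linspace(2.0, 10.0, 5)
r, phi = sol.sol(t)
print("phi     :", phi)        # -> pi/4 ~ 0.7854
print("r - k t :", r - k*t)    # -> a small constant offset
\end{verbatim}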
An interesting concept in quantum mechanics that has enjoyed a fair amount of recent interest is the {\it circuit complexity} of a pair of states.
Defined in an analogous way to classical complexity, the circuit complexity is roughly the minimum number of fundamental quantum gates required to transform a reference state to some target state.
As discussed in the introduction, there are several different methods of computing the circuit complexity between a reference and a target state, including Nielsen's geometric approach \cite{NL1,NL2,NL3}. Moreover, depending on the choice of cost functional, each of these approaches yields different measures \cite{MyersCC,MyersCC1,NL1}. In the main part of the paper we will focus on the circuit complexity computed directly from the wavefunction \cite{MyersCC}.
To begin our calculation of complexity of the inverted harmonic oscillator we first need to obtain the position-space wavefunction for the squeezed state $|\Psi(t)\rangle$
\begin{eqnarray}
\langle x|\Psi(t)\rangle = {\mathcal N} e^{-\frac{1}{2} \Omega(t) x^2}\, ,
\label{invertGaussian}
\end{eqnarray}
where ${\mathcal N}$ is a normalization factor and $\Omega(t)$ is the complex frequency
\begin{eqnarray}
\Omega(t) = \frac{k}{e^{2r} \sin^2 \phi + e^{-2r} \cos^2 \phi}\left(1 -i\sin(2\phi) \sinh(2r)\right)\ .
\label{invertGaussianFreq}
\end{eqnarray}
In the unsqueezed limit $r \rightarrow 0$ we obtain the unsqueezed ground state wavefunction with $\Omega(t) \approx k$.
In the highly squeezed limit, however, where $\phi \approx \pi/4$ and $r \gg 1$, we obtain a purely imaginary frequency $\Omega(t) \approx -ik$.
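Explicitly, setting $\phi = \pi/4$ in (\ref{invertGaussianFreq}) gives
\begin{eqnarray}
\Omega(t) = k\, \frac{1 - i \sinh (2r)}{\cosh (2r)} = k\, {\rm sech}(2r) - ik \tanh(2r) \rightarrow -ik \quad \mbox{as } r \rightarrow \infty\, .
\end{eqnarray}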
Taking the unsqueezed vacuum $\langle x |0\rangle$ as our reference state and the squeezed state $\langle x |\Psi(t)\rangle$ (\ref{invertGaussian}) as our target state,
the geometric circuit complexity evaluates to be \cite{AB}:
\begin{eqnarray}
\label{InvertedComplexity1}
{\mathcal C}_1 &=& \frac{1}{2} \left[ \ln\left|\frac{\Omega(t)}{k}\right|+\tan^{-1} \left(\frac{{\rm Im}\ \Omega(t)}{{\rm Re}\ \Omega(t)}\right)\right]\, ; \\
{\mathcal C}_2 &=& \frac{1}{2} \sqrt{\left(\ln\left|\frac{\Omega(t)}{k}\right|\right)^2 + \left(\tan^{-1} \left(\frac{{\rm Im}\ \Omega(t)}{{\rm Re}\ \Omega(t)}\right)\right)^2}\, ,
\label{InvertedComplexity2}
\end{eqnarray}
where ${\mathcal C}_1,{\mathcal C}_2$ refer to the complexity calculated with different cost functionals, as we explain in more detail in Section \ref{sec:Complexity}.
For small amounts of squeezing $r \ll 1$ we have
\begin{eqnarray}
{\mathcal C}_1 \sim {\mathcal C}_2 \approx 0\, ,
\end{eqnarray}
as expected, since then the reference and target states are approximately the same.
For large amounts of squeezing $r \gg 1, \phi \approx \pi/4$ (corresponding to late times for the inverted harmonic oscillator), these expressions for the complexity (\ref{InvertedComplexity1},\ref{InvertedComplexity2}) become
\begin{eqnarray}
{\mathcal C}_1\sim {\mathcal C}_2 \approx \frac{1}{2} \sqrt{\left(\tan^{-1} e^{2r}\right)^2} \approx \frac{\pi}{4}\, ,
\label{InvertedComplexityResult}
\end{eqnarray}
so that the complexity of a single mode vacuum squeezed state {\it saturates} at late times.
This is consistent with the expectation that the complexity for a quantum chaotic system saturates at some maximum complexity.
More generally, squeezed vacuum states are frequently used in quantum optics applications outside of the context of the inverted harmonic oscillator, so the results found here are of more general interest and applicability.
In this sense, we can take the general squeezed state (\ref{InvertedPsi}) -- and its gaussian form (\ref{invertGaussian}),(\ref{invertGaussianFreq}) -- as representing a generic squeezed vacuum state.
We can then easily determine the complexity of such a squeezed vacuum state from the expressions (\ref{InvertedComplexity1}),(\ref{InvertedComplexity2}).
In particular, note that if the squeezing angle is fixed to be $\phi \rightarrow n \frac{\pi}{2}$ for some integer $n$
then the complexity of the squeezed state (\ref{InvertedPsi}) (equivalently (\ref{invertGaussian}))
does not saturate, but instead scales with the squeezing ${\mathcal C}_1 \sim r$ for large squeezing $r \gg 1$.
\section{Squeezed Cosmological Perturbations}
\label{sec:SqueezedCosmo}
Having explored the concepts of squeezing and complexity in a simple model of an inverted harmonic oscillator, we are now ready to apply these concepts to that of scalar cosmological perturbations.
We will consider a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric
\begin{eqnarray}
ds^2 = -dt^2 + a(t)^2 d\vec{x}^2 = a(\eta)^2 \left(-d\eta^2+d\vec{x}^2\right)\, .
\end{eqnarray}
On this background we will consider fluctuations of a scalar field $\varphi(x) = \varphi_0(t) + \delta\varphi(x)$ and the metric
\begin{eqnarray}
ds^2 = a(\eta)^2 \left(-(1+2\psi(x,\eta))d\eta^2 + (1-2\psi(x,\eta)) d\vec{x}^2\right)\, .
\end{eqnarray}
The perturbed action can be written in terms of the curvature perturbation ${\mathcal R} = \psi+\frac{H}{\dot \varphi_0} \delta \varphi$, where a dot denotes a derivative with respect to cosmic time $t$, and $H = \dot a/a$.
The action then takes the simple form \cite{Mukhanov}
\begin{eqnarray}
S = \frac{1}{2}\int dt\, d^3x\, a^3 \frac{\dot \varphi_0^2}{H^2} \left[\dot {\mathcal R}^2 - \frac{1}{a^2} \left(\partial_i {\mathcal R}\right)^2\right]\, .
\end{eqnarray}
The action can be transformed into a form of that for a canonically normalized scalar field by use of the Mukhanov variable
$v \equiv z {\mathcal R}$
where $z \equiv a\, \sqrt{2\epsilon}$, with $\epsilon = -\dot H/H^2 = 1-{\mathcal H}'/{\mathcal H}^2$,
\begin{eqnarray}
S = \frac{1}{2} \int d\eta\, d^3x \left[v'^2 - (\partial_i v)^2 + \left(\frac{z'}{z}\right)^2 v^2 - 2 \frac{z'}{z} v' v\right]\, .
\label{CosmoAction}
\end{eqnarray}
Here a prime denotes a derivative with respect to conformal time and ${\mathcal H} = a'/a$.
This action represents perturbations of a free scalar field coupled to an external time-varying source.
A virtually identical-looking expression can also be derived for tensor perturbations with the replacement $z'/z \rightarrow a'/a$, and our results will hold for these types of perturbations as well.
Usually the last term in (\ref{CosmoAction}) is removed by integration by parts\footnote{The last term in (\ref{CosmoAction}) can also be removed by an appropriate canonical transformation as discussed in \cite{Martin1}.}, giving rise to the action
\begin{eqnarray}
S = \frac{1}{2} \int d\eta\, d^3x \left[v'^2 - (\partial_i v)^2 + \frac{z''}{z}v^2\right]\, .
\label{CosmoHarmonicAction}
\end{eqnarray}
In this form, the time-varying source clearly leads to a time-dependent frequency, and this can cause the long-wavelength modes
to appear as an inverted harmonic oscillator.
While we will be working instead with the action (\ref{CosmoAction}), the physics will nonetheless follow this intuition.
Promoting the perturbation to a quantum field and expanding into Fourier modes
\begin{eqnarray}
\hat v(\eta,\vec{x}) = \int \frac{d^3k}{(2\pi)^{3/2}} \hat v_{\vec{k}}(\eta)\, e^{i\vec{k}\cdot \vec{x}}\, ,
\end{eqnarray}
and defining the usual creation and annihilation operators
\begin{eqnarray}
\hat v_{\vec{k}} = \frac{1}{\sqrt{2k}} \left(\hat c_{\vec{k}} + \hat c_{-\vec{k}}^\dagger\right), \hspace{.2in}
\hat v_{\vec{k}}' = -i\sqrt{\frac{k}{2}} \left(\hat c_{\vec{k}} - \hat c_{-\vec{k}}^\dagger\right)\, ,
\label{CosmoCreationOperators}
\end{eqnarray}
the Hamiltonian can be written as
\begin{eqnarray}
\hat H = \int d^3k\, \hat {\mathcal H}_{\vec{k}} = \int d^3k \left[k\left(\hat c_{\vec{k}} \hat c_{\vec{k}}^\dagger + \hat c_{-\vec{k}}^\dagger \hat c_{-\vec{k}}\right)
- i \frac{z'}{z} \left(\hat c_{\vec{k}} \hat c_{-\vec{k}} - \hat c_{\vec{k}}^\dagger \hat c_{-\vec{k}}^\dagger\right)\right]\, .
\label{CosmoH}
\end{eqnarray}
The first term in (\ref{CosmoH}) represents the usual free-particle Hamiltonian, while the second term describes the interaction between the quantum perturbation and the expanding background.
Notice that this last term is similar in form to the Hamiltonian (\ref{InvertH}) for the inverted harmonic oscillator from the last section, and indeed we will see that when the last term in the Hamiltonian dominates $z'/z \gg k$ the squeezing for the curvature perturbation will also grow.
The momentum structure of the Hamiltonian indicates that the interaction with the background leads to particle creation in pairs with opposite momenta. Because of this, we are naturally led to consider our states as appearing in two-mode pairs $(\vec{k},-\vec{k})$.
As with the inverted harmonic oscillator, the unitary evolution ${\mathcal U_{\vec{k}}}$ of a state can be factorized into a parameterization of the form
\cite{Grishchuk,Albrecht}
\begin{eqnarray}
\hat {\mathcal U}_{\vec{k}} = \hat{\mathcal S}_{\vec{k}}(r_k,\phi_k) \hat{\mathcal R}_{\vec{k}}(\theta_k)\, ,
\end{eqnarray}
where $\hat{\mathcal R}_{\vec{k}}$ is the two-mode rotation operator
\begin{eqnarray}
\hat{\mathcal R}_{\vec{k}}(\theta_k) \equiv {\rm exp}\ \left[-i\theta_k(\eta) (\hat c_{\vec{k}} \hat c_{\vec{k}}^\dagger + \hat c_{-\vec{k}}^\dagger \hat c_{-\vec{k}})\right]
\end{eqnarray}
written in terms of the rotation angle parameter $\theta_k(\eta)$ and $\hat{\mathcal S}_{\vec{k}}$ is the two-mode squeeze operator
\begin{eqnarray}
\hat {\mathcal S}_{\vec{k}}(r_k,\phi_k) \equiv {\rm exp}\ \left[\frac{r_k(\eta)}{2} \left(e^{-2i\phi_k(\eta)} \hat c_{\vec{k}} \hat c_{-\vec{k}} - e^{2i\phi_k(\eta)} \hat c_{-\vec{k}}^\dagger \hat c_{\vec{k}}^\dagger\right)\right]
\end{eqnarray}
written in terms of the squeezing parameter $r_k(\eta)$ and squeezing angle $\phi_k(\eta)$.
As with the inverted harmonic oscillator, the rotation operator and rotation angle $\theta_k$ will not be important, so we will not include them in our subsequent analysis.
Also, since the squeezing equations of motion will only depend on the magnitude $k$ of the wavenumber $\vec{k}$, we have suppressed the vector notation on the subscripts of these parameters.
By recognizing that the interaction of the cosmological perturbation with the time-dependent scale factor leads to a time-dependent frequency for the canonically normalized harmonic oscillator (\ref{CosmoHarmonicAction}), the appearance of a squeezed state for cosmological perturbations is quite natural in the context of the previous section on the inverted harmonic oscillator.
The quantization of this parametric oscillator is then naturally described in the language of two-mode squeezed states \cite{Grishchuk,Albrecht,Martin1,Martin2}.
We will assume that at the initial time all of the modes of interest are well inside the horizon $k|\eta| \gg 1$ so that the system can be described by the free part of the Hamiltonian (\ref{CosmoH}). We then define the initial state, the two-mode vacuum, with respect to the annihilation operator
\begin{eqnarray}
\hat c_{\vec{k}} |0\rangle_{\vec{k},-\vec{k}} = 0, \hspace{.2in} \forall\ \vec{k}\, .
\end{eqnarray}
The two-mode squeeze operator results in a two-mode squeezed vacuum state
\begin{equation}
|\Psi_{sq}\rangle_{\vec{k},-\vec{k}} = \hat {\mathcal S}_{\vec{k}}(r_k,\phi_k) |0\rangle_{\vec{k},-\vec{k}} = \frac{1}{\cosh r_k} \sum_{n=0}^{\infty} (-1)^n e^{-2 i n \phi_k} \tanh^n r_k\, |n_{\vec{k}}; n_{-\vec{k}}\rangle\, ,
\label{psi1}
\end{equation}
where the two-mode excited state is
\begin{eqnarray}
|n_{\vec{k}}; n_{-\vec{k}}\rangle = \frac{1}{n!} \left(\hat c_{\vec{k}}^\dagger\right)^n \left(\hat c_{-\vec{k}}^\dagger\right)^n\, |0\rangle_{\vec{k},-\vec{k}}\, .
\end{eqnarray}
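As a consistency check, the state (\ref{psi1}) is properly normalized: since the two-mode number states are orthonormal,
\begin{eqnarray}
\langle \Psi_{sq}|\Psi_{sq}\rangle = \frac{1}{\cosh^2 r_k} \sum_{n=0}^\infty \tanh^{2n} r_k = \frac{1}{\cosh^2 r_k}\, \frac{1}{1-\tanh^2 r_k} = 1\, .
\end{eqnarray}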
The full wavefunction then consists of the product of the wavefunctions for each $\vec{k}$
\begin{eqnarray}
|\Psi\rangle = \otimes_{\vec{k}} |\Psi\rangle_{\vec{k},-\vec{k}}\, ,
\end{eqnarray}
though we will mostly just work with $|\Psi\rangle_{\vec{k},-\vec{k}}$.
The time evolution of the squeezing parameters $r_k(\eta),\phi_k(\eta)$ is determined by the Schr\"odinger equation
\begin{eqnarray}
i \frac{d}{d\eta} |\Psi_{sq}\rangle_{\vec{k},-\vec{k}} = \hat {\mathcal H}_{\vec{k},-\vec{k}} |\Psi_{sq}\rangle_{\vec{k},-\vec{k}}\, ,
\end{eqnarray}
and leads to the differential equations
\begin{eqnarray}
\frac{dr_k}{d\eta} &=&
-\frac{z'}{z} \cos (2\phi_k)\, ;~ \nonumber \\
\frac{d\phi_k}{d\eta}&=& k +
\frac{z'}{z} \coth(2r_k) \sin (2\phi_k)\, .
\label{martin22}
\end{eqnarray}
Note that for a stationary background spacetime $z$ is constant, so there is no squeezing $r = 0$.
\subsection{Squeezing Solutions}
For a given background expansion $a(\eta)$, the squeezing equations (\ref{martin22}) can be solved for the squeezing parameters $r_k(\eta),\phi_k(\eta)$ (recall $z \equiv a\sqrt{2\epsilon}$).
Before we proceed to compute the circuit complexity for the states (\ref{psi1}), let us explore the behavior of squeezing solutions for cosmological backgrounds. This will give us some insight into the behavior of squeezing due to the expansion of the Universe. The squeezing of cosmological perturbations has been studied previously \cite{Grishchuk,Albrecht}.
In general the equations (\ref{martin22}) must be solved numerically for a given cosmological background.
However, we can make progress with a qualitative understanding of the solutions by noting that in general the scale factor depends on some power of the conformal time
\begin{eqnarray}
a(\eta) \sim \left(\frac{\eta}{\eta_0}\right)^\beta
= \begin{cases}
-\frac{1}{H\eta} & \beta = -1,\ \mbox{de Sitter} \\
\frac{\eta}{\eta_0} & \beta = 1,\ \mbox{Radiation} \\
\left(\frac{\eta}{\eta_0}\right)^2 & \beta = 2,\ \mbox{Matter}
\end{cases}\, ,
\label{scalefactor}
\end{eqnarray}
where $\beta = 2/(1+3w)$ in terms of the equation of state $p/\rho = w$ of the cosmological fluid of the background expansion.
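As a check of (\ref{scalefactor}), radiation ($w = 1/3$) gives $\beta = 1$, matter ($w = 0$) gives $\beta = 2$, and the de Sitter limit ($w \rightarrow -1$) gives $\beta = -1$, as listed above.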
These different equations of state can arise, for example, as the behavior of the scalar field $\varphi$
on different potentials $V(\varphi) = V_0 \varphi^\gamma$.
Accordingly, the term $z'/z$ appearing in the squeezing equations of motion scales inversely with the conformal time, $z'/z = \beta/\eta$. The equations of motion (\ref{martin22}) then become
\begin{eqnarray}
\frac{dr_k}{d\eta} &=&
-\frac{\beta}{\eta} \cos (2\phi_k)\, ;~ \nonumber \\
\frac{d\phi_k}{d\eta}&=& k +
\frac{\beta}{\eta} \coth(2r_k) \sin (2\phi_k)\, .
\label{squeezeQual}
\end{eqnarray}
Solutions to (\ref{squeezeQual}) depend on whether the mode is super-horizon $k|\eta| \ll 1$ or sub-horizon $k|\eta| \gg 1$, and on whether the squeezing is small $r_k \ll 1$ or large $r_k \gg 1$.
Let's begin by considering the small squeezing, sub-horizon limit. The equations of motion in this limit take the form
\begin{eqnarray}
\frac{dr_k}{d\eta} &=&
-\frac{\beta}{\eta} \cos (2\phi_k)\, ;~ \nonumber \\
\frac{d\phi_k}{d\eta}&=& k +
\frac{\beta}{\eta} \frac{1}{2r_k} \sin (2\phi_k)\, ;
\end{eqnarray}
where we took the small $r_k$ limit of $\coth(2r_k)$.
These equations of motion have the solution
$r_k \sim \beta/(2k\eta) \ll 1$ and $\phi_k \sim -\pi/4$, indicating that in the small squeezing, sub-horizon limit the squeezing stays small with fixed squeezing angle.
A similar analysis of the small squeezing, super-horizon limit has an approximate solution $r_k \sim |\beta \ln(k\eta)|$, $\phi_k \sim -\pi/2$. However since $k |\eta| \ll 1$ for super-horizon modes, this indicates there is tension
with having a super-horizon mode with small squeezing, so we should instead consider super-horizon modes with large squeezing, for which the squeezing equations of motion take the form
\begin{eqnarray}
\frac{dr_k}{d\eta} &=&
-\frac{\beta}{\eta} \cos (2\phi_k)\, ;~ \nonumber \\
\frac{d\phi_k}{d\eta}&\approx&
\frac{\beta}{\eta} \sin (2\phi_k)\, .
\label{LargeSqueezeEOMQualitative}
\end{eqnarray}
Here, we indeed see that solutions self-consistently take the form
$r_k \sim |\beta \ln(k\eta)|$, $\phi_k \sim -\pi/2$ for $k |\eta| \ll 1$.
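Explicitly, with $\phi_k \approx -\pi/2$ we have $\cos(2\phi_k) \approx -1$, so the first of equations (\ref{LargeSqueezeEOMQualitative}) integrates to $r_k \approx \beta \ln|\eta| + {\rm const}$; fixing the constant so that the squeezing is small at horizon exit $|\eta| \sim 1/k$ reproduces $r_k \sim |\beta \ln(k\eta)|$.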
Thus, we have learned that an initially small squeezing inside the horizon remains small until the mode exits the horizon, after which it begins to grow and becomes much larger than one. Note that since the squeezing scales as the log of the conformal time on super-horizon scales, it is also proportional to the number of e-folds for the mode $k$ since horizon exit, $r_k \sim \ln\left[a(\eta)/a_{exit}\right] \equiv N_e^{(k)}$.
Finally, we consider a mode which is highly squeezed but re-enters the horizon at some later time. This is what would happen, for example, for modes that exit the horizon during inflation, becoming highly squeezed then re-enter the horizon after the end of inflation during a radiation- or matter-dominated stage of expansion.
In this case the squeeze equation of motion for $\phi_k$ becomes
\begin{eqnarray}
\frac{d\phi_k}{d\eta} \approx k\, ,
\end{eqnarray}
so that the squeezing angle is no longer fixed but is instead running $\phi_k \sim \phi_k^{(0)} + k \eta$. Examining the corresponding equation for the squeezing parameter
\begin{eqnarray}
\frac{dr_k}{d\eta} \approx \frac{\beta}{\eta} \cos\left(2\phi_k^{(0)} + 2k \eta\right)\, ,
\label{largeSqueezeSubH}
\end{eqnarray}
we see that the running $\phi_k$ will cause $\cos(2\phi_k)$ to oscillate between positive and negative values, shutting off growth of $r_k$.
Indeed, an approximate solution to (\ref{largeSqueezeSubH}) is a damped oscillation
\begin{eqnarray}
r_k \sim r_k^{(0)} + \frac{\beta}{2k\eta} \sin\left(2\phi_k^{(0)} + 2k \eta\right)\, .
\end{eqnarray}
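(Differentiating this expression, the term in which the derivative acts on the sine reproduces the right-hand side of (\ref{largeSqueezeSubH}), while the term coming from the $1/\eta$ prefactor is suppressed by an additional factor of $1/(k\eta) \ll 1$.)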
Thus, when a highly squeezed mode re-enters the horizon it ``freezes in'' at the value of the squeezing at horizon-crossing, with a decaying oscillation about that value.
A plot illustrating these qualitative features -- no squeezing growth on sub-horizon scales, squeezing growth on super-horizon scales, and freeze-out of squeezing upon horizon re-entry -- is shown in Figure \ref{fig:QualSqueezing}. Below we will consider some exact and numerical solutions to the full squeezing equations of motion (\ref{martin22}), and we will see precisely these features.
\begin{figure}[t]
\centering \includegraphics[width=.65\textwidth]{QualitativeSqueezePlot.png}
\caption{In this qualitative plot, we follow the growth of the squeezing parameter $r_k$ as a function of the scale factor $a$ for fixed $k$ as it starts small inside the horizon, then grows larger than one after horizon exit, then ``freezes out'' upon horizon re-entry with a decaying oscillation, as described in the text. Notice that while outside of the horizon the squeezing parameter grows as the number of e-folds spent super-horizon $r_k \sim \log a \sim N_e^{(k)}$.}
\label{fig:QualSqueezing}
\end{figure}
With a general qualitative understanding of the behavior of squeezing solutions in hand, now let's explore some exact and numerical solutions to (\ref{martin22}) for some specific cosmological backgrounds.
The simplest solution is that of an exponentially expanding de Sitter background, for which $a(\eta) = -1/(H\eta)$ for $-\infty < \eta < 0$, so that $z'/z = -1/\eta$.
An exact solution for a de Sitter background is known\footnote{Note that there is a typo involving a factor of $1/2$ in the solution for $\phi_k$ in the solution of \cite{Albrecht}.} \cite{Albrecht}
\begin{eqnarray}
r_k &=& -\sinh^{-1} \left(\frac{1}{2k\eta}\right)\, ; \nonumber \\
\phi_k &=& -\frac{\pi}{4} + \frac{1}{2} \tanh^{-1} \left(\frac{1}{2k\eta}\right)\, .
\label{exactdSSqueezing}
\end{eqnarray}
At early times $k |\eta| \gg 1,$ the modes are inside the horizon, and we have vanishing squeezing $r_k \approx -\frac{1}{2k\eta} \ll 1,$ and an approximately constant squeezing angle $\phi_k \approx -\pi/4$, as already discussed in our qualitative analysis. At late times $k |\eta| \ll 1$ the modes are outside the horizon; from the action (\ref{CosmoHarmonicAction}) in which the modes appear as a harmonic oscillator with a time-dependent frequency, the external frequency due to the expansion of the Universe dominates and the action takes the form of an inverted harmonic oscillator.
Thus, we expect in this regime that the squeezing will grow with time, as did the inverted harmonic oscillator from the previous section.
Indeed, in this limit the solution (\ref{exactdSSqueezing}) gives a growing squeezing parameter $r_k \approx |\ln(-k\eta)| \sim \ln(a) \gg 1$ as $k |\eta| \ll 1$ and constant squeezing angle $\phi_k \approx -\pi/2$, again in excellent agreement with our qualitative analysis. Since the squeezing parameter grows with the log of the scale factor, it is proportional to the number of e-folds of de Sitter expansion since horizon exit $r_k \sim N_e^{(k)}$, a feature we saw was true more generally for other expanding backgrounds.
Based on this analysis, we see that the vacuum state will remain un-squeezed while modes are inside the horizon, while squeezing will begin to grow appreciably once modes exit the horizon. Since in dS space modes that begin inside the horizon eventually exit the horizon due to the expansion of the Universe, we expect that an initially un-squeezed vacuum state for a mode $\vec{k}$ will become increasingly squeezed as time evolves in a de Sitter Universe.
Indeed, we see precisely this behavior in the analytic solution (\ref{exactdSSqueezing}) as well as numerical solutions to the squeezing equations (\ref{martin22}), as shown in Figures \ref{fig:dSSqueezing} and \ref{fig:SqueezingAngle}.
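For readers who wish to reproduce such numerical solutions, the integration is straightforward; a minimal sketch in Python (using NumPy and SciPy; the variable names, initial data, and tolerances are our own illustrative choices, not taken from any released code) that integrates (\ref{martin22}) for the de Sitter source $z'/z = -1/\eta$ is:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k = 0.001                       # comoving wavenumber, in units of 1/eta_0

def rhs(eta, y):
    # squeezing equations of motion with the de Sitter source z'/z = -1/eta
    r, phi = y
    s = -1.0 / eta
    dr_deta = -s * np.cos(2.0 * phi)
    dphi_deta = k + s * np.sin(2.0 * phi) / np.tanh(2.0 * r)   # coth(2r)
    return [dr_deta, dphi_deta]

# start deep inside the horizon, seeded with the early-time exact solution:
# r ~ 1/(2 k |eta|), phi ~ -pi/4
eta_start, eta_end = -1.0e4, -1.0e-2
y0 = [1.0 / (2.0 * k * abs(eta_start)), -np.pi / 4.0]

sol = solve_ivp(rhs, (eta_start, eta_end), y0, rtol=1e-9, atol=1e-12)
r_k, phi_k = sol.y              # r_k grows like |ln(-k eta)| after horizon exit
\end{verbatim}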
\begin{figure}[t]
\centering\includegraphics[width=.48\textwidth]{dSSpaceSqueezing.png} \hspace{.2in} \includegraphics[width=.48\textwidth]{dSSpaceSqueezing_LinearPlot.png}
\caption{(Left) The squeezing parameter $r_k$ as a function of the scale factor $a$ for de Sitter space for the exact solution (\ref{exactdSSqueezing}) and numerical solutions to the squeezing equations (\ref{martin22}) for $k = 0.001$ in units of $\eta_0$, defined by $a(\eta_0) = 1$ (color online).
The squeezing parameter grows appreciably -- and logarithmically -- only on super-horizon scales $k < 1/|\eta|$. (Right) The same graph shown with a linear scale for $r_k$ demonstrates that the growth on super-horizon scales is proportional to the number of e-folds of expansion since mode $k$ exited the horizon $r_k \sim N_e^{(k)}$.}
\label{fig:dSSqueezing}
\end{figure}
\begin{figure}[t]
\centering\includegraphics[width=.47\textwidth]{dSSpaceSqueezingAngle.png} \hspace{.1in} \includegraphics[width=.47\textwidth]{RadiationSqueezingAngle.png}
\caption{(Left) The squeezing angle $\phi_k$ for a dS background (for the same $k$ as Figure \ref{fig:dSSqueezing}) oscillates around $\phi_k = -\pi/4$ when the mode is inside the horizon, and then transitions to $\phi_k = -\pi/2$ after the mode exits the horizon, in accordance with our qualitative results from the text and the exact solution (\ref{exactdSSqueezing}).
(Right) The squeezing angle for a radiation-dominated background with $k = 0.1$ (again in units of $\eta_0$), plotted as $\cos(2\phi_k)$. Notice that at early times while the mode is super-horizon we have $\phi_k \approx -\pi/2$, while after the mode re-enters the horizon we have $\phi_k \sim k \eta$ increasing with time leading to oscillations in $\cos(2\phi_k)$ which cuts off further growth in $r_k$, in agreement with our qualitative analysis in the text.}
\label{fig:SqueezingAngle}
\end{figure}
For a cosmological background dominated by radiation we have $a(\eta) = \eta/\eta_0$, so that $z'/z = 1/\eta$, where now $\eta > 0$.
This background could arise in the presence of a scalar field due to the oscillation of the homogeneous scalar field condensate about a minimum, such as for example during reheating after the end of inflation.
Interestingly, a slight modification to the signs of the exact de Sitter solution (\ref{exactdSSqueezing}) leads to an exact solution for radiation as well:
\begin{eqnarray}
r_k &=& \sinh^{-1} \left(\frac{1}{2k\eta}\right)\, ; \nonumber\\
\phi_k &=& -\frac{\pi}{4} + \frac{1}{2} \tanh^{-1}\left(\frac{1}{2k\eta}\right)\, .
\label{exactRadiationSqueezing}
\end{eqnarray}
Unlike the de Sitter case, however, at sufficiently early times $\eta \rightarrow 0$ a mode will start outside the horizon $k \eta \ll 1$, then re-enter the horizon later. This exact solution (\ref{exactRadiationSqueezing}), then, represents the {\it decaying} solution; we also expect there to be a growing mode solution as well.
Indeed, from the qualitative discussion above, we expect that the squeezing of the mode will continue to grow while outside of the horizon, then ``freeze in'' when the mode re-enters the horizon.
In Figure \ref{fig:RadiationSqueezing} we see precisely this behavior, where the squeezing parameter is plotted for several different magnitudes of the wavenumber $k$.
In Figure \ref{fig:SqueezingAngle} we see that the behavior of the squeezing angle before and after horizon re-entry matches our qualitative analysis from above, where $\phi_k \approx -\pi/2$ outside the horizon, and $\phi_k \sim k\eta$ after horizon re-entry.
\begin{figure}[t!]
\centering\includegraphics[width=.47\textwidth]{RadiationSqueezing.png}\hspace{.1in} \includegraphics[width=.47\textwidth]{RadiationSqueezingHorizonExit.png}
\caption{(Left) The squeezing parameter $r_k$ for a radiation background with $k = 0.1$ in units of $\eta_0$ is plotted against the scale factor $a$. Since modes start outside the horizon in a radiation background, the squeezing is large and growing at early times. Once the mode re-enters the horizon, however, the squeezing ``freezes in'' with a damped oscillation about the value at horizon crossing. (Right) Different wavenumbers (again in units of $\eta_0$) lead to different times of horizon re-entry, and thus different ``freeze in'' values of the squeezing.}
\label{fig:RadiationSqueezing}
\end{figure}
Finally, let's consider a slightly more realistic background expansion that transitions from de Sitter at early times to radiation at late times. This can be viewed as a simple model of early Universe inflation followed by a period of scalar field reheating.
For this expansion history we expect modes starting inside the horizon to eventually exit the horizon, with corresponding growth in squeezing.
At the transition to radiation we don't expect to see any change in the growth of the squeezing parameter $r_k$;
however, at some point following this transition the mode will re-enter the horizon and the squeezing will ``freeze in''.
Figure \ref{fig:UniverseSqueezing} illustrates precisely this behavior.
The squeezing angle also illustrates similar behavior as we saw with the dS and radiation backgrounds separately, seen in Figure \ref{fig:UniverseSqueezedAngle}.
Interestingly, if we zoom in on the transition between dS and radiation, we see that the squeezing angle reaches a minimum some time after the actual transition; this feature will be important for our understanding of complexity for these combined backgrounds.
\begin{figure}[t]
\centering\includegraphics[width=.60\textwidth]{UniverseSqueezing.png}
\caption{The squeezing parameter $r_k$ for a cosmological background consisting of de Sitter followed by radiation shows the features already seen in the de Sitter and radiation plots separately ($k=0.01$ in units of $\eta_0$). The squeezing, initially small, grows upon horizon exit and continues growing through the transition to radiation. Eventually the mode re-enters the horizon during the radiation era and ``freezes out'' at its value at horizon crossing.}
\label{fig:UniverseSqueezing}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[width=.95\textwidth]{UniverseSqueezingAngle.png}
\caption{(Left) The squeezing angle $\cos(2\phi_k)$ for the solution shown in Figure \ref{fig:UniverseSqueezing} shown as a function of the scale factor $a$ for a dS expansion followed by a transition to radiation shows how the squeezing angle freezes out to $\phi_k \approx -\pi/2$ when outside the horizon, and grows when it re-enters the horizon. (Right) The inset shows a zoomed in region of the transition between dS and radiation. Notice that the squeezing angle reaches a minimum some time after the transition, then begins to slowly grow again. This feature will be important in our understanding of the complexity in the next section.}
\label{fig:UniverseSqueezedAngle}
\end{figure}
\section{Complexity for cosmological Squeezed States}
\label{sec:Complexity}
In the previous section, we saw that it is natural to describe the evolution of scalar cosmological perturbations as a two-mode squeezed vacuum state. We developed a qualitative understanding of the behavior of the squeezed solutions both inside and outside of the horizon, finding that in general, the corresponding quantized harmonic oscillator becomes inverted when modes become super-horizon, leading to squeezing in a similar way as we saw in Section \ref{sec:Inverted}.
We verified this qualitative reasoning with an exact solution in the case of a de Sitter expanding background, as well as numerical solutions for several other expanding backgrounds.
We are now ready to consider the complexity of the squeezed cosmological perturbations.
As discussed in Appendix \ref{app:Complexity}, we will compute the {\it circuit complexity} of a target state relative to a chosen reference state.
A natural reference state for our cosmological perturbations is that of the two-mode vacuum state $|0\rangle_{\vec{k},-\vec{k}}$, while our target state will be the squeezed two-mode vacuum state $|\Psi_{sq}\rangle_{\vec{k},-\vec{k}}$ in (\ref{psi1}).
In order to utilize the formalism of \cite{MyersCC}, we will need to express the reference and target states as gaussian wavefunctions.
We will first define a set of auxiliary ``position'' and ``momentum'' variables
\begin{eqnarray}
\hat q_{\vec{k}} \equiv \frac{1}{\sqrt{2k}} \left(\hat c_{\vec{k}}^\dagger + \hat c_{\vec{k}}\right), \hspace{.2in}
\hat p_{\vec{k}} \equiv i\sqrt{\frac{k}{2}}\left(\hat c_{\vec{k}}^\dagger - \hat c_{\vec{k}}\right)\, ,
\end{eqnarray}
which are conjugate variables $[\hat q_{\vec{k}},\hat p_{\vec{k}'}] = i \delta^3(\vec{k}-\vec{k}')$.
Notice that the main difference between the ``position'' $\hat q_{\vec{k}}$ and the Fourier mode $\hat v_{\vec{k}}$ given in (\ref{CosmoCreationOperators}) is that the former is defined with respect to a raising operator of $\vec{k}$ instead of $-\vec{k}$.
The two-mode vacuum state's wavefunction, defined as $\hat c_{\vec{k}} |0\rangle_{\vec{k},-\vec{k}} = 0$, has the usual gaussian form
\begin{eqnarray}
\psi_R(q_{\vec{k}},q_{-\vec{k}})= \langle q_{\vec{k}},q_{-\vec{k}} | 0\rangle_{\vec{k},-\vec{k}} = \left(\frac{k}{\pi}\right)^{1/4}\, e^{-\frac{k}{2} (q_{\vec{k}}^2 + q_{-\vec{k}}^2)}\, .
\label{reference}
\end{eqnarray}
To calculate the wavefunction corresponding to the squeezed state (\ref{psi1}) we note that the following combination annihilates $|\Psi_{sq}\rangle_{\vec{k},-\vec{k}}$
\begin{eqnarray}
\left(\cosh r_k\ \hat c_{\vec{k}} + e^{-2i\phi_k} \sinh r_k\ \hat c_{-\vec{k}}^\dagger\right) |\Psi_{sq}\rangle_{\vec{k},-\vec{k}} = 0\, .
\end{eqnarray}
Using this we can calculate the ``position-space'' form of the wavefunction \cite{Martin2}
\begin{equation} \label{state1}
\Psi_{sq} (q_{\vec{k}}, q_{-\vec{k}})= \langle q_{\vec{k}},q_{-\vec{k}}|\Psi_{sq}\rangle_{\vec{k},-\vec{k}} = \frac{e^{A(q_{\vec{k}}^2+q_{-\vec{k}}^2)-B q_{\vec{k}} q_{-\vec{k}}}}{\cosh r_k \sqrt{\pi} \sqrt{ 1- e^{-4 i \phi_k} \tanh^2 r_k}}\, ,
\end{equation}
where the coefficients $A$ and $B$ are functions of the squeezing parameter $r_k$ and squeezing angle $\phi_k$
\begin{equation}
A= \frac{k}{2} \left( \frac{e^{-4 i \phi_k} \tanh^2 r_k +1}{e^{-4 i \phi_k} \tanh^2 r_k -1} \right)\, ,\hspace{.2in} B= 2k \left( \frac{e^{-2 i \phi_k} \tanh r_k }{e^{-4 i \phi_k} \tanh^2 r_k -1}\right)\, .
\label{ABSqueezed}
\end{equation}
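In the unsqueezed limit $r_k \rightarrow 0$ these coefficients reduce to $A = -k/2$ and $B = 0$, so that the target state (\ref{state1}) collapses, as it should, to the gaussian form of the reference state (\ref{reference}).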
As discussed in Appendix \ref{app:Complexity},
we will focus our study of complexity by working directly with the wavefunction using the approach of Nielsen \cite{NL1,NL2,NL3}, which we will call {\it circuit complexity}, though we also briefly investigate circuit complexity using the covariance matrix method in Appendix \ref{app:Covariance}.
Even selecting this general approach, however, does not eliminate all possible ambiguity in the computation of complexity, since there are different measures of complexity depending on different choices for the ``cost function.''
In particular, the complexity for two simple choices of cost functions -- ``linear'' weighting ${\mathcal C}_1$ and ``geodesic'' weighting ${\mathcal C}_2$ -- can easily be computed from the vacuum reference state (\ref{reference}) and squeezed target state (\ref{state1}) (see Appendix \ref{app:Complexity} for details)
\begin{eqnarray}
\label{complexity1}
{\mathcal C}_1(k) &=&\frac{1}{2} \left (\ln \left|\frac{\Omega_{\vec{k}}}{\omega_{\vec{k}}}\right|+ \ln \left|\frac{\Omega_{-\vec{k}}}{\omega_{-\vec{k}}}\right|+ \tan^{-1} \frac{\text{Im}\ \Omega_{\vec{k}}}{\text{Re}\ \Omega_{\vec{k}}} + \tan^{-1} \frac{\text{Im}\ \Omega_{-\vec{k}}}{\text{Re}\ \Omega_{-\vec{k}}} \right)\, ; \\ \cr
{\mathcal C}_2(k) &=&\frac{1}{2} \sqrt{ \left(\ln \left|\frac{\Omega_{\vec{k}}}{\omega_{\vec{k}}}\right| \right)^2+ \left(\ln \left|\frac{\Omega_{-\vec{k}}}{\omega_{-\vec{k}}}\right| \right)^2+ \left(\tan^{-1} \frac{\text{Im}\ \Omega_{\vec{k}}}{\text{Re}\ \Omega_{\vec{k}}}\right)^2+ \left(\tan^{-1} \frac{\text{Im}\ \Omega_{-\vec{k}}}{\text{Re}\ \Omega_{-\vec{k}}}\right)^2},
\label{complexity2}
\end{eqnarray}
where $\Omega_{\vec{k}}=-2 A+B$, $\Omega_{-\vec{k}}=-2A-B$, and $\omega_{\vec{k}} = \omega_{-\vec{k}} = k$ is the frequency of the reference state (\ref{reference}). The inverse tangent terms in the above expressions are necessary when the frequency is complex, see \cite{AB}.
We will see that the qualitative results for our squeezed states are essentially identical for these two measures (they only differ by a multiplicative factor) so we will have confidence in the genericity of our results\footnote{There will be some differences when compared against the circuit complexity computed using the covariance matrix; see Appendix \ref{app:Covariance}. However, as previously noted, we expect the latter to be less sensitive to detailed features of the wavefunction.}.
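The algebra is streamlined by noting that the combinations appearing in (\ref{complexity1}),(\ref{complexity2}) simplify considerably: from (\ref{ABSqueezed}),
\begin{eqnarray}
\Omega_{\vec{k}} = -2A+B = k\, \frac{1-e^{-2i\phi_k}\tanh r_k}{1+e^{-2i\phi_k}\tanh r_k}\, , \hspace{.2in} \Omega_{-\vec{k}} = -2A-B = k\, \frac{1+e^{-2i\phi_k}\tanh r_k}{1-e^{-2i\phi_k}\tanh r_k}\, ,
\end{eqnarray}
so that $\Omega_{\vec{k}}\,\Omega_{-\vec{k}} = k^2$ and the contributions of the two modes have equal magnitude.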
Using (\ref{ABSqueezed}) in (\ref{complexity1}),(\ref{complexity2}) we can obtain simple expressions for the two measures of complexity for the general two-mode squeezed vacuum state relative to the un-squeezed vacuum
\begin{eqnarray}
\label{complexSqueeze1}
{\mathcal C}_1(k) &=&\left| \ln \left|\frac{1+e^{-2i\phi_k} \tanh r_k}{1-e^{-2i\phi_k}\tanh r_k}\right|\right|+ |\tan^{-1} \left(2\sin 2 \phi_k \sinh r_k \cosh r_k \right)|\, ; \\
{\mathcal C}_2(k) &=& \frac{1}{\sqrt{2}} \sqrt{\left(\ln \left|\frac{1+e^{-2i\phi_k} \tanh r_k}{1-e^{-2i\phi_k}\tanh r_k}\right|\right)^2 + \left(\tan^{-1} \left(2\sin 2 \phi_k \sinh r_k \cosh r_k \right)\right)^2}\, .
\label{complexSqueeze2}
\end{eqnarray}
For large amounts of squeezing $r_k \gg 1$ the last term is bounded by $\pi/2$, so the two measures of complexity are approximately equal to each other up to a multiplicative factor ${\mathcal C}_1 \approx \sqrt{2}\, {\mathcal C}_2$;
further, on super-horizon scales we expect the squeezing angle to take the value $\phi_k \rightarrow -\pi/2$, so the complexities (\ref{complexSqueeze1}),(\ref{complexSqueeze2}) simplify to be simply proportional to the squeezing parameter
\begin{eqnarray}
{\mathcal C}_1(k) \approx \sqrt{2}\, {\mathcal C}_2(k) \approx \left|\ln\left(\frac{1-\tanh r_k}{1+\tanh r_k}\right)\right| \approx r_k \approx \ln a/a_{exit} = N_e^{(k)}\, ,
\label{complexityQualitative}
\end{eqnarray}
and therefore also proportional to the number of e-folds the mode $k$ has been super-horizon, as discussed below (\ref{LargeSqueezeEOMQualitative}).
Since the expressions (\ref{complexSqueeze1}),(\ref{complexSqueeze2}) are functionally similar, we will focus our analysis on ${\mathcal C}_2$ without loss of generality.
It is interesting to note that, in contrast to the inverted harmonic oscillator, we have found that the complexity grows with time, rather than saturating.
This appears to be due to the fact that while the Hamiltonian for the inverted harmonic oscillator was time-independent, the term $z'/z$ in the Hamiltonian (\ref{CosmoH}) for cosmological perturbations is time-dependent, thus leading to growing complexity with time.
Note that (\ref{complexityQualitative}) implies that the rate of change of the complexity (with respect to cosmic time $t$) when the mode is super-horizon is given by the Hubble expansion rate
\begin{eqnarray}
\frac{d \mbox{Complexity}}{dt} \approx H\, .\end{eqnarray}
\subsection{Complexity in Expanding Backgrounds}
It is now a simple matter to insert the time-dependent solutions for the squeezing parameter and angle $r_k,\phi_k$ due to the expansion of the Universe from the previous section into (\ref{complexSqueeze2}) to see the time dependence of complexity for scalar cosmological perturbations.
Before we insert the numerical solutions, however, we can use our exact solutions for a dS expanding background (\ref{exactdSSqueezing})
to obtain analytic expressions for the complexity
(\ref{complexSqueeze2})
\begin{eqnarray}
{\mathcal C}_2(k) &=& \frac{1}{\sqrt{2}} \sqrt{\left(\log\left(\frac{(-2k\eta)} {\sqrt{4+(2k\eta)^2}}\right)\right)^2 + \left(\tan^{-1}\left(\frac{1}{-k\eta}\right)\right)^2} \\
&=&
\begin{cases}
\frac{1}{\sqrt{2}} \sqrt{\frac{1}{(2k\eta)^2} + \left(\tan^{-1} \frac{1}{-k\eta}\right)^2} \approx \sqrt{\frac{5}{2}}\frac{1}{-2k\eta} & \mbox{ for $k|\eta| \gg 1$ (sub-horizon)} \cr
\frac{1}{\sqrt{2}} \sqrt{\left(\log(-k\eta)\right)^2 + \left(\frac{\pi}{2}\right)^2} \approx \frac{1}{\sqrt{2}} \left|\log(-k\eta)\right|\sim \frac{1}{\sqrt{2}} N_e^{(k)} & \mbox{ for $k|\eta| \ll 1$ (super-horizon)}
\end{cases}
\label{ComplexitydS}
\end{eqnarray}
where we note in the last line that the complexity scales like the number of e-folds for mode $k$ (as could be expected from (\ref{complexityQualitative})).
More generally, we can insert the numerical solutions for a dS background for the squeezing parameter and angle from the previous section into (\ref{complexSqueeze2}).
Figure \ref{fig:Complexity} shows that, as expected from our qualitative and exact analysis, when the mode is inside the horizon the complexity ${\mathcal C}_2$ is small, while when the mode exits the horizon the complexity quickly grows linearly with the $\log$ of the scale factor, and thus is proportional to the number of e-folds.
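Given numerical arrays for the squeezing parameter and angle (for example from the sketch in the previous section), evaluating (\ref{complexSqueeze2}) is a simple postprocessing step; a minimal sketch, again in Python/NumPy with our own illustrative variable names, is:
\begin{verbatim}
import numpy as np

def C2(r, phi):
    # circuit complexity C_2 of the two-mode squeezed vacuum state,
    # transcribing (complexSqueeze2) directly
    z = np.exp(-2.0j * phi) * np.tanh(r)
    log_term = np.log(np.abs((1.0 + z) / (1.0 - z)))
    atan_term = np.arctan(2.0 * np.sin(2.0 * phi) * np.sinh(r) * np.cosh(r))
    return np.sqrt(log_term**2 + atan_term**2) / np.sqrt(2.0)
\end{verbatim}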
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{dSSpaceComplexity.png}\hspace{.1in} \includegraphics[width=.45\textwidth]{RadiationComplexity.png}
\caption{(Left) The complexity ${\mathcal C}_2$ for a cosmological perturbation in a dS cosmological background relative to the ground state reference demonstrates that the complexity remains small while the mode is within the horizon, then grows linearly with the $\log$ of the scale factor after exiting the horizon. ($k=0.001$ as in Figure \ref{fig:dSSqueezing}) (Right) The complexity for radiation illustrates a different pattern in which the complexity decreases from its starting value even while outside the horizon, due to the increasing squeezing angle $\phi_k$ for a radiation background as seen previously. This increasing squeezing angle leads to a decreasing complexity. After the mode re-enters the horizon for a radiation background it begins to oscillate, freezing in the complexity about which it subsequently oscillates. ($k=0.1$ as in Figure \ref{fig:RadiationSqueezing})}
\label{fig:Complexity}
\end{figure}
The linear growth of the complexity on super-horizon scales resembles the growth of complexity for other chaotic quantum systems \cite{qchaos1}, reflecting the fact that on super-horizon scales the Hamiltonian (\ref{CosmoH}) acts like an inverted harmonic oscillator.
As discussed in \cite{qchaos1}, we can extract information about quantum chaos\footnote{A conclusive demonstration of chaos would require further tests using other diagnostics of chaos; for example, interested readers are referred to \cite{Kudler-Flam} and the references therein.} such as the scrambling time and Lyapunov exponent from the complexity.
Based on the analysis of \cite{qchaos1},
the scrambling time scale should be set by the time of horizon exit, and the Lyapunov exponent is set by the slope of the linear part of the complexity, which from (\ref{ComplexitydS}) is ${\mathcal O}(1)$.
\begin{figure}[t]
\centering
\includegraphics[width=.60\textwidth]{UniverseComplexity.png}
\caption{The complexity ${\mathcal C}_2$ for the squeezing solution of Figures \ref{fig:UniverseSqueezing} and \ref{fig:UniverseSqueezedAngle}, namely a background that transitions from dS to radiation, initially grows on super-horizon scales during the dS phase, but decreases on super-horizon scales during the radiation phase, similar to that seen for pure radiation in Figure \ref{fig:Complexity}. After horizon re-entry, the complexity ``freezes in'' and oscillates due to the rapid evolution of the squeezing angle $\phi_k \sim k\eta$ on sub-horizon scales. The slight mismatch between the transition between dS and radiation and the peak of the complexity is due to the offset minimum in the squeezing angle $\phi_k$ after the transition, as seen in Figure \ref{fig:UniverseSqueezedAngle}.}
\label{fig:UniverseComplexity}
\end{figure}
Also in Figure \ref{fig:Complexity} we show the evolution of the complexity for a radiation background.
Contrary to the dS case, the complexity does not grow on super-horizon scales for a radiation background.
At first glance this seems puzzling, since the squeezing $r_k$ continues to grow on super-horizon scales, as seen in Figure \ref{fig:RadiationSqueezing}.
However, as seen in Figure \ref{fig:SqueezingAngle} (and in the detailed zoom of Figure \ref{fig:UniverseSqueezedAngle})
the squeezing angle $\phi_k$ increases during the radiation era $\phi_k \sim -\pi/2 + k\eta$, driving the complexity to lower values until horizon crossing. After horizon crossing the squeezing angle is now dominated by the sub-horizon contribution $\phi_k \sim k\eta$, leading to oscillations in the complexity through $e^{-2i\phi_k}$.
Thus, we see that unlike entropy, the circuit complexity of a mode can decrease.
Naturally, the evolution of the complexity for the simple model of the Universe consisting of a period of dS expansion followed by radiation is the concatenation of these two behaviors, as seen in Figure \ref{fig:UniverseComplexity}. As noted earlier, during the de Sitter era
the complexity starts at close to zero since the mode is approximately that of the unsqueezed vacuum, and is nearly constant until the mode exits the horizon.
After horizon exit the complexity continues to grow as long as the Universe is accelerating.
During this period the linear growth of the complexity for super-horizon modes resembles quantum chaos.
This scenario changes quite dramatically almost immediately after entering into the radiation regime. During this period the Universe de-complexifies and eventually after the mode re-enters the horizon the complexity ``freezes in'' at a value higher than the initial complexity before horizon exit.
Finally, we note that one can easily extend our analysis of complexity for all modes
\begin{equation}
{\mathcal C}^{(\rm tot)}= \sum_k {\mathcal C}_2(k).
\label{TotalComplexity}
\end{equation}
As we have seen, a vacuum state that starts inside the horizon remains unsqueezed until it exits the horizon, with correspondingly small complexity. This means that ultra-high energy modes $k \eta \gg 1$ that don't exit the horizon before the transition to radiation will essentially not contribute at all to the total complexity of the Universe in this model, providing an effective UV cutoff to the complexity sum (\ref{TotalComplexity}).
The complexity is instead dominated by the first modes that exit the horizon, since they accumulate the largest amount of e-folds while super-horizon.
It would be interesting to carefully calculate the total complexity of the Universe for a more realistic background evolution in future work.
\section{Discussion}
Quantum information theory is helping to shape our understanding about fundamental properties of nature, and quantum complexity plays a major role. In this paper we have applied Nielsen's geometric approach to compute the complexity of the Universe; specifically, we computed the complexity of scalar cosmological perturbations by taking our reference state as the unsqueezed ground state and our target state as the squeezed vacuum state representing the evolution of cosmological perturbations.
This approach gives us a new perspective in which to examine the history of the Universe. We found that the complexity during dS expansion grows linearly with the number of e-folds for super-horizon modes, with the rate of change of complexity given by the dS Hubble expansion rate.
This linear growth suggests that the Universe is described by quantum chaos
during the dS era, with a corresponding scrambling time scale and Lyapunov exponent.
Interestingly, the complexity during this era appears to be unbounded, and will continue to grow linearly with the number of e-folds for as long as dS expansion continues.
When the dS expansion is followed by a period of radiation domination the complexity decreases
until ``freezing in'' once the mode re-enters the horizon.
We believe this new approach will open up the possibility of many future research directions.
One obvious extension is to apply our analysis for other cosmological scenarios and models; for example, it would be interesting to study the complexity for accelerating solutions different from dS, or the complexity for hydrodynamical perturbations with sound speeds different than one.
We also found that the complexity for a mode that exits the horizon during dS then re-enters the horizon during radiation initially increases, then decreases and ``freezes-in'' after horizon re-entry.
Since complexity represents the number of unitary quantum gates necessary to build the target state from the reference state, this suggests that there may be some sort of ``short cut'' in the space of quantum operators that can encode the spectrum of cosmological perturbations upon horizon re-entry.
As another potential application, we found that the complexity during the dS era grows linearly with the number of e-folds without bound, at a rate proportional to the dS Hubble expansion.
However, it has been suggested that the complexity for a system with a fixed number of qubits should be bounded from above, and that the rate of growth of complexity should be bounded as well.
While these expectations appear to apply primarily to systems with time-independent Hamiltonians, it would be interesting to find connections between these ideas and cosmology, potentially placing limits on either the number of e-folds of dS expansion or the dS Hubble rate from quantum information theoretic grounds.
Finally, these results may be useful for understanding complexity in simple quantum optics setups, where the squeezed vacuum state arises quite naturally.
We would like to explore these potential directions in the near future.
\section*{Acknowledgements}
We would like to thank Jeff Murugan for reading the manuscript and for comments. AB is supported by a Research Initiation Grant (RIG/0300) provided by IIT-Gandhinagar. This work was supported by the Natural Sciences and Engineering Research Council of Canada. AB thanks the organizers and participants of the workshop on holography, complexity and entanglement hosted by the Department of Physics of Ashoka University, Sonipat, Haryana, India, and of the National Strings Meeting 2019, hosted by the Department of Physics, IISER Bhopal, India, for the opportunity to present a talk on complexity and for stimulating discussions.
\section{Introduction}
The Casimir effect is one of the most interesting macroscopic manifestations
of the nontrivial structure of the vacuum state in quantum field theory
(see, e.g., \cite{Mostepanenko,Plunien,Milton,Bordag1} and references
therein). The effect is a phenomenon common to all systems characterized by
fluctuating quantities and results from changes in the vacuum fluctuations
of a quantum field that occur because of the imposition of boundary
conditions or the choice of topology. It may have important implications on
all scales, from cosmological to subnuclear, and has become in recent
decades an increasingly popular topic in quantum field theory. It is well
known that the uniqueness of the vacuum state is lost when we work within
the framework of quantum field theory in a general curved spacetime or in
non--inertial frames. In particular, the use of general coordinate
transformations in quantum field theory in flat spacetime leads to an
infinite number of unitary inequivalent representations of the commutation
relations. Different inequivalent representations will in general give rise
to different vacuum states. For instance, the vacuum state for a uniformly
accelerated observer, the Fulling--Rindler vacuum \cite%
{Full73,Boul75,Unru76,Full77,Gerl89}, turns out to be inequivalent to that
for an inertial observer, the familiar Minkowski vacuum. Quantum field
theory in accelerated systems contains many special features produced by a
gravitational field. In particular, the near horizon geometry of most black
holes is well approximated by Rindler spacetime and a better understanding
of physical effects in this background could serve as a handle to deal with
more complicated geometries like Schwarzschild. The Rindler geometry shares
most of the qualitative features of black holes and is simple enough to
allow detailed analysis. Another motivation for the investigation of quantum
effects in the Rindler space is related to the fact that this space is
conformally related to de Sitter space and to Robertson--Walker space with
negative spatial curvature. As a result the expectation values of the
energy--momentum tensor for conformally invariant fields and for
corresponding conformally transformed boundaries on the de Sitter and
Robertson--Walker backgrounds can be generated from the corresponding
Rindler counterpart by the standard transformation (see, for instance, \cite%
{Birrell}).
An interesting topic in the investigations of the Casimir effect is
the dependence of the vacuum characteristics on the type of the
vacuum. Vacuum expectation values of the energy-momentum tensor
induced by an infinite plane boundary moving with uniform proper
acceleration through the Fulling-Rindler vacuum was studied by
Candelas and Deutsch \cite{Candelas} for the conformally coupled
$4D$ Dirichlet and Neumann massless scalar and electromagnetic
fields. In this paper only the region of the right Rindler wedge to
the right of the barrier is considered. In Ref. \cite{Saha02} we
have investigated the Wightman function and the vacuum
energy-momentum tensor for a massive scalar field with general
curvature coupling parameter, satisfying the Robin boundary
conditions on the infinite plane in an arbitrary number of spacetime
dimensions and for the electromagnetic field. We have considered
both regions, including the one between the barrier and Rindler
horizon. The vacuum expectation values of the energy-momentum tensor
for scalar fields with Dirichlet and Neumann boundary conditions and
for the electromagnetic field in the geometry of two parallel plates
moving by uniform accelerations are investigated in Ref.
\cite{Avag02}. In particular, the vacuum forces acting on the
boundaries are evaluated. They are presented as a sum of the
'interaction' and self-action parts. The 'interaction' forces
between the plates are always attractive for both scalar and
electromagnetic cases. Due to the well-known surface divergences in
the boundary parts, the total Casimir energy cannot be obtained by
direct integration of the vacuum energy density and needs an
additional renormalization. In Ref. \cite{Saha04} by using the zeta
function technique, the Casimir energy is evaluated for massless
scalar fields under Dirichlet and Neumann boundary conditions, and
for the electromagnetic field with perfect conductor boundary
conditions on one and two parallel plates. On background of
manifolds with boundaries, the physical quantities, in general, will
receive both volume and surface contributions and the surface terms
play an important role in various branches of physics. An expression
for the surface energy-momentum tensor for a scalar field with
general curvature coupling parameter in the general case of bulk and
boundary geometries is derived in Ref. \cite{Saha04c}. In Ref.
\cite{SahSet04b} the vacuum expectation value of the surface
energy-momentum tensor is evaluated for a massless scalar field
obeying the Robin boundary condition on an infinite plane moving with
uniform proper acceleration. By using the conformal relation between
the Rindler and de Sitter spacetimes and the results from
\cite{Saha02}, in Ref. \cite{SahSet04} the vacuum energy-momentum
tensor for a scalar field is evaluated in de Sitter spacetime in
presence of a curved brane on which the field obeys the Robin
boundary condition with coordinate dependent coefficients.
In the present paper the Wightman function and the vacuum
expectation values of the field square and the energy-momentum tensor are
investigated for a massive scalar field with an arbitrary curvature
coupling parameter obeying the Robin boundary conditions on two
parallel branes moving with uniform proper accelerations through the
Fulling-Rindler vacuum. The general case is considered when the
constants in the boundary conditions are different for separate
plates. Robin type conditions are an extension of Dirichlet and
Neumann boundary conditions and appear in a variety of situations,
including the considerations of vacuum effects for a confined
charged scalar field in external fields \cite{Ambj83}, spinor and
gauge field theories, quantum gravity and supergravity
\cite{Luck91,Espo97}. Robin conditions can be made conformally
invariant, while purely-Neumann conditions cannot. Thus, Robin-type
conditions are needed when one deals with conformally invariant
theories in the presence of boundaries and wishes to preserve this
invariance. It is interesting to note that the quantum scalar field
satisfying the Robin condition on the boundary of a cavity violates
Bekenstein's entropy-to-energy bound near certain points in the
space of the parameter defining the boundary condition
\cite{Solo01}. The Robin boundary conditions are an extension of
those imposed on perfectly conducting boundaries and may, in some
geometries, be useful for depicting the finite penetration of the
field into the boundary with the 'skin-depth' parameter related to
the Robin coefficient. Robin boundary conditions naturally arise
for scalar and fermion bulk fields in the Randall-Sundrum model \cite%
{Gher00,Flac01b,Saha05}. In this model the bulk geometry is a slice of
anti-de Sitter space and the corresponding Robin coefficients are related to
the curvature scale of the space.
The outline of this paper is the following. In the next section the
Wightman function is considered. The corresponding mode-sum is
evaluated by using the generalized Abel-Plana summation formula
\cite{Sahrev}. This allows us to extract from the corresponding
vacuum expectation values the Wightman function for the geometry of
a single plate and to present the remaining part in the form of
exponentially convergent integrals. The vacuum expectation values of
the field square and the Casimir energy-momentum tensor are evaluated
in Section \ref{sec:VEVEMT}. Various limiting cases are considered.
In Section \ref{sec:IntForce} we investigate the vacuum
'interaction' forces between the plates as functions on
corresponding proper accelerations. Section \ref{sec:Conc} contains
a summary of the work and some suggestions for further research. In
Appendix \ref{section:App1}, on the basis of the generalized
Abel-Plana formula, a summation formula is derived for the series
over zeros of a combination of the modified Bessel functions with an
imaginary order.
\section{Wightman function}
\label{sec:WF}
We consider a real scalar field $\varphi (x)$ with general curvature
coupling parameter $\zeta $ satisfying the field equation
\begin{equation}
\left( \nabla _{\mu }\nabla ^{\mu }+m^{2}+\zeta R\right) \varphi =0,
\label{fieldeq}
\end{equation}%
where $R$ is the scalar curvature for a $(D+1)$--dimensional background
spacetime, and $\nabla _{\mu }$ is the covariant derivative operator. For
special cases of minimally and conformally coupled scalars one has $\zeta =0$
and $\zeta =(D-1)/4D$, respectively. Our main interest in this paper will be
the Wightman function, the vacuum expectation values (VEVs) of the field
square and the energy-momentum tensor in the Rindler spacetime induced by
two parallel plates moving with uniform proper acceleration when the quantum
field is prepared in the Fulling-Rindler vacuum. For this problem the
background spacetime is flat and in Eq. (\ref{fieldeq}) we have $R=0$. As a
result the eigenmodes are independent of the curvature coupling parameter.
However, the local characteristics of the vacuum such as energy density and
vacuum stresses depend on this parameter.
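We note, in particular, that for $D=3$, corresponding to the physical $3+1$-dimensional spacetime, the conformal coupling quoted above takes the familiar value $\zeta =(D-1)/4D=1/6$.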
In the accelerated frame it is convenient to introduce Rindler coordinates $%
(\tau ,\xi ,\mathbf{x})$ related to the Minkowski ones, $(t,x^{1},\mathbf{x}%
) $ by formulas
\begin{equation}
t=\xi \sinh \tau ,\quad x^{1}=\xi \cosh \tau , \label{RindMin}
\end{equation}%
where $\mathbf{x}=(x^{2},\ldots ,x^{D})$ denotes the set of coordinates
parallel to the plates. In these coordinates the Minkowski line element
takes the form
\begin{equation}
ds^{2}=\xi ^{2}d\tau ^{2}-d\xi ^{2}-d\mathbf{x}^{2}, \label{metric}
\end{equation}%
and a world line defined by $\xi ,\mathbf{x}=\mathrm{const}$ describes an
observer with constant proper acceleration $\xi ^{-1}$. The Rindler time
coordinate $\tau $ is proportional to the proper time along a family of
uniformly accelerated trajectories which fill the Rindler wedge, with the
proportionality constant equal to the acceleration. Assuming that the plates
are situated in the right Rindler wedge $x^{1}>\left\vert t\right\vert $, we
will let the surfaces $\xi =a$ and $\xi =b$, $b>a$ represent the
trajectories of these boundaries, which therefore have proper accelerations $%
a^{-1}$ and $b^{-1}$ (see Fig. \ref{fig1avsa}). We will consider the case of
a scalar field satisfying Robin boundary conditions on the surfaces of the
plates:
\begin{equation}
\left. \left( A_{j}+B_{j}\frac{\partial }{\partial \xi }\right) \varphi
\right\vert _{\xi =j}=0,\quad j=a,b, \label{Dboundcond}
\end{equation}%
with constant coefficients $A_{j}$ and $B_{j}$. Dirichlet and Neumann
boundary conditions are obtained from here as special cases. All results
below will depend, of course, on the ratios $A_{j}/B_{j}$ only. However, to
keep the transition to Dirichlet and Neumann cases transparent, we write the
boundary conditions in the form (\ref{Dboundcond}).
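As a simple cross-check of the transformation (\ref{RindMin}) one can verify
symbolically that it brings the Minkowski interval into the form (\ref{metric}).
A minimal sketch (our own illustration, not part of the derivation, assuming
the Python library sympy is available):
\begin{verbatim}
# Check that dt^2 - (dx^1)^2 reduces to xi^2 dtau^2 - dxi^2
# under t = xi*sinh(tau), x^1 = xi*cosh(tau).
import sympy as sp

tau, xi = sp.symbols('tau xi', positive=True)
dtau, dxi = sp.symbols('dtau dxi')

t  = xi * sp.sinh(tau)
x1 = xi * sp.cosh(tau)

# total differentials of the Minkowski coordinates
dt  = sp.diff(t, tau) * dtau + sp.diff(t, xi) * dxi
dx1 = sp.diff(x1, tau) * dtau + sp.diff(x1, xi) * dxi

print(sp.simplify(sp.expand(dt**2 - dx1**2)))
# -> dtau**2*xi**2 - dxi**2
\end{verbatim}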
\begin{figure}[tbph]
\begin{center}
\epsfig{figure=Sahafig1.eps,width=6cm,height=5cm}
\end{center}
\caption{The $(x^{1},t)$ plane with the Rindler coordinates. The heavy lines
$\protect\xi =a$ and $\protect\xi = b$ represent the trajectories of the
plates.}
\label{fig1avsa}
\end{figure}
The plates divide the right Rindler wedge into three regions: $0<\xi <a$, $%
\xi >b$, and $a<\xi <b$. The VEVs in the first two regions are the same as those
induced by single plates located at $\xi =a$ and $\xi =b$, respectively. As
these VEVs are investigated in Ref. \cite{Saha02}, in the consideration
below we restrict ourselves to the region between the plates. First we
consider the positive frequency Wightman function $G^{+}(x,x^{\prime
})=\left\langle 0\left\vert \varphi (x)\varphi (x^{\prime })\right\vert
0\right\rangle $, with $\left. |0\right\rangle $ being the amplitude for the
corresponding vacuum state. The VEVs of the field square and the
energy-momentum tensor can be evaluated on the basis of this function. In
addition, the Wightman function determines the response of a particle
detector of the Unruh-DeWitt type moving through the vacuum under
consideration. By expanding the field operator over the complete set of
eigenfunctions $\left\{ \varphi _{\alpha }(x),\varphi _{\alpha }^{\ast
}(x)\right\} $ satisfying boundary conditions (\ref{Dboundcond}) and using
the commutation relations one finds
\begin{equation}
G^{+}(x,x^{\prime })=\sum_{\alpha }\varphi _{\alpha }(x)\varphi _{\alpha
}^{\ast }(x^{\prime }), \label{mswf}
\end{equation}%
where the collective index $\alpha $ is a set of quantum numbers specifying
the solution.
To evaluate the mode sum in formula (\ref{mswf}) we need the form of the
eigenfunctions $\varphi _{\alpha }(x)$ (for a recent discussion of
eigenmodes in four Rindler sectors and relations between them see, for
instance, \cite{Gerl99}). For the geometry under consideration the metric
and boundary conditions are static and translational invariant in the
hyperplane parallel to the plates. It follows from here that the
corresponding part of the eigenfunctions can be taken in the standard plane
wave form:
\begin{equation}
\varphi _{\alpha }=C\phi (\xi )\exp \left[ i\left( \mathbf{kx}-\omega \tau
\right) \right] ,\quad \alpha =(\mathbf{k},\omega ), \label{wavesracture}
\end{equation}%
with the wave vector $\mathbf{k}=(k_{2},\ldots ,k_{D})$. The frequency $%
\omega $ in Eq. (\ref{wavesracture}) corresponds to the dimensionless
coordinate $\tau $ and hence is dimensionless. The proper time $\tau _{g}$
and the frequency $\omega _{g}$ measured by a uniformly accelerated observer
with the proper acceleration $g$ and world line $(x^{1})^{2}-t^{2}=g^{-2}$
are related to $\tau $ and $\omega $ by the formulas $\tau _{g}=\tau /g$, $%
\omega _{g}=\omega g$ (the features of the measurements for time, frequency,
and length relative to a Rindler frame as compared to a Minkowski frame are
discussed in Ref. \cite{Gerl03}). The equation for $\phi (\xi )$ is obtained
from field equation (\ref{fieldeq}) on background of metric (\ref{metric})
and has the form
\begin{equation}
\xi ^{2}\phi ^{\prime \prime }(\xi )+\xi \phi ^{\prime }(\xi )+\left( \omega
^{2}-\lambda ^{2}\xi ^{2}\right) \phi (\xi )=0, \label{fiequ}
\end{equation}%
where the prime denotes a differentiation with respect to the argument of
the function,
\begin{equation}
\lambda =\sqrt{k^{2}+m^{2}}, \label{lambda}
\end{equation}%
and $k=|\mathbf{k}|$. In the region between the plates the linearly
independent solutions to equation (\ref{fiequ}) are the modified Bessel
functions $I_{i\omega }(\lambda \xi )$ and $K_{i\omega }(\lambda \xi )$. The solution
satisfying boundary condition (\ref{Dboundcond}) on the plate $\xi =b$ has
the form
\begin{equation}
Z_{i\omega }^{(b)}(\lambda \xi ,\lambda b)=\bar{I}_{i\omega }^{(b)}(\lambda
b)K_{i\omega }(\lambda \xi )-\bar{K}_{i\omega }^{(b)}(\lambda b)I_{i\omega
}(\lambda \xi ). \label{Deigfunc}
\end{equation}%
Here and below for a given function $f(z)$ we use the notations
\begin{equation}
\bar{f}^{(j)}(z)=A_{j}f(z)+\frac{B_{j}}{j}zf^{\prime }(z),\quad j=a,b.
\label{fbarnot}
\end{equation}%
Note that function (\ref{Deigfunc}) is real, $Z_{i\omega }^{(b)}(\lambda \xi
,\lambda b)=Z_{-i\omega }^{(b)}(\lambda \xi ,\lambda b)$. From the boundary
condition on the plate $\xi =a$ we find that the possible values for $\omega
$ are roots to the equation
\begin{equation}
Z_{i\omega }(\lambda a,\lambda b)=0, \label{Deigfreq}
\end{equation}%
with the notation
\begin{equation}
Z_{\omega }(u,v)=\bar{I}_{\omega }^{(b)}(v)\bar{K}_{\omega }^{(a)}(u)-\bar{K}%
_{\omega }^{(b)}(v)\bar{I}_{\omega }^{(a)}(u). \label{Zomega}
\end{equation}%
For a fixed $\lambda $, the equation (\ref{Deigfreq}) has an infinite set of
real solutions with respect to $\omega $. We will denote them by $\omega
_{n}=\omega _{n}(\lambda a,\lambda b)$, $\omega _{n}>0$, $n=1,2,\ldots $,
and will assume that they are arranged in ascending order, $\omega
_{n}<\omega _{n+1}$. In addition to the real zeros, depending on the
values of the ratios $A_{j}/B_{j}$, equation (\ref{Deigfreq}) can have a
finite set of purely imaginary solutions. The presence of such solutions
leads to modes with an imaginary frequency and, hence, to an unstable
vacuum. In the consideration below we will assume the values of the
coefficients in Eq. (\ref{Dboundcond}) for which the imaginary solutions are
absent and the vacuum is stable.
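Since the eigenfrequencies are defined only implicitly by Eq. (\ref{Deigfreq}),
in numerical work they have to be located by standard root finding. A minimal
sketch (our own, for the Dirichlet case $A_{j}=1$, $B_{j}=0$ and the purely
illustrative values $\lambda a=0.5$, $\lambda b=2$), using the Python library
mpmath, which provides the modified Bessel functions of imaginary order:
\begin{verbatim}
# Locate the first real zeros omega_n of Z_{i omega}(lambda a, lambda b).
import mpmath as mp

def Z(w, u, v):
    # Z of Eq. (Zomega) for Dirichlet conditions; real for real w,
    # so the rounding-level imaginary part is dropped.
    iw = 1j * mp.mpf(w)
    val = (mp.besseli(iw, v) * mp.besselk(iw, u)
           - mp.besselk(iw, v) * mp.besseli(iw, u))
    return val.real

u, v = mp.mpf('0.5'), mp.mpf('2')
roots, w = [], mp.mpf('0.05')
f = Z(w, u, v)
while w < 20 and len(roots) < 4:
    w2 = w + mp.mpf('0.05')
    f2 = Z(w2, u, v)
    if f * f2 < 0:                 # a sign change brackets a root
        roots.append(mp.findroot(lambda x: Z(x, u, v), (w + w2) / 2))
    w, f = w2, f2

print([mp.nstr(r, 6) for r in roots])
\end{verbatim}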
The coefficient $C$ in formula (\ref{wavesracture}) is determined from the
standard Klein-Gordon orthonormality condition for the eigenfunctions which
for metric (\ref{metric}) takes the form
\begin{equation}
\int d\mathbf{x}\int_{a}^{b}\frac{d\xi }{\xi }\varphi _{\alpha }%
\overleftrightarrow{\partial }_{\tau }\varphi _{\alpha ^{\prime }}^{\ast
}=i\delta _{\alpha \alpha ^{\prime }}. \label{normcond}
\end{equation}%
The $\xi $-integral on the left of this formula is evaluated using the
integration formula
\begin{equation}
\int_{a}^{b}\frac{d\xi }{\xi }\phi _{1\omega }(\xi )\phi _{2\upsilon }(\xi )=\xi
\left. \frac{\phi _{1\omega }(\xi )\phi _{2\upsilon }^{\prime }(\xi )-\phi
_{2\upsilon }(\xi )\phi _{1\omega }^{\prime }(\xi )}{\omega ^{2}-\upsilon ^{2}}%
\right\vert _{a}^{b}, \label{intformula}
\end{equation}%
valid for any two solutions $\phi _{1\omega }(\xi )$, $\phi _{2\upsilon }(\xi )$ to equation (%
\ref{fiequ}). Taking into account boundary condition (\ref{Dboundcond}),
from Eq. (\ref{normcond}) for the normalization coefficient one finds
\begin{equation}
C^{2}=\left. \frac{\left( 2\pi \right) ^{1-D}\bar{I}_{i\omega
}^{(a)}(\lambda a)}{\bar{I}_{i\omega }^{(b)}(\lambda b)\frac{\partial }{%
\partial \omega }Z_{i\omega }(\lambda a,\lambda b)}\right\vert _{\omega
=\omega _{n}}. \label{Dnormc}
\end{equation}%
Now substituting the eigenfunctions
\begin{equation}
\varphi _{\alpha }(x)=CZ_{i\omega _{n}}^{(b)}(\lambda \xi ,\lambda b)\exp
\left[ i\left( \mathbf{kx}-\omega _{n}\tau \right) \right] \label{Dsol2}
\end{equation}%
into the mode sum formula (\ref{mswf}), for the positive frequency Wightman
function one finds
\begin{eqnarray}
G^{+}(x,x^{\prime }) &=&\int \frac{d\mathbf{k}e^{i\mathbf{k}(\mathbf{x}-%
\mathbf{x}^{\prime })}}{(2\pi )^{D-1}}\sum_{n=1}^{\infty }\frac{\bar{I}%
_{i\omega }^{(a)}(\lambda a)e^{-i\omega (\tau -\tau ^{\prime })}}{\bar{I}%
_{i\omega }^{(b)}(\lambda b)\frac{\partial }{\partial \omega }Z_{i\omega
}(\lambda a,\lambda b)} \notag \\
&&\times \left. Z_{i\omega }^{(b)}(\lambda \xi ,\lambda b)Z_{i\omega
}^{(b)}(\lambda \xi ^{\prime },\lambda b)\right\vert _{\omega =\omega _{n}}.
\label{Wigh1}
\end{eqnarray}%
As the expressions for the eigenfrequencies $\omega _{n}$ (as functions of $%
\lambda a$, $\lambda b$, $A_{j}/B_{j}$) are not explicitly known, the form (%
\ref{Wigh1}) of the Wightman function is inconvenient. For the further
evaluation of this VEV we can apply to the sum over $n$ the summation
formula (\ref{Dsumformula}) derived in Appendix \ref{section:App1} on the
basis of the generalized Abel-Plana formula. As a function $F(z)$ in formula (%
\ref{Dsumformula}) let us choose
\begin{equation}
F(z)=\frac{Z_{iz}^{(b)}(\lambda \xi ,\lambda b)Z_{iz}^{(b)}(\lambda \xi
^{\prime },\lambda b)}{\bar{I}_{iz}^{(b)}(\lambda b)\bar{I}%
_{-iz}^{(b)}(\lambda b)}e^{-iz(\tau -\tau ^{\prime })}. \label{FtoAPF}
\end{equation}%
Condition (\ref{condforAPF2pl}) for this function is satisfied if $%
a^{2}e^{|\tau -\tau ^{\prime }|}<\xi \xi ^{\prime }$. In particular, this is
the case in the coincidence limit $\tau =\tau ^{\prime }$ in the region
under consideration: $\xi ,\xi ^{\prime }>a$. By using formula (\ref%
{Dsumformula}), for the Wightman function one obtains the expression%
\begin{eqnarray}
G^{+}(x,x^{\prime }) &=&G^{+}(x,x^{\prime };b)-\int \frac{d\mathbf{k\,}e^{i%
\mathbf{k}(\mathbf{x}-\mathbf{x}^{\prime })}}{\pi (2\pi )^{D-1}}%
\int_{0}^{\infty }d\omega \,\Omega _{b\omega }(\lambda a,\lambda b) \notag
\\
&&\times Z_{\omega }^{(b)}(\lambda \xi ,\lambda b)Z_{\omega }^{(b)}(\lambda
\xi ^{\prime },\lambda b)\cosh [\omega (\tau -\tau ^{\prime })],
\label{Wigh3}
\end{eqnarray}%
where we have introduced the notation%
\begin{equation}
\Omega _{b\omega }(\lambda a,\lambda b)=\frac{\bar{I}_{\omega
}^{(a)}(\lambda a)}{\bar{I}_{\omega }^{(b)}(\lambda b)Z_{\omega }(\lambda
a,\lambda b)}. \label{Omega2}
\end{equation}%
In Eq. (\ref{Wigh3})
\begin{eqnarray}
G^{+}(x,x^{\prime };b) &=&\int \frac{d\mathbf{k}e^{i\mathbf{k}(\mathbf{x}-%
\mathbf{x}^{\prime })}}{\pi ^{2}(2\pi )^{D-1}}\int_{0}^{\infty }d\omega
\sinh (\pi \omega ) \notag \\
&&\times e^{-i\omega (\tau -\tau ^{\prime })}\frac{Z_{i\omega
}^{(b)}(\lambda \xi ,\lambda b)Z_{i\omega }^{(b)}(\lambda \xi ^{\prime
},\lambda b)}{\bar{I}_{i\omega }^{(b)}(\lambda b)\bar{I}_{-i\omega
}^{(b)}(\lambda b)}, \label{Wigh1pl}
\end{eqnarray}%
is the Wightman function in the region $\xi <b$ for a single plate at $\xi
=b $. This function is investigated in Ref. \cite{Saha02} and can be
presented in the form
\begin{equation}
G^{+}(x,x^{\prime };b)=G_{R}^{+}(x,x^{\prime })+\left\langle \varphi
(x)\varphi (x^{\prime })\right\rangle ^{(b)}, \label{G+2}
\end{equation}%
where $G_{R}^{+}(x,x^{\prime })$ is the Wightman function for the right
Rindler wedge without boundaries and the part%
\begin{eqnarray}
\left\langle \varphi (x)\varphi (x^{\prime })\right\rangle ^{(b)} &=&-\int
\frac{d\mathbf{k}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}^{\prime })}}{\pi (2\pi
)^{D-1}}\int_{0}^{\infty }d\omega \frac{\bar{K}_{\omega }^{(b)}(\lambda b)}{%
\bar{I}_{\omega }^{(b)}(\lambda b)} \notag \\
&&\times I_{\omega }(\lambda \xi )I_{\omega }(\lambda \xi ^{\prime })\cosh
[\omega (\tau -\tau ^{\prime })] \label{phi212}
\end{eqnarray}%
is induced in the region $\xi <b$ by the presence of the plate at $\xi =b$.
Note that the representation (\ref{G+2}) with (\ref{phi212}) is valid under
the assumption $\xi \xi ^{\prime }<b^{2}e^{|\tau -\tau ^{\prime }|}$. Hence,
the application of the summation formula based on the generalized Abel-Plana
formula allowed us (i) to avoid the need to know the explicit
expressions for the eigenfrequencies $\omega _{n}$, (ii) to extract from the
VEVs the purely Rindler and single plate parts, and (iii) to present the
remaining part in terms of integrals which are exponentially convergent in the
coincidence limit.
By using the identity%
\begin{eqnarray}
&&\frac{\bar{K}_{\omega }^{(b)}(\lambda b)}{\bar{I}_{\omega }^{(b)}(\lambda
b)}I_{\omega }(\lambda \xi )I_{\omega }(\lambda \xi ^{\prime })=\frac{\bar{I}%
_{\omega }^{(a)}(\lambda a)}{\bar{K}_{\omega }^{(a)}(\lambda a)}K_{\omega
}(\lambda \xi )K_{\omega }(\lambda \xi ^{\prime }) \notag \\
&&+\sum_{j=a,b}n^{(j)}\Omega _{j\omega }(\lambda a,\lambda b)Z_{\omega
}^{(j)}(\lambda \xi ,\lambda j)Z_{\omega }^{(j)}(\lambda \xi ^{\prime
},\lambda j), \label{ident1}
\end{eqnarray}%
with $n^{(a)}=1$, $n^{(b)}=-1$, the Wightman function can be also presented
in the form%
\begin{eqnarray}
G^{+}(x,x^{\prime }) &=&G^{+}(x,x^{\prime };a)-\int \frac{d\mathbf{k\,}e^{i%
\mathbf{k}(\mathbf{x}-\mathbf{x}^{\prime })}}{\pi (2\pi )^{D-1}}%
\int_{0}^{\infty }d\omega \,\Omega _{a\omega }(\lambda a,\lambda b) \notag
\\
&&\times Z_{\omega }^{(a)}(\lambda \xi ,\lambda a)Z_{\omega }^{(a)}(\lambda
\xi ^{\prime },\lambda a)\cosh [\omega (\tau -\tau ^{\prime })].
\label{Wigh31}
\end{eqnarray}%
In this formula%
\begin{equation}
G^{+}(x,x^{\prime };a)=G_{R}^{+}(x,x^{\prime })+\left\langle \varphi
(x)\varphi (x^{\prime })\right\rangle ^{(a)} \label{G+1}
\end{equation}%
is the Wightman function in the region $\xi >a$ for a single plate at $\xi
=a $, and
\begin{eqnarray}
\left\langle \varphi (x)\varphi (x^{\prime })\right\rangle ^{(a)} &=&-\int
\frac{d\mathbf{k}e^{i\mathbf{k}(\mathbf{x}-\mathbf{x}^{\prime })}}{\pi (2\pi
)^{D-1}}\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega }^{(a)}(\lambda a)}{%
\bar{K}_{\omega }^{(a)}(\lambda a)} \notag \\
&&\times K_{\omega }(\lambda \xi )K_{\omega }(\lambda \xi ^{\prime })\cosh
[\omega (\tau -\tau ^{\prime })]. \label{phi211}
\end{eqnarray}%
In Eq. (\ref{ident1}) we use the notations%
\begin{eqnarray}
Z_{\omega }^{(a)}(\lambda \xi ,\lambda a) &=&\bar{I}_{\omega }^{(a)}(\lambda
a)K_{\omega }(\lambda \xi )-\bar{K}_{\omega }^{(a)}(\lambda a)I_{\omega
}(\lambda \xi ), \label{Zom1} \\
\Omega _{a\omega }(\lambda a,\lambda b) &=&\frac{\bar{K}_{\omega
}^{(b)}(\lambda b)}{\bar{K}_{\omega }^{(a)}(\lambda a)Z_{\omega }(\lambda
a,\lambda b)}. \label{Oma}
\end{eqnarray}%
Two representations of the Wightman function, Eqs. (\ref{Wigh3}) and (\ref%
{Wigh31}), are obtained from each other by the replacements%
\begin{equation}
a\rightleftarrows b,\quad I_{\omega }\rightleftarrows K_{\omega }.
\label{replacement}
\end{equation}%
In the coincidence limit the second term on the right of formula (\ref{Wigh3}%
) is finite on the plate $\xi =b$ and diverges on the plate at $\xi =a$,
whereas the second term on the right of Eq. (\ref{Wigh31}) is finite on the
plate $\xi =a$ and is divergent for $\xi =b$. Consequently, the form (\ref%
{Wigh3}) [(\ref{Wigh31})] is convenient for the investigation of the VEVs
near the plate $\xi =b$ ($\xi =a$). Note that in the formulas given above
the integration over the angular part of the vector $\mathbf{k}$ can be done
with the help of the formula%
\begin{equation}
\int d{\mathbf{k}}\,\frac{e^{i{\mathbf{kx}}}F(k)}{(2\pi )^{\frac{D-1}{2}}}%
=\int_{0}^{\infty }dk\,k^{D-2}F(k)\frac{J_{(D-3)/2}(k|{\mathbf{x}}|)}{(k|{%
\mathbf{x}}|)^{(D-3)/2}}, \label{intformwf}
\end{equation}%
for a given function $F(k)$, where $J_{\nu }(z)$ is the Bessel function. In
this section we have considered the positive frequency Wightman function.
By the same method any other two-point function (Hadamard function,
Feynman's Green function, etc.) can be evaluated.
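As a quick numerical check of formula (\ref{intformwf}) note that for $D=3$
it reduces to a Hankel transform of order zero. For the test profile
$F(k)=e^{-k^{2}}$ (our own choice, with the known closed form
$e^{-|\mathbf{x}|^{2}/4}/2$) one has, in Python with mpmath:
\begin{verbatim}
import mpmath as mp

F = lambda k: mp.exp(-k**2)
xabs = mp.mpf('1.3')              # |x|, an arbitrary test value

lhs = mp.quad(lambda k: k * F(k) * mp.besselj(0, k * xabs), [0, mp.inf])
rhs = mp.exp(-xabs**2 / 4) / 2
print(mp.nstr(lhs, 10), mp.nstr(rhs, 10))   # the two values agree
\end{verbatim}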
\section{Casimir densities}
\label{sec:VEVEMT}
\subsection{VEV for the field square}
In this section we will consider the VEVs of the field square and the
energy-momentum tensor in the region between the plates. As the
corresponding quantities for a single plate are investigated in Ref. \cite%
{Saha02}, here we concentrate on the parts induced by the presence
of the second plate. In the coincidence limit from the formulas for the
Wightman function one obtains two equivalent forms for the VEV\ of the field
square:%
\begin{eqnarray}
\left\langle 0\left\vert \varphi ^{2}\right\vert 0\right\rangle
&=&\left\langle 0_{R}\left\vert \varphi ^{2}\right\vert 0_{R}\right\rangle
+\left\langle \varphi ^{2}\right\rangle ^{(j)} \notag \\
&&-A_{D}\int_{0}^{\infty }dk\,k^{D-2}\int_{0}^{\infty }d\omega \,\Omega
_{j\omega }(\lambda a,\lambda b)Z_{\omega }^{(j)2}(\lambda \xi ,\lambda j),
\label{phi2sq1}
\end{eqnarray}%
corresponding to $j=a$ and $j=b$, and $\left. |0_{R}\right\rangle $ is the
amplitude for the Fulling-Rindler vacuum without boundaries,%
\begin{equation}
A_{D}=\frac{1}{2^{D-2}\pi ^{\frac{D+1}{2}}\Gamma \left( \frac{D-1}{2}\right)
}. \label{Ad}
\end{equation}%
In Eq. (\ref{phi2sq1}) the part $\left\langle \varphi ^{2}\right\rangle
^{(j)}$ is induced by a single plate at $\xi =j$ when the second plate is
absent. From (\ref{phi212}), (\ref{phi211}) for this part one has \cite%
{Saha02}
\begin{subequations}
\label{phi21plgen}
\begin{eqnarray}
\left\langle \varphi ^{2}\right\rangle ^{(a)} &=&-A_{D}\int_{0}^{\infty
}dk\,k^{D-2}\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega }^{(a)}(\lambda
a)}{\bar{K}_{\omega }^{(a)}(\lambda a)}K_{\omega }^{2}(\lambda \xi ),
\label{phi21pl} \\
\left\langle \varphi ^{2}\right\rangle ^{(b)} &=&-A_{D}\int_{0}^{\infty
}dk\,k^{D-2}\int_{0}^{\infty }d\omega \frac{\bar{K}_{\omega }^{(b)}(\lambda
b)}{\bar{I}_{\omega }^{(b)}(\lambda b)}I_{\omega }^{2}(\lambda \xi ).
\label{phi21plb}
\end{eqnarray}%
\end{subequations}%
The last term on the right of formula (\ref{phi2sq1}) is finite on the plate
at $\xi =j$ and diverges for the points on the other plate.
Extracting the contribution from the second plate, we can write
expression (\ref{phi2sq1}) for the vacuum expectation value in the symmetric
form
\begin{equation}
\left\langle 0\left\vert \varphi ^{2}\right\vert 0\right\rangle
=\left\langle 0_{R}\left\vert \varphi ^{2}\right\vert
0_{R}\right\rangle +\sum_{j=a,b}\left\langle \varphi
^{2}\right\rangle ^{(j)}+\left\langle \varphi ^{2}\right\rangle
^{(ab)}, \label{phi2sq2n}
\end{equation}%
with the 'interference' part%
\begin{eqnarray}
\left\langle \varphi ^{2}\right\rangle ^{(ab)} &=&-A_{D}\int_{0}^{\infty
}dk\,k^{D-2}\int_{0}^{\infty }d\omega \bar{I}_{\omega }^{(a)}(\lambda a)
\notag \\
&&\times \left[ \frac{Z_{\omega }^{(b)2}(\lambda \xi ,\lambda b)}{\bar{I}%
_{\omega }^{(b)}(\lambda b)Z_{\omega }(\lambda a,\lambda b)}-\frac{K_{\omega
}^{2}(\lambda \xi )}{\bar{K}_{\omega }^{(a)}(\lambda a)}\right] .
\label{phi2int}
\end{eqnarray}%
An equivalent form for this part is obtained with the replacements (\ref%
{replacement}) in the integrand. The 'interference' term (\ref{phi2int})
is finite for all values of $\xi $ in the range $a\leq \xi \leq b$,
including the points on the boundaries. The well-known surface divergences
are contained in the single plate parts only. To find the corresponding
asymptotic behaviour we note that for the points near the boundaries the
main contribution to the $\omega $-integral comes from large values of $%
\omega $ and we can use the uniform asymptotic expansions for the modified
Bessel functions for large values of the order (see, for instance, \cite%
{Abramowitz}). Introducing a new integration variable $x=k/\omega $ and
replacing the modified Bessel functions by their uniform asymptotic
expansions, in the limit $\xi \rightarrow j$ to the leading order one obtains%
\begin{equation}
\left\langle \varphi ^{2}\right\rangle ^{(j)}\approx \frac{k_{j}\Gamma
\left( \frac{D-1}{2}\right) }{(4\pi )^{\frac{D+1}{2}}|\xi -j|^{D-1}},
\label{phi2asnear}
\end{equation}%
where%
\begin{equation}
k_{j}=1-2\delta _{B_{j}0}. \label{kj}
\end{equation}%
This term has different signs for Dirichlet and non-Dirichlet boundary
conditions and is the same as that for a plate on the Minkowski bulk with $%
|\xi -j|$ being the distance from the plate.
In the limit $a\rightarrow b$ with the fixed values of the coefficients in
the boundary conditions and the mass, the 'interference' part (\ref{phi2int}%
) is divergent and for small values of $b/a-1$ the main contribution comes
from large values of $\omega $. Again, introducing an integration variable $%
x=k/\omega $ and replacing the modified Bessel functions by their uniform
asymptotic expansions, to the leading order one obtains%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(ab)}\approx \frac{(4\pi )^{-\frac{D}{2}}}{%
\Gamma \left( \frac{D}{2}\right) }\int_{0}^{\infty }dy\,y^{D-1}\frac{%
k_{a}e^{2y(a-\xi )}+k_{b}e^{2y(\xi -b)}+2}{k_{a}k_{b}e^{2y(b-a)}-1}.
\label{phi2closeab}
\end{equation}
Large values of the proper accelerations for the plates correspond to the
limit $a,b\rightarrow 0$. In this limit the plates are close to the Rindler
horizon. From formulas (\ref{phi21pl}), (\ref{phi21plb}), (\ref{phi2int}) we
see that for fixed values of the ratios $a/b$, $\xi /b$, both single plate
and 'interference' parts behave as $b^{1-D}$ in the limit $b\rightarrow 0$.
In the limit $a\rightarrow 0$ for fixed values $\xi $ and $b$, the left
plate tends to the Rindler horizon for a fixed world line of the right
plate. The main contribution to the $\omega $-integral in Eq. (\ref%
{phi2int}) comes from small values of $\omega $, $\omega \lesssim 1/\ln
(2/\lambda a)$. Using the formulas for the modified Bessel functions for
small arguments, it can be seen that the 'interference' part (\ref{phi2int})
vanishes as $\ln ^{-2}(2b/a)$.
Now we turn to the limit of small accelerations of the plates: $%
a,b\rightarrow \infty $ with fixed values $b-a$, $B_{j}/A_{j}$, and $m$. In
this case the main contribution comes from large values of $\omega $. Using
the uniform asymptotic formulas for the modified Bessel functions, the
following formula is obtained for the single plate parts:%
\begin{equation}
\left\langle \varphi ^{2}\right\rangle ^{(j)}\approx -\frac{(4\pi )^{-\frac{D%
}{2}}}{\Gamma \left( \frac{D}{2}\right) }\int_{m}^{\infty }dy\,\left(
y^{2}-m^{2}\right) ^{\frac{D}{2}-1}\frac{e^{-2y|\xi -j|}}{c_{j}(y)},
\label{phi21plclose}
\end{equation}%
with the notation%
\begin{equation}
c_{j}(y)=\frac{A_{j}-n^{(j)}B_{j}y}{A_{j}+n^{(j)}B_{j}y},\quad
n^{(a)}=1,\quad n^{(b)}=-1. \label{cjyn}
\end{equation}%
Similarly, for the 'interference' term we find
\begin{equation}
\langle \varphi ^{2}\rangle ^{(ab)}\approx \frac{(4\pi )^{-\frac{D}{2}}}{%
\Gamma \left( \frac{D}{2}\right) }\int_{m}^{\infty }dy\,\frac{\left(
y^{2}-m^{2}\right) ^{\frac{D}{2}-1}}{c_{a}(y)c_{b}(y)e^{2y(b-a)}-1}\left[
2-\sum_{j=a,b}\frac{e^{-2y|\xi -j|}}{c_{j}(y)}\right] . \label{phi2close}
\end{equation}%
Formulae (\ref{phi21plclose}) and (\ref{phi2close}) coincide with the
corresponding expressions for the geometry of two parallel plates on the
Minkowski bulk. In this limit, $\xi $ corresponds to the Cartesian
coordinate perpendicular to the plates which are located at $\xi =a$ and $%
\xi =b$.
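As a simple consistency check (ours), for a massless Dirichlet scalar in
$D=3$ one has $c_{j}(y)=1$ and the integral in Eq. (\ref{phi21plclose}) is
elementary, reproducing the familiar flat-space result
$\langle \varphi ^{2}\rangle ^{(j)}=-1/(16\pi ^{2}d^{2})$ with $d=|\xi -j|$,
in agreement with the near-plate asymptotic (\ref{phi2asnear}) for $k_{j}=-1$:
\begin{verbatim}
import sympy as sp

y, d = sp.symbols('y d', positive=True)
D = 3
prefactor = -(4 * sp.pi) ** sp.Rational(-D, 2) / sp.gamma(sp.Rational(D, 2))
integral = sp.integrate(y ** (D - 2) * sp.exp(-2 * y * d), (y, 0, sp.oo))
print(sp.simplify(prefactor * integral))    # -> -1/(16*pi**2*d**2)
\end{verbatim}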
For large values of the mass, $ma\gg 1$, we introduce in (\ref{phi21pl}) and
(\ref{phi21plb}) a new integration variable $y=\lambda /m$. The main
contribution to the $\omega $-integral comes from the values $\omega \sim
\sqrt{ma}$. By using the uniform asymptotic expansions for the modified
Bessel functions for large values of the order and further expanding over $%
\omega /ma$, for the single plate parts to the leading order one finds%
\begin{equation}
\left\langle \varphi ^{2}\right\rangle ^{(j)}\approx -\frac{m^{\frac{D}{2}%
-1}e^{-2m|\xi -j|}\sqrt{j/\xi }}{2(4\pi )^{\frac{D}{2}}c_{j}(m)|\xi -j|^{%
\frac{D}{2}}}, \label{phi2largemass}
\end{equation}%
for $j=a,b$. By the similar way, for the 'interference' part we obtain the
formula%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(ab)}\approx \frac{m^{\frac{D}{2}-1}e^{2m(a-b)}%
\sqrt{ab}}{(4\pi )^{\frac{D}{2}}\xi c_{a}(m)c_{b}(m)(b-a)^{\frac{D}{2}}}.
\label{phi2largemassab}
\end{equation}%
As one could expect, both the single plate and 'interference' parts are
exponentially suppressed for large values of the mass.
\subsection{VEV of the energy-momentum tensor}
By using the field equation it can be seen that the expression for the
energy-momentum tensor of the scalar field under consideration can be
presented in the form
\begin{equation}
T_{ik}=\nabla _{i}\varphi \nabla _{k}\varphi +\left[ \left( \zeta -\frac{1}{4%
}\right) g_{ik}\nabla _{l}\nabla ^{l}-\zeta \nabla _{i}\nabla _{k}\right]
\varphi ^{2}, \label{EMT2}
\end{equation}%
and the corresponding trace is equal to
\begin{equation}
T_{i}^{i}=D(\zeta -\zeta _{c})\nabla _{i}\nabla ^{i}\varphi
^{2}+m^{2}\varphi ^{2}. \label{trace}
\end{equation}%
Here $\zeta _{c}\equiv (D-1)/4D$ corresponds to the conformal coupling. By
virtue of Eq. (\ref{EMT2}), the VEV of the energy-momentum tensor is
expressed in terms of the Wightman function as
\begin{equation}
\langle 0\left\vert T_{ik}(x)\right\vert 0\rangle =\lim_{x^{\prime
}\rightarrow x}\nabla _{i}\nabla _{k}^{\prime }G^{+}(x,x^{\prime })+\left[
\left( \zeta -\frac{1}{4}\right) g_{ik}\nabla _{l}\nabla ^{l}-\zeta \nabla
_{i}\nabla _{k}\right] \langle 0\left\vert \varphi ^{2}(x)\right\vert
0\rangle . \label{vevemtW}
\end{equation}%
Making use of the formulas for the Wightman function and the field square, one
obtains two equivalent forms, corresponding to $j=a$ and $j=b$ (no summation
over $i$):
\begin{eqnarray}
\langle 0|T_{i}^{k}|0\rangle &=&\langle 0_{R}|T_{i}^{k}|0_{R}\rangle
+\langle T_{i}^{k}\rangle ^{(j)}-A_{D}\delta _{i}^{k}\int dk\,k^{D-2} \notag
\\
&&\times \lambda ^{2}\int_{0}^{\infty }d\omega \,\Omega _{j\omega }(\lambda
a,\lambda b)F^{(i)}\left[ Z_{\omega }^{(j)}(\lambda \xi ,\lambda j)\right] .
\label{Tik1}
\end{eqnarray}%
In this formula,
\begin{equation}
\langle 0_{R}|T_{i}^{k}|0_{R}\rangle =\delta _{i}^{k}\frac{A_{D}}{\pi }%
\int_{0}^{\infty }dkk^{D-2}\lambda ^{2}\int_{0}^{\infty }d\omega \sinh \pi
\omega \,f^{(i)}[K_{i\omega }(\lambda \xi )] \label{DFR}
\end{equation}%
is the corresponding VEV for the Fulling--Rindler vacuum without boundaries,
and the terms (no summation over $i$)
\begin{subequations}
\label{D1plateboundgen}
\begin{eqnarray}
\langle T_{i}^{k}\rangle ^{(a)} &=&-A_{D}\delta _{i}^{k}\int_{0}^{\infty
}dkk^{D-2}\lambda ^{2}\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega
}^{(a)}(\lambda a)}{\bar{K}_{\omega }^{(a)}(\lambda a)}F^{(i)}[K_{\omega
}(\lambda \xi )], \label{D1platebound} \\
\langle T_{i}^{k}\rangle ^{(b)} &=&-A_{D}\delta _{i}^{k}\int_{0}^{\infty
}dkk^{D-2}\lambda ^{2}\int_{0}^{\infty }d\omega \frac{\bar{K}_{\omega
}^{(b)}(\lambda b)}{\bar{I}_{\omega }^{(b)}(\lambda b)}F^{(i)}[I_{\omega
}(\lambda \xi )], \label{D1plateboundb}
\end{eqnarray}%
\end{subequations}%
are induced by the presence of a single plane boundary located at $\xi =a$
or $\xi =b$ in the regions $\xi >a$ and $\xi <b$, respectively. In formulas (%
\ref{Tik1}), (\ref{D1platebound}), (\ref{D1plateboundb}) for a given
function $g(z)$ we use the notations
\begin{eqnarray}
F^{(0)}[g(z)] &=&\left( \frac{1}{2}-2\zeta \right) \left[ \left( \frac{dg(z)%
}{dz}\right) ^{2}+\left( 1+\frac{\omega ^{2}}{z^{2}}\right) g^{2}(z)\right] +%
\frac{\zeta }{z}\frac{d}{dz}g^{2}(z)-\frac{\omega ^{2}}{z^{2}}g^{2}(z),
\label{f0} \\
F^{(1)}[g(z)] &=&-\frac{1}{2}\left( \frac{dg(z)}{dz}\right) ^{2}-\frac{\zeta
}{z}\frac{d}{dz}g^{2}(z)+\frac{1}{2}\left( 1+\frac{\omega ^{2}}{z^{2}}%
\right) g^{2}(z), \label{f1} \\
F^{(i)}[g(z)] &=&\left( \frac{1}{2}-2\zeta \right) \left[ \left( \frac{dg(z)%
}{dz}\right) ^{2}+\left( 1+\frac{\omega ^{2}}{z^{2}}\right) g^{2}(z)\right] -%
\frac{g^{2}(z)}{D-1}\frac{k^{2}}{\lambda ^{2}}, \label{f23}
\end{eqnarray}%
where $i=2,\ldots ,D$ and the indices 0,1 correspond to the coordinates $%
\tau $, $\xi $, respectively. For the last term on the right of Eq. (\ref%
{Tik1}) we have to substitute $g(z)=Z_{\omega }^{(j)}(z,\lambda j)$. The
expressions for the functions $f^{(i)}[g(z)]$ in (\ref{DFR}) are obtained
from the corresponding expressions for $F^{(i)}[g(z)]$ by the replacement $%
\omega \rightarrow i\omega $. It can be easily seen that for a conformally
coupled massless scalar the energy-momentum tensor is traceless.
The purely Fulling-Rindler part (\ref{DFR}) of the energy-momentum tensor is
investigated in a large number of papers (see, for instance, references
given in \cite{Avag02}). The most general case of a massive scalar field in
an arbitrary number of spacetime dimensions has been considered in Ref. \cite%
{Hill} for conformally and minimally coupled cases and in Ref. \cite{Saha02}
for general values of the curvature coupling parameter. For a massless
scalar the VEV for the Rindler part without boundaries can be presented in
the form
\begin{eqnarray}
\langle T_{i}^{k}\rangle _{\mathrm{sub}}^{(R)} &=&\langle
0_{R}|T_{i}^{k}|0_{R}\rangle -\langle 0_{M}|T_{i}^{k}|0_{M}\rangle \notag \\
&=&-\frac{2\delta _{i}^{k}\xi ^{-D-1}}{(4\pi )^{\frac{D}{2}}\Gamma \left(
\frac{D}{2}\right) }\int_{0}^{\infty }\frac{\omega ^{D}g^{(i)}(\omega
)d\omega }{e^{2\pi \omega }+(-1)^{D}}\,, \label{subRindm0}
\end{eqnarray}%
where the expressions for the functions $g^{(i)}(\omega )$ are presented in
Ref. \cite{Saha02}, and $\left. |0_{M}\right\rangle $ is the amplitude for
the Minkowski vacuum without boundaries. Expression (\ref{subRindm0})
corresponds to the absence from the vacuum of a thermal distribution with
the standard temperature $T=(2\pi \xi )^{-1}$. In general, the corresponding
spectrum has non-Planckian form: the density of states factor is not
proportional to $\omega ^{D-1}d\omega $. The spectrum takes the Planckian
form for conformally coupled scalars in $D=1,2,3$ with $g^{(0)}(\omega
)=-Dg^{(i)}(\omega )=1$, $i=1,2,\ldots ,D$. It is of interest to note that
for even values of spatial dimension the distribution is Fermi-Dirac type
(see also \cite{Taga85,Oogu86}). For the massive scalar the energy spectrum
is not strictly thermal and the corresponding quantities do not coincide
with those for the Minkowski thermal bath.
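For the conformally coupled massless scalar in $D=3$ this thermal
interpretation is easily verified numerically (our own check): with
$g^{(0)}=1$ the frequency integral in Eq. (\ref{subRindm0}) equals
$\Gamma (4)\zeta (4)/(2\pi )^{4}=1/240$, so that the subtracted energy
density is $-1/(480\pi ^{2}\xi ^{4})$, i.e. minus that of a thermal bath
at the temperature $T=(2\pi \xi )^{-1}$:
\begin{verbatim}
import mpmath as mp

integral = mp.quad(lambda w: w**3 / (mp.exp(2 * mp.pi * w) - 1), [0, mp.inf])
print(mp.nstr(integral, 8), mp.nstr(mp.mpf(1) / 240, 8))   # equal

prefactor = -2 / ((4 * mp.pi) ** mp.mpf('1.5') * mp.gamma(mp.mpf('1.5')))
print(mp.nstr(prefactor * integral, 8))     # coefficient of xi^{-4}
print(mp.nstr(-1 / (480 * mp.pi**2), 8))    # -1/(480 pi^2), the same
\end{verbatim}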
The boundary induced quantities (\ref{D1platebound}), (\ref{D1plateboundb})
are investigated in Ref. \cite{Candelas} for a conformally coupled $D=3$
massless Dirichlet scalar in the region to the right of a single plate and
in Ref. \cite{Saha02} for a massive scalar with general curvature coupling
and Robin boundary condition in an arbitrary number of dimensions in both
regions. The single boundary parts diverge at the plate surfaces $\xi =j$, $%
j=a,b$. Near the plates the leading terms of the corresponding asymptotic
expansions have the form (no summation over $i$)
\begin{equation}
\langle T_{i}^{i}\rangle ^{(j)}\approx \frac{Dj\langle T_{1}^{1}\rangle
^{(j)}}{(D-1)(j-\xi )}\approx \frac{D(\zeta _{c}-\zeta )\Gamma \left( \frac{%
D+1}{2}\right) }{2^{D}\pi ^{\frac{D+1}{2}}|\xi -j|^{D+1}}k_{j},
\label{1basymppe}
\end{equation}%
with $i=0,2,\ldots ,D$, and $k_{j}$ is defined by Eq. (\ref{kj}). These
leading terms vanish for a conformally coupled scalar and coincide with the
corresponding quantities for a plane boundary in the Minkowski vacuum.
Now let us present the VEV (\ref{Tik1}) in the form%
\begin{equation}
\langle 0|T_{i}^{k}|0\rangle =\langle 0_{R}|T_{i}^{k}|0_{R}\rangle
+\sum_{j=a,b}\langle T_{i}^{k}\rangle ^{(j)}+\langle T_{i}^{k}\rangle
^{(ab)}, \label{Tikdecomp}
\end{equation}%
where (no summation over $i$)
\begin{eqnarray}
\langle T_{i}^{k}\rangle ^{(ab)} &=&-A_{D}\delta _{i}^{k}\int_{0}^{\infty
}dk\,k^{D-2}\lambda ^{2}\int_{0}^{\infty }d\omega \bar{I}_{\omega
}^{(a)}(\lambda a) \notag \\
&&\times \left[ \frac{F^{(i)}[Z_{\omega }^{(b)}(\lambda \xi ,\lambda b)]}{%
\bar{I}_{\omega }^{(b)}(\lambda b)Z_{\omega }(\lambda a,\lambda b)}-\frac{%
F^{(i)}[K_{\omega }(\lambda \xi )]}{\bar{K}_{\omega }^{(a)}(\lambda a)}%
\right] \label{intterm1}
\end{eqnarray}%
is the 'interference' term. The surface divergences are contained in the
single boundary parts and this term is finite for all values $a\leq \xi \leq
b$. An equivalent formula for $\langle T_{i}^{k}\rangle ^{(ab)}$ is obtained
from Eq. (\ref{intterm1}) by replacements (\ref{replacement}).
Both single plate and 'interference' parts separately satisfy the standard
continuity equation for the energy-momentum tensor, which for the geometry
under consideration takes the form
\begin{equation}
\frac{d(\xi \left\langle T_{1}^{1}\right\rangle )}{d\xi }=\left\langle
T_{0}^{0}\right\rangle . \label{rel}
\end{equation}%
For a conformally coupled massless scalar field both parts are traceless
and we have an additional relation $\left\langle T_{i}^{i}\right\rangle =0$.
In the limit $a\rightarrow b$ expression (\ref{intterm1}) is divergent and
for small values of $b/a-1$ the main contribution comes from the large
values of $\omega $. Introducing a new integration variable $x=k/\omega $
and replacing the modified Bessel functions by their uniform asymptotic
expansions for large values of the order, to the leading order one obtains%
\begin{eqnarray}
\langle T_{i}^{i}\rangle ^{(ab)} &\approx &-\frac{(4\pi )^{-\frac{D}{2}}}{%
\Gamma \left( \frac{D}{2}+1\right) }\int_{0}^{\infty }dy\frac{y^{D}}{%
k_{a}k_{b}e^{2y(b-a)}-1} \notag \\
&&\times \left[ 1+2D\left( 1-\delta _{1}^{i}\right) (\zeta -\zeta
_{c})\sum_{j=a,b}k_{j}e^{-2y|\xi -j|}\right] . \label{Tiiclose}
\end{eqnarray}
In the limit of large proper accelerations for the plates, $a,b\rightarrow 0$%
, for fixed values $a/b$ and $\xi /b$, the world lines of both plates are
close to the Rindler horizon. In this case the single plate and
'interference' parts grow as $b^{-D-1}$. The situation is essentially
different when the world line of the left plate tends to the Rindler
horizon, $a\rightarrow 0$, whereas $b$ and $\xi $ are fixed. In a way
similar to that for the case of the field square, it can be seen that in
this limit the 'interference' part (\ref{intterm1}) vanishes as $\ln
^{-2}(2b/a)$.
In the limit of small proper accelerations, $a,b\rightarrow \infty $ with
fixed values $b-a$, $B_{j}/A_{j}$, and $m$, the main contribution comes from
large values of $\omega $. Using the asymptotic formulas for the modified
Bessel functions, to the leading order one obtains (no summation over $i$)
\begin{eqnarray}
\langle T_{i}^{i}\rangle ^{(j)} &\approx &\frac{4\left( 1-\delta
_{1}^{i}\right) }{(4\pi )^{\frac{D}{2}}\Gamma \left( \frac{D}{2}\right) }%
\int_{m}^{\infty }dy\,(y^{2}-m^{2})^{\frac{D}{2}} \notag \\
&&\times \frac{e^{-2y|\xi -j|}}{c_{j}(y)}\left( \zeta -\zeta _{c}+\frac{%
\zeta -1/4}{y^{2}-m^{2}}m^{2}\right) , \label{EMT1plMink}
\end{eqnarray}%
for the single boundary terms, and%
\begin{eqnarray}
\langle T_{i}^{i}\rangle ^{(ab)} &\approx &-\frac{(4\pi )^{-\frac{D}{2}}}{%
\Gamma \left( \frac{D}{2}+1\right) }\int_{m}^{\infty }dy\frac{(y^{2}-m^{2})^{%
\frac{D}{2}}}{c_{a}(y)c_{b}(y)e^{2y(b-a)}-1} \notag \\
&&\times \left[ 1-\left( 1-\delta _{1}^{i}\right) \frac{4D(\zeta -\zeta
_{c})y^{2}-m^{2}}{2(y^{2}-m^{2})}\sum_{j=a,b}\frac{e^{-2y|\xi -j|}}{c_{j}(y)}%
\right] , \label{D2Mink0}
\end{eqnarray}%
for the 'interference' term and with the function $c_{j}(y)$ defined by (\ref%
{cjyn}). These expressions are exactly the same as the corresponding
expressions for the geometry of two parallel plates on the Minkowski
background investigated in \cite{Rome02} for a massless scalar and in Ref.
\cite{Mate} for the massive case. In particular, the single boundary terms
vanish for a conformally coupled massless scalar.
In the large mass limit, $ma\gg 1$, by the method similar to that used in
the previous subsection for the field square, it can be seen that both the
single plate and 'interference' parts are exponentially suppressed (no
summation over $i$): $\langle T_{i}^{i}\rangle ^{(j)}\sim m^{D/2+1}\exp
[-2m|\xi -j|]$, $j=a,b$, for single plate parts and $\langle
T_{i}^{i}\rangle ^{(ab)}\sim m^{D/2+1}\exp [2m(a-b)]$ for the 'interference'
part.
\section{'Interaction' forces between the plates}
\label{sec:IntForce}
Now we turn to the 'interaction' forces between the plates due to
the vacuum fluctuations. The vacuum force acting per unit surface of
the plate at $\xi =j$ is determined by the ${}_{1}^{1}$--component
of the vacuum energy-momentum tensor evaluated at this point. The
corresponding effective pressures can be presented as a sum of two
terms:
\begin{equation}
p^{(j)}=p_{1}^{(j)}+p_{\mathrm{(int)}}^{(j)},\quad j=a,b. \label{FintD}
\end{equation}%
The first term on the right is the pressure for a single plate at
$\xi =j$ when the second plate is absent. This term is divergent due
to the surface divergences in the subtracted vacuum expectation
values and needs additional renormalization. This can be done, for
example, by applying the generalized zeta function technique to the
corresponding mode sum. This procedure is similar to that used in
Ref. \cite{Saha04} for the evaluation of the total Casimir energy in
the cases of Dirichlet and Neumann boundary conditions and in Ref.
\cite{SahSet04b} for the evaluation of the surface energy for a
single Robin plate. This calculation proceeds along the same lines as the
evaluation of the total Casimir energy and surface densities and
will be presented in the forthcoming paper \cite{SahaDav}. Note that
in the formulae for the VEV of the energy-momentum tensor the Robin
coefficients enter in the form of the dimensionless combination
$\beta _j=B_j/(jA_j)$. As a result in the massless case from the
dimensional arguments we expect that the single plate part will have
the form $p_{1}^{(j)}=\alpha (\beta _j)j^{-(D+1)}$. The coefficient
$\alpha (\beta _j)$ in this formula will change if we change
the renormalization scale and can be fixed by imposing suitable
renormalization conditions which relate it to observables.
The second term on the right of Eq. (\ref{FintD}),
\begin{equation}
p_{\mathrm{(int)}}^{(j)}=-\left[ \langle T_{1}^{1}\rangle ^{(l)}+\langle
T_{1}^{1}\rangle ^{(ab)}\right] _{\xi =j}, \label{pintD}
\end{equation}%
with $j,l=a,b$, $l\neq j$, is the pressure induced by the presence
of the second plate, and can be termed an 'interaction' force.
This term is finite for all nonzero distances between the plates and
is not affected by the renormalization procedure. Note that the term
'interaction' here should be understood conditionally. The quantity
$p_{\mathrm{(int)}}^{(j)}$ determines the force by which the scalar
vacuum acts on the plate due to the modification of the spectrum for
the zero-point fluctuations by the presence of the second plate. As
the vacuum properties are $\xi $-dependent, there is no a priori
reason for the 'interaction' terms (and also for the total pressures
$p^{(j)}$) to be equal for $j=a$ and $j=b$, and the corresponding
forces in general are different. For the plate at $\xi =j$ the
'interaction' term is due to the third summand on the right of Eq.
(\ref{Tik1}). Substituting into this term $\xi =j$ and using the
Wronskian for the modified Bessel functions one has
\begin{equation}
p_{\mathrm{(int)}}^{(j)}=\frac{A_{D}A_{j}^{2}}{2j^{2}}\int_{0}^{\infty
}dk\,k^{D-2}\int_{0}^{\infty }d\omega \left[ \left( \lambda ^{2}j^{2}+\omega
^{2}\right) \beta _{j}^{2}+4\zeta \beta _{j}-1\right] \,\Omega _{j\omega
}(\lambda a,\lambda b), \label{pint2}
\end{equation}%
with $\beta _j$ defined in the paragraph after formula
(\ref{FintD}). Depending on the values of the coefficients in
the boundary conditions, the effective pressures (\ref{pint2}) can
be either positive or negative, leading to repulsive or attractive
forces. It can be seen that for Dirichlet boundary condition on
one plate and Neumann boundary condition on the other one has $p_{\mathrm{%
(int)}}^{(j)}>0$ and the 'interaction' forces are repulsive for all
distances between the plates. Note that for Dirichlet or Neumann
boundary conditions on both plates the 'interaction' forces are
always attractive \cite{Avag02}.
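The sign structure noted above can be made explicit by a short numerical
evaluation (ours): in the small-separation (Minkowski) limit the $i=1$
component of Eq. (\ref{Tiiclose}) is proportional to the dimensionless
integral $J(k_{a},k_{b})=\int_{0}^{\infty }y^{D}dy/(k_{a}k_{b}e^{2y}-1)$,
which changes sign when a Dirichlet condition ($k_{j}=-1$) is combined with
a non-Dirichlet one ($k_{j}=+1$):
\begin{verbatim}
import mpmath as mp

def J(ka, kb, D=3):
    return mp.quad(lambda y: y**D / (ka * kb * mp.exp(2 * y) - 1),
                   [0, mp.inf])

print(mp.nstr(J(-1, -1), 6))  # Dirichlet-Dirichlet:  0.405871 = pi^4/240
print(mp.nstr(J(+1, +1), 6))  # Neumann-Neumann:      the same value
print(mp.nstr(J(-1, +1), 6))  # Dirichlet-Neumann:   -0.355137 = -7 pi^4/1920
\end{verbatim}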
By using the relation%
\begin{equation}
\left[ \left( \lambda ^{2}j^{2}+\omega ^{2}\right) \beta _{j}^{2}+\beta
_{j}-1\right] \,A_{j}^{2}\Omega _{j\omega }(\lambda a,\lambda b)=n^{(j)}j%
\frac{\partial }{\partial j}\ln \left\vert 1-\frac{\bar{I}_{\omega
}^{(a)}(\lambda a)\bar{K}_{\omega }^{(b)}(\lambda b)}{\bar{I}_{\omega
}^{(b)}(\lambda b)\bar{K}_{\omega }^{(a)}(\lambda a)}\right\vert ,
\label{pintrel2}
\end{equation}%
with $n^{(j)}$ from (\ref{cjyn}), expressions (\ref{pint2}) for the
'interaction' forces can be written in another equivalent form
\begin{eqnarray}
p_{\mathrm{(int)}}^{(j)} &=&n^{(j)}\frac{A_{D}}{2j}\int_{0}^{\infty
}dk\,k^{D-2}\int_{0}^{\infty }d\omega \left[ 1+\frac{\left( 4\zeta -1\right)
\beta _{j}}{\left( \lambda ^{2}j^{2}+\omega ^{2}\right) \beta _{j}^{2}+\beta
_{j}-1}\right] \notag \\
&&\times \frac{\partial }{\partial j}\ln \left\vert 1-\frac{\bar{I}_{\omega
}^{(a)}(\lambda a)\bar{K}_{\omega }^{(b)}(\lambda b)}{\bar{I}_{\omega
}^{(b)}(\lambda b)\bar{K}_{\omega }^{(a)}(\lambda a)}\right\vert .
\label{pint3}
\end{eqnarray}%
For Dirichlet and Neumann scalars the second term in the square
brackets is zero. To clarify the dependence of the vacuum
'interaction' forces on the
parameters $a,b$ it is useful to write down the corresponding derivatives:%
\begin{eqnarray}
n^{(j)}\frac{\partial p_{\mathrm{(int)}}^{(j)}}{\partial l} &=&\frac{%
A_{D}A_{a}^{2}A_{b}^{2}}{2abj}\int_{0}^{\infty }dk\,k^{D-2}\int_{0}^{\infty
}d\omega \left[ \left( \lambda ^{2}j^{2}+\omega ^{2}\right) \beta
_{j}^{2}+4\zeta \beta _{j}-1\right] \notag \\
&&\times \frac{\left( \lambda ^{2}l^{2}+\omega ^{2}\right) \beta
_{l}^{2}+\beta _{l}-1}{Z_{\omega }^{2}(\lambda a,\lambda b)},
\label{pintder}
\end{eqnarray}%
with $j,l=a,b$, $j\neq l$.
Now we consider the limiting cases for the 'interaction' forces
between the plates. For small distances between the plates,
$b/a-1\ll 1$, to the leading order over $1/(b-a)$, the 'interaction'
forces are the same as for the plates
in the Minkowski bulk with the separation $b-a$. The latter are determined by the $%
i=1$ component of the tensor (\ref{D2Mink0}).
'interaction' forces are repulsive in the case of Dirichlet boundary
condition on one plate and non-Dirichlet boundary condition on the
other, and are attractive in all other cases. Note that in the
limit $b\to a$ with fixed values of the boundary coefficients and
the proper acceleration of the left plate, $a^{-1}$, the
renormalized single plate parts $p_1^{(j)}$ remain finite while the
'interaction' part goes to infinity. This means that for
sufficiently small distances between the plates the 'interaction'
term on the right of formula (\ref{FintD}) will dominate.
For large distances between the plates one has $a/b\ll 1$.
Introducing a new integration variable $y=\lambda b$ and using the
asymptotic formulas for the modified Bessel functions for small
values of the argument, we can see that the integrand is proportional to $%
(ya/b)^{2\omega }$. It follows that the main contribution to the
$\omega $-integral comes from small values of $\omega $. Expanding with
respect to $\omega $, in the leading order we obtain
\begin{subequations}
\label{pintas2gen}
\begin{eqnarray}
p_{\mathrm{(int)}}^{(a)} &\approx &\frac{\pi ^{2}A_{D}\left( 1-4\zeta \beta
_{a}\right) A_{b}^{2}}{24(D-1)a^{2}b^{D-1}\ln ^{3}(2b/a)}\int_{mb}^{\infty
}dy\,\left( y^{2}-m^{2}b^{2}\right) ^{\frac{D-1}{2}}\frac{y^{2}\beta
_{b}^{2}-1}{y\bar{I}_{0}^{(b)2}(y)}, \label{pintas2a} \\
p_{\mathrm{(int)}}^{(b)} &\approx &\frac{\pi ^{2}A_{D}A_{b}^{2}}{%
24b^{D+1}\ln ^{2}(2b/a)}\int_{mb}^{\infty }dy\,y\left(
y^{2}-m^{2}b^{2}\right) ^{\frac{D-3}{2}}\frac{y^{2}\beta _{b}^{2}+4\zeta
\beta _{b}-1}{\bar{I}_{0}^{(b)2}(y)}. \label{pintas2b}
\end{eqnarray}%
\end{subequations}%
For a massless minimally coupled scalar field these pressures have
the same sign. In Figure \ref{fig2} we have plotted the
vacuum 'interaction' forces between the plates as functions
of the ratio $a/b$ for a
massless minimally coupled scalar field in $D=3$ with Robin coefficients $%
\beta _{a}=0$ and $\beta _{b}=1/5$. These forces are repulsive for
small distances and attractive for large distances; in the
presented example there is a value of the ratio $a/b$ at which the
'interaction' forces vanish.
\begin{figure}[tbph]
\begin{center}
\epsfig{figure=Sahafig2.eps,width=7cm,height=5.6cm}
\end{center}
\caption{The vacuum effective pressures $a^{D+1}p_{\mathrm{(int)}}^{(j)}$, $%
j=a,b$, determining the 'interaction' forces between the plates, as
functions of $a/b$ for a massless minimally coupled scalar field in
$D=3$ with Robin coefficients $\protect\beta _{a}=0$ and
$\protect\beta _{b}=1/5$.} \label{fig2}
\end{figure}
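The large-separation asymptotics (\ref{pintas2gen}) are straightforward to
evaluate numerically. The sketch below (ours) computes the $y$-integral in
Eq. (\ref{pintas2b}) for the parameters of Figure \ref{fig2} ($m=0$,
$\zeta =0$, $D=3$, $\beta _{b}=1/5$); the overall factor $A_{b}^{2}$ cancels
inside $\bar{I}_{0}^{(b)2}$. The result is negative, in line with the
attraction at large separations seen in the figure:
\begin{verbatim}
import mpmath as mp

beta_b = mp.mpf(1) / 5

def integrand(y):
    # Ibar_0^{(b)}(y)/A_b = I_0(y) + beta_b * y * I_1(y), since I_0' = I_1
    Ibar0 = mp.besseli(0, y) + beta_b * y * mp.besseli(1, y)
    return y * (y**2 * beta_b**2 - 1) / Ibar0**2

val = mp.quad(integrand, [0, 5, mp.inf])   # split at y = 5 for stability
print(mp.nstr(val, 6))                     # negative
\end{verbatim}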
For large values of the mass, $ma\gg 1$, the main contribution to the $%
\omega $-integral comes from the values $\omega \sim \sqrt{m(b-a)}$. By
using the uniform asymptotic expansions for the modified Bessel functions,
to the leading order one finds
\begin{equation}
p_{\mathrm{(int)}}^{(j)}\approx \frac{B_{j}^{2}m^{\frac{D}{2}+3}e^{2m(a-b)}%
\sqrt{ab}}{(4\pi )^{\frac{D}{2}}c_{l}(m)[A_{j}-n^{(j)}B_{j}m]^{2}j(b-a)^{%
\frac{D}{2}}}, \label{pjlargemass}
\end{equation}%
for $B_{j}\neq 0$, $j,l=a,b$, $j\neq l$. For $B_{j}=0$ the leading term is
the same as in the case $A_{j}=0$; the latter is obtained directly from Eq. (\ref%
{pjlargemass}).
\section{Conclusion}
\label{sec:Conc}
The use of general coordinate transformations in quantum field
theory in flat spacetime leads to an infinite number of unitary
inequivalent representations of the commutation relations with
different vacuum states. In particular, the vacuum state for a
uniformly accelerated observer, the Fulling--Rindler vacuum, turns
out to be inequivalent to that for an inertial observer, the
Minkowski vacuum. In the present paper we have considered the
positive frequency Wightman function, the VEVs of the field square
and the energy-momentum tensor for a scalar field in the region
between two infinite parallel plates moving by uniform proper
accelerations, assuming that the field is prepared in the
Fulling-Rindler vacuum state and satisfies the Robin boundary
conditions on the plates. The general case is investigated when
the constants in the Robin boundary conditions are different for the
two plates. The boundaries and boundary conditions are static
in the Rindler coordinates and no Rindler quanta are created. The
only effect of the imposition of boundary conditions on a quantum
field is the vacuum polarization. The Wightman function is
presented in the form of the mode sum involving series over zeros
$\omega =\omega _{n}$ of the
function $Z_{i\omega }(\lambda a,\lambda b)$ defined by relation (\ref%
{Zomega}). For the summation of these series we have applied a summation
formula derived in Appendix \ref{section:App1} by using the generalized
Abel-Plana formula. This allowed us to extract from the Wightman function the
part due to a single plate and to present the additional part in terms of
integrals, exponentially convergent in the coincidence limit. The single
plate part was investigated previously in Ref. \cite{Saha02}. The contribution
induced by the second boundary is presented in two alternative forms, Eqs. (%
\ref{Wigh3}), (\ref{Wigh31}), obtained from each other by replacements (\ref%
{replacement}). In Section \ref{sec:VEVEMT}, by using the expression
for the Wightman function, we evaluate the VEVs of the field square
and the energy-momentum tensor. The latter is diagonal and the
corresponding components are determined by relation (\ref{Tik1}).
Various limiting cases are studied. In the limit of small distances
between the plates, to the leading order, the VEVs are the same as
those for two parallel plates in the Minkowski vacuum. In the near
horizon limit, $a,b\rightarrow 0$, the proper accelerations of the
plates are large. For fixed values $a/b$ and $\xi /b$, the VEVs grow
as $b^{1-D}$ for the field square and as $b^{-1-D}$ for the
components of the energy-momentum tensor. In the limit when the
world line of the left plate tends to the Rindler horizon,
$a\rightarrow 0$, for fixed proper accelerations of the right
plate and the observer, the VEVs induced by the left plate vanish as
$\ln ^{-2}(2b/a)$ for both the field square and energy-momentum
tensor. For large values of the mass, both the single plate and
'interference' parts of the VEVs are exponentially suppressed. The
vacuum forces acting on boundaries are determined by
$_{1}^{1}$-component of the stress and are investigated in Section
\ref{sec:IntForce}. These forces are presented as the sums of two
terms. The first ones correspond to the forces acting on a single
boundary when the second boundary is absent. Due to the surface
divergences in the VEVs of the energy-momentum tensor, these forces
are infinite and need an additional renormalization. The other
terms in the vacuum forces are finite and are induced by the
presence of the second boundary. They correspond to the
'interaction' forces between the plates. These forces per unit
surface are determined by formula (\ref{pintD}). For small distances
between the plates, to the leading order the standard Casimir result
on background of the Minkowski vacuum is rederived. In this limit
the 'interaction' forces are repulsive in the case of Dirichlet
boundary condition on one plate and non-Dirichlet boundary condition
on the other, and are attractive in all other cases. For large
distances, the 'interaction' forces can be either attractive or
repulsive, depending on the coefficients in the boundary
conditions. In Figure \ref{fig2} we have presented an example when
the vacuum 'interaction' forces are repulsive for small distances
and are attractive for large distances. This provides a possibility
for the stabilization of the interplate distance by vacuum forces.
However, it should be noted that to make reliable predictions
regarding quantum stabilization, the renormalized single plate parts
$p_1^{(j)}$ should also be taken into account. The calculation of
these quantities proceeds along the same lines as the evaluation of the
total Casimir energy and surface densities and will be presented in
the forthcoming paper \cite{SahaDav}.
In the present paper we have investigated the VEV of the bulk
energy-momentum tensor. For scalar fields with general curvature
coupling and Robin boundary conditions, it has been shown in Ref. \cite{Rome02}
that, for the Robin parallel plates geometry, in relating the mode
sum energy, evaluated as the sum of the zero-point energies of the
normal modes, to the volume integral of the
renormalized energy density,
it is necessary to include in the energy a surface
term concentrated on the boundary (see also the discussion in Refs. \cite%
{Full03,Milt04}). Similar issues for the spherical and cylindrical boundary
geometries and in braneworld scenarios are discussed in Refs. \cite%
{Saha01,Rome01,Saha04d}. An expression for the surface energy-momentum
tensor for a scalar field with a general curvature coupling parameter in the
general case of bulk and boundary geometries is derived in Ref. \cite%
{Saha04c}. The investigation of the total Casimir energy, the
surface densities, and the energy balance for the geometry under
consideration will be reported in \cite{SahaDav}.
The formulas derived in this paper can be used to generate the vacuum
densities for a conformally coupled massless scalar field in de Sitter
spacetime in the presence of two curved branes on which the field obeys the
Robin boundary conditions with coordinate dependent coefficients. The
corresponding procedure is similar to that realized in Ref. \cite{SahSet04}
for the geometry of a single brane and is based on the conformal relation
between the Rindler and de Sitter line elements. The results obtained above
can be also applied to the geometry of two parallel plates near the $D=3$
\textquotedblright Rindler wall.\textquotedblright\ This wall is described
by the static plane-symmetric distribution of the matter with the diagonal
energy-momentum tensor $T_{i}^{k}=\mathrm{diag}(\varepsilon
_{m},-p_{m},-p_{m},-p_{m})$ (see Ref. \cite{Avak01}). Below we will denote
by $x$ the coordinate perpendicular to the wall and will assume that the
plane $x=0$ is at the center of the wall. If the plane $x=x_{s}$ is the
boundary of the wall, then the external ($x>x_{s}$) line element with the
time coordinate $t$ can be transformed into the form (\ref{metric}) with
\begin{equation}
\xi (x)=x-x_{s}+\frac{1}{2\pi \sigma _{s}},\quad \tau =2\pi \sigma _{s}\sqrt{%
g_{00}(x_{s})}t. \label{ksiRw}
\end{equation}%
In this formula the parameter $\sigma _{s}$ is the mass per unit surface of
the wall and is determined by the distribution of the matter:%
\begin{equation}
\sigma _{s}=2\int_{0}^{x_{s}}\left( \varepsilon _{m}+3p_{m}\right) \left[
\frac{g(x)}{g(x_{s})}\right] ^{1/2}dx. \label{sigmas1}
\end{equation}%
For the \textquotedblright Rindler wall\textquotedblright\ one has $%
g_{22}^{\prime }(x)|_{x=0}<0$ \cite{Avak01} (the external solution for the
case $g_{22}^{\prime }(x)|_{x=0}>0$ is described by the standard Taub
metric). Hence, the Wightman function, the VEVs for the field square and the
energy-momentum tensor in the region between two plates located at $x=x_{1}$
and $x=x_{2}$, $x_{i}>x_{s}$ near the \textquotedblright Rindler
wall\textquotedblright\ are obtained from the results given above by
substituting $\xi _{i}=\xi (x_{i})$, $i=1,2$, and $\xi =\xi (x)$. For $\sigma
_{s}>0$, $x\geq x_{s}$ one has $\xi (x)\geq \xi (x_{s})>0$ and the Rindler
metric is regular everywhere in the external region.
\section{Acknowledgements}
The authors are grateful to Armen Yeranyan for useful discussions. This work
was supported by the Armenian National Science and Education Fund (ANSEF)
Grant No. 05-PS-hepth-89-70 and by the Armenian Ministry of Education and
Science Grant No. 0124.
\section{Introduction}
\noindent Dirac quantization of first class constrained systems
\cite{Dirac} has many attractive features. The quantum theory can be
constructed by defining the physical states, which are annihilated
by the operators of the first class
constraints; the physical values are then obtained as the mean values
of the canonical operators in these states. In the resulting quantum
mechanics there is no Dirac bracket, and consequently, one can avoid such
difficult problems as the complicated general solution of the
Dirac brackets and also the factor-ordering problems. The latter problem,
however, reappears in the explicit representation of the canonical
operators.
It is well known that the first class constraints satisfy the
algebra \cite{HTEI}
\begin{eqnarray}
\{ T_a,T_b \} = C_{ab}^c T_c,\\
\{T_a,H_0 \}=B_a^b T_b,
\end{eqnarray}
\noindent where
$T_a$ and $T_b$ are the first class constraints, $C_{ab}^c$
and $B_a^b$ are the structure constants and
$H_0$ is the original Hamiltonian. The physical states
are obtained by imposing the condition
\begin{equation}
\tilde{T}_\alpha | \psi \rangle_{phys} = 0, \,\,\,\, \alpha=1,2,
\end{equation}
\noindent where $\tilde{T}_\alpha$ are the operators of the first
class constraints.
If the system has only second class constraints, then it is possible
to convert these constraints into first class ones by
extending the phase space according to special rules.
After this, one applies the Dirac procedure described above.
Batalin, Fradkin,
Fradkina and Tyutin \cite{BFFT} developed an elegant formalism
for transforming systems with second class constraints into
systems which contain only first class constraints.
This is achieved with the aid of auxiliary fields which serve to
extend the phase space in a convenient way to transform the
second class into first class constraints.
This procedure is known as the BFFT formalism.
The original theory is recovered when the so-called unitary gauge is
chosen.
In the original formulation of the BFFT formalism, the
resulting first class constraints form an Abelian
algebra. This is naturally the case for systems with linear
second class constraints. Recently, Banerjee, Banerjee and
Ghosh \cite{Banerjee1}, have studied the non-abelian Proca model, and
Oliveira and Barcelos \cite{BW} have studied the non-linear sigma model.
In these works the BFFT formalism has been adapted in order
that the first class constraints can form a non-abelian algebra.
\footnote{For the systems with initial first and second class constraints,
the former had also to be modified in order to keep the same initial algebra,
either abelian or non-abelian \cite{Kim}.}From these examples,
it might appear that the original formulation of the BFFT formalism is only
applicable to theories with linear second class constraints, while the
extension of Banerjee, Banerjee and Ghosh is intended for the non-linear ones.
At the same time, the non-linear second class constraints for the same non-abelian Proca model and for the Skyrme model \cite{Skyrme} have been
recently studied in
the context of the original BFFT formalism \cite{Banerjee3,WOJAN}. In spite
of this, it is important to emphasize that the possibility pointed out by
Banerjee, Banerjee and Ghosh that one can obtain a non-abelian first
class theory leads to a richer structure compared with the usual BFFT case.
The purpose of this article is to convert the second class constraints,
which arise after the collective coordinates expansion of the Skyrme model,
into first class ones. We achieve this by applying the non-abelian BFFT
formalism and thus employ
the Dirac method of first class constraints to quantize this system.
The paper is organized as follows. In Sec. 2, we give a brief outline
of the usual BFFT formalism and its non-abelian extension. We also
emphasize and clarify some of the particular aspects of the formalism.
In Sec. 3, we apply the non-abelian
BFFT formalism for the collective coordinates quantization of
the SU(2) Skyrme model. We make a special choice for the
structure functions, and consequently, obtain two different
simplified algebras for the first class constraints and the
non-abelian extended Hamiltonians. By using the Faddeev-Senjanovich
path integral procedure \cite{FS} we derive the
Lagrangians that lead to the new theories. In Sec. 4, the spectrum
of the two simplified extended theories is calculated. In Sec. 5,
we present the conclusions.
\section {Brief review of the BFFT formalism and its non-abelian extension}
\renewcommand{\theequation}{2.\arabic{equation}}
\setcounter{equation}{0}
Let us consider a system described by a Hamiltonian $H_0$ in a
phase space $(q^i,p^i)$ with $i=1,\dots,N$. Here we suppose that the
coordinates are bosonic (extensions to include fermionic degrees of
freedom and to the continuous case can be done in a straightforward
way). It is also supposed that the system possesses only second class constraints. Denoting them by $T_a$, with $a=1,\dots ,M<2N$,
we arrive at the following algebra
\begin{equation}
\bigl\{T_a,\,T_b\bigr\}=\Delta_{ab},
\label{2.1}
\end{equation}
\noindent
where $\det(\Delta_{ab})\not=0$.
As it was mentioned above, the general purpose of the BFFT formalism
is to convert the second class constraints into the first class ones.
This goal is achieved by
introducing canonical variables, one for each second class constraint
(the number of new variables must be equal to the number of second class
constraints in order to preserve
the number of physical degrees of freedom in the resulting extended
theory). We denote these auxiliary variables by $\eta^a$ and assume
that they satisfy the following algebra
\begin{equation}
\bigl\{\eta^a,\,\eta^b\bigr\}=\omega^{ab}.
\label{2.2}
\end{equation}
\noindent
Here $\omega^{ab}$ is a constant non-degenerate matrix
($\det(\omega^{ab})\neq 0$).
The determination of $\omega^{ab}$ is embodied in the calculation
of the resulting first class constraints which are denoted as
$\tilde T_a$. Of course, these constraints depend on the new
variables $\eta^a$, that is
\begin{equation}
\tilde T_a=\tilde T_a(q,p;\eta),
\label{2.3}
\end{equation}
\noindent
and are supposed to satisfy the boundary condition
\begin{equation}
\tilde T_a(q,p;0)= T_a(q,p).
\label{2.4}
\end{equation}
\noindent
In the framework of the BFFT formalism, the characteristic property of
the new constraints is that they are assumed to be strongly
involutive, i.e.
\begin{equation}
\bigl\{\tilde T_a,\,\tilde T_b\bigr\}=0.
\label{2.5}
\end{equation}
\noindent
The solution of Eq.~(\ref{2.5}) can be achieved by considering
$\tilde T_a$ expanded as
\begin{equation}
\tilde T_a=\sum_{n=0}^\infty T_a^{(n)},
\label{2.6}
\end{equation}
\noindent
where $T_a^{(n)}$ is a term of order $n$ in $\eta$. The condition of compatibility with the boundary condition~(\ref{2.4}) requires
\begin{equation}
T_a^{(0)}=T_a.
\label{2.7}
\end{equation}
\noindent
Substituting Eq.~(\ref{2.6}) into~(\ref{2.5}) leads to a set of
equations, one for each coefficient of $\eta^n$. We list some of them
below
\begin{eqnarray}
&&\bigl\{T_a,T_b\bigr\}
+\bigl\{T_a^{(1)},T_b^{(1)}\bigr\}_{(\eta)}=0
\label{2.8}\\
&&\bigl\{T_a,T_b^{(1)}\bigr\}+\bigl\{T_a^{(1)},T_b\bigr\}
+\bigl\{T_a^{(1)},T_b^{(2)}\bigr\}_{(\eta)}
+\bigl\{T_a^{(2)},T_b^{(1)}\bigr\}_{(\eta)}=0
\label{2.9}\\
&&\bigl\{T_a,T_b^{(2)}\bigr\}
+\bigl\{T_a^{(1)},T_b^{(1)}\bigr\}_{(q,p)}
+\bigl\{T_a^{(2)},T_b\bigr\}
+\bigl\{T_a^{(1)},T_b^{(3)}\bigr\}_{(\eta)}
\nonumber\\
&&\phantom{\bigl\{T_a^{(0)},T_b^{(2)}\bigr\}_{(q,p)}}
+\bigl\{T_a^{(2)},T_b^{(2)}\bigr\}_{(\eta)}
+\bigl\{T_a^{(3)},T_b^{(1)}\bigr\}_{(\eta)}=0
\label{2.10}\\
&&\phantom{\bigl\{T_a^{(0)},T_b^{(2)}\bigr\}_{(q,p)}+}
\vdots
\nonumber
\end{eqnarray}
\noindent
Here the notations $\{,\}_{(q,p)}$ and $\{,\}_{(\eta)}$, represent the
parts of the Poisson bracket $\{,\}$ corresponding to the variables
$(q,p)$ and $(\eta)$, respectively. The
equations above are used iteratively to obtain the
corrections $T^{(n)}$ ($n\geq1$). Equation~(\ref{2.8}) gives
$T^{(1)}$. Using this result together with the Eq.~(\ref{2.9}),
one calculates $T^{(2)}$, and so on. Since $T^{(1)}$ is linear
in $\eta$ one can write it as
\begin{equation}
T_a^{(1)}=X_{ab}(q,p)\,\eta^b,
\label{2.11}
\end{equation}
\noindent where $X_{ab}$ are some new quantities.
Substituting this expression into (\ref{2.8}) and using
(\ref{2.1}) and (\ref{2.2}), we obtain
\begin{equation}
\Delta_{ab}+X_{ac}\,\omega^{cd}\,X_{bd}=0.
\label{2.12}
\end{equation}
\noindent
We notice that this equation does not define $X_{ab}$ in a unique way,
because it also contains the still unknown elements $\omega^{ab}$.
What is usually done is to choose $\omega^{ab}$ in such a way that the new
variables are unconstrained. One might mention that
sometimes it is not possible to make such a choice \cite{Barc2}.
In this case, the new variables remain constrained. Consequently, the
consistency of the method requires an introduction of other new
variables in order to transform these constraints into the
first class ones. This may lead to an endless process. It is
important to emphasize that $\omega^{ab}$ can be fixed anyway.
However, even if one fixes $\omega^{ab}$, it is still not possible to
obtain a unique solution for $X_{ab}$. Let us check this point.
Since we are only considering bosonic coordinates~\footnote{The
problem also exists for the fermionic sector.},
$\Delta_{ab}$ and $\omega^{ab}$ are antisymmetric quantities. So,
expression (\ref{2.12}) includes $M(M-1)/2$ independent
equations. On the other hand, since there is no additional symmetry
involving $X_{ab}$, they should represent a set of $M^2$
independent quantities.
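For concreteness, the following sympy sketch (an illustration added here, not part of the original derivation) displays Eq.~(\ref{2.12}) for $M=2$ with constant antisymmetric $\Delta_{ab}$ and $\omega^{ab}$: the matrix equation reduces to a single independent relation for the four unknowns $X_{ab}$.
\begin{verbatim}
# Toy check of Eq. (2.12) for M = 2: Delta + X omega X^T = 0 gives a
# single independent equation for the four unknowns X_ab.
import sympy as sp

d, w = sp.symbols('delta w', nonzero=True)
X = sp.Matrix(2, 2, sp.symbols('X11 X12 X21 X22'))
Delta = sp.Matrix([[0, d], [-d, 0]])
omega = sp.Matrix([[0, w], [-w, 0]])

M = sp.expand(Delta + X*omega*X.T)   # antisymmetric: diagonal vanishes
print(M[0, 1])                       # delta + w*(X11*X22 - X12*X21)
\end{verbatim}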
In the case when $X_{ab}$ does not depend on $(q,p)$, it is easily
seen that the expression $T_a+T_a^{(1)}$ is already strongly
involutive for any choice we make, and we succeed in obtaining
$\tilde T_a$. If this
is not so, the usual procedure is to introduce $T_a^{(1)}$ into Eq.
(\ref{2.9}) in order to calculate $T_a^{(2)}$ and so on. At this
point one faces a problem that has been the origin of some developments
of the BFFT method, including the adoption of a non-abelian constraint algebra. This occurs because we do not know {\it a priori} what is the best
choice we can make to go from one step to another. Sometimes it is
possible to figure out a convenient choice for $X_{ab}$ in order to
obtain a first class (abelian) constraint algebra at the first stage
of the process \cite{Banerjee3}. It is opportune to mention
that in ref. \cite{Banerjee4}, the use of a
non-abelian algebra was in fact a way of avoiding dealing with the higher
orders of the iterative method. More recently, the method has been
used (in its abelian version) beyond the first correction
\cite{Banerjee2} but we mention that sometimes there are problems in
doing this \cite{Barc1}.
\newpage
Another point of the usual BFFT formalism is that any dynamic function
$A(q,p)$ (for instance, the Hamiltonian) has also to be properly
modified in order to be strongly involutive with the first class
constraints $\tilde T_a$. Denoting the modified quantity by $\tilde
A(q,p;\eta)$, we then have
\begin{equation}
\bigl\{\tilde T_a,\,\tilde A\bigr\}=0.
\label{2.13}
\end{equation}
\noindent
In addition, $\tilde A$ has to satisfy the boundary condition
\begin{equation}
\tilde A(q,p;0)=A(q,p).
\label{2.14}
\end{equation}
\noindent The derivation of $\tilde A$ is similar to what has been
done in getting $\tilde T_a$. Therefore, we consider an expansion
of the form
\begin{equation}
\tilde A=\sum_{n=0}^\infty A^{(n)},
\label{2.15}
\end{equation}
\noindent
where $A^{(n)}$ is also a term of order $n$ in $\eta$'s.
Consequently, the compatibility with Eq.~(\ref{2.14}) requires that
\begin{equation}
A^{(0)}=A.
\label{2.16}
\end{equation}
\noindent
The combination of Eqs.~(\ref{2.6}), (\ref{2.7}), (\ref{2.13}),
(\ref{2.15}), and (\ref{2.16}) gives the equations
\begin{eqnarray}
&&\bigl\{T_a,A\bigr\}
+\bigl\{T_a^{(1)},A^{(1)}\bigr\}_{(\eta)}=0
\label{2.17}\\
&&\bigl\{T_a,A^{(1)}\bigr\}+\bigl\{T_a^{(1)},A\bigr\}
+\bigl\{T_a^{(1)},A^{(2)}\bigr\}_{(\eta)}
+\bigl\{T_a^{(2)},A^{(1)}\bigr\}_{(\eta)}=0
\label{2.18}\\
&&\bigl\{T_a,A^{(2)}\bigr\}
+\bigl\{T_a^{(1)},A^{(1)}\bigr\}_{(q,p)}
+\bigl\{T_a^{(2)},A\bigr\}
+\bigl\{T_a^{(1)},A^{(3)}\bigr\}_{(\eta)}
\nonumber\\
&&\phantom{\bigl\{T_a^{(0)},A^{(2)}\bigr\}_{(q,p)}}
+\bigl\{T_a^{(2)},A^{(2)}\bigr\}_{(\eta)}
+\bigl\{T_a^{(3)},A^{(1)}\bigr\}_{(\eta)}=0
\label{2.19}\\
&&\phantom{\bigl\{T_a^{(0)},A^{(2)}\bigr\}_{(q,p)}+}
\vdots
\nonumber
\end{eqnarray}
\noindent
which correspond to the coefficients of the powers 0, 1, 2, etc., of
the variable $\eta$. It is just a matter of algebraic
work to show that the general expression for $A^{(n)}$ reads as
\begin{equation}
A^{(n+1)}=-{1\over n+1}\,\eta^a\,\omega_{ab}\,X^{bc}\,G_c^{(n)},
\label{2.20}
\end{equation}
\noindent
where $\omega_{ab}$ and $X^{ab}$ are the inverses of $\omega^{ab}$
and $X_{ab}$, and
\begin{eqnarray}
G_a^{(n)}=\sum_{m=0}^n\bigl\{T_a^{(n-m)},\,A^{(m)}\bigr\}_{(q,p)}
+\sum_{m=0}^{n-2}\bigl\{T_a^{(n-m)},\,A^{(m+2)}\bigr\}_{(\eta)}\nonumber\\
+\bigl\{T_a^{(n+1)},\,A^{(1)}\bigr\}_{(\eta)}.
\label{2.21}
\end{eqnarray}
\noindent The general prescription of the usual BFFT method to obtain the
Hamiltonian is a direct use of the relations (\ref{2.15}) and
(\ref{2.20}). This works well for systems with linear constraints. For
non-linear theories, where it may be necessary to consider all orders
of the iterative process, this calculation might be quite
complicated. However, there is an alternative procedure that drastically
simplifies the algebraic work. The basic idea is to
obtain the involutive forms for the initial fields $q$ and $p$
\cite{Banerjee5}. This can be directly achieved from the previous
calculation of $\tilde A$. Denoting such fields by $\tilde q$ and
$\tilde p$ we have
\begin{equation}
H(q,p)\longrightarrow H(\tilde q,\tilde p)
=\tilde H(\tilde q,\tilde p).
\label{2.22}
\end{equation}
\noindent
It is obvious that the initial boundary condition in the BFFT
process, that is, the reduction of the involutive function to the
original function when the new fields are set to zero, remains
preserved. One can also mention that for the systems with linear
constraints, the new variables $\tilde q$ and $\tilde p$ are just
shifted by the auxiliary coordinates $\eta$ \cite{Ricardo}.
Finally, let us consider the case where the first class
constraints form a non-abelian algebra, i.e.
\begin{equation}
\bigl\{\tilde T_a,\,\tilde T_b\bigr\}=C_{ab}^c\,\tilde T_c.
\label{2.23}
\end{equation}
\noindent
The quantities $C_{ab}^c$ are the structure constants of the
non-abelian algebra. These constraints are considered to satisfy the
same previous conditions given by (\ref{2.3}), (\ref{2.4}),
(\ref{2.6}), and (\ref{2.7}). But now, instead of Eqs.
(\ref{2.8})-(\ref{2.10}), we obtain
\begin{eqnarray}
C_{ab}^c\,T_c&=&\bigl\{T_a,T_b\bigr\}
+\bigl\{T_a^{(1)},T_b^{(1)}\bigr\}_{(\eta)}
\label{2.24}\\
C_{ab}^c\,T_c^{(1)}&=&\bigl\{T_a,T_b^{(1)}\bigr\}
+\bigl\{T_a^{(1)},T_b\bigr\}
\nonumber\\
&&+\,\bigl\{T_a^{(1)},T_b^{(2)}\bigr\}_{(\eta)}
+\bigl\{T_a^{(2)},T_b^{(1)}\bigr\}_{(\eta)}
\label{2.25}\\
C_{ab}^c\,T_c^{(2)}&=&\bigl\{T_a,T_b^{(2)}\bigr\}
+\bigl\{T_a^{(1)},T_b^{(1)}\bigr\}_{(q,p)}
\nonumber\\
&&+\bigl\{T_a^{(2)},T_b^{(0)}\bigr\}_{(q,p)}
+\bigl\{T_a^{(1)},T_b^{(3)}\bigr\}_{(\eta)}
\nonumber\\
&&+\bigl\{T_a^{(2)},T_b^{(2)}\bigr\}_{(\eta)}
+\bigl\{T_a^{(3)},T_b^{(1)}\bigr\}_{(\eta)}
\label{2.26}\\
&&\vdots
\nonumber
\end{eqnarray}
\noindent
The use of these equations is the same as before, i.e., they shall
work iteratively. Equation (\ref{2.24}) gives $T^{(1)}$. With this
result and Eq. (\ref{2.25}) one calculates $T^{(2)}$, and so on. To
calculate the first correction, we assume it is given by the same
general expression (\ref{2.11}). Introducing it into (\ref{2.24}), we
now get
\begin{equation}
C_{ab}^c\,T_c=\Delta_{ab}+X_{ac}\,\omega^{cd}\,X_{bd}.
\label{2.27}
\end{equation}
\noindent
Of course, the same difficulties concerning the
solutions of Eq.~(\ref{2.12}) also apply here, with the additional
problem of choosing the appropriate structure constants $C_{ab}^c$.
To obtain the embedding Hamiltonian $\tilde H(q,p,\eta)$ one cannot
use the simplified version discussed for the abelian case (embodied
into Eq.~(\ref{2.22})) because the algebra is not strongly involutive
anymore. Thus we start from the fact that the new Hamiltonian $\tilde
H$ and the new constraints $\tilde T_a$ satisfy the relation
\begin{equation}
\bigl\{\tilde T_a,\,\tilde H\bigr\}=B_a^b\,\tilde T_b,
\label{2.28}
\end{equation}
\noindent where the coefficients $B_a^b$ are the
structure constants of the non-abelian algebra. The involutive
Hamiltonian is considered to
satisfy the same conditions (\ref{2.14})-(\ref{2.16}). We then obtain
that the general correction $H^{(n)}$ is given by a relation similar
to (\ref{2.20}), but now the quantities $G_a^{(n)}$ are given by
\begin{eqnarray}
G_a^{(n)}&=&\sum_{m=0}^n\bigl\{T_a^{(n-m)},\,H^{(m)}\bigr\}_{(q,p)}
+\sum_{m=0}^{n-2}\bigl\{T_a^{(n-m)},\,H^{(m+2)}\bigr\}_{(\eta)}
\nonumber\\
&&+\,\,\bigl\{T_a^{(n+1)},\,H^{(1)}\bigr\}_{(\eta)}
-B_a^b\,T_b^{(n)}.
\label{2.30}
\end{eqnarray}
\section {The non-abelian BFFT formalism for the SU(2) Skyrme model}
The classical static Lagrangian of the Skyrme model
is given by
\begin{equation}
\label{clag}
L = \int d^3r \{ -{F_\pi^2\over 16} Tr \(\partial_i U
\partial_i U^+ \) + {1\over 32 e^2} Tr \[ U^+\partial_i U,
U^+ \partial_j U \]^2 \} \, ,
\end{equation}
\noindent where $F_\pi$ is the pion decay constant, {\it e}
is a dimensionless parameter and U is an SU(2) matrix.
Performing the collective semi-classical expansion\cite{ANW},
substituting U(r) by $U(r,t)=A(t)U(r)A^+ (t)$ in (\ref{clag}),
where A is an SU(2) matrix, we obtain
\begin{equation}
\label{Lag}
L = - M + \lambda Tr [ \partial_0 A\partial_0 A^{-1} ],
\end{equation}
\noindent where M is the soliton mass. In the hedgehog
representation for U, $U=\exp(i\tau \cdot \hat{r} F(r))$,
this mass is given by
\begin{equation}
\label{henergia}
M = 4\pi {F_\pi \over e} \int^\infty_0 dx \, \{ x^2 {1\over 8}
\[ F'^2 + {2 \sin^2 F \over x^2} \] + {\sin^2 F \over 2}
\[ {\sin^2 F\over x^2} + 2 F'^2 \] \} ,
\end{equation}
\noindent where {\it x} is a dimensionless variable defined
by $x=eF_\pi r$ and $\lambda$ is the moment of inertia,
written as
\begin{equation}
\label{lambda}
\lambda = {2\over 3} \pi ({1\over e^3 F_\pi}) \Lambda
\end{equation}
\noindent with
\begin{equation}
\label{Lambda}
\Lambda = \int^\infty_0 dx x^2 \sin^2F \[ 1 +
4(F'^2 + {\sin^2 F\over x^2}) \].
\end{equation}
\noindent The SU(2) matrix A can be written as $A=a^0
+i a\cdot \tau$ with the constraint
\begin{equation}
\label{pri}
T_1 = a^ia^i - 1 \approx 0, \,\,\,\, i=0,1,2,3.
\end{equation}
\noindent The Lagrangian~(\ref{Lag}) can be written as a function of the
$a^i$ as
\begin{equation}
\label{cca}
L = -M + 2\lambda \dot{a}^i\dot{a}^i.
\end{equation}
\noindent In order to identify more constraints, we calculate the
momentum
\begin{equation}
\label{cm}
\pi^i = {\partial L \over \partial \dot{a}_i} = 4 \lambda \dot{a}^i.
\end{equation}
\noindent Now we can rewrite the Hamiltonian in the form
\begin{eqnarray}
\label{chr}
H_c=\pi^i \dot a^i-L=4\lambda \dot a^i \dot a^i -L=M+2
\lambda \dot a^i\dot a^i\nonumber\\
=M+{1\over 8 \lambda } \sum_{i=0}^3 \pi^i\pi^i.
\end{eqnarray}
\noindent Constructing the total Hamiltonian and imposing the
consistency condition that the constraints do not evolve in time
\cite{Dirac} we get a new constraint
\begin{equation}
\label{T2}
T_2 = a^i\pi^i \approx 0 \,\,.
\end{equation}
\noindent We observe that no further constraints are generated
via this iterative procedure. The constraints $T_1$ and $T_2$
are of the second class. The matrix elements of their Poisson
brackets read
\begin{equation}
\label{Pa}
\Delta_{\alpha \beta} = \{T_\alpha,T_\beta\} = -2 \epsilon_{\alpha \beta}
a^ia^i, \,\, \alpha,\beta = 1,2
\end{equation}
\noindent where $\epsilon_{\alpha \beta}$ is the antisymmetric
tensor normalized as $\epsilon_{12} = -\epsilon^{12} = -1$.
\par Then, the standard quantization is made where we replace
$\,\pi^i \,$ by $\, -i \partial/\partial a_i \,$ in (\ref{chr}),
leading to
\begin{equation}
\label{uqh}
H=M+{1\over 8 \lambda } \sum_{i=0}^3 (-{\partial^2
\over\partial{a_i}^2})\,\,.
\end{equation}
\noindent Due to the constraint $\sum_{i=0}^3a^i a^i=1$,
the operator $\sum_{i=0}^3 (-{\partial^2\over\partial{a_i}^2})$
must be interpreted as the Laplacian on the three-sphere\cite{ANW}.
A typical polynomial wave function\cite{ANW},
${1\over N(l)}(a^1 + i a^2)^l = |polynomial \rangle\, ,$ is an
eigenvector of the Hamiltonian
(\ref{uqh}), with the eigenvalues given by\footnote{This
wave function is also an eigenvector of the spin and
isospin operators, written as\cite{ANW} $ J^k={1\over 2}
( a_0 \pi_k -a_k \pi_0 - \epsilon_{klm} a_l \pi_m )$ and
$ I^k={1\over 2 } ( a_k \pi_0 -a_0 \pi_k- \epsilon_{klm} a_l
\pi_m ).$}
\begin{equation}
\label{uqhe}
E=M+{1\over 8 \lambda } l(l+2), \,\,\,\, l=1,2,3\dots \,\,.
\end{equation}
\vskip .5cm
To implement the extended non-abelian BFFT formalism, we introduce
auxiliary coordinates, one for each of the second class constraint.
Let us generally denote them by $\eta^\alpha$, where $\alpha=1,2$,
and consider that the Poisson algebra of these new coordinates
is given by
\begin{equation}
\label{algebra1}
\{ \eta^\alpha, \eta^\beta \} = \omega^{\alpha \beta}
= 2\epsilon^{\alpha\beta};
\,\,\alpha=1,2.
\end{equation}
\noindent From Eq.~(\ref{2.27}), we have
\begin{equation}
2X_{11}\,X_{22} = -2\,a^i a^i + C_{12}^1\,T_1.
\label{3.15}
\end{equation}
\noindent
After some attempts, we find that a convenient choice for these
coefficients is
\begin{eqnarray}
&&X_{11}=1,
\nonumber\\
&&X_{22}=-1,
\nonumber\\
&&X_{12}=0=X_{21},
\nonumber\\
&&C_{12}^1=2,
\nonumber\\
&&C_{12}^2=0.
\label{3.16}
\end{eqnarray}
\noindent Using (\ref{2.4}), (\ref{2.6}), (\ref{2.11}), (\ref{algebra1}) and
(\ref{3.16}), the new set of constraints is found to be
\begin{eqnarray}
\label{TF1}
\tilde{T}_1=a^i a^i-1+\eta^1,\\
\label{TF2}
\tilde{T}_2=a^i\pi^i-\eta^2+\eta^1\eta^2.
\end{eqnarray}
\noindent The first class constraint algebra is
\begin{eqnarray}
&&\bigl\{\tilde T_1,\,\tilde T_1\bigr\}=0,
\nonumber\\
&&\bigl\{\tilde T_1,\,\tilde T_2\bigr\}
=2\,\tilde T_1,
\nonumber\\
&&\bigl\{\tilde T_2,\,\tilde T_2\bigr\}=0.
\label{3.24}
\end{eqnarray}
Next, we derive the corresponding Hamiltonian in the extended
phase space. The corrections for the canonical Hamiltonian are
given by Eqs. (\ref{2.20}) and (\ref{2.30}). With the objective
of simplifying the expression of the first class Hamiltonian, we
chose two different algebras for the system, defined by the parameters
$B_a^b$ in (\ref{2.28}). We have verified that possible values are
\begin{equation}
\label{sys1}
B_a^b=0, \,\,\,\, a,b=1,2,
\end{equation}
\noindent and
\begin{equation}
\label{sys2}
B_1^1={1\over 2\lambda},\,\,\,\, B_1^2=B_2^1=B_2^2=0.
\end{equation}
\noindent Using the inverse matrices
\begin{eqnarray}
\label{apc1}
\omega_{\alpha\beta} = {1\over 2} \epsilon_{\alpha\beta}, \\
\label{apc2}
X^{\alpha \beta} = \left( \begin{array}{clcr} 1 & \,\,0 \\ 0
& -1\end{array} \right),
\end{eqnarray}
\noindent and the algebra defined by (\ref{sys1}), it is possible
to compute the involutive first class Hamiltonian
\newpage
\begin{eqnarray}
\label{HF1F}
\tilde{H}_1=M + {1\over 8\lambda} \pi^i\pi^i
-{1\over 8\lambda} \pi^i\pi^i \eta^1
-{1\over 4\lambda} a^i\pi^i\eta^2
+{1\over 4\lambda} a^i\pi^i \eta^1\eta^2\nonumber\\
+ {1\over 8\lambda} a^i a^i\eta^2\eta^2
-{1\over 8\lambda} a^i a^i\eta^1\eta^2\eta^2\nonumber\\\nonumber\\
=M+{1\over 8\lambda} \pi^i\pi^i(1-\eta^1)
-{1\over 4\lambda} a^i\pi^i\eta^2(1-\eta^1)
+ {1\over 8\lambda} a^i a^i\eta^2\eta^2(1-\eta^1).
\end{eqnarray}
\noindent Thus, the Hamiltonian (\ref{HF1F}) satisfies the first class algebra
\begin{eqnarray}
\label{HF11}
\{ \tilde{T}_1, \tilde{H}_1 \} = 0,\,\,\,\,\,\,\,(B_1^1=B_1^2=0)\\
\label{HF21}
\{ \tilde{T}_2, \tilde{H}_1 \} = 0. \,\,\,\,\,\,\,(B_2^1=B_2^2=0)
\end{eqnarray}
\noindent The other non-abelian first class Hamiltonian is given by
\begin{eqnarray}
\label{HF2F}
\tilde{H}_2=\tilde{H}_1+{1\over 4\lambda} \tilde{T}_2\nonumber\\\nonumber\\
=M+{1\over 8\lambda} \pi^i\pi^i(1-\eta^1)
-{1\over 4\lambda} a^i\pi^i\eta^2(1-\eta^1)
+ {1\over 8\lambda} a^i a^i\eta^2\eta^2(1-\eta^1)\nonumber\\
+{1\over 4\lambda} (a^i\pi^i-\eta^2(1-\eta^1)),
\end{eqnarray}
\noindent which satisfies the first class Poisson algebra
\begin{eqnarray}
\label{HF12}
&\{\tilde{T}_1, \tilde{H}_2\} = {1\over 2\lambda}\tilde{T}_1,\,\,
\,\,\,\,(B_1^1={1\over 2\lambda},B_1^2=0)\\
\label{HF22}
&\{\tilde{T}_2, \tilde{H}_2 \} = 0. \,\,\,\,\,\,\,\,\,\,\,\,\,\,
\,\,\,\,\,\,\,\,
(B_2^1=B_2^2=0)
\end{eqnarray}
\newpage
\noindent Here we would like to remark that, contrary to the results
obtained by the abelian BFFT method applied to the non-linear
Lagrangian theories \cite{Banerjee3,WOJAN}, both expressions of the
first class Hamiltonians (\ref{HF1F}) and (\ref{HF2F}) are finite
sums. As it was emphasized in the introduction, the possibility
pointed out by Banerjee, Banerjee and Ghosh to obtain non-abelian
first class theories leads to a more elegant and simplified
Hamiltonian structure than the usual abelian BFFT case.
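All the involution relations above can be verified symbolically. The following sympy sketch (our own cross-check, not part of the original derivation) implements the Poisson bracket with $\{a^i,\pi^i\}=1$ and $\{\eta^1,\eta^2\}=2$, and confirms Eqs.~(\ref{3.24}), (\ref{HF11})-(\ref{HF21}) and (\ref{HF12})-(\ref{HF22}).
\begin{verbatim}
# Symbolic check of the non-abelian first class algebra (sketch).
import sympy as sp

a = sp.symbols('a0:4'); p = sp.symbols('p0:4')
e1, e2, lam, M = sp.symbols('eta1 eta2 lambda M')

def pb(F, G):
    # Poisson bracket: canonical part {a^i, pi^i} = 1 plus {eta^1, eta^2} = 2
    b = sum(sp.diff(F, a[i])*sp.diff(G, p[i])
            - sp.diff(F, p[i])*sp.diff(G, a[i]) for i in range(4))
    b += 2*(sp.diff(F, e1)*sp.diff(G, e2) - sp.diff(F, e2)*sp.diff(G, e1))
    return sp.expand(b)

aa = sum(x**2 for x in a); pp = sum(x**2 for x in p)
ap = sum(a[i]*p[i] for i in range(4))

T1 = aa - 1 + e1
T2 = ap - e2 + e1*e2
H1 = M + (pp - 2*ap*e2 + aa*e2**2)*(1 - e1)/(8*lam)
H2 = H1 + T2/(4*lam)

assert sp.expand(pb(T1, T2) - 2*T1) == 0          # Eq. (3.24)
assert pb(T1, H1) == 0 and pb(T2, H1) == 0        # Eqs. (HF11)-(HF21)
assert sp.expand(pb(T1, H2) - T1/(2*lam)) == 0    # Eq. (HF12)
assert pb(T2, H2) == 0                            # Eq. (HF22)
\end{verbatim}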
The next step is to look for the Lagrangian that leads
to this new theory. A consistent way of doing this is by
means of the path integral formalism, where the Faddeev
procedure \cite{FS} has to be used. Let us identify the new
variables $\eta^\alpha$ as a canonically conjugate pair $ (\phi,
\pi_\phi)$ in the Hamiltonian formalism,
\begin{eqnarray}
\label{cpair}
\eta^1 \rightarrow 2 \phi \,, \nonumber \\
\eta^2 \rightarrow \pi_\phi \,,
\end{eqnarray}
\noindent satisfying (\ref{algebra1}).
Then, the general expression for the vacuum functional reads
\begin{equation}
\label{vfg}
Z = N \int [d\mu] \exp \{ i \int dt [ \dot{a}^i\pi^i
+ \dot{\phi}\pi_\phi - \tilde{H} ] \},
\end{equation}
\noindent with the measure $[d\mu]$ given by
\begin{eqnarray}
\label{mesure}
[d\mu] = [da^i] [d\pi^i] [d\phi] [d\pi_\phi]
| det\{,\} | \nonumber \\ \delta(a^ia^i-1+2\phi)
\delta(a^i\pi^i- \pi_\phi +2\phi\pi_\phi)\prod_\alpha
\delta(\tilde{\Lambda}_\alpha),
\end{eqnarray}
\noindent where $\tilde{\Lambda}_\alpha$ are the gauge fixing
conditions corresponding to the first class constraints
$\tilde{T}_\alpha$ and the term $| det\{,\} |$ represents
the determinant of all constraints of the theory, including
the gauge-fixing ones. The quantity N that appears in
(\ref{vfg}) is an usual normalization factor. Starting from
the Hamiltonian (\ref{HF1F}), the vacuum functional reads
\begin{eqnarray}
\label{pf1}
Z = N \int [da^i] [d\pi^i] [d\phi] [d\pi_\phi]
| det\{,\} | \, \delta( a^ia^i-1+2\phi )\nonumber\\
\delta(a^i\pi^i-\pi_\phi(1-2\phi))
\prod_\alpha
\delta(\tilde{\Lambda}_\alpha)\exp \{ i \int dt
[ \dot{a}^i\pi^i + \dot{\phi}\pi_\phi
-M\nonumber\\
-{1\over 8\lambda} \pi^i\pi^i(1-2\phi)
+{1\over 4\lambda} a^i\pi^i \pi_\phi(1-2\phi)
-{1\over 8\lambda} a^i a^i \pi_\phi\pi_\phi(1-2\phi)]\}.
\end{eqnarray}
\noindent Using the delta function $\delta(a^ia^i-1+2\phi)$ and
exponentiating the delta function $\delta[a^i\pi^i-\pi_\phi(1-2\phi)]$
with Fourier variable $\xi$, we obtain
\begin{eqnarray}
\label{ele}
Z = N \int [da^i] [d\pi^i] [d\phi] [d\pi_\phi] [d\xi]
| det\{,\} | \, \delta( a^ia^i-1+2\phi ) \prod_\alpha
\delta(\tilde{\Lambda}_\alpha) \nonumber \\ \exp \{ i \int dt
[ \dot{a}^i\pi^i + \dot{\phi}\pi_\phi
-M
-{1\over 8\lambda}\pi^j\pi^j a^ia^i
+{1\over 4\lambda} a^j\pi^j a^ia^i \pi_\phi\nonumber\\
-{1\over 8\lambda} (a^i a^i)^2 (\pi_\phi)^2
+\xi a^i\pi^i-\xi a^i a^i \pi_\phi
]\}.
\end{eqnarray}
\noindent Integrating over $\pi_\phi$, we arrive at,
\begin{eqnarray}
\label{pifi}
Z = N \int [da^i] [d\pi^i] [d\phi] [d\xi]
| det\{,\} | \, \delta( a^ia^i-1+2\phi ) \prod_\alpha
\delta(\tilde{\Lambda}_\alpha) \nonumber \\
{1\over a^ia^i}\exp \{ i \int dt
[ \dot{a}^i\pi^i
-M
-{1\over 8\lambda}\pi^j\pi^j a^i a^i
+ \xi a^i\pi^i\nonumber\\
-{2\lambda \dot{\phi}\dot{\phi}\over {(a^ia^i)}^2}
+ {4\lambda \dot{\phi}\xi\over a^ia^i}
- 2\lambda\xi^2
]\}.
\end{eqnarray}
\newpage
\noindent Performing the integration over $\pi^i$, we obtain
\begin{eqnarray}
\label{pii}
Z = N \int [da^i][d\phi] [d\xi]
| det\{,\} | \, \delta( a^ia^i-1+2\phi ) \prod_\alpha
\delta(\tilde{\Lambda}_\alpha) \nonumber \\
{1\over 1-2\phi}
\sqrt{{1\over 1-2\phi}}
\exp \{ i \int dt
[
-M
+{2\lambda}{ \dot{a}^i \dot{a}^i\over 1-2\phi}\nonumber \\
- {2\lambda}{\dot{\phi} \dot{\phi}\over (1-2\phi)^2}
+{4\lambda \over 1-2\phi}(a^i \dot{a}^i + \dot{\phi})\xi
]\}.
\end{eqnarray}
\noindent Finally, the integration over $\xi$ leads to
\begin{eqnarray}
\label{xi1}
Z = N \int [da^i][d\phi]
|det\{,\}| \, \delta(a^ia^i-1+2\phi) \, \delta(a^i \dot{a}^i + \dot{\phi}) \prod_\alpha
\delta(\tilde{\Lambda}_\alpha) \nonumber \\
\sqrt{1\over 1-2\phi}
\exp \{ i \int dt [
-M
+{2\lambda\over 1-2\phi} \dot{a}^i\dot{a}^i\nonumber\\
-{2\lambda\over {(1-2\phi)}^2} \dot{\phi}\dot{\phi}
] \},
\end{eqnarray}
\noindent where the new $\delta$ function above came from the integration over $\xi$.
We notice that it is nothing other than the time derivative of the constraint $\tilde{T}_1$. It is therefore just a consistency condition and does not represent any new restriction on the coordinates of the theory. From the vacuum functional (\ref{xi1}), we identify the Lagrangian of the new theory
\begin{equation}
\label{L1f}
L = -M
+{2\lambda\over 1-2\phi} \dot{a}^i\dot{a}^i
-{2\lambda\over {(1-2\phi)}^2} \dot{\phi}\dot{\phi}.
\end{equation}
\noindent Putting the extended variables, in the phase space,
$\phi$ and $\pi_\phi$ equal to zero, we obtain the original
Skyrmion Lagrangian. This result indicates the consistency
of the theory.
\par For the Hamiltonian (\ref{HF2F}) the vacuum functional is
\begin{eqnarray}
\label{ele2}
Z = N \int [da^i] [d\pi^i] [d\phi] [d\pi_\phi]
| det\{,\} | \, \delta( a^ia^i-1+2\phi )\nonumber\\
\delta(a^i\pi^i-\pi_\phi(1-2\phi))
\prod_\alpha
\delta(\tilde{\Lambda}_\alpha)\exp \{ i \int dt
[ \dot{a}^i\pi^i + \dot{\phi}\pi_\phi
-\tilde{H}_1-{1\over 4\lambda}\tilde{T}_2 ]\}.
\end{eqnarray}
\noindent Using the properties of delta functions, it is easy to see
that we have obtained the same Lagrangian (\ref{L1f}).
\section{The spectrum of the theory}
\par Here we intend to obtain the spectrum of the extended theory.
We use the Dirac method of quantization for first class constraints
\cite{Dirac}. The basic idea consists in imposing the first class
constraints quantum mechanically as operator
conditions on the wave functions, as a way to obtain the physical
subspace, i.e.,
\begin{equation}
\label{qope}
\tilde{T}_\alpha | \psi \rangle_{phys} = 0, \,\,\,\, \alpha=1,2.
\end{equation}
\noindent The operators $\tilde{T}_1\,$ and $\tilde{T}_2\,$
are
\begin{eqnarray}
\label{qope1}
\tilde{T}_1=a^ia^i-1+\eta^1,\\
\label{qope2}
\tilde{T}_2=a^i\pi^i - \eta^2+\eta^1\eta^2.
\end{eqnarray}
\noindent Thus, the physical states that satisfy (\ref{qope}) are
\begin{equation}
\label{physical}
| \psi \rangle_{phys} = {1\over V } \, \delta (a^i\pi^i
- \eta^2+\eta^1\eta^2) \,\delta(a^i a^i-1+\eta^1)\,|polynomial \rangle,
\end{equation}
\noindent where {\it V } is the normalization factor
and the ket {\it polynomial} was defined in Section 3 as,
$|polynomial \rangle ={1\over N(l)} (a^1+ i a^2)^l \,$. The
corresponding quantum Hamiltonians of (\ref{HF1F}) and (\ref{HF2F})
will be indicated as
\begin{eqnarray}
\label{echs1}
\tilde{H_1}= M+{1\over 8\lambda} \pi^i\pi^i(1-\eta^1)
-{1\over 4\lambda} a^i\pi^i\eta^2(1-\eta^1)\nonumber\\
+ {1\over 8\lambda} a^i a^i\eta^2\eta^2(1-\eta^1),
\end{eqnarray}
\noindent and
\begin{eqnarray}
\label{echs2}
\tilde{H_2}= \tilde{H_1}+{1\over 4\lambda}\tilde{T_2}.
\end{eqnarray}
\noindent Thus, in order to obtain the spectrum of the theory, we take
the scalar product,
$_{phys}\langle\psi| \tilde{H} | \psi \rangle_{phys}\,$,
that is the mean value of the extended Hamiltonian, for the two
quantum Hamiltonians (\ref{echs1}) and (\ref{echs2}). We begin
with the first Hamiltonian (\ref{echs1}) calculating the scalar
product
\begin{eqnarray}
\label{mes1}
_{phys}\langle\psi| \tilde{H_1} | \psi \rangle_{phys}=\nonumber \\
\langle polynomial |\,\, {1\over V^2} \int d\eta^1 d\eta^2
\delta(a^i a^i - 1 + \eta^1)\delta(a^i\pi^i - \eta^2+\eta^1
\eta^2)\nonumber \\
\tilde{H}_1
\delta(a^i\pi^i - \eta^2+\eta^1\eta^2)\delta(a^i a^i - 1 + \eta^1)\,\,
| polynomial \rangle .
\end{eqnarray}
\noindent Notice that due to the presence of the delta functions
$\delta(a^i a^i - 1 + \eta^1)$ and
$\delta(a^i\pi^i - \eta^2+\eta^1\eta^2)$ in
(\ref{mes1}) the scalar product can be simplified.
Then, integrating over $\eta^1$ and $\eta^2$ we obtain\footnote{The
regularization of squared delta functions such as
$(\delta(a^i a^i - 1 + \eta^1))^2$ and
$(\delta(a^i\pi^i - \eta^2+\eta^1\eta^2))^2$
is performed by using the delta relation, $(2\pi)^2\delta(0)=
\lim_{k\rightarrow 0}\int d^2x \,e^{ik\cdot x} =\int d^2x= V.$
Then, we use the parameter V as the normalization factor.}
\begin{eqnarray}
\label{mes13}
_{phys}\langle\psi| \tilde{H_1} | \psi \rangle_{phys}=\nonumber \\
\langle polynomial | M + {1\over 8\lambda} a^ia^i \pi^j \pi^j
- {1\over 8\lambda} a^i\pi^i a^j\pi^j | polynomial \rangle .
\end{eqnarray}
\noindent We repeat the same procedure for the quantum Hamiltonian
(\ref{echs2}). Taking the mean value, we have
\begin{eqnarray}
\label{mes2}
_{phys}\langle\psi| \tilde{H_2} | \psi \rangle_{phys}=\nonumber \\
\langle polynomial |\,\, {1\over V^2} \int d\eta^1 d\eta^2
\delta(a^i a^i - 1 + \eta^1)\delta(a^i\pi^i - \eta^2+\eta^1
\eta^2)\nonumber \\
\tilde{H_2}
\delta(a^i\pi^i - \eta^2+\eta^1\eta^2)\delta(a^i a^i - 1 + \eta^1)\,\,
| polynomial \rangle .
\end{eqnarray}
\noindent Using the delta properties, we obtain the simplified
scalar product for the Hamiltonian (\ref{echs2})
\begin{eqnarray}
\label{mes21}
_{phys}\langle\psi| \tilde{H_2} | \psi \rangle_{phys}=\nonumber \\
\langle polynomial | M + {1\over 8\lambda} a^ia^i \pi^j \pi^j
- {1\over 8\lambda} a^i\pi^i a^j\pi^j | polynomial \rangle .
\end{eqnarray}
\noindent The expression above is the same as the one obtained for the scalar
product of the quantum Hamiltonian $\tilde{H_1}$. It is important to
remark that, although the BFFT formalism gives the freedom to
choose different first class algebras for the same second class
Hamiltonian, the two expressions for the spectrum of both algebras are
identical. This result shows again the consistency of the BFFT
formalism.
The final Hamiltonian operator inside the kets (\ref{mes13})
and (\ref{mes21}) must be Hermitian. Then, this Hamiltonian has to be
symmetrized\footnote{In the BFFT formalism applied to the Skyrme model, the operator ordering problem appears in the expression of the non-abelian first class Hamiltonian.}. Following the prescription of Weyl ordering\cite{Weyl}
(symmetrization procedure) we can write the symmetric
Hamiltonian as
\begin{eqnarray}
\label{HWeyl}
\tilde{H}_{sym} = {1\over 8\lambda} \[ a^ia^i \pi^j \pi^j \]_{sym}
- {1\over 8\lambda}\[ a^i\pi^i a^j\pi^j \]_{sym},
\end{eqnarray}
\noindent where $\[ a^ia^i \pi^j \pi^j \]_{sym} $ and
$\[ a^i\pi^i a^j\pi^j \]_{sym}$ are defined as
\begin{eqnarray}
\label{wdef}
\[ a^i a^i \pi^j \pi^j \]_{sym} = {1\over 4}
\[ a^i( a^i \pi^j + \pi^j a^i )\pi^j + \pi^j
( a^i \pi^j + \pi^j a^i ) a^i \]\\
\[ a^i\pi^i a^j\pi^j \]_{sym} = {1\over 4}
\[(a^i \pi^i+ \pi^i a^i) (a^j \pi^j + \pi^j a^j) \].
\end{eqnarray}
\noindent Then, using the symmetric Hamiltonian operator
$\tilde{H}_{sym}$ of Eq.~(\ref{HWeyl}), both mean values
(\ref{mes13}) and (\ref{mes21}) become
\begin{eqnarray}
\label{mes2W}
_{phys}\langle\psi| \tilde{H}_{sym} | \psi \rangle_{phys}=\nonumber \\
\langle polynomial | M + {1\over 8\lambda} \[ a^ia^i \pi^j \pi^j \]_{sym}
- {1\over 8\lambda}\[ a^i\pi^i a^j\pi^j \]_{sym}| polynomial \rangle .
\nonumber\\
=\langle polynomial | M + {1\over 32\lambda}
[ a^i ( a^i \pi^j + \pi^j a^i )\pi^j + \pi^j
( a^i \pi^j + \pi^j a^i ) a^i ]\nonumber \\- {1\over 32\lambda}
[(a^i \pi^i+ \pi^ia^i) (a^j \pi^j + \pi^j a^j) ]
| polynomial \rangle \,\,.
\end{eqnarray}
\noindent The operator $\pi^j$ describes a free particle
and its representation on the collective coordinates space $a^i$
is given by
\begin{equation}
\label{piconfig}
\pi^j = -i {\partial\over \partial a_j}\,\,.
\end{equation}
\noindent Substituting the expression (\ref{piconfig}) into
(\ref{mes2W}), we obtain
\begin{equation}
\label{meswe}
_{phys}\langle\psi| \tilde{H}_{sym} | \psi \rangle_{phys} =
M + {1\over 8\lambda} \[ l(l+2) + 1 \] \,\,.
\end{equation}
\noindent This last expression, Eq.~(\ref{meswe}), differs from
the conventional energy eigenvalues of the Skyrme model, Eq.~(\ref{uqhe}),
by an additional constant term.
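The shifted eigenvalue can also be cross-checked symbolically. The sketch below (our own verification, not part of the original calculation) applies the Weyl-ordered combination of Eq.~(\ref{HWeyl}), with the symmetrization weight $1/4$ of Eqs.~(\ref{wdef}), to the polynomial wave function with $\pi^j=-i\partial/\partial a_j$, and recovers $l(l+2)+1$.
\begin{verbatim}
# Check that [a.a pi.pi]_sym - [a.pi a.pi]_sym gives l(l+2)+1
# on psi = (a1 + i a2)^l (sketch; l may be any positive integer).
import sympy as sp

a = sp.symbols('a0:4'); l = 3
psi = (a[1] + sp.I*a[2])**l
P = lambda j, f: -sp.I*sp.diff(f, a[j])       # pi^j = -i d/da^j

t1 = sp.expand(sum(                           # [a.a pi.pi]_sym psi
        a[i]*(a[i]*P(j, P(j, psi)) + P(j, a[i]*P(j, psi)))
        + P(j, a[i]*P(j, a[i]*psi)) + P(j, P(j, a[i]*a[i]*psi))
        for i in range(4) for j in range(4)))/4

B = sum(a[j]*P(j, psi) + P(j, a[j]*psi) for j in range(4))
t2 = sp.expand(sum(a[i]*P(i, B) + P(i, a[i]*B) for i in range(4)))/4

print(sp.simplify((t1 - t2)/psi))             # -> 16 = l*(l+2) + 1 for l = 3
\end{verbatim}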
Thus, using the symmetrized non-abelian BFFT Hamiltonians and employing
the Dirac quantization method of first class constraints, we have
obtained the Skyrmion rotational mode energy
eigenvalues with a mass shift. Similar results have also been
obtained by several authors\footnote{These authors have pointed out that a mass
shift can improve the usual phenomenology predicted by the Skyrme
model.}\cite{Fujii} using different procedures.
\section{Conclusions}
We have used the extension of the BFFT formalism presented by Banerjee,
Banerjee and Ghosh in order to quantize the SU(2) Skyrme model. Using
the non-abelian algebra, we have shown that, contrary to the
results obtained by the usual abelian BFFT formalism, it is possible
to construct first class Hamiltonians that are simple finite sums.
The extended Lagrangians were obtained by using the Faddeev-Senjanovich
constrained path integral formalism. In the so called unitary gauge
we reproduced the original Skyrmion Lagrangian. We calculated the
mean energy for the two different first class
Hamiltonian operators, leading consistently to the same mass spectrum of
the theory. Our results thus show,
in some sense, that for non-linear theories the non-abelian BFFT
formalism is more adequate than the abelian formalism.
\section{Acknowledgments}
We would like to thank Ilya Shapiro for critical reading.
This work is supported in part by FAPEMIG, Brazilian Research Council.
\section{Introduction}
In an increasingly globalized and connected world in which the population and urbanization grows over the years, the pressure for more efficient and sustainable road freight increases due to the greater demand for different types of products. Road freight plays a fundamental role in global logistics due to the flexibility of the infrastructure that this segment of transport offers, allowing a door-to-door service, which is not possible with other modes \cite{Ergstron_2016}.
On the other hand, the reliability of road freight depends on factors that can cause many accidents, interrupting the supply chain and generating significant losses. According to World Health Organization (WHO) data, accidents cost on average around 3\% of a country's Gross Domestic Product (GDP) and lead to the death of 1.35 million people per year, being the eighth leading cause of death worldwide.
Therefore, reducing the accident rate of road freight is an important factor from a strategic and sustainable point of view, in the pursuit of logistical and economical development, because when an accident occurs the losses can assume large financial proportions in addition to impacts in several other dimensions. This highlights the need to reduce losses in road transport \cite{Ergstron_2016}.
When a route is chosen for the vehicle that will leave a depot to deliver to other cities, not only the logistics costs or distance should be considered, but also the risks linked to that route. Many studies consider only the first factor, using mathematical models like the VRP to solve it. Risk measurement is addressed in cash-in-transit problems, also through the VRP, which in this case is a mathematical model that minimizes the distances traveled under the restriction that the risk of robberies of heavy trucks during the transport of money is limited by a risk threshold \cite{talarico_et_al_2015}.
Route safety is also discussed in studies of the transportation of hazardous materials such as fuels, flammable materials, gases and others, aiming to reduce social and environmental impacts and increase transport safety by minimizing the risk factor \cite{holeczek_2021}.
The VRP has been widely studied throughout its sixty-year history, but still few studies have taken the issue of risk into account. Recently, research has emerged in the areas of cash-in-transit and hazardous materials, but it did not consider a statistical approach to risk in the models.
Therefore, given the applications of risk in VRP and the need to reduce losses due to accidents, this paper aims to introduce an analytical approach for road freight, combining an optimization model and statistical analysis to support decision making in the choice of routes based on logistic cost and safety.
Among the contributions of this analytical approach are its simplicity of application to a real problem of a Brazilian road freight company that daily needs to choose the best routes considering safety and costs, as well as its adaptability to other VRP models. Finally, the Knime Analytics Platform was helpful to simplify data exploration, analysis, visualization and interpretation, and to estimate the accident probabilities.
This paper is organized in five sections, of which the first introduces the importance of studying risk in VRP. The second section reviews and discusses the literature that has addressed risk in VRP. The analytical approach section presents the methodology proposed for the study, as well as the calculations used. Finally, the results are presented in experimental studies and the conclusion is drawn from them.
\section{Literature review}
\subsection{Risk in VRP}
Vehicle Routing Problems (VRP) have been extensively studied throughout their history to support real-life applications. Risk and safety in VRP have received more attention in applications for the transport of hazardous materials, whose risk is an accident causing socio-environmental damage, and cash-in-transit, which is related to cargo theft \cite{talarico_et_al_2017}.
\cite{erkut_Ingolfsson_2005} cited eight risk models that were developed for the optimization of hazardous materials transportation, the three main ones being represented in Table \ref{tab:table_1}. In the first model, the risk ($IP$) is calculated through the probability of the undesirable event on each segment $i$ of route $r$, while the second ($PE$) considers only the number of people exposed to risk. In the traditional model, the risk ($TR$) is the product of the probability of the undesirable event and its measure of consequence.
To investigate the behavior of these three risk models in VRP, \cite{holeczek_2021} used bi-objective functions that minimize both distance and accident risk. The results show that the Traditional Risk generates the best reduction in total risk, but it shows the worst deviation from the minimum distance when compared to the other two models. The Accident Probability offers the best trade-off with the economical goal and is most appropriate for problems where the consequences are uncertain. As for the Population Exposure, the data are more easily acquired and the results are evaluated more intuitively by a decision maker, but it can only be applied to problems in urban areas, because for environments such as rural areas other factors must be considered.
\begin{table}[]
\centering
\begin{tabular}{l l l}
\textbf{Model} & \textbf{Equation} & \\\hline
Accident Probability & $IP(r) = \sum_{i\in r} p_i$ & $p_i =$ accident probability \\\hline
Population Exposure & $PE(r) = \sum_{i\in r} D_i$ & $D_i =$ population exposure \\\hline
Traditional Risk & $TR(r) = \sum_{i\in r} p_i * C_i$ & $p_i =$ accident probability \\
& & $C_i =$ measure of the consequence \\\hline
\end{tabular}
\caption{Models for risk assessment. Adapted from \cite{erkut_Ingolfsson_2005}.}
\label{tab:table_1}
\end{table}
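As a compact illustration of Table \ref{tab:table_1} (the figures below are placeholders, not data from the cited papers), the three measures can be computed per route as follows:
\begin{verbatim}
# Illustrative implementation of the three risk models of Table 1
# for a route given per-arc data (all figures are placeholders).
def ip(route): return sum(arc["p"] for arc in route)           # accident prob.
def pe(route): return sum(arc["D"] for arc in route)           # pop. exposure
def tr(route): return sum(arc["p"]*arc["C"] for arc in route)  # traditional

route = [{"p": 0.001, "D": 1200, "C": 5e4},
         {"p": 0.003, "D": 300,  "C": 2e4}]
print(ip(route), pe(route), tr(route))
\end{verbatim}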
The elements of each model, such as accident probability, consequences, number of people exposed and others, are defined according to the case being studied. \cite{androutsopoulos_et_al_2012,carrese_et_al_2019}, for example, studied the risks in hazardous materials transport and considered that the undesirable event would be the accident, whose consequence is related to the number of people exposed to risk. On the other hand, \cite{talarico_et_al_2015} applied the risk to the \textit{cash-in-transit} routing problem, considering that the probability of a robbery is proportional to the distance of the route and taking as consequence the amount of cash in the truck.
In addition to the elements, the models also vary according to the risk approach used in VRP. Thus, Table \ref{tab:table_2} was built to show the case studied, models and risk elements that were adopted for the calculation.
\cite{du2017multi,Pradhananga2014,Wang2018} applied the concept of the traditional model as proposed by \cite{erkut_Ingolfsson_2005}, in which the probability of an accident was used as the undesirable event and the exposed population as the consequence. \cite{androutsopoulos_et_al_2012,holeczek_2021} also applied the traditional definition, but considered it \textit{load dependent}, that is, a load amount factor is added to the model and varies as deliveries are made. \cite{carrese_et_al_2019} is also based on the traditional model, but two other factors that interfere with the driver's attention are added to the objective function: the Altimetric Index and the Planimetric Index. The first considers the elevations along the route, while the second is introduced to take into account geometrical constraints related to the road radius.
\cite{Bula_et_al_2016,Bula2019} calculated the risk in a different way from what has been discussed so far. In this case, in addition to the accident probability, they also considered the release probability as a result of the accident, whose consequence is the number of people exposed; the truck type and the load amount have a relevant impact on the risk.
As a derivation of hazardous materials, models for \textit{cash-in-transit} arise. \cite{talarico_et_al_2015,talarico_et_al_2017} proposed the traditional method to calculate the risk of robbery. As already explained, they considered the consequence as equivalent to the amount of value in transport. \cite{Ghannadpour2020} assumed the consequence in the same way, with the distance proportional to the risk of theft, but added to the model a factor for the frequency of passing through the same route, as well as the probabilities of the vehicle being ambushed and of the ambush succeeding.
Table \ref{tab:table_2} shows that the studies are concentrated in only two areas: hazardous materials and cash-in-transit. Regarding other segments of transport, there are few studies in the literature taking risks in VRP into account, which is important because there are also significant consequences or losses in case an accident happens, especially financial ones when the values of the shipped goods are high.
VRP models with risk vary according to the objectives of each study. Table \ref{tab:table_3} summarizes the problem characteristics with the nomenclature presented by \cite{Braekers2016}. The symbol ``x'' is used when the paper considers the characteristic, and Table \ref{tab:my_label} presents the nomenclature.
\cite{Bula_et_al_2016,talarico_et_al_2015} used a mono-objective function which first minimizes the distances and then the risk. \cite{androutsopoulos_et_al_2012,Bula2019,Pradhananga2014,Wang2018} suggest bi-objective functions that analyze both logistical costs or distances and route risks. Multi-objective functions are presented by \cite{carrese_et_al_2019,Ghannadpour2020,Zheng2010}; the first one, in addition to minimizing the distance, weighted the risk by the traditional method (probability of accident and a consequence) and the number of people exposed.
Generally, the risk factor is analyzed as an objective to be minimized; however, \cite{talarico_et_al_2015} considers it as a constraint in which the risk value is limited by a risk threshold, a problem classified as the \textit{Risk constrained Cash-in-Transit Vehicle Routing Problem (RCTVRP)}. \cite{Wang2018} adds the restriction that no vehicles of the same fleet travel in echelon, because when two or more vehicles use the same route at the same time the consequences are considered to be greater if an accident occurs between them.
\begin{longtable}{p{.30\textwidth} p{.30\textwidth} p{.30\textwidth}}
\textbf{Authors} & \textbf{Case studied} & \textbf{Risk model} \\\hline
\cite{Zheng2010} & Hazardous Materials & accident probability * consequence + population exposure \\\hline
\cite{androutsopoulos_et_al_2012,holeczek_2021} & Hazardous Materials & accident probability * population exposure * load amount \\\hline
\cite{du2017multi,Pradhananga2014,Wang2018} & Hazardous Materials & accident probability * population exposure \\\hline
\cite{talarico_et_al_2015} & Cash-in-Transit & route length * load amount \\\hline
\cite{Bula_et_al_2016,Bula2019} & Hazardous Materials & accident probability * release probability * route length * load amount * truck type * population exposure \\\hline
\cite{carrese_et_al_2019} & Hazardous Materials & accident probability * population exposure + altimetric index + planimetric index \\\hline
\cite{Ghannadpour2020} & Cash-in-Transit & ambush probability * theft success probability * route length * load amount * frequency of repeated use of a route \\\hline
\caption{Risk models proposed in literature.}
\label{tab:table_2}
\end{longtable}
\begin{table}[]
\centering
\begin{tabular}{c c}
\multicolumn{2}{l}{}\\
\hline
\textbf{Categories} & \textbf{Sub-categories} \\\hline
\multirow{4}{*}{Objective Function (3.10)} & Travel time dependent (3.10.1) \\
& Distance dependent (3.10.2)\\
& Implied hazard/risk related (3.10.5)\\
& Others (3.10.6)\\\hline
Data used (5.1) & Real-world data (5.1.1) \\\hline
\end{tabular}
\caption{Nomenclature proposed by \cite{Braekers2016}.}
\label{tab:my_label}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{l c c c c c c}
\multicolumn{5}{l}{\small{* Nomenclature presented by \cite{Braekers2016}.}}\\
\hline
\textbf{Authors} & \textbf{VRP} & \multicolumn{4}{c}{\textbf{3.10*}} & \textbf{5.1*}\\
\hline
\textbf{} & \textbf{} & \textbf{3.10.1*} & \textbf{3.10.2*} & \textbf{3.10.5*} & \textbf{3.10.6*} & \textbf{5.1.1*} \\\hline
\cite{androutsopoulos_et_al_2012} & VRPTW & x & & x & \\\hline
\cite{Bula_et_al_2016} & HVRP & & & x & \\\hline
\cite{Bula2019} & HVRP & & x & x & \\\hline
\cite{carrese_et_al_2019} & VRPTW & x & & x & x & x \\\hline
\cite{du2017multi} & MRVRP & & & x & & x \\\hline
\cite{Ghannadpour2020} & VRPTW & & x & x & \\\hline
\cite{holeczek_2021} & CVRP & & x & x & \\\hline
\cite{Pradhananga2014} & VRPTW & x & & x & & x \\\hline
\cite{talarico_et_al_2015} & RCTVRP & & x & & \\\hline
\cite{Wang2018} & VRPTW & & x & x & \\\hline
\cite{Zheng2010} & CVRP & & x & x & x \\\hline
\end{tabular}
\caption{Description and characteristics of VRP models.}
\label{tab:table_3}
\end{table}
Regarding the measurement of risk, few studies address how the data are explored and, according to \cite{du2017multi}, it is necessary to integrate real historical accident data and big data into the formulation of models for the transport of hazardous materials. However, \cite{talarico_et_al_2015} mentions that there is little data available, and \cite{androutsopoulos_et_al_2012} does not explore risk measurement due to its complexity, also stating that future studies should deal with this issue.
\cite{carrese_et_al_2019} calculated the accident probability from data obtained by the mobility agency in Rome, quantified population density through \textit{census data} and measured the infrastructure through the \textit{Google Application Programming Interface (API)}.
\cite{Pradhananga2014} estimated accident rates using data collected from the \textit{Institute for Traffic Accident Research and Data Analysis} (ITARDA) and the \textit{Ministry of Land, Infrastructure, Transport and Tourism} (MLIT), both from Japan. \cite{Ghannadpour2020} estimated the probability of a robber attack using game theory, while the success probability was calculated using multi-criteria decision making.
As future work, \cite{Pradhananga2014} proposed extensions of the model for the hazardous materials routing problem using real-time traffic information and considering the effects of the infrastructural characteristics of the road network.
\cite{milovanovic_2012} developed a methodology for calculating the risk of accidents in hazardous materials transport, in which factors that influence the accident probability, as well as their consequences, were considered. The factors were measured from indirect interviews with experts, obtaining numerical risk results for each route through this analysis. On the other hand, it did not use mathematical VRP models to optimize routes and did not consider statistical analysis.
As much as some studies try to deal with the use of real data in their problems, a statistical view of data analysis is not approached in depth. \cite{Fillbrunn2017} reviewed some of the extensions of the free software Knime Analytics Platform that can support analyses from a database and that provide the creation of structured workflows. In addition, \cite{Ali2021} used Monte Carlo simulation to assess the losses in economic value for the Pakistani economy in the event of a transport strike, whereas in this study the simulation relates losses to road accidents.
\subsection{Contributions of this paper}
\begin{itemize}
\item Develop a risk model in VRP that is applicable both to the cases presented in Table \ref{tab:table_2} and to different cases of road freight;
\item Use real-world data and a statistical approach to estimate the accident probabilities. As shown in Table \ref{tab:table_3}, few studies take into account real-world data and none of them use a statistical approach.
\end{itemize}
\section{Analytical Approach}
\subsection{Description}
This study followed the procedures described in the workflow (Figure \ref{figure1}) and started from the definition of the VRP, which was employed with a set of cities and roads known by the authors. Then, the parameters were defined with the online tool \cite{qualp} to calculate the logistics costs of each arc, which consider fuel expenses, based on the vehicle's consumption and the fuel price per liter, and tolls, where present on the arc.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Figures/Figure_1.png}
\caption{Workflow to employ the methodology of this paper.}
\label{figure1}
\end{figure}
In parallel, the literature on statistical analysis of risk in road freight was revised and, based on \cite{milovanovic_2012} and on the data that were found, the approach to calculate the probabilities and costs related to the risk of accident for each arc was developed. All collected and processed data used in this study are available at \cite{github}.
Several databases of Brazilian roads were consulted on the web; for the most part, they come from government agencies whose information is available for public viewing, among them: the National Transport Confederation (CNT), the Department of Roads and Highways of the State of Sao Paulo (DER-SP) and the National Department of Transport Infrastructure (DNIT). Through a cargo insurance company, data were provided regarding the losses in monetary values that occurred due to accidents between January 2018 and March 2021.
The data were processed using the \textit{Knime Analytics Platform} tool, which generated the accident probabilities for each arc. Then, from the probability results, the Monte Carlo simulation was programmed to obtain the costs related to the risks of the arcs.
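As an illustration of this step (the sketch below uses made-up figures, not the company or insurer data), the risk cost of an arc can be estimated by sampling accident occurrences with the estimated probability and resampling the historical losses:
\begin{verbatim}
# Hedged sketch of the Monte Carlo step: expected loss on an arc.
import random

def risk_cost(p_accident, losses, n_trials=100_000, seed=42):
    # With probability p_accident an accident happens on a trip and a
    # loss is drawn from the historical sample; return the mean loss.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < p_accident:
            total += rng.choice(losses)
    return total / n_trials

# e.g. an arc with 0.2% accident probability and three observed losses (R$)
print(risk_cost(0.002, [50_000.0, 120_000.0, 30_000.0]))
\end{verbatim}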
The objective function (Equation \ref{equation1}) minimizes logistics and risk-related costs, weighted by the parameter $\alpha$, whose value varies between 0 and 1 and which assigns a weight to safety during the optimization.
After defining the objective function and the parameters of the VRP, the mathematical model was implemented in \textit{Python} and the results were generated by the CBC solver. The decision maker can then evaluate the routes and the values of each objective function component, logistics cost and risk cost, as well as the total, which is their sum.
The security level $\alpha$ must be adjusted by the decision maker to assess whether the results are suitable for the case. Depending on the cargo type or its value, $\alpha$ may have to be adjusted in order to give a greater or lesser level of priority to route safety.
Therefore, this evaluation is necessary for the decision maker to weigh logistics costs against safety; if the result is not adequate, a new value of $\alpha$ must be chosen until the solution fits the best choice of the decision maker.
\subsection{The VRP model}
The capacitated VRP (CVRP) is employed in a real problem of a transportation company and the arcs are defined through the directed graph $G = (N,E)$. The problem presents a single depot, located in Limeira/SP. Two vertices are created to represent the exit from and the arrival at the depot, respectively ($\{0,n+1\}$). Set $C= \{1,...,n\}$ represents the delivery points in nine cities ($n=9$). The total set of vertices is represented by $N = C \cup \{0,n+1\}$. Set $K = \{k_1,k_2,k_3\}$ contains the vehicles, all with equal capacity $q$. The map in Figure \ref{figure2} illustrates both the depot and all the delivery points that the carrier must serve.
The expressions of the mathematical model are presented next. First, the sets are defined:
$N = C \cup \{0,n+1\}$, $C= \{1,...,n\}$, $E = \{(i,j) : i,j \in N, i \neq j, i \neq n+1, j \neq 0\}$.
\begin{equation}
\min z = (1 - \alpha)\sum_{k\in K}\sum_{(ij)\in E}c_{ij}X_{ijk} + \alpha\sum_{k\in K}\sum_{(ij)\in E}r_{ij}X_{ijk}
\label{equation1}
\end{equation}
\begin{equation}
\sum_{k\in K}\sum_{j\in E}X_{ijk} = 1 , \forall i \in C
\label{equation2}
\end{equation}
\begin{equation}
\sum_{i\in E}d_i\sum_{j\in E}X_{ijk} \leq q , \forall k \in K
\label{equation3}
\end{equation}
\begin{equation}
\sum_{j\in E}X_{0jk} = 1 , \forall k \in K
\label{equation4}
\end{equation}
\begin{equation}
\sum_{i\in E}X_{ihk} - \sum_{j\in E}X_{hjk} = 0 , \forall h \in C, k \in K
\label{equation5}
\end{equation}
\begin{equation}
\sum_{i\in E}X_{i,n+1,k} = 1 , \forall k \in K
\label{equation6}
\end{equation}
\begin{equation}
u_{ik} - u_{jk} + (n+1)X_{ijk} \leq n, \forall (i,j) \in E, k \in K
\label{equation7}
\end{equation}
\begin{equation}
X_{ijk} \in \{0,1\}, \forall (i,j) \in E, k \in K
\label{equation9}
\end{equation}
Parameters: $c_{ij}$: logistics cost; $r_{ij}$: risk cost; $\alpha$: security level; $d_i$: demand of node $i$; $q$: vehicle capacity. Variables: $X_{ijk}$: binary variable indicating that vehicle $k$ traverses arc $(i,j)$; $u_{ik}$: auxiliary variable used for sub-tour elimination.
Equation (\ref{equation1}) is the objective function that minimizes the logistics ($c_{ij}$) and risk ($r_{ij}$) costs. Equation (\ref{equation2}) states that each delivery point must be visited by exactly one vehicle. The vehicle capacity constraint is represented by Expression (\ref{equation3}). Equations (\ref{equation4}) and (\ref{equation6}) state that every vehicle must leave and arrive at the depot, respectively. The flow conservation at each node is represented by Equation (\ref{equation5}). Finally, sub-tour elimination is enforced by Constraints (\ref{equation7}) and the binary variables are defined by Expression (\ref{equation9}).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{Figures/Figure_2.png}
\caption{Depot in red and delivery points in blue.}
\label{figure2}
\end{figure}
Constraints (\ref{equation8}) provide a stronger formulation for sub-tour elimination because of their tighter LP relaxation, but their number can be too high ($2^n$) to include in the model explicitly. For this reason, \textit{cut callbacks} and \textit{lazy constraints} were used to insert only the constraints that are violated, so it is not necessary to enumerate all subsets with $2 \leq |S| \leq n$ in the model \cite{HaroldoG.Santos2019}.
\begin{equation}
\sum_{(i,j) \in E : i,j \in S}X_{ijk} \leq |S| -1, \forall S \subseteq C, |S| \geq 2, k \in K
\label{equation8}
\end{equation}
where $S$ is a subset of delivery points that could form a sub-tour.
Cut callbacks are used only to improve the LP relaxation and do not define feasible solutions, which must be guaranteed by the initial formulation. Therefore, the weak sub-tour elimination constraints of Expression (\ref{equation7}) are included in the initial model and Constraints (\ref{equation8}) are then added as cuts.
The \textit{Python MIP} package \cite{HaroldoG.Santos2019} was used to implement the model because it supports sub-tour elimination through cut callbacks and lazy constraints, which makes the solution process more efficient. The free solver CBC optimized the problem in 44 seconds for all twenty values of $\alpha$, with 363 variables and 289 constraints in each instance.
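To make the implementation concrete, the following is a minimal sketch of the model above in \textit{Python MIP}; it assumes that the cost matrices \texttt{c} and \texttt{r}, the demand list \texttt{d}, the capacity \texttt{q} and the fleet size \texttt{K} are given, and it keeps only the weak sub-tour elimination of Constraints (\ref{equation7}), leaving the cuts of Constraints (\ref{equation8}) to a callback (Python MIP's \texttt{ConstrsGenerator}) or an iterative re-solve. It is a sketch under these assumptions, not the exact code used in the experiments.
\begin{verbatim}
from itertools import product
from mip import Model, xsum, minimize, BINARY

def build_cvrp(c, r, d, q, n, K, alpha):
    # 0 = depot exit, n+1 = depot arrival, C = delivery points
    N = range(n + 2)
    C = range(1, n + 1)
    E = {(i, j) for i, j in product(N, N)
         if i != j and i != n + 1 and j != 0}

    m = Model()  # Python MIP uses the free CBC solver by default
    x = {(i, j, k): m.add_var(var_type=BINARY)
         for (i, j) in E for k in range(K)}
    u = {(i, k): m.add_var(lb=1, ub=n + 1)
         for i in N for k in range(K)}

    # Equation (1): weighted sum of logistics and risk costs
    m.objective = minimize(
        xsum(((1 - alpha) * c[i][j] + alpha * r[i][j]) * x[i, j, k]
             for (i, j) in E for k in range(K)))

    for i in C:  # Eq. (2): each delivery point is visited exactly once
        m += xsum(x[i, j, k]
                  for j in N if (i, j) in E for k in range(K)) == 1
    for k in range(K):
        # Eq. (3): vehicle capacity
        m += xsum(d[i] * x[i, j, k] for (i, j) in E if i in C) <= q
        # Eqs. (4) and (6): leave and reach the depot exactly once
        m += xsum(x[0, j, k] for j in N if (0, j) in E) == 1
        m += xsum(x[i, n + 1, k] for i in N if (i, n + 1) in E) == 1
        for h in C:  # Eq. (5): flow conservation
            m += (xsum(x[i, h, k] for i in N if (i, h) in E) ==
                  xsum(x[h, j, k] for j in N if (h, j) in E))
        for (i, j) in E:  # Eq. (7): weak (MTZ) sub-tour elimination,
            if i in C and j in C:  # applied between delivery points
                m += u[i, k] - u[j, k] + (n + 1) * x[i, j, k] <= n

    return m, x
\end{verbatim}
After building the model, a call to \texttt{m.optimize()} solves one instance for a chosen value of \texttt{alpha}; looping over the twenty values of $\alpha$ reproduces the experimental setup described above.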
\subsection{Calculating the risk cost $r_{ij}$}
As already mentioned, the accident probabilities $Paccident_{ij}$ were generated for each arc $ij$ by the \textit{Knime Analytics Platform}, according to the workflows represented in Figures \ref{figure3}, \ref{figure4} and \ref{figure5} and Equations (\ref{equation10}), (\ref{equation11}), (\ref{equation12}), (\ref{equation13}), (\ref{equation14}), (\ref{equation15}), (\ref{equation16}), (\ref{equation17}) and (\ref{equation18}).
First, it was necessary to estimate a general probability ($Pgeneral$) of accidents occurring on any road in Brazil, following the workflow of Figure \ref{figure3}. Data were collected from the free websites \cite{dersp,dnit} and from the report \cite{cnt_painelacidentes2018}. \cite{dnit} provides the average volume $V$ of all vehicles that travel daily on federal roads, while from \cite{dersp} the percentage $P_{sp}$ of heavy vehicles $HV_{sp}$ over the total $V_{sp}$ circulating on S\~ao Paulo State roads was obtained through Equation (\ref{equation10}); this percentage was assumed to be the same on federal roads. From this, Equation (\ref{equation11}) yields the number of heavy vehicles $HV$, which is used in Equation (\ref{equation12}) together with the number of accidents $N_{accidents}$, extracted from \cite{cnt_painelacidentes2018}, to obtain $Pgeneral$.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Figures/Figure_3.png}
\caption{Workflow to calculate $Pgeneral$.}
\label{figure3}
\end{figure}
\begin{equation}
P_{sp} = HV_{sp}/V_{sp}
\label{equation10}
\end{equation}
\begin{equation}
HV = P_{sp}.V
\label{equation11}
\end{equation}
\begin{equation}
Pgeneral = \frac{N_{accidents}}{HV}.100\%
\label{equation12}
\end{equation}
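As a small numerical illustration of Equations (\ref{equation10})--(\ref{equation12}), the computation amounts to the following sketch; the figures used here are placeholders, since the real values come from \cite{dersp,dnit,cnt_painelacidentes2018}.
\begin{verbatim}
HV_sp, V_sp = 14_000, 40_000   # placeholder Sao Paulo State counts
V = 1_000_000                  # placeholder daily federal volume
N_accidents = 350              # placeholder number of accidents

P_sp = HV_sp / V_sp            # Eq. (10): share of heavy vehicles
HV = P_sp * V                  # Eq. (11): heavy vehicles, federal roads
P_general = N_accidents / HV * 100   # Eq. (12): probability in %
print(P_general)               # 0.1 for these placeholder values
\end{verbatim}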
The factors considered in the calculation of $r_{ij}$ were the type of road and the traffic of heavy vehicles. The first is relevant because Brazil has several types of roads with different safety levels. The Accident Panel Report prepared by \cite{cnt_painelacidentes2018} divides them into five categories ($T=\{t_1,...,t_5\}$), as presented in Table \ref{tab:table_4}. The number of deaths per hundred accidents was also extracted from this report and is used in the calculation of accident risk.
\begin{table}[]
\centering
\begin{tabular}{c c}
\textbf{Road type} & \textbf{Death rate per 100 accidents} \\\hline
Two lanes two way road with central safety lane & 12.3 \\\hline
Two lanes two way road with central barrier & 8.5 \\\hline
Two lanes two way road with central line & 18.0 \\\hline
Single lane one way road & 11.9 \\\hline
Single lane two way road & 22.3 \\\hline
\end{tabular} \caption{Death rate per type of road \cite{cnt_painelacidentes2018}.}
\label{tab:table_4}
\end{table}
The flow of heavy vehicles was obtained from \cite{dersp} through speed radars installed on the roads, and it was considered directly proportional to the accident probability: of two roads with the same characteristics, the one with the larger flow has a greater chance of an accident occurring.
The indices $it_h$ and $iv_h$ represent, respectively, the type of road and the flow of vehicles of a road $h$ in the set $H$ of roads of the problem; they were calculated according to the workflow of Figure \ref{figure4}. The data were collected from \cite{cnt_painelacidentes2018,dersp}. First, it was necessary to find $\Bar{x}$ and $\Bar{y}$ through Equations (\ref{equation13}) and (\ref{equation14}), which represent, respectively, the average flow of vehicles $x_h$ on the twelve roads of the problem ($N_h = 12$) and the average death rate per 100 accidents $y_t$ among the five types of road ($N_t = 5$). Then, the indices $iv_h$ and $it_h$ were calculated by Equations (\ref{equation15}) and (\ref{equation16}): if $iv_h$ or $it_h$ is greater than 1.0, the accident probability on road $h$ will be greater than the general probability ($Pgeneral$), and lower if the indices are less than 1.0.
\begin{equation}
\Bar{x} = \frac{\sum_{h\in H}x_{h}}{N_h}
\label{equation13}
\end{equation}
\begin{equation}
\Bar{y} = \frac{\sum_{t\in T}y_{t}}{N_t}
\label{equation14}
\end{equation}
\begin{equation}
iv_h = 1 + \frac{x_{h} - \Bar{x}}{\Bar{x}} \ , \ \forall h \in H
\label{equation15}
\end{equation}
\begin{equation}
it_h = 1 + \frac{y_{h} - \Bar{y}}{\Bar{y}} \ , \ \forall h \in H
\label{equation16}
\end{equation}
where $x_h$ denotes the flow of vehicles on road $h$ and $y_h$ the death rate of the type of road $h$.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure_4.png}
\caption{Workflow to calculate the indices $iv_h$ and $it_h$.}
\label{figure4}
\end{figure}
On some arcs the vehicle may pass through more than one road $h$; e.g., the arc Limeira-Cosmópolis comprises two roads, $h_1=$ SP330 and $h_2=$ SP133, whose flows and characteristics are different. Thus, an index $e_{ij}$ weighted by $iv_h$, $it_h$ and $l_h$ is necessary, given by Equation (\ref{equation17}), where $l_h$ is the length the truck travels on each road $h$ of the arc $ij$ and $l_{ij}$ is the total length of the arc $ij$.
\begin{equation}
e_{ij} = \frac{\sum_{h\in H}iv_h.it_h.l_h}{l_{ij}} \ , \ \forall (i,j) \in E
\label{equation17}
\end{equation}
Finally, Equation (\ref{equation18}) gives the probability $Paccident_{ij}$ of an accident occurring on arc $ij$. The workflow of Figure \ref{figure5} shows how $e_{ij}$ and $Paccident_{ij}$ were found.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure_5.png}
\caption{Workflow to calculate $e_{ij}$ and $Paccident_{ij}$.}
\label{figure5}
\end{figure}
\begin{equation}
Paccident_{ij} = Pgeneral.e_{ij} \ , \ \forall (i,j) \in E
\label{equation18}
\end{equation}
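Equations (\ref{equation13})--(\ref{equation18}) can be summarized by the short sketch below; the dictionaries \texttt{flow}, \texttt{road\_type} and \texttt{death\_rate\_by\_type} (the latter filled from Table \ref{tab:table_4}) and the per-arc list of (road, length) pairs are assumed inputs, with illustrative names.
\begin{verbatim}
def road_indices(flow, road_type, death_rate_by_type):
    """Eqs. (13)-(16): indices iv_h and it_h for every road h."""
    x_bar = sum(flow.values()) / len(flow)                        # Eq. (13)
    y_bar = (sum(death_rate_by_type.values())
             / len(death_rate_by_type))                           # Eq. (14)
    iv = {h: 1 + (flow[h] - x_bar) / x_bar for h in flow}         # Eq. (15)
    it = {h: 1 + (death_rate_by_type[road_type[h]] - y_bar) / y_bar
          for h in flow}                                          # Eq. (16)
    return iv, it

def accident_probability(arc_roads, iv, it, p_general):
    """Eqs. (17)-(18): length-weighted index e_ij, then Paccident_ij.
    arc_roads is a list of (road, length) pairs for one arc ij."""
    l_ij = sum(l for _, l in arc_roads)
    e_ij = sum(iv[h] * it[h] * l for h, l in arc_roads) / l_ij    # Eq. (17)
    return p_general * e_ij                                       # Eq. (18)
\end{verbatim}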
After finding $Paccident_{ij}$, it is possible to estimate $r_{ij}$ by Monte Carlo simulation. This was made possible by data provided by the load insurer, which collected the percentages of accidents divided into intervals of losses in load value, according to Table \ref{tab:table_5}.
\begin{table}[]
\centering
\begin{tabular}{c c}
\textbf{Range of values} & \textbf{Occurrence} \\\hline
\$ 0.01 to \$ 200,000.00 & 37.91\% \\\hline
\$ 200,000.00 to \$ 300,000.00 & 24.17\% \\\hline
\$ 300,000.00 to \$ 500,000.00 & 19.91\% \\\hline
\$ 500,000.00 to \$ 1,000,000.00 & 16.11\% \\\hline
\$ 1,000,000.00 or more & 1.90\% \\\hline
\end{tabular} \caption{Percentage of accidents divided by the range of losses in load values.}
\label{tab:table_5}
\end{table}
Figure \ref{figure6} illustrates an example of how the probabilities were distributed. For each range the maximum load value was considered, and the cost to the road freight company was taken as 1\% of this value, the deductible that load insurers usually charge. Thus, if an event occurs in the range between \$0.01 and \$200,000.00, the value considered is always the highest of the range, in this case \$200,000.00, and the deductible cost for the carrier would be \$2,000.00.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure_6.png}
\caption{Accident probabilities and its costs to Monte Carlo Simulation.}
\label{figure6}
\end{figure}
Figure \ref{figure6} also shows, on the right, the accumulated percentage values, between 0 and 1, for each accident cost. These are important for the Monte Carlo simulation, which at each iteration selects a random value between 0 and 1 that corresponds to an accident cost. For example, according to Figure \ref{figure6}, each percentage range is associated with its cost, so any value selected between 0 and 0.990971 corresponds to an accident cost equal to \$0.00.
In this way, 1,000,000 iterations were performed for each arc of the problem, and the average of the accident costs over these iterations gives the value of $r_{ij}$ used in the objective function of the VRP. This number of iterations was chosen because the accident probabilities are very low, so the values of $r_{ij}$ require many samples to converge. Increasing the number of iterations further only lengthens the resolution time; for this problem, 1,000,000 proved to be a suitable number.
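A minimal sketch of this simulation for a single arc is given below; the deductible assigned to the open-ended last range of Table \ref{tab:table_5} and the fixed seed are assumptions made for illustration.
\begin{verbatim}
import random

def risk_cost(p_accident, n_iter=1_000_000, seed=1):
    """Monte Carlo estimate of r_ij for one arc with accident
    probability p_accident (a fraction, e.g. 0.019 for 1.9%)."""
    shares = [0.3791, 0.2417, 0.1991, 0.1611, 0.0190]      # Table 5
    # 1% deductible on the maximum value of each range; the cap of the
    # open-ended last range is an assumption for this illustration
    deductibles = [2_000.0, 3_000.0, 5_000.0, 10_000.0, 20_000.0]
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_iter):
        u = rng.random()                     # uniform value in [0, 1)
        threshold = 1.0 - p_accident         # below this: no accident
        if u < threshold:
            continue
        for share, cost in zip(shares, deductibles):
            threshold += share * p_accident  # cumulative values, Fig. 6
            if u < threshold:
                total += cost
                break
        else:
            total += deductibles[-1]         # guard against rounding at 1
    return total / n_iter
\end{verbatim}
As the number of iterations grows, the returned average converges to $Paccident_{ij}$ times the expected deductible, which illustrates why the very low accident probabilities require a large number of iterations.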
\section{Results and Discussions}
\subsection{Probabilities and costs of risks}
Figures \ref{figure7} and \ref{figure8} present the results for $Paccident_{ij}$ and $r_{ij}$, respectively. After analysis, it was observed that the values obtained for $Paccident_{ij}$ and $r_{ij}$ are in line with what was expected. When comparing arcs on the same type of road, such as Piracicaba-Santa Bárbara and Limeira-Mogi Mirim, it was verified that $Paccident_{ij}$ for the first (1.91455\%) is greater than for the second (0.147464\%), as is $r_{ij}$, which is \$788.70 and \$61.61, respectively. This was expected because the vehicle flow on the Piracicaba-Santa Bárbara arc (6,943 heavy vehicles) is greater than on the other (535 heavy vehicles).
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Figures/Figure_7_calor.png}
\caption{Accidents Probabilities $Paccident_{ij}$ in \%.}
\label{figure7}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Figures/Figure_8_calor.png}
\caption{Risk cost $r_{ij}$ in \$.}
\label{figure8}
\end{figure}
When comparing arcs with similar vehicle flows, such as Mogi Mirim-Araras and Mogi Mirim-Limeira, it is noted that $Paccident_{ij}$ for the first (0.300835\%) is greater than for the second (0.147464\%), as is $r_{ij}$, which is \$125.66 and \$56.95, respectively. This is due to the fact that the road connecting Mogi Mirim and Araras is a single lane two way road, according to the nomenclature of \cite{cnt_painelacidentes2018}, which has the highest death rate among all types of road.
Other analyses like these were also carried out, reaching the same conclusions. It is therefore possible to state that the methodology followed to find $Paccident_{ij}$ and $r_{ij}$ was satisfactory, since the results obtained were consistent with expectations.
\subsection{Optimization results}
Figure \ref{figure9} presents the results of each component, logistics and risk cost, and their sum, for the optimization with varying $\alpha$. Note that for $\alpha = 0$ the logistics cost assumes its minimum value compared to other values of $\alpha$, but the risk is the highest. When a greater weight is given to security during the optimization, that is, when $\alpha$ increases, the risk cost starts to decrease and the logistics cost increases. This occurs because the model starts to prioritize arcs with lower risks, reducing the emphasis on the cost of tolls and fuel.
Figure \ref{figure9} shows that for this problem a noticeable variation occurs at $\alpha = 0.15$, at which the logistics cost increases from \$729.70 to \$834.36 while the risk decreases from \$3832.44 to \$3175.29, a decrease of approximately 17.15\%. Between $\alpha = 0.20$ and $\alpha = 0.75$, the minimum logistics cost remains at \$851.38 and the risk at \$3106.47, regardless of how much more weight is given to safety.
This is important for a decision maker to see, because the choice of routes depends on the level of security the company wants for its fleet. If security is a major concern for the company, it is important to work with $\alpha \geq 0.15$; but if the logistics cost is still relevant, the security level should satisfy $\alpha \leq 0.75$, because for $\alpha \geq 0.80$ the logistics cost reaches \$966.34 and the risk \$3070.95: the first component increases considerably, by approximately 13.50\%, while the second decreases by only 1.1\%, so it is not worth working in this range.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{Figures/Grafico_3_Ingles.png}
\caption{Results of $c_{ij}$, $r_{ij}$ and $z$.}
\label{figure9}
\end{figure}
The routes were also evaluated; they are represented in Figures \ref{subfigure1}, \ref{subfigure2}, \ref{subfigure3} and \ref{subfigure4}, in which each delivery point is identified with a marker and a letter referring to the sequence of deliveries made by each truck. Different marker colors identify which truck carries out each delivery, facilitating the evaluation of the results: the markers in red represent truck 1, in black truck 2, in green truck 3, and in blue the depot.
\begin{figure}
\centering
\subfigure[subfigure1][$\alpha = 0.00$]{\includegraphics[width=0.47\textwidth]{Figures/Rotas_Alpha_0,00_1.png} \label{subfigure1}}
\hfill
\subfigure[subfigure2][$\alpha = 0.15$]{\includegraphics[width=0.47\textwidth]{Figures/Rotas_Alpha_0,15_1.png} \label{subfigure2}}
\hfill
\subfigure[subfigure3][$\alpha = 0.20$]{\includegraphics[width=0.47\textwidth]{Figures/Rotas_Alpha_0,20_1.png}\label{subfigure3}}
\hfill
\subfigure[subfigure4][$\alpha = 1.00$]{\includegraphics[width=0.47\textwidth]{Figures/Rotas_Alpha_1.png}\label{subfigure4}} \hfill
\caption{Routes obtained for different values of $\alpha$.}
\label{figure10}
\end{figure}
Only the routes for $\alpha$ equal to 0.00, 0.15, 0.20 and 1.00 are shown, since according to the graph in Figure \ref{figure9} these are the points that show variations. For $0.10 \leq \alpha \leq 0.15$ there were some changes, the main one being the elimination of the arc Cosmópolis-Limeira ($r_{ij} = 637.75$, red arc in Figure \ref{subfigure1}) and the addition of Mogi Mirim-Limeira ($r_{ij} = 61.61$, green arc in Figure \ref{subfigure2}), which alone accounts for approximately 85\% of the total risk reduction in this comparison.
The arc Mogi Mirim-Limeira (or vice versa), represented by the green arc in Figure \ref{subfigure2}, starts to be included as soon as $\alpha$ increases, because its risk cost is one of the lowest. Its inclusion is the one that most impacted the minimization of risk; consequently, for $\alpha \geq 0.20$ the risk cost decreases by only approximately 1\%, as this arc has already been included.
When analyzing $\alpha = 0.20$, it was noticed that the arc Araras-Holambra (green arc in Figure \ref{subfigure3}) was included ($r_{ij} = 182.59$), although part of it passes through the same road as the arc from Araras to Limeira ($r_{ij} = 583.51$), represented by the orange arc in Figure \ref{subfigure1}. This means that, to leave Araras and go to Holambra, the truck initially travels along a less safe road, but for most of the arc it is on low-risk roads, which gives $r_{ij}$ a lower value; this can be a limitation of the method.
\section{Conclusion}
This paper incorporated the issue of route safety into the vehicle routing process of a road freight company, in order to help the decision maker select the best routes, minimizing both distance and the risk of accidents.
A method was developed that uses statistical analysis to estimate the accident probability of each arc and Monte Carlo simulation to estimate the costs related to accident risk. The results obtained were coherent, converging to what was expected for the problem.
Through a VRP mathematical model, logistics and risk costs were minimized; the results were analyzed for each part of the objective function and for the routes obtained under different values of the risk coefficient $\alpha$.
According to the outcomes displayed in Figure \ref{figure9}, as the safety factor increases the risk-related costs decrease, most noticeably for $0.10 \leq \alpha \leq 0.20$. Comparing the routes also showed that the model progressively incorporated safer arcs into its solutions.
A limitation observed in the proposed method is that two trips passing through the same road can have different accident probabilities; this happens because on one trip the truck stays on the more dangerous road longer than on the other, which interferes with the estimation process.
Nevertheless, the verification of the accident probabilities and risk costs shows that the method worked well, and it can help the decision maker of a road freight company select the best routes considering distance and accident risk. In addition, the whole approach developed here is simple, adaptable to any VRP model, and can be used by any company free of charge.
Finally, the \textit{Knime Analytics Platform} helped to handle the real data used in this paper, simplifying data exploration, analysis, visualization and interpretation.
For future work, it would be relevant to develop an approach that also considers the risk of load theft in the VRP, again based on statistical analysis. Moreover, the accident probability and the risk of theft may vary during the day, since variables such as traffic flow change over time; modeling this would help road freight companies and load insurers decide at what time a truck should pass along a given road.
\bibliographystyle{splncs04}
\section{Introduction.}\label{sec:introduction}
The graph automorphism problem asks whether a given input graph has a non-trivial automorphism. In other words the task is to decide whether a given graph is asymmetric. This computational problem is typically seen in the context of the graph isomorphism problem, which is itself equivalent under polynomial-time Turing reductions to the problem of computing a generating set for all automorphisms of a graph~\cite{DBLP:journals/ipl/Mathon79}. As a special case of the latter, the graph automorphism problem obviously reduces to the graph isomorphism problem. However, no reduction from the graph isomorphism to the graph automorphism problem is known.
In fact, while many computational problems surrounding structural equivalence of combinatorial objects can all be Turing-reduced to one another, the relationship between the graph automorphism and the graph isomorphism problem remains a repeatedly posed open question (see for example \cite{DBLP:journals/iandc/AgrawalA96, DBLP:journals/corr/AllenderGM15, Ghosh2014,MR1232421}).
With Babai's new ground-breaking algorithm~\cite{DBLP:conf/stoc/Babai16} that solves the graph isomorphism problem and thereby also the graph automorphism problem in quasi-polynomial time, the question arises whether it is possible to go further and devise a polynomial-time algorithm. For such an endeavor to succeed, special cases such as the group isomorphism and the tournament isomorphism problem, for which the currently fastest algorithms have a running time of~$n^{O(\log n)}$, should also be solvable in polynomial time. Tournaments, which are graphs in which between every pair of vertices there exists exactly one directed edge, also have an automorphism problem associated with them, asking whether a given tournament is asymmetric\footnote{Many publications in the context of graph isomorphism use the term rigid graph. However, the literature is inconsistent on the notion of a rigid graph, which can for example refer to having no non-trivial automorphism or no non-trivial endomorphism. We will use the notion asymmetric, which only ever means the former. Furthermore, we suggest the name graph asymmetry problem over graph automorphism problem, so as not to confuse it with the computational problem to compute the automorphism group.}.
Again, for this problem the currently best running time is~$n^{O(\log n)}$ and analogously to general graphs there is a simple reduction from the automorphism problem to the isomorphism problem, but no reverse reduction has been known.
In this paper we show that there is a randomized polynomial-time Turing reduction from the tournament isomorphism problem to the tournament automorphism problem. This is the first such reduction for any kind of combinatorial object (apart from polynomial-time solvable cases of course).
The main new technical tool that we develop in the first part of the paper is a technique to exploit an oracle to the graph automorphism problem in order to obtain a non-trivial automorphism-invariant partition of a graph that is finer than the orbit partition (Sections~\ref{sec:sampling:subsets}--\ref{sec:sampling:minimimal:orbits}). We call the parts of such a partition suborbits. This technique is essentially applicable to all graph classes, not just tournaments. It hinges on a method to extract a characteristic subset from a random source that repeatedly samples from a set of elements. Here we say that a set is characteristic if it is a union of level sets of the probability function.
In the second part of the paper we show that, for tournaments, access to suborbits suffices to compute automorphism groups (Section~\ref{sec:auto:group:from:suborbits}). For this we adapt the group-theoretic divide and conquer approach of Luks~\cite{DBLP:journals/jcss/Luks82} to our situation. In this second part we exploit that the automorphism group of tournaments is solvable and we leave it as an open question whether something similar can be forged that is applicable to the group isomorphism problem (see Section~\ref{sec:open:prob}).
It might be worth noting that the techniques actually do not use any of the new structural insights from the quasi-polynomial-time algorithm of~\cite{DBLP:conf/stoc/Babai16}. Rather, the randomized sampling idea is heavily based on an older practical randomized algorithm designed to quickly detect non-isomorphism~(\cite{DBLP:conf/alenex/KutzS07,SchweitzerThesis}). It appears to be one of the few cases where randomization helps to derive a theoretical result for an isomorphism problem. We also borrow some ideas from a paper of Arvind, Das, and Mukhopadhyay concerned with tournament canonization~\cite{DBLP:journals/jcss/ArvindDM10}.
The necessity for randomization to obtain theoretical results in the context of isomorphism checking appears to be quite rare. The earliest result exploiting randomization seems to go to back to Babai~\cite{BabaiRandom} and is a randomized algorithm for checking isomorphism of graphs of bounded color class size. However that algorithm is actually a Las Vegas algorithm (an algorithm that does not make errors), and in the meantime deterministic algorithms are available~\cite{DBLP:conf/focs/FurstHL80}. However, for the new reduction in this paper it seems unclear how to remove the use of randomization and even how to remove the possibility for errors.
\subsection{Related work:} With respect to related work, we focus on results concerning graph automorphism as well as results concerning tournaments and refer the reader to other texts (for example~\cite{MR1373683,DBLP:conf/stoc/Babai16,MR1232421, DBLP:journals/jsc/McKayP14, DBLP:conf/stacs/Schweitzer15}) for a general introduction to the graph isomorphism problem, current algorithms and overviews over complexity theoretic results.
\emph{(Tournament automorphism)} Let us start by highlighting two results specifically concerned with the tournament automorphism problem. Arvind, Das, and Mukhopadhyay~\cite{DBLP:journals/jcss/ArvindDM10} show that if tournament isomorphism is polynomial-time solvable then tournament canonization can be reduced in polynomial time to canonization of asymmetric tournaments. This implies now, with the result of the current paper, that from a canonization algorithm for asymmetric tournaments we can obtain a randomized canonization algorithm for tournaments in general. (In other words, the main theorem of our paper transfers to canonization.)
On the hardness side, Wagner~\cite{DBLP:conf/mfcs/Wagner07,WagnerThesis} shows that tournament automorphism is hard for various circuit complexity classes ($\LanguageTYPESET{NL}$, $\LanguageTYPESET{C_=L}$, $\LanguageTYPESET{PL}$, $\LanguageTYPESET{DET}$, $\LanguageTYPESET{MOD_kL}$) under $\LanguageTYPESET{AC^0}$ reductions.
\emph{(Graph automorphism)} A lot of information on the complexity of graph automorphism can be found in the book by K{\"o}bler, Sch{\"o}ning, and Tor{\'a}n~\cite{MR1232421}. Concerning hardness of the automorphism problem, improving previous results of Tor{\'{a}}n~\cite{DBLP:journals/siamcomp/Toran04}, Wagner shows hardness results for graphs of bounded maximum degree~\cite{DBLP:conf/sofsem/Wagner08,WagnerThesis}. Agrawal and Arvind show truth table equivalence of several problems related to graph automorphism~\cite{DBLP:journals/iandc/AgrawalA96} and Arvind, Beigel, and Lozano study modular versions of graph automorphism~\cite{DBLP:journals/siamcomp/ArvindBL00} which for~$k\in\mathbb{N}$ ask whether the number of automorphisms of a given graph is divisible by~$k$.
The graph automorphism problem is of interest in quantum computing since it can be encoded as a hidden shift problem, as opposed to the graph isomorphism problem that is only known to be encodable as a hidden subgroup problem~\cite{DBLP:journals/qic/ChildsW07,DBLP:journals/siamcomp/HallgrenRT03}.
Recently, Allender, Grochow, and Moore~\cite{DBLP:journals/corr/AllenderGM15} developed a zero-error randomized reduction from graph automorphism to~$\LanguageTYPESET{MKTP}$, the problem of minimizing time-bounded Kolmogorov complexity, a variant of the minimum circuit size problem.
In that paper they also extend this to a bounded-error randomized reduction from graph isomorphism to~$\LanguageTYPESET{MKTP}$.
\emph{(Tournament isomorphism)} Concerning the tournament isomorphism problem, the currently fastest algorithm~\cite{DBLP:conf/stoc/BabaiL83} has a running time of~$n^{O(\log n)}$.
With respect to hardness, Wagner's results for tournament automorphism also apply to tournament isomorphism~\cite{DBLP:conf/mfcs/Wagner07}.
Ponomarenko showed that isomorphism of cyclic tournaments can be decided in polynomial time~\cite{Ponomarenko1994}, where a cyclic tournament is a tournament that has an automorphism that is a permutation with a single cycle spanning all vertices. Furthermore he showed that isomorphism of Schurian tournaments can be decided in polynomial time~\cite{Ponomarenko2013}.
\section{Sampling characteristic subsets.}\label{sec:sampling:subsets}
Let~$M$ be a finite set. We define a \emph{sampler~$\str$} over~$M$ to be a probability measure~$\Pr_{\str}\colon M\rightarrow [0,1]$ on the elements of~$M$. We think of a sampler as an oracle that we can invoke in order to obtain an element of~$M$. That is, given a sampler, we can sample a sequence of elements~$m_1,\ldots,m_t$ where each~$m_i$ is sampled independently from~$M$ according to~$\Pr_{\str}$.
We call a subset~$M'$ of~$M$ \emph{characteristic} with respect to~$\str$ if for all~$m,m'\in M$ it holds that~$m\in M'$ and~$\Pr_{\str}(m') = \Pr_{\str}(m)$ implies~$m'\in M'$. Another way of formulating this condition is that~$M'$ is invariant under all probability-preserving bijections~$\varphi\colon M \rightarrow M$, that is, those bijections that satisfy~$\Pr_{\str}(m) = \Pr_{\str}(\varphi(m))$ for all~$m\in M$.
When considering sampling algorithms we will not assume that we know the size of the set~$M$.
Our goal is to repeatedly invoke a sampler over~$M$ so as to find a characteristic subset. The main difficulty in this is that we can never precisely determine the probability~$\Pr_{\str}(m)$ of an element~$m$. Indeed, the only thing we can hope for is to get a good estimate for such a probability. The following lemma indicates that this might be helpful since the set of probabilities cannot be arbitrarily dense.
\begin{lemma}\label{lem:some:empty:interval}
Let~$\Pr_{\str}$ be a discrete probability measure on the set~$M$. Let~$P = \{\Pr_{\str}(m)\mid m\in M\}$ be the set of probabilities that occur. For every positive integer~$i$ there is a~$j\in \{6i+1,\ldots,8i\}$ such that~$[ (j-1/4)/(8i^2),(j+1/4)/(8i^2)]\cap P = \emptyset$.
\end{lemma}
\begin{proof}
Suppose for all~$j\in \{6i+1,\ldots,8i\}$ there is some~$m_j$ with~$\Pr_{\str}(m_j) \in [ (j-1/4)/(8i^2),(j+1/4)/(8i^2)]$. Then~$\Pr_{\str}(m_j)\neq \Pr_{\str}(m_{j'})$ whenever~$j\neq j'$, implying in particular~$m_j \neq m_{j'}$. This yields~$2i$ distinct elements~$m_j$. Furthermore~$\Pr_{\str}(m_j)> 3/(4i)$ for all~$j\in \{6i+1,\ldots,8i\}$.
Thus~$\Pr_{\str}(\{m_j\mid j\in\{6i+1,\ldots,8i\}\}) > 2i \cdot 3/(4i) >1$ yielding a contradiction.
\end{proof}
Using the lemma we can design an algorithm that, with high probability, succeeds at determining a characteristic set.
\begin{theorem}\label{thm:invariant:sampling}
There is a deterministic
algorithm that, given~$\varepsilon>0$ and given access to a sampler~$S$ over an unknown set~$M$ of unknown size, runs in expected time polynomial in~$1/(\max_{m\in M}{\Pr_S(m)}) \leq |M|$ and~$\ln{1/{\varepsilon}}$ and outputs a non-empty subset of~$M$ that is characteristic with probability~$1-\varepsilon$.
\end{theorem}
\begin{proof}
Let~$p = \max_{m\in M}{\Pr_S(m)}$ and
let~$i = \lceil 1/p\rceil$. First note that~$|M|\geq 1/p$ and that~$i \leq 2/p \leq 2|M|$. Let~$P = \{\Pr_S(m) \mid m\in M\}$ be the set of values that occur as probabilities of elements in~$M$.
The idea of the proof is to sample many times as to get good estimates for probabilities using Chernoff bounds and then to include in the output all elements with a probability above a certain threshold.
The main difficulty of the proof arises from the fact that~$p$ is not known to the algorithm. We first describe an algorithm for the situation in which~$p$ is known, formulated in such a way that it can be adapted in the end.
We start by sampling~$T= \max\{\left\lceil i^3 2^{17} (\ln{1/{\varepsilon'}})\right\rceil, \left\lceil i^3 2^{18} (\ln{1/{\varepsilon'}})\right\rceil^2\}$
elements~$m_1,\ldots,m_T$ from the sampler, where we set~$\varepsilon' = \min\{1/e,\varepsilon/8\}$.
We then compute for each appearing element~$m_k$ a probability estimator~$\#(m_k)$ for its probability by computing~$N(m_k)/T$ where~$N(m_k)$ is the number of times that element~$m_k$ has been sampled. Let~$Q= \{\#(m_k)\mid k\in \{1,\ldots, T\}\}$ be the set of probability estimators. Let~$\ell$ be the smallest number in~$\{6i+1,\ldots,8i\}$ such that~$[ (\ell-1/8)/(8i^2),(\ell+1/8)/{(8i^2)}]\cap Q=\emptyset$. If no such element exists, we declare the algorithm as failed.
Otherwise, we output~$M' = \{m_k \mid \#(m_k)> \ell/(8i^2)\}$. We call~$\ell$ the \emph{cut-off}.
\bigskip
We analyze the probability that this algorithm succeeds in computing a characteristic subset. For this, let us define~$\#(x) = 0$ for~$x\in M$ whenever~$x$ does not appear among the sampled elements.
\begin{claim}\label{claim:prob}
For each element~$x\in M$, the probability that~$|\#(x)-\Pr_S(x)| \geq 1/(2^7 i^2)$ is at most~$2e^{-\frac{T}{2^{17} i^3}} \leq 2\varepsilon'$.
\end{claim}
\proof
Consider an experiment where we sample~$T$ elements according to~$S$. We want to bound the probability that the observed~$T\cdot \#(x)$ deviates from its expected value~$\mu \coloneqq T\cdot \Pr_{\str}(x)$ by at least~$T/(2^7i^2)$.
This deviation is at least~$\delta\mu $ if we set~$\delta \coloneqq \frac{1}{2^7 i^2 \Pr_{\str}(x)} >0 $.
We can thus use the Chernoff bound (see~\cite[Corollary A.15, Page 515]{DBLP:books/daglib/0023084}) and conclude that the probability that
$|\#(x)-\Pr_S(x)| \geq 1/(2^7 i^2)$
is at most
\[2e^{-\mu \min\{\delta^2/4,\delta/2\} } \leq
2e^{\left(-T\min\{\frac{1}{2^{16} i^4 \Pr_{\str}(x)},\frac {1}{2^8 i^2}\} \right)}
\leq 2e^{\left(\frac{-T}{\max\{2^{17} i^3,2^8 i^2 \} }\right)} \leq
2e^{\left(\frac{-T}{ 2^{17} i^3}\right)},\]
where the second inequality uses the fact~$\Pr_{\str}(x) \leq p = 2/(2/p) \leq 2/\lceil 1/p\rceil =2/i$.
\hfill$\lrcorner$
Define~$A_k$ as the event that for the~$k$-th sampled element~$m_k$ we have~$|\#(m_k) - \Pr_S(m_k)| \geq 1/(2^6 i^2)$. Thus the event~$A_k$ happens if~$\#(m_k)$ deviates excessively from its expected value.
\begin{claim}[resume]\label{claim:Ak}
The probability that there is a~$k\in\{1,\ldots,T\}$ such that event~$A_k$ occurs is at most~$2\varepsilon'$.
\end{claim}
\proof
To bound the probability of event~$A_k$, we first consider~$\Pr(A_k \mid m_k = x)$, the probability of~$A_k$ under the condition that the~$k$-th sampled element~$m_k$ is equal to~$x$ for some fixed element~$x\in M$. Considering that we already know that~$m_k = x$ we need to consider an experiment where we sample~$T-1$ times independently from~$S$ and count the number of times we obtain element~$x$. This number is then~$N(m_k)-1$ since the item with number~$k$ itself adds one to the count of elements equal to~$x$.
If~\[\#(m_k) = N(m_k)/T \notin [ \textstyle{\Pr_S(m_k)}-1/(2^6 i^2),\textstyle{\Pr_S(m_k)}+1/(2^6 i^2)]\] then~\[(N(m_k)-1)/(T-1)
\notin [ \textstyle{\Pr_S(m_k)}-1/(2^7 i^2),\textstyle{\Pr_S(m_k)}+1/(2^7 i^2)],\] as shown by the simple fact that for positive integers~$2\leq a\leq b$ we have~$|a/b - (a-1)/(b-1)| \leq 1 / (b-1)$ and~$1/(2^7 i^2 ) \geq 1/(T-1)$.
Thus in our experiment with~$T-1$ trials, the observed value~$N(m_k)-1$ must deviate from its expected value~$\mu \coloneqq (T-1) \Pr_{\str}(x)$ by at least~$ (T-1)/(2^7 i^2)$.
From the previous claim we obtain an upper bound of\[2e^{-\frac{(T-1)}{ 2^{17} i^3}} \leq 2e^{-\frac{T}{ 2^{18} i^3}},\]
where the inequality uses the fact that~$T\geq 2$.
Since this bound is independent of~$x\in M$ and since~$x$ was arbitrary, the bound is also an upper bound for~$\Pr(A_k)$.
By the union bound and using~$T\geq \left\lceil i^3 (\ln{1/{\varepsilon'}}) 2^{18} \right\rceil^2$, we obtain that the probability that there is a~$k\in\{1,\ldots,T\}$ such that~$A_k$ happens is at most~\[T \cdot 2e^{-\frac{T}{2^{18} i^3}}\leq T2e^{-\sqrt{T}} {\varepsilon'} \leq 2{\varepsilon'},\]
where the last inequality follows since~$t^2e^{-t} <1$ for~$t\geq 1$.
\hfill$\lrcorner$
\begin{claim}[resume]\label{claim:failed}
If the algorithm is declared as failed then~$A_k$ occurs for some~$k\in \{1,\ldots,T\}$.
\end{claim}
\proof
By Lemma~\ref{lem:some:empty:interval} there is an integer~$j\in \{6i+1,\ldots,8i\}$ such that~$\Pr_{\str}(m)\notin [(j-1/4)/(8i^2),(j+1/4)/(8i^2)]$ for all~$m\in M$. Define~$B_k$ as the event that for the~$k$-th sampled element~$m_k$ we have~$\#(m_k) \in [ (j-1/8)/(8i^2),(j+1/8)/(8i^2)]$. The algorithm can only be declared a failure if event~$B_k$ happens for some~$k\in \{1,\ldots,T\}$. However, the event~$B_k$ implies the event~$A_k$.
\hfill$\lrcorner$
\begin{claim}[resume]\label{claim:empty}
Assuming the algorithm is not declared a failure, the probability that~$M'$ is empty is at most~$2\varepsilon'$.
\end{claim}
\proof
Since~$i\geq \lceil 1/p\rceil $ there is an element~$x\in M$ with~$\Pr_{\str} (x) \geq 1/i \geq \ell/{(8i^2)}$. Then the probability that~$\#(x) < (\ell-1/8)/{(8i^2)} \leq (8i-1/8)/(8i^2) = (1-1/(64i))/i\leq (1-1/(64i)) \Pr_{\str} (x)$ is at most~$2\varepsilon'$ by Claim~\ref{claim:prob}.
\hfill$\lrcorner$
\begin{claim}[resume]\label{claim:canonical}
If~$M'$ is not characteristic then~$A_k$ occurs for some~$k\in \{1,\ldots,T\}$ with probability at least~$(1-\varepsilon')$.
\end{claim}
\proof
By~Claim~\ref{claim:failed} we can assume that the algorithm was not declared a failure. Note that, if~$M'$ is not characteristic, then one of the following three things happens: there is an element~$m_k$ with~$\#(m_k)> \ell/(8i^2)$ but~$\Pr_S(m_k)\leq \ell/(8i^2)$, or there is an element~$m_k$ with~$\#(m_k)\leq \ell/(8i^2)$ but~$\Pr_S(m_k)> \ell/(8i^2)$, or~$\#(x) = 0$ for an element~$x$ with~$\Pr_S(x)> \ell/(8i^2)$. However, by the choice of~$\ell$, we know that~$\#(m_k)\notin [ (\ell-1/8)/(8i^2),(\ell+1/8)/(8i^2)]$. Thus in the first two cases we conclude that event~$A_k$ occurs.
The third option is that~$\#(x) = 0$ for an element~$x$ with~$\Pr_S(x)> \ell/(8i^2)$. There are at most~$8i^2/\ell<8i^2/(6i)= 4i/3$ elements~$x$ with such a probability and for each the probability that~$\#(x) = 0$ is at most~$(1-\ell/(8i^2))^T\leq (1-3/(4i))^T\leq \varepsilon' 3/(4i)$. So by the union bound we obtain a total probability of at most~$\varepsilon'$.
\hfill$\lrcorner$
Combining the claims we obtain that the algorithm fails with probability at most~$2\varepsilon'+ 2\varepsilon' +\varepsilon' \leq 5\varepsilon' \leq \varepsilon$.
Until this point we have assumed that the value of~$p$ is known to the algorithm. To remedy this we repeatedly run the algorithm with a simple doubling technique. In each iteration we run the described algorithm assuming that~$1/p\in [i,2i]$. Here we sample~$T$ elements of~$M$. In the next iterations we replace~$i$ by~$2i$ and repeat. We also replace~${\varepsilon'}$ by~${\varepsilon'} /2$. Since~$p\geq 1/|M|$, the number of iterations is logarithmic in~$1/p$.
The total number of sampled items is at most twice the number of items sampled in the last round. Thus, overall we obtain an algorithm with expected polynomial time.
To ensure that we obtain a suitable error bound it suffices to note that the probabilities of Claims~\ref{claim:prob},~\ref{claim:Ak} and~\ref{claim:canonical} actually decrease when~$i$ is replaced by an arbitrary smaller number. Skipping the first round, we obtain an error of at most~$5\varepsilon'/2 + 5\varepsilon'/4 +5\varepsilon'/8 +\ldots \leq \varepsilon$.
(Note that this argument in particular comprises the fact that if in an iteration a set is being output by the algorithm it is still characteristic with sufficiently high probability.)
\end{proof}
We note several crucial observations about any algorithm solving the problem just described.
There is no algorithm that for every set~$M$ and sampler~$S$ always outputs the same set~$M'$ with high probability.
Indeed, consider the set~$M = \{a,b\}$. Choosing~$\Pr_S (a) = \Pr_S(b) = 1/2$ means that~$M'$ must be~$\{a,b\}$.
Choosing~$\Pr_S(a) = 1$ and~$\Pr_{\str}(b) = 0$ implies that~$M'$ must be~$\{a\}$. However, there is a continuous deformation between these two samplers, while the possibilities for the set~$M'$ are discrete. It is not difficult to see that the probability distribution of the output set~$M'$ must be continuous in the space of samplers, and thus, whatever the algorithm may be, there must be samplers for which the algorithm sometimes outputs~$\{a\}$ and sometimes outputs~$\{a,b\}$.
Let us also remark that the analysis of the running time of the algorithm is certainly far from optimal. In particular a large constant of~${(2^{18})}^2$ arises only from the goal to keep the computations simple and the desire to have a bound that also holds for small values of~$|M|$.
Once one is interested in small running times, one might even ask whether it is possible to devise an algorithm running in time sublinear in~$|M|$. However, recalling the coupon collector theorem and considering uniform samplers one realizes that one cannot expect to make do with~$o(|M|\log |M|)$ samplings. However, if the set~$M$ is of algebraic nature, for example forms a group, then there might be meaningful ways to sample characteristic substructures (see Section~\ref{sec:open:prob}).
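To illustrate the core of the procedure, the following is a minimal sketch of a single round of the algorithm from Theorem~\ref{thm:invariant:sampling} for a fixed guess~$i$ of~$\lceil 1/p\rceil$; the constant in the sample size is illustrative and much smaller than the one used in the analysis, \texttt{sample} stands for one invocation of the sampler~$S$, and the elements of~$M$ are assumed hashable.
\begin{verbatim}
import math
from collections import Counter

def characteristic_subset(sample, i, eps, c=32):
    """One round of the cut-off algorithm, assuming 1/p is roughly i.

    sample -- a zero-argument function returning one element of M.
    Returns a subset of M, or None when the round is declared failed.
    """
    T = max(1, math.ceil(c * i**3 * math.log(1 / eps)))
    counts = Counter(sample() for _ in range(T))
    est = {m: cnt / T for m, cnt in counts.items()}  # estimators #(m)

    # find the smallest cut-off l in {6i+1, ..., 8i} whose surrounding
    # interval contains no estimator (Lemma 2.1 guarantees such a gap
    # exists for the true probabilities)
    for l in range(6 * i + 1, 8 * i + 1):
        lo = (l - 0.125) / (8 * i**2)
        hi = (l + 0.125) / (8 * i**2)
        if not any(lo <= e <= hi for e in est.values()):
            return {m for m, e in est.items() if e > l / (8 * i**2)}
    return None  # failed; the caller doubles i, halves eps and retries
\end{verbatim}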
\section{Gadget constructions for asymmetric tournaments}\label{sec:gadget:constructs}
There are several computational problems fundamentally related to the graph isomorphism problem. This relation manifests formally as polynomial-time Turing (or even many-one) reductions between the computational tasks. Such reductions are typically based on gadget constructions which we revisit in this section.
While the \emph{graph isomorphism problem}~$\LanguageTYPESET{GI}$ asks whether two given graphs are isomorphic, in the search version of this decision problem an explicit isomorphism is to be found, whenever one exists.
The \emph{graph automorphism problem}~$\LanguageTYPESET{GA}$ asks whether a given graph has a non-trivial automorphism (i.e., an automorphism different from the identity). In other words the task is to decide whether the given graph is asymmetric. Two other related problems are the task~$\LanguageTYPESET{AUT}$ to determine generators for the automorphism group~$\Aut(G)$ and the task to determine the size of the automorphism group~$|\Aut(G)|$.
For all named problems there is a colored variant, where the given graphs are vertex colored and isomorphisms are restricted to be color preserving. We denote the respective problems by~$\LanguageTYPESET{col\text{-}GI}$,~$\LanguageTYPESET{col\text{-}GA}$ and~$\LanguageTYPESET{col\text{-}AUT}$.
It is well known that between all these computational problems -- except~$\LanguageTYPESET{GA}$ -- there are polynomial-time Turing reductions (we refer for example to \cite{boothcolbourn},~\cite{MR1232421}, \cite{DBLP:journals/ipl/Mathon79}). Concerning the special case of~$\LanguageTYPESET{GA}$, while there is a reduction from~$\LanguageTYPESET{GA}$ to the other problems, a reverse reduction is not known.
The reductions are typically stated for general graphs, but many of the techniques are readily applicable to restricted graph classes. By a \emph{graph class} we always mean a collection of possibly directed graphs closed under isomorphism. The \emph{isomorphism problem for graphs in~$\mathcal{C}$}, denoted~$\LanguageTYPESET{GI}_\mathcal{C}$, is the computational task to decide whether two given input graphs from~$\mathcal{C}$ are isomorphic. If one of the input graphs is not in~$\mathcal{C}$ the answer of an algorithm may be arbitrary, in fact the algorithm may even run forever.
Analogously, for each of the other computational problems that we just mentioned, we can define a problem restricted to~$\mathcal{C}$, giving us for example~$\LanguageTYPESET{GA}_\mathcal{C}$,~$\LanguageTYPESET{AUT}_\mathcal{C}$ and the colored versions~$\LanguageTYPESET{col\text{-}GI}_\mathcal{C}$,~$\LanguageTYPESET{col\text{-}GA}_\mathcal{C}$, and~$\LanguageTYPESET{col\text{-}AUT}_\mathcal{C}$.
As remarked in~\cite{DBLP:journals/jcss/ArvindDM10}, most of the reduction results for general graphs transfer to the problems for a graph class~$\mathcal{C}$ if one has, as essential tool, a reduction from~$\LanguageTYPESET{col\text{-}GI}_\mathcal{C}$ to~$\LanguageTYPESET{GI}_\mathcal{C}$.
\begin{theorem}[Arvind, Das, Mukhopadhyay~\cite{DBLP:journals/jcss/ArvindDM10}]\label{thm:reductios:relative:to:class} Suppose that for a graph class~$\mathcal{C}$ there is a polynomial-time many-one reduction from $\LanguageTYPESET{col\text{-}GI}_\mathcal{C}$ to~$\LanguageTYPESET{GI}_\mathcal{C}$ (i.e., $\LanguageTYPESET{col\text{-}GI}_\mathcal{C} \leq_m^p \LanguageTYPESET{GI}_\mathcal{C}$)\footnote{Let us remark for completeness that a Turing reduction assumption~$\LanguageTYPESET{col\text{-}GI}_\mathcal{C} \leq_T^p \LanguageTYPESET{GI}_\mathcal{C}$ actually suffices for the theorem.}. Then
\begin{enumerate}
\item $\LanguageTYPESET{GA}_\mathcal{C}$ polynomial-time Turing-reduces to~$\LanguageTYPESET{GI}_\mathcal{C}$ (i.e.,~$\LanguageTYPESET{GA}_\mathcal{C} \leq_T^p \LanguageTYPESET{GI}_\mathcal{C}$),
\item The search version of~$\LanguageTYPESET{GI}_\mathcal{C}$ polynomial-time Turing-reduces to the decision version of~$\LanguageTYPESET{GI}_\mathcal{C}$, and
\item $\LanguageTYPESET{AUT}_\mathcal{C}$ polynomial-time Turing-reduces to~$\LanguageTYPESET{GI}_\mathcal{C}$ (i.e.,~$\LanguageTYPESET{AUT}_\mathcal{C} \leq_T^p \LanguageTYPESET{GI}_\mathcal{C}$).\label{item:aut:to:gi}
\end{enumerate}
\end{theorem}
In this paper we are mainly interested in two classes of directed graphs, namely the class of tournaments~$\LanguageTYPESET{Tour}$ and the class of asymmetric tournaments~$\LanguageTYPESET{AsymTour}$. For the former graph class, a reduction from the colored isomorphism problem to the uncolored isomorphism problem is given in~\cite{DBLP:journals/jcss/ArvindDM10}.
\begin{theorem}[Arvind, Das, Mukhopadhyay~\cite{DBLP:journals/jcss/ArvindDM10}]\label{thm:removing:colors:for:tour:iso}
The colored tournament isomorphism problem is polynomial-time many-one reducible to the (uncolored) tournament isomorphism problem (i.e.,~$\LanguageTYPESET{col\text{-}GI}_\mathcal{\LanguageTYPESET{Tour}} \leq_m^p \LanguageTYPESET{GI}_\mathcal{\LanguageTYPESET{Tour}}$).
\end{theorem}
However, for our purposes we also need the equivalent statement for asymmetric tournaments.
Taking a closer look at the reduction described in \cite{DBLP:journals/jcss/ArvindDM10} yields the desired result. In fact it also shows that the colored asymmetry problem reduces to the uncolored asymmetry problem. Denoting for a graph class~$\mathcal{C}$ by~$\LanguageTYPESET{Asym}\mathcal{C}$ the class of those graphs in~$\mathcal{C}$ that are asymmetric (i.e., have a trivial automorphism group), we obtain the following.
\begin{lemma}
\label{lem:col:tour:iso:to:tour:asym}
\begin{enumerate}
\item The isomorphism problem for colored asymmetric tournaments is polynomial-time many-one reducible to the isomorphism problem for (uncolored) asymmetric tournaments (i.e.,~$\LanguageTYPESET{col\text{-}GI}_\mathcal{\LanguageTYPESET{AsymTour}} \leq_m^p \LanguageTYPESET{GI}_\mathcal{\LanguageTYPESET{AsymTour}}$).
\item The colored tournament asymmetry problem is polynomial-time many-one reducible to the (uncolored) tournament asymmetry problem (i.e.,~$\LanguageTYPESET{col\text{-}GA}_\mathcal{\LanguageTYPESET{Tour}} \leq_m^p \LanguageTYPESET{GA}_\mathcal{\LanguageTYPESET{Tour}}$).
\end{enumerate}
\end{lemma}
\begin{proof}[Proof sketch]
In~\cite{DBLP:journals/jcss/ArvindDM10} given two colored tournaments~$T_1$ and~$T_2$, a gadget construction is described that adds new vertices to each tournament yielding~$T_1'$ and~$T_2'$ so that~$T_1\cong T_2 \Leftrightarrow T'_1\cong T'_2$.
The authors show that every automorphism of~$T'_i$ fixes the newly added vertices. However, from the construction it is clear that~$T_i$ is asymmetric if and only if~$T'_i$ is asymmetric, since all vertices that are added must be fixed by every automorphism. This demonstrates both parts of the lemma.
We sketch a gadget construction that achieves these properties and leave the rest to the reader. For each~$i\in \{1,2\}$ the construction is as follows. Suppose without loss of generality that the colors of~$T_i$ are~$\{1,\ldots,\ell\}$ with~$\ell\geq 2$.
We add a directed path~$u_1\rightarrow \ldots\rightarrow u_{\ell}$ to the graph. A vertex~$v\in V(T_i)$ has~$u_j$ as in-neighbor if~$j$ is the color of~$v$. Otherwise~$u_j$ is an out-neighbor of~$v$. We add two more vertices~$a$ and~$b$ to the graph. The only out-neighbor of vertex~$a$ is~$b$. The in-neighbors of~$b$ are the vertices in~$\{a,u_1,\ldots,u_{\ell}\}$. It can be shown that~$a$ is the unique vertex with maximum in-degree. This implies that~$b$ and thus all~$u_j$ are fixed by all automorphisms.
\end{proof}
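For concreteness, a possible implementation of this gadget on adjacency matrices could look as follows; the orientation between non-consecutive path vertices, which the sketch above leaves open, is chosen transitively here as an assumption, and all names are illustrative.
\begin{verbatim}
def encode_colors(adj, color, l):
    """Sketch of the gadget above: encode the colors {1,...,l} of a
    tournament into an uncolored tournament.  adj[v][w] is True iff
    the edge between v and w points from v to w."""
    n = len(adj)
    u = list(range(n, n + l))        # path vertices u_1, ..., u_l
    a, b = n + l, n + l + 1
    m = n + l + 2
    t = [[False] * m for _ in range(m)]
    for v in range(n):
        for w in range(n):
            t[v][w] = adj[v][w]      # keep the original tournament
    for j in range(l):
        for jp in range(j + 1, l):
            t[u[j]][u[jp]] = True    # transitive path u_1 -> ... -> u_l
        t[u[j]][a] = True            # a's only out-neighbor is b, ...
        t[u[j]][b] = True            # ... b's in-neighbors are a, u_1..u_l
        for v in range(n):
            if color[v] == j + 1:
                t[u[j]][v] = True    # u_j is an in-neighbor of v
            else:
                t[v][u[j]] = True
    t[a][b] = True
    for v in range(n):
        t[v][a] = True               # every original vertex beats a
        t[b][v] = True
    return t
\end{verbatim}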
As mentioned above, reductions for computational problems on general graphs can often be transferred to the equivalent problems restricted to a graph class~$\mathcal{C}$. However, let us highlight a particular reduction where this is not the case. Indeed, it is not clear how to transfer the reduction from~$\LanguageTYPESET{GI}$ to~$\LanguageTYPESET{AUT}$ (which involves taking unions of graphs) to a reduction from~$\LanguageTYPESET{GI}_\mathcal{C}$ to~$\LanguageTYPESET{AUT}_\mathcal{C}$, even when provided a reduction of~$\LanguageTYPESET{col\text{-}GA}_\mathcal{C}$ to~$\LanguageTYPESET{GI}_\mathcal{C}$.
For the class of tournaments however, we can find such a reduction, of which we can make further use.
\begin{lemma}\label{lem:asym:iso:red:to:asym}
\begin{enumerate}
\item The isomorphism problem for tournaments polynomial-time Turing-reduces to
the task to compute a generating set for the automorphism group of a tournament (i.e., $\LanguageTYPESET{col\text{-}GI}_\mathcal{\LanguageTYPESET{Tour}} \leq_T^p \LanguageTYPESET{AUT}_\mathcal{\LanguageTYPESET{Tour}}$). \label{item:gi:to:aut}
\item The isomorphism problem for colored asymmetric tournaments is polynomial-time many-one reducible to tournament asymmetry (i.e., $\LanguageTYPESET{col\text{-}GI}_\mathcal{\LanguageTYPESET{AsymTour}} \leq_m^p \LanguageTYPESET{GA}_\mathcal{\LanguageTYPESET{Tour}}$). \label{item:asym:iso:to:asym}
\item The search version of the isomorphism problem for colored asymmetric tournaments Turing-reduces to tournament asymmetry.\label{item:asym:iso:to:asym:search}
\end{enumerate}
\end{lemma}
\begin{figure}
\centering
\scalebox{0.9}{
\begin{tikzpicture}
\node[circle, draw, inner sep=10, outer sep =3] (T1) at (-0,2) {$T_1$};
\node[circle, draw, inner sep=10, outer sep =3] (T2) at (-4,0) {$T_2$};
\node[circle, draw, inner sep=10, outer sep =3] (T1p) at (-0,-2) {$T'_1$};
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T1.290) -- (T1p.70);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T1.270) -- (T1p.90);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T1.250) -- (T1p.110);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T1p.160) -- (T2.-30);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T1p.180) -- (T2.-50);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T1p.140) -- (T2.-10);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T2.30) -- (T1.-160);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T2.50) -- (T1.-180);
\draw[transform canvas={yshift=0.3ex},->, ultra thick] (T2.10) -- (T1.-140);
\node at (2.5,-2) {$T_1\cong T_2$};
\end{tikzpicture}}
\caption{A visualization of the triangle tournament~$\mathrm{Tri}(T_1,T_2)$.}
\label{fig:tri}
\end{figure}
\begin{proof}
Suppose we are given two tournaments~$T_1$ and~$T_2$ on the same number of vertices~$n$ for which isomorphism is to be decided. By Theorem~\ref{thm:removing:colors:for:tour:iso} we can assume that the tournaments are uncolored.
Let~$\mathrm{Tri}(T_1,T_2)$ be the tournament obtained by forming the disjoint union of the three tournaments~$T_1$,~$T_1'$ and~$T_2$ where~$T_1\cong T_1'$.
We add edges from all vertices of~$T_1$ to all vertices of~$T_1'$, from all vertices of~$T_1'$ to all vertices of~$T_2$ and from all vertices of~$T_2$ to all vertices of~$T_1$ (see Figure~\ref{fig:tri}).
We observe that two vertices that are contained in the same of the three sets~$V(T_1)$,~$V(T_2)$,~$V(T_1')$ have at least~$n$ common out-neighbors. However, two vertices that are not contained in the same of these three sets have at most~$n-1$ common out-neighbors.
We conclude that an automorphism of~$\mathrm{Tri}(T_1,T_2)$ preserves the partition of~$V(\mathrm{Tri}(T_1,T_2))$ into the three sets~$V(T_1)$,~$V(T_1')$ and~$V(T_2)$.
Given a generating set for~$\Aut(\mathrm{Tri}(T_1,T_2))$ it holds that there is some generator that maps a vertex from~$V(T_1)$ to a vertex from~$V(T_2)$ if and only if~$T_1$ and~$T_2$ are isomorphic. This proves the first part of the lemma.
Suppose additionally that~$T_1$ and~$T_2$ are asymmetric.
We then further conclude that the tournament~$\mathrm{Tri}(T_1,T_2)$ has a non-trivial automorphism if and only if~$T_1$ and~$T_2$ are isomorphic. This shows that the decision version of asymmetric tournament isomorphism reduces to tournament asymmetry.
Since the search version is Turing-reducible to the decision version of isomorphism (Theorem~\ref{thm:reductios:relative:to:class}) this finishes the proof.
\end{proof}
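A minimal sketch of the construction of~$\mathrm{Tri}(T_1,T_2)$ on adjacency matrices, reusing~$T_1$ for the isomorphic copy~$T_1'$:
\begin{verbatim}
def triangle_tournament(t1, t2):
    """Build Tri(T_1, T_2) from two n-vertex tournaments given as
    boolean adjacency matrices.  Blocks: T_1 -> T_1' -> T_2 -> T_1."""
    n = len(t1)
    m = 3 * n
    t = [[False] * m for _ in range(m)]
    for v in range(n):
        for w in range(n):
            t[v][w] = t1[v][w]                 # T_1  on {0,...,n-1}
            t[n + v][n + w] = t1[v][w]         # T_1' on {n,...,2n-1}
            t[2 * n + v][2 * n + w] = t2[v][w] # T_2  on {2n,...,3n-1}
    for v in range(n):
        for w in range(n):
            t[v][n + w] = True                 # all of T_1  -> T_1'
            t[n + v][2 * n + w] = True         # all of T_1' -> T_2
            t[2 * n + v][w] = True             # all of T_2  -> T_1
    return t
\end{verbatim}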
For Turing reductions, the converse of the previous lemma also holds.
In fact the converse holds for arbitrary graph classes.
\begin{lemma} \label{lem:asym:red:to:asym:iso}
Let~$\mathcal{C}$ be a graph class.
\begin{enumerate}
\item The task to compute a generating set for the automorphism group of graphs in~$\mathcal{C}$ Turing-reduces to the isomorphism problem for colored graphs in~$\mathcal{C}$ (i.e.,~$\LanguageTYPESET{AUT}_\mathcal{\mathcal{C}} \leq_T^p \LanguageTYPESET{col\text{-}GI}_\mathcal{\mathcal{C}}$).
\item Asymmetry checking for graphs in~$\mathcal{C}$ polynomial-time Turing-reduces to isomorphism checking of asymmetric colored graphs in~$\mathcal{C}$ (i.e.,
$\LanguageTYPESET{GA}_\mathcal{\mathcal{C}} \leq_T^p \LanguageTYPESET{col\text{-}GI}_\mathcal{\LanguageTYPESET{Asym}\mathcal{C}}$).
\end{enumerate}
\end{lemma}
\begin{proof}
The proof of the first part is a well known reduction that already appears in~\cite{DBLP:journals/ipl/Mathon79}. We can also see it by applying Part~\ref{item:aut:to:gi} of Theorem~\ref{thm:reductios:relative:to:class} to the class of colored graphs in~$\mathcal{C}$.
For the second part, assume we have an oracle~$O_1$ for isomorphism checking of colored asymmetric graphs in~$\mathcal{C}$. Then we also have an oracle~$O_2$ for the search-version of isomorphism checking of colored asymmetric graphs in~$\mathcal{C}$. Indeed, we can find an isomorphism by individualizing more and more vertices in both graphs while keeping the graphs isomorphic. When all vertices are singletons, there is only one option for the isomorphism.
Now let~$G$ be a graph in~$\mathcal{C}$. Without loss of generality assume that~$V(G) = \{v_1,\ldots,v_n\}$.
For every~$t,t'\in \{1,\ldots,n\}$ with~$t<t'$ we call~$O_2(G_{(v_1,\ldots,v_{t-1},v_t)},G_{(v_1,\ldots,v_{t-1},v_{t'})})$.
Here the notation~$G_{(u_1,\ldots,u_{\ell})}$ denotes the graph~$G$ colored such that the color of~$u_i$ is~$i$ and vertices not in~$\{u_1,\ldots,u_{\ell}\}$ have color 0. (With respect to the partition of the vertices into color classes this is the same as constructing the graph obtained from~$G$ by individualizing~$u_1,\ldots,u_{\ell}$ one after the other.)
If we find an isomorphism among the calls then this isomorphism is non-trivial since it maps~$v_t$ to~$v_{t'}$ and thus~$G$ is not asymmetric. Conversely if~$G$ is not asymmetric, then let~$j$ be the least integer for which~$G_{(v_1,\ldots,v_{j})}$ is asymmetric. Then~$j<n$ (since~$G_{(v_1,\ldots,v_{n-1})}$ is always asymmetric) and there is a~$t'>j$ such that~$G_{(v_1,\ldots,v_{j-1},v_{j})}$ and~$G_{(v_1,\ldots,v_{j-1},v_{t'})}$ are isomorphic. This isomorphism will be found by the oracle.
While the oracle~$O_2$ can sometimes output incorrect answers, namely when one of the inputs is not asymmetric, $O_2$ is certifying in the sense that we can check whether a given answer is really an isomorphism. Thus, we avoid making any errors whatsoever.
\end{proof}
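The double loop of this reduction can be sketched as follows; the callables \texttt{iso\_search}, \texttt{individualize} and \texttt{is\_automorphism} are hypothetical stand-ins for the oracle~$O_2$, the coloring operation~$G_{(u_1,\ldots,u_{\ell})}$ and the final verification of a returned map.
\begin{verbatim}
def is_asymmetric(vertices, iso_search, individualize, is_automorphism):
    """Sketch of the reduction above.  iso_search may answer
    arbitrarily when an input is not asymmetric, so every returned
    map is verified before it is trusted; hence no errors are made."""
    v = list(vertices)               # v_1, ..., v_n in a fixed order
    n = len(v)
    for t in range(n):               # t plays the role of t above
        for tp in range(t + 1, n):   # tp plays the role of t' > t
            g1 = individualize(v[:t] + [v[t]])   # G_(v_1,...,v_t-1,v_t)
            g2 = individualize(v[:t] + [v[tp]])  # G_(v_1,...,v_t-1,v_t')
            phi = iso_search(g1, g2)
            if phi is not None and is_automorphism(phi):
                return False  # phi maps v[t] to v[tp]: non-trivial
    return True
\end{verbatim}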
\section{Invariant automorphism samplers from asymmetry tests}
As discussed before, the asymmetry problem of a class of graphs reduces to the isomorphism problem of graphs in this class. However, it is not known whether there is a reduction in the reverse direction, or whether the asymmetry problem may actually be computationally easier than the isomorphism problem. To approach this question, we now explore what computational power we could get from having available an oracle for the asymmetry problem.
An \emph{invariant automorphism sampler for a graph~$G$} is a sampler over~$\Aut(G)\setminus\{\id\}$ which satisfies the property that if~$\Pr_{\str}(\varphi) = p$ then~$\Pr_{\str}(\psi^{-1}\circ \varphi \circ \psi) = p$ for all~$\psi \in \Aut(G)$.
We first show how to use an oracle for asymmetry to design an invariant automorphism sampler for a tournament~$T$.
\begin{lemma}\label{lem:from:tour:asym:to:aut:sampler}
Given an oracle for asymmetry of tournaments ($\LanguageTYPESET{GA}_\mathcal{\LanguageTYPESET{Tour}}$)
we can construct for every given colored (or uncolored) tournament~$T$ that is not asymmetric an invariant automorphism sampler.
The computation time (and thus the number of oracle calls) required to sample once from~$\str$ is polynomial in~$|V(T)|$.
\end{lemma}
\begin{algorithm}
\caption{An invariant automorphism sampler for tournaments using an asymmetry oracle}\label{alg:iso:sampler:from:asym:oracle}
\begin{algorithmic}[1]
\REQUIRE A tournament~$T$ that is not asymmetric and an oracle~$O$ for tournament asymmetry.
\ENSURE An automorphism~$\varphi\in \Aut(T)\setminus\{\id\}$. As a random variable, the outputs of the algorithm form an invariant automorphism sampler for~$T$.
\vspace{0.2cm}
\STATE $T_{\mathrm{next}} \leftarrow T$
\WHILE{$\Aut(T_{\mathrm{next}})\neq \{\id\}$}
\STATE{Pick a vertex~$v$ independently, uniformly at random among all non-singleton color classes in~$T_{\mathrm{next}}$.}
\STATE $T\leftarrow T_{\mathrm{next}}$
\STATE{$T_{\mathrm{next}}\leftarrow T_{(v)}$}\COMMENT{individualize~$v$}
\ENDWHILE
\COMMENT{at this point~$T_{\mathrm{next}}$ is asymmetric}
\STATE Let~$V'$ be the set of those vertices that have the same color in~$T$ as~$v$.
\STATE Let~$V''$ be the set of those vertices~$v''$ in~$V'\setminus \{v\}$ for which~$\Aut(T_{(v'')})= \{\id\}$.
\STATE Let~$V'''$ be the set of those vertices~$v'''$ in~$V''$ for which
$T_{\mathrm{next}}\cong T_{(v''')}$. \\\COMMENT{use Part~\ref{item:asym:iso:to:asym} of Lemma~\ref{lem:asym:iso:red:to:asym} }
\STATE Pick a vertex~$u\in V'''$ uniformly at random.
\STATE Compute an isomorphism~$\varphi$ from~$T_{\mathrm{next}}$ to~$T_{(u)}$. \COMMENT {there is only one such isomorphism}
\RETURN $\varphi$
\end{algorithmic}
\end{algorithm}
\begin{proof}
Let~$O_1$ be an oracle for uncolored tournament asymmetry.
By Lemma~\ref{lem:col:tour:iso:to:tour:asym}, we can transform the oracle~$O_1$ for the asymmetry of uncolored tournaments into an oracle~$O_2$ for asymmetry of colored tournaments. By Lemma~\ref{lem:asym:iso:red:to:asym} Part~\ref{item:asym:iso:to:asym}, we can also assume that we have an oracle~$O_3$ that decides the isomorphism problem of colored asymmetric tournaments.
More strongly, the proof of Part~\ref{item:asym:iso:to:asym} of Lemma~\ref{lem:asym:iso:red:to:asym} remarks on the search version; thus we can assume that~$O_3$ also solves the isomorphism search problem for asymmetric tournaments.
To obtain the desired sampler~$\str$ we proceed as follows. In the given tournament~$T$ we repeatedly fix (by individualization, i.e., giving it a special color) uniformly, independently at random more and more vertices until the resulting tournament is asymmetric. This gives us a sequence of colored tournaments~$T = T_0, T_1, \ldots, T_t$ such that~$\Aut(T_t) = \{\id \}$,~$\Aut(T_{t-1}) \neq \{\id \}$ and such that~$T_t = ({T_{t-1}})_{(v)}$ for some vertex~$v$. In other words,~$T_t$ is obtained from~$T_{t-1}$ by individualizing~$v$ which makes the graph asymmetric.
Using the available oracle~$O_2$, we can compute the set~$V''$ of those vertices~$v''$ in~$V(T)\setminus\{v\}$ that have the same color as~$v$ such that~$\Aut((T_{t-1})_{(v'')})= \{\id\}$. There must be at least one vertex in~$V''$ since~$T_{t-1}$ is not asymmetric.
Using the oracle~$O_3$, we can then compute the subset~$V'''\subseteq V''$ of those vertices~$v'''$ for which~$(T_{t-1})_{(v''')}$ and~$T_t$ are isomorphic.
Next, we pick a vertex~$u\in V'''$ uniformly at random. Since both~$(T_{t-1})_{(u)}$ and~$T_{t}$ are asymmetric, using the oracle~$O_3$ for the isomorphism search problem we can compute an isomorphism~$\varphi$ from~$(T_{t-1})_{(u)}$ to~$T_{t}$. This isomorphism~$\varphi$ is unique and it is a non-trivial element of~$\Aut(T)$. Algorithm~\ref{alg:iso:sampler:from:asym:oracle} gives further details.
\emph{(Invariance)} The invariance follows directly from the fact that all steps of the algorithm either consist of choosing a vertex uniformly at random or computing an object that is invariant with respect to all automorphisms fixing all vertices that have been randomly chosen up to this point.
\emph{(Running time)} Concerning the running time, one call of Algorithm~\ref{alg:iso:sampler:from:asym:oracle} uses fewer than~$2n$ calls to oracle~$O_2$ and at most~$n$ calls to oracle~$O_3$. The overall running time is thus polynomial.
\end{proof}
Let us comment on whether the technique of the lemma can be applied to graph classes other than tournaments. For the technique to apply to a graph class~$\mathcal{C}$, we require the oracle~$O_2$, which solves colored asymmetry for graphs in~$\mathcal{C}$, and the oracle~$O_3$, which solves the isomorphism search problem for asymmetric colored objects in~$\mathcal{C}$. (The oracle~$O_1$ is a special case of~$O_2$.)
In the case of tournaments, having an oracle~$O_1$ (i.e., an oracle for uncolored asymmetry) is sufficient to simulate the oracles~$O_2$ and~$O_3$, but this is not necessarily possible for all graph classes~$\mathcal{C}$. It is however possible to simulate such oracles for every graph class that satisfies some suitable (mild) assumptions, as can be seen from the discussion in Section~\ref{sec:gadget:constructs}. In particular, given an oracle for asymmetry of all graphs we can construct an invariant automorphism sampler for all graphs that are not asymmetric.
\section{Invariant suborbits from invariant automorphism samplers}\label{sec:sampling:minimimal:orbits}
Let~$G$ be a directed graph.
Let~$\str$ be an invariant automorphism sampler for~$G$.
We now describe an algorithm that, given access to such a sampler, constructs a non-discrete partition of~$V(G)$ which is at least as fine as the orbit partition of~$G$ under~$\Aut(G)$ and invariant under~$\Aut(G)$. Here, a partition~$\pi$ is invariant under~$\Aut(G)$ if~$\pi = \psi(\pi)$ for all~$\psi \in \Aut(G)$. (A partition is discrete if it consists only of singletons.)
\begin{theorem}\label{thm:from:aut:sampler:to:suborbits}
For every~$c\in \mathbb{N}$, there is a randomized polynomial-time algorithm that, given a graph~$G$ and an invariant automorphism sampler~$\str$ for~$G$ constructs with error probability at most~$\frac{1}{|G|^c}$ a non-discrete partition~$\pi$ of~$V(G)$ such that
\begin{enumerate}
\item $\pi$ is at least as fine as the orbit partition of~$V(G)$ under~$\Aut(G)$ and
\item $\pi$ is invariant under~$\Aut(G)$.
\end{enumerate}
The algorithm also provides a set of certificates~$\Phi = \{\varphi_1,\ldots, \varphi_m\} \subseteq \Aut(G)$
such that for every pair of vertices~$v,v'\in V(G)$ that lie in the same class of~$\pi$ there is some~$\varphi_i$ with~$\varphi_i(v) = v'$.
\end{theorem}
\begin{proof}
Let~$M = \{(v,w)\mid v,w\in V(G), v\neq w, \exists \varphi\in \Aut(G)\colon \varphi(v) = w \}$ be the set of pairs of two distinct vertices lying in the same orbit.
With the sampler~$\str$ we can simulate a sampler~$\str'$ over~$M$ invariant under~$\Aut(G)$ as follows. To create an element for~$\str'$ we sample an element~$\varphi$ from~$\str$ and uniformly at random choose an element~$v$ from the support~$\supp(\varphi) =\{x\in V(G)\mid \varphi(x)\neq x\}$ of~$\varphi$. Then the element for~$\str'$ is~$(v,\varphi(v))$. It follows from the construction that~$\str'$ is a sampler for~$M$. Moreover, since all random choices are independent and uniform,~$\str'$ is invariant under automorphisms.
Using the algorithm from Theorem~\ref{thm:invariant:sampling} we can thus compute a characteristic subset~$M'$ of~$M$. Since~$\str'$ is $\Aut(G)$-invariant, the fact that~$M'$ is characteristic implies that it is also $\Aut(G)$-invariant.
For the given~$c\in \mathbb{N}$, to obtain the right error bound, we choose~$\varepsilon$ to be~$\frac{1}{|G|^c}$ for the algorithm from Theorem~\ref{thm:invariant:sampling}. Then the error probability is at most~$\varepsilon= \frac{1}{|G|^c}$ and the running time is polynomial in~$|M| = O(|G|^2)$ and~$\ln(|G|^c) = O(\ln|G|)$, and thus polynomial in the size of the graph.
Regarding~$M'$ as a binary relation on~$V(G)$ we compute the transitive closure and
let~$\pi$ be the partition of~$V(G)$ into equivalence classes of said closure, where vertices that do not appear at all as entries in~$M'$ form their own class. By construction, elements that are in the same class of~$\pi$ are in the same orbit under~$\Aut(G)$. Moreover~$\pi$ is~$\Aut(G)$-invariant since~$M'$ is~$\Aut(G)$-invariant.
To provide certificates for the elements in~$M'$ we can store all elements given to us by~$\str$. For each~$(v,w)\in M'$ we can thus compute an automorphism~$\varphi_{v,w}\in \Aut(G)$ with~$\varphi_{v,w}(v) = w$.
For pairs in the transitive closure of~$M'$ we then multiply suitable automorphisms.
\end{proof}
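A minimal Python sketch of this post-processing step, assuming the characteristic pair set~$M'$ and the sampled certificate automorphisms are available from the sampling stage (\texttt{cert} is keyed by the pairs of~$M'$):
\begin{verbatim}
import collections

def suborbits_from_pairs(vertices, cert):
    # cert[(v, w)]: a stored automorphism (a dict on the vertex set)
    # with cert[(v, w)][v] == w, one for each pair (v, w) in M'.
    inv = lambda f: {y: x for x, y in f.items()}
    comp = lambda f, g: {x: f[g[x]] for x in g}        # f after g
    adj = collections.defaultdict(dict)
    for (v, w), phi in cert.items():
        adj[v][w] = phi
        adj[w][v] = inv(phi)
    pi, to_root, seen = [], {}, set()
    for s in vertices:
        if s in seen:
            continue
        cls, stack = [s], [s]
        seen.add(s)
        to_root[s] = {x: x for x in vertices}          # identity
        while stack:                                   # traverse the closure
            v = stack.pop()
            for w, phi in adj[v].items():
                if w not in seen:
                    seen.add(w); stack.append(w); cls.append(w)
                    to_root[w] = comp(phi, to_root[v]) # maps s to w
        pi.append(cls)
    # a certificate mapping v to v' within a class is
    # comp(to_root[v'], inv(to_root[v]))
    return pi, to_root
\end{verbatim}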
If a partition~$\pi$ satisfies the conclusion of the theorem, we call it an \emph{invariant collection of suborbits}. We call the elements of~$\Phi$ the \emph{certificates}. Let us caution the reader that the set~$\Phi$ returned by the algorithm is not necessarily characteristic. Moreover, the orbits of the elements in~$\Phi$ might not necessarily be contained within classes of~$\pi$.
We call an algorithm an \emph{oracle for invariant suborbits} if, given a tournament~$T$, the algorithm returns a pair~$(\pi,\Phi)$ constituting invariant suborbits and certificates, in case~$T$ is not asymmetric, and returns the discrete partition~$\pi$ and~$\Phi =\{\id\}$ whenever~$T$ is asymmetric.
\section{Computing the automorphism group from invariant suborbits}\label{sec:auto:group:from:suborbits}
To exploit invariant suborbits we make use of the powerful group-theoretic technique to compute stabilizer subgroups.
\begin{theorem}[Luks~\cite{DBLP:journals/jcss/Luks82}]\label{thm:solvable:intersect}
There is an algorithm that,
given a permutation group~$\Gamma$ on~$\{1,\ldots,n\}$ and a subset~$B\subseteq \{1,\ldots,n\}$, computes (generators for) the setwise stabilizer of~$B$ in~$\Gamma$. If~$\Gamma$ is solvable, then this algorithm runs in polynomial time.
\end{theorem}
We will apply the theorem in the following form: Let~$G$ be a graph and~$\Gamma$ a solvable permutation group on~$V(G)$. Then~$\Gamma \cap \Aut(G)$ can be computed in polynomial time. This follows directly from the theorem by considering the induced action of~$\Gamma$ on pairs of vertices from~$V(G)$ and noting that~$\Gamma \cap \Aut(G)$ consists of those elements that stabilize the edge set.
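The following Python sketch illustrates this application; \texttt{setwise\_stabilizer} stands for an assumed implementation of the algorithm from Theorem~\ref{thm:solvable:intersect} (it is not a real library call).
\begin{verbatim}
from itertools import product

def intersect_with_aut(gens, n, edges, setwise_stabilizer):
    # gens: generators of a solvable group Gamma as dicts on {0,...,n-1};
    # edges: the edge set of G as a set of ordered pairs;
    # setwise_stabilizer(gens, domain_size, B): assumed oracle computing
    # generators of the setwise stabilizer of B.
    pairs = [(u, v) for u, v in product(range(n), repeat=2) if u != v]
    index = {p: i for i, p in enumerate(pairs)}
    lifted = [{index[(u, v)]: index[(g[u], g[v])] for (u, v) in pairs}
              for g in gens]                 # induced action on ordered pairs
    B = {index[e] for e in edges}
    stab = setwise_stabilizer(lifted, len(pairs), B)
    out = []                                 # pull back to the vertex action
    for h in stab:
        out.append({u: pairs[h[index[(u, v)]]][0] for (u, v) in pairs})
    return out            # generators of the intersection of Gamma and Aut(G)
\end{verbatim}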
In our algorithm we will also use the concept of a quotient tournament (which can, for example, be found implicitly in~\cite{DBLP:journals/jcss/ArvindDM10}; see also~\cite{DBLP:conf/stacs/Schweitzer15}). Let~$T$ be a tournament and let~$\pi$ be a partition of~$V(T)$ in which all parts have odd size.
We define~$T/\pi$, the \emph{quotient of~$T$ modulo~$\pi$}, to be the tournament on~$\pi$ (i.e., the vertices of~$T/\pi$ are the parts of~$\pi$) where for distinct~$C,C'\in V(T/\pi) = \pi$ there is an edge from~$C$ to~$C'$ if and only if in~$T$ there are more edges going from~$C$ to~$C'$ than edges going from~$C'$ to~$C$. Note that since both~$|C|$ and~$|C'|$ are odd there are either more edges going from~$C$ to~$C'$ or more edges going from~$C'$ to~$C$. This implies that~$T/\pi$ is a tournament.
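For concreteness, the quotient can be computed as in the following Python sketch, with tournaments represented as dictionaries mapping each vertex to its set of out-neighbors:
\begin{verbatim}
def quotient_tournament(T, pi):
    # pi: list of disjoint vertex classes, each of odd size.  Since
    # |C| * |C'| is odd, the edge counts between two classes never tie,
    # so the majority rule below always yields a tournament.
    q = {i: set() for i in range(len(pi))}
    for i, C in enumerate(pi):
        for j, D in enumerate(pi):
            if i < j:
                forward = sum(1 for u in C for v in D if v in T[u])
                if 2 * forward > len(C) * len(D):
                    q[i].add(j)
                else:
                    q[j].add(i)
    return q
\end{verbatim}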
\begin{theorem}\label{thm:invariant:suborbits:give:tour:iso}
Suppose we are given as an oracle a randomized Las Vegas algorithm that computes invariant suborbits for tournaments in polynomial time. Then we can compute the automorphism group of tournaments in polynomial time.
\end{theorem}
\begin{algorithm}
\caption{Computing the automorphism group of a tournament using invariant suborbits}\label{alg:aut:group}
\begin{algorithmic}[1]
\REQUIRE A (colored) tournament~$T$ and an oracle~$O$ for invariant suborbits with certificates.
\ENSURE A generating set for the automorphism group~$\Aut(T)$.
\vspace{0.2cm}
\IF [Case 0]{$T$ is not monochromatic} \label{monocrome:case}
\STATE Let $\mathrm{Col}$ be the set of vertex colors of~$T$.
\FOR{$c\in \mathrm{Col}$}
\STATE Let $V^c$ be the set of vertices in~$T$ of color~$c$.
\STATE $\Psi^c\leftarrow \Aut(T[V^c])$ \COMMENT{recursion}\label{item:case:0:recursion}
\STATE Let $\widehat{\Psi^c}$ be the set of extensions of $\Psi^c$ to~$V(T)$ obtained by fixing vertices outside~$V^c$.
\ENDFOR
\STATE $\Psi = \bigcup_{c\in \mathrm{Col}} \widehat{\Psi^c}$
\RETURN $\langle \Psi\rangle\cap \Aut(T)$ \COMMENT {solvable group stabilizer}\label{item:case:0:stab}
\ENDIF
\STATE $(\pi,\Phi)\leftarrow O(T)$ \COMMENT {$\pi$ forms invariant suborbits of~$T$, $\Phi$ the set of certificates}
\IF [$T$ is asymmetric]{$\pi$ is discrete}
\RETURN $\{\id\}$
\ELSIF [Case 1]{$\pi = \{V(T)\}$}
\STATE Choose $v\in V(T)$ arbitrarily.
\STATE Let~$T'$ be obtained from~$T$ by coloring~$v$ with~1, all in-neighbors of~$v$ with 2 and other vertices with~$3$.
\RETURN $\Phi \cup \Aut(T')$ \COMMENT{recursion}\label{item:case:1:recursion}
\ELSIF [Case 2]{$\exists C,C'\in \pi\colon |C|\neq |C'|$}
\STATE Let~$T'$ be obtained from~$T$ by coloring each vertex~$v$ with color~$|[v]_{\pi}|$.
\RETURN $\Aut(T')$ \COMMENT{recursion}\label{item:case:2:recursion}
\ELSE [Case 3]
\STATE For~$C\in \pi$ we let~$T_C$ be the graph obtained from~$T[C]$ by picking an arbitrary vertex~$v\in C$ and coloring~$v$ with~1, all in-neighbors of~$v$ with 2 and other vertices with~$3$.\label{item:individ:tri}
\FOR {$(C,C')\in \pi\times\pi$ with $C\neq C'$}
\STATE Compute~$\Aut (\mathrm{Tri}(T_C,T_{C'}))$ and extract an isomorphism $\varphi_{(C,C')}\colon T[C]\rightarrow T[C']$ whenever such an isomorphism exists.\COMMENT{recursion}\label{line:iso:call:via:aut}
\ENDFOR
\IF [Case 3a] {$\exists C,C'\in \pi\colon T[C] \ncong T[C']$}
\STATE Let~$T'$ be obtained from~$T$ by coloring~$V(T)$ so that~$v$ and~$v'$ have the same color if and only if~$T[([v])] \cong T[([v'])]$.
\RETURN $ \Aut(T')$ \COMMENT{recursion}\label{item:case:3a:recursion}
\ELSE [Case 3b]
\STATE $\Psi \leftarrow \Aut(T/\pi)$ \COMMENT{recursion on the quotient}\label{item:quotient}
\STATE $\widehat{\Psi}\leftarrow \{\widehat{g} \mid g\in \Psi\}$, where~$\widehat{g}(v) =\varphi_{([v],g([v]))}(v)$.
\FOR {$C \in \pi$}
\STATE $\Upsilon_C \leftarrow \Aut(T[C])$ \COMMENT{recursion}\label{item:auts:of:parts}
\STATE Compute~$\widehat{\Upsilon}_C$ the lifts of elements in~$\Upsilon_C$ by fixing vertices outside~$C$.
\ENDFOR
\RETURN $\langle \widehat{\Psi}\cup \bigcup_{C\in \pi} \widehat{\Upsilon}_C\rangle \cap \Aut(T)$ \COMMENT {solvable group stabilizer}\label{item:case:3b:stab}
\ENDIF
\ENDIF
\end{algorithmic}
\end{algorithm}
\begin{proof}
We describe an algorithm that computes the automorphism group of a colored tournament given a randomized oracle that provides invariant suborbits.
\emph{(Description of the algorithm)} Let~$T$ be a given colored tournament.
(Case 0: $T$ is not monochromatic.) If~$T$ is not monochromatic then we proceed as follows:
Let~$\mathrm{Col}$ be the set of colors that appear in~$T$.
For~$c\in \mathrm{Col}$, let~$V^c$ be the set of vertices of color~$c$ and let~$T^c = T[V^c]$ be the subtournament induced by the vertices in~$V^c$.
We recursively compute~$\Aut(T^c)$ for all~$c\in \mathrm{Col}$. Let~$\Psi^c$ be the set of generators obtained as an answer. We lift every generator to a permutation of~$V(T)$ by fixing all vertices outside of~$V^c$. Let~$\widehat{\Psi^c}$ be the set of lifted generators of~$\Psi^c$ and let~$\Psi = \bigcup_{c\in \mathrm{Col}} \widehat{\Psi^c}$ be the set of all lifted generators.
Since~$\Aut(T^c) =\langle \Psi^c\rangle$ is solvable, we conclude that~$\langle \Psi\rangle$ is a direct product of solvable groups and thus solvable. We can thus compute~$\langle \Psi\rangle\cap \Aut(T)$ using Theorem~\ref{thm:solvable:intersect} and return the answer.
\medskip
This concludes Case 0. In every other case we first compute
a partition~$\pi$ into suborbits using the oracle and a corresponding set of certificates~$\Phi$.
For a partition~$\pi$ of some set~$V$ we denote for~$v\in V$ by~$[v]_{\pi}$ the element of~$\pi$ containing~$v$. We may drop the index when it is obvious from the context.
If~$|T|=1$ then we simply return the identity.
(Case 1: $\pi$ is trivial). In case~$\pi$ is trivial (i.e.,~$\pi=\{V(T)\}$), we know that~$T$ is transitive. We choose an arbitrary vertex~$v\in V(T)$.
Let~$\lambda$ be the coloring of~$V(T)$ satisfying
\[\lambda(u) = \begin{cases}
1 & \text {if } u=v\\
2 & \text {if } (u,v)\in E(T)\\
3 & \text{otherwise}.
\end{cases}\]
We recursively compute a generating set~$\Psi$ for~$\Aut(T')$, where~$T'$ is~$T$ recolored with~$\lambda$. We then return~$\Psi \cup \Phi$.
(Case 2: not all classes of $\pi$ have the same size.)
We color every vertex with the size of the class of~$\pi$ in which it is contained. Now~$T$ is not monochromatic anymore and we recursively compute~$\Aut(T)$ with~$T$ having said coloring. (In other words,
we proceed as in Case~$0$.)
(Case 3: all classes of $\pi$ have the same size but~$\pi$ is non-trivial.)
We compute for each pair of distinct equivalence classes~$C$ and~$C'$ of~$\pi$ an isomorphism~$\varphi_{(C,C')}$ from~$T[C]$ to~$T[C']$ or determine that no such isomorphism exists, as follows: We choose for each~$C$ an arbitrary vertex~$v\in C$.
We let~$T_C$ be the tournament obtained from~$T[C]$ by
coloring~$v$ with~1, all in-neighbors of~$v$ with 2 and other vertices with~$3$. We let~$T_{C,C'} = \mathrm{Tri}(T_C,T_{C'})$ be the triangle tournament built from~$T_C$,~$T_{C'}$ and an isomorphic copy~$(T_{C})'$ of~$T_C$ (as defined in Section~\ref{sec:gadget:constructs} in the proof of Lemma~\ref{lem:asym:iso:red:to:asym}).
Using recursion we compute~$\Aut(T_{C,C'})$. From the result we can extract an isomorphism from~$T[C]$ to~$T[C']$ since~$V(T[C])$ and~$V(T[C'])$ are blocks of~$T_{C,C'}$.
(Case 3a:) If it is not the case that for every pair~$C,C'$ of color classes there is an isomorphism from~$T[C]$ to~$T[C']$ then we color the vertices of~$T$ so that~$v,v'$ have the same color if and only if there is an isomorphism from~$T[([v])]$ to~$T[([v'])]$, where as before for every vertex~$u$ we denote by~$[u]$ the class of~$\pi$ containing~$u$.
With this coloring,~$T$ is not monochromatic anymore and we recursively compute~$\Aut(T)$ with~$T$ having said coloring. (In other words,
we proceed as in Case~$0$.)
(Case 3b:) Otherwise, for every pair~$C,C'$ of color classes, there is an isomorphism from~$T[C]$ to~$T[C']$. Note that all color classes are of odd size, since~$T[C]$ is transitive (as dictated by~$\pi$) and every vertex of a vertex-transitive tournament has equal in- and out-degree, which forces an odd number of vertices.
Thus, we can compute the quotient tournament~$T/\pi$. We recursively compute a generating set~$\Psi = \{ g_1,\ldots,g_t \}$ for the automorphism group of~$T/\pi$.
We lift each~$g_i$ to a permutation~$\widehat{g_i}$ of~$V(T)$ as follows.
The permutation~$\widehat{g_i}$ maps each vertex~$v$ to~$\varphi_{([v],g_i([v]))}(v)$. Since~$g_i$ is a permutation and each~$\varphi_{(C,C')}$ is a bijection, the map~$\widehat{g_i}$ is a permutation of~$V(T)$. Let~$\widehat{\Psi}= \{ \widehat{g_1},\ldots,\widehat{g_t} \}$ be the set of lifted generators.
As next step, for each class~$C$ we recursively compute a generating set~$\Upsilon_C$ for~$\Aut(T[C])$. We lift each generator in~$\Upsilon_C$ to a permutation of~$V(T)$ by fixing all vertices outside of~$C$ obtaining the set~$\widehat{\Upsilon}_C$ of lifted generators.
Consider the group~$\Gamma$ generated by the set~$\widehat{\Psi} \cup \bigcup_{C\in \pi} \widehat{\Upsilon}_C$.
As a last step, using Theorem~\ref{thm:solvable:intersect} we compute the subgroup~$\Gamma' =\Gamma \cap \Aut(T)$.
The details of this algorithm are given in Algorithm~\ref{alg:aut:group}.
\medskip
\emph{(Running time)}
We first argue that all work performed by an iteration of the algorithm apart from the recursive calls is polynomial in~$n$, say~$O(n^c)$ for some constant~$c$. This is obvious for all instructions of the algorithm except the task of computing the intersection $\langle \Psi\rangle\cap \Aut(T)$ in Case 0 (Line~\ref{item:case:0:stab}) and the task of computing $\langle \widehat{\Psi}\cup \bigcup_{C\in \pi} \widehat{\Upsilon}_C\rangle \cap \Aut(T)$ in Case 3b (Line~\ref{item:case:3b:stab}).
However, in Case 0, the group $\langle \Psi\rangle$ is a direct product of solvable groups, thus solvable, and in Case 3b, the group $\langle \widehat{\Psi}\cup \bigcup_{C\in \pi} \widehat{\Upsilon}_C\rangle$ is a subgroup of a wreath product of a solvable group with a solvable group and is thus solvable. (Alternatively we can observe that the natural homomorphism from the group $\langle \widehat{\Psi}\cup \bigcup_{C\in \pi} \widehat{\Upsilon}_C\rangle$ to~$\langle\Psi\rangle$ has kernel~$\langle\bigcup_{C\in \pi} \widehat{\Upsilon}_C\rangle$, a direct product of solvable groups.) In either case, using the algorithm from Theorem~\ref{thm:solvable:intersect}, the group intersection can be computed in polynomial time.
It remains to consider the number of recursive calls.
We will bound the amount of work of the algorithm in terms of~$t$, the maximum size of a color class of~$T$, and the number of vertices~$n$. Denote by~$R(t,n)$ the maximum number of nodes in the recursion tree over all tournaments for which the color classes have size at most~$t$ and the number of vertices is at most~$n$. Note that~$R(t,n)$ is monotone increasing in both components.
First note that if~$t<n$ the algorithm will end up in Case~0.
The recursive bound in Case~0 (Line~\ref{item:case:0:recursion}) is then~$R(t,n) \leq 1+ \sum_{i = 1}^\ell R(a_i,a_i)$ for some positive integers~$a_1,\ldots,a_{\ell}\in \mathbb{N}$ (the color class sizes) that sum up to~$n$ but are smaller than~$n$.
In Case~1, we have~$t=n$. The tournament~$T'$ is colored into three color classes.
Since~$T$ is transitive (and thus every vertex has in- and out-degree~$(t-1)/2$), in~$T'$ there is one color class of size~$1$ and there are two classes of size~$(t-1)/2$.
The recursive call will lead to Case~0, which then
yields one trivial recursive call on a tournament of size~1 and two calls with tournaments of size~$(t-1)/2$.
We obtain a recursive bound (for Line~\ref{item:case:1:recursion}) of~$R(t,n)\leq 2+ 2 R((t-1)/2,(t-1)/2)\leq 3 R(t/2,t/2)$.
In Case 2, we have~$t=n$ and observe that the recursive call is for a tournament that is not monochromatic. Thus the recursive call will end up in Case 0. We thus obtain a recursive bound (for Line~\ref{item:case:2:recursion}) of~$R(t,n) \leq 2+ \sum_{i = 1}^\ell R(a_i,a_i)$ for some positive integers~$a_1,\ldots,a_{\ell}\in \mathbb{N}$ that sum up to~$n$ but are smaller than~$n$.
In Case 3, we have~$t=n$. Note that if the classes of~$\pi$ have size~$t'$ then~$t'\leq n/3$ (the classes of~$\pi$ are all equally large, there are at least two of them, and their number is odd) and there are~$(n/t')^2=(t/t')^2$ recursive calls in Line~\ref{line:iso:call:via:aut}.
In the graph~$T_{C,C'}$ the color classes have size at most~$3 (t'-1)/2$ and there are at most~$3t'$ vertices. (The increase of a factor 3 comes from the~$\mathrm{Tri}()$ operation.)
Thus the cost for such calls is bounded by~$ (t/t')^2 \cdot R(3(t'-1)/2,3t')\leq (t/t')^2 \cdot R(3t'/2,3t') \eqqcolon R_3 $, where~$3t'/2\leq n/2 = t/2$.
Using the same arguments as before, in Case 3a we thus get a recursive bound for Line~\ref{item:case:3a:recursion} of~$\sum_{i = 1}^\ell R(b_i,b_i)$ and thus for Case 3a in total a bound of~$R(t,n) \leq 1 + R_3+ \sum_{i = 1}^\ell R(b_i,b_i)$ for some positive integers~$b_1,\ldots,b_{\ell}\in \mathbb{N}$ that sum up to~$n$ but are smaller than~$n-t'= t-t'$.
In Case 3b we need to additionally consider the cost for the recursive call in Line~\ref{item:quotient}. This cost is at most~$ R(t/t',t/t')$ where~$t'\geq 2$ since the coloring is not discrete.
Also there is a recursive cost of~$t/t'\cdot R(t',t') $ coming from Line~\ref{item:auts:of:parts}.
Thus in this case we end up with~$R(t,n) \leq 1 + R(t/t',t/t') + R_3 + t/t'\cdot R(t',t')$.
Summarizing we get that~$R(t,n)$ is bounded by
\[ \begin{cases}
1 & \text{if $n=1$}\\
2+ \sum\limits_{i = 1}^\ell R(a_i,a_i) & \text{in Cases 0 and 2, with~$\sum\limits_{i=1}^{\ell}a_i = n$ and $a_i\leq n-1$}\\
3 R(t/2,t/2)& \text{in Case 1}\\
1 + R_3+ \sum\limits_{i = 1}^\ell R(b_i,b_i)& \text{in Case 3a, with~$\sum\limits_{i=1}^{\ell}b_i = n$ and~$b_i\leq t-t'$}\\
1 + R(t/t',t/t') + R_3 + t/t'\cdot R(t',t') & \text{in Case 3b,}
\end{cases}\]
where~$ R_3 = (t/t')^2 \cdot R(3t'/2,3t')$ and~$t'$ satisfies~$3t'/2 \leq t/2$ and~$t/ t'\leq t/2$ and~$3t'\leq t$.
Let us define~$S(m)$ as the maximum of~$R(t,n)$ over all pairs of positive integers~$(t,n)$ with~$t+n\leq m$ and~$t\leq n$. Then we get from the above considerations that~$S(m)$ is bounded by one of the following
\[ \begin{cases}
1 & \text{if $m=2$}\\
2+ \sum\limits_{i = 1}^\ell S(a_i) & \text{$\sum\limits_{i=1}^{\ell}a_i \leq m$ and $a_i\leq m-1$}\\
3 S(m/2)& \\
1 + (m/t')^2 \cdot S(9/2 \cdot t')+ \sum\limits_{i = 1}^\ell S(b_i)& \text{with~$\sum\limits_{i=1}^{\ell}b_i = m$ and~$b_i\leq m-t'$}\\
1 + S(m/t') + (m/t')^2 \cdot S(9/2 \cdot t')+ m/t'\cdot S(t'), &
\end{cases}\]
where~$t'$ satisfies~$9/2 \cdot t'\leq 3/4\cdot m$ and~$m/ t'\leq m/2$.
It is now simply a calculation to show that for~$d$ sufficiently large, the function~$F(m)= m^d$ satisfies all the recurrence bounds for~$S$ (of course as lower bounds rather than upper bounds).
We show the calculation for the most interesting case, Case 3a.
Let~$x = m-t'$. Then~$5t'\leq x$. Furthermore the equation for Case 3a says~$S(x+t')\leq 1+ ((x+t')/t')^2 S(9/2 t') + \sum\limits_{i = 1}^\ell S(b_i)$ where~$b_i\leq x$ and~$\sum\limits_{i=1}^{\ell}b_i = x+t'$. Note for the function~$F$ that~$\sum\limits_{i = 1}^\ell F(b_i)$, under the conditions~$b_i\leq x$ and~$\sum\limits_{i=1}^{\ell}b_i = x+t'$, gets maximized as~$x^d + (t')^d$.
For the right hand side we get~$1+ ((x+t')/t')^2 (9/2)^d (t')^d + x^d + (t')^d \leq 1+ x^d+ (6/5)^2 (9/2)^d x^2 (t')^{d-2} + (t')^d$, which is certainly bounded by~$(x+t')^d$ for~$d$ sufficiently large since the expansion of~$(x+t')^d$ contains the summands~$x^d$,~$(t')^d$ and~$d x^{d-1} t'\geq d\, 5^{d-3} x^{2} (t')^{d-2}$. Thus~$F$ is an upper bound for~$S$.
Overall we obtain a polynomial-time algorithm from this recursive bound. This in particular implies that the algorithm halts.
\medskip
\emph{(Correctness)}
For the correctness proof we analyze the different cases one by one. By induction we can assume that recursive calls yield correct answers.
For Case~0, since the last instruction intersects some group with the automorphism group, it is clear that the algorithm can only return automorphisms of~$T$. Let us thus assume that~$\varphi\in \Aut(T)$. Then, for each color class~$c$, the set~$V^c$ is invariant under~$\varphi$ and~$\varphi|_{V^c}\in \Aut(T[V^c])$. This implies that~$\varphi \in \langle \Psi\rangle$.
For Case 1,~$T$ is transitive since~$\pi = \{V(T)\}$ shows that~$V(T)$ is an orbit. Thus,~$\Aut(T)$ is generated by the point stabilizer~$\Aut(T)_v \coloneqq \{\psi \in \Aut(T)\mid \psi(v) = v\}$ and an arbitrary transversal (i.e., a subset of~$\Aut(T)$ containing a representative from each coset of~$\Aut(T)_v$ in~$\Aut(T)$). Since~$\Phi$ contains a certificate for all pairs of distinct vertices~$(v,v')$ and since~$\Phi\subseteq \Aut(T)$ we conclude that~$\Aut(T) = \langle \Phi\cup \Aut(T')\rangle$.
For Case 2, it suffices to note that for every integer~$i\in \mathbb{N}$ the set $\{v \in V(T) \mid |[v]_\pi| = i \}$ is invariant under~$\Aut(T)$.
For Case 3, Line~\ref{line:iso:call:via:aut} note that similar to Case 1, the graphs~$T[C]$ and~$T[C']$ are transitive and thus the individualization in Line~\ref{item:individ:tri} does not make isomorphic graphs non-isomorphic.
For Case 3a, again note that for~$v\in V(T)$ the set~$\{v' \in V(T) \mid T[([v])] \cong T[([v'])] \}$ is invariant. For Case 3b we argue similarly to Case~0. Since the last instruction intersects some group with the automorphism group it is clear that the algorithm can only return automorphisms of~$T$. Let us thus assume that~$\varphi\in \Aut(T)$.
Then~$\varphi$ induces an automorphism~$\psi$ of~$T/\pi$. Note that there is some~$\widehat{\psi}$ in~$\langle\widehat{\Psi}\rangle$ that also induces~$\psi$ on~$T/\pi$. It suffices now to show that the map~$\widehat{\psi}^{-1}\circ \varphi$ is in~$\langle\bigcup_{C\in \pi} \widehat{\Upsilon}_C\rangle$. Consider~$C\in \pi$. Then~$\widehat{\psi}^{-1}\circ \varphi$ maps~$C$ to~$C$ and more strongly it induces an automorphism of~$T[C]$ which must be contained in~$\langle\widehat{\Upsilon}_C\rangle$. We conclude that~$\widehat{\psi}^{-1}\circ \varphi$ is in~$\langle\bigcup_{C\in \pi} \widehat{\Upsilon}_C\rangle$ finishing the proof.
\end{proof}
We have now assembled all the required parts to prove the main theorem of the paper.
\begin{corollary}
\begin{enumerate}
\item There is a randomized (one-sided error) polynomial-time Turing reduction from tournament isomorphism to asymmetry testing of tournaments (i.e.,~$\LanguageTYPESET{GI}_\mathcal{\LanguageTYPESET{Tour}} \leq_{r,T}^p \LanguageTYPESET{GA}_\mathcal{\LanguageTYPESET{Tour}}$).
\item There is a randomized polynomial-time Turing reduction from the computational task to compute generators of the automorphism group of a tournament to asymmetry testing of tournaments (i.e.,~$\LanguageTYPESET{AUT}_\mathcal{\LanguageTYPESET{Tour}} \leq_{r,T}^p \LanguageTYPESET{GA}_\mathcal{\LanguageTYPESET{Tour}}$).
\end{enumerate}
\end{corollary}
\begin{proof}
Recall that a two-sided error algorithm for an isomorphism search problem can be readily turned into a one-sided error algorithm by checking the output isomorphism for correctness. Thus, by Theorem~\ref{thm:reductios:relative:to:class} Part~\ref{item:gi:to:aut}
it suffices to prove the second part of the corollary.
Combining Lemma~\ref{lem:from:tour:asym:to:aut:sampler} and Theorem~\ref{thm:from:aut:sampler:to:suborbits}, from an oracle for tournament asymmetry we obtain a randomized Monte Carlo (i.e., with possible errors) algorithm that computes invariant suborbits. Given a Las Vegas algorithm (i.e., one without errors) for suborbits, Theorem~\ref{thm:invariant:suborbits:give:tour:iso} provides us with a computation of the automorphism group of tournaments.
It remains to discuss the error probability we get from using a Monte Carlo algorithm instead of a Las Vegas algorithm. Since there is only a polynomial number of oracle calls, and since the error bound in Theorem~\ref{thm:from:aut:sampler:to:suborbits} can be chosen smaller than~$\frac{1}{|G|^c}$ for every fixed constant~$c$, the overall error can be chosen to be arbitrarily small.
\end{proof}
\section{Discussion and open problems}\label{sec:open:prob}
This paper is concerned with the relationship between the asymmetry problem~$\LanguageTYPESET{GA}_\mathcal{\mathcal{C}}$ and the isomorphism problem~$\LanguageTYPESET{GI}_\mathcal{\mathcal{C}}$. While under mild assumptions there is a reduction from the former to the latter, a reduction in the other direction is usually not known. However, for tournaments we now have such a randomized reduction.
The first question that comes to mind is whether the technique described in this paper applies to other graph classes. While the sampling techniques from Sections~\ref{sec:sampling:subsets} to~\ref{sec:sampling:minimimal:orbits} can be applied to all graph classes that satisfy mild assumptions (e.g.,~$\LanguageTYPESET{col\text{-}GI}_\mathcal{\mathcal{C}} \leq_T^p \LanguageTYPESET{GI}_\mathcal{\mathcal{C}}$ and~$\LanguageTYPESET{col\text{-}GI}_\mathcal{\LanguageTYPESET{Asym}\mathcal{C}} \leq_T^p \LanguageTYPESET{GI}_\mathcal{\LanguageTYPESET{Asym}\mathcal{C}}$), the algorithm described in Section~\ref{sec:auto:group:from:suborbits} crucially uses the fact that automorphism groups of tournaments are solvable. This is not the case for general graphs, so for the open question of whether~$\LanguageTYPESET{GI}$ reduces to~$\LanguageTYPESET{GA}$ this may dampen our enthusiasm. However, what may bring our enthusiasm back up is that there are key classes of combinatorial objects that share properties similar to what we need.
In particular, this brings us to the question whether the techniques of the paper can be applied to group isomorphism. Just like for tournament isomorphism, finding a faster algorithm for group isomorphism (given by multiplication table) is a bottleneck for improving the run-time bound for isomorphism of general graphs beyond quasi-polynomial.
Since outer-automorphism groups of simple groups are solvable, we ask: Can we reduce the group isomorphism problem to the isomorphism problem for asymmetric groups? This question is significant since an asymmetry assumption on groups is typically a strong structural property and may help to solve the entire group isomorphism problem.
However, here one has to be careful to find the right notion of asymmetry since all groups have inner automorphisms. For such notions different possibilities come to mind.
A second natural open question would be whether there is a deterministic version of the algorithms given in this paper.
As a last open problem, recall that it was shown in Section~\ref{sec:sampling:subsets} that one can extract a characteristic subset for a sampler over a set~$M$ in time that depends polynomially on~$|M|$. Since the automorphism group of a graph can be superpolynomial in the size of the graph, we had to take a detour via suborbits in Section~\ref{sec:sampling:minimimal:orbits}. There can be no general way to extract a characteristic subset of~$M$ in polynomial time if~$|M|$ is not polynomially bounded, since we might never see an element twice.
However, if~$M$ has an algebraic structure, in particular if~$M$ is a permutation group over a polynomial size set, this is not clear.
Thus we ask: Is there a polynomial-time (randomized) algorithm that extracts a characteristic subgroup of a permutation group~$\Gamma$ using a sampler over~$\Gamma$?
\bibliographystyle{plainurl}
|
1,108,101,566,629 | arxiv | \section{Introduction}\label{sec:intro}
\input{Introduction}
\section{Setting}\label{sec:model}
\input{Setting}
\section{Methods}\label{sec:method}
\input{Estimator}
\section{Asymptotics}\label{sec:asymp}
\input{Asymptotics}
\section{Applications}\label{sec:appl}
\input{Applications}
\section{Simulation}
As a proof of concept, estimators corresponding to $g(c)=c^2$ and $g(c)=\log(c)$ with $d=1$ are calculated based on the simulation model
\begin{equation*}
\left\{\begin{array}{lcl}
Y^n_i &=&X^n_i + \varepsilon^n_i \\
\mathrm{d}X_t&=&.03\,\mathrm{d} t + \sqrt{c_t}\,\mathrm{d} W_t + J^X_t\,\mathrm{d} N^X_t\\
\mathrm{d}c_t&=&6(.16-c_t)\,\mathrm{d} t + .5\sqrt{c_t}\,\mathrm{d} B_t + \sqrt{c_{t-}}J^c_t\,\mathrm{d} N^c_t
\end{array}\right.
\end{equation*}
where $\varepsilon^n_i\overset{\text{i.i.d.}}{\sim}N(0,.005^2)$, $\mathbb{E}[(W_{t+\Delta}-W_t)(B_{t+\Delta}-B_t)]=-.6\Delta$, $J^X_t\sim N(-.01,.02^2)$, $N^X_{t+\Delta}-N^X_t\sim\mathrm{Poisson}(36\Delta)$, $\log(J^c_t)\sim N(-5,.8)$, $N^c_{t+\Delta}-N^c_t\sim\mathrm{Poisson}(12\Delta)$.
Each simulation employs $23400\times21$ data points with $\Delta_n=1s$. We choose the following tuning parameters:
\begin{table}[H]
\begin{tabular}{l|lll}
functionals & $l_n$ & $k_n$ & $\nu_n$ \\
\hline
$g(c)=c^2$ & $\lfloor\Delta_n^{-.5}\rfloor$ & $\lfloor\Delta_n^{-.69}\rfloor$ & $1.6\overline{\sigma}^2\Delta_n^{.47}$ \\
$g(c)=\log(c)$ & $\lfloor\Delta_n^{-.5}\rfloor$ & $\lfloor\Delta_n^{-.7}\rfloor$ & $1.5\overline{\sigma}^2\Delta_n^{.47}$
\end{tabular}
\end{table}
where $\overline{\sigma}^2$ is an estimate of the average volatility by bipower variation \cite{pv09a}.
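For concreteness, one possible Euler discretization of the simulation model in Python (the initial values, the seed, and the Bernoulli approximation to the Poisson jump arrivals are implementation choices, not part of the model):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_day(n=23400):
    dt = 1.0 / n                    # 1 second, with time measured in days
    x, c = 0.0, 0.16                # arbitrary initial values
    X = np.empty(n + 1); X[0] = x
    for i in range(n):
        dw, dz = rng.normal(0.0, np.sqrt(dt), size=2)
        db = -0.6 * dw + np.sqrt(1 - 0.6 ** 2) * dz   # corr(W, B) = -0.6
        jx = rng.normal(-0.01, 0.02) if rng.random() < 36 * dt else 0.0
        jc = (np.sqrt(max(c, 0.0)) * np.exp(rng.normal(-5.0, 0.8))
              if rng.random() < 12 * dt else 0.0)
        x += 0.03 * dt + np.sqrt(max(c, 0.0)) * dw + jx
        c += 6 * (0.16 - c) * dt + 0.5 * np.sqrt(max(c, 0.0)) * db + jc
        X[i + 1] = x
    return X + rng.normal(0.0, 0.005, size=n + 1)     # add i.i.d. noise
\end{verbatim}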
The results are shown in figure \ref{MC}.
\begin{figure}[H]
\centering
\caption{Simulation of volatility functional estimators}\label{MC}
\includegraphics[width=.5\textwidth]{MC_Qrt_preavg_d21_1_69_2000_v0_jmp_a15.pdf}\includegraphics[width=.5\textwidth]{MC_Log_preavg_d21_1_70_2000_v0_jmp_a15.pdf}
\end{figure}
\section{Discussions}
\input{Discussions}
\subsection{Quarticity estimation}
In the univariate setting, the so-called quarticity $\int_0^tc_s^2\,\mathrm{d} s$ appears in the asymptotic variances of many extant volatility estimators. The multivariate counterpart involves $\int_0^t c_s^{jl}c_s^{km}+c_s^{jm}c_s^{kl}\,\mathrm{d} s$, e.g., $\Xi(c_s,\gamma_s)^{jk,lm}$ in (\ref{AVAR}). Since the quarticity is an integrated functional of volatility, the volatility functional estimator facilitates uncertainty quantification for various volatility estimators.
\subsection{Realized Laplace transform}
\cite{tt12} put forward an estimator of the realized Laplace transform of volatility defined as
\[\int_0^t e^{iwc_s}\,\mathrm{d} s.\]
This transform can be viewed as the characteristic function of volatility under the occupation measure. By matching the moments of the realized Laplace transform with those induced by a model, we can estimate model parameters or test the model. An open question noted by \cite{tt12} is the estimation of the realized Laplace transform using noisy data. By the nonparametric estimation of the volatility path in the first stage and the bias-corrected Riemann summation of functional plug-ins in the second stage, this paper contributes a rate-optimal solution to the open question.
\subsection{Generalized method of moments (GMM)}
\cite{lx16} proposed the generalized method of integrated moments for financial high-frequency data. In estimating an option pricing model, one observes the process $Z_t = (t, X_t, r_t, d_t)$ where $X_t$ is the price of the underlying observed without any noise, $r_t$ is the short-term interest rate, $d_t$ is the dividend yield. One model of the arbitrage-free option price under the risk-neutral probability measure is
\begin{equation*}
\beta_t = f(Z_t,c_t;\theta^*)
\end{equation*}
where $f$ is deterministic, $\theta^*$ is the true model parameter. The observed option price is often modeled as
\begin{equation*}
Y_{i\Delta_n} = \beta_{i\Delta_n} + \epsilon_i
\end{equation*}
where $\epsilon_i$ is pricing error and $\mathbb{E}(\epsilon_i)=0$. Let $g(Z_t,c_t;\theta) = \mathbb{E}[Y_t - f(Z_t,c_t;\theta)]$, then we have the following integrated moment condition:
\begin{equation*}
G(\theta^*) = 0
\end{equation*}
where $G(\theta) = \int_0^t g(Z_s,c_s;\theta)\,\mathrm{d}s$. Utilizing noisy observations of $X$ at higher frequencies, $\widehat{S}(g)_t^n$ of this paper provides a means to compute a bias-corrected sample moment function of GMM.
\subsection{Linear regression}
In the practice of linear factor models and financial hedging, one faces the tasks of computing the factor loadings and the hedge ratios. These tasks can be formulated as the estimation of the coefficient $\beta$ in the time-series linear regression model
\begin{equation*}
Z_t^c = \beta^\mathrm{T} S_t^c + R_t
\end{equation*}
where
\begin{equation*}
\left\{\begin{array}{cl}
S_t\equiv& S_0 + \int_0^tb^S_u\,\mathrm{d} u + \int_0^t\sigma^S_u\,\mathrm{d} W^S_u + J^S_t\\
Z_t\equiv& Z_0 + \int_0^tb^Z_u\,\mathrm{d} u + \beta^\mathrm{T}\int_0^t\sigma^S_u\,\mathrm{d} W^S_u + \int_0^t\sigma^R_u\,\mathrm{d} W^R_u + J^Z_t
\end{array}\right.
\end{equation*}
$\langle W^S,W^R\rangle=0$, $S_t\in\mathbb{R}^{d-1}$, $Z_t\in\mathbb{R}$, and $S^c$, $Z^c$ are the continuous parts of the It\^o semimartingales.
Let $X = (S^\mathrm{T},Z)^\mathrm{T}$, we can write $X_t=X_0 + \int_0^tb_u\,\mathrm{d} u + \int_0^t\sigma_u\,\mathrm{d} W_u + J_t$ where $b=(b^{S,T}, b^Z)^\mathrm{T}$, $W=(W^{S,\mathrm{T}},W^R)^\mathrm{T}$, $J=(J^{S,\mathrm{T}},J^Z)^\mathrm{T}$ and
\begin{equation*}
\sigma = \left[
\begin{array}{cc}
\sigma^S & 0 \\
\beta^\mathrm{T}\sigma^S & \sigma^R
\end{array}\right]
\end{equation*}
so
\begin{equation*}
c = \sigma\sigma^\mathrm{T} =
\left[\begin{array}{cc}
\sigma^S\sigma^{S,\mathrm{T}} & \sigma^S\sigma^{S,\mathrm{T}}\beta \\
\beta^\mathrm{T}\sigma^S\sigma^{S,\mathrm{T}} & \beta^\mathrm{T}\sigma^S\sigma^{S,\mathrm{T}}\beta + (\sigma^R)^2
\end{array}\right]
\equiv
\left[\begin{array}{cc}
c^{SS} & c^{SZ} \\
c^{ZS} & c^{ZZ}
\end{array}\right]
\end{equation*}
hence by letting $g(c)=(c^{SS})^{-1}c^{SZ}$, we have $\beta=t^{-1}S(g)_t$. \cite{ltt17} proposed this method for the situation in which the process $X$ can be perfectly observed. When the observations contain noise, the methodology of this paper can extend the estimator of \cite{ltt17} to wider applicability.
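A minimal Python sketch of this plug-in, assuming the spot covariance estimates are already computed and omitting the bias-correction term $B(g)^n_i$ for brevity:
\begin{verbatim}
import numpy as np

def beta_hat(c_hats, d_S):
    # c_hats: spot covariance estimates, shape (N, d, d), ordered so
    # that the first d_S coordinates are S and the last one is Z;
    # evaluates g(c) = (c^{SS})^{-1} c^{SZ} and averages over blocks.
    g = [np.linalg.solve(c[:d_S, :d_S], c[:d_S, d_S]) for c in c_hats]
    return np.mean(g, axis=0)
\end{verbatim}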
\subsection{Principal component analysis (PCA)}
An interesting question about stochastic volatility is its spectral structure $c_sv_s = \lambda_sv_s$.
\cite{ax19} applied PCA to nonstationary financial data by conducting inference on the realized eigenvalue $\int_0^t\lambda_s\,\mathrm{d} s$, the realized eigenvector $\int_0^t v_s\,\mathrm{d} s$, and the realized principal component $\int_0^tv_{s-}\,\mathrm{d} X_s$. In the basic setting where $\lambda_s$ is a simple eigenvalue of $c_s$ and $v_s$ is the corresponding eigenvector, $g(c_s)=\lambda_s$ and $g(c_s) = v_s$ are three-times continuously differentiable; therefore, the inferential results on $S(g)$ are applicable. More recently, \cite{cmz18} extends the realized PCA to asynchronously observed high-dimensional noisy data,
while this paper extends the realized PCA methodology to be both noise-robust and rate-optimal.
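For illustration, the spectral functionals can be evaluated as in the following Python sketch; the sign normalization of the eigenvector is an implementation choice needed to make $g$ single-valued, and the eigenvalue must be simple for $g$ to be smooth:
\begin{verbatim}
import numpy as np

def g_eigen(c, r=1):
    # r-th largest eigenvalue of c and its eigenvector
    lam, Q = np.linalg.eigh(c)                  # ascending eigenvalues
    j = c.shape[0] - r
    v = Q[:, j]
    v = v * np.sign(v[np.argmax(np.abs(v))])    # fix the sign convention
    return lam[j], v
\end{verbatim}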
\subsection{Elements}
Before stating the asymptotic result, some elements appearing in the limit need to be defined. Associate the following quantities with the smoothing kernel $\varphi$ for $l,m=0,1$:
\begin{equation}\label{phi.vars}
\begin{array}{ll}
\phi_0(s)=\int_s^1\varphi(u)\varphi(u-s)\,\mathrm{d} u, & \phi_1(s)=\int_s^1\varphi'(u)\varphi'(u-s)\,\mathrm{d} u \\
\Phi_{lm}=\int_0^1\phi_l(s)\phi_m(s)\,\mathrm{d} s, & \Psi_{lm}=\int_0^1s\,\phi_l(s)\phi_m(s)\,\mathrm{d} s
\end{array}
\end{equation}
Define $\Sigma$ and $\Theta$ as $\mathbb{R}^{d\times d\times d\times d}$-valued functions, such that for $x,z\in\mathbb{R}^{d\times d}$, $j,k,l,m=1,\cdots,d$,
\begin{equation}\label{tensors}
\begin{array}{lcl}
\Sigma(x)^{jk,lm} &=& x^{jl}x^{km} + x^{jm}x^{kl}\\
\Theta(x,z)^{jk,lm} &=& x^{jl}z^{km} + x^{jm}z^{kl} + x^{km}z^{jl} + x^{kl}z^{jm}\\
\end{array}
\end{equation}
and $\Xi$ also as a tensor-valued function
\begin{equation}\label{def.Xi}
\Xi(x,z) = \frac{2\theta}{\phi_0(0)^2}\left[\Phi_{00}\Sigma(x) + \frac{\Phi_{01}}{\theta^2}\Theta(x,z) + \frac{\Phi_{11}}{\theta^4}\Sigma(z) \right]
\end{equation}
where $\theta$ is introduced in (\ref{tuning}).
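For a concrete instance, the following Python sketch evaluates the constants in (\ref{phi.vars}) and the tensor (\ref{def.Xi}) numerically for the triangular kernel $\varphi(s)=s\wedge(1-s)$, a common choice satisfying (\ref{phi.cond}); the identity $\Theta(x,z)=\Sigma(x+z)-\Sigma(x)-\Sigma(z)$ is used to assemble $\Theta$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

phi  = lambda u: min(u, 1.0 - u)               # triangular kernel
dphi = lambda u: 1.0 if u < 0.5 else -1.0      # its a.e. derivative

def phi0(s): return quad(lambda u: phi(u) * phi(u - s), s, 1)[0]
def phi1(s): return quad(lambda u: dphi(u) * dphi(u - s), s, 1)[0]

Phi = {(l, m): quad(lambda s: [phi0, phi1][l](s) * [phi0, phi1][m](s),
                    0, 1)[0] for l in (0, 1) for m in (0, 1)}

def Sigma(x):    # Sigma(x)^{jk,lm} = x^{jl} x^{km} + x^{jm} x^{kl}
    return np.einsum('jl,km->jklm', x, x) + np.einsum('jm,kl->jklm', x, x)

def Xi(c, gamma, theta):
    Theta = Sigma(c + gamma) - Sigma(c) - Sigma(gamma)   # polarization
    out = (Phi[(0, 0)] * Sigma(c) + Phi[(0, 1)] / theta ** 2 * Theta
           + Phi[(1, 1)] / theta ** 4 * Sigma(gamma))
    return 2 * theta / phi0(0.0) ** 2 * out
\end{verbatim}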
Now we are ready to describe the limit process.
\begin{defn}\label{def.Z(g)}
Given $g$ satisfying (\ref{g.cond}) or (\ref{g.cond1}), $Z(g)$ is a process defined on an extension of the probability space $(\Omega,\mathcal{F},(\mathcal{F}_t),\mathbb{P})$ specified in (\ref{prob.sp}),
such that conditioning on $\mathcal{F}$, $Z(g)$ is a mean-0 continuous It\^o semimartingale with conditional variance
\begin{equation*}
\widetilde{E}[Z(g)Z(g)^\mathrm{T}|\mathcal{F}]=V(g)
\end{equation*}
where $\widetilde{E}$ is the conditional expectation operator on the extended probability space and
\begin{equation}\label{AVAR}
V(g)_t = \int_0^t \sum_{j,k,l,m=1}^{d}\partial_{jk}g(c_s)\,\partial_{lm}g(c_s)^\mathrm{T}\,\Xi(c_s,\gamma_s)^{jk,lm}\,\mathrm{d} s
\end{equation}
with $\partial_{jk}g(c)$ being the first-order partial derivative of $g$ with respect to $c^{jk}$, $\gamma$ being the variance process of noise defined in (\ref{def.r}).
\end{defn}
\subsection{The formal results}
\begin{thm}\label{CLT}
Assume assumptions \ref{A-v}, \ref{A-r}. Given $g$ satisfying (\ref{g.cond}) and tuning parameters $l_n$, $k_n$, $\nu_n$ controlled according to (\ref{tuning}), we have the following stable convergence in law of the discretized process to a conditional continuous It\^o semimartingale on compact subsets of $\mathbb{R}^+$:
\begin{equation}\label{clt1}
\Delta_n^{-1/4}\left[\widehat{S}(g)^n-S(g)\right]\overset{\mathcal{L}-s(f)}{\longrightarrow}Z(g)
\end{equation}
where $S(g)$ is defined in (\ref{def.S(g)}), $\widehat{S}(g)^n$ is from definition \ref{def.S(g)hat}, $Z(g)$ is identified in definition \ref{def.Z(g)}.
\end{thm}
Theorem \ref{CLT} is valid over the functional space (\ref{g.cond}), which is as general as the current literature allows. If applications require functionals whose derivatives satisfy the polynomial growth condition (\ref{g.cond1}), we can put fewer restrictions on the tuning parameters.
\begin{thm}\label{CLT.restricted}
Assume assumptions \ref{A-v}, \ref{A-r}. Replace the functional space (\ref{g.cond}) with (\ref{g.cond1}), replace the tuning conditions (\ref{tuning}) on $k_n$, $\nu_n$ with (\ref{tuning1}), then (\ref{clt1}) still holds true.
\end{thm}
However, theorem \ref{CLT.restricted} rules out operations that involve matrix inversion, so it is not applicable to, for instance, inference of linear regression models. In the rest of this paper, we focus on the results over the general functional space (\ref{g.cond}).
The asymptotic result is stated with a probabilistic flavor, which, by appendix \ref{apdx:derivation}, is necessary to express the strongest convergence\footnote{It is functional stable convergence (or stable convergence of processes) in law.}. There is an alternative formulation which is more relevant for statistical applications:
\begin{equation}\label{clt2}
n^{1/4}\left[\widehat{S}(g)^n_t-S(g)_t\right]\overset{\mathcal{L}-s}{\longrightarrow}\mathcal{MN}\big(0,\sqrt{t}V(g)_t\big)
\end{equation}
this holds under the same conditions for finite $t$.
\subsection{Confidence intervals}
The asymptotic variance in (\ref{clt2}) can be estimated by plugging in spot estimates of volatility (\ref{def.chat}) and noise covariance matrix (\ref{def.rhat}):
\begin{equation}\label{def.V(g)hat}
\widehat{V}(g)^n_t \equiv k_n\Delta_n\sum_{i=0}^{N^n_t-1}\sum_{j,k,l,m=1}^{d}\partial_{jk}g(\widehat{c}^n_{ik_n})\,\partial_{lm}g(\widehat{c}^n_{ik_n})^\mathrm{T}\,\Xi(\widehat{c}^n_{ik_n},\widehat{\gamma}^n_{ik_n})^{jk,lm}
\end{equation}
\begin{prop}\label{AVAR.est}
$\widehat{V}(g)^n_t$ is consistent under (\ref{tuning}) and assumptions \ref{A-v}, \ref{A-r}. Specifically, for all finite $t$,
\begin{equation*}
\big\|\widehat{V}(g)^n_t - V(g)_t\big\| = O_p(\Delta_n^{\kappa-1/2})
\end{equation*}
where $\kappa$ is specified in (\ref{tuning}).
\end{prop}
\begin{proof}
The asymptotic variance (\ref{AVAR}) is a smooth functional of the spot volatility and the instantaneous noise covariance, so the consistency of (\ref{def.V(g)hat}) follows from the consistency of the spot volatility estimator (\ref{def.chat}) and the noise covariance estimator (\ref{def.rhat}). According to lemma \ref{est.beta} and (\ref{est.chi}), the error rate of (\ref{def.V(g)hat}) is determined by the estimation error of the spot volatility. Therefore, the error rate of (\ref{def.V(g)hat}) is the same as that of the volatility functional estimator without bias correction, which is $(k_n\Delta_n^{1/2})^{-1}$; the proposition then follows from (\ref{tuning}).
\end{proof}
Based on theorem \ref{CLT}, proposition \ref{AVAR.est} and the property of stable convergence, we have the following feasible central limit theorem:
\begin{corol}
Under (\ref{tuning}) and assumptions \ref{A-v}, \ref{A-r}, we have
\begin{equation}\label{clt3}
\big[\Delta_n^{1/2}\,\widehat{V}(g)^n_t\big]^{-1/2}\left[\widehat{S}(g)^n_t-S(g)_t\right]\overset{\mathcal{L}}{\longrightarrow}\mathcal{N}\big(0,\mathbb{I}\big)
\end{equation}
in restriction to the event $\{\omega\in\Omega, \widehat{V}(g)^n_t \text{ is positive definite}\}$, where $\Omega$ is defined in (\ref{prob.sp}).
\end{corol}
\subsection{Semi-efficiency}
The efficiency bound of volatility estimation is studied by \cite{r11}. Nonetheless, pre-averaging has not attained this bound. According to \cite{jm15}, by taking the moving average adaptively in the time domain, the asymptotic variance of the pre-averaging method can be brought within 7\% of the efficiency bound. Using adaptive pre-averaging in volatility functional estimation is beyond the scope of this paper and is currently under investigation.
An efficient alternative is the spectral approach \cite{bhmr14, ab15}. The multi-scale approach \cite{z06}, realized kernels \cite{b08}, and quasi-likelihood \cite{x10} are equally capable of handling noise rate-optimally. In the univariate case, realized kernels and quasi-likelihood can be improved to be nearly efficient, cf. \cite{cp18}. Pre-averaging is adopted in this paper to simultaneously handle price jumps and microstructure noise.
\subsection{Finite-sample consideration}
First, we consider effective jump truncations. It is worthwhile to take account of the dimension $d$ and volatility levels in jump truncation. One possibility is to use the truncation indicator $\prod_{r=1}^d\1{|\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^{r,n}_i|\le\alpha_r\Delta_n^\rho}$ where $\alpha_r$ is related to the volatility of the $r$-th component.
Next, in finite sample, the spot volatility estimator (\ref{def.chat}) might not be positive semidefinite due to the noise-correction term $\widehat{Y}^n_i$. \cite{ckp10} suggests increasing $l_n$ to attenuate the noise in $\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^n_i$ and dispensing with $\widehat{Y}^n_i$:
\begin{equation*}
\widetilde{c}^n_i \equiv \frac{1}{(k_n-l_n)\Delta_n}\sum_{h=1}^{k_n-l_n+1}\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^n_{i+h}\cdot\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^{n,\mathrm{T}}_{i+h}\1{\|\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^n_{i+h}\|\le\nu_n}
\end{equation*}
and let $l_n\asymp\theta\Delta_n^{-1/2-\delta}$ where $\delta\in(.1,.5)$.
According to \cite{c19b}, if one plugs in $\widetilde{c}^n_i$ with the tuning parameters satisfying
\begin{equation*}
\left\{\begin{array}{rcll}
k_n &\asymp& \varrho\Delta_n^{-\kappa} & \kappa\in\left(\big(\frac{2}{3}+\frac{2\delta}{3}\big)\vee\big(\frac{2+\nu}{4}+\frac{(2-\nu)\delta}{2}\big),\frac{3}{4}+\frac{\delta}{2}\right)\\
\nu_n &=& \alpha\Delta_n^{\rho} &\rho\in \left[\frac{1}{4}+\frac{\delta}{2}+\frac{1-\kappa}{2-\nu},\frac{1}{2}\right)
\end{array}\right.
\end{equation*}
another central limit theorem holds after a different bias correction. However, doing so sacrifices the convergence rate, which drops from $n^{1/4}$ down to $n^{1/4-\delta/2}$ (strictly less than $n^{1/5}$). Moreover, the choice of tuning parameters becomes less robust compared to (\ref{tuning}).
To preserve the optimal convergence rate in the event that the spot volatility estimator is not positive semidefinite, it is advisable to project $\widehat{c}^n_i$ onto the convex cone $\mathcal{S}^+_d$ with respect to the Frobenius norm. Suppose $\widehat{c}^n_i = Q\Lambda Q^\mathrm{T}$ is the eigenvalue factorization; then the positive semidefinite projection is $\widehat{c}'^n_i=Q\Lambda_+Q^\mathrm{T}$ where $\Lambda_+^{jj} = \Lambda^{jj}\vee 0$. By the convex geometry of $\mathcal{S}^+_d$, we have $\|\widehat{c}'^n_i-c^n_i\|\le \|\widehat{c}^n_i-c^n_i\|$, hence the convergence rate is retained.
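A minimal Python sketch of this projection:
\begin{verbatim}
import numpy as np

def psd_project(c_hat):
    # Frobenius projection onto the cone S_d^+: clip the negative
    # eigenvalues at zero and reassemble.
    lam, Q = np.linalg.eigh(c_hat)
    return (Q * np.maximum(lam, 0.0)) @ Q.T
\end{verbatim}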
\subsection{Building blocks}
For the local moving averages, we choose a smoothing kernel $\varphi$ such that
\begin{equation}\label{phi.cond}
\begin{array}{l}
\text{supp}(\varphi)\subset(0,1),\, \int_0^1\varphi^2(s)\,\mathrm{d} s>0 \\
\varphi\in\mathcal{C} \text{ is piecewise } \mathcal{C}^1;\, \varphi' \text{ is piecewise Lipschitz}
\end{array}
\end{equation}
Choose an integer $l_n$ as the number of observations in each smoothing window, define $\varphi^n_h=\varphi(h/l_n)$ and $\psi_n=\sum_{h=1}^{l_n-1}(\varphi^n_h)^2$. Associate the following quantities with a generic process $U$:
\begin{equation}\label{def.Ubar.Uhat}
\begin{array}{lcl}
\accentset{{\cc@style\underline{\mskip10mu}}}{U}^n_i &=& (\psi_n)^{-1/2}\sum_{h=1}^{l_n-1}\varphi^n_h\Delta^n_{i+h-1}U \\
\widehat{U}^n_i &=& (2\psi_n)^{-1}\sum_{h=0}^{l_n-1}(\varphi^n_{h+1}-\varphi^n_h)^2\Delta^n_{i+h}U\cdot\Delta^n_{i+h}U^\mathrm{T}
\end{array}
\end{equation}
$\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^n_i$ is a local moving average of the noisy data $Y^n_i$'s and serves as a proxy for $\Delta^n_iX$, while $\widehat{Y}^n_i$ serves as a noise correction to $\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^n_i$. Based on these two ingredients, choose $k_n>l_n$ and define the spot volatility estimator as
\begin{equation}\label{def.chat}
\widehat{c}^n_{i} \equiv \frac{1}{(k_n-l_n)\Delta_n}\sum_{h=1}^{k_n-l_n+1}\Big(\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^n_{i+h}\cdot\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^{n,\mathrm{T}}_{i+h}\1{\|\accentset{{\cc@style\underline{\mskip10mu}}}{Y}^n_{i+h}\|\le\nu_n} - \widehat{Y}^n_{i+h}\Big)
\end{equation}
where $\nu_n\asymp\Delta_n^\rho$ is a truncation threshold for jumps. The choice of $\rho$ is stated in (\ref{tuning}). A spot noise variance estimator is also needed:
\begin{equation}\label{def.rhat}
\widehat{\gamma}^n_i \equiv \frac{1}{2m_n}\sum_{h=1}^{m_n}\Delta^n_{i+h}Y\cdot\Delta^n_{i+h}Y^\mathrm{T}
\end{equation}
where $m_n=\lfloor\theta'\Delta_n^{-1/2}\rfloor$ with $\theta'$ positive and finite.
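To fix ideas, here is a univariate ($d=1$) Python sketch of (\ref{def.Ubar.Uhat}) and (\ref{def.chat}) with the triangular kernel $\varphi(s)=s\wedge(1-s)$; the indexing is simplified relative to the displays, so this should be read as a sketch rather than a reference implementation:
\begin{verbatim}
import numpy as np

def spot_vol(Y, dt, i, ln, kn, nu):
    # Y: noisy observations; dt = Delta_n; i: left endpoint of the block.
    h = np.arange(1, ln) / ln
    w = np.minimum(h, 1 - h)                    # phi(h / l_n)
    psi = np.sum(w ** 2)
    dw2 = np.diff(np.concatenate(([0.0], w, [0.0]))) ** 2
    dY = np.diff(Y)
    acc = 0.0
    for j in range(i + 1, i + kn - ln + 2):
        ybar = np.dot(w, dY[j:j + ln - 1]) / np.sqrt(psi)  # moving average
        yhat = np.dot(dw2, dY[j:j + ln] ** 2) / (2 * psi)  # noise correction
        acc += ybar ** 2 * (abs(ybar) <= nu) - yhat
    return acc / ((kn - ln) * dt)
\end{verbatim}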
\subsection{The estimator}
\begin{defn}\label{def.S(g)hat}
Let $N^n_t=\lfloor t/(k_n\Delta_n)\rfloor$; the estimator of $(\ref{def.S(g)})$ is defined as
\begin{equation*}
\widehat{S}(g)^n_t \equiv k_n\Delta_n\sum_{i=0}^{N^n_t-1}\left[g(\widehat{c}^n_{ik_n}) - B(g)^n_{ik_n}\right] \times a^n_t
\end{equation*}
where $B(g)^n_i$ is a de-biasing term of the form
\begin{equation*}
B(g)^n_i = \frac{1}{2k_n\Delta_n^{1/2}}\sum^d_{j,k,l,m=1}\partial^2_{jk,lm}g(\widehat{c}^n_i)\times\Xi\big(\widehat{c}^n_i,\widehat{\gamma}^n_i\big)^{jk,lm}
\end{equation*}
$\widehat{c}^n_i$, $\widehat{\gamma}^n_i$ are defined in (\ref{def.chat}), (\ref{def.rhat}), $\Xi$ is defined in (\ref{def.Xi}), and
$a^n_t = t/(N^n_tk_n\Delta_n)$ is a finite-sample adjustment.\footnote{Overlapping intervals are used to compute $\widehat{c}^n_i$'s, non-overlapping intervals are used to compute $\widehat{S}(g)^n_t$. The local moving averages computed over overlapping intervals in (\ref{def.chat}) are necessary to achieve the optimal convergence rate. By contrast, overlapping intervals in $\widehat{S}(g)'^n_t\equiv\Delta_n\sum_{i=0}^{\lfloor t/\Delta_n\rfloor-1}[g(\widehat{c}^n_i) - B(g)^n_i]$ do not improve the convergence rate nor efficiency, though lead to robustness in finite sample. In fact, the overlapping-interval-based estimator has the same asymptotic result as that of $\widehat{S}(g)^n_t$ in section \ref{sec:asymp}.}
\end{defn}
Besides $\varphi$, there are three tuning parameters in this estimator:
\begin{table}[H]
\centering
\begin{tabular}{c|c|r|l}
$a\Delta_n^b$ & scale $a$ & rate $b$ & description \\
\hline
$l_n$ & $\theta$ & $-1/2$ & length of overlapping window for local moving averages \\
$k_n$ & $\varrho$ & $-\kappa$ & length of disjoint window for spot volatility estimation\\
$\nu_n$ & $\alpha$ & $\rho$ & truncation level for jumps
\end{tabular}
\end{table}
With suitable choices of $l_n,\,k_n,\,\nu_n$ in (\ref{def.chat}), this estimator is applicable to any function $g:\mathcal{S}^+_d\mapsto\mathbb{R}^r$ that satisfies
\begin{equation}\label{g.cond}
g\in\mathcal{C}^3(\mathcal{S})
\end{equation}
where $\mathcal{S}\supset\cup_m\mathcal{S}^\epsilon_m$ for some $\epsilon>0$, $\mathcal{S}^\epsilon_m=\big\{A\in\mathcal{S}^+_d: \inf_{M\in\mathcal{S}_m}\|A-M\|\le\epsilon\big\}$ and $\mathcal{S}_m$ is identified in assumption \ref{A-v}.
\subsection{Choosing tuning parameters}
A proper combination of the tuning parameters is crucial for consistency, the central limit theorem, and the optimal convergence rate. For these objectives, one needs
\begin{equation}\label{tuning}
\left\{\begin{array}{rcll}
l_n &\asymp& \theta\Delta_n^{-1/2} &\\% + o(\Delta_n^{1/4})\\
k_n &\asymp& \varrho\Delta_n^{-\kappa} &\text{ where } \kappa\in\left(\frac{2}{3}\vee\frac{2+\nu}{4},\frac{3}{4}\right)\\
\nu_n &=& \alpha\Delta_n^{\rho} &\text{ where }\rho\in \left[\frac{1}{4}+\frac{1-\kappa}{2-\nu},\frac{1}{2}\right)
\end{array}\right.
\end{equation}
where $\theta,\varrho,\alpha$ are positive and finite, and $\nu\in[0,1)$ is introduced in assumption \ref{A-v}, which dictates the jump intensity.
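For illustration, a small Python sketch of one admissible choice (the scale constants and the particular $\kappa$, $\rho$ are illustrative, not prescriptions):
\begin{verbatim}
def tuning(dt, theta=0.5, varrho=1.0, alpha=4.0, kappa=0.72, nu=0.0):
    # kappa must lie in (max(2/3, (2 + nu)/4), 3/4) and rho in
    # [1/4 + (1 - kappa)/(2 - nu), 1/2)
    rho = 0.25 + (1 - kappa) / (2 - nu) + 0.01
    ln = int(theta * dt ** -0.5)          # smoothing window
    kn = int(varrho * dt ** -kappa)       # spot-estimation window
    nun = alpha * dt ** rho               # jump-truncation level
    return ln, kn, nun
\end{verbatim}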
\input{Intuition}
\begin{remk}
If the reader is interested in estimating $(\ref{def.S(g)})$ with $g$ satisfying
\begin{equation}\label{g.cond1}
\|\partial_h g(x)\| \le K_h(1+\|x\|^{r-h}),\,\, h=0,1,2,3,\, r\ge3
\end{equation}
the requirements on $k_n$ and $\nu_n$ can be loosened and become
\begin{equation}\label{tuning1}
\begin{array}{rcl}
k_n\Delta_n^\kappa &\asymp& \varrho,\, \text{ where } \kappa\in\left(\frac{2}{3},\frac{3}{4}\right)\\
\nu_n &=& \alpha\Delta_n^{\rho},\, \text{ where } \rho\in \left[\frac{1}{4}+\frac{1}{4(2-\nu)},\frac{1}{2}\right)
\end{array}
\end{equation}
For wider applicability, we choose to accommodate the functional space (\ref{g.cond}) and retain the requirement (\ref{tuning}).
\end{remk}
\subsubsection{variables}
\input{Pf-spotc2-vars}
\subsubsection{bounds on $\|\xi^{n,r}_i\|$}
By assumption \ref{A-v}, (\ref{SA-v}), (\ref{classic})
\begin{equation}\label{est.xi(0)n}
\left\{\begin{array}{lcl}
\left\| E\left(\xi^{n,0}_{i}|\mathcal{F}^{(0),n}_{i}\right) \right\| &\le& Kk_n\Delta_n\\
E\left( \|\xi^{n,0}_{i}\|^q|\mathcal{F}^{(0),n}_{i}\right) &\le& K_q(k_n\Delta_n)^{(q/2)\wedge1},\, q\ge0
\end{array}\right.
\end{equation}
Combined with (\ref{tuning}),
\begin{equation}\label{est.xi(1)n}
\left\{\begin{array}{lcl}
\left\| E\left(\xi^{n,1}_i|\mathcal{F}^{(0),n}_i\right) \right\| &\le& K\Delta_n^{1/2}\\
E\left(\|\xi^{n,1}_i\|^q|\mathcal{F}^{(0),n}_i\right) &\le& K_q\Delta_n^{[(q/2)\wedge1]/2},\,q\in\mathbb{N}^+
\end{array}\right.
\end{equation}
By assumption \ref{A-r},
\begin{equation}\label{est.xi(2)n}
\begin{array}{l}
\left\| E^n_i\big(\xi^{n,2}_i\big) \right\| \le K\Delta_n^{-1}\\
E^n_i\big(\|\xi^{n,2}_i\|^q\big) \le
\left\{\begin{array}{l}
K\,k_n^{-1/2},\, q=1;\\
K_q\left(k_n^{-q+1} + k_n^{-q}\Delta_n^{-q/2+1}\right),\, q\in\mathbb{N}^+\setminus\{1\}.
\end{array}\right.
\end{array}
\end{equation}
\subsubsection{Estimates of $\zeta(X,p)^n_i$ \& $\zeta(X,p)'^n_i$}
\input{Pf-spotc2-zetaX}
\subsubsection{Estimates of $\zeta(p)^n_i$}
\input{Pf-spotc2-zetaY}
\subsubsection{Estimates of $\beta^n_i$}
\input{Pf-spotc2-beta}
\subsection{Model}
This paper assumes the data is generated from a process $Y$, and for any $t>0$ there is a probability transition kernel $Q_t$ linking another process $X$ to $Y$ where $X$ is a solution to the stochastic differential equation
\begin{equation}\label{def.X}
X_t = X_0 + \int_0^tb_s\,\mathrm{d} s + \int_0^t\sigma_s\,\mathrm{d} W_s + J_t
\end{equation}
$b_s\in\mathbb{R}^d$, $\sigma_s\in\mathbb{R}^{d\times d'}$ with $d\le d'$, the volatility $c_s=\sigma_s\sigma_s^\mathrm{T}$ is positive semidefinite, $W$ is a $d'$-dimensional standard Brownian motion, and $J$ is a purely discontinuous process described by (\ref{J}).
In this model, the noisy observations are samples from $Y$, and the underlying process before noise contamination is assumed to be an It\^o semimartingale.
\begin{center}
\usetikzlibrary{positioning}
\usetikzlibrary{shapes,snakes}
\begin{tikzpicture}[xscale=12,yscale=6,>=stealth]
\tikzstyle{e}=[rectangle,minimum size=5mm,draw,thick]
\tikzstyle{v}=[ellipse, minimum size=5mm,draw,thick]
\node[e] (X) [draw=red!60] {It\^o Semimartingale $X$};
\node[e] (Y) [draw=blue!70,right=of X] {Noisy Process $Y$};
\node[v] (D) [draw=black!60,right=of Y] {Noisy Data};
\draw[thick,->,snake=snake] (X) to node[anchor=north]{$(Q_t)$} (Y);
\draw[thick,->] (Y) to node[anchor=north]{sample} (D);
\end{tikzpicture}
\end{center}
An example of this model is
\begin{equation}\label{def.Y}
Y_t = f(X_t, \varepsilon_t)
\end{equation}
where $\varepsilon$ is a white noise process and $f:\mathbb{R}^d\times\mathbb{R}^d\mapsto\mathbb{R}^d$ is such that the conditional mean of $Y_t$ is $X_t$. Generally, the noise model induced by $(Q_t)$ incorporates additive white noise, rounding error, and combinations thereof as special cases. Besides the probabilistic structure, the inferential framework also requires additional assumptions:
\begin{itemize}
\item the drift $b$ has a smooth trajectory in certain sense (see appendix \ref{sec:assump});
\item the volatility $c$ is a locally spatially restricted It\^o semimartingale\footnote{It is important, however, to accommodate long-memory volatility models; volatility functional inference in the long-memory and noisy setting is an open question under investigation.} such that both $c$ and $c^{-1}$ are locally bounded;
\item $J$ may exhibit infinite activities but has finite variation, i.e., finite-length trajectory;
\item the noise variance is an It\^o semimartingale; conditional on all the information on $X$, there is no autocorrelation in the noise.\footnote{When the observations are mixed with colored noise, the statistical properties of this methodology are unknown. Since it is empirically important, the author hopes this question can be illuminated by future research.}
\end{itemize}
These assumptions are necessary for the CLT and for applicability over functions of statistical interest. For readers interested in the precise description of the model specification and assumptions, please refer to appendix \ref{sec:assump}.
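For intuition, the following sketch simulates one path of a simple one-dimensional special case consistent with these assumptions: a square-root (CIR-type) volatility process, no jumps, and additive Gaussian noise, i.e. $Y_t = f(X_t,\varepsilon_t) = X_t + \varepsilon_t$ in (\ref{def.Y}). It is an illustrative data-generating process only, not the general model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, dt = 23400, 1.0 / 23400            # one day of one-second data
kap, cbar, xi, mu = 5.0, 0.04, 0.3, 0.02

c = np.empty(n + 1); c[0] = cbar      # spot variance c_t
X = np.empty(n + 1); X[0] = np.log(100.0)
for i in range(n):
    dW, dB = rng.normal(0.0, np.sqrt(dt), 2)
    c[i + 1] = max(c[i] + kap * (cbar - c[i]) * dt
                   + xi * np.sqrt(c[i]) * dB, 1e-8)
    X[i + 1] = X[i] + mu * dt + np.sqrt(c[i]) * dW

eps = rng.normal(0.0, 5e-4, n + 1)    # conditionally centred noise
Y = X + eps                           # observed noisy samples
\end{verbatim}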
\subsection{Observations}
This work treats regularly sampled observations and considers in-fill asymptotics\footnote{Also known as fixed-domain asymptotics, high-frequency asymptotics, or small-interval asymptotics.}. Specifically, the samples are observed every $\Delta_n$ time units on a finite time interval $[0,t]$ where $n=\lfloor t/\Delta_n\rfloor$ is the sample size. As $n\to\infty$, $\Delta_n\to0$ while $t$ is fixed.
Throughout this paper, $U^n_i$ is written for $U_{i\Delta_n}$ where $U$ can be a process or filtration, for example, $c^n_i$ denotes the value of volatility $c$ at time $i\Delta_n$; for any process $U$, $\Delta^n_iU$ represents the increment $U^n_i-U^n_{i-1}$.
\subsection{Notations} For $r\in\mathbb{N}^+$, $\mathcal{C}^r(\mathcal{S})$ denotes the space of $r$-times continuously differentiable functions on the domain $\mathcal{S}$; $\mathcal{S}^+_d$ is the convex cone of $d\times d$ positive semidefinite matrices; $\|\cdot\|$ denotes a norm on vectors, matrices or tensors; given $a\in\mathbb{R}$, $\lfloor a\rfloor$ denotes the largest integer no more than $a$; $a\vee b=\max\{a,b\}$, $a\wedge b=\min\{a,b\}$; $a_n \asymp b_n$ means both $a_n/b_n$ and $b_n/a_n$ are bounded for large $n$; $\mathbf{A}^\mathrm{T}$ is the transpose of the vector or matrix; for a multidimensional array, the entry index is written in the superscript, e.g., $X_t=(X^1_t,\cdots,X^d_t)^\mathrm{T}$, $c^{jk}$ denotes the $(j,k)$ entry in the matrix $c$; $\partial_{jk}g$ and $\partial^2_{jk,lm}g$ denote the gradient and Hessian with respect to the $(j,k)$-th and $(l,m)$-th entries; $\overset{\mathcal{L}-s(f)}{\longrightarrow}$ (resp. $\overset{\mathcal{L}-s}{\longrightarrow}$) denotes stable convergence of processes (resp. variables) in law\footnote{See section 2.2.1, 2.2.2 in \cite{jp12} for stable convergence. The sampling variation of the estimator depends on the realization of the process $c$, hence we need a mode of convergence in which the estimator converges jointly with other variables, so that one can consistently estimate the asymptotic variance to compute confidence intervals.}; $\overset{u.c.p.}{\longrightarrow}$ denotes uniform convergence in probability on compact sets; $\mathcal{MN}(\cdot,\cdot)$ is a mixed Gaussian distribution.
\section{Introduction}
During the first stages of star formation, highly collimated jets from
new born stars influence the physical structure of the hosting
cloud by sweeping up material, compressing and accelerating the
surrounding environment. The propagation of high velocity
outflows generates
shock fronts triggering endothermic chemical reactions and ice grain
mantle sublimation or sputtering. At a distance of 250 pc (Looney et
al. 2007), the chemically rich L1157 bipolar outflow (Bachiller \& P\'erez Guti\'errez 1997,
hereafter BP97, Bachiller et al. 2001) is an ideal laboratory to observe
the effects of such shocks on the gas chemistry.
L1157 is
driven by a low-mass ($\sim$ 4 $L_{\odot}$) Class 0 protostar L1157-mm
and it is associated with several blue-shifted (B0, B1, B2) and
red-shifted (R0, R1, R2) shocks at different ages (see Fig.~\ref{maps}--Top panel), and seen in
both CO (Gueth et al. 1996, 1998) and IR H$_2$ (e.g. Neufeld
et al. 1994, Nisini et al. 2010a). These shocks (see Fig.~\ref{maps}--Bottom panel), when mapped with
interferometers, reveal a clumpy bow structure (e.g. Tafalla \& Bachiller
1995; Benedettini et al. 2007; Codella et al. 2009) at the apex of different molecular
cavities, corresponding to different mass loss episodes (Gueth et al. 1996).
Both interferometer and single-dish surveys confirm that the L1157 outflow
is well traced by molecules thought to be released off from the dust
mantles such as H$_2$CO, CH$_3$OH, H$_2$O, and NH$_3$
(e.g. Codella et al. 2010, Lefloch et al. 2010, Vasta et al. 2012) as well as by the
refractory grain cores such as SiO (e.g. Nisini et al. 2007; Gusdorf et al. 2008).
The abundances of these neutral
molecules are enhanced, and the emission
shows broad wings (up to 20--30 km s$^{-1}$). On the
contrary, diazenylium (N$_2$H$^{+}$), usually used as tracer of cold prestellar cores
(e.g. Caselli et al. 2002), shows a completely different behaviour.
Single-dish (IRAM 30-m) and interferometric (IRAM PdB, SMA, CARMA)
observations indicate
that N$_2$H$^{+}$ traces only the central condensation L1157-mm
through narrow (0.4--1.0 km s$^{-1}$) emission and it
has not been observed, to date, towards the outflow component (Bachiller et
al. 2001, Chiang et al. 2010, Tobin et al. 2011, 2012, 2013, Yamaguchi et al. 2012).
The interferometric maps
show that the narrow N$_2$H$^{+}$ line traces the protostellar envelope
elongated along a direction perpendicular to the outflow axis
(i.e. along a hypothetical disk).
However, by analysing their IRAM PdB data, Tobin et al. (2011) concluded that
although the overall N$_2$H$^{+}$ velocity structure is unaffected by the
outflow, the morphology of the
slightly blue-shifted emission ($\lvert$$v$--$v_{sys}$$\rvert$ $\leq$ 0.8 km s$^{-1}$)
outlines the outflow cavity walls in the
inner 20$\arcsec$--30$\arcsec$ protostellar environment.
Tobin et al. (2011) proposed that such emission is due either to outflow
entrainment or to a hypothetical shock near the driving protostar.
The same suggestion is found in the ATCA N$_2$H$^{+}$(1--0) image of the
protostellar core CG30 by Chen et al. (2008).
On the other hand, J$\o$rgensen et al. (2004) investigated with BIMA the
protostellar binary NGC1333-IRAS2A-B at 3mm showing that the spatial
distribution of N$_2$H$^{+}$ peaks towards the nearby starless core IRAS2C,
and is missing in the outflows.
Therefore, it is still under debate what
role, if any, N$_2$H$^{+}$ is playing in a shocked gas scenario:
is the N$_2$H$^{+}$ emission observed by Tobin et al. (2011),
which marks the cavity opened up by the outflow, due simply
to an enhanced gas column density, or is it really associated with a shock?
Such a question is important,
given that N$_2$H$^{+}$ is considered a standard
molecular tracer of cold
and quiescent prestellar environments (e.g. Tafalla et al. 2006).
In order to uniquely answer these questions
it is essential to study a region $not$ associated with a protostar,
such as the young (2000 years; Gueth et al. 1996) and bright
bow-shock L1157-B1, located at $\sim$ 69$\arcsec$ ($\sim$ 0.1 pc, see Fig.~\ref{maps}) from the protostar.
As part of the Herschel\footnote{Herschel is an ESA space observatory
with science instruments provided by European-led principal
Investigator consortia and with important participation from NASA.}
Key Program
CHESS\footnote{http://www-laog.obs.ujf-grenoble.fr/heberges/chess/}
(Chemical Herschel Surveys of Star forming regions; Ceccarelli et
al. 2010), L1157-B1 is currently being investigated with a
spectral survey in the $\sim$80$\--$350 GHz interval using the IRAM
30-m telescope (Lefloch et al. in preparation), and in the $\sim$500$\--$2000 GHz range using the
Herschel HIFI instrument (de Graauw et al. 2010).
We present here the first unambiguous detection of N$_2$H$^{+}$ emission
towards a protostellar shock: the observed broad emission has been modeled
using a simple pseudo-time dependent chemical model,
showing how N$_2$H$^{+}$ can be used to shed light on the
chemical history of the pre-shock gas.
\section{Observations and results}
The N$_2$H$^{+}$(1$-$0) line at 93173.76 MHz\footnote{It refers to the
brightest hyperfine component: $F_{\rm 1}$,$F$ = 2,3--1,2.} was
observed towards L1157-B1 with the IRAM 30-m telescope at Pico Veleta
(Spain).
The pointed coordinates were $\alpha_{\rm
J2000}$ = 20$^{\rm h}$ 39$^{\rm m}$ 10$\fs$2, $\delta_{\rm J2000}$ =
+68$\degr$ 01$\arcmin$ 10$\farcs$5,
i.e. at $\Delta\alpha$ = +25$\farcs$6 and $\Delta\delta$ =
--63$\farcs$5 from the driving protostar. The IRAM
survey was performed during several runs in 2011 and 2012, using the
broad-band EMIR receivers and the FTS spectrometer in its 200 kHz
resolution mode, corresponding to a velocity resolution of 0.6 km
s$^{-1}$ at 93.2 GHz. The main-beam efficiency ($\eta_{\rm mb}$) was
0.75, while the HPBW is 26$\arcsec$.
All the spectra are reported here in units of main beam temperature
(T$_{\rm mb}$).
Figure~\ref{n2h+} shows the N$_2$H$^{+}$(1$\--$0) spectrum:
thanks to the high sensitivity of the IRAM-EMIR receiver
(r.m.s. = 2 mK after smoothing the spectrum to 1.3 km s$^{-1}$),
we are able to detect the three main groups of
hyperfine components of the $J$ = 1$-$0 transition.
The integrated intensity is 327$\pm$14 mK km s$^{-1}$.
The N$_2$H$^{+}$ emission in L1157-B1 was hidden in the
noise of the BP97 spectrum, which has a 1$\sigma$ rms of 20 mK, definitely larger
than that of the present dataset (2 mK).
N$_2$H$^{+}$ is a
linear molecular ion in a stable closed-shell $^1$$\Sigma$
configuration. The dominant hyperfine interactions are those between
the molecular electric field gradient and the electric quadrupole
moments of the two nitrogen nuclei (e.g. Caselli et al. 1995),
producing a splitting of the J = 1--0 line into 15 hyperfine
components, characterised by the corresponding quantum numbers $F_{\rm
1}$ and $F$ (e.g. Pagani et al. 2009).
To fit the N$_2$H$^+$ spectrum,
we first assumed a unique velocity component and
used GILDAS-CLASS90\footnote{http://www.iram.fr/IRAMFR/GILDAS},
which gives the best fit (reported in Table 1) of the hyperfine
components (see the blue line in Fig.~\ref{n2h+}--Middle panel).
The sum of the opacity at the central velocities of all
the hyperfine components $\sum_{\rm i}$$\tau_{\rm i}$ is 0.1$\pm$0.9.
Although the opacity is not well determined the fit indicates
$\sum_{\rm i}$$\tau_{\rm i}$ $\le$ 1, thus suggesting optically thin
emission. Fits fixing $\tau_{\rm i}$ to larger values never gave better results.
\begin{table}
\caption{Parameters of the hyperfine fits to the N$_2$H$^+$(1--0)$^{a}$ line,
and total column density.}
\label{lines}
\centering
\begin{tabular}{cccccc}
\hline
\multicolumn{1}{c}{$T_{\rm peak}$} &
\multicolumn{1}{c}{rms$^b$} &
\multicolumn{1}{c}{$V_{\rm peak}$} &
\multicolumn{1}{c}{$FWHM$} &
\multicolumn{1}{c}{$\sum_{\rm i}$$\tau_{\rm i}$} &
\multicolumn{1}{c}{$N_{\rm tot}$$^c$} \\
\multicolumn{1}{c}{(mK)} &
\multicolumn{1}{c}{(mK)} &
\multicolumn{1}{c}{(km s$^{-1}$)} &
\multicolumn{1}{c}{(km s$^{-1}$)} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{(cm$^{-2}$)} \\
\hline
\multicolumn{6}{c}{1 component fit} \\
\hline
34(2) & 2 & +1.3(0.1) & 4.3(0.2) & 0.1(0.9) & 2.4--7.8 $\times$ 10$^{12}$ \\
\hline
\multicolumn{6}{c}{2 components fit} \\
\hline
26(2) & 2 & +1.8(0.1) & 2.6(0.1) & 0.2(0.2) & 2.4--8.0 $\times$ 10$^{12}$ \\
14(2) & 2 & --1.1(0.4) & 5.9(0.9) & 0.1(0.1) & 0.4--1.3 $\times$ 10$^{12}$ \\
\hline
\end{tabular}
\begin{center}
$^a$ The spectrum has been centered at the frequency of the
main hyperfine component $F_{\rm 1}$,$F$ = 2,3--1,2 (93173.76 MHz). Frequencies have been extracted from the
Cologne Database for Molecular Spectroscopy (M\"uller et al. 2005). See also Pagani et al. (2009).
$^b$ At a spectral resolution of 1.3 km s$^{-1}$.
$^c$ Assuming a T$_{\rm kin}$ = 20-80 K and a source size of 20$\arcsec$--25$\arcsec$ (see text). \\
\end{center}
\end{table}
The peak LSR velocity (+1.3 km s$^{-1}$) of the N$_2$H$^{+}$
profile is slightly blue-shifted with respect to the ambient velocity (+2.6 km
s$^{-1}$, BP97). The linewidth (4.3 km s$^{-1}$) is also considerably
larger than that observed by BP97 and Tobin et al. (2013) towards the driving protostar
L1157-mm (0.6--0.8 km s$^{-1}$).
This is clearly shown in Figure~\ref{n2h+}, where we report
the N$_2$H$^{+}$(1$\--$0) line
(see the red histogram in the Upper panel)
recently observed towards L1157-mm in the framework of the
ASAI\footnote{http://www.oan.es/asai/}
IRAM 30-m Large program (PI: R. Bachiller \& B. Lefloch).
The N$_2$H$^{+}$ profile from the B1 shock
is definitely broader and more blue-shifted than
that observed towards the L1157-mm protostar,
indicating a different origin.
Note also that the weak, but not blended,
$F_{\rm 1}$,$F$ = 0,1--1,2 line at $\sim$ --8 km s$^{-1}$ from
the main hyperfine component clearly shows blue-shifted emission.
\begin{figure}
\centering
\includegraphics[angle=0,width=7cm]{f1.ps}
\caption{{\it Top panel}: PACS image of L1157 of the integrated
H$_2$O emission at 1669 GHz (Nisini et al. 2010b). Offsets are with
respect to the L1157-mm sources (black star), at coordinates:
$\alpha_{\rm J2000}$ = 20$^{\rm h}$ 39$^{\rm m}$ 06$\fs$2,
$\delta_{\rm J2000}$ = +68$\degr$ 02$\arcmin$ 16$\farcs$0.
Magenta contours refer to the SiO(3-2) IRAM 30-m map reported
by Bachiller et al. (2001).
The labels indicate the main blue- and red-shifted knots.
Circles are for the IRAM 30-m HPBW at the N$_2$H$^{+}$(1$\--$0) frequency
(26$\arcsec$), centred at the driving L1157-mm protostar (observed by BP97 and Tobin et al. 2013),
and at $\Delta\alpha$ = +25$\farcs$6 and $\Delta\delta$ = --63$\farcs$5
from the driving protostar (present observations, see
black triangles and coordinates reported in Sect. 2).
{\it Bottom panel}: The L1157-B1 bow shock as traced using
the CH$_3$CN(8--7) $K$ = 0,1,2 emission at 3 mm, observed with
the IRAM PdB interferometer (Codella et al. 2009).}
\label{maps}
\end{figure}
The best fit of Fig.~\ref{n2h+} shows a
non-negligible residual ($\sim$ 3$\sigma$; see Bottom panel)
at about --4.0 km s$^{-1}$, which suggests
non-Gaussian emission from gas at high blue-shifted velocity.
Indeed a definitely more satisfactory fit can be obtained by assuming two blue-shifted
Gaussian components
(see the magenta lines in Fig.~\ref{n2h+} and Table 1): (i) a line centered at +1.8 km s$^{-1}$
with FWHM = 2.6 km s$^{-1}$, plus (ii) a broader (5.9 km s$^{-1}$) line peaking at --1.1 km s$^{-1}$
(dashed and dot-dashed magenta lines in Fig.~\ref{n2h+}, respectively).
In summary, despite the complexity due to the hyperfine components,
this clearly shows that a single-Gaussian component is insufficient to reproduce
the N$_2$H$^{+}$(1--0) profile towards the B1 shock, and one needs to invoke additional
broad blue-shifted emission.
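For reproducibility, the two-component fit can be sketched as follows. This is a minimal illustration only: the hyperfine velocity offsets and relative weights below are placeholders, the actual 15-component pattern being taken from Pagani et al. (2009).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Placeholder hyperfine pattern (offsets in km/s, relative weights);
# real values from Pagani et al. (2009).
hfs_dv = np.array([-8.0, 0.0, 5.9])
hfs_w  = np.array([0.2, 1.0, 0.6])

def profile(v, T1, v1, fw1, T2, v2, fw2):
    # two Gaussian velocity components replicated on the hfs pattern
    s1, s2 = fw1 / 2.3548, fw2 / 2.3548
    m = np.zeros_like(v)
    for dv, w in zip(hfs_dv, hfs_w):
        m += w * (T1 * np.exp(-0.5 * ((v - v1 - dv) / s1) ** 2)
                  + T2 * np.exp(-0.5 * ((v - v2 - dv) / s2) ** 2))
    return m

# v, Tmb = observed spectrum; initial guesses from Table 1:
# popt, _ = curve_fit(profile, v, Tmb,
#                     p0=[0.026, 1.8, 2.6, 0.014, -1.1, 5.9])
\end{verbatim}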
The present observation
thus reports the first detection of N$_2$H$^{+}$ emission towards
a low-mass outflow, definitely far from the protostellar environment.
\section{Physical conditions of the N$_2$H$^+$ gas}
The line profiles in L1157-B1, as in other molecular shock
spots, have a relatively complex structure where several excitation
components are visible. Disentangling such components is not an easy
task. In L1157-B1, the recent CO multi-line analysis by Lefloch et
al. (2012) indicates that the line profiles are composed by a linear
combination of exponential curves $I_{\rm CO}$($v$) = $I_{\rm CO}$(0)
exp(--$\lvert$$v$/$v_{\rm 0}$$\rvert$), independently of the CO
transition considered.
The three velocity components correspond to three
different physical components: (1) a small ($\sim$
7$\arcsec$--10$\arcsec$) dissociative J-type shock called $g1$ (identified where
the line intensity is $\propto$ exp(--$\lvert$$v$/12.5$\rvert$))
dominating at the highest velocities ($\le$ --20 km s$^{-1}$), (2) the
outflow cavity walls, $g2$ ($\propto$
exp(--$\lvert$$v$/4.4$\vert$)), with size $\le$ 20$\arcsec$, and (3)
the larger ($\sim$ 25$\arcsec$) outflow cavity created by the older
bow shock L1157-B2, $g3$ ($\propto$
exp(--$\lvert$$v$/2.5$\rvert$)) dominating at velocities close to the
systemic one ($v$ $\ge$ --2 km s$^{-1}$). Each component shows the
same slope at all $J$, but different relative intensities. The higher
the line excitation, the brighter the $g1$ component. On the
contrary, $g3$ is observed only towards the low--$J$ ($\le$ 3) CO
lines.
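As a sketch, the spectral decomposition of Lefloch et al. (2012) can be written as a sum of such exponentials; the relative intensities used below are placeholders, since they vary with the transition.
\begin{verbatim}
import numpy as np

v = np.linspace(-40.0, 5.0, 400)          # km/s, relative to systemic
slopes = {'g1': 12.5, 'g2': 4.4, 'g3': 2.5}

def component(v, I0, v0):
    # I(v) = I(0) * exp(-|v / v0|)  (Lefloch et al. 2012)
    return I0 * np.exp(-np.abs(v / v0))

I0 = {'g1': 0.2, 'g2': 1.0, 'g3': 3.0}    # placeholder intensities
I_tot = sum(component(v, I0[g], slopes[g]) for g in slopes)
\end{verbatim}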
Figure~\ref{compa} compares the
N$_2$H$^{+}$(1--0) line with other line profiles observed towards
L1157-B1 (Lefloch et al. 2010, Codella et al. 2010, 2012): (i) the
CO(16--15) at 1841.3 GHz observed with Herschel-HIFI as an example of
a spectrum where the $g1$ component is clearly dominating the line
profile; (ii) the CO(3--2) profile, {\it as observed
towards L1157-B2}, representing a pure $g3$ profile, without the $g1$
and $g2$ components observed towards L1157-B1; (iii) the
NH$_3$(1$_{0}$$-$0$_{0}$)
transition, showing a profile well
reproduced by the $g2$ component alone.
The N$_2$H$^{+}$ line profile, despite the blending between hyperfine
components, seems to exclude the extremely high-velocity emission
associated with the $g1$ component, being consistent with the $g2$ and $g3$ ones.
In conclusion, N$_2$H$^+$ is
associated with the B1 outflow cavity (with $T_{\rm kin}$
$\simeq$ 70 K and $n_{\rm H_2}$ $\ge$ 10$^5$ cm$^{-3}$, according
to the LVG CO analysis by Lefloch et al. 2012) and/or with the older and
colder B2 cavity ($\sim$ 20 K, $\ge$ 6 $\times$ 10$^4$ cm$^{-3}$).
\begin{figure}
\centering
\includegraphics[angle=0,width=8cm]{f2.ps}
\caption{{\it Upper panel:} N$_2$H$^{+}$(1$\--$0) line (black histogram; in T$_{\rm
mb}$ scale) observed in
L1157-B1 with the IRAM 30-m antenna. The red histogram refers to the N$_2$H$^{+}$(1$\--$0)
spectrum (scaled for a direct comparison)
as observed towards L1157-mm with the IRAM 30-m antenna in the framework of the ASAI IRAM Large Program (PI: R. Bachiller \&
B. Lefloch). The vertical dashed line
indicates the ambient LSR velocity (+2.6 km s$^{-1}$, from BP97).
The seven vertical blue lines stand for the 15 hyperfine
components of the N$_2$H$^{+}$(1$\--$0) pattern
(several of them spectrally unresolved at the present
frequency resolution; see Pagani et al. 2009).
We centered the spectrum at the frequency of the main hyperfine component $F_{\rm
1}$,$F$ = 2,3--1,2 (93173.76 MHz).
{\it Middle panel:} Analysis of the N$_2$H$^{+}$(1$\--$0) profile. The blue line shows the best fit
(FWHM = 4.3 km s$^{-1}$) assuming a single Gaussian component.
The magenta solid line shows the best fit using two Gaussian components
(dashed magenta: FWHM = 2.6 km s$^{-1}$; dot-dashed magenta: FWHM = 5.9 km s$^{-1}$) in
order to minimise the residual. The corresponding residuals are reported in the {\it Bottom panel}:
the single component approach gives a 3$\sigma$ (rms = 2 mK) residual.}
\label{n2h+}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,width=8cm]{f3.ps}
\caption{Comparison of the N$_2$H$^{+}$(1$-$0) fit (black; see Fig. 2) with typical
profiles of the $g1$, $g2$, and $g3$ components
(from Codella et al. 2010, and Lefloch et al. 2012, see text).
CO(16--15) represents $g1$ (1841.3 GHz, red, decreased by a factor 6.4 for a
direct comparison), while NH$_3$(1$_{0}$$-$0$_{0}$) (572.5 GHz, blue,
decreased by a factor 4.5) is for $g2$. In addition, we
report the CO(3--2) spectra for $g3$ (magenta, 345.8 GHz,
decreased by a factor 269.0) observed by Lefloch et al. (2012)
towards the L1157-B2 position, tracing a cavity older than the
L1157-B1 one, and created by a previous wind ejection (Gueth et
al. 1996). The spectra have been smoothed to a common spectral resolution
of 1.3 km s$^{-1}$.}
\label{compa}
\end{figure}
The low excitation N$_2$H$^{+}$(1$-$0) transition ($E_{\rm u}$ = 5 K)
has a critical density of $\sim$ 10$^5$ cm$^{-3}$ (e.g. Friesen et al. 2009).
The line emission is thus expected to be close to LTE conditions at the
densities of the $g2$ and $g3$ gas
components. Following the results of the LVG analysis by Lefloch et
al. (2012), we assume a $T_{\rm kin}$ between 20 and 70 K and an
emitting size of 20$\arcsec$--25$\arcsec$. The N$_2$H$^{+}$ total
column density is then well constrained: $N$(N$_2$H$^+$) = (2--8) $\times$ 10$^{12}$
cm$^{-2}$.
Using the source-averaged column density $N$(CO)
= 1 $\times$ 10$^{17}$ cm$^{-2}$ (found for both $g2$ and
$g3$ by Lefloch et al. 2012), and assuming [CO]/[H$_2$]=10$^{-4}$, we
can derive the N$_2$H$^+$ abundance: $X$(N$_2$H$^{+}$) = 2--8 $\times$
10$^{-9}$. A lower abundance, between 4 $\times$ 10$^{-10}$ and $\sim$ 10$^{-9}$, is derived
for the weaker emission at higher velocity, represented
by the velocity component peaking at --1.1 km s$^{-1}$ (see Table 1).
These values are consistent with the abundance found
towards the L1157-mm protostar by BP97 (4 $\times$
10$^{-9}$) using the IRAM 30-m antenna. On the other hand, Chiang et al. (2010)
measured lower values (3--6 $\times$ 10$^{-10}$) towards L1157-mm using the CARMA array,
possibly due to interferometric filtering.
Similar values have been also found in CO depleted
prestellar cores and dense protostellar
envelopes ($\sim$ 10$^{-10}$--10$^{-9}$; see e.g. Caselli et al. 2002,
Tafalla et al. 2004, 2006, Maret et al. 2007, Chen et al. 2007, 2008).
This value represents an estimate of the abundance of
the gas in the outflow cavities and will be used for a comparison with
the outputs predicted by our models.
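For transparency, the abundance estimate above is simple arithmetic, reproduced here:
\begin{verbatim}
N_CO = 1e17              # source-averaged CO column density (cm^-2)
X_CO = 1e-4              # assumed [CO]/[H2]
N_H2 = N_CO / X_CO       # 1e21 cm^-2
for N_N2Hp in (2e12, 8e12):
    print(N_N2Hp / N_H2) # X(N2H+) = 2e-9 ... 8e-9
\end{verbatim}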
\section{N$_2$H$^{+}$ chemistry in L1157-B1}
\begin{figure}
\centering
\includegraphics[angle=0,width=8.5cm]{f4.ps}
\caption{N$_2$H$^+$ abundance, with respect to H$_2$, versus the H$_2$ density at different times:
from $2 \times 10^3$ yr (the age of L1157-B1) to $1 \times 10^7$ yr. The dashed blue box gives the observed value
with the 1 $\sigma$ uncertainty (see text).
The gas is at a temperature of 70 K, but the curve is identical in the range 20 to 70 K. The cosmic-ray ionisation rate
is $10^{-16}$ s$^{-1}$.}
\label{model}
\end{figure}
To understand the origin of the observed N$_2$H$^+$, we compared its measured
abundance with
the N$_2$H$^+$ abundance predicted by a simple pseudo-time dependent model.
We used the
publicly available ASTROCHEM code\footnote{http://smaret.github.com/astrochem/}. The code follows the evolution of the
chemical composition of a gas cloud initially in the diffuse state and with fixed temperature and density.
A simple gas-grain interaction due to freeze-out, thermal, and photo-desorption, has been considered.
In these calculations we assumed a nitrogen elemental
abundance equal to $2.1 \times 10^{-5}$ (with respect to H nuclei), carbon and
oxygen equal to $7.3 \times 10^{-5}$ and $1.8 \times 10^{-4}$ respectively, grain size of 0.1 $\mu$m,
and cosmic ionisation rates $\zeta$ in the $10^{-17}$--$10^{-16}$ s$^{-1}$ range (e.g. Dalgarno 2006; Padovani et al. 2009).
Figure~\ref{model} shows the predicted N$_2$H$^+$ abundance as a function of the volume density
at different evolutionary times, from $2 \times 10^3$ yr (the age of L1157-B1) to $1 \times 10^7$ yr.
The chemistry of N$_2$H$^+$ is relatively simple: it is formed by the reaction
of H$_3^+$ (created by the cosmic-ray ionisation of H$_2$) with N$_2$, and destroyed by the reaction
with CO (or with electrons in case of CO depletion).
Therefore, the larger the density, the lower the H$_3^+$ abundance and, consequently, $X$(N$_2$H$^+$).
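This scaling can be sketched with an order-of-magnitude steady-state balance. The rate coefficients and the N$_2$ fractional abundance below are representative values assumed for illustration only; this two-reaction balance is not the ASTROCHEM network.
\begin{verbatim}
zeta = 1e-16             # cosmic-ray ionisation rate (s^-1)
x_CO, x_N2 = 1e-4, 2e-5  # assumed fractional abundances wrt H2
k1 = 2.0e-9              # H3+ + CO destruction (cm^3/s, representative)
k2 = 1.7e-9              # H3+ + N2 -> N2H+ + H2 (representative)
k3 = 8.8e-10             # N2H+ + CO destruction (representative)

def x_n2hp(n_H2):
    n_h3p = zeta / (k1 * x_CO)                # cm^-3, density-independent
    n_n2hp = k2 * n_h3p * x_N2 / (k3 * x_CO)  # formation = destruction
    return n_n2hp / n_H2

for n in (1e4, 1e5, 1e6):
    print(n, x_n2hp(n))  # abundance drops as 1/n_H2
\end{verbatim}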
The comparison of the measured and predicted N$_2$H$^+$ abundances yields an important conclusion:
the observed N$_2$H$^+$ abundance is perfectly matched by a model of cold, quiescent, and
relatively old ($\ge$ 10$^4$ yr)
gas and does not require the intervention of a shock. The age of the shock in L1157-B1
is around 2000 yr (Gueth et al. 1996); hence
Fig.~\ref{model} shows that N$_2$H$^+$ was present before the shock occurred, and it is consistent
with a pre-shock H$_2$ density of $\leq$ 5 $\times$ 10$^{5}$ cm$^{-3}$.
In addition, given that $X$(e) $\propto$ $n_{\rm H_2}$$^{-1/2}$ (e.g. McKee 1989),
we can $speculate$ that the lower $X$(N$_2$H$^+$) abundance
(by a factor $\simeq$ 5--6) measured at the highest velocities indicates
a density gradient in the shocked gas in the cavity. In other words, the N$_2$H$^+$ emitting at higher velocities
could trace gas with $n_{\rm H_2}$ about one order of magnitude higher than that of the gas
at velocities closer to the systemic one.
\begin{figure}
\centering
\includegraphics[angle=0,width=8.5cm]{f5.ps}
\caption{Example of UCL\_CHEM model (Viti et al. 2004) showing how the fractional abundances (with respect to H$_2$)
of N$_2$H$^+$, H$_3$$^+$, and N$_2$ can vary as a function of distance (see text).}
\label{ucl}
\end{figure}
In addition, to verify whether the detected N$_2$H$^+$ molecules pre-existed the shock,
we used a shock model of L1157-B1 reported by Viti et al. (2011),
who coupled the chemical model UCL\_CHEM with a parametric shock model (Jim\'enez-Serra et al. 2008).
UCL\_CHEM is a gas-grain chemical code which first simulates the formation of high-density clumps from
an atomic diffuse cloud, and then follows their chemical evolution when subjected to the passage of a C-type shock.
Full details of the code can be found in Viti et al. (2004, 2011).
We updated the grid of models from Viti et al. (2011) varying the cosmic ray ionisation rate $\zeta$ (which of course directly
influences the behaviour of ions) in the 10$^{-17}$--10$^{-16}$ s$^{-1}$ range.
Figure~\ref{ucl} reports an example of a UCL\_CHEM shock model
assuming $\zeta$ = 10$^{-16}$ s$^{-1}$ and a pre-shock density of 10$^{4}$ cm$^{-3}$.
We confirm that N$_2$H$^+$ is indeed formed in the gas phase and that the passage of a shock, with the
subsequent release of N$_2$ into the gas, does not yield an increase in the
N$_2$H$^+$ abundance. This is consistent with the lack of signatures
of very high-velocity associated with the $g1$ component in the N$_2$H$^+$(1--0) profile.
On the contrary, the passage of a shock does decrease the N$_2$H$^+$ abundance
by about 1--2 orders of magnitude, depending on the pre-shock conditions and velocity of the shock.
This allows us to further constrain the pre-shock density to $\sim$ 10$^4$ cm$^{-3}$
(see Fig.~\ref{model}) in order to maintain the observed abundance once the outflow cavities have been compressed to $\geq$ 10$^5$ cm$^{-3}$.
A value of $\zeta$ ($\sim$ 10$^{-16}$ s$^{-1}$) helps to achieve and maintain a high N$_2$H$^+$ abundance.
A pre-shock density of $\sim$ 10$^4$ cm$^{-3}$ is consistent with the results suggested by the study of deuteration
in L1157-B1 (Codella et al. 2012b) where it was found that the most likely scenario is that of
gas passing through a pre-shock phase with $n_{\rm H_2}$ $\le$ 4 $\times$ 10$^4$ cm$^{-3}$, during which
formaldehyde and methanol ices are formed.
\section{Conclusions}
We present the first detection of diazenylium towards
outflowing gas far from the driving low-mass protostar.
We found evidence that N$_2$H$^+$(1--0) emission observed towards the L1157-B1 shock
originates from the dense ($\ge$ 10$^5$ cm$^{-3}$) gas associated
with the cavities
opened and accelerated by the protostellar wind.
The line width ($\ge$ 4 km s$^{-1}$) is significantly broader than
the N$_2$H$^+$ line widths previously observed towards the driving protostar L1157-mm
($\leq$ 1 km s$^{-1}$), as well as than the typical
line widths observed in quiescent regions, probably as a result
of the energy injection from the sweeping outflow.
The estimated N$_2$H$^+$
abundance is (2--8) $\times$ 10$^{-9}$, which can be reproduced by a model of quiescent gas
evolved for more than 10$^4$ yr (i.e. older than the
shock kinematical age, 2000 yr).
In other words, N$_2$H$^+$ can be considered a fossil record of the pre-shock phase, when the
gas density was $\sim$ 10$^4$ cm$^{-3}$. Modelling of C-shocks
confirms that $X$(N$_2$H$^+$) is not enhanced by the passage of the shock.
The present N$_2$H$^+$ detection is thus the result of an increase of
its column density due to the compression (by a factor $\sim$ 10) of swept-up material, and not of an enhancement of its relative abundance.
\begin{acknowledgements}
C. Codella, C. Ceccarelli, B. Lefloch, and S. Viti acknowledge the financial support from
the COST Action CM0805 ``The Chemical Cosmos''.
The Italian authors gratefully acknowledge funding from Italian Space Agency (ASI) through the contract I/005/011/0,
which also supports the fellowships of G. Busquet and A. G\'omez-Ruiz.
C. Ceccarelli and B. Lefloch acknowledge funding from the French Space Agency CNES and the National Research Agency funded project
FORCOM, ANR-08-BLAN-0225.
S. Viti acknowledges support from the [European Community's] Seventh Framework Programme [FP7/2007-2013] under grant agreement n$^{\circ}$ 238258.
\end{acknowledgements}
\vspace{0.5cm}
\noindent
{\bf References} \\
\noindent
Bachiller R., \& P\'erez Guti\'errez M., 1997, ApJ 487, L93 (BP97) \\
\noindent
Bachiller R., P\'erez Guti\'errez M., Kumar M. S. N., et al., 2001, A\&A 372, 899 \\
\noindent
Benedettini M., Viti S., Codella C., et al., 2007, MNRAS 381, 1127 \\
\noindent
Caselli P., Myers P.C., Thaddeus P., 1995, ApJ 455, L77 \\
\noindent
Caselli P., Benson P.J., Myers P. C., Tafalla M., 2002, ApJ 572, 238 \\
\noindent
Ceccarelli C., Bacmann A., Boogert A., et al., 2010, A\&A 521, L22 \\
\noindent
Chen X., Launhardt R., Bourke T.L., Henning Th., Barnes P.J., 2008, ApJ 683, 862 \\
\noindent
Chen X., Launhardt R., Henning Th., 2007, ApJ 669, 1058 \\
\noindent
Chiang H.-F., Looney L.W., Tobin J.J., \& Hartmann L., 2010, ApJ 709, 470 \\
\noindent
Codella C., Benedettini M., Beltr\'an M.T., et al. 2009, A\&A 507, L25 \\
\noindent
Codella C., Lefloch B., Ceccarelli C., et al., 2010, A\&A 518, L112 \\
\noindent
Codella C., Ceccarelli C., Bottinelli S., et al., 2012a, ApJ 744, L164 \\
\noindent
Codella C., Ceccarelli C., Lefloch B., et al., 2012b, ApJ 757, L9 \\
\noindent
Dalgarno A., 2006, Proceedings of the National Academy of Science 103, 12269 \\
\noindent
Friesen R.K., Di Francesco J., Shirley Y.L., Myers P.C., 2009, ApJ 697, 1457 \\
\noindent
de Graauw Th., Helmich F.P., Phillips T.G., et al., 2010, A\&A 518, L6 \\
\noindent
Gusdorf A., Pineau des For\^ets G., Cabrit S., Flower D.R., 2008, A\&A 490, 695 \\
\noindent
Gueth F., Guilloteau S., \& Bachiller R., 1996, A\&A 307, 891 \\
\noindent
Gueth F., Guilloteau S., \& Bachiller R., 1998, A\&A 333, 287 \\
\noindent
Jim\'enez-Serra I., Caselli P., Mart\'{\i}n-Pintado J., Hartquist T.W., 2008, A\&A 482, 549 \\
\noindent
J$\o$rgensen J.K., Hogerheijde M.R., van Dishoeck E.F., Blake G.A., Sch\"{o}ier F.L., 2004, A\&A 413, 993 \\
\noindent
Lefloch B., Cabrit S., Codella C., et al., 2010, A\&A 518, L113 \\
\noindent
Lefloch B., Cabrit S., Busquet G., et al., 2012, ApJ 757, L25 \\
\noindent
Looney L. W., Tobin J., \& Kwon W., 2007, ApJ 670, L131 \\
\noindent
Maret S., Bergin E.A., \& Lada C.J., 2007, ApJ 670, L25 \\
\noindent
McKee, C.F., 1989, ApJ 345, 782 \\
\noindent
M\"uller H.S.P., Sch\"oier F.L., Stutzki J., Winnewisser G., 2005, J.Mol.Struct. 742, 215 \\
\noindent
Neufeld D.A., \& Green S., 1994, ApJ 432, 158 \\
\noindent
Nisini B., Codella C., Giannini T., et al., 2007, A\&A 462, 163 \\
\noindent
Nisini B., Giannini T., Neufeld D.A., et al., 2010a, ApJ 724, 69 \\
\noindent
Nisini B., Benedettini M., Codella C., et al., 2010b, A\&A 518, L12 \\
\noindent
Padovani M., Galli D., \& Glassgold A.E., 2009, A\&A 501, 619 \\
\noindent
Pagani L., Daniel F., \& Dubernet M.-L., 2009, A\&A 494, 719 \\
\noindent
Tafalla M., \& Bachiller R., 1995, ApJ 443, L37 \\
\noindent
Tafalla M., Myers P.C., Caselli P., Walmsley C.M., 2004, A\&A 416, 191 \\
\noindent
Tafalla M., Santiago-Garc\'{\i}a J., Myers P.C., et al., 2006, A\&A 455, 577 \\
\noindent
Tobin J.J., Hartmann L., Chiang H.-F., et al., 2011, ApJ 740, 45 \\
\noindent
Tobin J.J., Hartmann L., Bergin E., et al., 2012, ApJ 748, 16 \\
\noindent
Tobin J.J., Bergin E., Hartmann L., et al., 2013, ApJ 765, 18 \\
\noindent
Wilson T.L., \& Rood R., 1994, ARA\&A 32, 191 \\
\noindent
Vasta M., Codella C., Lorenzani A., et al., 2012, A\&A 537, A98 \\
\noindent
Viti S., Collongs M.P., Dever J.W., McCoustra M.R.S., Williams D.A., 2004, MNRAS 354, 1141 \\
\noindent
Viti S., Jim\'enez-Serra I., Yates J.A., et al., 2011, ApJ 740, L3 \\
\noindent
Yamaguchi T., Takano S., Watanabe Y., et al., 2012, PASP 64, 105
\end{document}
\section{Introduction}
\IEEEPARstart{M}{any} applications in Machine Learning, big data analysis, search engines, and network routing require massive parallelism. As the amount of data to be analyzed continues to grow, power dissipation and data transfer between memory and processing units have limited the scalability of parallel architectures \cite{white_paper}.
This has led researchers to consider Associative Processors (APs) as in-memory platforms for carrying out massively parallel computations inside the memory without the need to move the data \cite{AP}. The ability of APs to unify data processing and data storage has drastically decreased compute energy and latency costs. An AP consists of an array of Content Addressable Memories (CAMs), which are storage devices that allow concurrent access to all data rows.
A CAM searches for a stored word based on an inputted search key. The AP improves its functionality by allowing parallel writing into masked bits of the matching CAM rows \cite{CAM}.
While binary CAMs perform exact-match searches for `0' and `1' bits, a more powerful ternary CAM (TCAM) can search a third ``don't care'' value, allowing for very flexible pattern matching between search keys and stored values \cite{CAM}. Considering its promising application potential, different implementations of the TCAM have been proposed, including SRAM-based TCAMs \cite{sram_tcam, sram_tcam2}. However, they were not widely adopted due to their low density and volatility \cite{binary_AP}. Nowadays, emerging devices such as resistive memories have relaxed these constraints, leading to the revival of the TCAM-based AP approach in the research community \cite{ReRAM_AP}. New TCAM implementations based on resistive random access memory (ReRAM) have been proposed \cite{MCAM, MCAM2} to reduce power and improve area density in comparison to conventional complementary metal-oxide semiconductor (CMOS) solutions.
Memristive-based TCAM (MTCAM) designs have been proposed to build 1D and 2D in-memory computing architectures based on AP \cite{yavits2015resistive, binary_AP}. The AP architecture presented in \cite{binary_AP} was designed to perform in-memory compute in the context of the binary full adder. It relies on a look-up table (LUT)-based 1-bit addition that employs four compare and write operations applied in parallel to the different rows, resulting in significant runtime savings.
Recently, ternary logic has gained interest among the circuit design community for its ability to increase power efficiency and reduce the complexity of arithmetic circuit designs. However, the implementation of ternary logic circuits requires the use of devices with multiple threshold voltages which is difficult to accomplish with the current CMOS technology with reasonable devices' area and latency \cite{CNTFET,mohammaden2021cntfet}. Therefore, recent studies have shed light on alternative devices for the design of ternary arithmetic circuits such as carbon nanotube field-effect transistors (CNTFETs) \cite{CNTFET,CNTFET2,mohammaden2021cntfet} and memristors \cite{nancy,other_adders}.
In order to carry out ternary addition, different approaches were adopted in the literature. In \cite{CNTFET}, the authors included a ternary-to-binary decoder stage for the inputs, and the addition was performed using binary logic gates before converting the outputs back to ternary logic. In \cite{CNTFET2}, the authors expressed the arithmetic function using a K-map, and they used the obtained equations to determine the logic gates needed for the CNTFET-based implementation. In \cite{nancy}, the authors custom-designed the desired arithmetic function and implemented it using ternary logic gates composed of both memristors and CNTFETs.
In this paper, we propose a scalable CAM cell design and methodology for purposes of multi-valued logic AP applications. To enable the implementation of in-memory compute operations, we propose two novel algorithms that guide the automatic generation of the LUT for in-place multi-valued arithmetic or logic functions. The first relies on a depth-first search exploration of the state diagram obtained from the function's truth table, and the second capitalizes on common outputs to reduce the number of required write cycles.
The proposed methodology is universal and can be employed for different logic or arithmetic functions such as NOR, XOR, AND, multiplication, addition and subtraction. To illustrate, we present a novel implementation of a ternary AP (TAP) architecture based on a quaternary CAM (QCAM) design. We demonstrate the proposed design in the context of a LUT-based ternary full adder application.
Specifically, our contributions are as follows:
\begin{enumerate}
\item We propose a ``$n$T$n$R'' CAM cell for multi-valued AP (MvAP) arithmetic and logic operations. To exemplify our design, we present a TAP architecture using a ``3T3R'' QCAM cell as a building block.
\item We propose a scalable cycle-free state diagram mapping of the multi-valued arithmetic or logic function's truth table. The state diagram forms the core data structure to implement the proposed algorithms. These algorithms build on the states' connectivity along with other relevant attributes to structurally traverse the state diagram for a systematic generation of the LUTs.
\item A first approach that relies on depth-first search (DFS) parsing of the state diagram to determine the order of the passes.
\item A second optimized approach that exploits common write actions to reduce the number of required write cycles. It relies on breadth-first search (BFS) parsing and a grouping heuristic for the different state diagram nodes.
\item We test the algorithms for implementing a LUT-based ternary full adder (TFA) relying on a TAP architecture. We evaluate the energy, delay and area efficiency of the ternary adder implementations using the first and second approaches and compare them against each other as well as to the AP binary adder and other ternary full adder implementations.
\end{enumerate}
The rest of this paper is organized as follows.
Section \ref{sec:MvAP} proposes the multi-valued AP architecture and discusses the CAM implementation and operation. Then, an illustrative example of a ternary AP is discussed in Section \ref{sec:TAP}. The extension of the functional operations from the binary AP is proposed utilizing a novel state diagram interpretation in Section \ref{sec:AP_Operation}, and an optimized version is introduced in Section \ref{sec:Optimized_AP}. Experimental results and analysis are presented for a novel ternary AP full adder implementation in Section \ref{sec:results}. Finally, the conclusion of this work is given.
\section{Proposed MvAP Architecture}
\label{sec:MvAP}
Traditionally, digital arithmetic computation is performed using two-valued logic: `0' or `1'. However, in modern digital design, researchers are increasingly looking into multi-valued logic (MVL) as a way to replace the classical binary characterization of variables. MVL, notably ternary logic, constitutes a promising alternative to the traditional binary logic \cite{MVL_ternary} as it provides multiple advantages such as reduced interconnects, smaller chip area, higher operating speeds and less power consumption \cite{ternary_advantages}. The voltage levels for multi-valued logic of radix-$n$ are $n$ levels spanning from $0$ to $V_{DD}$. Hence, the $i^{th}$ logic value is realized with $i\cdot V_{DD}/(n-1)$ where $i\in [0,n-1]$. For instance, the ternary logic system uses $\{0,\,1,\, 2\}$ logic values with $\{0,\, V_{DD}/2,\, V_{DD}\}$ voltage levels, respectively. This representation is referred to as the unbalanced representation, unlike the balanced representation, which uses $\{-1,\, 0,\, 1\}$ logic values and is realized with $\{-V_{DD},\, 0,\, V_{DD}\}$ voltage levels \cite{ternary_systems}. In this paper, we focus on the unbalanced ternary logic system.
\begin{figure*}[]
\centering
\includegraphics[width=1\linewidth]
{Figures/mvap.pdf}
\caption{An illustration of the MvAP architecture highlighting the proposed MvCAM cell.}
\label{fig:mvap}
\vspace{-0.1in}
\end{figure*}
Fig. \ref{fig:mvap} presents an illustration of the MvAP architecture comprised of a multi-valued CAM (MvCAM) array, a controller, a decoder and a set of Key, Mask and Tag registers. A MvCAM array consists of multiple rows containing MvCAM cells where $n$-valued digits (nits) are stored. The following sections present the proposed implementation of the different components.
\subsection{MvCAM Cell}
A compact design of a memristor-based MvCAM cell with $n$ transistors and $n$ memristors (``$n$T$n$R'') is presented as a natural extension to the ``2T2R'' TCAM cell designed for binary AP applications \cite{2T2R_fouda}. The proposed design is illustrated in the context of a multi-valued AP in Fig. \ref{fig:mvap}. The memristors in the cell function as storage elements whose states determine the stored nit value. The stored value is obtained by setting only one of the memristors to the low resistance state $R_{LRS}$ and maintaining the other $(n-1)$ memristors in the high resistance state $R_{HRS}$, as indicated in Table \ref{tab:general_stored}. Without loss of generality, the location of the single $R_{LRS}$ among the $(n-1)$ remaining $R_{HRS}$ memristors determines the logic state stored in the cell. Specifically, to store nit value $i$, memristor $M_i$ is the one set to $R_{LRS}$. A ``don't care'' state is represented by all memristors set to $R_{HRS}$.
To test the functionality of the ``$n$T$n$R'' cell, the matchline (ML) is initially precharged high.
The signal vector $(S_{n-1},\, S_{n-2}\, ...\, S_1,\, S_0)$ illustrated in Fig. \ref{fig:mvap} is then sent to check for a specific stored nit value in the cell. When searching for nit value $i$, signal $S_i$ is set to low while the other signals are set to high. The search outcome results in a match only when memristor $M_i$ is in the $R_{LRS}$ and the other memristors are in the $R_{HRS}$. Otherwise, the search outcome results in a mismatch. In the case of a match, the voltage of the ML discharges slowly and is hence preserved high, whereas in the case of a mismatch, the ML discharges quickly to ground.
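Behaviourally, and abstracting away the analogue discharge dynamics, the compare operation can be sketched as follows (Python is used here only as executable pseudocode):
\begin{verbatim}
def decode(key, mask, n=3):
    # signal vector [S_0, ..., S_{n-1}]: all zero if masked,
    # otherwise only S_key is driven low
    if mask == 0:
        return [0] * n
    return [0 if i == key else n - 1 for i in range(n)]

def cell_matches(stored, signals):
    # stored: index of the single LRS memristor, None for "don't care";
    # a mismatch needs a conducting low-resistance path, i.e. a high
    # signal on the transistor in series with the LRS memristor
    if stored is None or all(s == 0 for s in signals):
        return True
    return signals[stored] == 0

print(cell_matches(0, decode(1, 2)))  # False: key 1 vs stored '0'
print(cell_matches(1, decode(1, 2)))  # True:  key 1 vs stored '1'
\end{verbatim}
A row matches when every one of its cells matches, so the Tag bit is the AND of the per-cell outcomes.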
\begin{table}[]
\centering
\caption{Mapping between the nit value stored in the MvCAM cell and the corresponding $n$ memristor states.}
\label{tab:general_stored}
\begin{tabular}{|c|ccccc|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Logic\\ value\end{tabular}} & \multicolumn{5}{c|}{Stored state} \\
& $M_{n-1}$ & $M_{n-2}$ & $\cdots$ & $M_1$ & $M_0$ \\ \hline
x & H & H & $\cdots$ & H & H \\
0 & H & H & $\cdots$ & H & L \\
1 & H & H & $\cdots$ & L & H \\
$\vdots$ &$\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$\\
$n-2$ & H & L & $\cdots$ & H & H \\
$n-1$ & L & H & $\cdots$ & H & H \\ \hline
\end{tabular}
\vspace{-0.1in}
\end{table}
\subsection{Search Key $n$-ary Decoder}
The $n$-ary decoder allows mapping the key-mask pair to the signal vector $(S_{n-1},\, S_{n-2}\, ...\, S_1,\, S_0)$. Table \ref{tab:general_decoder} presents the truth table for the $n$-ary decoder. The inputs to the decoder are a binary mask and a nit-valued key. In the signal outputs of the decoder, the position of the signal set to zero is equal to the search key value. Specifically, to search for logic value $j$, the signal $S_j$ is the one set to zero. It is worth noting that the decoder logic is inverting, since the target signal is set to low whereas all other signals are set to high. When the key is masked, i.e., the mask bit is a zero, all decoded signals are set to zero. One simple and generic way to implement such a decoder is with a successive-approximation ADC with modified operation. In the case of the ternary decoder, however, it will be shown in the next section that it can be realized with a few ternary logic gates.
\begin{table}[ht]
\centering
\caption{Mapping between the key-mask pair and the corresponding decoded signals sent to the MvCAM cell.}
\label{tab:general_decoder}
\begin{tabular}{|c|c|ccccc|}
\hline
\multirow{2}{*}{Mask} & \multirow{2}{*}{Key} & \multicolumn{5}{c|}{Decoded signals} \\
& & $S_{n-1}$ & $S_{n-2}$ & $\cdots$ & $S_1$ & $S_0$ \\ \hline
0 & x& 0 & 0 & $\cdots$ & 0 & 0 \\
$n-1$ & 0& $n-1$ & $n-1$ & $\cdots$ & $n-1$ & 0 \\
$n-1$ & 1& $n-1$ & $n-1$ & $\cdots$ & 0 & $n-1$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ \\
$n-1$ & $n-2$ & $n-1$ & 0 & $\cdots$ & $n-1$ & $n-1$ \\
$n-1$ & $n-1$ & 0 & $n-1$ & $\cdots$ & $n-1$ & $n-1$ \\ \hline
\end{tabular}
\vspace{-0.1in}
\end{table}
\subsection{MvCAM Array}
A MvCAM array consists of several MvCAM rows. A MvCAM row contains several ``$n$T$n$R'' cells along with a sensing circuit to distinguish between the full match and the mismatch states. A full match state is obtained when all cells in the row match the searched nits, while a mismatch is obtained when at least one cell in the row does not contain the searched nit. An essential requirement for the array is to concurrently compare the stored data in all rows with an inputted key and mask pair. For purposes of in-memory compute, we overwrite the activated columns of the matched rows with new data. The key determines the nit value to be searched for, while the mask determines the columns of interest to be separately activated during each of the compare and write operations. The nit key and its binary mask are inputted to a decoder that generates the corresponding signal vector $(S_{n-1},\, S_{n-2}\, ...\, S_1,\, S_0)$, as indicated in Table \ref{tab:general_decoder}.
\subsubsection{Compare} The compare operation includes a precharge and an evaluate phase (see Fig. \ref{fig:timing_diagram}). During precharge, the capacitor is charged high, then a masked key is applied to the array in the evaluate phase. This leads the capacitor of each MvCAM row of cells to discharge through a resistor whose value is equal to the equivalent resistance of the corresponding row. In the case of a full match (fm), the capacitor retains most of its charges due to the presence of only high-resistance paths. In the case of one mismatching cell per row (1mm) or more (2mm, 3mm, etc.), the capacitor discharges quickly to ground through one, two or more low-resistance paths.
\subsubsection{Write}
After the compare operation, the sense amplifier connected to the output of the matching circuit senses the voltage across the capacitor and maps a row match to logic `1' and a row mismatch to logic `0'. Hence, all matching rows are ``tagged'', meaning their Tag field is set to logic `1'. For example, in Fig. \ref{fig:mvap}, cells in the first row match the masked key and the row is tagged, whereas cells in the last row do not match the masked key.
We note that the sensing amplifier is followed by a latch that holds the Tag bit throughout the write action. The write enable signal is asserted to overwrite the new masked columns of the tagged MvCAM rows with new data.
Each write action for an ``$n$T$n$R'' cell triggers one memristor set and one memristor reset, except for writing to (from) a ``don't care'' state which only requires one reset (set). This is attributed to the fact that each stored nit is associated with a distinct memristor set to $R_{LRS}$, except for the ``don't care'' value in which no memristor is set to $R_{LRS}$.
Finally, we note that in an optimized architecture, the precharge phase can be performed in parallel with the write operation to reduce the combined compare and write cycle time. As such, a transmission gate powered by the write enable signal is used to relay to the ML the proper programming voltage to program the memristors, while another pass gate powered by the inverse of the write enable isolates the precharge capacitor from the programming voltage, as illustrated in Fig. \ref{fig:mvap}. The ML programming voltage is set to high during reset, and to low during set. The ``$n$T$n$R'' virtual ground is set to zero during reset, and pulled high during set.
\begin{figure}
\centering
\includegraphics[width=1\linewidth,height=0.57\linewidth]
{Figures/timing_diagram.pdf}
\caption{Sketch showing consecutive compare and write operations and the resulting memristor state changes for a ``3T3R'' CAM cell, assuming the compare operation results in a match.}
\label{fig:timing_diagram}
\vspace{-0.1in}
\end{figure}
\section{Illustrative TAP Architecture}
\label{sec:TAP}
In \cite{hayes2001third}, the author analyzed number systems to find the best radix from an economical perspective (i.e., in terms of the number of computations). The optimal radix is found to be the natural number $e=2.718$ \cite{radwan2015mathematical}, so the ternary logic system is adopted as the best number system since the integer 3 is the nearest to $e$. Herein, we illustrate the MvAP architecture with a ternary AP (TAP) relying on a ``3T3R'' quaternary CAM (QCAM) cell. The ``3T3R'' cell is built using three transistors and three memristors and stores the ternary logic values `0', `1' and `2' in addition to a ``don't care'' value.
Trits are stored in the form of one memristor set to $R_{LRS}$ and two memristors set to $R_{HRS}$. For example, the combination $(M_2,\, M_1,\, M_0) = (R_{HRS},\, R_{HRS},\, R_{LRS})$ indicates a logic `0' since $M_0$ is the memristor which is set to $R_{LRS}$ as shown in Table \ref{tab:tap_logic}. For the same stored value, when the decoded signal triplet sent is $(S_2,\, S_1,\, S_0) = (2,\, 2,\, 0)$, ML discharges very slowly since only high-resistance paths will connect ML to ground either through $R_{HRS}$ or $R_{off}$, thus resulting in a match. For all other decoded signal combinations, ML discharges quickly to ground through a low-resistance path, thus resulting in a mismatch.
\begin{table}[]
\centering
\caption{Different combinations of searched and stored data which can lead to either a match or a mismatch state. ``H'' and ``L'' donate $R_{HRS}$ and $R_{LRS}$, respectively.}
\label{tab:tap_logic}
\begin{tabular}{|c|c|c|ccc|c}
\cline{1-6}
\multicolumn{2}{|c|}{Searched Data} & \multicolumn{4}{c|}{Stored Data} & \\ \hline
Mask & Key & \begin{tabular}[c]{@{}c@{}}Logic\\ value\end{tabular} & $M_2$ & $M_1$ & $M_0$ & \multicolumn{1}{c|}{State} \\ \hline
0 & x & x & x & x & x & \multicolumn{1}{c|}{Match} \\
2 & 0 & 0 & H & H & L & \multicolumn{1}{c|}{Match} \\
2 & 1 & 0 & H & H & L & \multicolumn{1}{c|}{Mismatch} \\
2 & 2 & 0 & H & H & L & \multicolumn{1}{c|}{Mismatch} \\
2 & 0 & 1 & H & L & H & \multicolumn{1}{c|}{Mismatch} \\
2 & 1 & 1 & H & L & H & \multicolumn{1}{c|}{Match} \\
2 & 2 & 1 & H & L & H & \multicolumn{1}{c|}{Mismatch} \\
2 & 0 & 2 & L & H & H & \multicolumn{1}{c|}{Mismatch} \\
2 & 1 & 2 & L & H & H & \multicolumn{1}{c|}{Mismatch} \\
2 & 2 & 2 & L & H & H & \multicolumn{1}{c|}{Match} \\
2 & 0 & x & H & H & H & \multicolumn{1}{c|}{Match} \\
2 & 1 & x & H & H & H & \multicolumn{1}{c|}{Match} \\
2 & 2 & x & H & H & H & \multicolumn{1}{c|}{Match} \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth,height=0.6\linewidth]
{Figures/ternary_decoder.pdf}
\caption{Ternary decoder; truth table (top) and circuit implementation (bottom). The inverters with a `$+$' sign and a `$-$' sign represent PTI and NTI, respectively. The rest of the gates are conventional binary gates.}
\label{fig:ternary_decoder}
\vspace{-0.15in}
\end{figure}
\begin{table}[]
\centering
\vspace{-0.15in}
\caption{Truth table for the STI, PTI and NTI ternary inverters.}
\label{tab:inverter}
\begin{tabular}{|c|c|c|c|}
\hline
\textit{x} & \textit{STI(x)} & \textit{PTI(x)} & \textit{NTI(x)} \\ \hline
0& 2 & 2 & 2 \\ \hline
1& 1 & 2 & 0 \\ \hline
2& 0 & 0 & 0 \\ \hline
\end{tabular}
\end{table}
For the TAP, as is the case for the MvAP, the Key register contains the ternary values to be searched for inside the QCAM array, while the Mask register indicates which column or columns of the array are activated during comparison or writing. Upon compare, each key-mask pair generates a decoded signal triplet in which only one of the signals is set to zero, while the others are set to $V_{DD}$, i.e., logic value $n-1=2$ for the case of ternary logic. For example, to search for logic `0', the decoded signal triplet is $(S_2,\, S_1,\, S_0) = (2,\, 2,\, 0)$ as shown in Fig. \ref{fig:ternary_decoder}. Equations (\ref{eq:s2}), (\ref{eq:s1}) and (\ref{eq:s0}) represent the corresponding logic functions for the signal values obtained based on the truth table of Fig. \ref{fig:ternary_decoder}. The figure also presents the decoder circuit for the case of ternary logic comprising positive ternary inverters (PTIs), negative ternary inverters (NTIs) \cite{ternary_logic_gates}, binary AND, binary OR and binary inverter gates. The truth tables for the ternary inverters are depicted in Table \ref{tab:inverter} \cite{ternary_logic_gates}.
\begin{subequations}
\begin{equation}
S_2=Mask \cdot PTI(Key)
\label{eq:s2}
\end{equation}
\begin{equation}
S_1=Mask \cdot (NTI(Key) + \overline{PTI(Key)})
\label{eq:s1}
\end{equation}
\begin{equation}
S_0=Mask \cdot \overline{NTI(Key)}
\label{eq:s0}
\end{equation}
\end{subequations}
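Equations (\ref{eq:s2})--(\ref{eq:s0}) can be checked directly against the truth table of Fig.~\ref{fig:ternary_decoder}; the behavioural sketch below uses the levels 0/2 for binary low/high:
\begin{verbatim}
NTI = {0: 2, 1: 0, 2: 0}       # negative ternary inverter truth table
PTI = {0: 2, 1: 2, 2: 0}       # positive ternary inverter truth table

AND = lambda a, b: min(a, b)   # binary gates on {0, 2} levels
OR  = lambda a, b: max(a, b)
NOT = lambda a: 2 - a

def decode(mask, key):
    s2 = AND(mask, PTI[key])
    s1 = AND(mask, OR(NTI[key], NOT(PTI[key])))
    s0 = AND(mask, NOT(NTI[key]))
    return s2, s1, s0

for mask, key in [(0, 0), (2, 0), (2, 1), (2, 2)]:
    print(mask, key, decode(mask, key))
# (0,x)->(0,0,0)  (2,0)->(2,2,0)  (2,1)->(2,0,2)  (2,2)->(0,2,2)
\end{verbatim}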
For purposes of in-memory compute, matching rows are overwritten by new data. Each write operation of a ternary logic value includes one memristor set and one memristor reset, except for writing to (from) a ``don't care'' state which only requires one memristor reset (set).
To implement a specific ternary arithmetic function, we iterate through the LUT entries for 1-trit operation, and the process is repeated to perform multi-trit operations. For each 1-trit operation, the Key register is set to the corresponding LUT input values and applied concurrently to all rows of the array in the columns specified by the Mask register. These represent the operand columns.
Each key-mask pair is fed to a decoder that generates the signal triplet $(S_2,\, S_1,\, S_0)$. After the compare operation, the rows of the QCAM array will generate either a match or a mismatch. Then, the write operation is performed on the matching rows of the array. New data consisting of the LUT output for the corresponding input replaces the stored value in the newly masked columns of the array. Table \ref{tab:set_reset} illustrates an example where trit $B$ initially takes on ternary value `1' which is equivalent to initial memristor states $(M_2,\, M_1,\, M_0) = (H,\, L,\, H)$. Assuming that the function's LUT dictates that $B$ should be overwritten by ternary value `0' post the compare operation, the final memristor states will be $(M_2,\, M_1,\, M_0) = (H,\, H,\, L)$. As such, for this example, $M_1$ should be reset and $M_0$ should be set, whereas $M_2$ remains the same as illustrated in Fig. \ref{fig:timing_diagram}.
\begin{table}[t]
\centering
\caption{A write example for the ternary ``3T3R'' cell. `x', `R' and `S' mean no change, reset and set, respectively. }
\label{tab:set_reset}
\begin{tabular}{l|c|c|c|}
\cline{2-4}
& $A$ & $B$ & $C_{in}$ \\ \hline
\multicolumn{1}{|l|}{Current state} & 0 & 1 & 2 \\ \hline
\multicolumn{1}{|l|}{Current stored ($M_2$, $M_1$, $M_0$)} & (H, H, L) & (H, L, H) & (L, H, H) \\ \hline
\multicolumn{1}{|l|}{Next state} & 0 & 0 & 1 \\ \hline
\multicolumn{1}{|l|}{Next stored ($M_2$, $M_1$, $M_0$)} & (H, H, L) & (H, H, L) & (H, L, H) \\ \hline
\multicolumn{1}{|l|}{Action} & (x, x, x) & (x, R, S) & (R, S, x) \\ \hline
\end{tabular}
\vspace{-0.15in}
\end{table}
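The actions in Table \ref{tab:set_reset} follow mechanically from the one-hot encoding of Table \ref{tab:tap_logic}. The Python sketch below derives them (helper names are ours); `set' drives a memristor to $R_{LRS}$ (`L') and `reset' drives it to $R_{HRS}$ (`H'), as in the text.
\begin{verbatim}
# Derive set/reset actions from ternary one-hot states.
# Encoding: trit t stores (M2, M1, M0) with M_t = 'L' (LRS) and the rest
# 'H' (HRS); the "don't care" state 'x' stores ('H', 'H', 'H').

def encode(trit):
    """One-hot memristor triplet (M2, M1, M0) for a trit in {0,1,2,'x'}."""
    if trit == 'x':
        return ('H', 'H', 'H')
    return tuple('L' if i == trit else 'H' for i in (2, 1, 0))

def write_actions(current, nxt):
    """Per-memristor action: 'S' (set, H->L), 'R' (reset, L->H), 'x' (none)."""
    actions = []
    for m_cur, m_nxt in zip(encode(current), encode(nxt)):
        if m_cur == m_nxt:
            actions.append('x')
        elif m_nxt == 'L':
            actions.append('S')
        else:
            actions.append('R')
    return tuple(actions)

# Reproduces the B column of the write example: 1 -> 0 needs (x, R, S).
assert write_actions(1, 0) == ('x', 'R', 'S')
# Writing from a "don't care" only needs one set, as noted in the text.
assert write_actions('x', 2) == ('S', 'x', 'x')
\end{verbatim}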
\section{AP Operation}
\label{sec:AP_Operation}
A general-purpose AP enables the implementation of arithmetic functions such as addition, subtraction, multiplication and division as well as logical operations by relying on the truth tables of the desired function applied in a specific order. We refer to this as the look-up table based approach. The process is performed digit-wise and is repeated for multi-digit operations. The rows of the MvCAM array store the input vectors. For in-place operation, the output is written back to some or all of the input locations. All rows of the data array are processed in parallel. Each digit-wise operation comprises consecutive compare and write steps.
\begin{enumerate}
\item \textbf{Compare:} For every pass of the LUT, a masked key takes on the input vector values of this pass (see Table \ref{tab:binary_LUT} for the example of binary AP addition \cite{binary_AP}). The masked key is applied to all rows of the array and compared against the stored input data.
\item \textbf{Write:} A match for a row sets its Tag bit to a `1', while a mismatch for a row sets its Tag bit to a `0'. Tagged rows are overwritten by the corresponding output from the LUT consisting of a new masked key. For the example of the binary AP, the sum bit $S$ and the carry-out bit $C_{out}$ are written back to the input locations $B$ and $C_{in}$, respectively, keeping $A$ untouched. The in-place write-back of the output dictates the order of the passes in the LUT. This is required to avoid mistakenly revisiting in future passes rows that have already been overwritten, as will be discussed later. A behavioral sketch of a single compare-write pass is given right after this list.
\end{enumerate}
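As a minimal illustration of the two steps above, the following Python sketch (names ours) models one LUT pass over all rows of the array; a \texttt{None} entry in the key or output marks an unmasked column.
\begin{verbatim}
# Behavioral model of one LUT pass over all rows of the AP array.
# Rows are lists of digits; None in the key/output marks an unmasked column.

def lut_pass(rows, key, output):
    """Compare the masked key against every row in parallel; overwrite
    the masked columns of matching (tagged) rows with the LUT output."""
    tags = []
    for row in rows:
        match = all(k is None or row[i] == k for i, k in enumerate(key))
        tags.append(match)
    for row, tag in zip(rows, tags):
        if tag:
            for i, o in enumerate(output):
                if o is not None:
                    row[i] = o
    return tags

# Pass 1 of the binary adder LUT: input 110 -> output 101, with A kept
# untouched (masked out of the write).
rows = [[1, 1, 0], [0, 1, 1], [1, 1, 0]]
lut_pass(rows, key=[1, 1, 0], output=[None, 0, 1])
assert rows == [[1, 0, 1], [0, 1, 1], [1, 0, 1]]
\end{verbatim}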
\begin{table}[b]
\vspace{-0.15in}
\centering
\caption{Look-up table of the binary AP adder.}
\label{tab:binary_LUT}
\begin{tabular}{|ccc|ccc|c|}
\hline
\multicolumn{3}{|c|}{Input} & \multicolumn{3}{c|}{Output} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Pass\\ order\end{tabular}} \\
A & B & C & A & B & C & \\ \hline
0 & 0 & 0 & 0 & 0 & 0 & No action \\
0 & 0 & 1 & 0 & 1 & 0 & 3 \\
0 & 1 & 0 & 0 & 1 & 0 & No action \\
0 & 1 & 1 & 0 & 0 & 1 & 4 \\
1 & 0 & 0 & 1 & 1 & 0 & 2 \\
1 & 0 & 1 & 1 & 0 & 1 & No action \\
1 & 1 & 0 & 1 & 0 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 & No action \\ \hline
\end{tabular}
\end{table}
\subsection{Proposed State Diagram for LUT Generation}
The proper pass order for a given arithmetic function can be ensured as follows. Suppose that $x$ appears in the truth table of the function both as an input in one entry and as an output in some other entry; then the order of processing $x$ as an input must satisfy one of the two properties below:
\begin{enumerate}
\item The pass in which $x$ appears as an input must be tested before the pass in which $x$ appears as an output.
\item $x$ as an input results in `No action', i.e., the output to be overwritten is identical to the stored input. Such an input has no pass number because no action is needed, hence it will never be tested after the pass in which $x$ appears as an output. This implies that the order of the pass in which $x$ appears as an output is independent of the pass in which $x$ appears as an input.
\end{enumerate}
These properties ensure that the resulting passes are visited correctly.
Herein, we propose a directed state diagram representation of the truth table of the arithmetic or logic function to be implemented using AP. Our objective is to rely on this state diagram representation to identify the proper processing order of the function's truth table and, accordingly, generate its LUT.
The elements of the state diagram can be best described as follows.
\begin{itemize}
\item Directed edge: application of the arithmetic function under consideration.
\item State: stored input to be operated upon.
\item Next state: corresponding output as per the LUT.
\item $noAction$ state: state that remains the same upon in-place operation, that is, the LUT input is identical to its corresponding LUT output.
\end{itemize}
Without loss of generality, Fig. \ref{fig:binary_state_diagram} first illustrates the state diagram of the binary adder's truth table. In this example, the edge corresponds to the binary add operation, the state corresponds to the 1-bit input triplet $(A_i,\, B_i,\, C_{in})$ and the next state corresponds to the output triplet $(A_i,\, S_i,\, C_{out})$. Finally, the $noAction$ state is the one pointing to itself, indicating that it remains the same upon in-place addition, that is, $(A_i,\, S_i,\, C_{out}) = (A_i,\, B_i,\, C_{in})$.
The binary adder state diagram in Fig. \ref{fig:binary_state_diagram} is also labeled with the LUT pass order \cite{binary_AP}. An analysis of the pass order shows that the passes were ordered to avoid erroneously performing multiple consecutive additions on the same entry. This translates in the state diagram to the first pass overwriting `110' with `101', and the second pass overwriting `100' with `110'. No other pass can overwrite these outputs once their respective inputs are visited. On the other hand, if passes 1 and 2 are exchanged, `100' results in `110' after the first pass, which will be overwritten by `101' after the second pass as indicated by the directional flow of the state diagram. Such a domino effect is not desired. Therefore, it is evident that to construct the LUT for a generic arithmetic function, the order of the passes must be determined through a structured traversal of the directed state diagram.
\begin{figure}
\centering
\includegraphics[width=0.655\linewidth,height=0.65\linewidth]
{Figures/binary_adder_state_diagram}
\caption{State diagram of the AP binary adder. The states store the values for the triplets ($A$, $B$, $C_{in}$). The arrow represents the addition operation. The pass orders of Table \ref{tab:binary_LUT} are labeled.}
\label{fig:binary_state_diagram}
\vspace{-0.15in}
\end{figure}
\subsection{Automated LUT Generation}
Herein, we build upon our state diagram interpretation of the truth table to guide the automatic development of a general-purpose LUT. As we note from Fig. \ref{fig:binary_state_diagram}, the state diagram comprises a collection of trees whose roots are $noAction$ states. We note that the input-output pairs are connected through backward edges propagating to the roots. Our objective is to identify the proper order of passes for in-place operation so that no pass overwrites the outcome of earlier ones. This can be guaranteed if and only if the following holds for the state diagram.
\begin{enumerate}
\item The state diagram is a uni-directional graph with no cycles, i.e., no forward edges.
\item If the state diagram has cycles, then we should be able to break these cycles by redirecting forward edges backwards. If in the original state diagram input vector $x = (x_1,\, x_2)$ has its output $y = (x_1,\, y_2)$ creating a forward edge, we search for an alternate output $y' = (y_1,\, y_2)$ that forms a backward edge and breaks the cycle. $y'$ is a valid output so long as $y_2$ remains unchanged, since $y_2$ represents the output digit to be overwritten as per the LUT, while $y_1$ is a dummy extra written digit. Therefore, we need to invoke a larger vector write post the compare operation. This will be illustrated in the following example.
\end{enumerate}
For example, we implement the state diagram for the LUT-based ternary full adder (TFA) in the context of TAP. Fig. \ref{fig:TFA_state_diagram} presents the state diagram of the TFA's truth table. We perform in-place ternary addition with inputs $(A,\, B,\, C_{in})$. The outputs $(S,\, C_{out})$ overwrite $(B,\, C_{in})$, while $A$ is kept untouched. In the state diagram, if the input triplet $(A,\, B,\, C_{in})$ whose output $(A,\, S,\, C_{out})$ represents a forward edge forming a cycle, we search for an alternate output $(y_1,\, S,\, C_{out})$ that forms a backward edge and breaks the cycle. We then invoke a 3-trit write post the compare operation instead of the standard 2-trit write. Specifically, as illustrated by the dashed red edge in Fig. \ref{fig:TFA_state_diagram}, a direct implementation of the in-place ternary addition state diagram results in one cycle: state `101' leads to `120' and state `120' leads back to `101'. To resolve this problem, we overwrite the $A$ trit value to a `0' for the input `101'. Hence, input `101' now results in `020' as an output (see green edge in Fig. \ref{fig:TFA_state_diagram}) and input `120' results in `101' as an output. This incurs a minor cost consisting of an extra trit to be written for one of the passes. However, it eliminates the cycle and enables a smooth implementation of the LUT-based approach for the TFA.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth, height=0.85\linewidth]
{Figures/TFA_state_diagram.pdf}
\caption{State diagram of AP ternary full adder. For each subtree, the passes are determined based on a depth-first search to enforce the priorities.}
\label{fig:TFA_state_diagram}
\vspace{-0.15in}
\end{figure}
With a cycle-free state diagram, i.e., a backward propagation based input-output relation (left to right), we devise that the passes should progress to visit the trees of the state machine from right to left in a depth-first search (DFS) approach, starting from the root of each tree. Since the roots are $noAction$ nodes, we do not assign pass numbers to them and, hence, do not include them in the ordering of the passes. Algorithm \ref{alg:passes_order_nb} presented herein details the traversal scheme of the state diagram which ultimately determines the proper order of the passes for in-place operation. Table \ref{tab:ternary_nb_LUT} presents the resulting LUT for the TFA after applying Algorithm \ref{alg:passes_order_nb}.
\begin{algorithm}[h!]
\caption{Ordering of the passes for the LUT-based ternary full adder following the non-blocked approach.
\label{alg:passes_order_nb}}
\begin{algorithmic}[1]
\STATE Global pass number $p=0$
\STATE Global LUT length $L=length(LUT)$
\FORALL{$T_i$}
\item \textsc{BuildLUT}($T_i.root$)
\ENDFOR
\RETURN
\end{algorithmic}
\begin{algorithmic}[1]
\item[]
\item[\textbf{procedure}{ \textsc{BuildLUT}(state $j$)}]
\IF{$j.noAction == 0$}
\STATE $p++$
\STATE $j.passNum=p$
\ENDIF
\FORALL{$v \in j.child$}
\STATE \textsc{BuildLUT}($v$)
\ENDFOR
\RETURN
\end{algorithmic}
\end{algorithm}
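Algorithm \ref{alg:passes_order_nb} translates directly into the following Python sketch (field and function names are ours): a depth-first traversal of each tree that assigns pass numbers to Action states only.
\begin{verbatim}
# Python rendering of the non-blocked pass ordering: DFS over each tree
# of the state diagram, numbering only Action states.

class State:
    def __init__(self, label, no_action=False):
        self.label = label
        self.no_action = no_action
        self.children = []
        self.pass_num = None     # filled in by the traversal

def build_lut(roots):
    """Order the LUT passes for the non-blocked approach."""
    counter = [0]
    def visit(state):
        if not state.no_action:
            counter[0] += 1
            state.pass_num = counter[0]
        for child in state.children:
            visit(child)
    for root in roots:           # roots are the noAction states
        visit(root)

# Tiny example: a noAction root with two Action children, one of which
# has an Action child of its own.
root = State('root', no_action=True)
a, b = State('a'), State('b')
root.children = [a, b]
a.children = [State('c')]
build_lut([root])
assert (a.pass_num, a.children[0].pass_num, b.pass_num) == (1, 2, 3)
\end{verbatim}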
\begin{table}[]
\centering
\caption{Look-up table of the LUT-based TFA.}
\label{tab:ternary_nb_LUT}
\begin{tabular}{|ccc|ccc|c|}
\hline
\multicolumn{3}{|c|}{Input} & \multicolumn{3}{c|}{Output} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Pass\\ order\end{tabular}} \\
A & B & C & A & B & C & \\ \hline
0 & 0 & 0 & 0 & 0 & 0 & No action \\
0 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 2 & 0 & 2 & 0 & 21 \\
0 & 1 & 0 & 0 & 1 & 0 & No action \\
0 & 1 & 1 & 0 & 2 & 0 & 10 \\
0 & 1 & 2 & 0 & 0 & 1 & 2 \\
0 & 2 & 0 & 0 & 2 & 0 & No action \\
0 & 2 & 1 & 0 & 0 & 1 & 3 \\
0 & 2 & 2 & 0 & 1 & 1 & 11 \\
1 & 0 & 0 & 1 & 1 & 0 & 15 \\
1 & 0 & 1 & 0 & 2 & 0 & 12 \\
1 & 0 & 2 & 1 & 0 & 1 & 16 \\
1 & 1 & 0 & 1 & 2 & 0 & 14 \\
1 & 1 & 1 & 1 & 0 & 1 & 17 \\
1 & 1 & 2 & 1 & 1 & 1 & 18 \\
1 & 2 & 0 & 1 & 0 & 1 & 13 \\
1 & 2 & 1 & 1 & 1 & 1 & 19 \\
1 & 2 & 2 & 1 & 2 & 1 & 20 \\
2 & 0 & 0 & 2 & 2 & 0 & 8 \\
2 & 0 & 1 & 2 & 0 & 1 & No action \\
2 & 0 & 2 & 2 & 1 & 1 & 5 \\
2 & 1 & 0 & 2 & 0 & 1 & 9 \\
2 & 1 & 1 & 2 & 1 & 1 & No action \\
2 & 1 & 2 & 2 & 2 & 1 & 4 \\
2 & 2 & 0 & 2 & 1 & 1 & 7 \\
2 & 2 & 1 & 2 & 2 & 1 & No action \\
2 & 2 & 2 & 2 & 0 & 2 & 6 \\ \hline
\end{tabular}
\vspace{-0.1in}
\end{table}
\section{Optimized AP Operation}
\label{sec:Optimized_AP}
In the first approach, hereafter referred to as the \emph{non-blocked} approach, similar to traditional AP operation, each pass comprises a compare cycle followed by a write cycle. Often, for a given function, different input vectors result in similar output vectors. For example, for the TFA, different input triplets $(A,\, B,\, C_{in})$ often result in similar output pairs $(S,\, C_{out})$. Since write cycles are much more expensive than compare cycles, we can leverage this fact to devise a second approach that capitalizes on these common output vectors.
As such, we propose a more optimized approach that targets the dual objective of determining the proper pass order and grouping different input vectors that share the same output vector. The proposed approach starts by processing compare cycles for the input vectors that have the same output vector, then the write action occurs once all input vectors of a group (or block) are visited, thereby improving the efficiency of the LUT-based approach. Hereafter, we refer to this as the \emph{blocked} approach.
In order to traverse the state diagram using the blocked approach in such a way as to determine the correct ordering of the passes, we adopt a breadth-first search (BFS)-like traversal of the nodes. Every time a new block is determined, the state diagram is dynamically updated to eliminate the most recently chosen block nodes. After each update, we group nodes into the same block in terms of (a) children of the same node, and (b) other nodes at the same level sharing the same write action. Note that the set of nodes (b) may not have necessarily existed in the initial state diagram at the same level as the set of nodes (a); however, dynamic updates of the state diagram as we construct the LUT will enforce such conditions, as we explain next.
To implement the algorithm for the automatic generation of the LUT using the blocked approach, we consider that each node in the state diagram represents an input or output vector depending on whether the node is subject to or the result of an add operation, respectively. Each node is also associated with a set of attributes detailed in Table \ref{tab:LUT_definitions}. One of the attributes is $writeDim$ representing the dimension of the output vector to be written when the node is regarded as an input state. Another attribute is the $outVal$ array whose entries represent the `$n$-ary'-to-decimal value of the node's write action when it is regarded as an output state. The need for the $outVal$ array of entries is explained as follows using the TFA as an example. For the TFA, when the node is the result of a 2-trit write-back, $outVal(writeDim=2)$ stores the ternary-to-decimal conversion of the written $BC$ value. In the event of breaking cycles, we may add an extra dummy dimension and invoke a 3-trit write-back. The node that is regarded as the input of the operation will have $writeDim=3$ and the node that is regarded as the output will have $outVal(writeDim=3)$ store the corresponding equivalent decimal written $ABC$ value. Note that these values will be adjusted as explained later to avoid overlap between the different decimal value conversions. For example, $ABC = 000$ will be mapped to a different number than $BC = 00$ as indicated in line 5 of Algorithm \ref{alg:filling_grplvl}. This will help in properly differentiating grouping of nodes that have the same parent but different write action dimensions.
Thus, nodes at the same level having the same $writeDim$ and parent $outVal$ share the same write action. As such, for a specific node, we rely on its parent $outVal(writeDim)$ value for the grouping and use it as a key component of the dynamic $grpLvl$ table that we rely on to guide the BFS-like traversal algorithm. The table stores the number of nodes belonging to the same group, i.e., having a similar write action, in each level of the tree. The algorithm performs pass ordering and grouping by updating the $grpLvl$ table after each determined block to reflect updates to the state diagram. The algorithm proceeds as follows.
\begin{table}[!t]
\centering
\caption{State attributes definitions for the blocked and non-blocked algorithms.}
\label{tab:LUT_definitions}
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{Attribute} & \multicolumn{1}{c|}{Definition} \\ \hline
$noAction$ & \begin{tabular}[c]{@{}l@{}}Determines the type of the state:\\ 1 for a No Action state\\ 0 for an Action state\end{tabular} \\ \hline
$grpNum$ & Specific write group of the state \\ \hline
$level$ & State level as indicated in Fig. \ref{fig:TFA_state_diagram} \\ \hline
$outVal$ & `$n$-ary'-to-decimal conversion of the state vector \\ \hline
$writeDim$ & Write-back dimension for the output vector of the state \\ \hline
$parent$ & \begin{tabular}[c]{@{}l@{}}Pointer to the parent of the state which is accessible from\\ the state through a backward edge\end{tabular} \\ \hline
$child$ & \begin{tabular}[c]{@{}l@{}}Pointer to the child of the state which is accessible from\\ the state through a forward edge\end{tabular} \\ \hline
$passNum$ & Pass order assigned to the state in the LUT \\ \hline
\end{tabular}
\vspace{-0.15in}
\end{table}
\subsubsection{$grpLvl$ initialization}
To populate the initial $grpLvl$ table, we apply Algorithm \ref{alg:filling_grplvl}.
Table \ref{tab:grp_lvl} represents the initial $grpLvl$ table for the TFA state diagram corresponding to Fig. \ref{fig:TFA_state_diagram}.
For each state which is an $Action$ state, we find $l$, the level of the node as indicated by Fig. \ref{fig:TFA_state_diagram}, and $g$, the $outVal(writeDim)$ value of the node's parent. For example, the group number $g$ for node `101' in the TFA state diagram of Fig. \ref{fig:TFA_state_diagram} is $outVal(3)$ of its parent node `020'. This corresponds to the adjusted value $6+\sum_{i=0}^{2}3^i=19$, where `6' is the ternary-to-decimal conversion of the vector `020'. Whereas the group number $g$ for node `011' is $outVal(2)$ of its parent node `020', corresponding to the adjusted value $6+\sum_{i=0}^{1}3^i=10$, where `6' is the ternary-to-decimal conversion of the vector `20'. Accordingly, for each $Action$ node in level $l$, an initial group number $grpNum=g$ is assigned to the node, and the entry corresponding to group $g$ and level $l$ in the $grpLvl$ table is incremented. Entries in the $grpLvl$ table thus reflect the number of nodes that share the same level and write action. For example, 5 nodes in Level 2 share the same write action $BC=01_3=1_{10}$, having an adjusted value of $1+\sum_{i=0}^{1}3^i=5$. Thus, $grpLvl[l=2][g=5]=5$ as shown in Table \ref{tab:grp_lvl}.
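The adjusted group numbers used above are computed as follows; the offsets $\sum_{i=0}^{writeDim-1} n^i$ simply shift the decimal conversions of write vectors of different dimensions into disjoint ranges. The short Python sketch below (names ours) reproduces the two worked values.
\begin{verbatim}
# Adjusted group number for a node, radix n = 3 for the TFA.

def out_val(vector, n=3):
    """n-ary-to-decimal conversion of a write vector (most significant
    digit first)."""
    val = 0
    for digit in vector:
        val = val * n + digit
    return val

def group_number(parent_write_vector, n=3):
    write_dim = len(parent_write_vector)
    return (out_val(parent_write_vector, n)
            + sum(n**i for i in range(write_dim)))

# Node '101': parent '020' written as a 3-trit vector, g = 6 + 13 = 19.
assert group_number((0, 2, 0)) == 19
# Node '011': parent write vector '20', g = 6 + 4 = 10.
assert group_number((2, 0)) == 10
\end{verbatim}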
\begin{algorithm}[h!]
\caption{Initializing the $grpLvl$ table.}
\label{alg:filling_grplvl}
\begin{algorithmic}[1]
\item [Global pass number $p=0$]
\item [\# Preparing Action states group table $grpLvl$]
\STATE $S$ is the set of all states $\forall T_i$
\FORALL{states $j \in S$}
\IF{$j.noAction==0$}
\STATE Level $l=j.level$
\STATE Group number $g=j.parent.outVal(writeDim)+\sum_{i=0}^{writeDim-1}(n^{i})$
\STATE $j.grpNum=g$
\STATE $grpLvl[l][g]++$
\ENDIF
\ENDFOR
\STATE $G=max({g})$
\STATE $L=max({l})$
\item[\# Use $grpLvl$ table to build LUT for Action states]
\STATE
\textsc{BuildLUTBlocked($S$, $grpLvl$, $G$, $L$)}
\RETURN
\end{algorithmic}
\end{algorithm}
\begin{table*}[]
\centering
\caption{$grpLvl$ table initial values corresponding to Fig. \ref{fig:TFA_state_diagram}. It indicates that Group 19 should be processed first since it is the only group that has no entries beyond Level 1. Note that for the TFA example, $writeDim=1$ does not exist, and thus by default, no nodes can have $grpNum=\{1,\, 2,\, 3\}$.}
\label{tab:grp_lvl}
\begin{tabular}{cc|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\cline{3-21}
&
&
\multicolumn{19}{c|}{\cellcolor[HTML]{EFEFEF}$grpNum$} \\ \cline{3-21}
&
&
\multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}$parent.$\\ $outVal(1)$\\ $+\sum_{i=0}^{0}3^i$\end{tabular}} &
\multicolumn{9}{c|}{$parent.outVal(2)+\sum_{i=0}^{1}3^i$} &
\multicolumn{7}{c|}{$parent.outVal(3)+\sum_{i=0}^{2}3^i$} \\ \cline{3-21}
&
&
\textbf{1} &
\textbf{2} &
\textbf{3} &
\textbf{4} &
\textbf{5} &
\textbf{6} &
\textbf{7} &
\textbf{8} &
\textbf{9} &
\textbf{10} &
\textbf{11} &
\textbf{12} &
\textbf{13} &
\textbf{14} &
\textbf{15} &
\textbf{16} &
\textbf{17} &
\textbf{18} &
\textbf{19} \\ \hline
\multicolumn{1}{|c|}{\cellcolor[HTML]{EFEFEF}} &
\textbf{1} &
0 &
0 &
0 &
0 &
{\color[HTML]{DD0808} 1} &
0 &
{\color[HTML]{DD0808} 1} &
{\color[HTML]{DD0808} 2} &
0 &
{\color[HTML]{DD0808} 2} &
{\color[HTML]{DD0808} 1} &
0 &
0 &
0 &
0 &
0 &
0 &
0 &
{\color[HTML]{DD0808} 1} \\ \cline{2-21}
\multicolumn{1}{|c|}{\cellcolor[HTML]{EFEFEF}} &
\textbf{2} &
0 &
0 &
0 &
0 &
{\color[HTML]{DD0808} 5} &
{\color[HTML]{DD0808} 1} &
0 &
{\color[HTML]{DD0808} 1} &
0 &
{\color[HTML]{DD0808} 1} &
0 &
0 &
0 &
0 &
0 &
0 &
0 &
0 &
0 \\ \cline{2-21}
\multicolumn{1}{|c|}{\cellcolor[HTML]{EFEFEF}} &
\textbf{3} &
0 &
0 &
0 &
0 &
0 &
0 &
0 &
{\color[HTML]{DD0808} 2} &
0 &
{\color[HTML]{DD0808} 1} &
0 &
0 &
0 &
0 &
0 &
0 &
0 &
0 &
0 \\ \cline{2-21}
\multicolumn{1}{|c|}{\multirow{-4}{*}{\cellcolor[HTML]{EFEFEF}$level$}} &
\textbf{4} &
0 &
0 &
0 &
0 &
0 &
0 &
{\color[HTML]{DD0808} 1} &
0 &
0 &
0 &
{\color[HTML]{DD0808} 1} &
0 &
0 &
0 &
0 &
0 &
0 &
0 &
0 \\ \hline
\end{tabular}
\vspace{-0.15in}
\end{table*}
\subsubsection{Selecting the next block/group}
At each iteration, our objective is to find the next target block/group $g_{tgt}$ for an adequate ordering of the passes in the LUT. We keep in mind the following. Nodes that reside in Level 1 must be processed first from a pass ordering perspective (these qualify as nodes whose parents have already been processed or whose parents are $noAction$ states). For purposes of grouping, we must therefore look for groups that are fully or maximally residing in Level 1. In fact, we consider the following two cases as indicated in Algorithm \ref{alg:finding_gtgt}.
\begin{itemize}
\item In an ideal scenario, there exists a group that has nodes belonging to the top level (Level 1) and no nodes in lower levels. This group would qualify as the next target group $g_{tgt}$.
\item Another possible scenario is that no group has all its nodes in the top level. In this case, we choose the group that has a maximum number of nodes in the top level as $g_{tgt}$. However, we need to break this group since we can only process states belonging to the top level. We split $g_{tgt}$ by creating a new group for the remaining states present in lower levels. In this way, the total number of groups $G$ is incremented, and $g_{tgt}$ will only contain nodes that are in the top level.
\end{itemize}
To illustrate, the initial $grpLvl$ values shown in Table \ref{tab:grp_lvl} indicate that Group 19 should be processed first. It is the only group that has entries in the top level and no entries in lower levels. Supplementary Tables 1, 2 and 3 present the $grpLvl$ tables for the following three iterations; at each iteration, we identify all possible new $g_{tgt}$ groups. We continue until all the entries in the $grpLvl$ table at the top level become zero.
\begin{algorithm}
\caption{Finding the next block/group $g_{tgt}$.}
\label{alg:finding_gtgt}
\begin{algorithmic}[1]
\item[\textbf{procedure}{ \textsc{BuildLUTBlocked}($S$, $grpLvl$, $G$, $L$)}]
\STATE $topLevel=1$
\WHILE{$grpLvl[topLevel][.] \neq zeros(1,G)$}
\STATE $found=-1$
\FOR {$g=0 \to G$}
\STATE $cond_1=(grpLvl[topLevel][g]>0)$
\STATE $cond_2=(\sum_{l=2}^{L}{grpLvl[l][g]} == 0)$
\IF{$cond_1$ $\AND$ $cond_2$}
\STATE $g_{tgt}=g$
\STATE \textsc{updateLUT($g_{tgt}$)}
\STATE $found=1$
\ENDIF
\ENDFOR
\IF{$found==-1$}
\STATE $[g_{tgt}, max_{grpLvl}]=max(grpLvl[topLevel][.])$
\item[\# Create new group for remaining states of $g_{tgt}$ in]
\item[\# lower levels]
\STATE $G++$
\FOR{$l=2 \to L$}
\STATE $grpLvl[l][G]=grpLvl[l][g_{tgt}]$
\STATE $grpLvl[l][g_{tgt}]=0$
\ENDFOR
\FORALL{states $j$}
\IF{($j.grpNum==g_{tgt}$) $\AND$ ($j.level>1$)}
\STATE $j.grpNum=G$
\ENDIF
\ENDFOR
\STATE \textsc{updateLUT}($g_{tgt}$)
\ENDIF
\ENDWHILE
\RETURN
\end{algorithmic}
\end{algorithm}
\subsubsection{Updating $grpLvl$ and assigning pass numbers}
We rely on Algorithm \ref{alg:passes_order_blocked} to order the passes and build the LUT. In each iteration, once the next $g_{tgt}$ is identified, we extract the nodes with $grpNum=g_{tgt}$ from the state diagram and assign them as the next block to be processed in the LUT. We label them accordingly with their corresponding pass number. Note that within the $g_{tgt}$ group, passes can be numbered arbitrarily. For example, in Group 2 shown in Table \ref{tab:ternary_blocked_LUT}, the order of triplet `102' can be interchanged with that of any other triplet from the same group, say `120'. The children of the selected group nodes are then elevated to Level 1 and their subtree nodes are elevated by one level as well. The $grpLvl$ table and the state diagram are updated to reflect the changes accordingly. Finally, we set the top level entry in the $grpLvl$ table corresponding to $g_{tgt}$ to zero. Hence, as the state diagram is traversed and pass numbers are allocated to the nodes, entries in the $grpLvl$ table will be updated, mimicking updates in the state diagram. This is exemplified by Supplementary Figs. 1, 2 and 3 and their corresponding $grpLvl$ Tables 1, 2 and 3. The resulting LUT for the TFA following the blocked approach is shown in Table \ref{tab:ternary_blocked_LUT}.
\begin{algorithm}
\caption{Updating the $grpLvl$ table and ordering the passes of the LUT.}
\label{alg:passes_order_blocked}
\begin{algorithmic}[1]
\item[\textbf{procedure}{ \textsc{updateLUT}($g_{tgt}$)}]
\STATE $topLevel=1$
\item[\# Generate pass number for states in the target group]
\FORALL{states $j$}
\IF{$j.grpNum==g_{tgt}$}
\STATE $p++$
\STATE $j.passNum=p$
\FORALL{$v \in$ tree whose root is $j$}
\STATE $grpLvl[v.level-1][v.grpNum]++$
\STATE $grpLvl[v.level][v.grpNum]--$
\STATE $v.level--$
\ENDFOR
\ENDIF
\ENDFOR
\STATE $grpLvl[topLevel][g_{tgt}]=0$
\RETURN
\end{algorithmic}
\end{algorithm}
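Algorithms \ref{alg:filling_grplvl}, \ref{alg:finding_gtgt} and \ref{alg:passes_order_blocked} combine into the following compact Python sketch. It is a simplified rendering under our own reading of the elevation step (only the strict descendants of a processed node are moved up, matching the textual description); the $grpLvl$ table is a nested dictionary, and $grpNum$ is assumed precomputed from the parent's write vector as in the adjusted-group-number sketch above.
\begin{verbatim}
from collections import defaultdict

def descendants(state):
    """All strict descendants of `state` (its subtree, excluding itself)."""
    out = []
    for c in state.children:
        out.append(c)
        out.extend(descendants(c))
    return out

def build_lut_blocked(states):
    """Blocked pass ordering; each state has fields no_action, grp_num,
    level, children and pass_num (cf. the state attribute table)."""
    grp_lvl = defaultdict(lambda: defaultdict(int))
    for s in states:                                 # initialization
        if not s.no_action:
            grp_lvl[s.level][s.grp_num] += 1
    next_group = 1 + max(s.grp_num for s in states if not s.no_action)
    p = 0

    def update_lut(g_tgt):                           # pass numbering
        nonlocal p
        for j in states:
            if not j.no_action and j.grp_num == g_tgt:
                p += 1
                j.pass_num = p
                for v in descendants(j):             # elevate the subtree
                    grp_lvl[v.level][v.grp_num] -= 1
                    grp_lvl[v.level - 1][v.grp_num] += 1
                    v.level -= 1
        grp_lvl[1][g_tgt] = 0

    while any(grp_lvl[1].values()):                  # group selection
        below = [l for l in list(grp_lvl) if l > 1]
        ready = [g for g, c in list(grp_lvl[1].items())
                 if c > 0 and all(grp_lvl[l].get(g, 0) == 0 for l in below)]
        if ready:
            for g in ready:
                update_lut(g)
        else:                                        # split the largest group
            g_tgt = max(grp_lvl[1], key=grp_lvl[1].get)
            for s in states:
                if not s.no_action and s.grp_num == g_tgt and s.level > 1:
                    grp_lvl[s.level][g_tgt] -= 1
                    grp_lvl[s.level][next_group] += 1
                    s.grp_num = next_group
            next_group += 1
            update_lut(g_tgt)
\end{verbatim}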
\begin{table}[]
\centering
\caption{Look-up table of the LUT-based TFA following the blocked approach.}
\label{tab:ternary_blocked_LUT}
\begin{tabular}{|ccc|ccc|c|c|c|}
\hline
\multicolumn{3}{|c|}{Input} &
\multicolumn{3}{c|}{Output} &
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Pass\\ order\end{tabular}} &
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Group\\ number\end{tabular}} &
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Write\\ action\end{tabular}} \\
A & B & C & A & B & C & & & \\ \hline
1 & 0 & 1 & 0 & 2 & 0 & 1 & 1 & W020 \\ \hline
1 & 0 & 2 & 1 & 0 & 1 & 2 & \multirow{4}{*}{2} & \multirow{4}{*}{W01} \\
1 & 1 & 1 & 1 & 0 & 1 & 3 & & \\
1 & 2 & 0 & 1 & 0 & 1 & 4 & & \\
2 & 1 & 0 & 2 & 0 & 1 & 5 & & \\ \hline
1 & 1 & 2 & 1 & 1 & 1 & 6 & \multirow{4}{*}{3} & \multirow{4}{*}{W11} \\
1 & 2 & 1 & 1 & 1 & 1 & 7 & & \\
2 & 0 & 2 & 2 & 1 & 1 & 8 & & \\
2 & 2 & 0 & 2 & 1 & 1 & 9 & & \\ \hline
0 & 0 & 2 & 0 & 2 & 0 & 10 & \multirow{4}{*}{4} & \multirow{4}{*}{W20} \\
0 & 1 & 1 & 0 & 2 & 0 & 11 & & \\
1 & 1 & 0 & 1 & 2 & 0 & 12 & & \\
2 & 0 & 0 & 2 & 2 & 0 & 13 & & \\ \hline
1 & 2 & 2 & 1 & 2 & 1 & 14 & \multirow{2}{*}{5} & \multirow{2}{*}{W21} \\
2 & 1 & 2 & 2 & 2 & 1 & 15 & & \\ \hline
0 & 0 & 1 & 0 & 1 & 0 & 16 & \multirow{2}{*}{6} & \multirow{2}{*}{W10} \\
1 & 0 & 0 & 1 & 1 & 0 & 17 & & \\ \hline
2 & 2 & 2 & 2 & 0 & 2 & 18 & 7 & W02 \\ \hline
0 & 1 & 2 & 0 & 0 & 1 & 19 & \multirow{2}{*}{8} & \multirow{2}{*}{W01} \\
0 & 2 & 1 & 0 & 0 & 1 & 20 & & \\ \hline
0 & 2 & 2 & 0 & 1 & 1 & 21 & 9 & W11 \\ \hline
0 &
0 &
0 &
0 &
0 &
0 &
No action &
\multicolumn{1}{l|}{\multirow{6}{*}{}} &
\multirow{6}{*}{} \\
0 & 1 & 0 & 0 & 1 & 0 & No action & \multicolumn{1}{l|}{} & \\
0 & 2 & 0 & 0 & 2 & 0 & No action & \multicolumn{1}{l|}{} & \\
2 & 0 & 1 & 2 & 0 & 1 & No action & \multicolumn{1}{l|}{} & \\
2 & 1 & 1 & 2 & 1 & 1 & No action & \multicolumn{1}{l|}{} & \\
2 & 2 & 1 & 2 & 2 & 1 & No action & \multicolumn{1}{l|}{} & \\ \hline
\end{tabular}
\vspace{-0.15in}
\end{table}
\subsubsection*{\textbf{Circuits to Enable the Blocked Approach}} We note that there is a minimal cost overhead for blocking to delay the write action until the end of the block. Our proposed solution is to add to each row a D flip-flop clocked by the Tag bit.
Prior to processing a block, write enable signals are discharged. Hence, as we traverse the LUT to process the passes of the block, a match for a row will have its Tag bit toggle from `0' to `1', setting the write enable signal at the output of the flip-flop to a `1'. At the end of each block, the rows for which the flip-flop outputs are `1' are overwritten. This ensures that all rows that have matches within the block will be overwritten together by the same output. The flip-flop is reset after every block. Timewise, the flip-flop's toggling time cost can be hidden, whereas storage-wise, one extra flip-flop per row is needed.
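A plausible behavioral reading of this per-row circuit is the following Python sketch (names ours): the Tag bit's rising edge clocks a flip-flop whose data input is tied to `1', and the latched output is read and cleared at the end of every block. Timing details are abstracted away.
\begin{verbatim}
class RowWriteEnable:
    """Behavioral model of one row's Tag-clocked D flip-flop (D tied to 1)."""
    def __init__(self):
        self.tag = 0
        self.enable = 0

    def compare_result(self, match):
        new_tag = 1 if match else 0
        if self.tag == 0 and new_tag == 1:   # rising Tag edge clocks the DFF
            self.enable = 1
        self.tag = new_tag

    def end_of_block(self):
        """Read the write-enable and reset the flip-flop for the next block."""
        enable, self.enable = self.enable, 0
        return enable

# A row matching any pass of a block is overwritten once, at block end.
row = RowWriteEnable()
for match in (False, True, False):   # compare cycles of one block
    row.compare_result(match)
assert row.end_of_block() == 1
\end{verbatim}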
\section{Results and Analysis}
\label{sec:results}
In this section, we study the proposed ternary adder implementations (non-blocked and blocked) in terms of energy, delay and area, and we compare them against the binary AP adder and other ternary adder implementations. First, we study the characteristics of the ``3T3R'' cell. For our experimental results, we rely on the 45nm predictive technology model \cite{asu} for our simulations. The transistor threshold voltage is $V_t=0.4V$, and $V_{DD}=0.8V$.
\subsection{Design Space Exploration: QCAM Cell Dynamic Range and Energy Analysis}
In our ternary AP adder design, a 1-trit addition involves the comparison of the key-mask pair with three stored triplets of memristor states $(M_2,\, M_1,\, M_0)$, where each triplet corresponds to one of the trits $A_i$, $B_i$ and $C_{in}$. The outcome of the comparison can result in a full match (fm), one mismatch (1mm), two mismatches (2mm) or three mismatches (3mm).
For purposes of the analysis of the ``3T3R'' cell, we define the dynamic range (DR) as the maximum voltage difference between the fm and the closest mismatch case, which is 1mm \cite{bahloul2017design,rakka2020design}, measured after 1ns of evaluation time of the $(S_2,\, S_1,\, S_0)$ signal triplet as indicated below.
\begin{equation}
DR=V_{fm}-V_{1mm}
\end{equation}
where $V_{fm}$ and $V_{1mm}$ represent the ML voltages for the fm and 1mm states, respectively. Typically, for accurate sensing of the comparison outcome, we aim for a high dynamic range. However, a high DR comes at the expense of increased compare energy consumption.
To further assess this, we define for purposes of the add operation the compare energies $E_{fm}$, $E_{1mm}$, $E_{2mm}$, $E_{3mm}$ corresponding to the fm, 1mm, 2mm and 3mm states, respectively. We rely on HSPICE simulations to study the dynamic range and compare energies for the ``3T3R'' cell in the context of the LUT-based ternary adder. We assess these metrics for the following design space parameter combinations.
Without loss of generality, we set the total number of cells per row $N=41$ to enable 20-trit addition, where each of the $A$ and $B$ vectors has 20 cells, and we have one extra cell for the $C_{in}$ trit. We also sweep $R_L \in \{20, 30, 50, 100\} K\Omega$, and set $R_H=\alpha*R_L$ where $\alpha \in \{10, 20, 30, 40, 50\}$. A capacitive load $C_L=100fF$ is used for the comparator to properly latch $V_{ML}$ and distinguish between the fm and 1mm states due to fast discharge.
Fig. \ref{fig:3T3R_DR} presents the corresponding dynamic range values for the ``3T3R'' cell for 20-trit addition as a function of $R_L$ and $\alpha$. The maximum, and thus best, dynamic range is observed for the lowest $R_L$ values. For example, $DR \approx 240mV$ when $R_L=20K \Omega$ and $\alpha=50$. The compare energy for the ``3T3R'' cell for 20-trit addition as a function of $R_L$ and $\alpha$ is plotted in Fig. \ref{fig:3T3R_energy}. For the same $R_L=20K \Omega$, the lowest energy is obtained at the highest $\alpha=50$. In fact, for $R_L=20K \Omega$, when $\alpha$ increases from 10 to 50, $E_{fm}$ drops by $71.61\%$, $E_{1mm}$ drops by $22.27\%$, $E_{2mm}$ drops by $9.45\%$ and $E_{3mm}$ drops by $4.37\%$.
As such, for the remaining experiments, we adopt the memristor values $(R_L, R_H) = (20K\Omega, 1M\Omega)$ which provides the best dynamic range with the corresponding lowest compare energy consumption for this $R_L$ value.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]
{Figures/3T3R_DR.pdf}
\caption{Dynamic range for the ``3T3R'' cell for 20-trit addition as a function of $R_L$ and $\alpha$.}
\label{fig:3T3R_DR}
\vspace{-0.15in}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]
{Figures/3T3R_energy.pdf}
\caption{Compare energy for the ``3T3R'' cell for 20-trit addition as a function of $R_L$ and $\alpha$.}
\label{fig:3T3R_energy}
\vspace{-0.15in}
\end{figure}
\subsection{Evaluation Against Binary System}
As previously mentioned, the optimal number-system radix is found to be $e$, which lies between the binary and ternary systems. Hence, we evaluate the TAP performance against the binary AP. Herein, we compare the ternary LUT-based addition to the respective binary LUT-based addition in terms of energy and area. For this, we employ the non-blocked approach since these metrics are common to both the blocked and non-blocked approaches. The following section analyzes the delay of the different proposed approaches.
Thus, in our experiments, we study the average energy for $p$-trit addition in comparison to the equivalent $q$-bit addition for different $p$ and $q$ values, where $p \in \{5t,10t,20t,32t,40t,80t\}$ and $q \in \{8b,16b,32b,51b, 64b, 128b\}$, respectively. For example, we compare $p=20t$ representing 20-trit addition to the equivalent $q=32b$ representing 32-bit addition.
We rely on HSPICE to characterize the compare energy for 1-bit (1-trit) addition in the context of LUT-based binary (ternary) adder for the 2T2R \cite{2T2R_fouda} (3T3R) cell with $R_{L} = 20K\Omega$ and $R_{H}=1M \Omega$. We adopt $C_L=100fF$ to correctly latch $V_{ML}$, and the number of cells per row is equal to $2q+1$ ($2p+1$). We set the evaluate time to 1ns for which we observe a DR approximately equal to 200mV for the different simulations, allowing for good differentiation between the match and mismatch cases. The precharge time is also set to 1ns.
We developed a functional simulator using MATLAB to obtain the average for the compare energy and write energy for both the ternary and binary LUT addition, relying on a total of 10,000 $p$-trit and $q$-bit additions. The functional simulator estimates the number of set/reset operations taking into consideration whether we are writing only one, two, or all three $(A,\, S,\, C_{out})$ cells based on the different LUTs and number of sets/resets required per cell (see Table \ref{tab:set_reset}). We assume the memristor write energy per set or reset operation to be on average around 1nJ as was stated in \cite{write_energy} for different programming and initial memristor conditions. The functional simulator also utilizes the 1-bit and 1-trit compare energy values obtained using HSPICE to estimate the $q$-bit ($p$-trit) compare energy based on the different match/mismatch combinations.
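As an illustration of the write-energy accounting, the sketch below (names and the example write are ours, the latter taken from Table \ref{tab:set_reset}) counts set/reset operations per overwritten trit at the assumed average of 1nJ per operation; the full simulator additionally folds in the HSPICE compare energies.
\begin{verbatim}
# Write-energy accounting: each ternary rewrite costs one set plus one
# reset, except writes to/from the "don't care" state.

E_WRITE = 1e-9   # assumed 1 nJ per memristor set or reset

def ops_per_trit(cur, nxt):
    if cur == nxt:
        return 0
    if cur == 'x' or nxt == 'x':
        return 1                  # single set (from 'x') or reset (to 'x')
    return 2                      # one set plus one reset (one-hot encoding)

def write_energy(writes):
    """writes: iterable of (current, next) trit pairs overwritten by a pass."""
    return sum(ops_per_trit(c, n) for c, n in writes) * E_WRITE

# The write example: A unchanged, B: 1 -> 0 and C_in: 2 -> 1.
assert write_energy([(0, 0), (1, 0), (2, 1)]) == 4e-9
\end{verbatim}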
For purposes of area comparison, we rely on the number of cells per row for the $q$-bit ($p$-trit) addition assuming that the ``2T2R'' cell area is 0.67x the area of one ``3T3R'' cell. Results are indicated in Table \ref{tab:energy_area}. Overall, compared to the LUT-based binary addition, the LUT-based ternary addition results in about 12.6\% reduction in the total number of sets/resets needed, 12.25\% reduction in total energy and 6.2\% area reduction.
\begin{table*}[]
\centering
\caption{Energy and area comparison of the ternary AP adder with the binary AP adder \cite{binary_AP}.}
\label{tab:energy_area}
\begin{tabular}{ll|cc|cc|cc|cc|cc|cc|}
\cline{3-14}
& & \textbf{8b} & \textbf{5t} & \textbf{16b} & \textbf{10t} & \textbf{32b} & \textbf{20t} & \textbf{51b} & \textbf{32t} & \textbf{64b} & \textbf{40t} & \textbf{128b} & \textbf{80t} \\ \hline
\multicolumn{1}{|l|}{\multirow{4}{*}{Energy}} & \#Set = \#Reset & 5.99 & 5.22 & 11.99 & 10.53 & 24.04 & 21.02 & 38.24 & 33.67 & 47.98 & 42.17 & 95.98 & 84.54 \\ \cline{2-14}
\multicolumn{1}{|l|}{} & Write energy (nJ) & 11.99 & 10.44 & 23.99 & 21.06 & 48.07 & 42.04 & 76.48 & 67.35 & 95.96 & 84.33 & 192.0 & 169.1 \\ \cline{2-14}
\multicolumn{1}{|l|}{} & Compare energy (pJ) & 0.94 & 3.99 & 1.91 & 8.06 & 3.90 & 16.4 & 6.36 & 26.84 & 8.11 & 34.0 & 17.5 & 72.58 \\ \cline{2-14}
\multicolumn{1}{|l|}{} & Total energy (nJ) & 11.99 & 10.44 & 23.99 & 21.07 & 48.07 & 42.06 & 76.49 & 67.38 & 95.97 & 84.36 & 192.02 & 169.17 \\ \hline
\multicolumn{2}{|l|}{Normalized Area} & 16x & 15x & 32x & 30x & 64x & 60x & 102x & 96x & 128x & 120x & 256x & 240x \\ \hline
\end{tabular}
\vspace{-0.1in}
\end{table*}
\subsection{Performance Comparison to Other Ternary Adder Designs}
In this section, we compare the proposed ternary LUT-adder approaches against other ternary adder implementations.
Particularly, we compare the total energy consumed by the proposed ternary AP (TAP) adder implementations against hybrid CNTFET and memristor-based implementations of the carry-ripple adder (CRA), carry-skip adder (CSA) and carry-lookahead adder (CLA) \cite{other_adders}. We also conduct a delay analysis for the TAP blocked and non-blocked approaches in comparison to the LUT-based binary adder and the CLA.
Our comparison is based on extrapolating the authors' 4-bit adder's power and delay simulations to reflect energy consumption and delay values for 20-trit addition at $V_{DD}=0.8V$.
For our adder implementation, the consumed energy does not differ between the non-blocked and blocked approaches. We thus rely on Table \ref{tab:energy_area} to obtain the total energy for our TAP implementation.
Fig. \ref{fig:energy_comparisons} presents the energy for the different ternary adder implementations as a function of the number of rows (\#Rows, i.e., number of parallel additions). TAP consumes about 52.64\% less energy than the CLA, which in turn demonstrated lower energy consumption compared to the CSA and CRA. We note that for all adder implementations, the energy grows linearly with the number of add operations.
We define the delay as the number of clock cycles needed to concurrently compare and write multiple rows within the data array. While in the non-blocked approach every compare is followed by a write action, the blocked approach delays the write action until the end of the sequence of block compares, thus improving the overall delay of the adder. Note that, irrespective of whether a match occurs or not, we account for the write cycle.
Fig. \ref{fig:delay_comparisons} shows the delay for the LUT ternary adder using the non-blocked and blocked approaches, along with the CLA and the binary AP adder as a function of the number of rows (\#Rows). Compared to the CLA, the non-blocked (blocked) TAP demonstrates lower runtime starting when the number of $p$-trit add operations (i.e., \#Rows) exceeds 64 (32). At 512 rows, the non-blocked and blocked TAP approaches demonstrate a 6.8x and 9.5x reduction in delay compared to the CLA, respectively. The blocked approach further shows a 1.4x reduction in delay compared to the non-blocked approach for all \#Rows. These results assume a traditional precharge cycle similar to Fig. \ref{fig:timing_diagram}.
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{Figures/energy_comparisons.pdf}
\caption{Energy comparison of the TAP versus other ternary adders \cite{other_adders} for set/reset energy of {1nJ}.}
\label{fig:energy_comparisons}
\vspace{-0.1in}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]
{Figures/delay_comparisons.pdf}
\caption{Delay comparison for the blocked and non-blocked TAP implementations with the CLA \cite{other_adders} and the binary AP adder \cite{binary_AP}.}
\label{fig:delay_comparisons}
\vspace{-0.1in}
\end{figure}
In an optimized implementation where the precharge is embedded within the write cycle, the TAP adder delay is 9x smaller than the CLA's, and the blocked TAP approach introduces around a 1.2x improvement compared to the non-blocked TAP approach. This is due to the need for a precharge post evaluate for the compare cycles that are not followed by a write action.
Finally, it is worth noting that the binary AP adder demonstrates the lowest delay, at 2.3x savings compared to the ternary TAP, at the cost of increased area and energy.
\section{Conclusion}
In this paper, we proposed a novel multi-valued associative processor with an illustrative example on the ternary radix. In addition, we proposed an efficient LUT-based ternary full adder methodology in the context of the AP. The AP implementation relies on a novel quaternary CAM ``3T3R'' cell. Novel algorithms are used to build the ternary adder LUT following two approaches: a first non-blocked approach that formalizes the intuition behind LUT pass ordering and a second blocked approach that targets latency reduction by capitalizing on common write action cycles. The efficiency of the proposed approaches is demonstrated by a functional simulator built using MATLAB in which we incorporate HSPICE simulations. Results show that the ternary AP has lower energy and area compared to the binary AP, albeit higher delay. Moreover, compared to other hybrid CNTFET and memristor implementations of the ternary adder, the proposed ternary AP adder has lower energy and delay. Furthermore, the results demonstrate the performance efficiency of the blocked AP approach. To further improve the performance of the MvAP, the multi-valued CAM cell would need to be optimized with fewer devices and with more efficient write techniques.
There are three reasons to investigate two-dimensional QED with
massive fermions: \cite{HHI1}
\begin{equation}
{\cal L} = - \hbox{$1\over 4$} \, F_{\mu\nu} F^{\mu\nu} +
\sum_{a=1}^N \psi \kern-.65em\raise.6em\hbox{$-$} _a \Big\{ \gamma^\mu (i \partial_\mu - e A_\mu) -
m_a \Big\} \psi_a ~~~.
\label{qed2}
\end{equation}
First of all, it carries many features
essential in QCD: confinement, chiral condensates, and the $\theta$
vacuum. One can evaluate various physical quantities such as
chiral condensates, the Polyakov loop, and the string tension at zero and finite
temperature with arbitrary fermion masses to explore QCD physics.
Secondly, QED$_2$ is an effective theory of spin systems.\cite{Itoi}
An $s= \hbox{${1\over 2}$} $ anti-ferromagnetic spin chain
\begin{equation}
H = J \sum \vec S_n \cdot \vec S_{n+1} \hskip 1cm
(S= \hbox{${1\over 2}$} , J>0)
\label{spinchain}
\end{equation}
is equivalent to two-flavor massless QED$_2$ in a uniform charge
background in the strong coupling limit. Similarly, a spin ladder
system is equivalent to coupled QED$_2$. Their correlation functions can
be evaluated systematically. The physics of two-dimensional
anti-ferromagnets is central to understanding high-$T_c$
superconductivity, for which the QED description provides an
indispensable tool.
Thirdly, there has been significant development in technology.
We show that the field theory problem reduces
to a quantum mechanics problem with a finite number of degrees of freedom, which
can be solved numerically on workstations. Technically, this method
is much simpler and easier to handle than the lattice gauge theory
or the light-front method.
\section*{Reduction to quantum mechanics}
Consider the model (\ref{qed2}) defined on a circle $S^1$ with a
circumference
$L$. With periodic and anti-periodic boundary conditions imposed
on bosonic and fermionic fields, the model is mathematically
equivalent to a theory defined on a line ($R^1$) at finite temperature.
Hence various physical quantities at $T\not=0$ on $R^1$ are obtained from
the corresponding ones at $T=0$ on $S^1$ by substituting $L$ by
$T^{-1}$.
$\psi_a$ is bosonized on a circle. It is expressed in terms of
zero modes $q_a$ and oscillatory modes $\phi_a(x)$.
The only physical degree of freedom associated with gauge fields is the
Wilson line phase $ \Theta_{\rm W} $ along the circle.
When fermions are massless, the zero modes ($q_a, \Theta_{\rm W} $) decouple from
the oscillatory modes $\phi_a$. The latter consist of one massive boson
and $N-1$ massless bosons. The model is exactly solvable.
Fermion masses provide nontrivial interactions among zero and oscillatory
modes. All boson fields become massive.
When fermion masses
are degenerate, $m_a=m$, we have one heavy
boson with mass $\mu_1$ and $N-1$ lighter bosons with mass $\mu_2$.
The vacuum wave
function is written as $f(p_W, {\varphi} _1, \cdots, {\varphi} _{N-1};\theta)$.
$\theta$ is the vacuum angle parameter of the theory.
$f(p_W, {\varphi} )$ must satisfy
\begin{eqnarray}
&&\big\{ K + V \big\} ~ f = \epsilon ~ f \cr
\noalign{\kern 7pt}
&&K = - {N^2\over 4\pi^2} {\partial^2\over \partial p_W^2}
- (N-1) \bigg\{
\sum_{a=1}^{N-1} {\partial^2\over \partial {\varphi} _a^2}
-{2\over N-1} \sum_{a<b}^{N-1} {\partial^2\over \partial {\varphi} _a\partial {\varphi} _b}
\bigg\} \cr
\noalign{\kern 4pt}
&&V(p_W, {\varphi} )={(\mu Lp_W)^2\over 4}
- {NmL B \kern-.73em\raise.6em\hbox{$-$}\hbox{} \over \pi}
\sum_{a=1}^N \cos \Big( {\varphi} _a - {2\pi p_W\over N}\Big)
\label{Schro1}
\end{eqnarray}
where $\sum_{a=1}^N {\varphi} _a= \theta$, $\mu^2=Ne^2/\pi$ and
$ B \kern-.73em\raise.6em\hbox{$-$}\hbox{} = B(\mu_1 L)^{1/N} B(\mu_2 L)^{(N-1)/N}$. $B(z)$ is given by
$B(z) = (z/4\pi)e^{ \gamma + (\pi/ z)} \exp \big\{
- 2 \int_0^\infty dx / (e^{z\cosh x} -1) \big\}$.
The boson masses are determined by
\begin{equation}
\mu_1^2 = \mu^2 + \mu_2^2 ~~~,~~~
\mu_2^2 = {8\pi m B \kern-.73em\raise.6em\hbox{$-$}\hbox{} \over L}
\raise.16ex\hbox{$\langle$}\lower.16ex\hbox{} \cos \Big( {\varphi} _a - {2\pi p_W\over N} \Big) \, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} _f
\label{bosonmass}
\end{equation}
where $\raise.16ex\hbox{$\langle$}\lower.16ex\hbox{} ~ \, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} _f$ denotes the $f$ average. (\ref{Schro1}) and
(\ref{bosonmass}) are solved simultaneously. Schematically
$V(p_W, {\varphi} ) \rightarrow f(p_W, {\varphi} ) \rightarrow \mu_\alpha \rightarrow V(p_W, {\varphi} )$.
This is a
Schr\"odinger problem in which the potential has to be reproduced by the
equation itself.
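For the simplest case $N=1$, where only the zero mode $p_W$ survives and the constant reduces to $B(\mu_1 L)$, the whole procedure fits in a short program. The Python sketch below (all numerical parameter values are illustrative choices, not results quoted from this work) discretizes $K+V$ with finite differences and iterates (\ref{Schro1}) and (\ref{bosonmass}) to self-consistency.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

GAMMA = 0.5772156649015329          # Euler's constant

def B(z):
    """The function B(z) defined below the gap equation."""
    tail = quad(lambda x: np.exp(-z*np.cosh(x)) /
                          (1.0 - np.exp(-z*np.cosh(x))), 0.0, 20.0)[0]
    return z/(4*np.pi) * np.exp(GAMMA + np.pi/z - 2.0*tail)

def solve(mu=1.0, m=0.01, L=4.0, theta=0.0, n=801, p_max=6.0, iters=50):
    """Self-consistent ground state for N = 1; returns <cos> and mu_1."""
    p = np.linspace(-p_max, p_max, n)
    dp = p[1] - p[0]
    cos_term = np.cos(theta - 2*np.pi*p)
    mu1 = mu                         # initial guess for the boson mass
    for _ in range(iters):
        Bbar = B(mu1*L)
        V = (mu*L*p)**2/4 - (m*L*Bbar/np.pi)*cos_term
        # K = -(1/4 pi^2) d^2/dp^2, 3-point finite-difference stencil
        D2 = (np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)
              - 2*np.eye(n)) / dp**2
        H = -D2/(4*np.pi**2) + np.diag(V)
        _, vecs = np.linalg.eigh(H)
        f = vecs[:, 0]               # normalized ground-state wave function
        cos_avg = f @ (cos_term * f)
        mu1 = np.sqrt(mu**2 + 8*np.pi*m*Bbar/L * cos_avg)  # gap equation
    return cos_avg, mu1

cos_avg, mu1 = solve(mu=1.0, m=0.01, L=4.0, theta=0.0)
print('condensate:', -2.0*B(mu1*4.0)/4.0 * cos_avg)  # chiral condensate
\end{verbatim}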
\section*{Quark dynamics}
Chiral condensates are given by
\begin{equation}
\raise.16ex\hbox{$\langle$}\lower.16ex\hbox{} \psi \kern-.65em\raise.6em\hbox{$-$} _a\psi_a \, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} _\theta = - {2 B \kern-.73em\raise.6em\hbox{$-$}\hbox{} \over L}
\raise.16ex\hbox{$\langle$}\lower.16ex\hbox{} \cos \Big( {\varphi} _a - {2\pi p_W\over N} \Big) \, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} _f~~~.
\label{condensate}
\end{equation}
The string tension $\sigma$ between two external sources, one with
charge $+q$ and the other with $-q$, is
\begin{equation}
\sigma = Nm \Big\{ \raise.16ex\hbox{$\langle$}\lower.16ex\hbox{} \psi \kern-.65em\raise.6em\hbox{$-$} \psi\, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} _{\theta_{\rm eff}}
- \raise.16ex\hbox{$\langle$}\lower.16ex\hbox{} \psi \kern-.65em\raise.6em\hbox{$-$} \psi\, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} _\theta \Big\} ~~~,~~~
\theta_{\rm eff} = \theta-\myfrac{2\pi q}{e} ~.
\label{string1}
\end{equation}
These quantities are evaluated numerically with arbitrary
values for $L=T^{-1}$, $m$, and $\theta$. At $T=0$
and for $m\ll \mu$,
\begin{equation}
{\sigma \over \mu^2} = - {N\over 2\pi} \, \Big( 2e^\gamma
\, {m\over \mu} \Big)^{{2N\over N+1}}
\bigg\{ \Big( \cos {\bar\theta_{\rm eff}\over N} \Big)^{{2N\over N+1}}
-\Big( \cos {\bar\theta\over N} \Big)^{{2N\over N+1}} \bigg\}
\label{string2}
\end{equation}
where
$\bar\theta=\theta-2\pi[(\theta+\pi)/2\pi]$. The $q$ dependence of
the string tension at various temperatures is displayed in Fig.\ \ref{fig:s-tension}.
\begin{figure}[t,b]
\epsfxsize= 9.cm
\epsffile[100 240 450 500]{s-tension.ps}
\caption{The charge, $q$, dependence of the string tension in
the $N=3$ model with $m/\mu=.01$ at $\theta=0$ at various
temperatures $T$. At $T=0$ a cusp singularity develops at
$q= \hbox{${1\over 2}$} e$.
\label{fig:s-tension}}
\end{figure}
The following conclusions are obtained.
\vskip 6pt
\myitem{1.}At $T=0$ the chiral condensate is not analytic in
$m$. \cite{Coleman}
\myitem{2.}At $T=0$, there appears a cusp singularity at
$\theta=\pi$.
\myitem{3.}Sufficiently large asymmetry in fermion masses removes
the cusp singularity at $\theta=\pi$.
\myitem{4.}Witten's picture \cite{Witten} of chiral dynamics in QCD$_4$
is reproduced in QED$_2$.
\myitem{5.}The string tension vanishes for an integer $q/e$. \cite{CJS}
\myitem{6.}The string tension is non-vanishing only when $m\raise.16ex\hbox{$\langle$}\lower.16ex\hbox{}
\psi \kern-.65em\raise.6em\hbox{$-$} \psi\, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} $ has non-trivial $\theta$ dependence.
\myitem{7.}The chiral condensate increases as a fermion mass becomes
very large: $\raise.16ex\hbox{$\langle$}\lower.16ex\hbox{} \psi \kern-.65em\raise.6em\hbox{$-$} \psi\, \raise.16ex\hbox{$\rangle$}\lower.16ex\hbox{} _{T=0} \sim - (e^{2\gamma}/\pi)\, m$ for $m \gg
\mu$.
\myitem{8.}However, a contribution of a heavy fermion to the string tension
becomes negligible as its mass becomes large.
\myitem{9.}At $\theta=\pi$ a discontinuity in chiral condensates develops
at a critical fermion mass $m_c$.
\section*{Gauge theory of anti-ferromagnetic spin chains}
An $s= \hbox{${1\over 2}$} $ spin chain, (\ref{spinchain}), is equivalent to QED$_2$.
The derivation goes as follows.
Write $\vec S_n = c_n^\dagger \hbox{${1\over 2}$} \vec \sigma c_n^{}$ where
$c_{n\alpha}$ is an annihilation operator of an electron at site $n$
with spin $\alpha$. The Hamiltonian (\ref{spinchain}) is transformed
to the Lagrangian
\begin{equation}
L^{(1)}= \sum \Big\{ ~ i c_n^\dagger \dot c_n^{}
+ \phi_n (c_n^\dagger c_n^{} - 1)
-{J\over 2} (\chi_n^* \chi_n - \chi_n^{} c_n^\dagger c_{n+1}^{} -
\chi_n^* c_{n+1}^\dagger c_n^{} )
~ \Big\} ~.
\label{spinLagrangian1}
\end{equation}
$\chi_n$ is a link variable, defined on the link connecting
sites $n$ and $n+1$. The Lagrangian $L^{(1)}$ has local $U(1)$ gauge
invariance as well.
In a spin chain,
the magnitude of $\chi_n$ is almost frozen, i.e.\
the effective potential for $|\chi_n|=\chi$ has a sharp
minimum at $\chi = 1/\pi$, the curvature
there being proportional to the lattice volume. To a very good
approximation, one can write
$\chi_n = (i/\pi) \, e^{ia_0A_n}$ where $a_0$ is the lattice spacing.
$\phi_n$ and $A_n$ are the time and space components of a $U(1)$ gauge
field $A_\mu(x)$.
In an anti-ferromagnetic spin
chain two adjacent sites form one block. Each block contains four electron
states. The coupling of the gauge field $A_\mu$ in $L^{(1)}$ is spin-blind.
Therefore, an electron spin becomes
flavor in the continuum Dirac field, whereas an even-odd site index
becomes a spin index.
\begin{eqnarray}
&&\hskip .4cm c_{a\alpha}
\hskip .7cm \Longleftrightarrow \hskip .8cm \psi^{(\alpha)}_a \cr
\noalign{\kern 6pt}
\alpha :&& \hskip .3cm \hbox{spin} \hskip 2.1 cm \hbox{flavor} \cr
a :&& \hskip .0cm \hbox{even-odd} \hskip 1.8cm \hbox{spin}
\end{eqnarray}
In the continuum limit $L^{(1)}$ becomes
\begin{equation}
{\cal L} ^{(2)} = - {1\over 4e^2} \, F_{\mu\nu}^2
+ \sum_\alpha \psi^{(\alpha)} \kern-1.9em\raise.6em\hbox{$-$ i \gamma^\mu (\partial_\mu - iA_\mu)
\psi^{(\alpha)} + {1\over a_0} \, A_0 ~~.
\label{spinLagrangian2}
\end{equation}
We have added the Maxwell term with the understanding that the limit
$e^2 \rightarrow \infty$ is taken at the end. Light velocity in $ {\cal L} ^{(2)}$
is given by $c = a_0 J/\pi$.
An $s= \hbox{${1\over 2}$} $ spin chain is equivalent to two-flavor massless QED$_2$ in
the strong coupling limit in a uniform background charge. After
bosonization 2-flavor QED$_2$ contains two bosons, one with (mass)$^2=
2e^2/\pi$ and the other with a vanishing mass. In the $e^2\rightarrow\infty$ limit
the former decouples. The latter, the massless boson, is the gapless mode
known in the Bethe ansatz solution. It controls the long-range behavior of
various correlation functions.
\section*{Acknowledgment}
This work was supported in part by the U.S.\ Department of Energy
under contract DE-AC02-83ER-40105.
\section*{References}
\section{Proof nets: dynamics}
\label{s:pn-dynamics}
\begin{figure}[t]
\centering\ovalbox{
\begin{tabular}{cccc}
\begin{tabular}{c|ccc}
\myinput{imm-B-rule}&\myinput{imm-red-weakening}
\end{tabular}
\\\hline
\myinput{imm-pure-contraction}
\end{tabular}
}
\caption{proof nets cut-elimination rules\label{fig:rew-rules}}
\end{figure}
The rewriting rules are in Figure \ref{fig:rew-rules}. Let us explain them. First of all, note that the notion of cut in our syntax is implicit, because cut-links are not represented explicitly. A cut is given by a node whose incoming and outgoing connections are principal ({\em i.e.}\ with a little square on the line).
The rule $\Rew{\msym}$ is nothing but the usual elimination of a multiplicative cut, except that the step also opens the box associated with the $\parr$-link.
The two $\Rew{\esym}$ rules reduce the exponential redexes. Let us explain how to read them. For the graph noted $H$ in Figure \ref{fig:rew-rules} there are two possibilities: either it is simply a dereliction link (a ${\mathsf{d}}$-link) or it is a $\parr$ with its box, so there is no ambiguity on what to duplicate/erase. Every pair of short gray lines denotes the sequence (of length $m_i$, with $i\in\set{1,\ldots, k}$) of boxes closing on the corresponding links. The rule has two cases, one where $!$ is cut with $k\in\set{1,2,\ldots}$ derelictions and one where it is cut with a weakening. In the first case the sub-graph $H$ is copied $k$ times (if $k=1$ no copy is done) into $H^1,\ldots H^k$ and each copy enters in the $m_i$ boxes enclosing the corresponding (and removed) dereliction. Moreover, the $k$ copies of each target of $H$ are contracted together, {\em i.e.}\ the nodes are merged. In the case of a cut with a weakening, $H$ is erased and replaced by a set of weakenings, one for every target of $H$. Note that the weakenings are also pushed out of all boxes closing on the targets of $H$\footnote{Note that, for the sake of a simple representation, the figure of the weakening cut-elimination rule is slightly wrong: it is not true that the links $l_1,\ldots,l_j$ having as target a given conclusion $x_i$ of $H$ are all inside $m_i$ boxes, because each one can be inside a different number of boxes.}. This is done to preserve the invariant that weakening are always pushed out of boxes as much as possible. Such invariant is also used in the rule: the weakening is at the same level of $H$. Last, if the weakenings created by the rule are contracted with any other link then they are removed on the fly (because by definition weakenings cannot be contracted).
Now, we establish the relationship between terms and nets at the level of reduction. Essentially, there is only one fact which is not immediate, namely that $\Rew{\esym}$ actually implements the $\Rew{\esym}$ rule on terms, as is proved by the following lemma.
\begin{lemma}[substitution]
\label{l:graph-sub}
Let $t=s[x/v\L]$ then $\lamtonetsvar{t}{X}\Rew{\esym}\lamtonetsvar{s\set{x/v}\L}{X}$ for any set of names $X\supseteq \fv{t}$.
\end{lemma}
\begin{proof}
First of all observe that $t$ and $s[x/v]\L$ both reduce to $s\set{x/v}\L$ and by Remark \ref{rem:not-inj} both translate to the same net. Hence it is enough to prove that $\lamtonetsvar{s[x/v]\L}{X}\Rew{\esym}\lamtonetsvar{s\set{x/v}\L}{X}$. We prove it by induction on the number $k$ of substitutions in $\L$. If $k=0$ then the proof is by induction on the number $n$ of free occurrences of $x$ in $s$. Cases:
\begin{itemize}
\item $n=0$) In $\lamtonetsvar{s[x/v]}{X}$ the bang associated to $v$ is cut with a weakening. The elimination of the cut yields a net $G'$ without the $!$-link and the $\parr$-box associated to $v$, leaving a free weakening for every free variable of the box, {\em i.e.}\ for every free variable of $v$: then $G'$ is exactly $\lamtonetsvar{s\set{x/v}}{X\cup\fv{v}}=\lamtonetsvar{s}{X\cup\fv{v}}$.
\item $n>1$) Write $s=C[x]$ for some occurrence of $x$. Now, consider $u=C[y][y/v][x/v]$ and note that:
\begin{center}
$u\Rew{}C[v][x/v]\Rew{}C[v]\set{x/v}=s\set{x/v}$
\end{center}
The difference between $G'=\lamtonetsvar{u}{X}$ and $G=\lamtonetsvar{s[x/v]}{X}$ is that one of the occurrences of $x$ in $G$ has been separated from the others and cut with a copy of $\lamtonets{v}$. Consider the step $G\Rew{}H$ which reduces the cut on $x$ in $G$ and the sequence $G'\Rew{} H'_y \Rew{} H'_{y,x}$ which first reduces the cut on $y$ in $G'$ and then reduces in $H'_y$ the (unique) residual of the cut on $x$ in $G'$. By the definition of reduction in nets $H=H'_{y,x}$. Now by the \textit{i.h.}\ applied to $u$ and $y$ we get that $\lamtonetsvar{C[v][x/v]}{X}=H'_y$ and by the \textit{i.h.}\ applied to $C[v][x/v]$ and $x$ we get that $\lamtonetsvar{C[v]\set{x/v}}{X}=H'_{y,x}$. From $H=H'_{y,x}$ and $C[v]\set{x/v}=s\set{x/v}$ we get $\lamtonetsvar{s\set{x/v}}{X}=H$ and conclude.
\item $n= 1$) By induction on $s$. Some cases:
\begin{itemize}
\item If $s= \l y. u$ then by the \textit{i.h.}\ $\lamtonetsvar{u[x/v]}{X\cup\set{y}}\Rew{\esym}\lamtonetsvar{u\set{x/v}}{X\cup\set{y}}$ and so we get $\lamtonetsvar{\l y. (u[x/v])}{X\cup\set{y}}\Rew{\esym}\lamtonetsvar{\l y. (u\set{x/v})}{X\cup\set{y}}$. Now, observe that $\l y. (u\set{x/v})=(\l y. u)\set{x/v}=s\set{x/v}$ and that the two nets $\lamtonetsvar{\l y. (u[x/v])}{X\cup\set{y}}$ and $\lamtonetsvar{(\l y. u)[x/v]}{X\cup\set{y}}$ have the same reduct after firing the exponential cut on $x$, and so we get $\lamtonetsvar{(\l y. u)[x/v]}{X\cup\set{y}}\Rew{\esym}\lamtonetsvar{(\l y. u)\set{x/v}}{X\cup\set{y}}$.
\item If $s=w[y/u]$ then either $x\in u$ or $x\in w$. In the first case by Remark \ref{rem:not-inj} we get that $\lamtonetsvar{s[x/v]}{X}=\lamtonetsvar{w[y/u][x/v]}{X}=\lamtonetsvar{w[y/u[x/v]]}{X}$. Now by the \textit{i.h.}\ $\lamtonets{u[x/v]}\Rew{\esym} \lamtonets{u\set{x/v}}$. Then we have $\lamtonetsvar{s[x/v]}{X}\Rew{\esym} \lamtonetsvar{w[y/u\set{x/v}]}{X}=\lamtonetsvar{w[y/u]\set{x/v}}{X}=\lamtonetsvar{s\set{x/v}}{X}$. The second case is analogous.
\item If $s= (\l y.w)u$. The case $x\in u$ uses Remark \ref{rem:not-inj} and the \textit{i.h.}\ as in the $s=w[y/u]$ case. The case $x\in w$ is slightly different. As before, $((\l y.w)u)[x/v]$ and $((\l y.w[x/v])u)$ have the same reduct. By the \textit{i.h.}\ $\lamtonets{w[x/v]}\Rew{\esym}\lamtonets{w\set{x/v}}$ and thus $\lamtonetsvar{(\l y.w[x/v])u}{X}\Rew{\esym}\lamtonetsvar{(\l y.w\set{x/v})u}{X}$. We conclude since $\lamtonetsvar{((\l y.w)u)[x/v]}{X}\Rew{\esym}\lamtonetsvar{((\l y.w\set{x/v})u)}{X}=\lamtonetsvar{((\l y.w)u)\set{x/v}}{X}$.
\end{itemize}
\end{itemize}
If $k>0$ and $\L=\L'[y/r]$ then we get by the \textit{i.h.}\ that $\lamtonetsvar{s[x/v]\L'}{X}\Rew{\esym}\lamtonetsvar{s\set{x/v}\L'}{X}$. By definition of the translation and of graph reduction it follows that $\lamtonetsvar{s[x/v]\L'[y/r]}{X}\Rew{\esym}\lamtonetsvar{s\set{x/v}\L'[y/r]}{X}$.
\end{proof}
\begin{theorem}[strong bisimulation]
\label{tm:str-bis}
Let $t$ be a term and $X$ a set of variables containing $\fv{t}$. The translation is a strong bisimulation between $t$ and $\lamtonetsvar{t}{X}$, {\em i.e.}\ $t\Rew{a} t'$ if and only if $\lamtonetsvar{t}{X}\Rew{a} \lamtonetsvar{t'}{X}$, for $a\in\set{{\tt m},{\tt e}}$.
\end{theorem}
\begin{proof}
By induction on the translation. If $t=x$ there is nothing to prove, and if $t=\l x.s$ or $t=xs$ it immediately follows by the \textit{i.h.}, since all the redexes of $t$ are contained in $s$. If $t=s[x/u]$ and the redex is in $s$ or $u$ then just apply the \textit{i.h.}. If $u=v\L$ and the redex is $s[x/v\L]\Rew{\esym} s\isub{x}{v}\L$ then apply Lemma \ref{l:graph-sub}. If $t=(\l x. s)u$ and the redex is in $s$ or $u$ then just apply the \textit{i.h.}. If $t=(\l x. s)u\Rew{\msym} s[x/u]=t'$ then have a look at Figure \ref{fig:counter}.a: clearly $t\Rew{\msym} t'$ iff $\lamtonetsvar{t}{X}\Rew{\msym} \lamtonetsvar{t'}{X}$.
\end{proof}
\begin{figure}[h]
\centering\myinput{counter-examples}
\caption{a) A $\Rew{\msym}$-step on terms and on nets. b-c) Counter-examples to correctness without $\parr$-boxes\label{fig:counter}}
\end{figure}
Strong bisimulations preserve reduction lengths, so they preserve divergent/normalizing reductions, and termination properties in general.
\emph{Technical digression about confluence}. For confluence the point is slightly more delicate, since in general it is preserved only modulo the quotient induced by the strong bisimulation. But mild additional hypotheses allow one to transfer confluence. Given two rewriting systems $(S_1, \to)$ and $(S_2, \leadsto)$ and a strong bisimulation $\equiv$ (defined on all terms of $S_1$ and $S_2$), to transfer confluence from $S_1$ to $S_2$ it is enough to ask that if $s_1\equiv s_2$ and $s_1\to s_1'$ then there is a unique $s_2'$ s.t. $s_2\leadsto s_2'$ and $s_1'\equiv s_2'$, see \cite{phdaccattoli} (pp. 83-86) for more details. It is easily seen that in our case the translation enjoys this property in both directions.
These observations (and confluence of $\l_{vker}$) prove:
\begin{corollary}
Let $t\in\l_{vker}$ and $X$ a set of variables. Then $t$ is weakly normalizing/strongly normalizing/a normal form/without a normal form iff $\lamtonetsvar{t}{X}$ is. Moreover, proof nets are confluent.
\end{corollary}
Actually, the translation is more than a strong bisimulation: the reduction graphs\footnote{\emph{Reduction graphs}, which are the graphs obtained considering all reductions starting from a given object, \textit{are not nets}.} of $t$ and $\lamtonets{t}$ are \textit{isomorphic}, not just strongly bisimilar. An easy but tedious refinement of the proof of Theorem \ref{tm:str-bis} proves:
\begin{theorem}[dynamic isomorphism]
Let $t$ be a term and $X$ a set of variables containing $\fv{t}$. The translation induces a bijection $\phi$ between the redexes of $t$ and the redexes of $\lamtonetsvar{t}{X}$ s.t. $R: t\Rew{a} t'$ if and only if $\phi(R): \lamtonetsvar{t}{X}\Rew{a} \lamtonetsvar{t'}{X}$, where $a\in\set{{\tt m},{\tt e}}$.
\end{theorem}
A nice by-product of the strong bisimulation approach is that preservation of correctness by reduction \textit{comes for free}, since any reduct of a proof-net is the translation of a term.
\begin{corollary}[preservation of correctness]
Let $G$ be a proof net and $G\Rew{} G'$. Then $G'$ is correct.
\end{corollary}
\paragraph{The original boring translation.} For the sake of completeness, Figure \ref{fig:ord-trans} sketches the ordinary CBV translation from $\l$-terms (possibly with iterated applications) to proof nets (including the case for explicit substitutions and using a traditional syntax with boxes on $!$). An easy computation shows that the term $t=\delta (yz) \delta$, where $\delta=\l x. xx$, maps to a net without normal form, while $t$ is a $\l_{\beta v}$-normal form (see \cite{AccLinearity} for more details). This mismatch is the motivation behind our work.
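To make the mismatch concrete, here is a sketch of how the refined approach of this paper avoids it (our own elaboration of the discussion in \cite{AccLinearity}): writing the iterated application as prescribed by $\l_{vker}$, {\em i.e.}\ as $(x\, \delta)[x/\delta (yz)]$ with $x$ fresh, and renaming the bound variable of the applied copy of $\delta$ to $w$, the only possible step is
\begin{center}
$(x\, \delta)[x/\delta (yz)]\Rew{\msym} (x\, \delta)[x/(w\, w)[w/y z]]$
\end{center}
and the reduct is normal, since no $\Rew{\esym}$ step applies: neither $y z$ nor $(w\, w)[w/y z]$ is of the form $v\L$. The duplication of $\delta$ is blocked, exactly as in $\l_{\beta v}$, where $yz$ is not a value.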
\begin{figure}[t]
\centering
\myinput{imm-ordinary-translation}
\caption{\label{fig:ord-trans} the ordinary CBV translation from terms to nets.}
\end{figure}
\section{Proof nets: the literature on term representations}
\label{s:history}
When relating $\l$-terms and proof nets a number of technical choices are possible:
\begin{enumerate}
\item \emph{Explicit substitutions}: proof nets implement a $\beta$-step by two cut-elimination steps. This refined evaluation can be seen on the calculus only if the syntax is extended with explicit substitutions.
\item \emph{Variables}: to properly represent variables it is necessary to work modulo associativity and commutativity of contractions, neutrality of weakening with respect to contraction, and permutations of weakenings and contractions with box-borders. In the literature there are two approaches: to explicitly state all these additional congruences or to use a syntax naturally quotienting with respect to them. Such a syntax uses n-ary $?$-links collapsing weakening, dereliction and contractions and delocalizing them out of boxes. It is sometimes called \emph{nouvelle syntaxe}.
\item \emph{Axioms}: various complications arise if proof nets are presented with explicit axiom and cut links. They can be avoided by working modulo cuts on axioms, which is usually done by employing an interaction nets presentation of proof nets.
\item \emph{Exponential cut-elimination}: the cut-elimination rules for the exponentials admit many presentations. Essentially, either they are big-step, {\em i.e.}\ an exponential cut is eliminated in one shot (making many copies of the $!$-premise of the cut), or they are small-step, with a rule for each possible $?$-premise (weakening, dereliction, contraction, axiom, box auxiliary port).
\end{enumerate}
We now list the works in the literature which are closer in spirit to ours, {\em i.e.}\ focusing on the representation of $\l$-calculi into proof nets (and for space reasons we omit many other interesting works, as for instance \cite{DBLP:conf/ifl/Mackie05}, which studies the representation of \emph{strategies}, not of \emph{calculi}). The first such works were the Ph.D. theses of Vincent Danos \cite{Danos:Thesis:90} and Laurent Regnier \cite{Reg:Thesis:92}, which focused on the call-by-name (CBN) translation. Danos and Regnier avoid explicit substitutions, use n-ary contractions, explicit axioms, and big-step exponential rules, see also \cite{Danos95proof-netsand}. They characterize the image of the translation using the variant of the Danos-Regnier correctness criterion which requires that any switching graph has $\# w+1$ connected components, where $\# w$ is the number of weakenings. In \cite{DBLP:journals/tcs/DanosR99} Danos and Regnier use the CBV translation\footnote{Let us point out that \cite{DBLP:journals/tcs/DanosR99} presents an oddity that we believe deserves to be clarified. The authors show that an optimized geometry of interaction for the proof nets of the CBV-translation is isomorphic to Krivine's abstract machine (KAM): this is quite puzzling, because the KAM is CBN, while they use the CBV translation.}. Both translations are injective.
In \cite{DBLP:journals/tcs/Laurent03,phdlaurent} Olivier Laurent extends the CBN translation to represent (the CBN) $\l\mu$-calculus. He does not use explicit substitutions nor n-ary $?$-links, while he employs explicit axiom links and small-step exponential rules. His work presents two peculiar points. First, the translation of $\l\mu$-terms is not injective, because---depending on the term---the $\mu$-construct may have no counterpart on proof nets. This induces some mismatches at the dynamic level. Second, Laurent finds a simpler criterion, exploiting the fact that the fragment encoding (the CBN) $\l\mu$-calculus is polarized. In \cite{phdlaurent} Laurent also shows how to represent the CBV $\l\mu$-calculus. However, such a representation does not use the same types as the boring translation, since $A\to B$ maps to $?!(A\multimap B)$, and not to $!(A\multimap B)$.
Lionel Vaux \cite{phdvaux} and Paolo Tranquilli \cite{tranquillithesis,DBLP:journals/tcs/Tranquilli11} study the relationship between the differential $\l$-calculus and differential proof nets. Vaux also extends the relationship to the classical case (thus encompassing a differential $\l\mu$-calculus), while Tranquilli refines the differential calculus into a \emph{resource calculus} which better matches proof nets. They do not use explicit substitutions, nor n-ary contractions, while they use interaction nets (so no explicit axiom and cut links) and small-step exponential rules. Both Tranquilli and Vaux rely on the Danos-Regnier criterion, even though the fragment encoding their calculi is polarized and can be captured using Laurent's criterion by using boxes for coderelictions; in the context of the $\l$-calculus such boxes do not reduce the parallelism of the representation.
Delia Kesner and co-authors \cite{DBLP:conf/lics/CosmoK97,DBLP:journals/mscs/CosmoKP03,DBLP:journals/iandc/KesnerL07} study the relationship with explicit substitutions (in the CBN case). The main idea here is that explicit substitutions correspond to exponential cuts. They use explicit axiom links and small-step exponential rules, but they do not employ n-ary contractions (and so they need additional rules and congruences). Because of explicit substitutions the translation is not injective: now different terms may map to the same proof net, as in this paper. They do not deal with correctness.
In none of these works is the translation a strong bisimulation. In \cite{DBLP:conf/csl/AccattoliG09} the author and Stefano Guerrini use a syntax inspired by proof nets (and extended with jumps) to represent the CBN $\l$-calculus with explicit substitutions. That work is the only one employing (the equivalent of) n-ary $?$-links and (the equivalent of) small-step exponential rules. In \cite{DBLP:conf/csl/AccattoliG09} the correctness criterion is a variation on Lamarche's criterion for essential nets, which relies in an essential way on the use of jumps. A reformulation in the syntactic style of this paper of both \cite{DBLP:conf/csl/AccattoliG09} and of Danos and Regnier's proof nets for the CBN $\l$-calculus can be found in \cite{phdaccattoli}, together with a detailed account of the strong bisimulation.
Here, hypergraphs allow us to use n-ary $?$-links and collapse axioms and cut links (as if we were using interaction nets). More precisely, we represent n-ary $?$-links by allowing $e$-nodes to have more than one incoming link. This choice overcomes some technicalities about \emph{gluing} and \emph{de-gluing} of $?$-links. Such technicalities are always omitted, but they are in fact necessary to properly define subnets and cut-elimination. We also employ big-step exponential rules and explicit substitutions.
\section{Introduction}
A key feature of linear logic\ (LL) is that it is a refinement of intuitionistic logic, {\em i.e.}\ of $\l$-calculus. In particular, \emph{one} $\beta$-reduction step in the $\l$-calculus corresponds to the sequence of \emph{two} cut-elimination steps in linear logic, steps which are of a very different nature: the first is multiplicative and the second is exponential. The Curry-Howard interpretation of this fact is that $\l$-calculus can be refined adding a constructor $t[x/u]$ for \emph{explicit substitutions}, and decomposing a $\beta$-step $(\l x.t)u\Rew{\beta} t\isub{x}{u}$ into the sequence $(\l x.t)u\Rew{\msym} t[x/u] \Rew{\esym} t\isub{x}{u}$.
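For instance, on the simplest possible redex the decomposition reads as follows (a minimal instance, spelled out for concreteness):
\begin{center}
$(\l x.\, x)\, y \Rew{\msym} x[x/y] \Rew{\esym} y$
\end{center}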
Another insight due to linear logic\ is that proofs can be represented graphically---by the so-called proof nets---and the reformulation of cut-elimination on proof nets takes a quite different flavour with respect to cut-elimination in sequent calculus. The parallel nature of the graphical objects makes the commutative cut-elimination steps, which are the annoying burden of every proof of cut-admissibility, (mostly) disappear.
These two features of LL have influenced the theory of explicit substitutions in various ways \cite{DBLP:journals/iandc/KesnerL07,DBLP:journals/mscs/CosmoKP03}, culminating in the design of \emph{the structural $\l$-calculus} \cite{DBLP:conf/csl/AccattoliK10}, a calculus isomorphic (more precisely \emph{strongly bisimilar}) to its representation in LL proof nets\ \cite{DBLP:conf/csl/AccattoliG09,phdaccattoli}. Such a calculus can be seen as an algebraic reformulation of proof nets\ for $\l$-calculus \cite{Danos:Thesis:90,Reg:Thesis:92}, and turned out to be simpler and more useful than previous calculi with explicit substitutions.
Girard's seminal paper on linear logic\ \cite{DBLP:journals/tcs/Girard87} presents two translations of $\l$-calculus into LL. The first one follows the typed scheme $\cbnp{A\Rightarrow B}=!\cbn{A}\multimap \cbn{B}$, and it is the one to which the previous paragraphs refer. It represents the ordinary---or call-by-name (CBN)---$\l$-calculus. The second one, identified by $\cbvp{A\Rightarrow B}=!(\cbv{A}\multimap \cbv{B})$, was qualified as \textit{boring} by Girard and received little attention in the literature \cite{DBLP:journals/tcs/MaraistOTW99,DBLP:journals/mscs/PravatoRR99,DBLP:journals/tcs/DanosR99,DBLP:conf/gg/FernandezM02,DBLP:journals/corr/abs-1003-5515,DBLP:conf/ifl/Mackie05}. Usually, it is said to represent Plotkin's call-by-value (CBV) $\l_{\beta v}$-calculus \cite{DBLP:journals/tcs/Plotkin75}. These two representations concern typed terms only, but it is well-known that they can be extended to represent the whole untyped calculi by considering linear recursive types ($o=!o\multimap o$ for call-by-name and $o=!(o\multimap o)$ for call-by-value).
Surprisingly, the extension of the CBV translation to the untyped $\l_{\beta v}$-calculus introduces an unexpected and violent behavior: some normal terms in $\l_{\beta v}$ map to (recursively typed) proof nets\ without normal form (see \cite{AccLinearity} for concrete examples and extensive discussions). This fact is the evidence that there is something inherently wrong in the CBV translation.
In this paper we show how to refine the three actors of the play (the CBV $\l$-calculus, the translation and the proof nets\ presentation) in order to get a perfect match between terms and proof nets. Technically, we show that the new translation is a strong bisimulation\footnote{A strong bisimulation between two rewriting systems $S$ and $R$ is a relation $\equiv$ between $S$ and $R$ s.t. whenever $s\equiv r$ then for every step $s \Rew{S} s'$ there is a step $r\Rew{R} r'$ s.t. $s'\equiv r'$, \emph{and vice versa} (for $s,s'\in S$ and $r,r'\in R$).}, and since strong bisimulations preserve reduction lengths (in both directions), the normalization mismatch vanishes.
Interestingly, to obtain a strong bisimulation we have to make some radical changes to both the calculus and the presentation of proof nets. The calculus, which we call the \emph{value substitution kernel} $\l_{vker}$ \cite{AccLinearity}, is a subcalculus of the \emph{value substitution calculus} $\l_{vsub}$ studied in \cite{DBLP:conf/flops/AccattoliP12}, which is a CBV $\l$-calculus with explicit substitutions. Such a kernel is as expressive as the full calculus, and can be thought of as a sort of CPS representation of $\l_{vsub}$.
Here, however, we mostly take the calculus for granted (see \cite{AccLinearity} for more details) and rather focus on proof nets. Our two contributions are:
\begin{enumerate}
\item \deff{Graphical syntax and algebraic formalism}: it is far from easy to realize a strong bisimulation between terms and nets, as it is necessary to take care of many delicate details about weakenings, contractions, representation of variables, administrative reduction steps, and so on. The search for a strong bisimulation may seem a useless obsession, but it is not. Operational properties such as confluence and termination then transfer immediately from graphs to terms, and vice versa. More generally, such a strong relationship turns the calculus into an algebraic language for proof nets, providing a handy tool to reason by structural induction over proof nets.
\item \deff{Correctness criterion}: we provide a characterization of the proof nets\ representing $\l_{vker}$ based on graph-theoretical principles and which does not refer to $\l_{vker}$, that is, we present a \emph{correctness criterion}. Surprisingly, the known criteria for the representation of the call-by-name $\l$-calculus (with explicit substitutions) fail to characterize the fragment encoding the call-by-value $\l$-calculus. Here we present a simple and non-standard solution to this problem. We hack the usual presentation of proof nets\ so that Laurent's criterion for polarized nets \cite{DBLP:conf/tlca/Laurent99,DBLP:journals/tcs/Laurent03,phdlaurent}---the simplest known correctness criterion---captures the fragment we are interested in. The hacking of the syntax consists in using boxes for $\parr$-links rather than for $!$-links. An interesting point is that the fragment we deal with is not polarized in Laurent's sense, although it is polarized in the Lamarche/intuitionistic sense.
\end{enumerate}
The use of boxes for $\parr$-links may look terribly ad-hoc. Section \ref{s:par} tries to argue that it is not. Moreover, Section \ref{s:history} presents an account of the technical points concerning the representations of terms with proof nets, and how they have been treated in the literature.
\section{Proof nets: definition}
\label{s:proofnets}
\emph{Introduction}. Our presentation of proof nets is non-standard in at least four points (we suggest a quick look at Figure \ref{fig:trans}):
\begin{enumerate}
\item \textbf{Hypergraphs}: we use hypergraphs (for which formulas are nodes and links---{\em i.e.}\ logical rules---are hyperedges) rather than the usual graphs with pending edges (for which formulas are edges and links are nodes). We prefer hypergraphs because in this way contraction can be represented in a better way (providing commutativity, associativity, and permutation with box borders \emph{for free}) and at the same time we can represent cut and axiom links implicitly (similarly to what happens in interaction nets).
\item \textbf{$\parr$-boxes}: We put boxes on $\parr$-links and not on $!$-links. This choice is discussed in Section \ref{s:par}, and it allows to use a very simple correctness criterion---{\em i.e.}\ Laurent's criterion for polarized nets---without losing any property.
\item \textbf{Polarity}: we apply a polarized criterion to a setting which is not polarized in the usual sense.
\item \textbf{Syntax tree}: since we use proof nets to represent terms, we will arrange them on the plane according to the syntax tree of the corresponding terms, and not according to the corresponding sequent calculus proof (also the orientation of the links does not reflect the usual premise-conclusion orientation of proof nets).
\end{enumerate}
\emph{Nets}. Nets are directed and labelled hyper-graphs $G=(V(G),L(G))$, {\em i.e.}, graphs where $V(G)$ is a set of labelled \deff{nodes} and $L(G)$ is a set of labelled and \deff{directed hyperedges}, called \deff{links}, which are edges with 0, 1 or more sources and 0, 1 or more targets\footnote{A hyper-graph $G$ can be understood as a bipartite graph $B_G$, where $V_1(B_G)$ is $V(G)$ and $V_2(B_G)$ is $L(G)$, and the edges are determined by the relations \textit{being a source} and \textit{being a target} of a hyperedge.}. Nodes are labelled with a type in $\set{e,m}$, where $e$ stands for \textit{exponential} and $m$ for \textit{multiplicative}, depicted in blue and brown, respectively. If a node $u$ has type $e$ (resp. $m$) we say that it is an $e$-node (resp. $m$-node). We shall consider hyper-graphs whose links are labelled from $\set{!,{\mathsf{d}},{\mathsf{w}},\parr,\otimes}$. The label of a link $l$ forces the number and the type of the source and target nodes of $l$, as shown in Figure \ref{fig:links} (the types will be discussed later, and the figure also contains the $\square$-link, which is not used to define nets: it will be used later to define the correction graph). Note that every link (except $\square$) has exactly one connection with a little circle: it denotes the principal node, {\em i.e.}\ the node on which the link can interact. Note the position of the principal node for tensor and $!$: it is not misplaced. Moreover, every $\parr$-link has an associated \deff{box}, {\em i.e.}, a sub-hyper-graph of $G$ (have a look at Figure \ref{fig:trans}). The \deff{sources} (resp. \deff{targets}) of a net are the nodes without incoming (resp. outgoing) links; a node which is not a source nor a target is \deff{internal}. Formally:
\begin{figure}[t]
\centering
\myinput{imm-linksv}
\caption{\label{fig:links} links.}
\end{figure}
\begin{definition}[net]
A \deff{net} $G$ is a quadruple $(|G|, B_G, \fv{G}, r_G)$, where $|G|=(V(G),L(G))$ is a hyper-graph whose nodes are labelled with either $e$ or $m$ and whose hyperedges are $\set{!,{\mathsf{d}},{\mathsf{w}},\parr,\otimes}$-links and s.t.:
\begin{itemize}
\item \deff{Root}: $r_G\in V(G)$ is a source $e$-node of $G$, called the \deff{root} of $G$.
\item \deff{Conclusions}: $\fv{G}$ is the set of targets of $G$, also called \deff{free variables} of $G$, which are targets of $\set{{\mathsf{d}},{\mathsf{w}}}$-links (and not of $\otimes$-links).
\item \deff{Multiplicative}: $m$-nodes have \emph{exactly one} incoming and \emph{one} outgoing link.
\item \deff{Exponential}: an $e$-node has at most one outgoing link, and if it is the target of more than one link then they all are ${\mathsf{d}}$-links. Moreover, an $e$-node cannot be isolated.
\item \deff{Boxes}: For every $\parr$-link $l$ there is a net $\bbox{l}$, called the \deff{box} of $l$ ($B_G$ is the set of boxes of $G$ and $\bbox{l}\in B_G$), with a distinguished free variable $x$, called the \deff{variable} of $l$, and s.t.:
\begin{itemize}
\item \deff{Border}: the root $r_{\bbox{l}}$ and the free variable $x$ are the $e$-nodes of $l$, and any free variable $\neq x$ of $\bbox{l}$ is not the target of a weakening.
\item \deff{Nesting}: for any two $\parr$-boxes $\bbox{l_1}$ and $\bbox{l_2}$ if $\emptyset\neq I:=\bbox{l_1}\cap\bbox{l_2}$, $\bbox{l_1}\not \subseteq\bbox{l_2}$, and $\bbox{l_2}\not \subseteq\bbox{l_1}$ then all the nodes in $I$ are free variables of both $\bbox{l_1}$ and $\bbox{l_2}$.
\item \deff{Internal closure}: any link $l'$ of $G$ having as target an internal $e$-node of $\bbox{l}$ is in $\bbox{l}$.
\item \deff{Subnet}: the nodes and the links of $\bbox{l}$ belong to $G$ and the $\parr$-links in $\bbox{l}$ inherit the boxes from $G$.
\end{itemize}
\end{itemize}
\end{definition}
\emph{Some (technical) comments on the definition}. In the border condition the fact that the free variables $\neq x$ are not the target of a weakening means that weakenings are assumed to be pushed out of boxes as much as possible (of course the rewriting rules will have to preserve this invariant). The internal closure condition is a by-product of collapsing contractions on nodes, which is also the reason for the unusual formulation of the nesting condition: two boxes that are morally disjoint can in our syntax share free variables, because of an implicit contraction merging two of their conclusions.
\emph{Terminology about nets}. The \deff{level} of a node/link/box is the maximum number of nested boxes in which it is contained\footnote{Here the words \emph{maximum} and \emph{nested} are due to the fact that the conclusions of $\parr$-boxes may belong to two non-nested boxes, because of the way we represent contraction.} (a $\parr$-link is not contained in its own box). Two links are \deff{contracted} if they share an $e$-target. Note that the exponential condition states that only derelictions ({\em i.e.}\ ${\mathsf{d}}$-links) can be contracted. In particular, no link can be contracted with a weakening. A \deff{free weakening} in a net $G$ is a weakening whose node is a free variable of $G$. Sometimes, the figures show a link in a box having as target a contracted $e$-node $x$ which is outside the box: in those cases $x$ is part of the box, and it is drawn outside only to simplify the representation.
\emph{Typing}. Nets are typed using a recursive type $o=!(o\multimap o)$, that we rename $e=!(e\multimap e)=!(e^\bot\parr e)$ because $e$ is a mnemonic for \emph{exponential}. Let $m=e\multimap e=e^\bot\parr e$, where $m$ stands for \emph{multiplicative}. Note that $e=!m$ and $m=!m\multimap !m$. Links are typed using $m$ and $e$, but the types are omitted in all figures except Figure \ref{fig:links} because they are represented using colors and shapes ($m$-nodes are brown and dot-like, $e$-nodes are white-filled cyan circles). Let us explain the types in Figure \ref{fig:links}. They have to be read bottom-up, and thus negated (to match the usual typing for links) if the conclusion of the logical rule is the bottom node of the link, as is the case for the $\set{{\mathsf{w}},{\mathsf{d}},\otimes}$-links, while $!$ and $\parr$ have their logical conclusion on the top node, and so their type does not need to be negated.
\begin{figure}
\centering
\myinput{various}
\caption{\label{fig:various} various images.}
\end{figure}
\emph{Induced $!$-boxes}. Note that a $!$-link is always applied to something ($m$-nodes cannot be conclusions), and there is not much freedom for this \emph{something}: either it is a dereliction link or a $\parr$ with its box. Note also that in both cases we get (what would usually be) a valid content for a $!$-box. For the dereliction case it is evident, and for the $\parr$ case it is guaranteed by the definition of net: the content of a $\parr$-box ends on $e$-nodes. Hence, any $!$-link has an associated box, induced by $\parr$-boxes, which need not be represented explicitly.
\emph{The translation}. Nets representing terms have the general form in Figure \ref{fig:various}.a, also schematized as in Figure \ref{fig:various}.b. The translation $\lamtonets{\cdot }$ from terms to nets is in Figure \ref{fig:trans} (the original boring translation is sketched in Fig. \ref{fig:ord-trans}, page \pageref{fig:ord-trans}). A net which is the translation of a term is a \deff{proof net}. Note that in some cases there are various connections entering an $e$-node: that is the way we represent contraction. In some cases the $e$-nodes have an incoming connection with a perpendicular little bar: it represents an arbitrary number ($>0$) of incoming connections. The net corresponding to a variable is given by a $!$ on a dereliction and not by an (exponential) axiom, as is sometimes the case. The reason is that an axiom (in our case a node, because axioms are collapsed on nodes) would not reflect on nets some term reductions, as $x[x/v]\Rew{\esym} v$, for which both the redex and the reduct would be mapped on the same net.
The translation $\lamtonets{\cdot}$ is refined to a translation $\lamtonetsvar{\cdot }{X}$, where $X$ is a set of variables, in order to properly handle weakenings during cut-elimination. The reason is that an erasing step on terms simply erases a subterm, while on nets it also introduces some weakenings: without the refinement the translation would not be stable by reduction. The clause defining $\lamtonetsvar{t}{X\cup\set{y}}$ when $y\notin\fv{t}$ is the first on the second line of Figure \ref{fig:trans}; the definition is then completed by the following two clauses: $\lamtonetsvar{t}{\emptyset}:= \lamtonets{t}$ and $\lamtonetsvar{t}{X\cup\set{y}}:= \lamtonetsvar{t}{X}$ if $y\in\fv{t}$.
\emph{$\alpha$-equivalence}. To circumvent an explicit and formal treatment of $\alpha$-equivalence we assume that the set of $e$-nodes and the set of variable names for terms coincide. This convention removes the need to label the targets of $\lamtonetsvar{t}{X}$ with the name of the corresponding free variables in $t$ or $X$. Actually, before translating a term $t$ it is necessary to pick a \emph{well-named} $\alpha$-equivalent term $t'$, {\em i.e.}\ a term where any two different variables (bound or free) have different names.
\begin{figure}
\centering
\myinput{imm-translation}
\caption{\label{fig:trans} the translation from terms to nets.}
\end{figure}
\begin{remark}
\label{rem:not-inj}
The translation of terms to nets is not injective. By simply applying the translation it is easily seen that the following pairs of terms have the same net:
\begin{equation}
\label{eq:quotient}
\begin{array}{lll@{\hspace{.5cm}}l}
t[x/s][y/u] & \preeqw{vo_{CS}} & t[y/u][x/s] & \mbox{if } x\notin\fv{u} \ \&\ y\notin\fv{s}\\
v\ u[x/s] & \preeqw{vo_1} & (v\ u)[x/s] & \mbox{if } x\notin\fv{v}\\
t[x/s[y/u]] & \preeqw{vo_2} & t[x/s][y/u] & \mbox{if } y\notin\fv{t}\\
\end{array}\end{equation}
Let $\eqw{vo}$ be the reflexive, transitive, and contextual closure of $\preeqw{vo_{CS}}\cup \preeqw{vo_1}\cup \preeqw{vo_2}$. In the proof of Lemma \ref{l:graph-sub}, we will use the fact that if $t\eqw{vo} s$ then $t$ and $s$ are mapped on the same net. We also claim---without proving it---that $\eqw{vo}$ is exactly the quotient induced on terms by the translation to nets.
\end{remark}
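The quotient is also harmless dynamically. As a minimal sanity check (our own, not part of the original development), consider $\preeqw{vo_2}$ in the case where $s$ is a value $v$: the two equated terms not only translate to the same net, they also have the same $\Rew{\esym}$-reduct,
\begin{center}
$t[x/v[y/u]] \Rew{\esym} t\isub{x}{v}[y/u]$ \hspace{1cm} $t[x/v][y/u] \Rew{\esym} t\isub{x}{v}[y/u]$,
\end{center}
where the step on the left instantiates the $\Rew{\esym}$ rule with $\L=[y/u]$ and the step on the right instantiates it with $\L$ empty, under the trailing substitution $[y/u]$.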
\emph{Paths.} A path $\tau$ of length $k\in\mathbb{N}$ from $u$ to $v$, written $\tau:u\to^k v$, is an alternating sequence $u=u_1,l_1,\ldots,l_k, u_{k+1}=v$ of nodes and links s.t. the link $l_i$ has source $u_i$ and target $u_{i+1}$ for $i\in\set{1,\ldots,k}$. A cycle is a path $u\to^k u$ with $k>0$.
\emph{Correctness}. The correctness criterion is based on the notion of correction graph, which is---as usual for nets with boxes---obtained by collapsing every box at level 0 into a generalized axiom link.
\begin{definition}[correction graph]
Let $G$ be a net. The correction graph $\zeronet{G}$ of $G$ is the hyper-graph obtained from $G$ by collapsing any $\parr$-box at level 0 into a $\square$-link applying the rule in Fig. \ref{fig:various}.c.
\end{definition}
\begin{definition}[correctness]
A net $G$ is correct if:
\begin{itemize}
\item \deff{Source}: $\zeronet{G}$ has exactly one $e$-source (the root of $G$).
\item \deff{Acyclicity}: $\zeronet{G}$ is acyclic.
\item \deff{Recursive correctness}: the interior of every box is correct.
\end{itemize}
\end{definition}
As usual an easy induction on the translation shows that the translation of a term is correct, {\em i.e.}\ that:
\begin{lemma}
Every proof net is correct.
\end{lemma}
\section{Motivating \mathintitle{$\parr$}{par}-boxes}
\label{s:par}
The two encodings of $\l$-calculus can be seen as fragments of Intuitionistic Multiplicative and Exponential Linear Logic (IMELL). Let us stress that in IMELL what we denote $\otimes$ and $\parr$ correspond to the left and right rules for the linear implication $\multimap$, and not to the left and right rules for $\otimes$ (the four rules for $\otimes$ and $\multimap$ are collapsed in LL but not in Intuitionistic LL; in particular our $\parr$ acts on the output of the term, {\em i.e.}\ on the right of the sequent, and corresponds to the right rule for $\multimap$).
Our argument is that in IMELL there is no correctness criterion unless the syntax is extended with boxes for both $!$ \emph{and} $\multimap$ (our $\parr$), as we shall explain in the next paragraphs. The fragment of IMELL encoding the CBN $\l$-calculus is a special case where the box for $\multimap$ need not be represented. The fragment encoding the CBV $\l$-calculus is a special case where the box for $!$ need not be represented. So, the two encodings are dual with respect to the use of boxes, and thus there is nothing exotic in our use of $\parr$-boxes.
The difficulty of designing a correctness criterion for IMELL is given by the presence of weakenings, which break connectedness. In most cases weakenings simply prevent the possibility of a correctness criterion. The fragment encoding the CBN $\l$-calculus, and more generally Polarized Linear Logic, are notable exceptions. For the encoding of the CBN $\l$-calculus there exist two correctness criteria. Let us show that none of them works for the CBV $\l$-calculus.
The first is the Danos-Regnier criterion, in the variant replacing connectedness with the requirement that the number of connected components of every switching graph is $1+\#w$, where $\#w$ is the number of weakenings at level 0 (after the collapse of $!$-boxes) \cite{Reg:Thesis:92}. In our case this criterion does not work: the net in Fig. \ref{fig:counter}.b satisfies the requirement but does not represent any proof or term.
The second criterion is Olivier Laurent's polarized criterion, because the CBN encoding is polarized. In its original formulation it cannot be applied to the encoding of the CBV $\l$-calculus, because such a fragment is not polarized (there can be a weakening as a premise of a tensor, which is forbidden in polarized logic).
Our re-formulation of Laurent's criterion rejects the net in Figure \ref{fig:counter}.b (because the two $\parr$-links form a cycle), but without using $\parr$-boxes it would accept the net in Figure \ref{fig:counter}.c, which is not correct\footnote{The net in Figure \ref{fig:counter}.c would be rejected by the original version of the criterion, which is based on a different orientation. But the original orientation cannot be applied to our fragment.}.
Thus, the known criteria do not work and there is no criterion for IMELL. The usual way to circumvent problems about correctness is to add some information to the graphical representation, in the form of boxes (as we did) or jumps ({\em i.e.}\ additional connections). It is well known that in these cases various criteria can be used, but this extra information either is not canonical or limits the degree of parallelism. Another possible solution is to modify the logical system by adding the mix rules. However, such rules are debatable, and also give rise to a bad notion of subnet (for details see \cite{phdaccattoli}, pp. 199-201).
Let us stress that our counter-examples to the known criteria do not rely on the exponentials ({\em i.e.}\ non-linearity): it is easy to reformulate them in Intuitionistic Multiplicative Linear Logic (IMLL) with units\footnote{Just replace each sequence of a ! over a dereliction with an axiom, and the weakenings with $\bot$-links.}, for which then there is no correctness criterion.
In the case studied in this paper the use of $\parr$-boxes does not affect the level of parallelism in a significant way. Indeed, in IMELL the parallelism given by proof nets concerns the left rules (of $\otimes$ and $\multimap$, plus contractions and weakenings) and cuts: in our case there is no $\otimes$ (remember that our $\otimes$ and $\parr$ rather correspond to the rules for $\multimap$), our technical choices for variables keep the parallelism for contractions and weakenings, and the parallelism of the left rule for $\multimap$ (our $\otimes$) and cuts is preserved (it is given by the equations in (\ref{eq:quotient}), page \pageref{eq:quotient}).
\section{Proof nets: sequentialization}
In this section we show how to extract a term $t$ from every correct net $G$ in such a way that $t$ translates back to $G$, {\em i.e.}\ we show that every correct net is a proof net. The proof of this fact is based on the notion of \emph{kingdom}, along the lines of the proof for polarized nets, see \cite{phdlaurent} (pp. 57-63).
\begin{definition}[Kingdom]
\label{d:king}
Let $G$ be a correct net and $x\notin\fv{G}$ one of its $e$-nodes. The \deff{kingdom} $\kingv{x}$ of $x$ is the set of links defined by induction on the link $l$ of source $x$:
\begin{itemize}
\item $l$ is a $!$-link: $\kingv{x}$ is given by $l$ plus the ${\mathsf{d}}$-link or the $\parr$-box on the $m$-target of $l$.
\item $l$ is a $\otimes$-link: $\kingv{x}$ is given by $l$ plus the ${\mathsf{d}}$-link or the $\parr$-box on the $m$-target of $l$ plus $\kingv{y}$, where $y$ is the $e$-target of $l$.
\end{itemize}
\end{definition}
The main property of $\kingv{x}$ is that it is the smallest subnet of root $x$, as we shall soon prove\footnote{We call \emph{kingdom of $x$} the net in def. \ref{d:king}, but at this point nothing guarantees that it is the smallest subnet of root $x$.}. To state this fact precisely we need the notion of subnet.
\begin{definition}[subnet]
Let $G$ be a correct net. A subnet $H$ of $G$ is a subset of its links s.t. it is a correct net and satisfies:
\begin{itemize}
\item \deff{Internal closure}: if $x$ is an internal $e$-node of $H$ then any link of $G$ of target $x$ belongs to $H$.
\item \deff{Box closure}:
\begin{itemize}
\item \deff{Root}: if a $\parr$-link $l$ belongs to $H$ then its box does it too.
\item \deff{Free variables}: if a free variable of a box $B$ of $G$ is internal to $H$ then $B\subseteq H$.
\end{itemize}
\end{itemize}
\end{definition}
The following lemma is essentially obvious, and usually omitted, but in fact it is used in the proof of Lemma \ref{l:kingdom}.
\begin{lemma}
\label{l:subsubnets}
Let $G$ be a correct net, $H$ a subnet of $G$, $x$ an internal $e$-node of $H$. Then there exists a subnet $K$ of $H$ having $x$ as root and s.t. it is a subnet of $G$.
\end{lemma}
\begin{proof}
It is enough to show that there is a subnet of $H$ of root $x$, since it is obvious that any subnet of $H$ is a subnet of $G$. By induction on the length of the maximum path from $x$ to a free variable of $H$.
\end{proof}
To properly describe kingdoms we need the following definition.
\begin{definition}[(free/ground) substitution]
Let $G$ be a correct net. A \deff{substitution} is an $e$-node which is the target of a $\set{{\mathsf{w}},{\mathsf{d}}}$-link (or, equivalently, which is not the target of a $\otimes$-link) and the source of some link. A substitution $x$ is \deff{ground} if it is a node of $\zeronet{G}$ ({\em i.e.}\ it is not internal to any $\parr$-box\footnote{Note that our collapsed representation of contractions and cuts does not allow to simply say that $x$ is a node at level 0: indeed the conclusion of a $\parr$-box can have level $>0$ and yet belong to $\zeronet{G}$.}), and it is \deff{free} if it is ground and there is no ground substitution of $G$ to which $x$ has a path (in $\zeronet{G}$).
\end{definition}
\begin{lemma}[kingdom]
\label{l:kingdom}
Let $G$ be a correct net and $x\notin\fv{G}$ one of its $e$-nodes. $\kingv{x}$ is the kingdom of $x$, {\em i.e.}, the smallest subnet of $G$ rooted at $x$. Moreover, it has no free substitutions, no free weakenings, and whenever $y\in\fv{\kingv{x}}$ is internal to a subnet $H$ of $G$ then $\kingv{x}\subseteq H$.
\end{lemma}
\proof
Let $H$ be a correct subnet of $G$ rooted at $x$. We show by induction on the length of the maximum path from $x$ to a free variable of $G$ that $\kingv{x}\subseteq H$ and that $\kingv{x}$ is correct. Let $l$ be the link of source $x$. Cases:
\begin{itemize}
\item \deff{Base case}: $l$ is a $!$-link. By the conclusion condition $H$ has to contain the ${\mathsf{d}}$-link $i$ or the $\parr$-link on the $m$-target of $l$. In the case of a $\parr$-link the box closure condition implies that the whole box $B$ is in $H$, hence $\kingv{x}\subseteq H$. In the case of a ${\mathsf{d}}$-link correctness is obvious; in the case of a $\parr$-box it follows from the correctness of the interior of the box, guaranteed by the recursive correctness condition. Moreover, no free substitutions and no free weakenings belong to $\kingv{x}$ (boxes cannot close on weakenings). Pick $y\in\fv{\kingv{x}}$, which in the ${\mathsf{d}}$-link case is the target of $i$ and in the other case is a free variable of the $\parr$-box $B$. If $y$ is internal to $H$ then the conditions for a subnet guarantee that $i$ or $B$ are in $H$. Then clearly $\kingv{x}\subseteq H$.
\item \deff{Inductive case}: $l$ is a $\otimes$-link. As in the previous case $H$ has to contain the ${\mathsf{d}}$-link or the $\parr$-box on the $m$-target of $l$. Moreover, by Lemma \ref{l:subsubnets} $H$ contains a subnet $K$ rooted in the $e$-target $y$ of $l$. By the \textit{i.h.}\ $\kingv{y}$ is the kingdom of $y$, therefore we get $\kingv{y}\subseteq K\subseteq H$. Hence $\kingv{x}\subseteq H$. By the \textit{i.h.}\ we also get that $\kingv{y}$ is correct, hence $y$ is its only $e$-source and $x$ is the only $e$-source of $\kingv{x}$. Acyclicity follows by correctness of $G$. Recursive correctness follows from the box closure condition and correctness of $G$. Moreover, by the \textit{i.h.}\ $\kingv{y}$---and so $\kingv{x}$---has no free substitutions and no free weakenings. The part about free variables uses the \textit{i.h.}\ for the free variables of $\kingv{y}$ and the conditions for a subnet as in the previous case for the other free variables.
\qed
\end{itemize}
\begin{lemma}[substitution splitting]
\label{l:sub-splitting}
Let $G$ be a correct net with a free substitution $x$. Then
\begin{enumerate}
\item The free variables of $\kingv{x}$ are free variables of $G$.
\item $G\setminus \kingv{x}$ is a subnet of $G$.
\end{enumerate}
\end{lemma}
\begin{proof}
1) Suppose not. Then there is a free variable $y$ of $\kingv{x}$ which is not a free variable of $G$. There are two possible cases:
\begin{itemize}
\item \emph{$y$ is a substitution}. Then $x$ has a path to a substitution in $\zeronet{G}$, against the definition of free substitution, absurd.
\item \emph{$y$ is the distinguished free variable of a $\parr$-box $B$}. Thus, $y$ is internal to some $\parr$-box $B$ and so it is not a node of $\zeronet{G}$. By Lemma \ref{l:kingdom} we get that $\kingv{x}\subseteq B$ and so $x$ is not a node of $\zeronet{G}$, against the definition of free substitution, absurd.
\end{itemize}
2) By point 1 the removal of $\kingv{x}$ cannot create new $e$-sources. Being a substitution, $x$ is the target of some link. Therefore the removal of $\kingv{x}$ cannot remove the root of $G$. It is also clear that the removal cannot create cycles, and the box closure condition for subnets guarantees that the recursive correctness of $G$ implies the one of $G\setminus \kingv{x}$.
\end{proof}
\begin{lemma}
\label{l:no-free-sub-no-sub}
Let $G$ be a correct net with a ground substitution. Then $G$ has a free substitution.
\end{lemma}
\begin{proof}
Consider the following order on the elements of the set $S_g$ of ground substitutions of $G$: $z\leq y$ if there is a path from $z$ to $y$ in $\zeronet{G}$. Acyclicity of $\zeronet{G}$ implies that $S_g$ contains maximal elements with respect to $\leq$, if it is non-empty. Note that a maximal element of $S_g$ is a free substitution in $G$. Now, if $G$ has a ground substitution $x$ then $S_g$ is non-empty. Thus, $G$ has a free substitution.
\end{proof}
The next lemma is used in the proof of the sequentialization theorem.
\begin{lemma}[kingdom characterization]
\label{l:no-free-sub}
Let $G$ be a correct net. Then $G= \kingv{r_G}$ iff $G$ has no free substitutions nor free weakenings.
\end{lemma}
\begin{proof}
$\Rightarrow$) By Lemma \ref{l:kingdom}. $\Leftarrow$) By Lemma \ref{l:kingdom} we get that $\kingv{r_G}\subseteq G$. If the two do not coincide then by the internal closure condition for subnets, the multiplicative condition on nets, and the fact that they share the same root, we get that $G$ contains a ground substitution $x$ on a free variable of $\kingv{r_G}$. By Lemma \ref{l:no-free-sub-no-sub} $G$ contains a free substitution, absurd.
\end{proof}
\begin{theorem}[sequentialization]
Let $G$ be a correct net and $X$ be the set of $e$-nodes of its free weakenings. Then there is a term $t$ s.t. $\lamtonetsvar{t}{X}=G$ (and $\fv{G}=\fv{t}\cup X$).
\end{theorem}
\proof
By induction on the number of links. By the root and conclusion conditions the minimum number of links is 2 and the two links are necessarily a $!$-link on top of a ${\mathsf{d}}$-link. Let $x$ be the $e$-node of the ${\mathsf{d}}$-link. Then $\lamtonets{x}=G$. We now present each inductive case. After the first one we assume that the net has no free weakening.
\begin{itemize}
\item \textit{There is a free weakening $l$ of $e$-node $y$}. Then $G'=G\setminus \set{l}$ is still a correct net and by the \textit{i.h.}\ there exists $t$ s.t. $\lamtonetsvar{t}{X\setminus\set{y}}=G'$. Then $\lamtonetsvar{t}{X}=G$.
\item \textit{There is a free substitution $x$}. Then by Lemma \ref{l:kingdom} and Lemma \ref{l:sub-splitting} $\kingv{x}$ and $G\setminus\kingv{x}$ are correct subnets of $G$. By the \textit{i.h.}\ there exist $s$ and $u$ s.t. $\lamtonets{s}=\kingv{x}$ and $\lamtonetsvar{u}{\set{x}}=G\setminus\kingv{x}$ (note that if $x\in \fv{u}$ then $\lamtonetsvar{u}{\set{x}}=\lamtonetsvar{u}{\emptyset}=\lamtonets{u}$). Then $\lamtonets{u\esub{x}{s}}=G$.
\item \textit{No free substitution}: by Lemma \ref{l:no-free-sub} $G=\kingv{r_G}$. We reason by cases on the root link $l$ of $G$:
\begin{itemize}
\item \textit{a $!$-link over a ${\mathsf{d}}$-link}: base case, already treated.
\item \textit{a $!$-link over a $\parr$-link}: let $H$ be the box of the $\parr$-link and $x$ its distinguished free variable. By definition of a net the set of free weakenings of $H$ either is empty or it contains only $x$. If $x$ is (resp. is not) the node of a free weakening then by \textit{i.h.}\ there exists $t$ s.t. $\lamtonetsvar{t}{\set{x}}=H$ (resp. $\lamtonets{t}=H$). Then $\lamtonets{\l x.t}=G$.
\item \textit{A $\otimes$-link $l$}: let $x$ be its $e$-target and $a$ its $m$-target. Note that $G=\kingv{r_G}$ implies that $G$ is composed of $l$, $\kingv{x}$ and either the ${\mathsf{d}}$-link or the $\parr$-link (plus its box) on $a$. By the \textit{i.h.}\ there exists $s$ s.t. $\lamtonets{s}=\kingv{x}$. Now, if $a$ is the source of a ${\mathsf{d}}$-link of $e$-node $y$ we conclude, since $\lamtonets{y s}= G$. Otherwise, $a$ is the source of a $\parr$-link of box $H$ and the \textit{i.h.}\ gives a term $u$ and a set $X$ s.t. $\lamtonetsvar{u}{X}=H$. Let us prove that $H$ and $\kingv{x}$ can only share free variables, as the translation prescribes: no link at level $0$ of $\kingv{x}$ can be in $H$, and no box at level 0 of $\kingv{x}$ can intersect $H$ other than on free variables, by the nesting condition. By reasoning about the distinguished free variable of $H$ as in the previous case we then get $\lamtonets{(\l y. u) s}=G$.
\qed
\end{itemize}
\end{itemize}
\section{Terms}
In this section we introduce the calculus which will be related to proof nets, called \emph{the value substitution kernel} $\l_{vker}$ \cite{AccLinearity}. Its syntax is:
\begin{center}
$\begin{array}{ccc@{\hspace*{0.5cm}\sep\hspace*{0.5cm}\sep}ccc}
t,s,u,r&::=& x \mid \l x. t\mid v s\mid t[x/u] &v&::=& x\mid \l x. t
\end{array}
$\end{center}
where $t[x/u]$ is an \emph{explicit substitution} and values are denoted by $v$. Note that the left subterm of an application can only be a value. The rules of $\l_{vker}$ are:
\begin{center}
$\begin{array}{ccc@{\hspace*{0.5cm}\sep\hspace*{0.5cm}\sep}ccc}
(\l x. t) u &\mapsto_{{\tt m}} &t[x/u]&
t[x/v\L]&\mapsto_{{\tt e}} & t\isub{x}{v}\L
\end{array}$\end{center}
where $\L$ is a possibly empty list of explicit substitutions $[x_1/u_1]\ldots [x_k/u_k]$ (and the fact that in the lhs of $\mapsto_{{\tt e}}$ $\L$ appears inside $[\ ]$ while in the rhs it appears outside $\{\ \}$ is not a typo). The calculus is confluent \cite{AccLinearity}.
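To illustrate the rules at work, here is a complete evaluation of a simple term (a worked example of ours, choosing well-named representatives at each step):
\begin{center}
$(\l x.\, x x)(\l y.\, y)\Rew{\msym} (x x)[x/\l y.\, y]\Rew{\esym} (\l z.\, z)(\l y.\, y)\Rew{\msym} z[z/\l y.\, y]\Rew{\esym} \l y.\, y$
\end{center}
Both $\Rew{\esym}$ steps fire because the content of the substitution is a value with an empty trailing list $\L$.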
The peculiarity of the value substitution kernel is that iterated applications as $(tu)s$ are not part of the language. The idea is that they are rather represented as $(x s)[x/t u]$ with $x$ fresh. The calculus containing iterated applications is called \emph{the value substitution calculus} $\l_{vsub}$, and it has been studied in \cite{DBLP:conf/flops/AccattoliP12,AccLinearity}. In \cite{AccLinearity} it is shown that $\l_{vsub}$ can be represented inside $\l_{vker}$ (mapping iterated applications $(tu)s$ to $(x s)[x/t u]$, as before) and that a term $t$ and its representation $\kt{t}$ are equivalent from the point of view of termination (formally $t$ is strongly (resp. weakly) normalizing iff $\kt{t}$ is, and the same is true with respect to weak---{\em i.e.}\ not under lambda---reduction). If one is interested in observing termination (as is usually the case) then $\l_{vsub}$ and $\l_{vker}$ are observationally equivalent (via $\kt{\cdot}$). As pointed out to us by Frank Pfenning, the map $\kt{\cdot}$ is reminiscent of the notion of \emph{$A$-reduction} in the theory of CPS-translations \cite{DBLP:conf/pldi/FlanaganSDF93,DBLP:journals/lisp/SabryF93}. The idea is then that $\l_{vker}$ (and thus proof nets) is essentially the language of $A$-normal forms associated to $\l_{vsub}$. However, the study of the precise relationship with $A$-normal forms is left to future work.
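For instance, under the natural recursive extension of this representation (our illustration; the formal definition of $\kt{\cdot}$ is in \cite{AccLinearity}), a doubly iterated application of variables unfolds into a chain of explicit substitutions:
\begin{center}
$\kt{((a\, b)\, c)\, d} = (y\, d)[y/(x\, c)[x/a\, b]]$ \hspace{1cm} with $x,y$ fresh,
\end{center}
so that every application in the image has a value in function position, as required by the grammar of $\l_{vker}$.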
The calculus $\l_{vsub}$ has been related to Herbelin and Zimmermann's $\l_{CBV}$ \cite{DBLP:conf/tlca/HerbelinZ09} in \cite{DBLP:conf/flops/AccattoliP12}. In turn, $\l_{CBV}$ is related to Plotkin's $\l_{\beta v}$ in \cite{DBLP:conf/tlca/HerbelinZ09}, where it is shown that the equational theory of $\l_{\beta v}$ is contained in the theory of $\l_{CBV}$.\medskip
The rest of the paper shows that $\l_{vker}$ can be seen as an algebraic language for the proof nets used to interpret the call-by-value $\l$-calculus.
\section{Introduction}
Nonlinear optics is commonly used to extend the spectrum covered by lasers into otherwise inaccessible
regions \cite{dunn99}. For instance, second harmonic generation is by now a well established
frequency-conversion process and, with continuous-wave diode lasers, it is typically implemented
inside resonant enhancement optical cavities \cite{zimmermann92}. Third- to fifth-harmonic
generation is now obtained with pulsed lasers, giving easy access to the UV spectral region from
familiar infrared diode-pumped solid-state lasers. The production of sub-harmonics, on the other
hand, has important applications in metrology and quantum optics. Division in a 3:1 ratio is
achieved with active phase stabilization \cite{pfister96} and, more recently, dynamical signatures
of self-phase-locking for the same process were observed \cite{zondy04}. Concerning the 2:1 ratio,
both passive and active methods for the phase stabilization have been applied
\cite{nabors90,mason98,feng04}. More generally, frequency downconversion with OPO's offers a
rather flexible way to access wide regions of the infrared and near-infrared spectrum, but to
generate continuous-wave and single-frequency radiation one employs singly resonant OPO's, which
require multi-Watt pump lasers \cite{bosenberg96}, or doubly resonant OPO's which, with a modest
electronic stabilization of their components, show a considerably reduced threshold
\cite{nabors90,gibson99}.
We report on the first demonstration of optical frequency multiplication by a factor 3/2. We show
that our frequency multiplier, based on a multi-resonant OPO, is inherently phase coherent,
preserving the single longitudinal mode character of the incident field without active phase
stabilization; efficient, with a 30 \% slope efficiency and a threshold of a few tens of
milliWatts; and stable on time scales of the order of several minutes.
\section{The 3/2 frequency multiplier}
The converter is based on an OPO where the pump, the signal, and the idler fields are all resonant
in the cavity and which is operated at frequency degeneracy, so that the signal and idler
frequencies coincide. The OPO-generated field then has half the frequency of the pump and, by
inserting in the cavity a nonlinear crystal that sums the pump and the OPO fields, we are able to
generate radiation at 3/2 of the pump frequency. Exact degeneracy operation is obtained owing to
the double gain of the indistinguishable splitting process with respect to all the other processes
originating signal and idler photons \cite{nabors90}.
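In formulas, the frequency bookkeeping of the two cascaded processes is simply (a compact restatement of the scheme just described)
\begin{equation}
\nu_{\rm OPO}=\frac{\nu_{p}}{2}\, , \qquad \nu_{\rm SFG}=\nu_{p}+\nu_{\rm OPO}=\frac{3}{2}\,\nu_{p}\, ,
\end{equation}
so that, in terms of wavelengths, a pump at 1006.5 nm generates the degenerate OPO field at 2013 nm and the sum-frequency field at 671 nm.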
The triple resonance condition has the advantage of reducing the oscillation threshold on the
pump intensity down to the milliWatt level \cite{martinelli01} and allows active stabilization of
the cavity length with respect to the pump frequency. On the other hand, the dispersive behavior
of the optical elements of the cavity, i.e. mirrors and nonlinear crystals, prevents one from
controlling the frequencies of the OPO-generated fields independently, which has so far made
single mode operation in triply resonant OPO's hard to achieve. In our system the triply resonant
condition allows us to actively stabilize the cavity length against the pumping laser, strongly
relaxing the requirements on the passive stabilization. We observed an oscillation threshold as
low as 40 mW. By introducing an independent control on the OPO frequency modes, via a fine tuning
of the relative phase accumulated between the pump and OPO-generated fields over one cavity
roundtrip, we achieve the simultaneous resonance of the pump and OPO fields at frequency
degeneracy.
\begin{figure} \vspace{0mm} \begin{center}
\hspace{-0mm}\includegraphics[width=0.7\textwidth,angle=0]{Figure1.eps}
\vspace{-0mm} \caption{\label{setup} 3/2 frequency multiplier experimental setup. A continuous
wave and single frequency pump laser delivering 400 mW of 1006.5 nm is converted into 40 mW
radiation at 671 nm. The pump laser is resonantly coupled into a cavity where 20 mm long
periodically poled KTP \cite{karlsson97} nonlinear crystals are set so as to satisfy quasi
phase-matching for degenerate frequency down-conversion (OPO), and sum frequency generation
between the pump and down-converted light at 2013 nm (SFG). The wedged surfaces of the crystals
are cut at an angle of 100 mrad with respect to the crystal axis. The input (output) facet of the
OPO (SFG) crystal is at normal incidence. The two inclined surfaces facing each other are
parallel. The transverse displacement of the nonlinear crystals provides an independent control
over the cavity dispersion, insuring simultaneous resonance of the two infrared fields.}
\end{center}
\end{figure}
We demonstrate the 3/2 frequency multiplier producing radiation at 671 nm starting from a laser
source at 1006.5 nm, as schematically reported in Fig. \ref{setup}. The pump laser is composed of
a semiconductor Master-Oscillator Power-Amplifier system. The master laser is an antireflection
coated diode laser stabilized on an extended cavity in the Littrow configuration
\cite{wieman91,ricci95} delivering 30 mW at 1006.5 nm on a single longitudinal mode with less than
500 kHz linewidth. This laser is then amplified to 400 mW preserving its spectral properties
through a semiconductor tapered amplifier \cite{nyman06}. The pump radiation is coupled into an
optical cavity composed of highly reflective mirrors at 1006.5 nm and 2013 nm, and highly
transmitting at 671 nm \cite{Mirrors-671-1006.5-2013}. The input mirror has a 10 \% transmission
at 1006.5 nm in order to maximize the coupling of the pump field into the cavity under resonance.
One of the folding mirrors is mounted on a piezoelectric transducer (PZT) to actively stabilize
the cavity length to the pump field resonance. For this purpose the error signal is obtained from the
polarization analysis of the reflected pump \cite{haensch80}, with a vertical polarizer inserted
into the cavity.
The independent control on the OPO cavity frequency modes under pump resonance conditions is
obtained by cutting the crystals with a wedged shape \cite{imeshev98} (see Fig. \ref{setup}).
Displacing the crystals along the direction of the wedge enables one to change the optical path in
the crystal and, due to the dispersion, allows a fine tuning of the OPO resonance modes while
keeping the cavity resonant with the pump field. The two nonlinear crystals are 20 mm long,
2$\times$1 mm$^2$ cross section, periodically poled KTP \cite{karlsson97} that ensure
quasi-phase-matching for linearly and identically polarized fields. The OPO and SFG crystals have
a poling period of $\Lambda_{\rm OPO}$ = 38 and $\Lambda_{\rm SFG}$ = 19.5 $\mu$m respectively,
and they are identically cut in an asymmetric way such that one surface is at normal incidence,
while the other has an angle $\phi$ of 100 mrad with respect to the crystal axis. In the resonator
the crystals have their wedged sides facing each other and parallel, such that the optical axes coincide. This
configuration, while allowing control of the relative phase between the pump and OPO fields,
ensures a negligible deviation of the beam propagation at different wavelengths, and hence
simultaneous resonance of the pump and degenerate OPO fields. To reach the double resonance and
frequency degenerate condition, we observe a 400 $\mu$m periodicity on the crystal transverse
position. This is consistent with the calculated periodicity $\Lambda_{\rm OPO}/\phi$ = 380
$\mu$m. The crystal surfaces are all anti-reflection coated such that the reflectivity per surface
is 0.1\% at 1006.5 nm and 2013 nm, and 0.3\% at 671 nm.
\begin{figure} \vspace{0mm} \begin{center}
\hspace{-0mm}\includegraphics[width=0.8\textwidth,angle=0]{Figure2.EPS}
\vspace{-0mm} \caption{\label{SpettriFP} Transmission spectra of the frequency multiplied light
through a confocal Fabry-Perot (FP) spectrum analyzer. By displacing the nonlinear crystals
transversally, we tune the cavity dispersion in order to impose single frequency emission (b) or
multi mode emission (a). c) The Gaussian beam profile of the 3/2 frequency multiplied output is
verified by coupling the single frequency radiation mainly into the fundamental transverse mode of
the FP cavity, which results in doubling the spacing among the resonance peaks \cite{SpectrumFP}.
}
\end{center}
\end{figure}
\section{Spectral properties and conversion efficiency}
The spectral properties of the generated red light are analyzed both with a lambda-meter for the
rough wavelength determination, and with a confocal Fabry-Perot spectrometer (FP) to check the single
longitudinal mode operation \cite{Wavemeter}. As expected, the spectrum of the generated red light
depends on cavity dispersion. When we change the transverse position of the crystal by tens of
microns we are able to switch between single frequency emission at the expected value and multi
frequency emission with central wavelength displaced by as much as 0.08 nm from 671 nm, with a
simultaneous reduction in the output power. Figure \ref{SpettriFP} reports the typical spectra
from the Fabry-Perot analyzer when the OPO operates close to the degenerate point. Depending on
the transverse position of the crystals, the system emits single (spectrum \ref{SpettriFP}b) or
multi (\ref{SpettriFP}a) longitudinal mode radiation with a stability of the order of minutes. In
the multi-longitudinal mode operation, energy conservation results in the symmetric positioning of
the frequency components with respect to the degenerate mode \cite{CrystalEfficiency}. The spatial
mode of the red light has a nearly Gaussian profile. As a check we carefully aligned the FP
analyzer in order to discern the even and odd transverse modes of the cavity \cite{SpectrumFP}.
As reported in Fig. \ref{SpettriFP}c, we can couple 97 \% of the power into the even transverse
modes, indicating that at least 94 \% of the generated power is in the fundamental transverse
mode. While multi-longitudinal mode operation is stable over hours, when the converter emits single
frequency radiation it proves to be stable on timescales of the order of several minutes. Such
stability requires no active stabilization of the crystal positions. Figure
\ref{StabilitaAmpiezza} depicts the amplitude of the generated red light when the frequency
multiplier works in single longitudinal mode. The measured amplitude noise is 1.4 \% RMS on a 50
kHz bandwidth. \\
The single longitudinal mode emission proves that the OPO works at frequency degeneracy, and it is
known that for type-I phase matching (as provided by the periodically poled crystals) in frequency
degenerate OPO's the pump and downconverted fields are phase locked and that they may exhibit
$\pi$ phase jumps \cite{nabors90}. On the other hand the phase of the fundamental field in the
cavity is locked to that of the incident beam because of the pump-cavity resonance condition.
Since the frequency sum process should not add any relevant phase noise, we have evidence that
the 3/2 multiplication process is phase coherent. A comparison with independently generated phase
coherent fields will allow a thorough characterization of the stability of the phase transfer
\cite{stenger02}.
\begin{figure} \vspace{0mm} \begin{center}
\hspace{-0mm}\includegraphics[width=0.8\textwidth,angle=0]{Figure3.eps}
\vspace{-0mm} \caption{\label{StabilitaAmpiezza} Frequency multiplier amplitude stability on 3
minute and 10 ms (inset) timescales under single longitudinal mode emission. The measured RMS
amplitude noise at full power is 1.4 \% on a 50 kHz bandwidth. Under multi longitudinal mode
operation the amplitude stability does not change qualitatively when measured on the same
bandwidth.}
\end{center}
\end{figure}
We determine the conversion efficiency by varying the pump power and measuring the generated power
in the red as a function of the IR power coupled into the cavity \cite{CoupledPump}. We observe a
threshold for OPO oscillation smaller than 50 mW and obtain a 30 \% incremental efficiency above
150 mW pump power coupled into the cavity (see Fig. \ref{Efficiency}). Reducing the intensity of
the pump below 2/3 of the full power raises the amplitude noise in the output, and makes the
system more difficult to operate on a single longitudinal mode. Such degradation can be overcome by
using a different geometry optimized for lower pump levels, with better focussing of the cavity
mode on the nonlinear crystals, and choosing different crystals with higher nonlinear
polarizability \cite{martinelli01}.
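For orientation, the measured numbers can be condensed into a minimal above-threshold model (an illustrative sketch only: a strictly linear slope is assumed, which the data of Fig. \ref{Efficiency} support only above 150 mW of coupled pump; all names are ours):
\begin{verbatim}
def red_output_mW(p_coupled_mW, eta=0.30, p_th_mW=50.0):
    # Toy linear above-threshold model: P_671 = eta * (P_pump - P_th),
    # with the slope efficiency and threshold quoted in the text;
    # below ~150 mW of coupled pump the real device deviates from it.
    return max(0.0, eta * (p_coupled_mW - p_th_mW))

for p in (100, 150, 200, 250):
    print(p, "mW coupled ->", red_output_mW(p), "mW at 671 nm")
\end{verbatim}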
\begin{figure}\vspace{0mm} \begin{center}
\hspace{-0mm}\includegraphics[width=0.8\textwidth,angle=0]{Figure4.EPS}
\vspace{-0mm} \caption{\label{Efficiency} Extracted power at 671 nm as function of the pump power
coupled into the cavity. The vertical gray line indicates the threshold value for a stable single
frequency operation of the converter. The error bars correspond to the RMS amplitude noise.}
\end{center}
\end{figure}
The wavelength tunability of the source can be limited either by the tunability of the fundamental
laser, or by that of the 3/2 frequency multiplier. Typically anti-reflection coated infrared
semiconductor lasers have a tunability of a few percent in wavelength, and in our case the laser can
emit from 990 nm to 1040 nm. Concerning the multiplier, the nonlinear crystals can be temperature
tuned to satisfy quasi-phase-matching at different wavelengths. With our crystals, to generate
radiation at 670 nm, one nm shorter in wavelength, we have to tune the master laser to 1005 nm, cool the
OPO crystal by 5 degrees Celsius, and cool the SFG crystal by 20 degrees. With a given choice of grating periods,
a reasonable temperature tunability of the multiplier is 0.5 \% in wavelength. This can be
extended, without loss of efficiency, to the full 5 \% tunability of the pump by using
multichannel periodically poled crystals \cite{myers95bis}, which include the 10 grating periods
necessary to access the relevant wavelength intervals. The mirrors of the cavity have a flat
response beyond the window accessible through the master laser.
\section{Conclusion}
To summarize, we demonstrated for the first time a scheme to multiply the frequency of continuous
wave optical radiation by a factor of 3/2 while preserving its single frequency character. The
frequency multiplier is phase coherent, has a high conversion efficiency, is very stable, and
stability on a timescale of hours could easily be achieved by optimizing the design of the
opto-mechanical apparatus. Employing existing technology, the scheme will easily find applications
in many disciplines requiring lasers in the green to red spectral interval, such as spectroscopy.
Together with integer harmonic generation, 3/2 frequency multiplication will give access to the
complete visible spectrum via harmonic generation of semiconductor lasers. It makes it possible to
establish phase coherent links among spectral regions 2/3 of an octave apart \cite{ramond02},
and it may considerably simplify the realization of RGB laser systems.\\
It is worth noting that the frequency multiplier also acts as a parity discriminator on the pump
resonant mode. In fact, neglecting cavity dispersion, frequency degenerate and resonant down
conversion can take place only when the pump is resonant on a cavity mode of even order.
This is confirmed by the single frequency emission of the converter with a twofold
periodicity when stepping the cavity length between adjacent pump resonances.
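Neglecting dispersion, with $\Delta$ the cavity free spectral range this condition reads
\begin{equation*}
\nu_{p}=N\Delta,\qquad \frac{\nu_{p}}{2}=M\Delta
\quad\Longrightarrow\quad N=2M,
\end{equation*}
i.e. the pump must sit on a cavity mode of even order $N$.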
\section*{Acknowledgment}
We thank M. Artoni, G. Oppo and N. Poli for a critical reading of the manuscript, R. Ballerini, M.
De Pas, M. Giuntini and A. Hajeb for technical assistance. We are indebted to G.M. Tino, R.
Grimm, F. Schreck and Laser \& Electro-Optic Solutions for the general support and the loan of
parts of the apparatus. We also acknowledge stimulating discussions with C. Salomon. This work was
supported by EU under contract RII3-CT-2003-506350, and Ente Cassa di Risparmio di Firenze.
\end{document}
\section{Introduction}
Spectra of radiation from ultra-relativistic electrons in finite targets often contain appreciable edge effects. One of the common known
examples is transition radiation on discontinuities of dielectric susceptibility, and its interference between the boundaries of a traversed plate
\cite{Ginzburg,trans-rad-interf}. Less well-known but not less important are effects related to actual breaks in the electron trajectory -- e.g., radiation from an electron in a gap between the magnets in
a storage ring \cite{synchr-rad-straight-section}, or for an electron circumscribing a finite arc in a bending magnet \cite{Bagrov-Fedosov-Ternov}.
Studies of edge effects in direct radiation historically began with the simplest problems, in which electron deflection angles are either small enough (dipole radiation) or of the order of unity. More recently, attention was drawn to situations when angles of deflection of a high-energy electron in the target are small compared to unity, but well exceed the inverse Lorentz factor serving as the scale for radiation emission angles. Under such conditions, the radiation is forward-peaked, but highly non-dipole. Conventionally, one measures radiation spectra integrated over photon emission angles.
\begin{figure}
\includegraphics{Diagr1}
\caption{\label{fig:Diagr1} Diagram for collinear-collinear radiation interference from double hard scattered electron (high-$\omega$ spectral region).
}
\end{figure}
An impetus to studies of nondipole radiation from high-energy electrons was given by recent experimental investigation of radiation at double scattering \cite{NA63-plans}, aiming at verification of predictions \cite{Blankenbecler,Zakharov,BK-structured} about interference fringes in spectra of bremsstrahlung on two amorphous foils. Such fringes may basically be described by classical electrodynamics (granted that typical photon energies $\hbar\omega$ are much lower than the electron energy). Starting from the textbook formula for the spectral distribution of radiated energy\begin{eqnarray}\label{dIdomega-through-angles}
\frac{dI}{d\omega}=\omega^2\int d^2n
\left|\frac{e}{2\pi}\int_{-\infty}^{\infty} dt
[\vec{n},\vec{v}(t)]e^{i\omega t-i\vec{k}\cdot\vec{r}(t)}\right|^2,
\end{eqnarray}
(we let $c=1$), one can bring it to the form of a single integral \cite{Bondarenco-Shulga}
\begin{equation}\label{dIdomega-combined}
\left\langle\frac{dI}{d\omega}\right\rangle_{1,2}=\left\langle\frac{dI_{\text{BH}}}{d\omega}\right\rangle_1+\left\langle\frac{dI_{\text{BH}}}{d\omega}\right\rangle_2
-\frac{2e^2\gamma^4}{\pi}\!\int_0^{\infty}\!d\theta^2\theta^2\left\langle
G\right\rangle_1\!\left\langle
G\right\rangle_2\cos\left[\frac{\omega T}{2\gamma^2}(1+\gamma^2\theta^2)\right]
\end{equation}
where $\gamma=1/\sqrt{1-v^2}$ is the electron Lorentz factor, $\theta$ the radiation emission angle with respect to the intermediate electron line,
\begin{equation*}\label{J-scalar-def}
G(\theta)\approx \frac{1}{1+\gamma^{2}\theta^2}
-\frac{1}{2\gamma^{2}\theta^2}\left(1+\frac{\theta^2-\chi^2-\gamma^{-2}}{\sqrt{[\gamma^{-2}+(\theta-\chi)^2][\gamma^{-2}+(\theta+\chi)^2]}}\right),
\end{equation*}
and $\left\langle ...\right\rangle$ denotes the averaging over
electron scattering angles $\chi_1$ and $\chi_2$ in each foil. The
last term in (\ref{dIdomega-combined}) gives rise to oscillatory
spectral behavior
\begin{equation}\label{dI-largelambda}
\left\langle\frac{dI}{d\omega}\right\rangle_{1,2}\underset{\omega\gg\gamma^2/T}\simeq\left\langle\frac{dI_{\text{BH}}}{d\omega}\right\rangle_1+\left\langle\frac{dI_{\text{BH}}}{d\omega}\right\rangle_2
+\frac{8e^2}{\pi}\left\langle G(0)\right\rangle_1\left\langle
G(0)\right\rangle_2 \left(\frac{2\gamma^2}{\omega T}\right)^2 \cos
\frac{\omega T}{2\gamma^2},
\end{equation}
\begin{equation}\label{I-BH}
\frac{dI_{\text{BH}}}{d\omega}(\gamma\chi)\underset{\gamma\chi\gg1}\simeq\frac{2e^2}{\pi}(\ln\gamma^2\chi^2-1)
\end{equation}
[with $G(0)=1-\frac1{\left(1+\gamma^2\chi^2\right)^{2}}$], which is similar to that for interference
of transition radiation from different boundaries of the traversed
plate \cite{trans-rad-interf}, or to that for radiation in a gap of a storage ring \cite{synchr-rad-straight-section}. The phase of the cosine in
Eq.~(\ref{dI-largelambda}) may be understood as the physical ratio
${\omega T}/{2\gamma^2}=T/l_0(\omega)$, where
\begin{equation}\label{l0-def}
l_0(\omega)=\frac{2\gamma^2}{\omega}
\end{equation}
is the ``free" photon formation length. Notably, oscillations
(\ref{dI-largelambda}) are enhanced in the highly non-dipole regime
$\gamma^{-1}\ll\chi\ll1$, when
$G\underset{\chi\gg\gamma^{-1}}\to\frac1{1+\gamma^2\theta^2}$,
becoming independent of the scattering angles which are averaged
over. That is not unnatural, since in the opposite, dipole
limit their average must vanish (dipole Bethe-Heitler contributions
do not interfere on the average).
One may ask, however, why the phase of the interference term
depends on $T/l_0(\omega)$ alone and is independent of the electron scattering angles.
It might as well depend on the scattering-modified photon formation
length
\begin{equation}\label{lchi-def}
l_{\chi}(\omega)\simeq l_f(\omega,\theta)\big|_{\theta\sim\chi}\equiv\frac2{\omega(\gamma^{-2}+\theta^2)}\bigg|_{\theta\sim\chi}\sim\frac2{\omega\chi^2}\qquad\quad (\chi\gg\gamma^{-1})
\end{equation}
relevant, e.g., in the theory of Landau-Pomeranchuk-Migdal effect
\cite{Galitsky-Gurevich}, which, after all, is just another
manifestation of radiation interference. A closer examination
reveals that in the two-foil case, interference effects related with
length (\ref{lchi-def}) are erased by averaging. But for problems
without averaging, they may be viable and observable.
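For orientation, for the deflection angles used below in Fig.~\ref{fig:Oscill} ($\gamma\chi=30$), the two formation lengths differ by a factor $l_0(\omega)/l_{\chi}(\omega)\sim\gamma^2\chi^2\approx10^3$, so the corresponding interference patterns occupy widely separated spectral regions.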
Whereas from Fig.~\ref{fig:Diagr1} it is clear which kinds of photons
are interfering on scale $l_0(\omega)$, for scale $l_{\chi}(\omega)$
that may be less obvious. In what follows, we shall demonstrate that the
interference at low $\omega$ involves nontrivial geometry
both in the longitudinal and transverse directions, and engages not one
but two categories of photons: intra-jet (inside a jet formed by a
temporarily straight moving electron) and inter-jet (between the
jets) \cite{Bond-Shul-double-scat}. Therefore, the mechanism of edge
radiation interference at low $\omega$ is not the same as at high $\omega$, but more intricate.
At the same time, all highly-nondipole radiation problems in finite targets have much in common. In this paper, we propose a generalization for their spectral decomposition. It seems expedient to begin with its statement.
\section{Separation of volume and edge contributions for nondipole radiation in finite targets}\label{sec:decomposition}
To assess edge effects, first of all, it is imperative to define how to
discriminate them from the volume contribution. A natural and generally applicable procedure is to indefinitely expand the target (i.e., formally send its size
$T\to\infty$ with the rest of the parameters held fixed), and split
\begin{equation}\label{vol+edge}
\frac{dI}{d\omega}=\frac{dI_{\text{vol}}}{d\omega}+\frac{dI_b}{d\omega},
\end{equation}
where $\frac{dI_{\text{vol}}}{d\omega}\propto T$ [i.e., $\frac{dI_{\text{vol}}}{d\omega}=T\left(\underset{T\to\infty}\lim\frac1T\frac{dI}{d\omega}\right)$], and $\frac{dI_b}{d\omega} \underset{T\to\infty}=O(1)$.
The leading, ``volume" contribution $\frac{dI_{\text{vol}}}{d\omega}$ is expected to be generated ``locally"
(i.e., at spatial scales much smaller than $T$). So, it must depend on
$\omega$ essentially through the ratio
\begin{equation}\label{hard-scale}
\frac{l_{\text{ext}}}{l_0(\omega)}.
\end{equation}
Here $l_{\text{ext}}=\max\tau(\chi\lesssim\gamma^{-1})$ is the time scale within which the electron deflection angles do not overwhelm typical radiation emission angles $\sim\gamma^{-1}$, and thereby the radiation coherence is maintained. Hence, $l_{\text{ext}}$ plays the role of the external field coherence length.
The remainder $\frac{dI_b}{d\omega}$, which must embody all the edge effects, should depend on variable
(\ref{hard-scale}) somewhat differently, because it is generated in a physically different way. Besides that, it can depend on a
ratio
\begin{equation}\label{soft-scale}
\frac{T}{l_{\chi}(\omega)}.
\end{equation}
But when scales of $\omega$, at which variables (\ref{hard-scale}) and (\ref{soft-scale}) are of the order of unity,
vastly differ ($\chi\gg\gamma^{-1}$), the dependence
of $\frac{dI_b}{d\omega}$ on them may actually be treated as additive.
To gain more insight into the structure of edge effects, it should be realized that in the angular distribution of highly nondipole radiation on a finite target there are always prominent jets. They are related with formation of photons attached to the initial or the final electron line during long times, and thereby being narrowly collimated (to within angles $\sim\gamma^{-1}$ around the electron velocity).\footnote{The term ``jet" is widely used in high-energy hadron physics, where it signifies production of many hadrons of different species and energies in about the same direction. Since the advent of QCD, though, such a term is also applied on the quantum field theoretical, parton level, often involving just a quark and a collinear gluon emitted by it. A similar nomenclature could be extended as well to electrodynamics, where a ``jet" consists of an electron and a photon collinear to it (see also \cite{Carimalo-Schiller-Serbo}).} Besides that, radiation persists also at angles between the jets, being fainter but filling a wider angular region (see, e.g., Fig. 2 of \cite{Bond-Shul-double-scat}). Even though in radiation spectra the emission angles are integrated over, jet effects can survive under conditions of nondipole radiation. In a generic case considered herein, these jets do not overlap in the plane of emission angles, and thus do not interfere, but jet photons from one electron line can interfere with interjet photons emitted at the opposite boundary of the target.
Under the given conditions, the formal additivity of manifestations of the two scales in the radiation spectrum has a clear-cut physical meaning. In decomposition
\begin{equation}\label{hard+soft}
\frac{dI_b}{d\omega}\underset{\chi\gg\gamma^{-1}}\simeq 2\frac{dI_{1b}}{d\omega}\left(\frac{l_{\text{ext}}}{l_0(\omega)}\right)+\frac{dI_{bb}}{d\omega}\left(\frac{T}{l_{\chi}(\omega)}\right),\qquad \frac{dI_{bb}}{d\omega}\underset{T/l_{\chi}\to0}\to0,
\end{equation}
$\frac{dI_{1b}}{d\omega}$ corresponds to a single-boundary contribution, and is normally independent of $T$, as well as of the wide
deflection angles collectively denoted here as $\chi$. On the other hand,
$\frac{dI_{bb}}{d\omega}$ corresponds to effects of interference between the boundaries, and appears to be basically independent of $\gamma$. More precisely, since $\frac{dI_{bb}}{d\omega}$ generally involves some jet effects, too, it bears a
mild $\gamma$-dependence, generally being expressible as\footnote{When the target boundaries are not identical, factor 2 at $A_1$ in Eq.~(\ref{dIsoft=2A1Fj+A2}), as well as factor 2 at $\frac{dI_{1b}}{d\omega}$ in Eq.~(\ref{hard+soft}), must be replaced by summation over the boundaries -- cf. Eq.~(\ref{dIsoftdomega-sum-formfactors}) below.}
\begin{equation}\label{dIsoft=2A1Fj+A2}
\frac{dI_{bb}}{d\omega}=\frac{2e^2}{\pi}\left[2A_1\left(\frac{\omega T\chi^2}{2} \right)F_{\perp}\left(\frac{\omega T\chi}{\gamma} \right)+A_2\left(\frac{\omega T\chi^2}{2} \right)\right].
\end{equation}
Functions $A_1$ and $A_2$ are usually oscillatory and decrease as power laws. They may be called ``quasi-antenna'' formfactors, where the role of the ``antenna'' is played by the \emph{sufficiently bent} electron trajectory (abstracting from the electron as a pointlike particle), which may be treated as a long ``wire'', along which the electric current evoked by the electron motion flows essentially at the speed of light
($\gamma\to\infty$). In turn, $F_{\perp}$ is a monotonously decreasing proper field formfactor, absorbing all the jet effects, and furnishing exponential decrease of the oscillations at intermediate $\omega$.
In separation (\ref{hard+soft}), single and double boundary
contributions are logarithmically divergent individually (cf., e.g., \cite{Goldman}),
\begin{equation}\label{pm-Log-omega}
2\frac{dI_{1b}}{d\omega},
\frac{dI_{bb}}{d\omega}\underset{\omega\to0}\sim\pm\ln\frac1{\omega},
\end{equation}
but the divergences cancel in their
sum.
The quoted formulas pertain to the case when the target edges are
sharp. To take their non-zero width $\Delta T\ll T$ into account, one has
to replace
\begin{equation}\label{}
\frac{dI_{1b}}{d\omega}\to \frac{dI_{1b}}{d\omega} F_{\text{edge}}(\omega\Delta T/\gamma^2),
\end{equation}
with an additional formfactor $F_{\text{edge}}$, whereas
contributions $\frac{dI_{\text{vol}}}{d\omega}$,
$\frac{dI_{bb}}{d\omega}$ are unaltered.
\section{Non-dipole spectral decompositions for specific problems}
\subsection{Radiation in a finite magnet}\label{subsec:magnet}
To illustrate the conjectured decomposition, we consider
a few specific examples. First, consider the process of radiation from a fast electron passing through a long but finite magnet
(see Fig.~\ref{fig:magnet}). This physical problem was studied on general grounds in \cite{Bagrov-Fedosov-Ternov},
but our objective is to decouple single-edge and edge interference contributions based on the long-magnet asymptotics.
In this case, for all $\omega$ it is advantageous to employ
for the radiation spectrum the double time integral representation
[see \cite{BKS} and Eq.~(\ref{dIdomega-photon-proparator}) below].
The result of the integrations gives structure (\ref{vol+edge}), (\ref{hard+soft}) with entries
\begin{equation}\label{dI-synch}
\frac{dI_{\text{vol}}}{d\omega}=2e^2X\left\{-(2\Omega_s)^{1/3}\text{Ai}'\left[(2\Omega_s)^{2/3}\right]
-\Omega_s\int_{(2\Omega_s)^{2/3}}^{\infty}d\alpha
\text{Ai}(\alpha)\right\}
\end{equation}
(representing the synchrotron radiation intensity times the magnet
length, $\Omega_s=\frac{\omega
R}{2\gamma^3}$, $X=\frac{\gamma T}{R}=\gamma\chi$, $R$ stands for the trajectory bending radius),
\begin{eqnarray}\label{Iinterf-Gi}
\frac{\pi}{e^2}\frac{dI_{1b}}{d\omega}=(2\Omega_s)^{\frac23}\pi\text{Gi}\left[(2\Omega_s)^{\frac23}\right]-3\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\nonumber\\
+\int_1^{\infty}\!\!\!\frac{dw}{w-\frac34}\Bigg\{
+\Omega_s^{\frac23}\!\left[2\!\left(\!1-\frac3{4w}\right)^{\!\frac23}
\!\!-w\left(\!1-\frac3{4w}\right)^{\!-\frac13}\right]
\!\pi\text{Gi}\!\left(\Omega_s^{\frac23}w\!\left(\!1-\frac3{4w}\right)^{\!-\frac13}\right)\!\!\Bigg\},\nonumber\\
\qquad\qquad
\end{eqnarray}
where $\text{Gi}(\alpha)=\frac1{\pi}\int_0^{\infty}dt \sin\left(\alpha t+\frac13 t^3\right)$ is the Scorer function, arising naturally instead of the Airy function $\text{Ai}(\alpha)=\frac1{\pi}\int_0^{\infty}dt \cos\left(\alpha t+\frac13 t^3\right)$ in description of single-edge effects, and finally,
\begin{equation}\label{boundary-interf-part-magnet}
\frac{dI_{bb}}{d\omega}=\frac{2e^2}{\pi}\left[2A_{1}\left(\Omega_s
X^3\right)F_{\perp}(\Omega_s X^2)+A_{2}\left(\Omega_s X^3\right)\right],
\end{equation}
where
\begin{equation}\label{Fj-def}
F_{\perp}(z)=zK_1(z),\qquad F_{\perp}(0)=1
\end{equation}
is a proper field form factor \cite{Bond-Shul-double-scat}, whose relevance for the present
process will be elucidated in Sec~\ref{subsec:imp-par},
whereas antenna formfactors are
\begin{equation}\label{A1-magn}
A_{1}=\frac{2}{\Omega_s
X^3}\int_0^{\infty}\frac{du}{(1+u)^2}
\Bigg\{\sin\left[\frac{\Omega_s X^3}{3}(1+u)\right]
-\sin\left[\frac{\Omega_s
X^3}{2}\left(\frac23+u\right)\right]\Bigg\},
\end{equation}
\begin{eqnarray}\label{A2-magn}
A_{2}=\int_0^{\infty}\frac{du}{1+u}\Bigg\{\cos\left[\frac{\Omega_s X^3}{12}(1+3u)\right]-\cos\left[\frac{\Omega_s X^3}{12}(1+u)\right]\nonumber\\
+\frac2{1+u}\cos\left[\frac{\Omega_s X^3}{12}(1+u)^3\right]\Bigg\}.
\end{eqnarray}
\begin{figure}
\includegraphics{magnet}
\caption{\label{fig:magnet} Geometry of radiation from an electron
in a finite magnet. Photon emissions from the three distinct
spatial regions are generally interfering.}
\end{figure}
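As an illustration, the volume term (\ref{dI-synch}) is straightforward to evaluate numerically; a minimal sketch (Python with SciPy; all names are ours) is
\begin{verbatim}
from scipy.special import airy, itairy

def dI_vol(Omega_s, X):
    # Volume (synchrotron) term of Eq. (dI-synch), in units of e^2.
    z = (2.0 * Omega_s) ** (2.0 / 3.0)
    Aip = airy(z)[1]                   # Ai'(z)
    tail = 1.0 / 3.0 - itairy(z)[0]    # int_z^oo Ai, since int_0^oo Ai = 1/3
    return 2.0 * X * (-(2.0 * Omega_s) ** (1.0 / 3.0) * Aip - Omega_s * tail)

# soft-photon region: the spectrum grows as Omega_s^(1/3)
print([round(dI_vol(w, X=30.0), 3) for w in (0.01, 0.1, 1.0)])
\end{verbatim}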
In spite of some bulkiness of the encountered formulae, the present case is the most ``normal" manifestation of decomposition (\ref{vol+edge}), (\ref{hard+soft}): Structure (\ref{boundary-interf-part-magnet}) exactly corresponds to conjectured representation (\ref{dIsoft=2A1Fj+A2}), while (\ref{Iinterf-Gi}) depends solely on $\Omega_s$, which may be cast in form (\ref{hard-scale}), (\ref{l0-def}) with $l_{\text{ext}}=R/\gamma$.
The single-boundary contribution $\frac{dI_{1b}}{d\omega}$ (an analog of transition radiation, arising in the absence of atomic matter) is not positive definite, so it does not give an independent radiation intensity, and needs a sufficient volume radiation background.
Its salient feature is that at high $\omega$, the edge contribution falls off as a (positive) power law
\begin{equation}\label{highomega}
\frac{dI}{d\omega}\underset{\Omega_s\to\infty}\simeq 2\frac{dI_{1b}}{d\omega}\simeq \frac{7e^2}{15\pi\Omega_s^2},
\end{equation}
which must ultimately dominate over the exponentially attenuating volume synchrotron contribution. Qualitative prediction of asymptotic behavior
$\frac{dI}{d\omega}\underset{\omega\to\infty}\sim\omega^{-2}$
was made in \cite{Bagrov-Fedosov-Ternov}.
At low $\omega$, terms (\ref{Iinterf-Gi}) and (\ref{A1-magn}) combine to mutually cancel their logarithmic singularities [cf. Eq.~(\ref{pm-Log-omega})] and ensure the correspondence with the factorization theorem.
A related problem arises for radiation at electron passage through a short crystal (see, e.g., \cite{edge-crystal}), particularly when the electron passing through
the inter-planar channel nearly conserves its impact parameter,
thus experiencing a constant transverse force. In that case, the
target edges are sharper than those for laboratory magnets, but
there arises an additional need for averaging over electron impact
parameters.
\subsection{Bremsstrahlung at double hard scattering}\label{subsec:double-scattering}
A subtler example, already mentioned in the Introduction, arises when instead of interaction with a continuous target, the electron undergoes two instantaneous scatterings. Specifically, let the electron scatter twice, at points separated by a time interval $T$, through angles
$\vec{\chi}_1$, $\vec{\chi}_2$ with a relative azimuth
$\varphi_{12}=\arccos
\left(\vec{\chi}_1\cdot\vec{\chi}_2/|\vec{\chi}_1||\vec{\chi}_2|\right)$,
such that $|\vec{\chi}_1|,|\vec{\chi}_2|\gg\gamma^{-1}$. After
appropriate integrations in (\ref{dIdomega-through-angles}), one
arrives \cite{Bond-Shul-double-scat} at the expression for the radiation spectrum (\ref{vol+edge}), (\ref{hard+soft}), with
\begin{equation}\label{Ivol-IBH}
\frac{dI_{\text{vol}}}{d\omega}=\sum_{i=1}^{2}\frac{dI_{\text{BH}}}{d\omega}(\gamma\chi_i),
\end{equation}
\begin{equation}\label{Ihard-2scat}
\frac{dI_{1b}}{d\omega}=-\frac{e^2}{\pi}\int_0^{\infty}
\frac{d\theta^2\theta^2}{(\gamma^{-2}+\theta^2)^2}\cos\left[\frac{\omega
T}{2\gamma^2}\left(1+\gamma^2\theta^2\right)\right]
\end{equation}
[the latter being similar to the non-dipole limit of the \emph{non-averaged} interference term in (\ref{dIdomega-combined})], and
\begin{eqnarray}\label{dIsoftdomega-sum-formfactors}
\frac{dI_{bb}}{d\omega}=
\frac{2e^2}{\pi}\Bigg[A_1\left(\frac{\omega T\chi_1^2}2,\frac{\omega T\chi_1{\chi}_2}2 e^{i\varphi_{12}}\right)F_{\perp}\left(\frac{\omega T\chi_1}{\gamma}\right)\qquad\qquad\qquad\qquad\nonumber\\
+A_1\left(\frac{\omega T\chi_2^2}2,\frac{\omega T\chi_1{\chi}_2}2
e^{i\varphi_{12}}\right)F_{\perp}\left(\frac{\omega
T\chi_2}{\gamma}\right)
+A_2\left(\frac{\omega T\chi_1{\chi}_2}2
e^{i\varphi_{12}}\right)\Bigg]\quad
\end{eqnarray}
with
\begin{equation}\label{A1-def}
A_1\left(z_1,z_2\right)=-\text{Ci}\left(z_1\right)
+\mathfrak{Re}\left\{\cos z_2\text{Ci}\left(z_1+z_2\right)+\sin
z_2\text{si}\left(z_1+z_2\right)\right\},
\end{equation}
\begin{equation}\label{A2-def}
A_2\left(z\right)=-\mathfrak{Re}\left\{\cos z
\text{Ci}\left(z\right)+\sin z\text{si}\left(z\right)\right\},
\end{equation}
$\text{Ci}(z)=-\int_z^{\infty}\frac{dx}{x}\cos x$,
$\text{si}(z)=-\int_z^{\infty}\frac{dx}{x}\sin x$, and $F_{\perp}$ given by Eq. (\ref{Fj-def}). Contributions (\ref{Ihard-2scat}) and (\ref{A1-def}), (\ref{A2-def}), depending on different photon formation
lengths, exhibit oscillations in different spectral regions (see
Fig.~\ref{fig:Oscill}). Presently, only the hard oscillations (\ref{Ihard-2scat}), corresponding to the formation length $l_0$, have been investigated experimentally \cite{NA63-plans}.
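For the reader's convenience, the boundary-boundary term (\ref{dIsoftdomega-sum-formfactors}) is also easy to evaluate; a minimal numerical sketch (Python with SciPy; all names are ours) for equal coplanar angles is
\begin{verbatim}
import numpy as np
from scipy.special import sici, k1

Ci = lambda z: sici(z)[1]                 # cosine integral Ci(z)
si = lambda z: sici(z)[0] - np.pi / 2.0   # si(z) = Si(z) - pi/2

def A1(z1, z2):   # Eq. (A1-def) for real z2 (coplanar case, phi_12 = 0)
    return -Ci(z1) + np.cos(z2) * Ci(z1 + z2) + np.sin(z2) * si(z1 + z2)

def A2(z):        # Eq. (A2-def)
    return -(np.cos(z) * Ci(z) + np.sin(z) * si(z))

def F_perp(z):    # Eq. (Fj-def): z K_1(z), tending to 1 as z -> 0
    return z * k1(z) if z > 0.0 else 1.0

def dI_bb(w, gamma_chi=30.0):
    # Eq. (dIsoftdomega-sum-formfactors) for chi_1 = chi_2 = chi,
    # phi_12 = 0; here w = omega*T*chi^2/2, result in units of e^2.
    return (2.0 / np.pi) * (2.0 * A1(w, w) * F_perp(2.0 * w / gamma_chi)
                            + A2(w))

print([round(dI_bb(w), 4) for w in (0.5, 2.0, 8.0)])
\end{verbatim}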
\begin{figure}
\includegraphics{Oscill}
\caption{\label{fig:Oscill} (Adapted from Ref.~\cite{Bond-Shul-double-scat}.) Spectrum of electromagnetic radiation from an electron scattering twice through co-planar ($\varphi_{12}=0$) equal angles $\chi=30\gamma^{-1}$. Dot-dashed curve, Eq.~(\ref{dIdomega-O(omega)}).}
\end{figure}
Comparison of the obtained structures with generic
Eqs.~(\ref{hard+soft}), (\ref{dIsoft=2A1Fj+A2}) indicates that
(\ref{dIsoftdomega-sum-formfactors}) is perfectly consistent with
(\ref{dIsoft=2A1Fj+A2}). However, since now $l_{\text{ext}}=\max\tau(\chi\lesssim\gamma^{-1})=T$ (even though angles $\sim\chi$ are also acquired at scattering in single points), (\ref{Ihard-2scat}) depends on $T$,
and therefore it is not quite a single-boundary contribution.
Still, it can be treated as such, in the sense that the photons are formed only at one of the boundaries, while contributions from different boundaries can interfere. But a $T$-dependent scale can no longer be relevant for $\frac{dI_{\text{BH}}}{d\omega}$. Fortunately, since the latter is actually $\omega$-independent [see Eq. (\ref{I-BH})], the problem does not arise. Physically, the encountered anomaly owes to the straightness of the electron motion between the scattering points, so that even if photons are formed at its ends far from each other, they move in the same direction and hence can interfere, as depicted in Fig. \ref{fig:Diagr1}.
A more serious issue arises when initial and final electrons are
collinear [see Fig. 5(c) of \cite{Bond-Shul-double-scat}]. Such effects are not taken into account by the decomposition of Sec. \ref{sec:decomposition}. In particular, they should include also an overlap of two formfactors $F_{\perp}$, corresponding to initial and final electron states. But the latter case may be regarded as exceptional.
\section{Space-time analysis}
Proper understanding of the structure of the encountered spectral
decomposition (\ref{dIsoft=2A1Fj+A2}) requires studying its
space-time origin. Below we will explain in a nutshell the physical
mechanisms behind this structure.
\subsection{Intermediate $\omega$. Ray optics}\label{subsec:imp-par}
As was already mentioned in the Introduction, at high $\omega$ the interference is governed by the longitudinal coherence with length $l_0$.
With the decrease of $\omega$, transverse wavelengths increase and become comparable with the geometrical transverse scales, so a proper treatment of the transverse spatial dimensions becomes increasingly important.
They can be brought out, e.g., by passing, via Fourier transformation, from radiation emission angles to photon impact (or emission) parameters. For single scattering, such a photon impact parameter representation is rather well-known \cite{Bjorken-Kogut-Soper,Zakharov,BK-LPM}:
\begin{equation}\label{K1^2(1-J)}
\frac{dI}{d\omega } =\left(\frac{e}{\pi}\right)^2\int d^2\xi
\left[\frac{\partial}{\partial
\vec{\xi}}K_0(\xi/\gamma)\right]^2\left|1-e^{i\vec{\chi}\cdot
\vec{\xi}}
\right|^2=\frac{dI_{\text{BH}}}{d\omega}(\gamma\chi)
\end{equation}
where $\vec{b}=\vec{\xi}/\omega$ is the impact parameter.
Integration in (\ref{K1^2(1-J)}) recovers form (\ref{I-BH}).
An extension of this approach to double scattering leads to representation
\cite{Bondarenco-Shulga,Bond-Shul-double-scat}
\begin{eqnarray}\label{imp-par}
\frac{dI}{d\omega}=\frac{dI_{\text{BH}}}{d\omega}(\gamma\chi_1)+\frac{dI_{\text{BH}}}{d\omega}(\gamma\chi_2)\qquad\qquad\qquad\qquad\qquad\qquad\nonumber\\
-\frac{e^2}{\pi^3\omega T}\iint d^2 \xi_1 d^2 \xi_2
\frac{\partial}{\partial
\vec{\xi}_1}K_0\left(\frac{\xi_1}{\gamma}\right)
\cdot\frac{\partial}{\partial
\vec{\xi}_2}K_0\left(\frac{\xi_2}{\gamma}\right)\qquad
\nonumber\\
\times \mathfrak{Im}\left\{\left(1-e^{-i\vec{\chi}_1\cdot \vec{\xi}_1}
\right)\left(1-e^{-i\vec{\chi}_2\cdot \vec{\xi}_2} \right)
e^{-i\frac{\omega
T}{2\gamma^{2}}+i\frac{(\vec{\xi}_1-\vec{\xi}_2)^2}{2\omega T}}\right\}.
\end{eqnarray}
On its basis, it is straightforward to demonstrate the onset of ray
optics in the given process: When $\chi\gg\gamma^{-1}$,
exponentials $e^{-i\vec{\chi}_1\cdot \vec{\xi}_1}$,
$e^{-i\vec{\chi}_2\cdot \vec{\xi}_2}$ are rapidly oscillating, and
along with the Gaussian factor
$e^{i\frac{(\vec{\xi}_1-\vec{\xi}_2)^2}{2\omega T}}$, they form
(real) stationary phase points, defining rays parallel to one of the
external electron lines. The monotonous decrease of the
impact-parameter-dependent photon distributions at these impact
parameters gives rise to the proper field formfactor for $A_1$ -- see
Fig.~\ref{fig:Diagr}(a).\footnote{For transition radiation such
effects are impossible in principle, because there the charged
particle trajectory is everywhere rectilinear.} On the contrary,
Fig.~\ref{fig:Diagr}(b) demonstrates the absence of a modulating formfactor for $A_2$.
\begin{figure}
\includegraphics{DiagrA1A2}
\caption{\label{fig:Diagr} (a) Graphical illustration of formation of the jet-interjet interference contribution $A_1$ and its modulating formfactor $F_{\perp}$.
The condition of interference
between collinear and noncollinear photons, besides the
coincidence of emission directions, is the equality of impact
parameters: $l_{\perp}(\omega)={\gamma}/{\omega}$ and $b_i\sim T\chi$. There is also a cross diagram, in which interjet
photons are emitted from the first scattering vertex, and jet
ones from the final electron line.
(b) The same for interjet-interjet interference contribution $A_2$.}
\end{figure}
In the case of radiation in a magnet, it should also be remembered that synchrotron photons within the magnet are stripped at a non-zero intrinsic impact parameter\footnote{Transverse scale (\ref{Delta-x-syn}) is related via $\Delta x_{\text{syn}}=t_{\text{syn}}^2/2R$ to the longitudinal scale such that $t_{\text{syn}}\sim l_{\chi}[\chi(t_{\text{syn}})]$, where $\chi(t)=t/R$ and $l_{\chi}(\chi)=1/\omega\chi^2$, as given by Eq.~(\ref{lchi-def}). Another characteristic transverse parameter \cite{Artru-imp-par-synchr}, equal to $R/2\gamma^2$, is smaller under the present conditions.}
\begin{equation}\label{Delta-x-syn}
\Delta x_{\text{syn}}=\xi_x/\omega\sim R^{1/3}\omega^{-2/3}.
\end{equation}
At small $\omega$, that quantity is large, but with the increase of $\omega$, it ultimately falls below the impact parameter difference between electron entrance and exit from the magnet:
\begin{equation}
b_i=\left|\int_0^T dt\left[\vec{\chi}(t)-\vec{\chi}_i\right]\right|=T^2/2R,
\end{equation}
or
\begin{equation}
b_f=\left|\int_0^T dt\left[\vec{\chi}_f-\vec{\chi}(t)\right]\right|=T^2/2R,
\end{equation}
which are independent of $\omega$. That happens at $\Delta x_{\text{syn}}\ll b_i, b_f$, i.e., $\omega T\chi^2\gg1$, which is exactly the region where oscillations of $A_1$ develop. Then, $b_i, b_f$ become sufficiently sharply defined, so the electron proper field amplitude (\ref{Fj-def}) factors out with $b\to b_i, b_f$, and its exponential falloff at large $\omega$ eventually suppresses the spectral oscillations.
\subsection{Low $\omega$. ``Radio" contribution}
At sufficiently low $\omega$, the ray optics concepts break down, isolated points no longer play a distinguished role, and the entire electron trajectory radiates as a whole. Temporal aspects then come to the foreground.
Formally, at $\omega\to0$, there emerges a hierarchy of time scales, some of which, being reciprocal to $\omega$, expand indefinitely, whereas
others, determined by the target thickness, remain finite. To carry
out the corresponding scale separation self-consistently, it is convenient to start from the representation for the radiation spectrum in which the integration over photon emission angles is performed exactly. The resulting double
time integral representation expresses covariantly as
\cite{Bond-Shul-double-scat}
\begin{equation}\label{dIdomega-photon-proparator}
\frac1{\omega}\frac{dI}{d\omega}
= \frac{e^2}{\pi}\mathfrak{Im} \int_{-\infty}^{\infty}ds_2\int^{s_2}_{-\infty}ds_1 u_{\mu}(t_1)u_{\nu}(t_2)
e^{-i\omega
(t_2-t_1)}D_{\mu\nu}\left(\omega,\left|\vec{r}(t_2)-\vec{r}(t_1)\right|\right)
\end{equation}
and may be viewed as a kind of unitarity relation (see
Fig.~\ref{fig:loop-diagram}). Here $s=t/\gamma$ is the electron
proper time, $u_{\mu}$ its 4-velocity, and $D_{\mu\nu}$ the photon
propagator. Specializing $D_{\mu\nu}$ in the frequency-position
representation and Feynman gauge,
$D_{\mu\nu}(\omega,r)=-\frac{g_{\mu\nu}}{r-i0} e^{i\omega r}$,
reproduces the widely used formula \cite{BKS},
whereas isolating the imaginary part of $\frac1{r-i0}$ and
presenting it in an integral form through a ``vacuum'' term, leads to
the representation of \cite{Blankenbecler-Drell}.
\begin{figure}
\includegraphics{int-d2n-diagram}\includegraphics{loop-diagram}
\caption{\label{fig:loop-diagram} Graphical illustration of unitarity relation (\ref{dIdomega-photon-proparator}).}
\end{figure}
With the aid of formula (\ref{dIdomega-photon-proparator}), one can derive a next-to-leading order (NLO) correction to the low-$\omega$
approximation:
\begin{equation}\label{dIdomega-O(omega)}
\frac{dI}{d\omega}\underset{\omega\to0}\simeq
\frac{dI_{\text{BH}}}{d\omega}(\gamma\chi)+C_1\omega+\mathcal{O}(\omega^2),
\end{equation}
where the first (LO) term is the well-known factorization limit
given by Eq. (\ref{I-BH}), and the coefficient at the NLO term
reads \cite{Bond-NLO}
\begin{equation}\label{C1}
C_1=-\frac{e^2}2 \int_{-\infty}^{\infty} dt
[\vec{\chi}(t)-\vec{\chi}_i]\cdot [\vec{\chi}_f-\vec{\chi}(t)],
\end{equation}
with $\vec{\chi}_i$ and $\vec{\chi}_f$ being the initial and final
electron deflection angles. The physical meaning of $C_1$ is that apart from the $e^2$ factor, it represents the difference
between the time delay for the actual trajectory and for its
angle-shaped approximation \cite{Bond-NLO}.
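As a simple check, for the double-scattering configuration of Fig.~\ref{fig:Oscill} ($\vec{\chi}_i=0$, two equal coplanar kicks of magnitude $\chi$, so that $\vec{\chi}(t)=\vec{\chi}_1$ between the scatterers and $\vec{\chi}_f=2\vec{\chi}_1$), the integrand of (\ref{C1}) is nonzero only between the scattering points, whence
\begin{equation*}
C_1=-\frac{e^2}{2}\,T\,\vec{\chi}_1\cdot(\vec{\chi}_f-\vec{\chi}_1)=-\frac{e^2 T\chi^2}{2},
\end{equation*}
which is the slope entering the dot-dashed line of Fig.~\ref{fig:Oscill}.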
From Eq. (\ref{C1}) it is evident that for monotonic electron deflection (in particular, for the cases considered in Secs.~\ref{subsec:magnet}, \ref{subsec:double-scattering}), always $C_1<0$. Therefore, in those cases the spectrum suppression at low $\omega$ is non-monotonic.
The salient feature of $C_1$ is its independence of $\gamma$ (or electron mass) for a definite trajectory $\vec{\chi}(t)$, i.e., definite particle energy and the field strength. Continuing the analysis under this assumption to all higher orders in $\omega$, one would recover complete antenna formfactors (\ref{dIsoft=2A1Fj+A2}), which are functions of a
$\gamma$- (or mass-) independent product $\omega T\chi^2$.
\section{Bremsstrahlung in an amorphous plate}\label{sec:amorph-plate}
Yet another, and perhaps practically the most important, example of a target with sharp boundaries is a solid plate (in the simplest case, amorphous). The volume contribution in this case is described by
\begin{equation}\label{dIvol-amorph}
\frac{dI_{\text{vol}}}{d\omega}=
\left\langle \frac{dI_{\text{BH}}}{d\omega}\right\rangle \Phi_{M}(s),
\end{equation}
with $\left\langle \frac{dI_{\text{BH}}}{d\omega}\right\rangle=\frac{2e^2}{3\pi}\gamma^2\left\langle\chi^2\right\rangle$ and the Migdal function \cite{Migdal}
\begin{equation}\label{Phi-def}
\Phi_{M}(s)=6s^2\left\{4\mathfrak{Im}\psi\left[(1+i)s\right]-\frac1s-\pi\right\},\qquad \Phi_{M}(s)\underset{s\to\infty}\to1,
\end{equation}
involving $\psi(z)=\Gamma'(z)/\Gamma(z)$ and the argument related to ratio (\ref{hard-scale}) by
\begin{equation}\label{s-def}
8s^2=\frac{l_{\text{ext}}}{l_0(\omega)}=\frac{\omega}{2\gamma^4 d\left\langle\chi^2\right\rangle/d\tau}.
\end{equation}
The single-boundary contribution, evaluated in \cite{Goldman}, may be expressed as
\begin{equation}\label{1b-amorph-def}
\frac{dI_{1b}}{d\omega}=\frac{e^2}{\pi}B(2(1+i)s),
\end{equation}
\begin{eqnarray}\label{B-Goldman-def}
B(\sigma )&=&\mathfrak{Re}\Bigg\{2\sigma \int_0^{\infty}dx E_1\left[\sigma \tanh x\right] e^{-\sigma (x-\tanh x)}-2\nonumber\\
&\,&+\int_0^{\infty}dx e^{-\sigma x}\left(1-\sigma x\right)\left(\coth x-\frac1x\right)\Bigg\},
\qquad B(\sigma)\underset{\sigma\to\infty}\to0,
\end{eqnarray}
and $E_1(z)=\int_{z}^{\infty}\frac{dx}{x}e^{-x}$.
It features a logarithmic infrared divergence. The boundary-boundary interference contribution can be read off from Eq.~(6.13) of \cite{BK-LPM}:
\begin{equation}\label{bb-amorph-def}
\frac{dI_{bb}}{d\omega}=\frac{2e^2}{\pi}A\left(\frac{\omega T\left\langle\chi^2\right\rangle}{2}\right),
\end{equation}
\begin{equation}\label{A-amorph-def}
A(z)=\ln\left(2|\sinh\sqrt{iz}|\right)-\sqrt{z/2},
\end{equation}
where the last term is a subtraction corresponding to the infrared square-root singularity contained in (\ref{dIvol-amorph}), (\ref{Phi-def}); it also ensures that $A(z)\underset{z\to\infty}\to0$.
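Both special functions entering (\ref{dIvol-amorph})--(\ref{A-amorph-def}) are directly computable; a minimal numerical sketch (Python with mpmath; all names are ours) is
\begin{verbatim}
import mpmath as mp

def Phi_M(s):
    # Migdal factor, Eq. (Phi-def): ~ 6s at small s, -> 1 at large s.
    z = mp.digamma((1 + 1j) * s)
    return float(6 * s**2 * (4 * mp.im(z) - 1.0 / s - mp.pi))

def A(z):
    # Boundary-boundary term, Eq. (A-amorph-def): logarithmically
    # divergent at small z, tending to 0 at large z.
    return float(mp.log(2 * abs(mp.sinh(mp.sqrt(1j * z)))) - mp.sqrt(z / 2.0))

print(Phi_M(0.05), Phi_M(5.0))   # -> ~0.25, ~1
print(A(0.01), A(50.0))          # -> ~ -1.7, ~0
\end{verbatim}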
\begin{figure}
\includegraphics{Amorph-spectrum}
\caption{\label{fig:Amorph-spectrum} Normalized spectrum of bremsstrahlung on an amorphous plate, in which the final rms deflection angle equals $\sqrt{\left\langle\chi^2\right\rangle}=5\gamma^{-1}$. Dashed curve is the pure volume contribution (\ref{dIvol-amorph}), (\ref{Phi-def}). Dot-dashed, volume plus two single-boundary contributions (\ref{1b-amorph-def}), (\ref{B-Goldman-def}). Solid curve is the sum of all contributions.
}
\end{figure}
In contrast to (\ref{A1-magn})--(\ref{A2-magn}) and (\ref{A1-def})--(\ref{A2-def}), the contribution (\ref{A-amorph-def}) due to the interference between the boundaries is not oscillatory. That is natural, since such oscillations depend on the final electron deflection angle, which in the present problem is random and is averaged over, thereby erasing the oscillations. The need for the attenuation formfactor $F_{\perp}$ thereby also disappears, and it may be put equal to unity.
The behavior of a typical highly-nondipole radiation spectrum evaluated by Eqs. (\ref{dIvol-amorph})--(\ref{A-amorph-def}), along with its partial contributions, is shown in Fig.~\ref{fig:Amorph-spectrum}. At $\omega\to0$, it tends to the value dictated by the infrared factorization theorem (see \cite{Bondarenco-Shulga} and refs. therein). In practice, at $\omega\to0$ there may also arise a bump due to transition radiation, which was not taken into account within the present treatment \cite{Klein}.
\section{Summary and outlook}
In spite of the diversity of possible electron scattering configurations
in finite targets, the corresponding radiation spectra admit similar decompositions, as described in Sec.~\ref{sec:decomposition}. Understanding of the underlying radiation mechanisms helps to determine which coherence length is relevant for which photon frequency range.
There are certain exceptions to the simplest variant of the decomposition theorem described here. Some of them were mentioned in Secs.~\ref{subsec:double-scattering} and \ref{sec:amorph-plate}. They merit more detailed investigation in the future.
The relationship between the novel interference features discussed in the present article and the jet and interjet, as well as time delay, concepts of high-energy electrodynamics, permitting the neglect of the charged particle mass, opens new vistas for their investigation, with possible extensions to nuclear and hadron physics (cf., e.g., \cite{Eisberg-Yennie-Wilkinson,Dokshitzer}). The parallelism with radio and antenna physics may bring new tools for the theory development. Investigation of the behavior of radiation spectra at low $\omega$ is also important from the applied point of view.
\section*{Acknowledgments}
This work was supported in part by the National Academy of Sciences
of Ukraine (Project CO-1-8/2017) and the Ministry of Education and
Science of Ukraine (Project No. 0115U000473).
\section*{Introduction}
Since the work of Zakharov and Shabat \cite{ZS}, the dressing
techniques have played an important role in the theory of classical
integrable systems. These techniques have been developed by Drinfeld
and Sokolov in \cite{DS} in the framework of affine Toda field
theories. Later, Feigin and one of us proposed (\cite{FF-CIME},
\cite{FF-Inv}) another approach to these theories; this approach was
shown (\cite{E-TMP}, \cite{EFr}) to be equivalent to that of
\cite{DS}. In those works, the space of local fields of the Toda
theory (equivalently, the mKdV hierarchy) associated to an affine Lie
algebra ${\frak g}$ is described as the ring of functions on the coset
space $N_{+}/A_{+}$ of a unipotent subgroup of the Kac-Moody group $G$
corresponding to ${\frak g}$. The mKdV flows are then identified with
the right action of the principal commutative Lie algebra ${\frak a}$
normalizing $A_{+}$, $N_{+}$ being viewed as an open subset of the
flag manifold of $G$. This leads to a system of variables, in which
the flows become linear and hence can be integrated.
In the works on quantization of the Toda theories, an important role
is played by the vertex operator algebra structure on the space of
local fields. At the classical level, this gives rise to what we call
here a vertex Poisson algebra (VPA) structure on the space of local
fields of a Toda theory. The notion of the VPA structure coincides
with the notion of ``coisson algebra'' (on the disc) introduced by
Beilinson and Drinfeld in \cite{BD}. The goal of this work is to
define this and related structures on the space of fields of a Toda
theory in the Lie group terms using the identification described
above.
To this end, we make use of an idea introduced earlier by Feigin and one of
us in \cite{EFe}, where a similar problem was solved in the setting of the
classical lattice Toda theory associated to $\widehat{\frak{sl}}_2$. In
that work, the space of local fields was extended by the ``half screening
charges'' and ``half integrals of motions''. The screening charges and
integrals of motions (IM's) are sums of the lattice translates of certain
expressions, and their ``halfs'' are just the sums of positive translates
of the same expressions. The full space was then identified with the
quotient $G/H$, where $H$ is the Cartan subgroup of $G$. The smaller spaces
of fields, without half screening charges (resp. without half IM's), can be
obtained by taking the quotient of $G/H$ from the left by a Borel subgroup
$B_{-}$ (resp. from the right by the positive part of the loop group of the
Cartan subgroup). In this interpretation, the Poisson structure on the
full space is given by the difference $\ell(R)-r(R^{\infty})$, where
$\ell(R)$ stands for the left action of the trigonometric $r$-matrix $R\in
{\frak g}^{\otimes 2}$ and $r(R^{\infty})$ stands for the right action of
its ``infinitely twisted'' version $R^{\infty}\in {\frak g}^{\otimes
2}$. This leads to a description of the Poisson structures of the
quotients.
In this work, we enlarge the space of local fields of the continuous affine
Toda theories in a similar way, by adding continuous analogues of the half
screening charges and IM's. Using results of \cite{EFe,EFr}, we identify
the full space $\bar\pi_{0}$ obtained this way with the space of functions
on $B_{-}\times N_{+}$. We then study the structure of nonlocal VPA on
$\bar\pi_0$. The axioms for this structure are given in Section 1. The main
feature is that the the Poisson bracket $\{u(x),v(y)\}$, where $u,v$ are
elements of the algebra, can be expressed as a linear combination of local
terms of the form $u_{k}(x)\pa_{x}^{k}\delta(x-y)$ with $k\ge 0$, and of
nonlocal terms of the form $a(x)\pa_{x}^{-1}\delta(x-y)b(y)$.
A similar formalism was introduced by Radul \cite{Radul} in the framework
of formal variational calculus (see also \cite{LR}). Natural examples of
nonlocal VPA's are given by the higher Adler-Gelfand-Dickey (AGD)
structures (i.e. structures obtained from the pair of the first two AGD
structures by application of the Magri recursion procedure) introduced in
\cite{EOR,AvM}.
In Sections 2 and 3 we give a geometric description of the nonlocal
VPA structure of $\bar\pi_{0}=\CC[B_{-}\times N_{+}]$. A generic
element $g=(b_{-},n_{+})$ of $B_{-}\times N_{+}$ can be considered as
the product of expansions of the scattering matrix of the Lax operator
from $-\infty$ to $x$ for a small spectral parameter $\lambda$, and
from $x$ to $+\infty$ for a large $\lambda$.
The Poisson bracket on $N_+$ is obtained by a straightforward
extension of the local VPA structure on $N_+/A_+$. On the other hand,
according to \cite{BLZ} and in the spirit of \cite{Faddeev},
\cite{Bab}, we define the Poisson bracket on $B_{-}$ via the
trigonometric $r$-matrix. We show that the trigonometric Poisson
brackets on $B_{-}$ are compatible with the Poisson brackets of local
fields (see Lemma \ref{lemma2.2}). The Poisson brackets of $N_{+}$ and
between $N_+$ and $B_-$ also have nonlocal parts which we determine in
Lemmas \ref{lemma3.4} and \ref{lemma3.5}. To derive the complete
expression for the Poisson brackets (Thm. \ref{thm3.1}), we use the
evolution equation
\begin{equation} \label{gp}
\pa_{x}g(x)=g(x)p_{-1},
\end{equation}
where $p_{-1}$ is a degree $-1$ element of ${\frak a}$.
After that we obtain another realization of the VPA structures on $N_{+}$
and $N_{+}/A_{+}$ purely in terms of the unipotent group elements (i.e. the
Drinfeld-Sokolov dressing operators) -- see Cor. 3.1 and formula
(\ref{finalPBn-n}). The latter formula could in principle be obtained
directly in the framework of $N_+$, but the simple form (\ref{gp}) of the
action of $\pa_x$ on a generic element $g$ of the whole Kac-Moody group $G$
makes the derivation easier on $G$.
Let us now say a few words about possible applications and extensions of
this work. First, one may think of the following program of quantization of
our results: to formulate quantum axioms corresponding to nonlocal VPA in
the spirit of \cite{BD}\footnote{In the course of writing this paper, we
became aware of several works dealing with vertex operator structures
containing logarithmic terms, see \cite{Gu,Fl} and references therein}; to
quantize the geometric formulas (\ref{finalPBg-g}), (\ref{finalPBn-n}),
(\ref{localversion}) for nonlocal VPA structures; to realize these formulas
in terms of the algebra of local fields of quantum Toda theories. The
combination of small and large limits for the spectral parameter which we
use is reminiscent of the work \cite{BLZ}. Second, it would be interesting
to carry out the present work in the case of higher AGD structures; this
would lead to a family of compatible nonlocal VPA structures on
$B_{-}\times N_{+}$ (we construct such a family in sect. 4, but its
connection with the AGD structures is not clear to us). Next, it would be
interesting to obtain similar results for other soliton equations such as
the nonlinear Schr\"odinger (NLS) equation; in that case ${\frak a}$ should
be replaced by the loop algebra with values in the Cartan subalgebra
${\frak h}$. The geometric interpretation of the NLS variables analogous to
the one used here, was obtained by Feigin and one of us \cite{FF-toappear}
(the case of $\widehat{\frak{sl}}_2$ was also treated in
\cite{ABF}). Finally, the fact that the VPA structure given in
Thm. \ref{thm3.1} is left $G$-invariant leads us to conjecture the
existence of affine Weyl group symmetries of the mKdV hierarchies, mixing
local and nonlocal terms (see remark \ref{rem4.2}). These symmetries are
probably connected with the Darboux transformations.
The first author would like to thank B. Feigin for his collaboration
in \cite{EFe} where the ideas of extension of the space of local
fields were used, and A. Orlov and V. Rubtsov for their collaboration
in \cite{EOR} on the subject of nonlocal Poisson structures, and for
discussions about this work. The second author thanks B. Feigin for
useful discussions. He is also grateful to P. Schapira for his
hospitality at Universit\'{e} Paris VI, where this work was completed.
The first author would like to dedicate this paper to A. Guichardet,
his senior colleague who is about to retire from Ecole Polytechnique.
\section{Nonlocal vertex Poisson algebras.}
Let $(R,\pa)$ be a differential ring, and let $\cA$ be the associated
ring of formal pseudodifferential operators. The ring $\cal A$ has
generators $\wt\pa, \wt\pa^{-1}$, and $i(r)$ for $r\in R$, and
relations $\wt \pa\wt\pa^{-1}=\wt\pa^{-1}\wt\pa=1$,
$[\wt\pa,i(r)]=i(\pa r)$ for $r\in R$, and $i:R\to \cal A$ is an
algebra morphism. In what follows, we will denote $\wt\pa,\wt\pa^{-1}$
and $i(r)$ simply by $\pa,\pa^{-1}$ and $r$, respectively.
\subsection{The modules $\cR_{n}$.}
Let $n\ge 1$ be an integer. We will use the following notation in the
algebra $\cA^{\otimes n}$: $r(x_{i}),\pa_{x_{i}},\pa_{x_{i}}^{-1}$ will stand
for $\otimes_{j=1}^{i-1}1\otimes r\otimes_{j=i+1}^{n}1$,
$\otimes_{j=1}^{i-1}1\otimes \pa\otimes_{j=i+1}^{n}1$,
$\otimes_{j=1}^{i-1}1\otimes \pa^{-1}\otimes_{j=i+1}^{n}1$,
respectively. For $T\in\cA$, we also denote
$\otimes_{j=1}^{i-1}1\otimes T\otimes_{j=i+1}^{n}1$ by $T(x_{i})$.
We define $\cR_{n}$ to be the quotient of $\cA^{\otimes n}$ by the
left ideal $I_{n}$, generated by $\sum_{i=1}^{n}\pa_{x_{i}}$ and
$r(x_{i})-r(x_{j})$ for $r\in R$ and $i,j=1,\ldots,n$. We will
sometimes denote $\cR_{n}$ by $\cR_{n}(R)$. Let us consider $\cR_{n}$
as a left $\cA^{\otimes n}$-module and denote by
$\de_{x_{1},\ldots,x_{n}}$ its generator $1+I_{n}$. We then have the
relations
\begin{equation}
(r(x_{i})-r(x_{j}))\de_{x_{1},\ldots,x_{n}}=0, \quad
(\sum_{i=1}^{n}\pa_{x_{i}})\de_{x_{1},\ldots,x_{n}}=0,
\label{definedelta}
\end{equation}
so that $\de_{x_{1},\ldots,x_{n}}$ plays the role of the distribution
$\prod_{i=2}^{n}\de(x_{1}-x_{i})$.
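(In particular, for $n=2$ these relations read
$r(x_{1})\de_{x_{1}x_{2}}=r(x_{2})\de_{x_{1}x_{2}}$ and
$\pa_{x_{1}}\de_{x_{1}x_{2}}=-\pa_{x_{2}}\de_{x_{1}x_{2}}$, as expected
for the distribution $\de(x_{1}-x_{2})$.)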
The module $\cR_{2}$ admits the following simple description.
Let $T\mapsto T^{*}$
be the anti-automorphism of $\cA$, defined by $r^{*}=r$
and $\pa^{*}=-\pa$. Let us endow $\cA$ with the $\cA^{\otimes 2}$-module
structure, defined by $(a\otimes b)c=acb^{*}$, for $a,b,c\in\cA$. Then
the linear map $op:\cR_{2}\to\cA$, defined by $op((a\otimes
b)\de_{x_{1}x_{2}})=ab^{*}$, is an isomorphism of $\cA^{\otimes
2}$-modules. The inverse map to $op$ is given by $op^{-1}(a)=(a\otimes
1)\de_{x_{1}x_{2}}= (1\otimes a^{*})\de_{x_{1}x_{2}}$.
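For instance, for $r\in R$ we have
$op((1\otimes r\pa)\de_{x_{1}x_{2}})=(r\pa)^{*}=-\pa r$, while the first
relation in (\ref{definedelta}) is mapped by $op$ to the trivial identity
$r-r^{*}=0$.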
\subsection{Definition of the nonlocal VPA structure.}
A nonlocal VPA structure on $(R,\pa)$ is a linear map $P:R\otimes R\to
\cR_{2}$, satisfying the following conditions:
\begin{equation}
P(ab\otimes c) =a(x_{1})P(b\otimes c)+b(x_{1})P(a\otimes c),
\label{Leibniz}\end{equation}
\begin{equation}
P(\pa a\otimes b)=\pa_{x_{1}}P(a\otimes b),
\label{D-linearity}
\end{equation}
\begin{equation}
P(b\otimes a)=-\sigma(P(a\otimes b)),
\label{antisymmetry}
\end{equation}
for $a,b,c\in R$, where $\sigma$ is the involutive automorphism of
$\cR_{2}$ defined by $\si((a\otimes b)\de_{x_{1}x_{2}})=(b\otimes
a)\de_{x_{1}x_{2}}$, for $a,b\in\cA$, and the Jacobi identity that we
formulate below.
Let us define a map $P_{x,yz}:R\otimes \cR_{2}\to \cR_{3}$ by the
following rules (we attach indices $y,z$ to $\cR_{2}$ and $x,y,z$
to $\cR_{3}$):
$$
P_{x,yz}(a\otimes\de_{yz})=0, \quad \quad
P_{x,yz}(a\otimes\pa_{y}m)=\pa_{y} P_{x,yz}(a\otimes m),
$$
$$
P_{x,yz}(a\otimes b(y)T(z)\de_{yz})=(op\circ P)(a\otimes
b)(x)T(z)\de_{xyz}+b(y)P_{x,yz}(a\otimes T(z)\de_{yz})
$$
for $a,b\in R$, $T\in\cA$, $m\in\cR_{2}$.
One can check easily that $P_{x,yz}$ is well-defined by these
conditions. This follows from the identity $P_{x,yz}(a\otimes
\pa_{y}b(y)m)-P_{x,yz}(a\otimes b(y)\pa_{y}m)=P_{x,yz}(a\otimes (\pa
b)(y)m)$, which can be checked by putting $m$ in the form $(1\otimes
T)\de_{yz}$, $T\in \cA$.
The Jacobi identity is then expressed as
\begin{equation}
P_{x,yz}(a\otimes P(b\otimes c))=\si_{xy}[P_{x,yz}(b\otimes
P(a\otimes c))]+\si_{xz}[P_{x,yz}(c\otimes P(b\otimes a))],
\label{Jacobi}
\end{equation}
for any $a,b,c\in R$, where $\si_{xy},\si_{xz}$ are the automorphisms
of $\cR_{3}$, defined by
$$
\si_{xy}(T(x)U(y)V(z)\de_{xyz})=U(x)T(y)V(z)\de_{xyz},
$$
and
$$
\si_{xz}(T(x)U(y)V(z)\de_{xyz})=V(x)U(y)T(z)\de_{xyz},
$$
for $T,U,V\in \cA$.
\medskip
\noindent
\begin{remark} \label{rem1.1}
We may consider elements of $\cR_{2}$ as kernels in $x_{1}$ and
$x_{2}$, expressed as linear combinations of
$r(x_{1})\pa_{x_{1}}^{k}\de(x_{1}-x_{2})$, and think of $P(a\otimes
b)$ as $\{a(x_{1}),b(x_{2})\}$. The terms with $k\ge 0$ are called
local and the terms with $k<0$ are called nonlocal.
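For example, the kernel $\pa_{x_{1}}^{-1}\de(x_{1}-x_{2})$ may be thought
of as the Heaviside-type function $1_{x_{2}<x_{1}}$, equal to $1$ for
$x_{2}<x_{1}$ and to $0$ otherwise (cf. Remark \ref{addrem1} below); this
is the typical nonlocal term appearing in this paper.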
The expression $P_{x,yz}(a \otimes m)$ should then be thought of as
expressing the Poisson bracket of the form $\{a(x),m(y,z)\}$, where
$m$ is some kernel.
Note that $P_{x,yz}$ also has the properties
$$
P_{x,yz}(a\otimes\pa_{z}m)=\pa_{z}
P_{x,yz}(a\otimes m),
$$
$$
P_{x,yz}(a\otimes T(y)b(z)\de_{yz})
=(op\circ P)(a\otimes
b)(x)T(z)\de_{xyz}+b(z)P_{x,yz}(a\otimes T(y)\de_{yz}).
$$
For $a,b,c\in R$, $\{a(x),\{b(y),c(z)\}\}$ is expressed as
$P_{x,yz}(a\otimes P(b\otimes c))$. On the other hand,
$\{b(y),\{a(x),c(z)\}\}$ is expressed as $\si_{xy}[P_{x,yz}(b\otimes
P(a\otimes c))]$, and $\{c(z),\{b(y),$ $a(x)\}\}$ as
$\si_{xz}[P_{x,yz}(c\otimes P(b\otimes a))]$. This explains the
connection between the standard Jacobi identity for the Poisson
brackets and formula (\ref{Jacobi}).
\hfill $\Box$
\end{remark}
\subsection{Connection with the Beilinson-Drinfeld formalism.}
Let $\cA_{+}$ be the subalgebra of $\cA$, generated by $R$ and $\pa$
(the algebra of differential operators). Let us set
$\cR_{2}^{+}(R)=op^{-1}(\cA_{+})$. We will say that $P$ defines
a local VPA, if $P$ takes values in $\cR_{2}^{+}(R)$.
In this case, the notion described here coincides with that of
``coisson algebra'' (on the disc) of Beilinson and Drinfeld
(\cite{BD}). Indeed, let $X=\on{Spec}\CC[t]$, and $D_{X}$ be the
ring of differential operators on $X$. Let $A$ be the algebra $R[t]$,
considered as a $D_{X}$-module by the rule that ${d\over {dt}}$ acts
on $R$ as $\pa$. We extend $P$ to an operation
$$
\{,\}\in
\on{Hom}_{D_{X\times X}}
(A\boxtimes A,\Delta_{*}A)
$$ ($\Delta:X\to X \times X$ denotes the diagonal embedding) as in
\cite{BD}, (0.1.7), in the following way. Let us call $t_{1}=t\otimes
1$ and $t_{2}=1\otimes t$ the coordinates of $X\times X$; $A\boxtimes
A$ is identified with $R\otimes R[t_{1},t_{2}]$, ${\pa\over{\pa
t_{1}}}$ and ${\pa\over{\pa t_{2}}}$ acting on $R\otimes R$ as
$\pa\otimes 1$ and $1\otimes\pa$; on the other hand, $\Delta_{*}A$ is
identified with $\cA_{+}[t]$, with $t_{1,2}$ acting as $t$ and
${\pa\over{\pa t_{1,2}}}$ as ${\pa\over{\pa t}}$.
We then set $\{at_{1}^{n},bt_{2}^{m}\}=t^{n+m}\,op(P(a\otimes b))$, for
$a,b\in R$.
\begin{remark} As explained in \cite{BD}, some local VPAs can
be obtained as a classical limit of a family of chiral algebras (or
vertex operator algebras \cite{Bor,FLM}), in the same way as one
obtains Poisson algebras as a classical limit of a family of
associative algebras. In particular, the local VPA $\pi_0$ described
below is the classical limit of the vertex operator algebra of a
Heisenberg algebra (see Remark 2 in \cite{FF-Inv}).
It would be interesting to generalize the notion of chiral algebra to
allow for nonlocality.
\hfill \qed
\end{remark}
\subsection{A class of nonlocal VPA's.}
In this section we give a construction of a class of nonlocal
VPA's. The results of this section will only be used in the proof of
Prop. 3.3. However, the construction presented here might be of general
interest.
\begin{proposition} \label{prop1.1}
Let $E_{k}$, $k\geq 0$, be the subspace of $\Der(R)^{\otimes 2}$
(where $\Der(R)$ denotes the Lie algebra of derivations of $R$, and
$\Der(R)^{\pa}$ its subspace of derivations commuting with $\pa$),
consisting of all tensors $\sum_{\al}u_{\al}\otimes v_{\al}$, such that
$$
\sum_{\al}(u_{\al}a)(\pa^{i}v_{\al}b)=0, \quad \forall a,b\in R,
\quad i=0,\ldots,k.
$$
Suppose we are given elements $\sum_{\al}x_{i}^{(\al)} \otimes
y_{i}^{(\al)}$ of $\Der(R)^{\otimes 2}$ for all $i\geq -1$.
Assume that
$$
\sum_{\al}x_{i}^{(\al)}\otimes y_{i}^{(\al)}=(-1)^{i+1}\sum_{\al}
y_{i}^{(\al)}\otimes x_{i}^{(\al)},
$$
and that
$$
\sum_{\al}[x_{i}^{(\al)},\pa]\otimes y_{i}^{(\al)}\in \sum_{\al}
x_{i-1}^{(\al)}\otimes y_{i-1}^{(\al)}+E_{i}
$$
(here we set for $i\le -2$, $\sum_{\al} x^{(\al)}_{i}\otimes
y^{(\al)}_{i}=0$; so that $\sum_{\al} x^{(\al)}_{-1}\otimes
y^{(\al)}_{-1}\in (\Der(R)^{\pa})^{\otimes 2}$).
Then the formula
\begin{equation}
P(a\otimes b)=\sum_{i\ge -1}
\sum_{\al}(x_{i}^{(\al)}a)(x)(y_{i}^{(\al)}b)(y) \pa_{x}^{i}\de_{xy},
\label{NLVOA}
\end{equation}
for $a,b\in R$ defines a nonlocal VPA structure on $R$.
\end{proposition}
{\em Proof. \/} The first condition ensures the antisymmetry of $P$, the
second condition is equivalent to the $\pa$-linearity condition
(\ref{D-linearity}). The first condition being satisfied, the Jacobi
identity for $P$ is automatically satisfied: for example, the term
$P_{xy,z}(P(a\otimes b)\otimes c)$ is equal to
$$
\sum_{i,j,\al,\beta}(x_{j}^{(\beta)}x_{i}^{(\al)}a)(x)(y_{i}^{(\al)}b)(y)
(y_{j}^{(\beta)}c)(z)(-1)^{i+j}\pa_{y}^{i}\pa_{z}^{j}\de_{xyz}
$$
$$
+\sum_{i,j,\al,\beta}(x_{i}^{(\alpha)}a)(x)(x_{j}^{(\beta)}
y_{i}^{(\al)}b)(y)
(y_{j}^{(\beta)}c)(z)(-1)^{j}\pa_{x}^{i}\pa_{z}^{j}\de_{xyz}
$$
whose second term is cancelled by
$$
\sum_{i,j,\al,\beta}(x_{j}^{(\beta)}x_{i}^{(\al)}b)(y)(y_{i}^{(\al)}c)(z)
(y_{j}^{(\beta)}a)(x)(-1)^{i+j}\pa_{z}^{i}\pa_{x}^{j}\de_{xyz}
$$
which is a cyclic permutation of the first one.
\hfill $\Box$
\medskip
\noindent
\begin{remark} \label{rem1.2}
Assume that we have $\varpi \in (\Der(R)^{\pa})^{\otimes 2}$, and
$P_{+}:R\otimes R\to \cA_{+}$, such that writing
$\varpi=\sum_{i}\varpi_{i}\otimes \varpi'_{i}$ ($\varpi_{i}$, $\varpi'_{i}$
derivations of $R$ commuting with $\pa$),
\begin{equation} \label{always}
op\circ P(a\otimes b)=P_{+}(a\otimes b)+\sum_{i}\varpi_{i}(a)\pa^{-1}\varpi'_{i}(b).
\end{equation}
In view of the form of $\varpi$, the nonlocal part of axioms
(\ref{antisymmetry}), (\ref{Jacobi}) is
satisfied. The l.h.s. of the Jacobi identity contains terms
of the form
$$a(x)b(y)c(z)\pa_{x_{i}}^{-1}\pa_{x_{j}}^{-1}\de_{xyz}, \quad
a(x)b(y)c(z)\pa_{x_{i}}^{-1}\pa_{x_{j}}^{k}\de_{xyz}, \quad
a(x)b(y)c(z)\pa_{x_{i}}^{k}\pa_{x_{j}}^{\ell}\de_{xyz},
$$
$a,b,c\in R$, $k,\ell\ge 0$, $x_{i}\ne x_{j}$ run through $x,y,z$. The
sum of the terms of the first type then cancels automatically (see
Lemma \ref{lemma2.3}).
All nonlocal VPA structures that we study in this work will be of the
type described here.
Note that in particular, if $P_+$ is $0$, then $P$ given by
(\ref{always}) always defines a VPA structure.
We will denote by $\cA_{-1}$ the span of $\cA_{+}$ and the $a\pa^{-1}
b$, $a,b\in R$, and by $\cR_{2}^{-1}(R)$ the space $op^{-1}(\cA_{-1})$.
\qed
\end{remark}
\subsection{Hamiltonian vector fields.}
We wish to show briefly here how the notions introduced above can be
related to the Gelfand-Dickey-Dorfman theory of formal variational
calculus (see \cite{GD}, and the introduction of \cite{BD}). Let us
assume that $P$ takes values in $\cR_{2}^{+}(R)$, so that $op\circ P$ takes
values in $\cA_{+}$. Define ${\cal V}_{f}\in\End(R)$
by
\begin{equation} \label{hamvect}
{\cal V}_{f}(a)=- \left( op \circ P(a\otimes f) \right) \cdot 1,
\end{equation}
the result of the action of the differential operator $op\circ P(a\otimes f)$ on
$1\in R$. Then ${\cal V}_{f}$ is a derivation of $R$, commuting with
$\pa$. ${\cal V}_{f}$ actually depends only on the class of $f$ in $R/\pa R$,
and is called the Hamiltonian vector field corresponding to the density
$f$. We will also use the notation
$$
{\cal V}_{f}(a)=\{\int_{-\infty}^{\infty} f,a\}=-\{a, \int_{-\infty}^{\infty}
f\},
$$
where $\int_{-\infty}^{\infty} f$ denotes the class of $f$ in $R/\pa R$.
It can be used to define a Lie algebra structure on $R/\pa R$, by
the rule,
$$
\{ \int_{-\infty}^{\infty} f,\int_{-\infty}^{\infty} g \}=
\int_{-\infty}^{\infty} {\cal V}_{f}(g), \quad \quad \forall f,g\in R.
$$
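As a simple illustration of these notions, anticipating the VPA
$\pi_{0}$ of Sect. 2.1 in the case of $\widehat{\frak{sl}}_{2}$ (a single
generator $u$ with $P(u\otimes u)=2\pa_{x_{1}}\de_{x_{1}x_{2}}$), take
$f={1\over 2}u^{2}$. The Leibnitz rule (\ref{Leibniz}) and the antisymmetry
(\ref{antisymmetry}) give
$P(u\otimes f)=-2(1\otimes u\pa)\de_{x_{1}x_{2}}$, so that
$op\circ P(u\otimes f)=-2(u\pa)^{*}=2\pa\circ u$ and
$$
{\cal V}_{f}(u)=-(2\pa\circ u)\cdot 1=-2\,\pa u;
$$
thus, up to normalization, the class of $u^{2}$ in $R/\pa R$ generates the
translation flow $\pa$.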
A derivation $D$ of $R$ which commutes with $\pa$
defines an operation $D_{2}$ on $\cR_{2}$, in the following way. We set
$D_{2}\de_{xy}=0$, and extend $D_{2}$ to the whole $\cR_{2}$ by the
condition that it commutes with $\pa_{x}$ and $\pa_{y}$, and the
formulae
$$
D_{2}(a(x)m(x,y))=a(x)D_{2}m(x,y)+
(Da)(x)m(x,y),
$$
$$
D_{2}(b(y)m(x,y))=b(y)D_{2}m(x,y)+ (Db)(y)m(x,y)
$$
for $m\in
\cR_{2}$, $a,b\in\cA$; this definition makes sense because $D$
commutes with $\pa$.
We then say that $D$ is an infinitesimal automorphism of $R$,
if $P(Da\otimes b)+P(a\otimes Db)=D_{2}P(a\otimes b)$ for $a,b\in R$.
\begin{proposition} \label{prop1.2}
Under the conditions above, ${\cal V}_{f}$ is an
infinitesimal
automorphism of $R$. Moreover, $f\mapsto {\cal V}_{f}$ is a Lie algebra
homomorphism from $R/\pa R$ to $\Der(R)^{\pa}$.
\end{proposition}
{\em Proof\/}. The first part is straightforward and the second is
contained in \cite{GD}.
\hfill $\Box$
\section{Nonlocal extensions of the VPA of local fields $\pi_{0}$.}
\subsection{Notation and definition of $\pi_{0}$.}
Let $\wt{\frak g}$ be an affine Lie algebra, with generators $e_{i}$,
$f_{i}$, $\al^\vee_{i}$, $i=0,\ldots,l$, and $d$, subject to the
relations of \cite{Kac}. Let $a_{i},a_{i}^{\vee}$ be the labels of
$\widetilde{{\frak g}}$, $h$ be its Coxeter number; then
$K=\sum_{i}a_{i}^{\vee}\al^\vee_{i}$ is a generator of the center of
$\wt{\frak g}$. Let $\widehat{\frak g}$ be the subalgebra of
$\wt{\frak g}$ with the same generators except $d$, ${\frak g}$ be the
quotient of $\widehat{\frak g}$ by $\CC K$, and let us denote the same
way elements of $\wt{\frak g}$ and their images in ${\frak g}$. Let
${\frak h}$ be the Cartan subalgebra of ${\frak g}$, generated by the
$\al^\vee_{i}$'s, ${\frak n}_{+}$ and ${\frak n}_{-}$ be the
pronilpotent subalgebras generated by the $e_{i}$'s and the $f_{i}$'s
respectively, and ${\frak b}_{-}={\frak h}\oplus {\frak n}_{-}$. Let
$\alpha_{i}$, $i=0,\ldots,l$ be the simple roots of $\wt{\frak g}$,
positive with respect to this decomposition. For each $x \in \g$ we
can write $x=x_+ + x_-$, where $x_+ \in \n_+, x_- \in {\frak b}_-$.
There is an invariant inner product $\langle , \rangle$ on
$\wt{\frak g}$; let us denote in the same way its restriction to
$\widehat{\frak g}$. Let $\sigma$ be any section of ${\frak g}$ to
$\widehat{\frak g}$; the restriction of $\langle , \rangle$ to
$\sigma({\frak g})$ defines an inner product on ${\frak g}$,
independent of $\sigma$ and again denoted by $\langle , \rangle$.
Let $\wt{\frak h}$ be the Cartan subalgebra of $\G$ spanned by
$\al^\vee_i, i=0,\ldots,l$ and $d$. The restriction of the inner
product $\langle , \rangle$ to $\wt{\frak h}$ is non-degenerate and
hence defines an isomorphism $\wt{\frak h} \simeq \wt{\frak h}^*$. Let
$\omega^\vee_i \in \wt{\frak h}^*$ be the $i$th fundamental coweight,
i.e. it satisfies $\langle \omega^\vee_i,\al_j \rangle = \delta_{i,j},
(\omega^\vee_i,d)=0$. Denote by $h_i$, $h^\vee_i$ the elements of
$\wt{\frak h}$, which are the images of $\alpha_i, \omega^\vee_i \in
\wt{\frak h}^*$ under the isomorphism $\wt{\frak h} \simeq \wt{\frak
h}^*$ (note that this is not a standard notation).
Let $$p_{-1}=\sum_{i=0}^{l} \frac{(\al_i,\al_i)}{2} f_{i},$$ and let
${\frak a}$ be the centralizer of $p_{-1}$ in ${\frak g}$. It is a
commutative subalgebra of ${\frak g}$, called the principal abelian
subalgebra. We have ${\frak a}={\frak a}_{+}\oplus{\frak a}_{-}$,
where ${\frak a}_{\pm}={\frak a}\cap {\frak n}_{\pm}$. Let $I$ be the
set (with multiplicities) of integers, congruent to the exponents of
$\wt{\frak g}$ modulo $h$. Then ${\frak a}_{\pm}$ is generated by
elements $p_{n}$, $n\in\pm I$.
We
normalize the $p_{n}$, $n\in \pm I$, in such a way that
$$
\langle p_{n},p_{-n} \rangle ={1\over h}, \quad n\in I,
$$
where $h$ is the Coxeter number of $\wt{{\frak g}}$.
Let $\pi_{0}$ be the free differential ring generated by $u_{i}$,
$i=1,\ldots,l$; we have $\pi_{0}=\CC[u_{i},\pa u_{i},\ldots]$.
\begin{proposition}[\cite{BD}] \label{prop2.1}
The space $\pi_{0}$ has a VPA structure, defined by the formula
\begin{equation} \label{PBgen}
P(A\otimes B)= \sum_{1\leq i,j \leq l} (\al_{i},\al_{j})
\sum_{n\geq
0} \sum_{m\geq 0} \Big(\frac{\pa A}{\pa u_i^{(n)}} \pa^{n+1} \otimes
\frac{\pa B}{\pa u_j^{(m)}} \pa^m\Big) \delta_{xy}.
\end{equation}
\end{proposition}
\medskip
\noindent
\begin{remark} \label{rem2.1}
This structure is uniquely determined by the following formulas:
$$P(A \otimes 1) = 0, \quad \quad \forall A \in \pi_0,$$ and
\begin{equation}
P( u_{i}\otimes u_{j})=(\al_{i},\al_{j})\pa_{x}\de_{xy}.
\label{PBphi-phi}
\end{equation}
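Indeed, for $A=u_{i}$ and $B=u_{j}$ the only nonzero partial derivatives in
(\ref{PBgen}) are ${\pa u_{i}/\pa u_{i}^{(0)}}={\pa u_{j}/\pa u_{j}^{(0)}}=1$,
so that (\ref{PBgen}) reduces to
$(\al_{i},\al_{j})(\pa\otimes 1)\de_{xy}=(\al_{i},\al_{j})\pa_{x}\de_{xy}$.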
\hfill $\Box$
\end{remark}
The space $\pi_{0}$ also has the structures of ${\frak n}_{+}$-- and
${\frak a}_{-}$--modules, defined in \cite{FF-CIME}. Below we will
extend these structures in three ways.
\subsection{Extension by half IM's.}
Let $B_{-}$ be the ind-algebraic group corresponding to ${\frak b}_{-}$,
and $N_{+}$ be the pro-algebraic Lie group corresponding to ${\frak
n}_{+}$, $A_{+}$ its subgroup corresponding to ${\frak a}_{+}$. Let $G$ be
the group corresponding to ${\frak g}$, containing $B_{-}$ and $N_{+}$ as
subgroups.
Recall from \cite{FF-CIME}, \cite{FF-Inv} the identification of
$\pi_{0}$ and $\CC[N_{+}/A_{+}]$ as rings and ${\frak
n}_{+}$-modules. Moreover, the Lie algebra ${\frak a}_{-}$ acts on
$\CC[N_+/A_+]$ from the right, since we can identify $N_+$ with an open
subspace of $B_{-}\backslash G$. Let $\pa_n$ be the derivation of
$\pi_0$ corresponding to the right action of $p_{-n}$ on $N_+/A_+$. In
particular, $\pa_1 \equiv \pa$.
Consider for $n\in I$, the hamiltonians
$H_{n}\in \pi_{0}$ from \cite{EFr}. They satisfy
$$
\ep_{-\al_i} e_{i} \cdot H_{n}=\pa A_{n}^{(i)},
$$
for certain $A_{n}^{(i)}\in\pi_{-\al_{i}}$ (in the notation of
\cite{EFr}). We also have
$$
\pa_{n}H_{m}=\pa_{m}H_{n}=\pa H_{n,m}
$$
for certain $H_{n,m}\in\pi_{0}$. Following \cite{EFr}, Sect. 4, define
$\pi_{0}^{+}=\pi_{0}\otimes \CC[F_{n}]_{n\in I}$,
and extend the action of $\pa_n$ to it by the formula
\begin{equation}
\pa_{n}F_{m}= H_{n,m}.
\label{actionofhigherflowsonhalfIM}
\end{equation}
(so that $F_{n}$ can be viewed as a ``half integral of motion''
$\int_{-\infty}^x H_{n}$). In particular, $\pa F_m = H_m$.
According to Thm. 5 and Prop. 9 of \cite{EFr}, the ring $\pi_0^+$ is
isomorphic to $\CC[N_+]$, and the action of $\pa_n$ on $\pi_0^+$
defined this way corresponds to the right action of $p_{-n}$ on $N_+$.
Finally, the action of the generators of ${\frak n}_{+}$ is defined by
\begin{equation}
e_{i} \cdot F_{n}=\ep_{-\al_{i}}^{-1} A_{n}^{(i)} = \phi_n(e_i)
\label{actionofgeneratorsofn-onhalfIM}
\end{equation}
(in the notation of \cite{EFr}).
Now we extend the VPA structure on $\pi_{0}$ to a nonlocal VPA
structure on $\pi_{0}^{+}$. For $a\in \pi_{0}$, $n\in I$, we have
\begin{equation} \label{inthn}
\{\int_{-\infty}^{\infty}H_{n},a\}=n\pa_{n}a,
\end{equation}
(\cite{DS}, prop. 4.5) so that
\begin{equation} \label{onemore}
P(H_{n}\otimes a)\in n\pa_{n}a(y)\de_{xy}+\pa_{x}\cR_{2}^{+}(\pi_{0}).
\end{equation}
Let $i_{n}(a)$ be the element of $\cR_{2}^{+}(\pi_{0})$, such that
$P(H_{n}\otimes a)= n\pa_{n}a(y)\de_{xy}+\pa_{x}(i_{n}(a))$
(this defines
$i_{n}(a)$ uniquely, since for $\xi\in\cA$, $\pa\cdot\xi=0$ implies $\xi=0$).
We then define
\begin{equation}
P(F_{n}\otimes a)=n\pa_{n}a(y)
\pa_{x}^{-1}\de_{xy}+i_{n}(a).
\label{PBF-phi}
\end{equation}
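Note that applying $\pa_{x}$ to (\ref{PBF-phi}) recovers the above formula
for $P(H_{n}\otimes a)$, as required by the $\pa$-linearity
(\ref{D-linearity}) and the relation $H_{n}=\pa F_{n}$.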
Next, we have:
$$
P(H_{n}\otimes H_{m})=n\pa_{n}H_{m}(x)\de_{xy}
+\pa_{x}(i_{n}(H_{m})).
$$
Integrating this expression w.r.t. the second variable, we obtain
$$
n\pa_{n}H_{m}(x)+\pa_{x}(\int_{-\infty}^{\infty}i_{n}(H_{m})(x,y)dy).
$$
where
$$
\int_{-\infty}^{\infty}i_{n}(H_{m})(x,y)dy
$$
is defined as $a_{0}(x)$, where $i_{n}(H_{m}) =
\sum_{i\ge 0}a_{i}(x)\pa_{x}^{i}\de_{xy}$.
On the other hand, we obtain from formula (\ref{inthn}) that
$$
\{H_{n}(x),\int_{-\infty}^{\infty}H_{m}\}=-m\pa_{m}H_{n}(x).
$$
Comparing the last two formulas we find:
$\int_{-\infty}^{\infty}i_{n}(H_{m})(x,y)dy=-(n+m)H_{n,m}(x)$ (for
degree reasons it cannot contain constant terms) and so
$i_{n}(H_{m})=-(n+m)H_{n,m}(x)\de_{xy} +\pa_{y}Q_{n,m}$, with
$Q_{n,m}\in\cA_{+}$. Thus, we obtain:
$$
P(H_{n}\otimes H_{m})=n(\pa_{n}H_{m})(x)\de_{xy}-(n+m)
H_{n,m}(y)\pa_{x}\de_{xy}+(\pa Q_{n,m})(x) \pa_y \delta_{xy}.
$$
Using the formula $H_{n} = \pa F_{n}$, we finally obtain:
\begin{equation}
P(F_{n}\otimes F_{m})
=(mH_{n,m}(x)+nH_{n,m}(y))\pa_{x}^{-1}\de_{xy}+Q_{n,m}(x)\delta_{xy}.
\label{PBF-F}
\end{equation}
\begin{proposition} \label{prop2.2}
The formulae (\ref{PBphi-phi}),
(\ref{PBF-phi}), (\ref{PBF-F})
define a nonlocal VPA
structure on $\pi_{0}^{+}$.
\end{proposition}
{\em Proof\/}. To establish the antisymmetry condition, we should check
that
$$
P(F_{n}\otimes F_{m})=-\sigma(P(F_{m}\otimes F_{n}));
$$
this identity is $\pa_{x}^{-1}\pa_{y}^{-1}$ applied to
the same identity, with $H_{n}$ and $H_m$ in place of
$F_{n}$ and $F_m$, which is true. The same argument works with
the bracket $P(F_{n}\otimes a)$, $a\in\pi_{0}$, with only
one application of $\pa^{-1}$.
Let us pass to the Jacobi identity. We should check it for the tensors
$F_{n}\otimes F_{m}\otimes F_{k}$,
$F_{n}\otimes F_{m}\otimes a$, $F_{n}\otimes a\otimes b$,
$a,b\in\pi_{0}$. These identities are
$\pa_{x}^{-1}\pa_{y}^{-1}\pa_{z}^{-1}$,
resp. $\pa_{x}^{-1}\pa_{y}^{-1}$, $\pa_{x}^{-1}$ applied to the same
identities with
$H$'s replacing the $F$'s, which are true.
\hfill $\Box$
\medskip
\begin{proposition} \label{prop2.3}
The action of $\pa_{n}$ on $\pi_0^+$ is an infinitesimal automorphism
of the nonlocal VPA structure on $\pi_{0}^{+}$.
Moreover, we have for all $a\in \pi_{0}^{+}$,
\begin{equation} \label{hamactonpi0+}
P(H_{n}\otimes a)\in n\pa_{n} a(x)\de_{xy} +
\pa_{x}\rho_{n}(a),
\end{equation}
with $\rho_{n}(a)\in \cR_{2}^{-1}(\pi_{0}^{+})$, such that
$$
\rho_{n}(a)\in \sum_{m\in
I}H_{n,m}(x)\rho_{n,m}(a)(y)\pa_{x}^{-1}\de_{xy}
+\cR_{2}^{+}(\pi_{0}^{+}),
$$
where $\rho_{n,m}$'s are linear endomorphisms of $\pi_{0}^{+}$.
\end{proposition}
{\em Proof \/}. Since $n\pa_{n}$ coincides with ${\cal V}_{H_n}$, it
satisfies the infinitesimal automorphism identity on $\pi_{0}^{\otimes
2}$. To see that this identity is also satisfied for the tensors
$F_{n}\otimes a$ ($a\in\pi_{0}$) and $F_{n}\otimes F_{m}$, we remark
that they are $\pa_{x}^{-1}$, resp. $\pa_{x}^{-1}\pa_{y}^{-1}$ applied
to the similar identities for $H_{n}\otimes a$, resp. $H_{n}\otimes
H_{m}$.
According to formula (\ref{onemore}), equation (\ref{hamactonpi0+}) is
satisfied for $a\in \pi_{0}$, with $\rho_{n}(a)$ actually lying in
$\cR_{2}^{+}(\pi_{0})$. Therefore (\ref{hamactonpi0+}) holds for
$a=F_{m}$, as can be seen by applying $\pa_{x}$ to (\ref{PBF-F}). By
Leibnitz rule, if (\ref{hamactonpi0+}) is true for $a, b \in
\pi_0^+$, then it is true for $ab$. Moreover, we find that
$$
\rho_{n}(ab)=a(y)\rho_{n}(b)+b(y)\rho_{n}(a).
$$
This proves (\ref{hamactonpi0+}) and the properties of $\rho_{n}$ in
general. \hfill $\Box$
\medskip
\noindent
\begin{remark}
Formula (\ref{hamactonpi0+}) can be viewed as a nonlocal substitute of
the hamiltonian property of $H_{n}$. \hfill $\Box$
\end{remark}
\subsection{Extension by $\CC[B_{-}]$.}
Let $\CC[B_{-}]$ be the ring of algebraic functions on $B_{-}$. Let
$\wt \pi_{0}=\pi_{0}\otimes \CC[B_{-}]$. We extend the actions of
${\frak n}_{+}$ and ${\frak a}_{-}$ on $\pi_{0}$, to actions of
${\frak g}$ and ${\frak a}_{-}$ on $\wt\pi_{0}$, in the following way.
Recall the identification as rings and ${\frak n}_{+}$-modules, of
$\pi_{0}$ with $\CC[N_{+}/A_{+}]$. Moreover, the ${\frak
a}_{-}$-action on $\pi_{0}$ is identified with its action by vector
fields on $B_{-}\backslash G/A_{+}$ from the right, where
$N_{+}/A_{+}$ is an open subset (recall that $G$ is the group
corresponding to $\frak g$). The ring $\wt\pi_{0}$ is then identified
with the ring of functions on $B_{-}\times N_{+}/A_{+}$.
We define on it the actions of ${\frak g}$ and ${\frak a}_{-}$ as
follows: the value of the vector field generated by $x\in {\frak g}$
at $(b_{-},n_{+}A_{+})$ is
\begin{equation}
(r((\Ad (b_{-}^{-1})(x))_{-}), \ell((\Ad (b_{-}^{-1})(x))_{+}))
\end{equation}
and the value of the vector field $\pa_{n}$ (denoted by $\pa$ for
$n=1$) generated by $p_{-n}$, $n\in I$ is
\begin{equation}
(r((\Ad (n_{+})(p_{-n}))_{-}), \ell((\Ad (n_{+})(p_{-n}))_{+}));
\end{equation}
here $r(y)$ and $\ell(y)$ denote the right and left vector fields
generated by a Lie algebra element $y$. The proof of these formulas is
analogous to the proof of Lemma 1 in \cite{EFr}.
In particular, for $n=1$ we have according to Lemma 2 of \cite{EFr}
$$
\pa n_{+}= (n_+ p_{-1} n_+^{-1})_+ n_+ = - (n_+ p_{-1} n_+^{-1})_- n_+
+ (n_+ p_{-1} n_+^{-1}) n_+ =
$$
\begin{equation}
= -(p_{-1}+\sum_{i}u_{i}h_{i}^{\vee})n_{+}+n_{+}p_{-1}.
\label{evoln+}
\end{equation}
We also have:
\begin{equation}
\pa b_{-}= b_- (n_+ p_{-1} n_+^{-1})_- = b_{-}(p_{-1}+\sum_{i}
u_{i}h_{i}^{\vee}).
\label{evolb-}
\end{equation}
The last two formulas should be considered in an arbitrary
representation of $G$ of the form $V((\la))$, where $V$ is
finite-dimensional (see \cite{EFr}).
Set now
\begin{equation}
P(b_{-}\otimes
u_{i})=-\pa_{y}\big[\Ad(b_{-}(y))(h_{i})b_{-}(x)\pa_{x}^{-1}\de_{xy}\big].
\label{PBb-phi}
\end{equation}
By this formula we mean the following. In any representation of $B_-$
of the form $V((\lambda))$, an element $b_-$ of $B_-$ can be viewed as
a matrix $(b_{-,kl})$ whose entries $b_{-,kl}$ are Taylor series in
$\la^{-1}$ with coefficients in the ring $\CC[B_-]$. Such functions in
fact generate $\CC[B_-]$. The left hand side of formula
(\ref{PBb-phi}) is the matrix whose entries are $P(b_{-,kl} \otimes
u_i)$. The right hand side of the formula is also a matrix of the same
size, whose entries are elements of $\cR_2(\CC[B_-])$. Via Leibnitz
rule, formula (\ref{PBb-phi}) defines $P(f \otimes u_i) \in
\cR_2(\CC[B_-])$ for any $f \in \CC[B_-]$. We interpret similarly
formulas below for $P(b_- \otimes b_-), P(g \otimes g)$, etc.
\medskip\noindent
\begin{remark} \label{addrem1}
In view of (\ref{evolb-}), we consider $b_{-}(x)$ as the ordered
exponential
$$P\exp\int_{-\infty}^{x}(p_{-1}+\sum_{i=1}^{l}u_{i}(z)h_{i}^{\vee})
dz.$$ Note that in any representation $b_{-}(x)$ is represented by the
matrix
\begin{equation} \label{scr}
\left( \on{Id} - \sum_{i=0}^l \frac{(\al_i,\al_i)}{2} f_i S_i + \ldots \right)
\exp \left( \sum_{i=1}^l h_i^\vee \varphi_i(x) \right),
\end{equation}
where $\varphi_i(x) = \int_{-\infty}^x u_i(y) dy$, and $S_i$ is the
$i$th ``half screening'' $$S_i = \int_{-\infty}^x e^{-\varphi_i(y)}
dy.$$ The reason for this terminology is that if in the last formula
we integrate over a closed contour, we obtain the classical limit of
the screening operator of conformal field theory; the sum of the
screening operators coincides with the hamiltonian of the affine Toda
field theory. The other terms in the first factor of formula
(\ref{scr}) can be expressed as consecutive Poisson brackets of the
$S_i$'s.
Poisson bracket (\ref{PBb-phi}) can be informally obtained as
follows. Since
$$
P((p_{-1}+\sum_{i=1}^{l}u_{i}h_{i}^{\vee})\otimes \varphi_{j})
=
-h_{j}\de_{xy},
$$
we can write
$$
\begin{array}{rcl}
\{b_{-}(x),\varphi_{j}(y)\}=
\int_{-\infty}^{x}dz
P\exp\int_{-\infty}^{z}(p_{-1}+\sum_{i=1}^{l}u_{i}h_{i}^{\vee})
(-h_{j}\de_{zy})
\\
P\exp\int_{z}^{x}(p_{-1}+\sum_{i=1}^{l}u_{i}h_{i}^{\vee})
\\
=
-\big(
P\exp\int_{-\infty}^{y}(p_{-1}+\sum_{i=1}^{l}u_{i}h_{i}^{\vee}) \big)
h_{j}
\big(
P\exp\int_{y}^{x}(p_{-1}+\sum_{i=1}^{l}u_{i}h_{i}^{\vee})
\big)1_{y<x}
\\
=-b_{-}(y)h_{j}b_{-}(y)^{-1}b_{-}(x)1_{y<x},
\end{array}
$$
where $1_{y<x}$ is the function of $x,y$ equal to $1$ when $y<x$ and
to $0$ else. For $y$ fixed, this is a shifted Heaviside function in
$x$; applying $\pa_{x}$ to it gives $\delta(x-y)$, so that we identify
$1_{y<x}$ with $\pa_{x}^{-1}\de_{xy}$. Formula (\ref{PBb-phi}) is
obtained by applying $\pa_{y}$ to this identity. \hfill $\Box$
\end{remark}
\begin{lemma} \label{lemma2.1}
Definition (\ref{PBb-phi}) is compatible with the $\pa$-linearity
condition (\ref{D-linearity}) on $P$.
\end{lemma}
{\em Proof \/}. Both
left and right hand sides of (\ref{PBb-phi}) satisfy the same identity
$$
\pa_{x}(\on{lhs})=(\on{lhs})(p_{-1}+\sum_{j} u_{j}(x)h_{j}^{\vee})
- b_{-}(x)h_{i}
\pa_{y}\de_{xy},
$$
and
$$
\pa_{x}(\on{rhs})=(\on{rhs})(p_{-1}
+\sum_{j} u_{j}(x)h_{j}^{\vee}) - b_{-}(x)h_{i}
\pa_{y}\de_{xy}.
$$
\hfill $\Box$
\medskip
Let us define for $b_{-}\in B_{-}$, and $x\in {\frak g}$,
$r(x)(b_{-})=(\Ad b_{-}(x))_{-}b_{-}$ in any representation of $G$
(recall that $x_{-}$ stands for the projection of $x\in{\frak g}$ on
the second factor of the decomposition ${\frak g}={\frak n}_{+}\oplus
{\frak b}_{-}$). This formula defines the right action of ${\frak g}$
on $B_{-}$ viewed as an open subset of $N_{+}\backslash G$.
Let $\Delta_+$ be the set of positive roots of $\G$, and let
$e^{\al}$, $e_{\al}$, $\alpha \in \Delta_+$, be dual bases of ${\frak
n}_{+}$, ${\frak n}_{-}$ for the inner product $\langle, \rangle$.
Let
\begin{equation}
R^{+}={1\over 2}(\sum_{\al}e^{\al}\otimes
e_{\al}+\sum_{i}h_{i}\otimes h_{i}^{\vee} -\sum_{\al}e_{\al}\otimes
e^{\al}),
\end{equation}
\begin{equation}
R^{-}={1\over 2}(\sum_{\al}e^{\al}\otimes
e_{\al}-\sum_{i}h_{i}\otimes h_{i}^{\vee} -\sum_{\al}e_{\al}\otimes
e^{\al}),
\end{equation}
Let us define, in the tensor product of any pair of representations of
$G$,
\begin{equation}
\begin{array}{rcl}
\everymath{\displaystyle}
P(b_{-}\otimes b_{-}) & = & \{\big((r\otimes r)(R^{-})(b_{-}(x)\otimes
b_{-}(x))\big)(1\otimes b_{-}(x)^{-1}b_{-}(y)) \\
& & -\big((r\otimes r)(R^{+})(b_{-}(y)\otimes
b_{-}(y))\big)(b_{-}(y)^{-1}b_{-}(x)\otimes 1)
\}
\pa_{x}^{-1}\de_{xy},
\end{array}
\label{PBb-b}
\end{equation}
where $pr_{-}$ denotes the projection $\g \rightarrow {\frak b}_-$ along
$\n_+$ (it appears in the equivalent form (\ref{convenientPBb-b}) of this
formula below). This formula can be derived using the argument of Remark
\ref{addrem1}. A similar formula has been obtained by Faddeev and
Takhtajan \cite{Faddeev} (in the rational case) and used by Bazhanov,
Lukyanov and Zamolodchikov \cite{BLZ} (see also \cite{Bab}).
Formula (\ref{PBb-b}) can also be written as
\begin{equation}
\begin{array}{rcl}
\everymath{\displaystyle}
P(b_{-}\otimes b_{-}) =
(pr_{-}\otimes pr_{-})[\Ad^{\otimes 2}(b_{-}(x))(R^{-})
-\Ad^{\otimes 2}(b_{-}(y))(R^{+})] \\ (b_{-}(x)\otimes b_{-}(y))
\pa_{x}^{-1}\de_{xy}.
\end{array}
\label{convenientPBb-b}
\end{equation}
It satisfies the antisymmetry condition (\ref{antisymmetry}), because
$$
R^{-}=-R^{+(21)}.
$$
\begin{lemma} \label{lemma2.2}
Formula (\ref{PBb-b}) is compatible with the
$\pa$-linearity condition (\ref{D-linearity}) on $P$. \end{lemma}
{\em Proof\/}. The l.h.s. of (\ref{PBb-b}) satisfies
$$
\begin{array}{rcl}
\everymath{\displaystyle}
\pa_{y}(\on{lhs})=(\on{lhs})(1\otimes (p_{-1}
+\sum_{i} u_{i}(y)h_{i}^{\vee}))
\\
+
\sum_{i}\{b_{-}(x)h_{i}\de_{xy}-\Ad(b_{-}(y))([p_{-1},h_{i}])b_{-}(x)\}
\otimes b_{-}(y)h_{i}^{\vee}\pa_{x}^{-1}\de_{xy},
\end{array}
$$
and the r.h.s. satisfies
$$
\begin{array}{rcl}
\everymath{\displaystyle}
\pa_{y} (\on{rhs}) =
\Big\{[(r\otimes r)(R^{-})(b_{-}(x)\otimes b_{-}(x))][1\otimes
b_{-}(x)^{-1}b_{-}(y)(p_{-1} \\ +\sum_{i}h_{i}^{\vee} u_{i}(y))]\\
-\Big[(r\otimes r)(R^{+})
[(b_{-}(y)\otimes b_{-}(y))(p_{-1}\otimes 1+1\otimes
p_{-1}
-\sum_{i} u_{i}(y)(h_{i}^{\vee}\otimes 1 \\ +1\otimes
h_{i}^{\vee})]\Big] (b_{-}(y)^{-1}b_{-}(x)\otimes 1) \\
+[(r\otimes r)(R^{+}) (b_{-}(y)\otimes
b_{-}(y))][(p_{-1}+\sum_{i} u_{i}(y)h_{i}^{\vee})b_{-}(y)^{-1}
b_{-}(x)\otimes 1]\Big\}
\\ \pa_{x}^{-1}\de_{xy} \\
-[(r\otimes r)(R^{-}-R^{+})\big(b_{-}(x)\otimes b_{-}(x)\big)]\de_{xy}
\end{array}
$$
The $\de_{xy}$--terms coincide because
$R^{+}-R^{-}=\sum_{i}h_{i}\otimes h_{i}^{\vee}$. The
$\pa_{x}^{-1}\de_{xy}$--terms containing $u_{i}(y)$ coincide,
since $R^{+}$ is invariant under conjugation by ${\frak h}$.
The identification of the remaining terms follows from the formula
\begin{equation} \label{rho}
[p_{-1}\otimes 1 + 1\otimes
p_{-1},R^{+}]=\sum_{i}[p_{-1},h_{i}]\otimes h_{i}^{\vee}.
\end{equation}
Let $\rho:{\frak g}\to {\frak g}$ be the linear map defined by
$\rho(x)=\langle R^{+},1\otimes x\rangle$. Formula (\ref{rho}) means
that $[\ad(p_{-1}),\rho]$ is the linear endomorphism of ${\frak g}$,
equal to $0$ on ${\frak n}_{+}$ and ${\frak n}_{-}$ and to
$\ad(p_{-1})$ on ${\frak h}$, which can be easily verified.
\hfill $\Box$
\medskip
\noindent
\begin{remark} \label{rem2.2}
It follows from the proof that the brackets (\ref{PBb-b}) can also be
expressed by replacing $R^{+}$ and $R^{-}$ by
$R^{+}+\kappa\sum_{i}\epsilon^{i}\otimes \epsilon_{i}$ and
$R^{-}+\kappa\sum_{i}\epsilon^{i}\otimes \epsilon_{i}$,
where $(\epsilon^{i})$, $(\epsilon_{i})$ are dual bases of ${\frak g}$. This
follows also from the fact that $[\sum_{i}\epsilon^{i}\otimes
\epsilon_{i}, b_{-}\otimes b_{-}]=0$ for any $b_{-}\in B_{-}$. We
prefer the form given here because then (\ref{antisymmetry}) is
manifestly satisfied. \hfill $\Box$
\end{remark}
\medskip
Let us extend $P$ to $\CC[B_{-}]^{\otimes 2}$ by linearity, to
$\CC[B_{-}]\otimes \pi_{0}$ by formulas (\ref{PBb-phi}),
(\ref{D-linearity}) and Leibnitz rule, to $\pi_{0}\otimes \CC[B_{-}]$
by antisymmetry, and finally to $(\CC[B_{-}]\otimes \pi_{0})^{\otimes
2}$ by the Leibnitz rule.
\begin{proposition} \label{prop2.4}
The operation $P$ on $\CC[B_{-}]\otimes \pi_{0}$ defined by formulas
(\ref{PBphi-phi}), (\ref{PBb-phi}), (\ref{PBb-b}) and the rules above,
satisfies the nonlocal VPA axioms.
\end{proposition}
{\em Proof\/}. By construction, $P$ satisfies the $\pa$-linearity
axiom. The Leibnitz rule is satisfied for the brackets involving
$b_{-}$'s since the right hand sides of (\ref{PBb-phi}) and
(\ref{PBb-b}) respect the tensor structure. For the other brackets,
the Leibnitz rule is satisfied by construction. As was already
mentioned, the antisymmetry of $P$ follows from $R^{-}=-R^{+(21)}$. Let
us check now the Jacobi identity. For the $\CC[B_-]$ part it follows
from the following lemma.
\begin{lemma} \label{lemma2.3}
Let $(R,\pa)$ be a differential ring. Let $R_{0}$ be a subring of $R$,
such that $R$ is generated, as a ring, by the elements $\pa^{i}a$, where
$a \in R_0$ and $i\ge 0$.
Let $\sum_{i}\varpi_{i}\otimes\varpi'_{i}\in S^{2}(\Der(R_{0}))$,
and let us
assume that there exists a linear map $P:R\otimes R\to\cR_{2}$,
satisfying the Leibnitz rule and the $\pa$-linearity, such that
$$
P(a\otimes b)=\sum_{i}(\varpi_{i}a)(x)(\varpi'_{i}b)(y)
\pa^{-1}_{x}\delta_{xy},
$$
for $a,b\in R_{0}$. Then $P$ satisfies the Jacobi identity.
\end{lemma}
{\em Proof\/}. It is enough to check it for elements of $R_{0}$.
We have
$$
\begin{array}{rcl}
\everymath{\displaystyle} P_{xy,z}(P(a\otimes b)\otimes
c)=\sum_{i,j}(\varpi_{j}\varpi_{i}a)(x)
(\varpi'_{i}b)(y)(\varpi'_{j}c)(z)\pa_{x}^{-1}
\pa_{x}^{-1}\de_{xyz}
\\ +(\varpi_{i}a)(x)(\varpi_{j}\varpi'_{i}b)(y)
(\varpi'_{j}c)(z)\pa_{x}^{-1}\pa_{y}^{-1}\de_{xyz};
\end{array}
$$
the first term is cancelled by
$$
\sum_{i,j}
(\varpi_{i}c)(z)(\varpi_{j}\varpi'_{i}a)(x)(\varpi'_{j}b)(y)\pa_{z}^{-1}
\pa_{x}^{-1}\delta_{xyz}
$$
(which is obtained from the second by a cyclic permutation).
\hfill $\Box$ \medskip
The Jacobi identity in the case of the proposition is then proved as
follows: introduce the variables $\varphi_{i}$, $i=1,\ldots,l$; let $\wt
R=\CC[B_{-}]\otimes \CC[\varphi_{i}^{(k)}]$, and identify $\CC[B_{-}]\otimes\pi_{0}$ with a
subalgebra of $\wt R$ by $\pa\varphi_{i}= u_{i}$. Extend the
bracket $P$ to $\wt R$, by the rules
$$
P(\varphi_{i}\otimes\varphi_{j})=-(\alpha_{i},\alpha_{j})\pa_{x}^{-1}
\delta_{xy},
$$
and
$$
P(b_{-}\otimes\varphi_{i})
=-\Ad(b_{-}(y))(h_{i})b_{-}(x)\pa_{x}^{-1}\de_{xy}.
$$
Let now $R_{0}=\CC[B_{-}]\otimes \CC[\varphi_{i}]$; the restriction of $P$
to $R_{0}$ is of the form given in the lemma, so $P$ satisfies the
Jacobi identity.
\hfill $\Box$
\subsection{Extension by $\CC[B_{-}]$ and half IM's}
Consider now the algebra $\bar\pi_{0}=\pi_{0}\otimes\CC[F_{n}]
\otimes\CC[B_{-}]$. We extend the actions of $\pa$ and of the flows
$\pa_{n}$ to it in a way compatible with the embeddings
$\pi_{0}^{+}\subset\bar\pi_{0}$ and
$\wt\pi_{0}\subset\bar\pi_{0}$. We also define a nonlocal VPA
structure on $\bar\pi_{0}$, in such a way that the above embeddings are
nonlocal VPA morphisms, and we define the brackets $P(F_{n}\otimes
b_{-})$ by
\begin{equation} \label{PBF-b}
P(F_{n}\otimes b_{-})=\pa_{x}^{-1} P(H_{n}\otimes b_{-});
\end{equation}
we then extend $P$ to $\bar\pi_{0}^{\otimes 2}$ by antisymmetry and the
Leibnitz rule (it is easy to check that (\ref{PBF-b}) is
compatible with the Leibnitz rule for the products of matrix elements of
$b_{-}$).
\begin{proposition} \label{prop2.5}
The operation $P$ defined by formulas
(\ref{PBphi-phi}), (\ref{PBF-phi}), (\ref{PBF-F}), (\ref{PBb-phi}),
(\ref{PBb-b}), (\ref{PBF-b}), antisymmetry and the
Leibnitz rule, defines a nonlocal VPA structure on
$\bar\pi_{0}$.
\end{proposition}
{\em Proof\/}. The same as in Prop. \ref{prop2.1}.
\hfill $\Box$
\medskip
The rest of this section is devoted to proving that the image of $P$
actually lies in $\cR_{2}^{-1}(\bar\pi_{0})$ (the definition of this
space is in Rem. \ref{rem1.2}). It is enough to prove it for
$P(F_{n}\otimes b_{-})$.
\begin{lemma} \label{addlemma1}
For certain $A_{ni}\in \cR^{+}_{2}(\pi_{0})$, we have
\begin{equation} \label{F-u}
P(F_{n}\otimes (p_{-1}+\sum_{i=1}^{l}u_{i}h_{i}^{\vee}))
=\sum_{i=1}^{l}
\big[n(\pa_{n}u_{i})(y)\pa_{x}^{-1}\de_{xy}
+A_{ni}\big]h_{i}^{\vee}.
\end{equation}
\end{lemma}
{\em Proof\/}. We have
$
\{\int_{-\infty}^{\infty} H_{n}, u_{i}\}
=n\pa_{n}(u_{i}),
$
so that
$
P(H_{n}\otimes u_{i})=n\pa_{n}
u_{i}(y)\de_{xy}+\pa_{x}A_{ni},
$
with $A_{ni}\in \cR_{2}^{+}(\pi_{0})$. Now
$$
P(F_{n}\otimes u_{i})=\pa_{x}^{-1}P(H_{n}\otimes u_{i})=
n(\pa_{n}u_{i}(y))\pa_{x}^{-1}\de_{xy}+A_{ni},
$$
and the statement follows.
\hfill $\Box$
\begin{lemma} \label{addlemma1bis}
$\pa_{n}(\sum_{i=1}^{l}u_{i}(x)h_{i}^{\vee})$ is equal to the Cartan
component of
$$
-[p_{-1},\Ad(n_{+}(x))(p_{-n})].
$$
\end{lemma}
{\em Proof\/}. We will give two proofs of this fact.
Observe that
$[p_{-1}, (\log n_{+}(x))_{1}]=-\sum_{i} u_{i}(x)h_{i}^{\vee}$; also
$$
\pa_{n}((\log n_{+})_{1} )=(\Ad (n_{+})(p_{-n}))_{1}
$$
for any $n_{+}\in N_{+}$ (here $\log$
stands for the inverse of the exponential map $\exp: \n_+ \rightarrow
N_+$, which is an isomorphism, and
the index $1$ means the component of principal degree one of an
element of $\wt{\frak g}$). Hence
$$
\begin{array}{rcl}
\pa_{n}(\sum_{i} u_{i}(x)h_{i}^{\vee})=-\pa_{n}([p_{-1},
(\log n_{+}(x))_{1}]) =-[p_{-1}, \pa_{n}(\log n_{+}(x))_{1}]
\\ =-[p_{-1},(\Ad
(n_{+}(x))(p_{-n}))_{1}].
\end{array}
$$
Alternatively, this statement is a formulation of the Cartan part of
the equation
$$
[\pa+p_{-1}+\sum_{i}u_{i}h_{i}^{\vee},
\pa_{n}-(\Ad(n_{+})(p_{-n}))_{+}]=0,
$$
which follows from the zero-curvature equation (see e.g. formula (10)
of \cite{EFr}) and the fact that $\Ad(n_{+})(p_{-n})$ commutes with
$\pa+p_{-1}+\sum_{i}u_{i}h_{i}^{\vee}$ (see formula (13) of
\cite{EFr}).
\hfill $\Box$
Let us set $A_{ni}=\sum_{k\ge 0}A_{ni}^{(k)}(y)\pa_{x}^{k}\de_{xy}$,
with $A_{ni}^{(k)}\in \pi_{0}$.
\begin{lemma} \label{addlemma2}
Let $\cZ\in \cR_{2}(\bar\pi_{0})$ be such that
\begin{equation} \label{diffeqZ}
\pa_{y}\cZ=\cZ (p_{-1}+\sum_{i=1}^{l}u_{i}(y)h_{i}^{\vee})
+b_{-}(y)\big(\sum_{i=1}^{l}
\big[n(\pa_{n}u_{i})(y)\pa_{x}^{-1}\de_{xy}
+A_{ni}\big]h_{i}^{\vee}\big).
\end{equation}
Then
\begin{equation} \label{possibleZ}
\begin{array}{rcl}
\cZ=n[b_{-}(y)(\Ad (n_{+}(y))( p_{-n}))_{-}- \Ad (b_{-}(x))(\Ad (n_{+}(x))
(p_{-n}))_{-}b_{-}(y)]\pa_{x}^{-1}\de_{xy} \\
+
\Ad(b_{-}(x))\big(\sum_{i=1}^{l}A_{ni}^{(0)}(x)h_{i}^{\vee}\big)
b_{-}(y)\pa_{y}^{-1}\de_{xy} \\
-\sum_{k\ge 0}\pa_{x}^{k} [\Ad(b_{-}(x))\big(\sum_{i=1}^{l}
A_{ni}^{(k+1)}(x)h_{i}^{\vee}\big) b_{-}(y) \de_{xy}].
\end{array}
\end{equation}
\end{lemma}
{\em Proof\/}. Let
$$
\cZ_{0}=n[b_{-}(y)(\Ad (n_{+}(y)) (p_{-n}))_{-}
- \Ad (b_{-}(x))(\Ad
(n_{+}(x))(p_{-n}))_{-}b_{-}(y)]\pa_{x}^{-1}\de_{xy}.
$$
Then
$$
\begin{array}{rcl}
\pa_{y}\cZ_{0}-\cZ_{0}(p_{-1}+\sum_{i=1}^{l}u_{i}(y)h_{i}^{\vee})
=n\big\{ b_{-}(y)[p_{-1}+\sum_{i=1}^{l}u_{i}(y)h_{i}^{\vee}
,(\Ad(n_{+}(y))(p_{-n}))_{-}] \\ -b_{-}(y) \big(
[p_{-1}+\sum_{i=1}^{l}u_{i}(y)h_{i}^{\vee}
,\Ad(n_{+}(y))(p_{-n})]\big)_{-} \big\}\pa_{x}^{-1}\de_{xy} \\ =-n
b_{-}(y)
[p_{-1},\big(\Ad(n_{+}(y))(p_{-n})\big)_{1}]\pa_{x}^{-1}\de_{xy} \\ =
n b_{-}(y) \pa_{n}(\sum_{i=1}^{l}u_{i}(y)h_{i}^{\vee})
\pa_{x}^{-1}\de_{xy},
\end{array}
$$
where the last equality follows from Lemma
\ref{addlemma1bis}. We then have
$$
\pa_{y}(\cZ-\cZ_{0})-(\cZ-\cZ_{0})
(p_{-1}+\sum_{i=1}^{l}u_{i}(y)h_{i}^{\vee})
= b_{-}(y)\sum_{i=1}^{l}A_{ni}h_{i}^{\vee},$$
so that
$$
\pa_{y}[(\cZ-\cZ_{0})b_{-}(y)^{-1}]=\Ad(b_{-}(y))[\sum_{i=1}^{l}
A_{ni}h_{i}^{\vee}],
$$
and the result.
\hfill $\Box$
\begin{proposition} \label{addprop1}
\begin{align*}
P(F_{n}\otimes b_{-}) &= n[b_{-}(y)(\Ad (n_{+}(y))( p_{-n}))_{-} \\ &-
\Ad (b_{-}(x))(\Ad (n_{+}(x)) (p_{-n}))_{-}b_{-}(y)] \pa_{x}^{-1}\de_{xy}
\\ &+ \Ad(b_{-}(x))\big(\sum_{i=1}^{l}A_{ni}^{(0)}(x)h_{i}^{\vee}\big)
b_{-}(y)\pa_{y}^{-1}\de_{xy} \\ &-\sum_{k\ge
0}\pa_{x}^{k}[\Ad(b_{-}(x))\big(\sum_{i=1}^{l}
A_{ni}^{(k+1)}(x)h_{i}^{\vee}\big) b_{-}(y) \de_{xy}]
\end{align*}
and so belongs to $\cR_{2}^{-1}(\wt{\pi}_{0})$.
\end{proposition}
{\em Proof\/}. Indeed, $P(F_{n}\otimes b_{-})$ defined by formula
(\ref{PBF-b}) satisfies (\ref{diffeqZ}), by the Leibnitz rule and
Lemma \ref{addlemma1}, and we apply to it Lemma \ref{addlemma2}.
\hfill $\Box$
\begin{remark} \label{rem2.3}
It follows from Thm. \ref{thm3.1} that the extension of
$\pa_{n}$ to $\bar{\pi}_{0}$, defined again as $r(p_{-n})$, is an
infinitesimal automorphism of the nonlocal VPA structure of
$\bar\pi_{0}$.\hfill $\Box$
\end{remark}
\section{Geometric interpretation of the Poisson structures.}
\subsection{Determination of the nonlocal terms.}
In Sect. 2.4, we have defined a nonlocal VPA $\bar\pi_{0}$. It is
isomorphic as a differential algebra to $\CC[B_{-}\times N_{+}]$
with the derivation $\pa$ defined by the right action of $p_{-1}$ on
$B_{-}\times N_{+}$. We will now describe $P$ in these geometric
terms.
Let us set $t=\sum_{\al}(e^{\al}\otimes e_{\al}+e_{\al}\otimes
e^{\al})+\sum_{i}h_{i}\otimes h_{i}^{\vee}$. Recall that $(p_{-n})_{n\in I}$
is a basis of ${\frak a}_{-}$ dual to $(p_{n})_{n\in I}$ with respect to
the inner product $\langle , \rangle$.
\begin{lemma} \label{lemma3.1}
\begin{equation}
P(b_{-}\otimes b_{-})=-\ell^{\otimes 2}(t)(b_{-}(x)\otimes b_{-}(y))
\pa_{x}^{-1}\de_{xy}
\label{reformulationPBb-b}
\end{equation}
\end{lemma}
{\em Proof \/}.
We have
$$
\begin{array}{lcl}
\everymath{\displaystyle}
(pr_{-}\otimes pr_{-})(\Ad^{\otimes 2}b_{-}(x)R^{-} -
\Ad^{\otimes 2}(b_{-}(y))(R^{+})
) = (pr_{-}\otimes pr_{-})(\Ad^{\otimes 2}(b_{-}(x)) \\ \sum_{\al}e^{\al}
\otimes e_{\al})
+(pr_{-}\otimes pr_{-})(\Ad^{\otimes 2}(b_{-}(y))\sum_{\al}e_{\al}
\otimes e^{\al}) -\sum_{i}h_{i}\otimes h_{i}^{\vee}
\\
= \sum_{\al}(\Ad (b_{-}(x))(e^{\al}))_{-}\otimes \Ad (b_{-}(x))(e_{\al})
+\sum_{\al} \Ad (b_{-}(y))(e_{\al}) \otimes (\Ad (b_{-}(y))(e^{\al}))_{-}
\\ -\sum_{i}h_{i}\otimes h_{i}^{\vee}
\\ =-\sum_{\al}\Ad (b_{-}(x))
(\Ad (b_{-}(x)^{-1})(e^{\al}))_{-}\otimes e_{\al}
-\sum_{\al}e_{\al}\otimes \Ad (b_{-}(y))(\Ad
(b_{-}(y)^{-1})(e^{\al}))_{-}
\\ -\sum_{i}h_{i}\otimes h_{i}^{\vee}. \end{array}
$$
The first equality is obtained using the following arguments:
$R^{-}=\sum_{\al}e^{\al}\otimes
e_{\al}-{1\over 2}t$,
$R^{+}=-\sum_{\al}e_{\al}\otimes e^{\al}+{1\over 2}t$;
$t$ is $B_{-}$-invariant, and $(pr_{-}\otimes pr_{-})t
=\sum_{i}h_{i}\otimes
h_{i}^{\vee}$; the second equality is straightforward, and the
last one is because for $b_{-}\in B_{-}$, we have
\begin{equation}
\sum_{\al}(\Ad (b_{-})(e^{\al}))_{-}\otimes \Ad (b_{-})(e_{\al})
=-\sum_{\al}\Ad (b_{-})(\Ad (b_{-}^{-1})e^{\al})_{-}\otimes e_{\al}.
\label{auxil}
\end{equation}
Formula (\ref{auxil}) can be shown as follows: let $\xi\in{\frak
n}_{+}$, then
$$
\langle \on{lhs\ of\ (\ref{auxil})}, 1\otimes\xi\rangle=(\Ad b_{-}(\Ad
b_{-}^{-1}(\xi))_{+})_{-},
$$
and
$$
\langle \on{rhs\ of\ (\ref{auxil})}, 1\otimes\xi\rangle=-\Ad b_{-}(\Ad
b_{-}^{-1}(\xi))_{-},
$$
with $x_{+}=x-x_{-}$; the two pairings coincide, since their difference
equals $(\Ad (b_{-})(\Ad (b_{-}^{-1})(\xi)))_{-}=\xi_{-}=0$ (the second
term lying in ${\frak b}_{-}$).
Formula (\ref{reformulationPBb-b}) then follows from
(\ref{convenientPBb-b}). \hfill $\Box$ \medskip
\begin{lemma} [\cite{EFr}, Prop. 6] \label{lemma3.3}
In any representation of $G$, $n_{+}(x)$ has the form
\begin{equation}
\bar n_{+}(x) \exp \left( \sum_{n\in I} {1\over n}{{p_{n}F_{n}(x)}}
\right),
\label{formofn+}
\end{equation}
where $\bar n_{+}(x)$ is a matrix of polynomials with entries in
$\CC[u_i^{(n)}]$.
\end{lemma}
Recall from \cite{EFr}, Lemma 1, that in any representation of $N_+$
we have the following formula for the right action of $x \in \g$ on
$N_{+}\subset B_{-}\backslash G$: $$r(x)n_{+}=(\Ad
(n_{+})(x))_{+}n_{+}.$$
\begin{lemma} \label{lemma3.4}
\begin{equation}
\begin{array}{rcl}
P(n_{+}\otimes n_{+})= r^{\otimes 2}(a)[n_{+}(x)\otimes
n_{+}(y)]\pa_{x}^{-1}\de_{xy}
+\on{\ local\ terms,}
\end{array}
\label{PBn-n}
\end{equation}
where $a=\sum_{n\in I\cup(-I)}p_{n}\otimes p_{-n}$.
\end{lemma}
{\em Proof \/}. Let us write $n_{+}$ in the form (\ref{formofn+}) and
compute the nonlocal part of $P(n_{+}\otimes n_{+})$. It comes from
three different terms: $P(F_{n}\otimes \bar n_{+}), P(\bar n_{+}
\otimes F_n), P(F_{n}\otimes F_m)$.
We have
$$
P(F_{n}\otimes n_{+})=
n\pa_{n}n_{+}(y)\pa_{x}^{-1}\de_{xy}+\rho_{n}(n_{+}),
$$
by (\ref{hamactonpi0+}). Set $F=\sum_{n\in I}{1\over
n}p_{n}F_{n}$. We then have $\bar n_{+}=n_{+}e^{-F}$.
The Leibnitz rule gives
\begin{align*}
P(F_{n}\otimes \bar n_{+}) =n\pa_{n}n_{+}(y)e^{-F(y)}\pa_{x}^{-1}\de_{xy}
& +\rho_{n}(n_{+})e^{-F(y)}
+(1\otimes \bar n_{+}(y)) \\
& [\sum_{m\in I}
{-}{1\over
m}(1\otimes p_{m})P(F_{n}\otimes F_{m})].
\end{align*}
Using (\ref{PBF-F}), we obtain
\begin{align*}
P(F_{n}\otimes \bar n_{+}) & \in
n\pa_{n}n_{+}(y)e^{-F(y)}\pa_{x}^{-1}\de_{xy}
+\rho_{n}(n_{+})e^{-F(y)}
+(1\otimes \bar n_{+}(y)) \cdot
\\
& \left( \sum_{m\in I}{-}{1\over
m}(1\otimes p_{m})(mH_{n,m}(x)+n H_{n,m}(y))\pa_{x}^{-1} \right) \de_{xy}
+\cR_{2}^{+}(\pi_{0}),
\end{align*}
so that
\begin{align} \label{tmp}
P(F_{n}\otimes \bar n_{+}) & \in n[(\Ad \bar n_{+}(p_{-n}))_{+}\bar
n_{+}
-\bar n_{+}\sum_{m\in I}{1\over m}p_{m}H_{n,m}](y)\pa_{x}^{-1}\de_{xy}
\\
&
+\rho_{n}(n_{+})e^{-F(y)}
-\sum_{m\in I}H_{n,m}(x)(\bar
n_{+}(y)p_{m})\pa_{x}^{-1}\de_{xy}+\cR_{2}^{+}(\pi_{0}).
\end{align}
Let us show that
$$
\cW=\rho_{n}(n_{+})e^{-F(y)}-\sum_{m\in I}H_{n,m}(x)(\bar
n_{+}(y)p_{m})\pa_{x}^{-1}\de_{xy}
$$
belongs to $\cR_{2}^{+}(\pi_{0})$. Apply $\pa_{x}$ to (\ref{tmp}). The
l.h.s. of the resulting identity belongs to $\cR_{2}^{+}(\pi_{0})$, as
well as
$
\pa_{x}
(n[(\Ad \bar n_{+}(p_{-n}))_{+}\bar
n_{+}
-\bar n_{+}\sum_{m\in I}{1\over
m}p_{m}H_{n,m}](y)\pa_{x}^{-1})\de_{xy}.
$
It follows that
$$
\pa_{x}\cW\in \cR_{2}^{+}(\pi_{0}).
$$
On the other hand, in view of Prop. \ref{prop2.3}, we can write the
nonlocal part of $\cW$ as $\sum_{m\in
I}H_{n,m}(x)w_{m}(y)\pa_{x}^{-1}\de_{xy}$. The result of the action of
$\pa_{x}$ on this nonlocal part should be local; since all $\pa
H_{n,m}$, $m\in I$, are independent, it follows that all $w_{m}$'s
vanish, so that
$$
\cW\in \cR_{2}^{+}(\pi_{0}).
$$
Formula (\ref{tmp}) becomes
$$
P(F_{n}\otimes \bar n_{+})\in n \left( (\Ad \bar
n_{+}(p_{-n}))_{+}\bar n_{+} -\bar n_{+}\sum_{m\in I}{1\over
m}p_{m}H_{n,m} \right)(y)\pa_{x}^{-1}\de_{xy} +\cR_{2}^{+}(\pi_{0}).
$$
We derive from this a similar statement on $P(\bar n_{+}\otimes
F_{n})$. Combining these with
(\ref{PBF-F}), and the fact that $P(\bar
n_{+}\otimes \bar n_{+})$ is contained in $\cR_{2}^{+}(\pi_{0})$,
we obtain (\ref{PBn-n}).
\hfill $\Box$
\medskip
Introduce the following notation. For any tensor $\gamma = \sum_i
\gamma_i \otimes \gamma'_i$, and two operators $a$ and $b$, we write
$a (\gamma^{(1)}) \otimes b(\gamma^{(2)})$ for $\sum_i a(\gamma_i)
\otimes b(\gamma'_i)$.
\begin{lemma} \label{lemma3.5}
\begin{equation}
\begin{array}{rcl}
P(b_{-}\otimes n_{+})=-b_{-}(x)[\Ad (b_{-}^{-1}(x))(t^{(1)})]_{-}\otimes
[\Ad (b_{-}^{-1}(y))(t^{(2)})]_{+}n_{+}(y)\pa_{x}^{-1}\de_{xy} \\
+\sum_{n\in I}b_{-}(x)(\Ad (n_{+}(x))(p_{-n}))_{-}\otimes
n_{+}(y)p_{n}\pa_{x}^{-1}\delta_{xy}
+\on{\ local\ terms;}
\end{array}
\label{prePBb-n}
\end{equation}
this equation should be understood in the tensor product of
two representations of ${\frak g}$.
\end{lemma}
The proof is given in Sect. 5.
\subsection{Determination of VPA structures}
\begin{theorem} \label{thm3.1}
The nonlocal VPA structure of $\bar \pi_{0}$
is expressed, via the identification of $\bar\pi_{0}$ with
$\CC[B_{-}\times N_{+}]$,
by the formula
\begin{equation}
\begin{array}{rcl}
P(g\otimes g)=[-\ell^{\otimes 2}(t)(g(x)\otimes g(y))+r^{\otimes
2}(a)(g(x)\otimes g(y))]\pa_{x}^{-1}\delta_{xy} \\ +
\sum_{n\ge 0}r^{\otimes 2}\Big(((\ad p_{-1})^{-n-1}\otimes
1)(t-a)\Big)(g(x)\otimes g(y))\pa_{x}^{n}\de_{xy}.
\label{finalPBg-g}
\end{array}
\end{equation}
\end{theorem}
{\em Proof \/}. Let us denote by $g(x)$ the pair
$(b_{-}(x),n_{+}(x))$; it lies in the variety $B_{-}\times
N_{+}$. This variety is endowed with left and right actions of ${\frak
g}$, that we denote $\ell$ and $r$. They are defined as follows. The
mapping $B_{-}\times N_{+}\to G$, associating to $(b_{-},n_{+})$ the
product $b_{-}n_{+}$ embeds $B_{-}\times N_{+}$ in $G$ as a Schubert
cell. The left and right actions of ${\frak g}$ on $G$ can be
restricted to $B_{-}\times N_{+}$; so that $\ell(x)$ is the vector
field equal at $(b_{-},n_{+})$ to the sum of $r((\Ad b_{-}(x))_{-})$
on the first component of the product, and $\ell((\Ad b_{-}(x))_{+})$
on the second one, according to
$$
x \cdot b_{-}n_{+}=b_{-}[(\Ad (b_{-})(x))_{-}+(\Ad (b_{-})(x))_{+}]n_{+}.
$$
Likewise, $r(x)$ is the vector field equal at $(b_{-},n_{+})$ to the
sum
of $r((\Ad n_{+}(x))_{-})$ on the first component of the product, and
$\ell((\Ad n_{+}(x))_{+})$ on the second one, since
$$
b_{-}n_{+}\cdot x=b_{-}[(\Ad (n_{+})(x))_{-}+(\Ad (n_{+})(x))_{+}]n_{+}.
$$
Formulas (\ref{reformulationPBb-b}), (\ref{PBn-n}) and (\ref{prePBb-n}) then imply
\begin{equation}
P(g\otimes g)=[-\ell^{\otimes 2}(t)(g(x)\otimes g(y))+r^{\otimes
2}(a)(g(x)\otimes g(y))]\pa_{x}^{-1}\delta_{xy}+\on{ \ local \ terms.}
\label{PBg-g}
\end{equation}
The action of $\pa$ on $g$ coincides with $r(p_{-1})$, due to
formulas (\ref{evolb-}) and (\ref{evoln+}), and we obtain
\begin{equation}
\pa g=r(p_{-1})g.
\label{evolg}
\end{equation}
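(this is formula (\ref{gp}) from the introduction).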
Therefore $P(g\otimes g)$ satisfies the differential equations
$$
\pa_{x}P(g\otimes g)=(r(p_{-1})\otimes 1)P(g\otimes g), \quad
\pa_{y}P(g\otimes g)=(1\otimes r(p_{-1}))P(g\otimes g).
$$
We will use these equations to determine $P(g \otimes g)$ completely.
Let us first determine the local part of the r.h.s. of
(\ref{PBg-g}). Denote it by $\cY$.
We have
\begin{equation}
\pa_{x}\cY- (r(p_{-1})\otimes 1)\cY
=r^{\otimes 2}(t-a)(g(x)\otimes g(x))\de_{xy},
\label{eqlocal1}
\end{equation}
\begin{equation}
\pa_{y}\cY-(1\otimes r(p_{-1}))\cY
=-r^{\otimes 2}(t-a)(g(x)\otimes g(x))\de_{xy}
\label{eqlocal2}
\end{equation}
(we have used the $N_{+}$- and $B_{-}$-invariances of $t$ to replace
$\ell^{\otimes 2}(t)$ by $r^{\otimes 2}(t)$, and that
$(g(x)\otimes g(y))\de_{xy}=(g(x)\otimes g(x))\de_{xy}$).
Recall that ${\frak g}={\frak a}\oplus\Imm (\ad p_{-1})$; this is an
orthogonal decomposition in ${\frak g}$; $\ad p_{-1}$ is
an automorphism of $\Imm(\ad p_{-1})$, and $t-a$ belongs to
$(\Imm(\ad
p_{-1}))^{\otimes 2}$.
Then
\begin{equation}
\cY_{0}= \sum_{n\ge 0}r^{\otimes 2}\Big(((\ad p_{-1})^{-n-1}\otimes
1)(t-a)\Big)(g(x)\otimes g(y))\pa_{x}^{n}\de_{xy}
\label{Azero}
\end{equation}
is a solution to (\ref{eqlocal1}); to see it, one should apply the
Leibnitz rule, formula (\ref{evolg}) and note the cancellation of all
terms except the one in $\de_{xy}$. It is also a solution of
(\ref{eqlocal2}) because $((\ad p_{-1})^{n}\otimes
1)(t-a)=(-1)^{n}(1\otimes (\ad p_{-1})^{n})(t-a)$. Indeed, this
identity can be proved by writing $t-a=\sum_{\al}\widehat
e^{\al}\otimes\widehat e_{\al}$, where $\widehat e^{\al}$, $\widehat
e_{\al}$ are dual bases of $\Imm(\ad p_{-1})$, and using the
anti-selfadjointness of $\ad p_{-1}$.
Let $P_{0}$ be the operation defined by (\ref{PBg-g}) and
(\ref{Azero}). It defines a nonlocal VPA structure on
$(\CC[B_{-}\times N_{+}],r(p_{-1}))$ by virtue of Prop. \ref{prop1.1}
(the elements of $E_{i}$ in the second condition being $\ell^{\otimes
2}(t)-r^{\otimes 2}(t)$
for $i=0$, and $0$ for $i>0$).
The theorem follows from the following lemma which is proved in Sect. 5.
\begin{lemma} \label{lemma3.6}
$P_{0}$ is equal to $P$.
\end{lemma}
\hfill \qed
\begin{remark} \label{rem3.1}
The variety $B_{-}\times N_{+}$ can be considered as an open subset of
$G$. One can show that formula (\ref{compatPBg-g}) defines a nonlocal
VPA structure on the whole group $G$.
\hfill \qed
\end{remark}
It is easy to derive now a formula for the nonlocal
VPA structure of $\CC[N_{+}]$.
\begin{corollary} \label{corollary3.1}
The nonlocal VPA structure of $\pi_{0}^{+}$
is expressed, via the identification of $\pi_{0}^{+}$ with $\CC[N_{+}]$,
by the formula
\begin{equation}
\begin{array}{rcl}
P(n_{+}\otimes n_{+})=r^{\otimes
2}(a)(n_{+}(x)\otimes n_{+}(y))\pa_{x}^{-1}\delta_{xy} \\ +
\sum_{n\ge 0}r^{\otimes 2}\Big(((\ad p_{-1})^{-n-1}\otimes
1)(t-a)\Big)(n_{+}(x)\otimes n_{+}(y))\pa_{x}^{n}\de_{xy}.
\label{finalPBn-n}
\end{array}
\end{equation}
\end{corollary}
We now reformulate (\ref{finalPBn-n}) so as to make it clear that the
$A_+$--invariant functions on $N_{+}$ have only local Poisson brackets,
expressed in terms of $A_+$--invariant functions on $N_{+}$. Let us
denote
$$
(t-a)_{k}=\big((\ad p_{-1})^{-k-1}\otimes 1\big)(t-a).
$$
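(This definition makes sense, since $t-a$ belongs to $(\Imm(\ad
p_{-1}))^{\otimes 2}$ and $\ad p_{-1}$ is an automorphism of $\Imm(\ad
p_{-1})$, as was recalled in the proof of Thm. \ref{thm3.1}.)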
\begin{corollary} \label{corollary3.2}
\begin{align} \label{nbarversion}
& P(n_{+}\otimes n_{+})=r^{\otimes
2}(a)(n_{+}(x)\otimes n_{+}(y))\pa_{x}^{-1}\delta_{xy} \\ \notag & +
\sum_{k\ge 0}
\big\{\Ad (\bar n_{+}(x))\big[(\pa_{x}-\sum_{n\in I}{1\over n}H_{n}(x)
\ad p_{n})^{k}((t-a)_{k}^{(1)})\big]
\big\}_{+} n_{+}(x)
\\ \notag
& \otimes
\big(
\Ad(\bar n_{+}(y))(t-a)_{k}^{(2)}
\big)_{+}n_{+}(y) \de_{xy}.
\end{align}
\end{corollary}
{\em Proof \/}. Let $F(x)=\sum_{n\in I}{1\over n}p_{n}F_{n}(x)$, and
$\alpha \in\G$. Then
\begin{equation} \label{identforlocal}
\Ad (e^{F(x)-F(y)})(\alpha\pa_{x}^{n}\de_{xy})=[\pa_{x}-\ad
(F'(x))]^{n}(\alpha\de_{xy}).
\end{equation}
Indeed, this is obvious for $n=0$. Assume that it is true for $n$, and
apply $\pa_{x}$ to the corresponding identity. We find
$$
\begin{array}{rcl}
\Ad(e^{F(x)-F(y)})(\alpha\pa_{x}^{n+1}\de_{xy})
+[F'(x), \Ad(e^{F(x)-F(y)})(\alpha\pa_{x}^{n}\de_{xy})]
=
\\
\pa_{x}\{[\pa_{x}-\ad
(F'(x))]^{n}(\alpha\de_{xy})\};
\end{array}
$$
but the l.h.s. of this identity is expressed as
$$
\Ad(e^{F(x)-F(y)})(\alpha\pa_{x}^{n+1}\de_{xy})
+[F'(x), \big(\pa_{x}-\ad
(F'(x))\big)^{n}(\alpha\de_{xy})],
$$
and we obtain (\ref{identforlocal}) at step $n+1$.
Now formula (\ref{nbarversion}) follows directly from
(\ref{finalPBn-n}) and (\ref{identforlocal}). In fact, the local part
of formula (\ref{finalPBn-n}) can be rewritten as
$$\sum_{k\geq 0} \left( \Ad(n_+(x) \otimes n_+(y)) (t-a)_k\right)_+
n_+(x) \otimes n_+(y) \pa_x^k \de_{xy}.$$ After substituting
$n_+(x) = \bar{n}_+(x) e^{F(x)}$ in this formula we obtain
$$\sum_{k\geq 0} \left( \Ad(\bar{n}_+(x) \otimes \bar{n}_+(y))
\Ad(e^{F(x)} \otimes e^{F(y)}) (t-a)_k\right)_+ n_+(x) \otimes n_+(y)
\pa_x^k \de_{xy},$$ which is equal to
$$\sum_{k\geq 0} \left( \Ad(\bar{n}_+(x) \otimes \bar{n}_+(y))
\Ad(e^{F(x)-F(y)} \otimes 1) (t-a)_k\right)_+ n_+(x) \otimes n_+(y)
\pa_x^k \de_{xy}$$ by ${\frak a}_+$--invariance of $(t-a)_k$. Applying
formula (\ref{identforlocal}) we obtain (\ref{nbarversion}).
\hfill
$\Box$
Now let $V((\la))$, $W((\mu))$ be $\G$-modules, and $v\in V$, $w\in W$
be such that ${\frak a_{+}}v=0$, ${\frak a_{+}}w=0$. The matrix
coefficients of $n_{+}(x)v$, $n_{+}(y)w$ are composed of functions on
$N_{+}/A_{+}$, or, equivalently, of elements of $\pi_{0}$. Moreover,
all elements of $\CC[N_{+}/A_{+}]$ can be obtained this way. Formula
(\ref{nbarversion}) implies that
\begin{align} \label{localversion}
& P(n_{+}v\otimes n_{+}w)= \\ \notag
& \sum_{k\ge 0}
\big\{\Ad (\bar n_{+}(x))\big[(\pa_{x}-\sum_{n\in I}{1\over n}H_{n}(x)
\ad p_{n})^{k}((t-a)_{k}^{(1)})\big]
\big\}_{+} n_{+}(x)v \\ \notag & \otimes
\big( \Ad(\bar n_{+}(y))(t-a)_{k}^{(2)} \big)_{+}n_{+}(y)w \de_{xy};
\end{align}
since $\bar n_{+}(x)$, $\bar n_{+}(y)$ are matrices whose entries are
elements of $\pi_{0}$, (\ref{localversion}) shows at the same time that
the Poisson brackets of the entries of $n_{+}(x)v$ and $n_{+}(y)w$ are
local and expressed in terms of functions of $N_{+}/A_{+}$.
In the theory of the mKdV equations, an important role is played by
the embedding of $N_{+}/A_{+}$ in $\G$ as an $N_+$--coadjoint orbit of
an element of ${\frak a}$. This motivates us to compute the Poisson
brackets of the matrices $\Ad(n_{+})(p_{n})$. The result is a
direct consequence of Cor. \ref{corollary3.1}.
\begin{corollary} \label{corollary3.3} For $n,m\in\pm I$,
\begin{align*}
& P(\Ad(n_{+})(p_{n})\otimes \Ad(n_{+})(p_{m}))=
\\
& \sum_{k\ge 0}
\big[
\big\{\Ad (\bar n_{+}(x))\big[(\pa_{x}-\sum_{r\in I}{1\over r}H_{r}(x)
\ad p_{r})^{k}((t-a)_{k}^{(1)})\big]
\big\}_{+},
\Ad(n_{+}(x))(p_{n})\big]
\\
& \otimes
\big[
\big(
\Ad(\bar n_{+}(y))(t-a)_{k}^{(2)}
\big)_{+}, \Ad(n_{+}(y))(p_{m})
\big]\de_{xy}.
\end{align*}
\end{corollary}
Here again, the locality of these brackets and the fact that they are expressed
in terms of $A_+$--invariant functions on $N_{+}$ are manifest.
\medskip
\noindent
\begin{remark}
Formulas (\ref{finalPBn-n}) and (\ref{localversion}) give a geometric
interpretation of the nonlocal VPA structures of $\pi_{0}^{+}$ and
$\pi_{0}$ respectively. \hfill $\Box$
\end{remark}
\medskip
We now give a formula for $P(u_{i}\otimes n_{+})$. Due to the presence
of nonlocal quantities $F_{n}$ in $n_{+}$, this bracket contains
nonlocal as well as local terms.
\begin{proposition} \label{prop3.1}
$$
\begin{array}{rcl}
P(u_{i}\otimes n_{+})=n_{+}(y)\pa_{x}\Big(-(n_{+}(x)^{-1}h_{i}
n_{+}(x))_{{\frak
a}}\pa_{y}^{-1}\de_{xy} \\ +\sum_{n\ge 0}(-\ad\
p_{-1})^{-n-1}((n_{+}(x)^{-1}h_{i}n_{+}(x))_{\Imm(\ad p_{-1})})
\pa_{x}^{n}\de_{xy}\Big),
\end{array}
$$
where the indices ${\frak a}$ and $\Imm(\ad p_{-1})$ stand for the
projections on the components of ${\frak g}={\frak a}\oplus\Imm(\ad
p_{-1})$.
\end{proposition}
{\em Proof. \/}
Consider again the variable
$\varphi_{i}=\pa^{-1}u_{i}$. It is easier to compute
$\cG=P(\varphi_{i}\otimes n_{+})$ first. We have
$$
\pa_{y}\cG=-(p_{-1}+\sum_{i}u_{i}(y)h_{i}^{\vee})\cG+\cG p_{-1}
-h_{i}n_{+}(x)\de_{xy}.
$$
Let $\cH=n_{+}(y)^{-1}\cG$, then
$$
\pa_{y}\cH=[\cH,p_{-1}]-n_{+}(x)^{-1}h_{i}n_{+}(x)\de_{xy}.
$$
A solution to this equation is
$$
\begin{array}{rcl}
\cH_{0}=-(n_{+}(x)^{-1}h_{i}n_{+}(x))_{{\frak
a}}\pa_{y}^{-1}\de_{xy} \\ +\sum_{n\ge 0}(-\ad\
p_{-1})^{-n-1}((n_{+}(x)^{-1}h_{i}n_{+}(x))_{\Imm(\ad p_{-1})})
\pa_{x}^{n}\de_{xy}.
\end{array}
$$
On the other hand, the nonlocal part of $\cH$ coincides with that of
$\cH_{0}$ because it is equal to the nonlocal part of
$\sum_{n\in I}{1\over n}p_{n}P(\varphi_{i}\otimes F_{n})$; but
$P(\varphi_{i}\otimes F_{n})=-n\pa_{n}\varphi_{i}(x)+\on{local\ terms}$,
since $P(\varphi_{i}\otimes H_{n})\in -n\pa_{n}\varphi_{i}(x)
+\pa_{y}(\cR_{2}(\pi_{0}))$; so this nonlocal part is expressed as
$-\sum_{n\in I}\pa_{n}\varphi_{i}(x)p_{n}\pa_{y}^{-1}\de_{xy}$. To
establish
the coincidence of the nonlocal parts of $\cH$ and $\cH_{0}$, it remains to
check the identity
\begin{equation}
\pa_{n}\varphi_{i}(x)=\langle n_{+}^{-1}(x)h_{i}n_{+}(x), p_{-n}\rangle
\label{evolphi}
\end{equation}
which amounts to
$$
\pa_{n}u_{i}(x)=-\langle h_{i}, [p_{-1}+\sum_{j}u_{j}(x)h_{j}^{\vee},
n_{+}(x)p_{-n}n_{+}(x)^{-1}]\rangle ;
$$
since by Lemma \ref{addlemma1bis},
$\pa_{n}(\sum_{j}u_{j}(x)h_{j}^{\vee})$ coincides with the Cartan part
of
$$
-[p_{-1}, n_{+}(x)p_{-n}n_{+}(x)^{-1}],
$$
this is verified.
So (\ref{evolphi}) holds, and the nonlocal parts of $\cH$ and $\cH_{0}$
coincide.
So $\cH_{1}=\cH-\cH_{0}$ has no nonlocal terms, and satisfies
$\pa_{y}\cH_{1}=[\cH_{1}, p_{-1}]$; the arguments used to establish
(\ref{partialresult1}) then show that $\cH_{1}=0$.
\hfill $\Box$
\medskip
\noindent
\begin{remark}
As we noted in the introduction, it is possible to derive
(\ref{PBn-n}) using Prop. \ref{prop3.1} and a differential equation in
the first variable, satisfied by $P(n_{+}\otimes n_{+})$. But we find
the present use of $B_{-}$ more natural. \hfill $\Box$
\end{remark}
\subsection{Gelfand-Dickey-Dorfman structure}
According to Sect. 1.5, the VPA $\pi_0 = \CC[N_+/A_+]$ is endowed with a
Gelfand-Dickey-Dorfman structure. We give here a geometric interpretation
of it.
\begin{proposition} \label{propGDD}
Let $V((\la))$, $W((\mu))$ be $\G$-modules, and $v\in V$, $w\in W$
be such that ${\frak a_{+}}v=0$, ${\frak a_{+}}w=0$. We have:
$$
{\cal V}_{n_{+}v\otimes 1}(1\otimes n_{+}w)=
\sum_{k\ge 0}(-\pa)^{k}[(\Ad (n_{+})(t-a)_{k}^{(1)})_{+}n_{+}v]\otimes
(\Ad (n_{+})(t-a)_{k}^{(2)})_{+}n_{+}w
$$
\end{proposition}
{\em Proof.\/}
We apply the definition (\ref{hamvect}) of ${\cal V}_f$ to formula
(\ref{finalPBn-n}).
\hfill $\Box$
\subsection{Compatible nonlocal VPA structures}
Let us show how the nonlocal VPA structure on $\CC[B_{-}\times N_{+}]$,
defined in Thm. 3.1, can be embedded into an infinite family of
compatible nonlocal VPA structures (we call such a family compatible, if
any linear combination of these structures is again a nonlocal VPA
structure).
Let us identify ${\frak g}$ with the subalgebra of the loop algebra
$\bar{\frak g}\otimes\CC((\la))$ of a
finite dimensional semisimple Lie algebra $\bar{\frak g}$, consisting of
the elements $x(\la)$ satisfying
$$
x(\zeta\la)=x(\la)^{\sigma},
$$
$\sigma$ an automorphism of $\bar{\frak g}$ and $\zeta$ a root of
unity of the same order $r$. There is an action of $\CC((\la^{r}))$ on
${\frak g}$. Let us denote by the same letter elements of
$\CC((\la^{r}))$ and the corresponding operators on ${\frak g}$.
For $n\in \ZZ$, set
$$
t_{n}=(\la^{rn}\otimes 1)t, \quad a_{n}=(\la^{rn}\otimes 1)a.
$$
Note that $\ad p_{-1}$ commutes with the action of $\la^{rn}$, so that
$$
t_{n}-a_{n}\in\Imm(\ad p_{-1})^{\otimes 2}.
$$
Let us define an operation $P_n$ on $\CC[B_{-}\times N_{+}]$, by the
formula
\begin{equation}
\begin{array}{rcl}
P_{n}(g\otimes g)=[-\ell^{\otimes 2}(t_{n})(g(x)\otimes g(y))+r^{\otimes
2}(a_{n})(g(x)\otimes g(y))]\pa_{x}^{-1}\delta_{xy} \\ +
\sum_{k\ge 0}r^{\otimes 2}\Big(((\ad p_{-1})^{-k-1}\otimes
1)(t_{n}-a_{n})\Big)(g(x)\otimes g(y))\pa_{x}^{k}\de_{xy}.
\label{compatPBg-g}
\end{array}
\end{equation}
\begin{proposition} \label{prop4.1}
The formulae (\ref{compatPBg-g}) define compatible nonlocal VPA
structures on $\CC[B_{-}\times N_{+}]$ (endowed with the derivation
$r(p_{-1})$).
\end{proposition}
{\em Proof. \/} A combination of the brackets (\ref{compatPBg-g})
corresponds to the same formula, with $t_{n}$ and $a_{n}$ replaced by
$t_{f}$ and $a_{f}$ respectively, with $t_{f}=(f\otimes 1)t$ and
$a_{f}=(f\otimes 1)a$, $f$ a certain element of $\CC[\la^{r},\la^{-r}]$.
The resulting bracket satisfies the conditions of Prop. \ref{prop1.1};
the elements of
$E_{i}$ in the second condition are $\ell^{\otimes 2}(t_{f})-r^{\otimes
2}(t_{f})$ for $i=0$,
and $0$ for $i>0$.
\hfill $\Box$
\medskip
\noindent
\begin{remark} \label{rem4.1}
The variety $B_{-}\times N_{+}$ can be considered as an open subset of
$G$. It has a compatible family of nonlocal VPA structures defined by
(\ref{compatPBg-g}). In the same way as in the proof of
Prop. \ref{prop4.1},
one can show that formula (\ref{compatPBg-g}) defines a compatible
family of nonlocal VPA structures on the whole group $G$.
\hfill $\Box$
\end{remark}
\noindent
\begin{remark} \label{rem4.2}
The extension to $G$ of the nonlocal VPA structure defined in
Thm. \ref{thm3.1} is clearly left $G$-invariant. It follows that left
$G$-translations provide symmetries of the mKdV hierarchy, respecting
the Poisson structure. Infinitesimal left translations by elements of
${\frak n}_{+}$ correspond to the Toda flows; left translations by
elements of ${\frak b}_{-}$ do not change the variables $u_{i}$. A
class of translations that would be interesting to study further consists of
left translations by elements of the affine Weyl group; they should
mix local and nonlocal variables while respecting the Poisson
structure. A. Orlov pointed out to us that they probably coincide with
the Darboux transformations.
\hfill $\Box$
\end{remark}
\section{The proofs of Lemma \ref{lemma3.5} and Lemma \ref{lemma3.6}}
\subsection{Proof of Lemma \ref{lemma3.5}}
Equation (\ref{prePBb-n}) is rewritten as
\begin{equation}
\begin{array}{rcl}
P(b_{-}\otimes n_{+})=-\sum_{\beta}\bar e_{\beta}b_{-}(x)
\otimes (\Ad (b_{-}(y)^{-1})
(\bar e^{\beta}))_{+}n_{+}(y)\pa_{x}^{-1}\de_{xy}
\\ +\sum_{n\in I}b_{-}(x)(\Ad (n_{+}(x))(p_{-n}))_{-}\otimes
n_{+}(y)p_{n}\pa_{x}^{-1}\delta_{xy}
+\on{\ local \ terms},
\end{array}
\label{PBb-n}
\end{equation}
with $\sum \bar e^{\beta}\otimes \bar e_{\beta}=\sum e^{\al}\otimes
e_{\al}+\sum_{i}h_{i}\otimes h^{\vee}_{i}$. The differential equations
satisfied by the left and right hand sides of (\ref{PBb-n}) are
$$
\begin{array}{rcl}
\pa_{y}(\on{lhs\ of\ (\ref{PBb-n})})=(\on{lhs\ of\ (\ref{PBb-n})})
(1\otimes p_{-1})
-1\otimes(p_{-1}+\sum_{i} u_{i}(y)h_{i}^{\vee})
\\
(\on{lhs\ of\ (\ref{PBb-n})})
-\sum_{i}\Ad(b_{-}(y))([p_{-1},h_{i}])b_{-}(x)
\otimes h_{i}^{\vee}n_{+}(y) \pa_{x}^{-1}\de_{xy} \\ +\on{\ local \
terms,}
\end{array}
$$
using (\ref{evoln+}) and (\ref{PBb-phi}),
and
$$
\begin{array}{rcl}
\pa_{y}(\on{rhs\ of\ (\ref{PBb-n})})=(\on{rhs\ of\ (\ref{PBb-n})})
(1\otimes p_{-1})
-1\otimes(p_{-1}+\sum_{i} u_{i}(y)h_{i}^{\vee}) \\ (\on{rhs\ of\
(\ref{PBb-n})})
-\sum_{\beta}\bar
e_{\beta}b_{-}(x)\otimes \{[p_{-1}+ \sum_{i}h_{i}^{\vee} u_{i}(y),
(\Ad (b_{-}^{-1}(y))(\bar
e^{\beta}))_{+}] \\ -([p_{-1}+\sum_{i}h_{i}^{\vee} u_{i}(y), \Ad
(b_{-}^{-1}(y))
(\bar e^{\beta})])_{+}\} \\ n_{+}(y)\pa_{x}^{-1}\de_{xy}+\on{\ local \
terms,}
\end{array}
$$
using (\ref{evoln+}) and (\ref{evolb-}).
Since
\begin{equation}
\sum_{i}\Ad(b_{-}(y))([p_{-1},h_{i}])\otimes h_{i}^{\vee}=\sum_{\beta}
\bar
e_{\beta}\otimes [p_{-1},(\Ad (b_{-}(y)^{-1})(\bar e^{\beta}))_{1}],
\label{aux1}
\end{equation}
these two equations coincide. In (\ref{aux1}), we denote by $x_{1}$ the
part of $x\in{\frak g}$ of principal degree 1. (\ref{aux1}) is proved by
pairing its right and left hand sides with $1\otimes h_{i}$,
$i=1,\ldots,l$. It follows that the difference of the two sides of
(\ref{PBb-n}) satisfies
\begin{equation}
\begin{array}{rcl}
\pa_{y}(\on{ lhs\ of\ (\ref{PBb-n})}-\on{ rhs\ of\ (\ref{PBb-n})})
=(\on{ lhs\ of\ (\ref{PBb-n})}
-\on{ rhs\ of\ (\ref{PBb-n})})(1\otimes p_{-1})
\\
-(1\otimes
(p_{-1}+\sum_{i} u_{i}(y)h_{i}^{\vee}))(\on{ lhs\ of\ (\ref{PBb-n})}
-\on{ rhs\ of\ (\ref{PBb-n})})+\on{local\ terms.}
\label{evidence1}
\end{array}
\end{equation}
On the other hand, we have
$$
\begin{array}{rcl}\pa_{x}(\on{ lhs\ of\ (\ref{PBb-n})})=(\on{ lhs\ of\
(\ref{PBb-n})})((p_{-1}+\sum_{i}
u_{i}(x)h_{i}^{\vee})\otimes 1) \\ -
\sum_{i,k}[b_{-}(x)h_{i}^{\vee}\pa_{k} u_{i}(x)\otimes n_{+}(y)p_{k}]
\pa_{x}^{-1}\de_{xy}
+\on{ \ local\ terms,}
\end{array}
$$
and
$$
\begin{array}{rcl}\pa_{x}(\on{ rhs\ of\ (\ref{PBb-n})})=(\on{ rhs\ of\
(\ref{PBb-n})})((p_{-1}+\sum_{i}
u_{i}(x)h_{i}^{\vee})\otimes 1)
\\
+\sum_{n\in I}b_{-}(x) \{[p_{-1}+\sum_{i}h_{i}^{\vee} u_{i}(x), (\Ad
(n_{+}(x))(p_{-n}))_{-}]
\\
-([p_{-1}+\sum_{i}h_{i}^{\vee} u_{i}(x),\Ad
(n_{+}(x))(p_{-n})])_{-} \\ \otimes n_{+}(y)p_{n}\pa_{x}^{-1}\delta_{xy}
\}
+\on{ \ local\ terms.}
\end{array}
$$
We have the equality
$$
[p_{-1},(\Ad (n_{+}(x))(p_{-n}))_{1}]=\sum_{i}h_{i}^{\vee}\pa_{n}
u_{i}(x),
$$
because of Lemma \ref{addlemma1bis}.
Therefore the right hand sides of the last two formulas coincide up to
local terms, and
\begin{equation}
\begin{array}{rcl}
\pa_{x}(\on{ lhs\ of\ (\ref{PBb-n})}-\on{ rhs\ of\ (\ref{PBb-n})})
=(\on{ lhs\ of\ (\ref{PBb-n})}-\on{ rhs\ of\ (\ref{PBb-n})}) \\
(1\otimes
(p_{-1}+\sum_{i} u_{i}(x)h_{i}^{\vee}))+\on{local\ terms}.
\label{evidence2}
\end{array}
\end{equation}
Let $\cX=(1\otimes n_{+}(y)^{-1})(\on{lhs\ of\ (\ref{PBb-n})}-\on{rhs\
of\ (\ref{PBb-n})})(1\otimes b_{-}(x)^{-1})$, then
$$
\pa_{x}\cX=\on{local\ terms}
$$
by (\ref{evidence2}), and
$$
\pa_{y}\cX=[\cX, 1\otimes p_{-1}]+\on{local\ terms}
$$
by (\ref{evoln+}) and (\ref{evidence1}). The first equation gives
$$
\pa_{x}\cX=\sum_{n\ge 0}\cX_{n}(y)\pa_{x}^{n}\de_{xy},
$$
so
$$
\cX=\cX_{0}(y)\pa_{x}^{-1}\de_{xy}+\on{local\ terms,}
$$
and the second equation gives us
$$
\pa_{y}\cX_{0}(y)=[\cX_{0}(y), 1\otimes p_{-1}].
$$
Let $\xi$ be any element of the dual to ${\frak b}_{-}$, and
$\cX_{\xi}(y)=(\xi\otimes 1)(\cX_{0}(y))$. Then $\cX_{\xi}(y)$ has
values in ${\frak n}_{+}$ and satisfies
\begin{equation}
\pa_{y}\cX_{\xi}(y)=[\cX_{\xi}(y),p_{-1}];
\label{evolX}
\end{equation}
we then follow the proof of \cite{EFr}, lemma 3, to conclude that
$\cX_{\xi}(y)$ is
constant and lies in ${\frak a}_{+}$. Recall how this can be done:
decompose $\cX_{\xi}(y)$ in its homogeneous principal components
$\sum_{i}\cX_{\xi,i}(y)$, and each component along the decomposition
$\Imm(\ad p_{-1})\oplus{\frak a}$, as
$\cX_{\xi,i}^{1}(y)+\cX_{\xi,i}^{2}(y)$; let $i$ be the smallest index
such that $\cX_{\xi,i}^{1}(y)$ is not zero; the equation implies that
$\pa_{y}\cX_{\xi}(y)$ has a nonzero component of degree $i-1$ in
$\Imm\ad p_{-1}$, hence a contradiction. So $\cX_{\xi}(y)$ lies in
${\frak a}$; (\ref{evolX}) then implies that it is constant. We finally
obtain:
$$
\on{ lhs\ of\ (\ref{PBb-n})}-\on{ rhs\ of\ (\ref{PBb-n})}
=\sum_{n\in I}
x_{n}b_{-}(x)\otimes
n_{+}(y)p_{n}\pa_{x}^{-1}\de_{xy}+\on{\ local\ terms,}
$$
with $x_{n}\in{\frak b}_{-}$. But there is only one possibility,
$x_{n}=0$, which is compatible with the following invariance property
of $P$.
Recall that for $\xi\in{\frak b}_{-}$, $\ell(\xi)$ is the derivation
of the algebra $\bar\pi_{0}$ defined by the action of the left
translation by $\xi$, on $B_{-}\times N_{+}$. Since $\ell(\xi)$
commutes with $\pa$, it induces an endomorphism (also denoted by
$\ell(\xi)$) of $\cR_{2}(\bar\pi_{0})$, according to the rules used in
2.1 in the case of $\pa_{n}$. We then have:
\begin{lemma} \label{lemma3.2}
For $a,b\in\bar\pi_{0}$, $\xi\in{\frak b}_{-}$,
\begin{equation}
P(\ell(\xi)a\otimes b)+P(a\otimes \ell(\xi)b)=\ell(\xi)P(a\otimes b).
\label{invariance}
\end{equation}
\end{lemma}
{\em Proof\/}. For the brackets $P(b_{-}\otimes b_{-})$, this follows
from (\ref{PBb-b}) and the invariance of $t$. We also have
$$
\begin{array}{rcl}
P(\ell(\xi)b_{-}\otimes u_{i})=
\pa_{y}([\xi,\Ad(b_{-}(y))(h_{i})]b_{-}(x)\pa_{x}^{-1}\de_{xy}
+\Ad(b_{-}(y))(h_{i})\xi b_{-}(x)\\ \pa_{x}^{-1}\de_{xy})=\ell(\xi)
P(b_{-}\otimes u_{i})
\end{array}
$$
so that
$$
P(H_{n}\otimes\ell(\xi)b_{-})=\ell(\xi)P(H_{n}\otimes b_{-}),
$$
(in this equality, the second $\ell(\xi)$ is $\ell(\xi)\otimes 1$ acting
on $\End(V)((\la))\otimes \cR_{2}(\bar\pi_{0})$),
and $(\ell(\xi)\otimes 1)r_{n}=(1\otimes\ell(\xi))r_{n}$ (equality in
$\End(V)((\la))\otimes \cR_{2}(\bar\pi_{0})$); so that
$$
P(F_{n}\otimes\ell(\xi)b_{-})=\ell(\xi)P(F_{n}\otimes b_{-}).
$$
Finally, the elements of $\pi_{0}^{+}$ are invariant under $\ell(\xi)$, so
the identity is trivially satisfied for their Poisson brackets.
\hfill $\Box$
\medskip
Now (\ref{PBb-n}) follows.
\hfill $\Box$
\subsection{Proof of Lemma \ref{lemma3.6}}
The operation $P_{0}$ is defined by the identities
$$
P_{0}(b_{-}\otimes b_{-})=-\ell^{\otimes 2}(t)(b_{-}(x)\otimes b_{-}(y))
\pa_{x}^{-1}\de_{xy},
$$
\begin{equation}
\begin{array}{rcl}
P_{0}(b_{-}\otimes n_{+})=[-b_{-}(b_{-}^{-1}t^{(1)}b_{-})_{-}(x)
\otimes (b_{-}^{-1}t^{(2)}b_{-})_{+}(y)n_{+}(y)
\\
+b_{-}(n_{+}a^{(1)}n_{+}^{-1})_{-}(x)\otimes
(n_{+}a^{(2)}n_{+}^{-1})_{+}n_{+}(y)] \pa_{x}^{-1}\de_{xy}
\\
+\sum_{n\ge 0}[b_{-}(n_{+}(t-a)_{n}^{(1)}n_{+}^{-1})_{-}(x)\otimes
(n_{+}(t-a)^{(2)}_{n}n_{+}^{-1})_{+}n_{+}(y)]\pa_{x}^{n}\de_{xy},
\label{attemptPBb-n}
\end{array}
\end{equation}
where we denote $((\ad\ p_{-1})^{-n-1}\otimes 1)(t-a)$ by $(t-a)_{n}$,
any element $\al\in{\frak g}\otimes {\frak g}$ is decomposed as
$\sum \al^{(1)}\otimes \al^{(2)}$, and
\begin{equation}
\begin{array}{rcl}
P_{0}(n_{+}\otimes n_{+})=r^{\otimes
2}(a)(n_{+}(x)\otimes n_{+}(y))\pa_{x}^{-1}\delta_{xy} \\ +
\sum_{n\ge 0}r^{\otimes 2}\Big(((\ad p_{-1})^{-n-1}\otimes
1)(t-a)\Big)(n_{+}(x)\otimes n_{+}(y))\pa_{x}^{n}\de_{xy}.
\end{array}
\label{attemptPBn-n}
\end{equation}
By construction, $P_{0}(g\otimes g)$ satisfies the identities
\begin{equation}
\pa_{x}P_{0}(g\otimes g)=(r(p_{-1})\otimes 1)P_{0}(g\otimes g), \quad
\pa_{y}P_{0}(g\otimes g)=(1\otimes r(p_{-1}))P_{0}(g\otimes g).
\label{evolPzero}
\end{equation}
Clearly, $P_{0}(b_{-}\otimes b_{-})$ coincides with $P(b_{-}\otimes
b_{-})$. $\cB=P(b_{-}\otimes n_{+})$ satisfies the equation
$$
\begin{array}{rcl}
\pa_{y}\cB+(1\otimes(p_{-1}+\sum_{i}u_{i}h_{i}^{\vee}))\cB-\cB(1\otimes
p_{-1})=
\sum_{i}
\left( \pa_{y} \Ad (b_{-}(y))(h_{i})b_{-}(x)\pa_{x}^{-1} \right. \\
\otimes
\left. h_{i}^{\vee}n_{+}(y) \right) \de_{xy},
\end{array}
$$
by virtue of (\ref{PBb-phi}) and (\ref{evoln+}). Let us determine an
equation satisfied by $\cB_{0}=P_{0}(b_{-}\otimes n_{+})$. Let
$\cE=P_{0}(b_{-}\otimes b_{-})$. $\cE$ satisfies the equation
$$
\pa_{y}\cE=\cE(1\otimes (p_{-1}+\sum_{i}u_{i}h_{i}^{\vee}))-\sum_{i}\pa_{y}[\Ad
(b_{-}(y))(h_{i})b_{-}(x)\pa_{x}^{-1} \otimes h_{i}^{\vee}b_{-}(y)] \de_{xy}.
$$
Note that due to (\ref{evolPzero}), $P_{0}(b_{-}\otimes g)$ satisfies
$$
\pa_{y}P_{0}(b_{-}\otimes g)=(1\otimes r(p_{-1}))P_{0}(b_{-}\otimes g).
$$
This implies, writing $\cB_{0}$ as $P_{0}(b_{-}\otimes
b_{-}^{-1}g)$ (here we use the left $B_{-}$-action on $B_{-}\times N_{+}$,
defined as the product of the left action of $B_{-}$ on itself and of the
trivial action on $N_{+}$), that $\cB_{0}$ satisfies
$$
\begin{array}{rcl}
\pa_{y}\cB_{0}+(1\otimes(p_{-1}+\sum_{i}u_{i}h_{i}^{\vee}))\cB_{0}
-\cB_{0}(1\otimes
p_{-1})=
\sum_{i} \left(
\pa_{y} \Ad (b_{-}(y))(h_{i})b_{-}(x)\pa_{x}^{-1} \right. \\
\left. \otimes
h_{i}^{\vee}n_{+}(y) \right) \de_{xy}.
\end{array}
$$
Setting $\cB_{1}=(1\otimes n_{+}^{-1}(y))(\cB-\cB_{0})$, we obtain
$$
\pa_{y}\cB_{1}+[1\otimes p_{-1}, \cB_{1}]=0.
$$
The nonlocal parts of $\cB$ and $\cB_{0}$ coincide, so that $\cB_{1}$
contains only local terms; write $\cB_{1}=\sum_{n\ge
0}\cB_{1}^{(n)}(x)\pa_{x}^{n}\de_{xy}$ (each $\cB_{1}^{(n)}$ belongs to
the tensor product of the tangent space $T_{b_{-}(x)}B_{-}$
to $B_{-}$ at $b_{-}(x)$ with
${\frak n}_{+}$), we then get
$$
[1\otimes p_{-1}, \cB_{1}^{(0)}(x)]=0, \quad
\cB_{1}^{(n)}=[1\otimes p_{-1}, \cB_{1}^{(n+1)}(x)]
$$
for $n\ge 0$, so $\cB_{1}^{(0)}(x)$ belongs to $T_{b_{-}(x)}B_{-}\otimes
{\frak
a}$ by the first equation and to $T_{b_{-}(x)}B_{-}\otimes \Imm (\ad\
p_{-1})$ by the second one (specialized to $n=0$), so that it is zero;
repeating
this argument for $\cB_{1}^{(1)}$, we find it to vanish as well, etc. So
$\cB_{1}=0$ and
\begin{equation}
P(b_{-}\otimes n_{+})=P_{0}(b_{-}\otimes n_{+}).
\label{partialresult1}
\end{equation}
Now, $\cB=P(b_{-}\otimes n_{+})$ satisfies the equation
\begin{equation}
\pa_{x}\cB=\cB((p_{-1}+\sum_{i}u_{i}(x)h_{i}^{\vee})\otimes
1)+\sum_{i}b_{-}(x)h_{i}^{\vee}\otimes P(u_{i}\otimes n_{+}).
\label{evolBinx}
\end{equation}
On the other hand, let $\cC=P(n_{+}\otimes n_{+})$ and
$\cC_{0}=P_{0}(n_{+}\otimes n_{+})$. $\cC$ satisfies the equation
$$
\pa_{x}\cC=-((p_{-1}+\sum_{i}u_{i}(x)h_{i}^{\vee})\otimes
1)\cC+\cC(p_{-1}\otimes 1)-\sum_{i}h_{i}^{\vee}n_{+}(x)\otimes
P(u_{i}\otimes n_{+}).
$$
Let us determine an equation satisfied by $\cC_{0}$. Due to
(\ref{evolPzero}), $P_{0}(g\otimes n_{+})$ satisfies
$$
\pa_{x}P_{0}(g\otimes n_{+})=P_{0}(g\otimes n_{+})(p_{-1}\otimes 1),
$$
and writing $P_{0}(n_{+}\otimes n_{+})$ as $P_{0}(b_{-}^{-1}g\otimes
n_{+})$ (using the same left $B_{-}$-action as above) and using
(\ref{evolBinx}), we get
$$
\pa_{x}\cC_{0}=-((p_{-1}+\sum_{i}u_{i}(x)h_{i}^{\vee})\otimes
1)\cC_{0}+\cC_{0}(p_{-1}\otimes 1)-\sum_{i}h_{i}^{\vee}n_{+}(x)\otimes
P(u_{i}\otimes n_{+}).
$$
$\cC$ and $\cC_{0}$ satisfy the same equation, so that
$\cC_{1}=(n_{+}(x)^{-1}\otimes 1)(\cC-\cC_{0})$
(which belongs to the
tensor product of ${\frak n}_{+}$ with the tangent space to $N_{+}$ at
$n_{+}(y)$)
satisfies
$$
\pa_{x}\cC_{1}=[\cC_{1}, p_{-1}\otimes 1].
$$
Since the nonlocal parts of $P_{0}$ and $P$ coincide, $\cC_{1}$ contains
no nonlocal terms. We can use the same arguments as in the case of
$\cB_{1}$, to conclude that $\cC_{1}=0$ and
\begin{equation}
P(n_{+}\otimes n_{+})=P_{0}(n_{+}\otimes n_{+}).
\label{partialresult2}
\end{equation}
Lemma \ref{lemma3.6} now follows from (\ref{partialresult1}),
(\ref{partialresult2}).
\hfill $\Box$
\frenchspacing
\section{Introduction}
\label{sec:introduction}
In many microscale and nanoscale systems, electrolytes play a central role in collective interactions, equilibrium phase behaviors, and kinetics~\cite{SquiresQuakeFluidicsReview2005,
KirbyBook2010,BazantBookChapter2011}. This includes transitions in the stability of colloidal suspensions~\cite{DerjaguinLandauColloids1941,OverbeekColloids1948,Hansen2000}, electrophoretic separation and detection in fluidic devices~\cite{PennathurTransport2004, SquiresQuakeFluidicsReview2005,BazantBookChapter2011,
KirbyBook2010,KirbyZetaPotentialReview2004}, and biomolecular interactions~\cite{BakerPNASElectrostatics2001,McCammonBoLiSCPF2015,McCammon1987}.
Confinement of electrolytes and charged objects between charged walls presents additional effects often resulting in rich phenomena that are particularly important in nanoscale devices~\cite{SquiresQuakeFluidicsReview2005,
PennathurTransport2004}. This owes in part to such features as the thickness of ionic layers becoming comparable to other length-scales in the system\cite{BaldessariDLOverlap2008,
KirbyBook2010,DasDLOverlap2010,
RobbinsMultiscaleElectroOsmosis2016}.
For sufficiently charged multivalent systems additional phenomena can arise as observed in experiments and predicted by theory~\cite{PegadoLikeChargeAttraction2008,
GrierLikeChargeAttraction1997,
NetzSimilarChargedPlates2001,PincusTwoPlates2009}. This includes the formation of condensed ion layers on surfaces, over-charging of walls and particles, and attractions between like-charged objects~\cite{GrierLikeChargeAttraction1997,
PincusChargeFluct2002,SaderAttractionUnresolved1999}.
These effects have formed the basis for understanding phenomena such as DNA condensation~\cite{SafinyaDNACondensation2000, StevensDNACond2001,
PincusCondensationPolyelectrolytes1998,
Kuron2015,BloomfieldDNACondensation1991}, colloidal stability~\cite{NagornyakLikeChargeExp2009,
Hansen2000,GrierLikeChargeAttraction1997},
and attraction of like-charged plates~\cite{PegadoLikeChargeAttraction2008,
NetzSimilarChargedPlates2001,
PincusTwoPlates2009}.
We further explore here phenomena of charged systems in the context of colloidal particles confined within nanochannels. We investigate the behaviors of confined electrolytes and charged particles through coarse-grained molecular-level Brownian Dynamics (BD) simulations and through classical Density Functional Theory (cDFT) calculations, and we make comparisons with predictions from mean-field Poisson-Boltzmann (PB) theory. We examine the interactions between a charged colloidal particle and the nanochannel wall as the electrolyte concentration and particle charge are varied.
We find that in some charge regimes the free energy of the particle, as a function of its position within the channel, develops significant minima at preferred locations near the channel center and near to, but separated from, the channel wall. In some regimes these preferred locations are separated by significant energy barriers. Motivated by nanofluidic devices, our results indicate that colloidal particles could exhibit interesting bimodal behaviors, switching between long dwell times near the channel center and near the channel wall. For instance, this could have implications for experimental protocols and devices such as capillary electrophoresis used in fluidics for separations and detection~\cite{wanDLOverlap1997,KirbyBook2010,
SquiresQuakeFluidicsReview2005,PennathurTransport2004}.
We investigate the origins of the free energy profile by using BD simulations to characterize, at the coarse-grained molecular level, the ion-ion correlations, surface overcharging, and condensed ionic layers that form near the colloid surface and the channel wall. We further make comparisons with results from classical Density Functional Theory (cDFT). We find that the cDFT makes predictions consistent with our molecular-level results, although in the most strongly charged regimes it significantly underestimates the strength of effects such as the free energy well-depth. For the free energy profile of the confined particle, the combined simulation and cDFT results demonstrate the significant roles played by ion-ion correlations and over-charging at both the charged walls and the colloid particle surface. We also show, for the strongly charged regimes considered, that a mean-field theory such as Poisson-Boltzmann theory is not adequate for predicting system behaviors, highlighting the importance of accounting for ion-ion correlations and other discrete effects.
We introduce our BD simulations for the electrolyte and colloidal particle in Section~\ref{sec:rpm_model}. We introduce our cDFT description of the nanochannel system in Section~\ref{sec:cDFT}. We present the results of our calculations, including the counterion and coion densities, the colloidal particle free energy, and the ion-ion correlation functions, in Section~\ref{sec:results}. We discuss our findings and related phenomena observed within nanochannels in Section~\ref{sec:discussion}. Additional information on the computational methods developed and on the simulation protocols is given in Appendices~\ref{sec:detailsDFT}--\ref{sec:corr_analysis}.
\section{Electrostatics of the Nanochannel System}
\subsection{Brownian dynamics simulations}
\label{sec:rpm_model}
We consider colloidal particles confined within a nanochannel having a slit-like geometry. The walls of the channel are viewed as two like-charged parallel plates. We consider electrolytes consisting of both counterions and coions, using a coarse-grained model related to the Restricted Primitive Model (RPM)~\cite{TorrieRPM_MC_1979,ValleauRPMElectrolytesII1980,
ValleauRPM_ElectrolytesI1980}. The discrete ion-ion interactions are taken into account within a continuous dielectric medium. A snapshot of the system is shown in Fig.\ \ref{fig:rpm_model}. After introducing our model for the ions, we discuss some additional details on the electrostatics of channels in Section~\ref{sec:electro_channels}.
We model the finite size of the ions and the excluded volume of the colloidal particle using the Weeks-Chandler-Andersen (WCA) interaction potential~\cite{WeeksChandlerAndersen1971}
\begin{eqnarray}
\label{eqn:phi_wca}
\phi_{\subtxt{wca}}(r)
= \left\{
\begin{array}{ll}
4\epsilon
\left[
\left(
{b}/{r}
\right)^{12}
-
\left(
{b}/{r}
\right)^6
+
\frac{1}{4}
\right], & r \leq r_c \\
0, & r > r_c
\end{array}
\right.
\end{eqnarray}
Here $r$ is the distance between the centers of mass of the two particles. For a particle with steric radius $b$ we have $r_c = 2^{1/6}\cdot b$, which ensures a purely repulsive interaction between particles~\cite{WeeksChandlerAndersen1971}. For the steric particle-wall interactions, we treat the walls as a smooth continuum and use the Lennard-Jones $9$-$3$ potential
\begin{eqnarray}
\label{eqn:phi_lj93}
\phi_{\subtxt{lj93}}(r)
=
\epsilon
\left[
\frac{2}{15}
\left(
{b}/{r}
\right)^{9}
-
\left(
{b}/{r}
\right)^3
\right].
\end{eqnarray}
Here, $r$ denotes the nearest distance between a particle and the wall. Electrostatic interactions between ions and/or the colloidal particle, with charges $q_1$ and $q_2$, are given by the Coulomb interaction
\begin{eqnarray}
\phi_{\subtxt{coul}}(r) = \frac{q_1q_2}{4 \pi \epsilon_0 \epsilon r}
\end{eqnarray}
where $\epsilon$ is the dielectric constant of the background medium and we use SI units. To account for the surface charge density $\sigma$ of the colloidal particle we use Gauss' Law~\cite{GriffithsBookEM1998}, allowing us to use a point charge $Q_0$ at the center-of-mass with $Q_0 = 4\pi R^2\sigma$, where $R$ is the radius of the particle.
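For concreteness, the following minimal sketch evaluates these three interactions in Python. Energies are in units of $k_B{T}$ and lengths in nm; the Coulomb term is rewritten in thermal units through the Bjerrum length $\ell_B \approx 0.71$ nm quoted in Sect.~\ref{sec:rpm_model}, an equivalent form of the SI expression above rather than the literal implementation used in the simulations.
\begin{verbatim}
import numpy as np

lB = 0.71  # Bjerrum length (nm) for water at T = 300 K

def phi_wca(r, b, eps):
    """Purely repulsive WCA pair potential, cutoff r_c = 2**(1/6)*b."""
    rc = 2.0**(1.0/6.0)*b
    return np.where(r <= rc,
                    4.0*eps*((b/r)**12 - (b/r)**6 + 0.25), 0.0)

def phi_lj93(z, b, eps):
    """Lennard-Jones 9-3 particle-wall potential; z is wall distance."""
    return eps*((2.0/15.0)*(b/z)**9 - (b/z)**3)

def beta_phi_coul(r, q1, q2):
    """Coulomb pair energy in units of k_B T, for valences q1, q2."""
    return q1*q2*lB/r
\end{verbatim}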
\begin{figure}[H]
\centering
\includegraphics[width=1.0\columnwidth]{./figData/fig_3D_MD_Model.png}
\caption{Shown is a cut-away view of the electrolyte and colloidal particle corresponding to $\sigma = -6$ and $C_m = 8$ with counterions with $+2$ charge (orange) and coions with $-1$ charge (blue). Strong correlations are exhibited, where counterions and coions form clusters and chains throughout the electrolyte and condensed layers near the walls and colloidal particle surface. For clarity the channel walls are not shown.}
\label{fig:rpm_model}
\end{figure}
To handle the long range Coulomb interactions we use the Particle-Particle Particle-Mesh (PPPM) approach~\cite{Hockney1989,PollockPPPM1996} as implemented in LAMMPS~\cite{PlimptonLAMMPS1995}. For the nanochannel with slit geometry we use a variant of the PPPM method which uses periodic boundary conditions in the xy-directions \cite{Yeh:1999dm}. This method has been extended to allow the simulated system to have a net charge within the slab interior~\cite{Ballenegger:2009ct}, which we utilize in our simulations. Our overall system is electrically neutral, with the electrostatics of channels with charged walls handled using our approach discussed in Section~\ref{sec:electro_channels}.
In some of the simulations, we use a harmonic potential to hold the colloidal particle at a given location, given by
\begin{eqnarray}
\label{eqn:phi_target}
\phi_{\subtxt{target}}\left(\mb{x}\right) = \frac{k}{2} \left|\mb{x} - \mb{x}_0 \right|^2,
\end{eqnarray}
where $\mb{x}_0$ is the target location for the colloidal particle location $\mb{x}$. The total potential energy associated with a configuration of the nanochannel system including the counterions, coions, and colloidal particle is given by
\begin{eqnarray}
\label{equ:totalEnergy}
\Phi[\mb{X}] = \Phi_{\subtxt{coul}}[\mb{X}] + \Phi_{\subtxt{sterics}}[\mb{X}] + \Phi_{\subtxt{target}}\left[\mb{X}\right],
\end{eqnarray}
where we represent the configuration of colloidal particle and ions by the composite vector $\mb{X} = \lbrack \mb{X}_{\subtxt{colloidal-particle}}, \mb{X}_{\subtxt{ions}} \rbrack^T$. To sample equilibrium configurations we use Brownian Dynamics (BD) based on the Langevin equations~\cite{Gardiner1985}
\begin{eqnarray}
m \frac{d \mb{V}}{dt} = -\gamma \mb{V} - \nabla \Phi[\mb{X}] + \mb{F}_{thm},
\end{eqnarray}
where $d\mb{X}/dt = \mb{V}$ and $\left \langle \mb{F}_{thm}(s) \mb{F}_{thm}^T(t) \right \rangle = 2k_B{T} \gamma\delta(t - s)$. For the time integration we use a stochastic Velocity-Verlet method implemented within LAMMPS~\cite{PlimptonLAMMPS1995,AtzbergerLAMMPS2016}. All BD simulations are performed in LAMMPS, with parameter values as given in Table~\ref{table:defaultParams}.
Throughout this paper we use BD to probe only equilibrium properties of the system. The BD simulations were equilibrated from random initial conditions over times long enough for the ions to diffuse at least two times across the diameter of the nanochannel. We then collected statistics on trajectories in which the ions diffused at least five times across the nanochannel diameter.
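As an illustration of the sampling scheme, the following sketch advances a particle with a simple overdamped Euler--Maruyama update. This is a minimal stand-in for illustration only; the production runs use the stochastic Velocity-Verlet integrator of LAMMPS, and \texttt{grad\_Phi} below is a hypothetical placeholder for the gradient of equation~\ref{equ:totalEnergy}.
\begin{verbatim}
import numpy as np

kT = 2.50e6                    # amu nm^2/ns^2 (default parameter table)
gamma = 6.0*np.pi*5.36e5*0.75  # Stokes drag 6*pi*mu*R for the colloid
dt = 1.0e-5                    # ns

def bd_step(x, force, rng):
    """One Euler-Maruyama step of overdamped Brownian dynamics:
    dx = (F/gamma) dt + sqrt(2 kT dt/gamma) xi, with xi ~ N(0, I)."""
    xi = rng.standard_normal(x.shape)
    return x + force(x)/gamma*dt + np.sqrt(2.0*kT*dt/gamma)*xi

# usage: x = bd_step(x, lambda x: -grad_Phi(x), np.random.default_rng(0))
\end{verbatim}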
\begin{table}[H]
\tiny
\centering
\begin{tabular}{|l|l|}
\hline
\rowcolor{LightGrey}
\textbf{Parameter} & \textbf{Value} \\
\hline
$\ell_z$ nanochannel width (z) & 6 $nm$ \\
$\ell_x,\ell_y$ nanochannel length (x,y) & 18 $nm$ \\
$\sigma_w$ wall surface charge & -0.72 $e/nm^2$ \\
$b_w$ wall steric parameter lj$93$ & 0.5 $nm$ \\
$r_c^{[w]}$ wall cut-off parameter lj93 & 0.425 $nm$ \\
$\epsilon_w$ wall energy lj$93$ & 2.27e+7 $amu\hspace{0.07cm}nm^2/ns^2$ \\
$\sigma$ particle surface charge & -3 $e/nm^2$ \\
$R$ particle radius & 0.75 $nm$ \\
$m_0$ particle mass & 6.20e+3 $amu$ \\
${T}$ temperature & 300 $K$ \\
$k_B{T}$ thermal energy & 2.50e+6 $amu\hspace{0.07cm} nm^2/ns^2$\\
$\rho$ solvent mass density & 6.02e+2 $amu/nm^3$\\
$\mu$ solvent viscosity & 5.36e+5 amu/(nm$\cdot$ns)\\
$\epsilon_r$ solvent relative permittivity & 80.1 \\
$b_{-}$ coion radius & 0.116 $nm$ \\
$b_{+}$ counterion radius & 0.116 $nm$ \\
$q_{-}$ coion charge & -1 $e$ \\
$q_{+}$ counterion charge & +2 $e$ \\
$m_{-}$ coion mass + solvation & 2.3e+1 $amu$ \\
$m_{+}$ counterion mass + solvation & 2.3e+1 $amu$ \\
$\bar{c}_{-}$ reference coion concentration & 0.214$M$ \\
$\bar{c}_{+}$ reference counterion concentration & 0.128$M$ \\
$r_c^{[c]}$ coulombic cutoff & 6 $nm$\\
$\Delta{t}$ Langevin timestep & 1.0e-5 $ns$ \\
$\gamma$ Langevin drag & $6 \pi\mu R$ \\
$\tau_e$ Langevin equilibration time & $0.5 ns$ \\
\hline
\end{tabular}
\caption{Parameter values for the nanochannel model. We use these values by default unless specified otherwise.}
\label{table:defaultParams}
\end{table}
\subsubsection{Electrostatics of Channels}
\label{sec:electro_channels}
For channels having a slit geometry consisting of two parallel walls, the electrostatics exhibit a few interesting features. For channels of finite extent with wall edges immersed in a reservoir, the wall surface charges generate the strongest electric fields near the edges in the reservoir. Through cancellations in the Coulombic interactions, the wall charges do not generate significant net electric forces on the ions in the middle region of the channel, away from the reservoir edges. As a result, in the idealized limit of two infinite walls having equal and uniform surface charge, the electric fields generated by the wall charges exactly cancel throughout the channel interior.
This can be seen by considering a single wall with uniform surface charge density $\sigma$, whose contribution to the interaction energy of an ion of charge $q_1$ is
\begin{eqnarray}
\label{equ:intChargedWalls}
\phi_{\subtxt{coul-w}}\left(z\right)
= \int \frac{q_1\sigma(\mb{r}')}{4\pi\epsilon_0\epsilon\, |z\mb{e}_z - \mb{r}' |}\, dx\, dy,
\end{eqnarray}
where $\mb{r}' = x\mb{e}_x + y\mb{e}_y$. The $\mb{e}_i$ denotes the standard basis vector pointing in the $i^{th}$ coordinate direction. For a constant uniform surface charge $\sigma$ this can be integrated to obtain the equivalent potential
\begin{eqnarray}
\phi_{\subtxt{coul-w}}(z) = -\left({q_1 \sigma}/{2\epsilon_0\epsilon}\right) z.
\end{eqnarray}
For two equally charged parallel walls of infinite extent the net Coulombic
potential is independent of $z$. This can be seen from
\begin{eqnarray}
\label{equ:wallZeroField}
\phi(z) &=&
\phi_{\subtxt{coul-w}}\left(z\right)
+ \phi_{\subtxt{coul-w}}\left(L - z\right) \nonumber \\
&=& -\left({q_1 \sigma}/{2\epsilon_0\epsilon}\right) \left(z + L - z\right)
= -\left({q_1 \sigma}/{2\epsilon_0\epsilon}\right) L.
\end{eqnarray}
As a consequence, the net electric field $E = -d\phi/dz$ acting on ions confined between the walls is zero.
It is worth mentioning that such cancellations would not hold in the case of two walls that have a finite extent or non-uniform surface charge. For equal uniform charges, this can be seen by integrating equation~\ref{equ:intChargedWalls} in polar coordinates for two disk-like walls of radius $R$. Our results show that, for uniformly charged walls, as their extent becomes large the electric fields contribute negligibly toward the middle region of the channel away from the reservoirs.
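For completeness, this integral is elementary: in polar coordinates a single uniformly charged disk of radius $R$ gives
\begin{eqnarray}
\nonumber
\phi_{\subtxt{disk}}(z) = \int_{0}^{2\pi}\!\!\int_{0}^{R}
\frac{q_1\sigma\, \rho\, d\rho\, d\theta}{4\pi\epsilon_0\epsilon\sqrt{\rho^{2}+z^{2}}}
= \frac{q_1 \sigma}{2\epsilon_0\epsilon}\left(\sqrt{R^{2}+z^{2}}-z\right),
\end{eqnarray}
so for two such disks a distance $L$ apart the $z$-dependence of
$\phi_{\subtxt{disk}}(z)+\phi_{\subtxt{disk}}(L-z)$ enters only at order
$1/R$ and vanishes as $R\to\infty$, consistent with the cancellation above.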
These results suggest a few interesting mechanisms by which ion concentrations are determined in the middle region of the channel and overall electric neutrality is achieved. The results indicate that the electric fields generated by the walls near the reservoir edges of the channel are primarily responsible for driving ions into the channel or expelling them to achieve electric neutrality. Also, in the middle region of an infinite channel, the lack of net electric force acting on the ions from the walls gives an interesting perspective on the electric double-layers. Rather than conceiving of ions being pulled toward the charged walls, our results indicate that, once the ionic concentrations are set up by the edge effects, the double-layer structures should be viewed as arising from how the walls break symmetry. In particular, since like-charged ions repel one another within the confined region and there are no balancing forces from ion charges on the other side of the walls, the like-charged ion repulsions can be viewed as pushing ions away from one another, from the channel interior towards the walls. This occurs in a manner very similar to mechanisms underlying the generation of osmotic pressures~\cite{AtzbergerOsmosis2007,AtzbergerWuPeskinOsmosisVesicle2015}. It is in this manner that the double layers can arise in the channel middle region without the need for local net electric forces generated by the two walls. From electric neutrality the ion concentrations are determined, and such double-layers can be related to the Poisson-Boltzmann (PB) theory for single and two charged walls.
Our simulations capture such phenomena in the middle region of charged channels. We use periodic boundary conditions to capture behaviors similar to the limit of walls of infinite extent. Since in this limit the walls exert no net electric force on the ions, we handle implicitly the contributions of the wall charge. Our approach is similar to the Ewald summation method of Ballenegger et al \cite{Ballenegger:2009ct}. In this approach the energy of the charged slab system is regularized by placing two charged walls above and below the simulation system, with charge densities that neutralize the system. Thus, we are simulating a system that is overall electrically neutral with two walls of an appropriately chosen equal charge that serve to balance the ions.
For mean-field Poisson-Boltzmann theory (PB), charged walls are often handled by employing Neumann boundary conditions to account for surface charge explicitly~\cite{XingWallsPB2011,Maduar2016,NetzInterfaces2016,Netz2000}. A crucial consideration linking this to our molecular perspective is the condition of electric neutrality. For channels this implies the implicit determination of a surface charge for the walls. For our model, electric neutrality allows us to distinguish different choices for the wall charge which result in an excess or deficit of ionic species in the interior region driven by the edge electric fields. In this manner our molecular model gives overall results that can be directly related to continuum models with explicit Neumann boundary conditions for the wall charge~\cite{Maduar2016,Netz2000}. We discuss how the ionic species concentrations in the channel interior are related to the implicit choice of the wall charge in Section~\ref{sec_model_params}.
\subsubsection{Model parameters}
\label{sec_model_params}
We investigate the structure of the double-layer as the strength of charge of the colloidal particle and as the ion concentrations are varied. We characterize the charge of the negatively charged colloidal particle $Q_\subtxt{particle}$ in terms of its surface charge density $\sigma$, where $Q_\subtxt{particle} = 4\pi R^2 \sigma$. We performed simulations for colloidal particles with surface charge densities of $\sigma = $ -1, -3, and -6 e/nm$^2$; for brevity we will refer to these three cases without units as the systems with $\sigma$ = -1, -3, and -6. We mostly focus on divalent cations with $q_+ = 2e$ and monovalent anions with $q_- = -1e$. We take as a reference concentration for the counterions $\bar{c}_{+} = 0.128 M$ and for the coions $\bar{c}_{-} = 0.214 M$, expressed in molar units. Other ion concentrations are a multiple $C_m$ of these baseline reference concentrations. For example, $C_m = 10$ corresponds to a counterion concentration $c_{+} = C_m\bar{c}_{+} = 1.28 M$ and a coion concentration $c_{-} = C_m\bar{c}_{-} = 2.14 M$. The simulations are performed with a fixed number of ions, with an excess of counterions so that the bulk electrolyte solution is not neutral. The excess counterions (cations) lead to an effective negative charge on the nanochannel walls, given by the condition of overall electric neutrality:
\begin{eqnarray}
\label{eqn:neutral}
q_{-}N_{-} + q_{+}N_{+} + Q_\subtxt{particle} + 2Q_\subtxt{wall} = 0.
\end{eqnarray}
Here $N_{-} = V c_{-}$ and $N_{+} = V c_{+}$ denote the numbers of ions in the unit cell, where $V$ is the channel volume. $Q_\subtxt{wall}$ is the charge on each wall in the unit cell. For a given fixed concentration of coions and counterions, the effective surface charge of the wall is obtained from electric neutrality by solving for $Q_\subtxt{wall}$ in equation~\ref{eqn:neutral}. The wall surface charge density for each system simulated is given in units of e/nm$^2$ in Table~\ref{table:wall_charge}. The wall charge density increases with increasing ion concentration. Additionally, the wall surface charge densities vary slightly depending on the colloidal particle charge, since we have a fixed number of ions in the channel.
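The bookkeeping in equation~\ref{eqn:neutral} is summarized by the following hedged sketch; the ion numbers in the usage line are illustrative placeholders only, and are not the values used to generate Table~\ref{table:wall_charge}.
\begin{verbatim}
import math

def wall_charge(N_minus, N_plus, R, sigma_p, q_minus=-1.0, q_plus=2.0):
    """Per-wall charge Q_wall (units of e) enforcing neutrality:
    q_- N_- + q_+ N_+ + Q_particle + 2 Q_wall = 0."""
    Q_particle = 4.0*math.pi*R**2*sigma_p
    return -0.5*(q_minus*N_minus + q_plus*N_plus + Q_particle)

# hypothetical example: 250 coions, 150 counterions, sigma = -3 colloid
Q_w = wall_charge(N_minus=250, N_plus=150, R=0.75, sigma_p=-3.0)
sigma_w = Q_w/(18.0*18.0)   # per-wall surface charge density, e/nm^2
\end{verbatim}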
\begin{table}[H]
\begin{center}
\begin{tabular}{|lccc|}
\hline
\cellcolor{AtzGrey} & \cellcolor{LightGrey} $\mathbf{\sigma = -1}$ & \cellcolor{LightGrey}$\mathbf{\sigma = -3}$ & \cellcolor{LightGrey} $\mathbf{\sigma = -6}$\\
\cellcolor{LightGrey}
$\mathbf{C_{m} = 1}$ & -0.74 & -0.72 & -0.68 \\
\cellcolor{LightGrey}
$\mathbf{C_{m} = 2}$ & -1.49 & -1.46 & -1.43 \\
\cellcolor{LightGrey}
$\mathbf{C_{m} = 4}$ & -2.98 & -2.96 & -2.93 \\
\cellcolor{LightGrey}
$\mathbf{C_{m} = 6}$ & -4.48 & -4.46 & -4.43 \\
\cellcolor{LightGrey}
$\mathbf{C_{m} = 8}$ & -5.98 & -5.95 & -5.92 \\
\cellcolor{LightGrey}
$\mathbf{C_{m} = 10}$ & -7.47 & -7.45 & -7.42 \\
\hline
\end{tabular}
\caption{Wall Surface Charge Density (units are $e/nm^2$). For the different regimes considered, we give the implicit surface charge density that arises from electric neutrality given by the condition in equation~\ref{eqn:neutral}.
}
\label{table:wall_charge}
\end{center}
\end{table}
In the regimes we consider, the electrostatic interactions vary in strength. We can characterize the strength of the interactions by the electrostatic coupling constant \cite{NetzStrongCouplingTheory2000} given by
\begin{equation}
g \equiv 2 \pi q^3 \ell_B^2 \sigma.
\end{equation}
Here $q=2e$ is the charge of the divalent counterions and $\sigma$ is the charge density of either the colloidal particle or the channel walls. The Bjerrum length $\ell_B$, the distance at which the electrostatic interaction energy is comparable to the thermal energy
$k_B{T}$, is $\ell_B \equiv e^2/(4 \pi \epsilon_0\epsilon k_B T)$. In our systems with divalent cations, the electrostatic coupling constant ranges from $g \approx 17$ for the least charged system, up to $g \approx 188$ for the most strongly charged system. Previous studies of electrolytes near flat surfaces\cite{NetzStrongCouplingTheory2000} have shown that the counterion density profiles agree with the PB theory for $g \approx 1$, that the profiles show clear deviation from PB theory for $g=10$ and $g=100$, and that they show good agreement with the strong-coupling limit for $g=10^4$; see \cite{NetzStrongCouplingTheory2000}. Previous simulations of highly charged spheres explored coupling constants ranging from $g=26$ up to $g = 615$ and found attraction between like-charged spheres \cite{Allahyarov:1998kz,GronbechJensen:1998em,StevensFrischknechtNanoparticle2016}. We therefore expect our simulations to be in the intermediate regime between weak and strong coupling.
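The quoted range of coupling constants follows directly from the wall charge densities of Table~\ref{table:wall_charge}; a quick numerical check, using the rounded value $\ell_B = 0.71$ nm (so the printed values match the quoted range only approximately):
\begin{verbatim}
import math

lB, q = 0.71, 2.0            # Bjerrum length (nm), counterion valence
for sigma in (0.68, 7.42):   # weakest/strongest wall |sigma| considered
    g = 2.0*math.pi*q**3*lB**2*sigma
    print(f"|sigma| = {sigma:4.2f} e/nm^2  ->  g ~ {g:5.0f}")
# prints g ~ 17 and g ~ 188, the range quoted above
\end{verbatim}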
\subsection{Classical Density Functional Theory (cDFT)}
\label{sec:cDFT}
In the classical density functional theory (cDFT) calculations, we use the original form of the RPM, i.e. we model the ions as interacting charged hard spheres with diameters $d_\alpha$ and charges $q_\alpha$, in a background continuum dielectric medium to represent the solvent. We represent the colloidal particle as a larger hard sphere of radius $R$ that has surface charge density $\sigma$. The ions are treated as mobile fluid species, while the colloidal particle has a fixed spatial location. We account for the steric interactions between the ions and the colloidal particle using a hard sphere interaction $V(r) = \infty$ for $r<R$, where $r$ is the distance between the ion and the center of the colloidal particle. In addition, we add a smooth truncated potential based on the Lennard-Jones (LJ) interaction to the surface of the colloidal particle,
\begin{equation}
V^{mLJ}_\alpha(r') = 4\epsilon_m \left[\left(\frac{\sigma_m}{r'}\right)^{12} - \left(\frac{\sigma_m}{r'}\right)^{6} \right],
\end{equation}
where $r'$ is the distance between the ion and the surface of the colloidal particle. We truncate and shift this potential to obtain
\begin{equation}
V^{m}_\alpha(r') = V^{mLJ}_\alpha(r') - V^{mLJ}_\alpha(r'_c), \hspace{0.4cm} r' < r'_c,
\end{equation}
with $V^{m}_\alpha(r')=0$ for $r'>r'_c$, i.e.\ at large distances from the colloidal particle. In our notation, the subscript $\alpha$ refers to the index of the particular ion species, and the superscripts $mLJ$ and $m$ refer to the modified Lennard-Jones potential and its truncated, shifted form. This repulsive potential serves to smooth the surface of the colloidal particle to reduce mesh-size effects in our discretized cDFT. We used $\epsilon_m = 0.5k_BT$ and $\sigma_m= d$ (where $d$ is the ion diameter) for all calculations. The channel boundaries are modeled as hard walls with the interaction potential for the ions
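A minimal sketch of this smoothing potential, in reduced units (lengths in ion diameters $d$, energies in $k_BT$); the cutoff value $r'_c = 2.5\,d$ below is an assumption for illustration, as the text does not fix it:
\begin{verbatim}
def V_m(rp, eps_m=0.5, sigma_m=1.0, rp_c=2.5):
    """Truncated-and-shifted LJ acting outward from the colloid
    surface; rp is the ion-surface distance in units of d.
    rp_c = 2.5 d is an assumed cutoff, not specified in the text."""
    lj = lambda r: 4.0*eps_m*((sigma_m/r)**12 - (sigma_m/r)**6)
    return lj(rp) - lj(rp_c) if rp < rp_c else 0.0
\end{verbatim}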
\begin{equation}
V^w_\alpha(z) = \left\{ \begin{array}{ll}
\infty, & \mbox{ions outside the channel} \\
0, & \mbox{ions inside the channel}.
\end{array} \right.
\label{eq:HS}
\end{equation}
The volume of fluid trapped between the two channel walls is referred to as the ``inside'' region and everything else as ``outside'' the channel. This potential imposes that ions cannot penetrate the walls and must remain within the channel region between the two walls.
We use a formulation of cDFT that follows closely the work of Oleksy and Hansen \cite{Oleksy:2006ed} and is very similar to that of Henderson et al.\cite{Henderson:2011fn}. We formulate the cDFT for an open ensemble, specified by the temperature $T$, the total volume $V$, and the chemical potentials $\mu_{\alpha}$ of all fluid species in the system. We discuss the relation of these parameters to those used in the BD simulations in Section~\ref{subsec:cdft_param}.
The grand free energy of the system is given as a functional of the ion densities $\rho_{\alpha}(\bf r)$:
\begin{eqnarray}
\label{eq:omega1}
\Omega[\rho_\alpha({\bf r})] = F[\rho_\alpha({\bf r})]
- \sum_\alpha \int d{\bf r} \left(\mu_\alpha - V_\alpha({\bf r}) \right) \rho_\alpha({\bf r}).
\end{eqnarray}
For notational convenience, $\Omega[\rho_\alpha({\bf r})]$ is understood to depend on all of the density fields $\{\rho_\alpha\}$ collectively; we use this convention throughout to reduce clutter. Here $F[\rho_\alpha({\bf r})]$ is the intrinsic Helmholtz free energy of the system. $V_\alpha({\bf r}) = V+ V^m + V^w$ denotes the neutral part of the potential that acts on each ion from the walls and the colloidal particle. The equilibrium density profile $\rho^0_\alpha({\bf r})$ minimizes the free energy functional $\Omega[\rho_\alpha({\bf r})]$. This can be expressed in terms of the variational derivative~\cite{Gelfand2000}
\begin{equation}
\left. \frac{\delta \Omega[\rho_\alpha({\bf r})]}{\delta \rho_\alpha({\bf r})} \right|_{\rho_\alpha^0} = 0.
\label{eq:domega}
\end{equation}
At equilibrium the associated grand potential free energy of the system is $\Omega^0 = \Omega[\rho^0_\alpha({\bf r})]$ \cite{Evans:1979jn}. The intrinsic Helmholtz free energy consists of four terms given by
\begin{eqnarray}
\label{eq:Fhelm}
F[\rho_\alpha({\bf r})] & = & F_{id}[\rho_\alpha({\bf r})] + F_{hs}[\rho_\alpha({\bf r})] \\
\nonumber
& + & F_{coul}[\rho_\alpha({\bf r})] + F_{corr}[\rho_\alpha({\bf r})].
\end{eqnarray}
The terms represent respectively the Helmholtz free energies for the ideal gas (id), hard spheres (hs), mean-field Coulombic interactions (coul), and second order charge correlations (corr). In formulating the DFT, approximations are needed to capture each of the listed effects. We give more details in Appendix~\ref{sec:detailsDFT}.
We emphasize the importance of the ion-ion correlation term $F_{corr}$ in the cDFT, which captures higher-order effects of density fluctuations and distinguishes the cDFT results from those of mean-field theories such as Poisson-Boltzmann (PB) theory. As we shall show, these correlations play an especially important role in the ion distributions observed in multivalent systems. Without the correlation term (corr) and the steric term for hard spheres (hs), the free energy functional $F$ reduces to that of the Poisson-Boltzmann theory. By including or excluding the different terms in the free energy $F$ we can investigate different levels of theory and the relative contributions of various effects to the observed ion distributions and colloid-wall interactions. We now briefly discuss each of the terms in equation~\ref{eq:Fhelm}.
The term $F_{id}$ corresponds to the contributions of an ideal gas which for a given density is known exactly and is given in Appendix~\ref{sec:detailsDFT}. For the hard-sphere interactions $F_{hs}$, we use the \textit{White Bear} version of the fundamental measure theory ~\cite{Roth:2002p518}. The mean-field Coulombic interaction $F_{coul}$ is given by integrating the collective electric potential and density of the ionic species, see Appendix~\ref{sec:detailsDFT}. The charge correlation term $F_{corr}$ is based on a functional Taylor expansion of the direct correlation function, which in turn is obtained from the known analytic solution of the mean-spherical approximation (MSA) for mixtures of charged hard spheres given in~\cite{Oleksy:2006ed}. Detailed expressions for each of these free energy terms are given in Appendix~\ref{sec:detailsDFT}.
Minimization of the grand free energy in equation~\ref{eq:omega1} with respect to the density profiles of each ionic species is expressed mathematically as a set of nonlinear partial differential-integral Euler-Lagrange (EL) equations. We express this in terms of residual equations $R_i = 0$ where
\begin{eqnarray}
R_1 & = & \ln \rho_\alpha({\bf r}) + V_\alpha({\bf r}) - \mu_\alpha
+ \int \sum_\gamma \frac{\partial \Phi}{\partial n_\gamma} ({\bf r'})\,
\omega_{\alpha}^{(\gamma)}(\mathbf{r} - \mathbf{r}')\, d {\bf r'} \nonumber \\
& - & \sum_\beta \int d{\bf r'}\, \rho_\beta({\bf r'})\, \Delta c_{\alpha\beta}({\bf r-r'})
+ Z_\alpha \phi({\bf r}) \label{eq:r1} \\
R_2 & = & n_\gamma({\bf r}) - \sum_{\alpha}\int d\mathbf{r}' \,
\rho_{\alpha}(\mathbf{r}')\, \omega_{\alpha}^{(\gamma)}(\mathbf{r} - \mathbf{r}')
\label{eq:r2} \\
R_3 & = & \nabla^2 \phi({\bf r}) + \frac{4\pi \ell_B}{d}
\sum_\alpha q_\alpha \rho_\alpha(\mathbf{r}) \label{eq:r3}
\end{eqnarray}
Here $\phi$ is the electric potential; other terms are defined in Appendix~\ref{sec:detailsDFT}. The residual equations are solved computationally within the spatial domain of the nanochannel. The third residual equation $R_3$ is Poisson's equation for the electrostatic potential $\phi({\bf r})$. The cDFT calculations are performed using the open source package Tramonto, available at \url{https://github.com/Tramonto/Tramonto}. The EL equations are solved in real-space on a Cartesian mesh using inexact Newton iterations for the density fields and a finite element method for the electrostatic potential. Details of these numerical methods and discussions of related applications of Tramonto to charged systems can be found in~\cite{FrischknechtCDFTNumerical2002,
HerouxCDFTNumerical2007,BAM,Frink:2012hn}.
All quantities in the residual equations have been expressed in terms of reduced units with energies in units of $k_BT$ and lengths in units of the ion diameter $d$. $Z_\alpha$ is the valence of species $\alpha$. The dimensionless quantity appearing in $R_3$ is sometimes called the plasma parameter or the reduced temperature, $T^* = d/\ell_B$.
\subsubsection{Parameterization}
\label{subsec:cdft_param}
We parameterized our cDFT calculations to yield results in comparable physical regimes as the BD simulations. This was done by taking the temperature and dielectric constant so that $\ell_B =$ 7.1 {\AA} as in the simulations, using the same surface charge density on the colloidal particle, matching the ion diameters $d = 2b =$ 0.232 nm, and using the radius $R = $ 0.75 nm for the colloidal particle. We used a channel with total width $\ell_z$ = 6 nm as in the simulations. The channel walls extend into the channel to the same distance as in the simulations, so that we match the hard wall condition in the DFT with the Lennard-Jones 9-3 repulsive walls in the simulations.
To reduce computational costs in the cDFT calculations, we placed the colloidal particle with its center on the z-axis, so that the symmetry of the system allows for reflecting boundary conditions to be used in the x- and y-directions and thus only 1/4 of the particle needs to be directly included in the calculations. For this purpose, the size of the computational domain in the x and y directions was $\ell_x = \ell_y = $ 4.6 nm, for an effective channel length of 9.28 nm (taking into account the reflecting boundary through the center of the particle). We used a mesh size of 0.058 nm in all the 3D calculations (i.e. a mesh size of 0.25$d$ in reduced units, where $d$ = 0.232 nm is the diameter of the ions).
The BD simulations were performed in the canonical ensemble at constant $N_\alpha$, $V$, and $T$. For cDFT it is more natural to work in the grand canonical ensemble at constant $\mu_\alpha$, $V$, and $T$. To make a correspondence between these two sets of calculations, we set the chemical potentials in the cDFT so that the average ion densities match the BD simulations at the middle of the channel where nearly bulk conditions prevail. In the middle of the channel, the electrolyte solution is neutral, with $c_- = 2c_+$. We set the surface charge density of the channel walls in the cDFT equal to the effective surface charge densities given in Table \ref{table:wall_charge}.
We solve equations (\ref{eq:r1})-(\ref{eq:r3}) in the nanochannel geometry with Neumann boundary conditions on $\phi({\bf r})$ at the nanochannel walls and the colloidal particle, i.e.\ we set the charge density of these surfaces. We employ Dirichlet boundary conditions elsewhere, with a reflecting boundary through the colloidal particle as described above.
To obtain the free energy associated with the particle at a particular position within the channel, we performed a cDFT calculation at each particle position and used the grand free energy of the resulting density. We computed density profiles of ions around the particle both in the case with the particle in the center of the channel and in the case with the particle in the bulk fluid with no channel present. The density profiles were found to be the same in both cases. We also found that the density profile near the channel wall, at a location in the channel far from the particle, was independent of the presence or absence of the colloidal particle. This allowed a significant reduction in computational cost, since the wall density profiles could be obtained from 1D cDFT calculations. In our 1D calculations we used a finer mesh size of 0.0232 nm for better resolution in the reported results.
\subsection{Poisson-Boltzmann (PB): Mean-Field Theory}
\label{subsec:PB}
In the limit that the ions are treated as point particles and do not have any charge correlation contribution to their free energy, the cDFT reduces to the Poisson-Boltzmann (PB) equation. The PB limit corresponds to the Helmholtz free energy functional with only the ideal gas and mean-field Coulombic contributions given by
\begin{eqnarray}
\label{equ:pb_free_energy}
\beta F[\rho_\alpha({\bf r})] & = & \beta F_{id}[\rho_\alpha({\bf r})] + \beta F_{coul}[\rho_\alpha({\bf r})] \nonumber \\
& = & \sum_\alpha \int d{\bf r} \rho_\alpha({\bf r}) \left(\ln \rho_\alpha({\bf r}) - 1\right) \nonumber \\
& + & \sum_\alpha \int d{\bf r} q_\alpha \rho_\alpha({\bf r}) \phi({\bf r}),
\end{eqnarray}
where $\beta = 1/kT$. Minimization of the grand free energy in equation~\ref{eq:omega1} using the free energy $F$ in equation~\ref{equ:pb_free_energy} gives
\begin{eqnarray}
\frac{\delta \Omega}{\delta \rho_\alpha} = 0 = \ln \rho_\alpha({\bf r}) + q_\alpha \phi({\bf r}) - \beta \tilde{\mu}_\alpha.
\end{eqnarray}
Here $\tilde{\mu}_\alpha = \mu_\alpha - V_\alpha(\mb{r})$ is the spatially dependent chemical potential including the contributions of the ion interactions with the channel wall and colloidal particle in equation~\ref{eq:omega1}. Solving for the density gives
\begin{equation}
\label{equ:pb_density}
\rho_\alpha({\bf r}) = \exp [\beta \tilde{\mu}_\alpha - q_\alpha \phi({\bf r})].
\end{equation}
In the case that the electric potential vanishes in the bulk we have $\rho_\alpha^b = \exp[\beta \tilde{\mu}_\alpha]$. However, in the nanochannel system the term $\rho_\alpha^b$ should be interpreted with some care. Since the steric interaction potential depends on the ion location, we technically have $\rho^b_\alpha(\mb{r}) = \exp[\beta \tilde{\mu}_\alpha(\mb{r})]$, which is a known function of position. In the limit of hard wall interactions that we use here, however, the PB theory can be further simplified by using boundary conditions to represent the walls and colloidal particle. This eliminates the explicit dependence of
$\rho^b_\alpha(\mb{r})$ on position. The remaining part of the chemical potential $\mu_\alpha$ is constant and we simply have $\rho_\alpha^b = \exp[\beta \mu_\alpha]$, where $\rho_\alpha^b$ are the reference densities (ion densities in a reservoir in equilibrium with the nanochannel system; these are nearly identical to the ion densities in the middle of the channel).
The electric potential satisfies Poisson's equation $\nabla^2 \phi = -(4\pi \ell_B/d) \sum_\alpha q_\alpha \rho_\alpha$. Combining this with the densities found in equation~\ref{equ:pb_density} gives the non-linear Poisson-Boltzmann equation
\begin{equation}
\nabla^2 \phi({\bf r}) = -\frac{4\pi \ell_B}{d} \sum_\alpha q_\alpha \rho^b_\alpha \exp[- q_\alpha \phi({\bf r})].
\end{equation}
Here $d$ is a reference length in the system which for convenience we take to correspond to the ion size but other choices are also possible.
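To make the numerical task concrete, the following minimal Python sketch solves this one-dimensional nonlinear PB equation for a half-channel, with a Neumann (fixed surface charge) condition at the wall and the potential gauged to zero at the channel midplane. All parameter values in the sketch are illustrative assumptions rather than our production values, and we work in physical units (lengths in nm, densities in nm$^{-3}$, $\phi$ in units of $kT/e$) so that the prefactor is simply $4\pi\ell_B$.
\begin{verbatim}
import numpy as np

# Illustrative parameters only (not the production values of this study).
lB    = 0.7                     # Bjerrum length of water (nm)
sigma = -1.0                    # wall surface charge (e/nm^2)
Lh    = 3.0                     # half channel width (nm); gauge phi(Lh)=0
q     = np.array([+2.0, -1.0])  # divalent counterions, monovalent coions
rho_b = np.array([0.05, 0.10])  # electroneutral reservoir densities

N = 301
x = np.linspace(0.0, Lh, N)
h = x[1] - x[0]
phi = np.zeros(N)

def residual_jacobian(phi):
    # F(phi) = phi'' + 4 pi lB sum_a q_a rho_b[a] exp(-q_a phi) = 0
    boltz = rho_b[:, None] * np.exp(-np.outer(q, phi))
    rho_q = (q[:, None] * boltz).sum(axis=0)      # charge density
    drho  = -(q[:, None]**2 * boltz).sum(axis=0)  # d(rho_q)/d(phi)
    F = np.zeros(N)
    J = np.zeros((N, N))
    F[1:-1] = (phi[2:] - 2*phi[1:-1] + phi[:-2])/h**2 \
              + 4*np.pi*lB*rho_q[1:-1]
    idx = np.arange(1, N - 1)
    J[idx, idx-1] = J[idx, idx+1] = 1.0/h**2
    J[idx, idx] = -2.0/h**2 + 4*np.pi*lB*drho[idx]
    # Gauss's law at the wall: dphi/dx(0) = -4 pi lB sigma (Neumann)
    F[0] = (phi[1] - phi[0])/h + 4*np.pi*lB*sigma
    J[0, 0], J[0, 1] = -1.0/h, 1.0/h
    F[-1] = phi[-1]       # Dirichlet gauge at the channel midplane
    J[-1, -1] = 1.0
    return F, J

for it in range(200):     # damped Newton iteration
    F, J = residual_jacobian(phi)
    dphi = np.linalg.solve(J, -F)
    phi += np.clip(dphi, -1.0, 1.0)   # crude step limiting
    if np.max(np.abs(dphi)) < 1e-10:
        break

rho = rho_b[:, None] * np.exp(-np.outer(q, phi))  # ion profiles
\end{verbatim}
A production solver would of course treat the full 3D geometry and couple in the steric and correlation terms discussed above; the damped Newton iteration here only illustrates the structure of the mean-field problem.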
\section{Results}
\label{sec:results}
We first discuss results of the BD simulations, followed by comparisons with cDFT and PB theory. All figures show results from the BD simulations unless explicitly noted otherwise.
\subsection{Ionic Double-Layer Structure: BD Simulations}
\label{sec:ion_dl_struct}
We show in Figure~\ref{fig:avgDensity2D} typical distributions for the counterions and coions as the colloidal particle position is varied in the case of $\sigma = -6$ and $C_m = 8$. In this regime strong layering occurs for the counterions near the walls and near the colloidal particle surface. Also, a secondary layer of coions occurs offset from the walls and the colloidal particle surface adjacent to the counterion layer. This is especially visible for the coions shown in the right panel of Figure~\ref{fig:avgDensity2D}.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_2D_counterions2.png}\caption{The average concentration of counterions (left) and coions (right) as the colloidal particle position is varied within the nanochannel, at $X_0^{(3)}$ = 3.0 nm, 4.6 nm, and 4.85 nm (top to bottom), for $\sigma = -6$ and $C_m = 8$.}
\label{fig:avgDensity2D}
\end{figure}
We show the ion concentrations near the wall for $\sigma = -6$ and varying $C_m$ in Figure~\ref{fig:wall_layers}. The other cases, $\sigma = -1$ and $\sigma = -3$, show ion concentrations that are indistinguishable from the $\sigma = -6$ case after rescaling the concentration. For ions near the wall there are two length scales associated with the ion layers. The first is the location of the ion layer closest to the wall, which occurs at the minimum of the Lennard-Jones potential of equation~\ref{eqn:phi_lj93}, at $\ell_* = \left( 18/45\right)^{1/6}b_w = 0.43$ nm. From the steric interactions the next closest layer can form only around $\ell_2 = \ell_* + b_{+} + b_{-}$. For the parameters in Table~\ref{table:defaultParams} we have $\ell_2 = 0.66$ nm. Both of these length scales manifest in the structure of the ion layers: the double-layer essentially forms according to the packing distance imposed by the ion and wall sterics. This becomes especially pronounced as the concentration increases, as seen in Figure~\ref{fig:wall_layers}.
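As a quick numerical check of these length scales (a back-of-the-envelope sketch; the values $b_w = 0.5$ nm and $b_{\pm} = 0.116$ nm are assumed here, consistent with the quoted layer positions and the 0.232 nm ion diameter mentioned later, and should be checked against Table~\ref{table:defaultParams}):
\begin{verbatim}
# Steric length scales for the wall ion layers; parameter values assumed.
b_w = 0.5                 # wall Lennard-Jones length (nm), assumed
b_plus = b_minus = 0.116  # ion radii (nm), assumed
l_star = (18.0/45.0)**(1.0/6.0) * b_w  # LJ 9-3 minimum: first layer
l_2 = l_star + b_plus + b_minus        # packing offset: second layer
print(round(l_star, 2), round(l_2, 2)) # 0.43 and 0.66, matching the text
\end{verbatim}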
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_compare_wall_DoubleLayers_RPM_only_2column_3.png}\caption{Ion concentrations near the channel walls as $C_m$ is varied, for $\sigma = -6$. }
\label{fig:wall_layers}
\end{figure}
Other interesting features arise in the ion layers near the wall as the ion concentration increases. The ion layers become narrower and more dense. For small concentrations there is significant overlap between the counterion and coion layers, with significant mixing of ions especially within the secondary coion layer. As the concentration increases the layers become more distinct. Interestingly, depletion of counterions occurs within the secondary layer relative to the counterion concentration in the bulk. This is especially pronounced once $C_m > 4$, as shown in the inset in Figure~\ref{fig:wall_layers}. For $C_m < 4$ the concentration of the counterions appears to decay monotonically to the bulk counterion concentration.
In the regimes we consider, the ion double-layer structure in the nanochannel contrasts with many theories developed for weakly charged systems, which posit a Stern layer and Helmholtz plane demarcating a transition from relatively immobile ions to a gaseous mobile phase of ions~\cite{KirbyBook2010,BazantBookChapter2011}. From that perspective, for our system at high ion concentrations this transition effectively occurs on the length scale of individual ions. Near the wall the surface counterion and coion positions are strongly correlated, as shown in the simulation snapshot in Figure \ref{fig:wall_ion_3D}. Many of the ions form pairs with opposing ions, or small clusters or chains. The wall surface is covered in a condensed layer of counterions along with a secondary layer of coions that forms as part of clusters near individual counterions, see Figure~\ref{fig:wall_ion_3D}. This indicates some of the challenges involved in developing theory for such highly charged and concentrated regimes, where behaviors may depend on individual ion-ion interactions and on charge clusters containing only a few ions.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_nearWall_DL_3D_Model_2.png}\caption{Ion configurations near the wall, for $\sigma = -6$ and $C_m = 8$. }
\label{fig:wall_ion_3D}
\end{figure}
Next we show the density of counterions and coions near the colloidal particle surface for the three surface charges $\sigma = -1$, $\sigma = -3$, and $\sigma = -6$ in Figures~\ref{fig:particle_layers_s_n1}, \ref{fig:particle_layers_s_n3}, and \ref{fig:particle_layers_s_n6}. The concentrations are measured at distances relative to the colloidal particle surface. The relevant steric length scale for the position of the counterion layer in this case is $\ell_{**} = 2^{1/6}(R+b) - R = 0.22$ nm. The coion layer forms at a distance corresponding to $\ell_{2*} = \ell_{**} + 2b_{\pm} = 0.45$ nm. Again the layer locations are primarily determined by the steric packing of the ions.
For a relatively weak particle charge density of $\sigma = -1$, the counterions form a tight layer near the colloidal particle surface with significant mixing of coions into this primary layer. After this layer the coions exhibit concentrations that rapidly approach a level comparable to the bulk, see Figure~\ref{fig:particle_layers_s_n1}. For $\sigma = -3$ the counterions also form a tight layer near the colloidal particle surface but with relatively little mixing of coions into this primary layer, see Figure~\ref{fig:particle_layers_s_n3}. The coions show only a weak secondary peak. For the highest surface charge density of $\sigma = -6$, a secondary layer of coions forms. For the largest concentrations some depletion of the counterions is exhibited in the secondary layer relative to the bulk. This is less pronounced than in the case of the walls due to the high curvature of the particle, but can be seen readily in the case with $\sigma = -6$ and $C_m = 10$, as highlighted in the inset in Figure~\ref{fig:particle_layers_s_n6}.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_compare_macroion_DoubleLayers_RPM_only_sigma1.png}\caption{Colloidal particle double-layer $(\sigma = -1.0)$. }
\label{fig:particle_layers_s_n1}
\end{figure}
For the smaller concentrations there is significant overlap of the counterion layer with the coion layer, with significant mixing in the secondary layer. From examining configurations of the ions around the colloidal particle we find this arises from strong correlations between the counterions and coions resulting in the formation of transient charge clusters, as shown in Figure~\ref{fig:particle_ion_3D}. As the colloidal particle charge increases, the layer of counterions near the particle adheres more strongly and the clusters are pushed increasingly toward the secondary layer. For the case $\sigma = -6$ this is especially pronounced with the double-layer providing excess charge relative to what would be required to achieve local electric neutrality. This over-charging phenomenon can be seen in Figure~\ref{fig:particle_overcharge}.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_compare_macroion_DoubleLayers_RPM_only_sigma3.png}\caption{Colloidal particle double-layer $(\sigma = -3.0)$.}
\label{fig:particle_layers_s_n3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_compare_macroion_DoubleLayers_RPM_only_sigma6.png}\caption{Colloidal particle double-layer $(\sigma = -6.0)$.}
\label{fig:particle_layers_s_n6}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_macroion_DL_3D_Model.png}\caption{Ion configurations near the colloidal particle in bulk, for $\sigma = -6$ and $C_m = 8$, showing ion pairs and clusters.}
\label{fig:particle_ion_3D}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_OverCharging_Sigma6.png}\caption{The total collective amount of charge $Q(r)$ contained within the spherical volume of radius $r$ around the colloidal particle, for $\sigma = -6$. Near the particle surface the double-layer provides excess charge (over-charging) relative to what is needed to counter the colloidal particle charge. $Q_0$ is the colloidal particle charge.}
\label{fig:particle_overcharge}
\end{figure}
\subsection{Free Energy of Colloidal Particle Location: BD Simulations}
We next consider the free energy $E(d)$ of the system as a function of the colloidal particle position $d$, see Figure~\ref{fig:free_energy_all}. The wall and the colloidal particle are both negatively charged, and the free energy is repulsive when the particle is sufficiently close to the wall. As the concentration of the counterions and coions becomes sufficiently large, attraction occurs between the like-charged colloidal particle and wall. The free energy minimum occurs at a distance comparable to the interaction length scale of the first layers of ions of the wall and the colloidal particle surface. The sum of the length scale for the first counterion layer of the wall, $\ell_* = 0.43$ nm, and the length scale of the counterion layer of the colloidal particle, $\ell_{**} = 0.22$ nm, is $\ell = \ell_* + \ell_{**} = 0.65$ nm, corresponding to $\bar{d}/\frac{1}{2}L \sim 0.22$, the approximate location of the free energy minima in Figure~\ref{fig:free_energy_all} (see the explicit estimate below). The free energy minimum can become significant compared to $k_B T$ at sufficiently large $C_m$. We discuss this further in Section~\ref{sec:discussion}.
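Explicitly, assuming a channel half-width $L_h = \frac{1}{2}L = 3.0$ nm (an assumption consistent with the channel midpoint at $X_0^{(3)} = 3.0$ nm quoted above), the estimate reads
\begin{equation}
\frac{\bar{d}}{L_h} \approx \frac{\ell_* + \ell_{**}}{L_h} = \frac{0.43\,\mathrm{nm} + 0.22\,\mathrm{nm}}{3.0\,\mathrm{nm}} \approx 0.22 \,.
\end{equation}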
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=1.0\columnwidth]{./figData/fig_compare_FreeEnergy_RPM_only_all.png}
\caption{Free energy profile of the colloid-wall distance from BD simulations. We show the free energy for $\sigma$ = -1.0, -3.0, and -6.0, as a function of the distance $d$ between the particle and the wall. The results are normalized by the thermal energy $k_B{T}$ and the half-width $L_h= \frac{1}{2}L$ of the nanochannel. }
\label{fig:free_energy_all}
\end{figure}
}
The free energy profile has an interesting non-monotonic dependence on the colloidal particle charge and electrolyte ion concentrations. The depth of the free energy well that forms near the wall does not vary monotonically as the ionic concentration increases. Most clearly, for $\sigma=-6$ the magnitude of the free energy well depth is larger for $C_m=6$ than for $C_m=8$, but then increases significantly for $C_m=10$. There is also a significant free energy barrier, as large as $\sim 2k_B{T}$, that can arise separating the particle from the free energy local minimum near the wall. Even more interestingly, the largest energy barriers appear to occur for the intermediate ionic concentrations considered; see for instance the cases with $\sigma = -3,-6$ and $C_m = 8$. The free energy barrier appears to arise from the condensed ion layers that form on the colloidal particle surface and wall surface, which must coordinate and rearrange as the particle approaches the wall, see Figure~\ref{fig:ions_particle_near_wall}.
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{./figData/fig_particle_wall_ions_all.png}
\caption{Top-down view of colloidal particle and ion distribution, showing typical distributions of ions nearby the colloidal particle at different locations within the nanochannel. We show the locations corresponding to (i) the middle of the channel at $X_0^{(3)} = 3.0$ nm, (ii) the maximum attraction to the wall at $X_0^{(3)} = 4.6$ nm, and (iii) near-contact with the wall, having large repulsion, at $X_0^{(3)} = 4.85$ nm.}
\label{fig:ions_particle_near_wall}
\end{figure}
}
When the particle is at the free energy minimum, the counterions in the condensed layer typically form transient ring-like structures near the surface of the colloidal particle as shown in Figure~\ref{fig:ions_particle_near_wall}. These counterions appear to serve double-duty in the condensed layer by screening both the colloidal particle charge and the effective wall charge. This double-duty appears to be the source of the resulting free energy gain. When the colloidal particle is positioned at an even closer distance to the wall it penetrates into the condensed counterion layer. This excludes counterions, which produces a significant pressure on the colloidal particle surface and a strong free energy penalty. It is important to remark that the effective electric field from the walls cancels, so that all interactions beyond the steric distance are mediated by the ions.
\subsection{Ion-Ion Correlations: BD Simulations}
To further understand the system, we examine the ion correlations in the condensed wall layer versus in the center of the channel. The counterions and coions exhibit strong self-correlations and cross-correlations. The structures of these correlations depend significantly on whether an ion is near the channel wall or near the channel center. As a matter of convention we refer to the ions near the channel center as being in the bulk. We characterize the correlations by calculating a radial distribution function (RDF) $g(r)$ for ions within a permissible sampling region, which we refer to as in the bulk or as near the wall (see Appendix~\ref{sec:corr_analysis} for details). The RDFs $g(r)$ are normalized by the reference number concentration given by taking the count of all counterions or coions and dividing by the channel volume. Throughout our simulations reference values are determined from the channel volume $V = 1944$ nm$^3$ and from the reference number concentrations $\hat{g}_- = 250/1944 \times C_m$ nm$^{-3}$ and $\hat{g}_+ = 150/1944 \times C_m$ nm$^{-3}$. We remark that since the density of ions can be large near the walls, the $g(r)$ can exhibit long-range normalized bulk values that are significantly less than $1.0$ and normalized wall values that are in excess of $1.0$.
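As an illustration of this normalization convention, a minimal Python sketch of the $g(r)$ estimator might read as follows; the coordinate arrays are hypothetical inputs, and the truncation of the spherical shells by the walls (handled by the sampling regions of Appendix~\ref{sec:corr_analysis}) is ignored for simplicity.
\begin{verbatim}
import numpy as np

def rdf(pos_a, pos_b, g_hat_b, r_max=2.0, nbins=100):
    """g(r) between species a and b, normalized by the reference
    concentration g_hat_b; pos_a, pos_b are (n, 3) arrays in nm.
    For a like-species g(r), self pairs should be excluded."""
    edges = np.linspace(0.0, r_max, nbins + 1)
    # all pair distances between the two species
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    counts, _ = np.histogram(d.ravel(), bins=edges)
    shells = 4.0/3.0*np.pi*(edges[1:]**3 - edges[:-1]**3)  # shell volumes
    g = counts / (len(pos_a) * g_hat_b * shells)  # ideal-gas reference
    return 0.5*(edges[1:] + edges[:-1]), g

# reference concentrations as quoted above, e.g. for C_m = 8:
C_m = 8
g_hat_minus = 250.0/1944.0 * C_m   # coion reference (1/nm^3)
g_hat_plus  = 150.0/1944.0 * C_m   # counterion reference (1/nm^3)
\end{verbatim}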
The RDFs in the bulk are shown in Figure \ref{fig:ion_ion_correlations_bulk}. In the bulk, the counterion-counterion $g(r)$ shows a correlation hole, with the counterions not likely to be close together. The counterion-coion interactions show strong correlations that indicate a counterion has a cluster of coions in its proximity at a distance roughly twice the steric distance. The coion-coion $g(r)$'s exhibit a small feature around $r = 0.5$ nm which appears to be related to ionic clusters that form with multiple coions associated to a common counterion. Since we have divalent counterions, it makes sense that there should roughly be two coions associated with each counterion. These results indicate that on average the bulk electrolyte consists of triples of ions with one counterion and two coions, but not larger ion clusters.
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_ion_ion_correlations_bulk_all.png}
\caption{Radial distribution functions $g(r)$ for ion-ion correlations in the bulk from the BD simulations.}
\label{fig:ion_ion_correlations_bulk}
\end{figure}
}
Near the wall, the RDF $g(r)$ exhibits features indicating much stronger correlations than in the bulk. While the counterion-coion correlations are similar to those in the bulk, the counterion-counterion $g(r)$ has a significant peak at small $r$. This is from the large density associated with the condensed counterion layer near the wall. As the concentration increases there is a transition around $C_m \geq 4$ from a correlated gas-like state to a state with significant correlations that are more liquid-like~\cite{bookChandler1987}. The peak that develops moves closer toward the steric length scale of the ions, with peaks around $0.5$ nm. The coion-coion correlations near the wall exhibit a peak for all of the regimes considered. From examining simulation trajectories we find this arises from the strong correlations of the coions with the counterions and from bulk coions that transiently move to penetrate the strongly positively-charged condensed layer. The coion-coion peak occurs, independent of concentration, around the same length scale of $0.5$ nm as the counterion-counterion peaks at large concentration. These results show that there are some significant differences in ion-ion correlations near the wall relative to the bulk.
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/fig_ion_ion_correlations_wall_all.png}
\caption{Radial distribution functions $g(r)$ for ion-ion correlations near the wall from the BD simulations.}
\label{fig:ion_ion_correlations_wall}
\end{figure}
}
\subsection{Results from Classical Density Functional Theory (cDFT) and Poisson-Boltzmann (PB) Theory}
The classical density functional theory (cDFT) and Poisson-Boltzmann (PB) theory provide other approaches for investigating phenomena in electrolytes and charged systems that are expected to be more computationally efficient than BD simulations. However, in cDFT and PB further approximations are incurred in modeling the underlying physics of the charged system. We expect that cDFT could provide a decent basis for describing the nanochannel system given the inclusion of terms accounting for charge correlations and ion sterics. The steric and correlation effects can be seen in the ionic layering and clustered interactions in the simulation results particularly in Figures~\ref{fig:rpm_model} and~\ref{fig:particle_ion_3D}. To further emphasize the importance of these effects, we include in our comparisons the mean-field Poisson-Boltzmann (PB) theory, which we do not expect to perform very well in the strongly charged regime. These results further demonstrate the importance of ion correlation effects and sterics to obtain correct phenomenology even at a qualitative level. As we shall discuss, our results further highlight the need for using descriptions beyond the mean-field theory to obtain reliable results in strongly charged regimes for the nanochannel system.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/compare_counterion_Profiles_DFT_all_atz.png}\caption{Comparison of the counterion densities for the cDFT (dashed curves) and the BD simulations (solid curves) as a function of distance $r$ from the channel wall, for wall charge densities from the $\sigma=-6$ column of Table \ref{table:wall_charge}.}
\label{fig:wallDL_DFT1}
\end{figure}
We compare the ion densities near the channel walls as calculated from cDFT with the simulation density profiles in Figures \ref{fig:wallDL_DFT1} and \ref{fig:wallDL_DFT2}. We find that cDFT predicts qualitatively similar trends as the simulations but with some significant quantitative differences. At smaller values of $C_{m}$ the profiles exhibit monotonic behavior. As observed in the BD simulation results, at larger values of $C_{m}$ the cDFT counterion densities exhibit a distinct peak (condensed layer) followed by a depleted region before attaining the bulk counterion concentration, see Figure~\ref{fig:wallDL_DFT1}. The cDFT coion distributions exhibit a similar trend as in the BD results with a distinct peak occurring at the location of the depleted counterion region before attaining the bulk concentration, see Figure~\ref{fig:wallDL_DFT2}. The depletion after the first layer of counterions is not seen for ion densities calculated using the Poisson-Boltzmann equation, nor for cDFT calculations with only mean-field electrostatics (i.e., without the correlation term $F_{corr}$). Instead, in the absence of ion correlations, the counterions exhibit a single peak near the wall that decays monotonically to the bulk, whereas the coion density profiles simply increase monotonically from the wall to their bulk concentration, with no peak.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/compare_coion_Profiles_DFT_all_atz.png}\caption{Comparison of the coion densities for the cDFT (dashed curves) and the BD simulations (solid curves) as a function of distance $r$ from the channel wall, for wall charge densities from the $\sigma=-6$ column of Table \ref{table:wall_charge}.}
\label{fig:wallDL_DFT2}
\end{figure}
Thus, the cDFT charge correlation terms capture the charge density qualitatively as the ionic concentration is varied, but as the system becomes more strongly charged there are some significant quantitative deviations from the simulation results. Compared to the BD simulations, at smaller $C_{m}$ the cDFT underestimates the magnitude of the coion peak but is in fairly good agreement with the long-range behavior of the counterion density profiles. At larger values, $C_{m} > 6$, the cDFT overestimates the magnitude of the coion peak and also overestimates the amount of depletion in the counterion density. For all concentrations and wall charge densities, the cDFT overestimates the counterion contact density at the charged wall as compared with the BD simulations (not shown).
Similar behavior is seen for the ion concentrations around the colloidal particle, as shown in Figures \ref{fig:macroDL_DFT1} and \ref{fig:macroDL_DFT2} for $\sigma = -3$. The cDFT underestimates the magnitude of the coion peak, especially for $C_m=4$, and again overestimates the magnitude of the counterion contact density (not shown).
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/compare_counterion_macroProfiles_DFT_all.png}\caption{Comparison of the counterion densities for the cDFT (dashed curves) and the BD simulations (solid curves) as a function of distance $r$ from colloidal particle, for $\sigma=-3$. }
\label{fig:macroDL_DFT1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/compare_coion_macroProfiles_DFT_all.png}\caption{Comparison of the coion densities for the cDFT (dashed curves) and the BD simulations (solid curves) as a function of distance $r$ from the colloidal particle, for $\sigma=-3$. }
\label{fig:macroDL_DFT2}
\end{figure}
We note that we are using the simplest form of the charge correlation term in the cDFT, namely the MSA expression for the direct correlation function $c(r)$, evaluated at the bulk density of the ions (i.e. the densities in the middle of the channel). In our previous study of the interactions between charged nanoparticles in electrolyte, we found good agreement between cDFT and molecular dynamics simulations in the density profiles \cite{StevensFrischknechtNanoparticle2016}. However, for our cDFT approach and for regimes comparable to our current studies, discrepancies with simulations have previously been observed at large ion concentrations and in regions near highly charged walls in the work of Oleksy and Hansen~\cite{Oleksy:2006ed}. Oleksy and Hansen compared cDFT to Monte Carlo simulations for a 1:1 electrolyte at 1 M concentration near a charged wall with reduced charge density $\sigma^* = 0.42$~\cite{Oleksy:2006ed}. They also included a hard sphere solvent, and found differences in the ion density profiles of similar magnitude to those found in our work. Improvements to the charge correlation term, such as using the local weighted density in the calculation of $c(r)$, lead to excellent agreement between cDFT and e.g. molecular dynamics (MD) simulations near highly charged surfaces \cite{Lee:2012fj}. The RFD functional of Gillespie and coworkers \cite{GillespieDensity2005}, which uses a local weighted density in $c(r)$, has been shown to give good agreement with simulation results and experiment in a variety of studies \cite{GillespieDensity2005,GillespiePennathurCorrelations2011}. Thus, in strongly charged regimes a more sophisticated approach beyond the simple bulk MSA treatment is needed to capture ion correlations if quantitative accuracy is sought near surfaces. In this paper, our main focus was to gain further insight into the qualitative role of charge correlations, so the simpler cDFT treatment is adequate. We also note that to our knowledge, more sophisticated treatments of charge correlations have not yet been implemented in a cDFT code that can also do 3D calculations in the geometry we study here.
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{./figData/eDFT_combined.png}
\caption{Comparison of the free energy as a function of particle position in the nanochannel for cDFT, mean-field cDFT, and PB theory with the BD simulations.}
\label{fig:compare_rpm_cdft}
\end{figure}
}
Next we consider the free energy for the colloidal particle as a function of position in the nanochannel. For systems with large ionic concentrations and high charge density on the particle, the cDFT becomes computationally difficult to converge given the localized structures that develop within the density fields. In Figure~\ref{fig:compare_rpm_cdft} we compare cDFT to the simulation results only for $\sigma = -3$ and $C_m = 1.0$ and $C_m = 2.0$, values which are accessible with the cDFT computational methods. We see that cDFT captures the trends on a qualitative level compared to the simulation results. In particular, for sufficiently high charge, the cDFT also predicts the development of a free energy minimum for the colloidal particle near the wall. In contrast, both the PB theory, which neglects sterics and correlations, and also mean-field cDFT with no charge correlations, are found to predict a purely repulsive interaction energy between the colloidal particle and wall. Figure~\ref{fig:compare_cdft_free_eng} shows cDFT results for differing charge densities on the colloidal particle, all at $C_m=2.0$. As the charge on the particle increases, the depth of the minimum in the free energy increases, as also found (for higher particle charges) by the BD simulations. In some cases the cDFT also predicts a small barrier in the free energy between the minimum and the center of the channel, but with cDFT we cannot access the high ion concentration regimes where this barrier is as large as in the BD simulations.
The difficulty in converging the cDFT calculations was surprising, but the systems studied here have higher ion concentrations and surface charge densities than most previous cDFT studies. In particular, our previous investigation of the interactions between like-charged nanoparticles had maximum ion concentrations of about 220 mM, which is close to the smallest ion concentration in the current study \cite{StevensFrischknechtNanoparticle2016}. Decreasing the strength of the electrostatic interactions slightly in the cDFT, by increasing the reduced temperature from $T^* = 0.33$ to $T^* = 0.43$, enabled convergence of systems with higher ion concentration (e.g., the $\sigma =-3$, $C_m=4$ system). This change corresponds to increasing the ion diameter from 0.232 nm to 0.30 nm. However, further increases in $T^*$ would be needed to obtain convergence at the higher ion concentrations, so we did not pursue those calculations.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{./figData/energy_Cmult2_sigma_comp_atz.png}
\caption{Free energy of the colloidal particle as a function of position from cDFT as the particle charge $\sigma$ is varied.}
\label{fig:compare_cdft_free_eng}
\end{figure}
While cDFT agrees qualitatively with the simulation results there are some significant quantitative discrepancies. The location of the free energy minimum in the cDFT is significantly closer to the nanochannel wall than in the BD simulations. This is likely due to the somewhat narrower ion layers in the cDFT. We also find cDFT predicts a depth for the free energy well that is significantly smaller than observed in the simulation results, see Figures~\ref{fig:compare_rpm_cdft} and~\ref{fig:compare_cdft_free_eng}. Nevertheless, it is clear from these results that the attractive well results from ion charge correlations.
\section{Discussion}
\label{sec:discussion}
In the regimes studied, the ions tend to form clusters in the bulk electrolyte and a compact condensed layer near the channel walls. The interplay between the ionic layers associated with the colloidal particle and the wall can result in a significant attraction between the like-charged colloidal particle and wall. As discussed in Section~\ref{sec:ion_dl_struct} this occurs at a distance comparable to the thickness of the condensed counterion layer. As can be seen in Figures~\ref{fig:wall_layers} and~\ref{fig:particle_layers_s_n6}, there is a secondary layer of negative coions just beyond the counterion layer. At the distance of the free energy minimum, the negatively charged colloidal particle joins the secondary layer of negative coions. From our comparisons between the BD simulations and the cDFT calculations, we found the attraction to be a consequence of the ion-ion correlations. In contrast, the mean-field theories (either PB or mean-field cDFT), which neglect these correlations, predict a purely repulsive interaction between the colloidal particle and wall.
The free energy of the colloidal particle location also exhibits an energy barrier. For the case of a strongly charged colloidal particle and high ion concentration ($\sigma = -6$ and $C_m = 8$) there is a significant condensed counterion layer on the particle surface. As the colloidal particle approaches the wall, the condensed layer of the colloidal particle merges with the condensed wall layer. In some charge regimes these rearrangements result in the free energy barriers observed in Figure~\ref{fig:free_energy_all}. This effect appears to occur only for intermediate ion concentrations of $C_m = 6, 8$, for $\sigma=-3$ and $-6$, and disappears when the ion concentration becomes sufficiently large. The significant rearrangements that occur as the particle approaches the wall indicate a strong role played by the ion-ion correlations and discrete structures in determining the free energy of the wall-particle interactions.
It is interesting to consider further the differences between multivalent and monovalent systems. We performed two additional sets of simulations of monovalent systems with 1:1 electrolytes; further details and results are in Appendix \ref{sec:monovalent}. In the first set of simulations, we keep the number density of the monovalent ions the same as in the multivalent system. While this case results in a different charge density, it retains the same entropic contributions in the free energy. In the second we keep the charge density of the system the same but double the number of counterions, which increases the number of charge carriers and the entropic contributions in the free energy. In both cases, we find that the 1:1 electrolyte no longer results in a significant free energy minimum. In the more strongly charged system with more charge carriers the free energy minimum is suppressed even further than in the less charged system, which shows only a very small (relative to $k_B{T}$) and wide region of lower free energy, see Figure~\ref{fig:free_energy_all} and Figure~\ref{fig:mono_free_energy}. This indicates that the multivalent system may benefit significantly from having fewer charge carriers, which reduces the entropic penalties associated with condensation of charge on the walls and strong correlations at the colloidal particle surface. There is also a larger energy gain, or smaller entropic loss, when ions share a screening charge in common. It can also be seen that in the monovalent systems the electrolyte is more diffuse, without the transient ion clusters present in the multivalent system. The simulation results indicate that it is the asymmetry between the ion charges and the reduced entropic penalty for forming discrete structures that is responsible for the rich phenomena seen in multivalent electrolytes and charged systems.
Thus, the simulation results show that both the ion correlations and the resulting discrete ion configurations play important roles in determining the free energy of the system. In the BD simulations strong electrostatic interactions and multivalent ions can result in the formation of discrete clusters, and the interactions can be mediated at the level of individual ions and their arrangements, as seen in Figures~\ref{fig:rpm_model}, \ref{fig:particle_ion_3D} and \ref{fig:ions_particle_near_wall}. This is expected to pose significant challenges in formulating constitutive equations for continuum descriptions of the system and in making quantitative predictions. The radial distribution functions we report for the counterions and coions in the bulk and near the wall may be helpful toward that aim, see Figures~\ref{fig:ion_ion_correlations_bulk} and~\ref{fig:ion_ion_correlations_wall}. The significant quantitative differences between the cDFT and the simulation results arise from the correlation terms in the cDFT functional that are based on the mean-spherical approximation (MSA) for bulk electrolytes. It would be of interest in future work to examine whether the RFD functional \cite{GillespieDensity2005}, which is still based on the MSA direct correlation function but for the local (inhomogeneous) rather than bulk density, would be sufficient to match the present simulation results, or whether improved expressions for the direct correlation function, such as from the new DH-extended MSA (DHEMSA) closure of Olvera de la Cruz and coworkers \cite{Zwanikken:2011jj}, would give better agreement. However, it may also be the case that for nanoscale systems with finite numbers of ions there are effects that cannot be captured by density functional theories, which by construction include only the average ion density.
\section{Conclusion}
We have investigated the behavior of a charged colloidal particle confined in a nanochannel. We have found for multivalent 2:1 electrolytes that strong ion-ion correlations can develop that give interesting free energy profiles for the colloidal particle position within the channel. We found that the free energy profile can exhibit minima giving a preferred location for the colloid near the channel center and near to, but separated from, the channel wall. We found that in some of the charge regimes the minima can be separated by significant energy barriers. This appears to be the result of over-charging of the double-layer that forms near the colloidal particle surface, see Figure~\ref{fig:particle_overcharge}. Comparisons between our BD simulations and cDFT and PB theory indicate the strong role played by ion-ion correlations. As may be expected from a mean-field theory, the PB approach was found to be inadequate in capturing even qualitative features of the simulation results. The cDFT approach is found to capture at a qualitative level the main trends seen in the simulation results, both for the ionic densities and for the free energy profile as the charge of the system is varied. However, the cDFT results have quantitative discrepancies with the simulation results, in both the ionic layer densities near the walls and in the depth of the free energy well. This appears to arise from the MSA approach used for the charge correlation term, which is based on hard-sphere models of unconfined bulk electrolytes. Our simulations indicate that near surfaces the ions can form interesting ionic structures, such as clusters or discrete layers, differing significantly from bulk behaviors. To obtain more quantitative accuracy such effects would have to be captured, likely requiring further development of correlation terms for cDFT. Overall the cDFT did make predictions in qualitative agreement with most of the BD simulation results.
The results we report could have implications for many phenomena within nanochannels and more broadly nanodevices that rely upon electrical effects. For instance, in the case of capillary electrophoresis the free energy profile indicates that colloidal particles within the device may hop between positions close to the nanochannel wall and close to the channel center. Given the expected differences in particle mobilities in these locations, this could significantly affect arrival time observations. More generally, our results show that discrete ion-ion interactions may play a dominant role in nanodevices, requiring more sophisticated theory than provided by traditional mean-field approaches such as the widely used Poisson-Boltzmann theory. Toward the aim of developing better correlation terms for cDFT, our bulk and wall radial distribution results may be useful. Many of our results are expected to be useful in gaining insights into other charged systems, such as biological macromolecules, where similar discrete ion interactions and collective effects may be relevant.
\section{Acknowledgments}
The authors P.J.A and I.S. acknowledge support from research grant NSF CAREER DMS-0956210, NSF DMS - 1616353, W. M. Keck Foundation, and DOE ASCR CM4 DE-SC0009254. We also acknowledge UCSB Center for Scientific Computing NSF MRSEC (DMR-1121053) and UCSB MRL NSF CNS-0960316. The authors would also like to thank Kai Sikorski for discussions and work developing codes for LAMMPS. This work is supported by the Applied Mathematics Program within the Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) as part of the Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
\bibliographystyle{plain}
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}
{-3.5ex \@plus -1ex \@minus -.2ex}
{2.3ex \@plus .2ex}
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}
{3.25ex \@plus1ex \@minus.2ex}
{-1em}
{\normalfont\normalsize\bfseries}}
\makeatother
\renewcommand*\contentsname{\normalsize\bfseries \begin{center} Table of contents \end{center} }
\renewcommand{\cftsecleader}{\cftdotfill{\cftdotsep}}
\newdimen\tableauside\tableauside=1.0ex
\newdimen\tableaurule\tableaurule=0.4pt
\newdimen\tableaustep
\def\phantomhrule#1{\hbox{\vbox to0pt{\hrule height\tableaurule
width#1\vss}}}
\def\phantomvrule#1{\vbox{\hbox to0pt{\vrule width\tableaurule
height#1\hss}}}
\def\sqr{\vbox{%
\phantomhrule\tableaustep
\hbox{\phantomvrule\tableaustep\kern\tableaustep\phantomvrule\tableaustep}%
\hbox{\vbox{\phantomhrule\tableauside}\kern-\tableaurule}}}
\def\squares#1{\hbox{\count0=#1\noindent\loop\sqr
\advance\count0 by-1 \ifnum\count0>0\repeat}}
\def\tableau#1{\vcenter{\offinterlineskip
\tableaustep=\tableauside\advance\tableaustep by-\tableaurule
\kern\normallineskip\hbox
{\kern\normallineskip\vbox
{\gettableau#1 0 }%
\kern\normallineskip\kern\tableaurule}%
\kern\normallineskip\kern\tableaurule}}
\def\gettableau#1 {\ifnum#1=0\let\next=\null\else
\squares{#1}\let\next=\gettableau\fi\next}
\tableauside=1.5ex
\tableaurule=0.2pt
\begin{document}
\begin{center}
\vspace*{2mm}
{\Large\sf
{$\mathcal{W}$-algebras and surface operators in {\large $\mathcal{N}=2$} gauge theories }}
\vspace*{6mm}
{\large Niclas Wyllard}
\vspace*{4mm}
{\tt [email protected]}
\vspace*{8mm}
{\bf Abstract}
\end{center}
\vspace*{0mm}
\noindent
A general class of $\mathcal{W}$-algebras can be constructed from the affine $\mathrm{sl}(N)$ algebra
by (quantum) Drinfeld-Sokolov reduction; the resulting algebras are classified by partitions of $N$. Surface operators in a $4d$ $\mathcal{N}=2$ $\mathrm{SU}(N)$ gauge theory are also classified by partitions of $N$.
We argue that instanton partition functions of $\mathcal{N}=2$ gauge theories in the presence of a surface operator can also be computed from the corresponding $\mathcal{W}$-algebra.
We test this proposal by analysing the Polyakov-Bershadsky $\mathcal{W}_3^{(2)}$ algebra, obtaining results that are in agreement with the known partition functions for $\mathrm{SU}(3)$ gauge theories with a so called simple surface operator. As a byproduct, our proposal implies relations between the $\mathcal{W}_3^{(2)}$ and $\mathcal{W}_3$ algebras.
\vspace{1mm}
\setcounter{tocdepth}{1}
\setcounter{equation}{0}
\section{Introduction}\label{sint}
In the last year several new detailed connections between $2d$ conformal field theories and $4d$ quiver gauge theories with $\mathcal{N}\,{=}\,2$ supersymmetry have been discovered. In particular, conformal (or chiral) blocks \cite{Belavin:1984} of certain $2d$ conformal theories have been argued to be equal to instanton partition functions \cite{Nekrasov:2002} in $4d$ $\mathcal{N}\,{=}\,2$ quiver gauge theories.
The starting point of the new developments was the important paper \cite{Alday:2009a} where a relation between the Liouville theory (whose conformal blocks are those of the Virasoro algebra) and instanton partition functions in (conformal) $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(2)$ quiver gauge theories was uncovered. This result has been extended to various other $2d$ theories, such as the $2d$ $A_{N-1}$ Toda theories, whose conformal blocks are those of the $\mathcal{W}_N$ algebras, and are conjectured to be related \cite{Wyllard:2009} to instanton partition functions in (conformal) $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(N)$ quiver gauge theories. Extensions to non-conformal gauge theories have also been discussed, first for $\mathrm{SU}(2)$ theories in \cite{Gaiotto:2009b} and later also for higher rank theories \cite{Taki:2009}. In addition, conformal blocks of $2d$ conformal field theories with affine $\mathrm{sl}_N$ symmetry have been argued to be related to conformal $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(N)$ gauge theories in the presence of a so called full surface operator. This was first proposed for the affine $\mathrm{sl}(2)$ conformal blocks in \cite{Alday:2010} and further studied in \cite{Awata:2010}. The extension to $\widehat{\mathrm{sl}}_N$ (affine $\mathrm{sl}_N$) was discussed in \cite{Kozcaz:2010b}.
In this paper we argue that the above relations are special cases of a general connection between
$\mathcal{W}$-algebras and instanton partition functions in $\mathcal{N}=2$ gauge theories in the presence of surface operators.
Before describing our proposal in more detail, we should point out that in parallel to the physics developments there have also been many important results in the mathematics literature. For instance, the results in \cite{Carlsson:2008} can be viewed as a simpler version of the AGT relation \cite{Alday:2009a} when the gauge group is $\mathrm{U}(1)$ rather than $\mathrm{SU}(2)$. In the pioneering papers \cite{Braverman:2004a} various aspects of instanton partition functions in the presence of surface operators were discussed. In particular, for the pure $\mathrm{SU}(N)$ theories with a full surface operator it was shown that the partition function of the gauge theory is equal to the norm of a so called Whittaker vector of the $\widehat{\mathrm{sl}}_N$ algebra. This result can be viewed as a non-conformal version of the AT relation \cite{Alday:2010} and is analogous to the discussion in \cite{Gaiotto:2009b}, which is valid in the absence of surface operators and can also be formulated in the language of Whittaker vectors (see
e.g.~\cite{Yanagida:2010}). In a further development \cite{Feigin:2008} explicit expressions for the instanton partition functions of $\mathrm{SU}(N)$ quiver gauge theories in the presence of a full surface operator were determined. Finally, we must also mention the recent paper \cite{Braverman:2010} which contains ideas similar to the ones in this work, albeit phrased in a more mathematical language. Phrased in physics terminology, it is shown in \cite{Braverman:2010} that the subsector of the instanton partition function for the pure $\mathrm{SU}(N)$ theory in the presence of a general surface operator in which $4d$ instanton effects decouple is equal to the norm of a Whittaker vector of a so called finite $\mathcal{W}$-algebra (a certain finite subalgebra of a $\mathcal{W}$-algebra). For non-conformal theories, our proposal can be viewed as an extension of the result in \cite{Braverman:2010} to the full $\mathcal{W}$-algebra (such a possibility was also mentioned in \cite{Braverman:2010} but was not spelled out explicitly).
A natural class of $\mathcal{W}$-algebras are obtained from the $\widehat{\mathrm{sl}}_N$ algebra\footnote{Throughout this paper we focus on the $\widehat{\mathrm{sl}}_N$ algebras and their associated
$\mathcal{W}$-algebras, but similar results are expected to hold also for other affine Lie algebras.}
by quantum Drinfeld-Sokolov reduction (also called hamiltonian reduction). The $\mathcal{W}$-algebras that arise from this construction are classified by the embeddings of $\mathrm{sl}_2$ inside $\mathrm{sl}_N$ (or equivalently by the nilpotent orbits or Levi subalgebras of $\mathrm{sl}_N$). Concretely this means that these $\mathcal{W}$-algebras are classified by partitions of $N$. The (quantum) Drinfeld-Sokolov reduction method was studied for the $\widehat{\mathrm{sl}}_2$ algebra in \cite{Bershadsky:1989} and shown to lead to the Virasoro algebra upon reduction. An extension to $\widehat{\mathrm{sl}}_N$ that gives rise to the $\mathcal{W}_N$ algebras upon reduction was developed in \cite{Feigin:1990} (see also the pioneering work \cite{Fateev:1987}). In the language of $\mathrm{sl}_2$ embeddings the reductions in \cite{Feigin:1990} correspond to the so called principally embedded $\mathrm{sl}_2$ subalgebras. The first example of a reduction corresponding to a non-principally embedded $\mathrm{sl}_2$ was obtained in \cite{Bershadsky:1990} where a reduction from $\widehat{\mathrm{sl}}_3$ gave rise to a previously unknown $\mathcal{W}$-algebra, now referred to as the Polyakov-Bershadsky $\mathcal{W}_3^{(2)}$ algebra \cite{Polyakov:1989,Bershadsky:1990}. The general connection to $\mathrm{sl}_2$ embeddings was first observed in the classical case \cite{Bais:1990} (see also the review \cite{Feher:1992}). A general theory of quantum reductions for arbitrary $\mathrm{sl}_2$ embeddings was developed in \cite{deBoer:1993} (see also e.g.~\cite{Kac:2003a} for some further mathematical developments.)
One way to define a surface operator in a $4d$ gauge theory is by specifying the (singular) behaviour of the gauge field (and scalars, if present) near the $2d$ submanifold where the surface operator is supported. In \cite{Gukov:2006} it was found that the possible types of surface operators in an $\mathcal{N}\,{=}\,4$ $\mathrm{SU}(N)$ gauge theory are in one-to-one correspondence with the Levi subalgebras of $\mathrm{SU}(N)$. Concretely this means that for every (non-trivial) partition of $N$ there is a possible surface operator. Surface operators in $4d$ $\mathrm{SU}(N)$ theories with $\mathcal{N}\,{=}\,2$ supersymmetry are also classified by partitions of $N$ and have been studied e.g.~in \cite{Gukov:2007} (and more recently in the context of the AGT relation in several papers \cite{Alday:2009b,Gaiotto:2009c,Kozcaz:2010,Dimofte:2010,Maruyoshi:2010,Taki:2010,Alday:2010,Awata:2010,Kozcaz:2010b}). For $\mathcal{N}\,{=}\,2$ theories a surface operator depends on a certain number of continuous complex parameters, one for each abelian $\mathrm{U}(1)$ factor in the Levi subalgebra. Following \cite{Alday:2010} we call a surface operator corresponding to the partition $N=(N{-}1)+1$ a simple surface operator and a surface operator corresponding to $N=1{+}\ldots{+}1$ a full surface operator.
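Since partitions play such a central role in what follows, it may be useful to make the counting explicit. The small Python sketch below (purely illustrative) enumerates the partitions of $N$ that label both the $\mathcal{W}$-algebras and the surface operators; for $N=3$ it reproduces the three cases discussed in section \ref{A2}.
\begin{verbatim}
def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

print(list(partitions(3)))  # [(3,), (2, 1), (1, 1, 1)]
# 3 = 3: no surface operator (W_3); 3 = 2+1: simple (W_3^(2));
# 3 = 1+1+1: full surface operator (affine sl(3))
\end{verbatim}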
As was recalled above, both the $\mathcal{W}$-algebras that are obtained by quantum Drinfeld-Sokolov reduction from the $\widehat{\mathrm{sl}}_N$ algebra and surface operators in $\mathcal{N}=2$ $\mathrm{SU}(N)$ gauge theories are classified by partitions of $N$. We argue that this is not a coincidence and that the two classes of objects are related.
We propose that instanton partition functions of $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(N)$ gauge theories in the presence of a surface operator corresponding to a given partition of $N$ are also computable from the $\mathcal{W}$-algebra corresponding to the same partition. For non-conformal gauge theories the relevant $\mathcal{W}$-algebra quantity is the norm of a Whittaker vector, whereas for conformal gauge theories the relevant object is a conformal block. This proposal generalises in a very natural way the two cases previously considered in the literature: Whittaker vectors/conformal blocks of the $\widehat{\mathrm{sl}}(N)$ algebra have been shown/argued \cite{Braverman:2004a,Alday:2010,Kozcaz:2010b} to correspond to non-conformal/conformal $\mathrm{SU}(N)$ instanton partition functions with a full surface operator and conformal blocks/Gaiotto states of the $\mathcal{W}_N$ algebras correspond \cite{Alday:2009a,Wyllard:2009, Gaiotto:2009b,Taki:2009} to conformal/non-conformal $\mathrm{SU}(N)$ instanton partition functions in the absence of a surface operator. In the language of partitions, these two cases correspond to the partitions $N=1{+}\cdots{+}1$ and $N=N$, respectively.
In the next section we test our proposal by analysing the Polyakov-Bershadsky $\mathcal{W}_3^{(2)}$ algebra. This $\mathcal{W}$-algebra corresponds to the partition $3=2{+}1$ and is the simplest case which has not previously been studied.
Our proposal implies that it should be possible to use $\mathcal{W}_3^{(2)}$ methods to compute partition functions in $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(3)$ gauge theories with a simple surface operator. Such partition functions have previously been computed using other approaches\footnote{Strictly speaking the simple surface operator appearing in these papers, although also associated with $3=2+1$, is not precisely the same as the one that appears in the $\mathcal{W}_3^{(2)}$ computation. However our results (as well as those in \cite{Awata:2010,Kozcaz:2010}) indicate that for the purpose of computing the instanton partition function they can be considered to be the same (at least for non-quiver theories). In this paper both types will therefore be referred to as a simple surface operator (see section \ref{sdisc} for a further discussion).} \cite{Alday:2009a,Kozcaz:2010,Dimofte:2010,Taki:2010}. Using these results we find agreement with $\mathcal{W}_3^{(2)}$ computations. As a byproduct we find relations between the $\mathcal{W}_3^{(2)}$ and $\mathcal{W}_3$ algebras.
\setcounter{equation}{0}
\section{$\mathcal{W}$-algebras and surface operators for rank two } \label{A2}
In this section we test the idea outlined above relating $\mathcal{W}$-algebras and instanton partition functions in $\mathcal{N}=2$ gauge theories with surface operators. We focus on the rank two theories. For such theories, the partition $3=1{+}1{+}1$ corresponds to the $\widehat{\mathrm{sl}}(3)$ algebra (no reduction) and to a full surface operator in $\mathcal{N} \,{=}\,2$ $\mathrm{SU}(3)$ gauge theories, while the partition $3=3$ corresponds to the reduction of $\widehat{\mathrm{sl}}(3)$ to the $\mathcal{W}_3$ algebra \cite{Zamolodchikov:1985} and to the absence of a surface operator. The final case, $3=2{+}1$, corresponds to the reduction of $\widehat{\mathrm{sl}}(3)$ to the $\mathcal{W}^{(2)}_3$ algebra \cite{Polyakov:1989,Bershadsky:1990} and to a simple surface operator. We summarise the various possibilities in the following table:
\medskip
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Partition & $2d$ symmetry algebra& Type of surface operator\\
\hline
$\phantom{\bigg(} \!\!\! 1{+}1{+}1$& $\widehat{\mathrm{sl}}(3)$& Full\\
\hline
$\phantom{\bigg(} \!\!\! 2{+}1$& $\mathcal{W}_3^{(2)}$ & Simple\\
\hline
$\phantom{\bigg(} \!\!\! 3$& $\mathcal{W}_3$& Absent \\ \hline
\end{tabular}
\end{center}
\medskip
The relation between the second and third columns in the last row is the $A_2$ AGT relation \cite{Wyllard:2009,Alday:2009a} (or its non-conformal version \cite{Taki:2009,Gaiotto:2009b}), and in the first row it is the $A_2$ AT relation \cite{Kozcaz:2010b,Alday:2010} (or its non-conformal version \cite{Braverman:2004a}). The relation between the last two columns in the middle row is the subject of this section and constitutes the first previously unknown case illustrating our proposal relating $\mathcal{W}$-algebras and surface operators in $\mathcal{N}=2$ gauge theories.
We first review various properties of the $\mathcal{W}_3^{(2)}$ algebra and its representations and then in section \ref{sW32pert} perform some perturbative computations. These results should be compared to instanton partition functions in $\mathrm{SU}(3)$ theories with a simple surface operator. In the general case we do not know how to compute the instanton partition function in the presence of a surface operator. However, for the case of a simple surface operator one can fortunately use the alternative dual description in terms of a degenerate field in the $\mathcal{W}_3$ algebra ($A_2$ Toda theory) \cite{Alday:2009b,Drukker:2010,Kozcaz:2010,Dimofte:2010}. Using this result, in section \ref{sdeg} we perform some perturbative $\mathcal{W}_3$ computations (with a degenerate field insertion), finding complete agreement with the $\mathcal{W}_3^{(2)}$ computations in section \ref{sW32pert}.
\subsection{The $\mathcal{W}^{(2)}_3$ algebra and its representations } \label{sW32}
The Polyakov-Bershadsky $\mathcal{W}_3^{(2)}$ algebra \cite{Polyakov:1989,Bershadsky:1990} is an extension of the Virasoro algebra. In addition to the energy-momentum tensor $T(z)$ it also contains two fields $G^{\pm}(z)$ each with conformal dimension $3/2$ and one field $J(z)$ with conformal dimension $1$. These fields have the mode expansions
\begin{equation} \label{modes}
J(z) = \sum_n z^{-n-1} J_n \,, \qquad G^{\pm}(z) = \sum_n z^{-n-\frac{3}{2}} G^{\pm}_n \,, \qquad T(z) = \sum_n z^{-n-2} L_n \,.
\end{equation}
The modes satisfy the following commutation relations (which are straightforwardly obtained from the more commonly quoted operator product expansions)
\begin{eqnarray} \label{W32}
&& \!\!\!\!\! \!\!\!\!\! [L_n,J_m] = -m\, J_{n+m}\,, \qquad [L_n,G_m^{\pm}] = (\frac{n}{2}{-}m)\, G^{\pm}_{n+m}\,, \qquad [J_n,G_m^{\pm}] = \pm G_{n+m}^{\pm} \,, \nonumber \\
&& \!\!\!\!\! \!\!\!\!\! {}[J_n,J_m] = \frac{2k+3}{3} \,n \, \delta_{n+m,0} \,, \quad \; [L_n,L_m] = (n{-}m)L_{n+m} + \frac{c}{12} n(n^2-1) \delta_{n+m,0} \,, \\
&& \!\!\!\!\! \!\!\!\!\! {}[G^+_n,G^-_m] = \frac{(k+1)(2k+3)}{2}(n^2{-}{\textstyle \frac{1}{4} }) \delta_{n+m,0} - (k{+}3) L_{n+m} +\frac{3}{2} (k{+}1) (n{-}m) J_{n+m} \nonumber \\
&& \!\!\!\!\! \!\!\!\!\! \qquad \qquad\, +\, 3 \,\sum_\ell : J_{n+m-\ell}J_\ell: \nonumber
\end{eqnarray}
where $k$ is a parameter, $c=-\frac{(2k+3)(1+3k)}{k+3}$ and $:\;:$ denotes the normal ordering
\begin{equation}
: X_n Y_m: = \left\{ \begin{array}{c} X_n Y_m \qquad {\rm if} \qquad n \le m \\
Y_m X_n \qquad {\rm if} \qquad n > m\end{array} \right.
\end{equation}
The $\mathcal{W}^{(2)}_3$ algebra is similar to the well-known $\mathcal{N}\,{=}\,2$ superconformal algebra \cite{Ademollo:1975}, but in (\ref{W32}) $G^{\pm}_n$ are bosonic and there is a nonlinear $J^2$ term in the algebra. Despite these differences it is still true that one can consider both Ramond and Neveu-Schwarz sectors. These differ by whether $n$ in the mode-expansion of $G^{\pm}(z)$ in (\ref{modes}) are integers or half-integers.
We mainly consider the Ramond sector, where $G^{\pm}_n$ are integer moded.
The zero-mode sector of (\ref{W32}) is of particular importance and is spanned by $J_0$, $G^{\pm}_0$, and $L_0$. Introducing the notation
\begin{equation} \label{finitegens}
H = 2J_0 \,, \qquad E = 2G_0^+ \,, \qquad F= \frac{2}{3}G^-_0 \,, \qquad C= -\frac{4(k{+}3)}{3}L_0 -\frac{(k{+}1)(2k{+}3)}{6}\,,
\end{equation}
we find the algebra
\begin{equation} \label{finiteW}
[H,E]= 2E \,, \qquad [H,F]= -2F \,, \qquad [E,F] = H^2 + C\,.
\end{equation}
This is an example of a so-called finite $\mathcal{W}$-algebra \cite{Tjin:1992}. Finite $\mathcal{W}$-algebras can be obtained by (quantum) Drinfeld-Sokolov reduction from {\it ordinary} Lie algebras (rather than from affine Lie algebras) \cite{Tjin:1992,deBoer:1992}. The above algebra (\ref{finiteW}) arises via reduction from $\mathrm{sl}_3$ \cite{Tjin:1992,deBoer:1992}. (See \cite{DeSole:2005} for a discussion of various equivalent ways of defining a finite $\mathcal{W}$-algebra and their relations to $\mathcal{W}$-algebras.) As discussed in \cite{DeSole:2005} it is the Ramond sector that is most directly related to the finite $\mathcal{W}$-algebra.
The representation theory for the $\mathcal{W}^{(2)}_3$ algebra has been developed in the literature.
In the Ramond sector, a highest weight (or primary) state $|\lambda \rangle$ satisfies \cite{Kac:2004}
\begin{equation} \label{eigenvals}
L_0 |\lambda \rangle = \left( \frac{ \langle \lambda, \lambda - (k+1)\rho \rangle}{2(k+3)} -\frac{1}{8} \right) |\lambda \rangle \,, \qquad J_0 |\lambda \rangle = \left( \langle \lambda,h_2\rangle - {\textstyle \frac{1}{2} } \right) |\lambda \rangle \,,
\end{equation}
together with
\begin{equation} \label{anni}
L_n |\lambda \rangle = G^{+}_{n-1} |\lambda \rangle = G^{-}_{n} |\lambda \rangle = J_n |\lambda \rangle = 0 \qquad (n=1,2,\ldots)\,.
\end{equation}
In (\ref{eigenvals}) $\lambda$ denotes a vector in the root/weight space of $\mathrm{sl}_3$, i.e.~$\lambda = \lambda^1 \Lambda_1 + \lambda^2 \Lambda_2$ where $\Lambda_{1,2}$ are the two fundamental weights of $\mathrm{sl}_3$. Furthermore, $\rho=\Lambda_1+\Lambda_2$ is the Weyl vector and $h_2 = \Lambda_2-\Lambda_1$ (see appendix \ref{ALie} for more details of our Lie algebra conventions). Note that shifting $\lambda$ in (\ref{eigenvals}) by a term proportional to $\rho$ changes the form of the $L_0$ eigenvalue, but does not change the $J_0$ eigenvalue. The representation theory in the Ramond sector is closely related to the representation theory of the associated finite $\mathcal{W}$-algebra (\ref{finiteW}). The representation theory of the algebra (\ref{finiteW}) was obtained in \cite{Tjin:1992,deBoer:1992} (see also \cite{Smith:1990}).
The Neveu-Schwarz version of (\ref{eigenvals}), (\ref{anni}) can be found e.g.~in \cite{Furlan:1994}. In this case the $\lambda$-independent terms in (\ref{eigenvals}) are absent and the $G^{\pm}$ conditions in (\ref{anni}) are replaced by $G^{\pm}_r |\lambda\rangle = 0$ for all positive half-integers $r$.
In the Ramond sector, the descendants of a primary state, $\langle \lambda |$, are denoted $\langle {\bf n} ;\lambda |$, where
\begin{equation}
\langle {\bf n}; \lambda | = \langle \lambda | G^{+}_{n^+_1-1} \cdots G^{+}_{n^+_{\ell^+}-1} G^{-}_{n^-_1} \cdots G^{-}_{n^-_{\ell^-}} J_{n_1} \cdots J_{n_\ell} \, L_{\tilde{n}_1} \cdots L_{\tilde{n}_\ell} \,,
\end{equation}
and $n^\pm_i$, $\tilde{n}_i$ and $n_i$ can be any positive integer. Similarly,
\begin{equation}
| {\bf n}; \lambda \rangle = L_{-\tilde{n}_1} \cdots L_{-\tilde{n}_\ell} \, J_{-n_1} \cdots J_{-n_\ell} \, G^{+}_{-n^+_1} \cdots G^{+}_{-n^+_{\ell^+}} G^{-}_{-n^-_1+1} \cdots G^{-}_{-n^-_{\ell^-}+1} |\lambda \rangle \,.
\end{equation}
The matrix of inner products of descendants (usually called the Gram or Shapovalov matrix) satisfies
\begin{equation} \label{Xmatrix}
X_{ \lambda}( {\bf n} ; {\bf m }) = \langle {\bf n};\lambda | {\bf m}; \lambda \rangle\propto \delta_{ \rm N , \rm M}\, \delta_{ S_{n}+S_{m},0} \,,
\end{equation}
i.e.~it is a block-diagonal matrix where each block contains only descendants with given values for the total level ${\rm N}= \sum_i (n_i + \tilde{n}_i + n^{+}_i+ n^{-}_i) $ and the total charge, $S_n$, given by the number of $n^{+}_i$ minus the number of $n^{-}_i$.
\subsection{Perturbative computations for the $\mathcal{W}^{(2)}_3$ algebra} \label{sW32pert}
A Whittaker-type state (vector) can be defined for the $\mathcal{W}^{(2)}_3$ algebra in a way completely analogous to the construction in \cite{Braverman:2004a,Braverman:2010} (see also section 5 in \cite{Kozcaz:2010b} for a discussion using the notation of \cite{Gaiotto:2009b} that will also be used below). We denote this state by $|x_1,x_2; \lambda \rangle$ and demand that it should satisfy
\begin{equation} \label{Wconds}
G_0^{+} |x_1,x_2; \lambda \rangle = \sqrt{x_1} \,|x_1,x_2 ; \lambda \rangle \,, \qquad G_1^{-} |x_1,x_2; \lambda\rangle = \sqrt{x_2}\, |x_1 ,x_2; \lambda \rangle \,,
\end{equation}
where all other $G^{\pm}_n$, $J_n$ and $L_n$ that annihilate $| \lambda \rangle$ also annihilate $|x_1,x_2;\lambda \rangle$. The norm of the Whittaker state can be expressed in terms of certain (diagonal) components of the inverse of the matrix (\ref{Xmatrix}). The following set of descendants plays a distinguished role in this construction
\begin{equation}
|n,p; \lambda \rangle = (G_{-1}^{+})^p (G_{0}^{-})^n |\lambda\rangle \,.
\end{equation}
Denoting the corresponding diagonal component of the inverse of the matrix $X_{\lambda}$ by $X^{-1}_\lambda(n,p;n,p)$, the norm of the Whittaker vector can be obtained via
\begin{equation} \label{whit}
\langle x_1,x_2;\lambda |x_1,x_2 ; \lambda \rangle = \sum_{n=0}^{\infty} \sum_{p=0}^{\infty} X_\lambda^{-1}(n,p;n,p) \, x_1^n \, x_2^p\,.
\end{equation}
From our proposal it follows that this expression should equal (possibly up to a prefactor) the instanton partition function for the pure $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(3)$ theory with a simple surface operator insertion.
The terms in (\ref{whit}) containing only $x_1$ involve descendants of the form $(G_0^-)^n | \lambda \rangle$. For such descendants, the Gram matrix is diagonal and can be computed using (\ref{W32}), (\ref{eigenvals}) and (\ref{anni}) with the result
\begin{eqnarray}
\langle \lambda|(G_0^+)^n (G_0^-)^n |\lambda \rangle &=& n (\lambda_1 {-} {\textstyle \frac{k}{2}} {+}{\textstyle \frac{1}{2}} {+}n{-}1)(-\lambda_2 {+} {\textstyle \frac{k}{2}} {+}{\textstyle \frac{3}{2}} {+} n{-}1)\langle \lambda | (G_0^+)^{n-1} (G_0^-)^{n-1}|\lambda \rangle \nonumber \\ &=& n! \, (\lambda_1 -{\textstyle \frac{k}{2}} +{\textstyle \frac{1}{2}})_n (-\lambda_2 + {\textstyle \frac{k}{2}} +{\textstyle \frac{3}{2}})_n \,,
\end{eqnarray}
where $(X)_n= X (X+1) \cdots (X+n-1)$ is the usual Pochhammer symbol. The contribution to (\ref{whit}) is consequently
\begin{equation}\label{x1}
\sum_{n=0}^{\infty}\frac{ 1 }{ (\lambda_1 -{\textstyle \frac{k}{2}} +{\textstyle \frac{1}{2}})_n (-\lambda_2 + {\textstyle \frac{k}{2}} +{\textstyle \frac{3}{2}})_n } \frac{x_1^n}{n!}\,.
\end{equation}
Similarly, the terms depending only on $x_2$ arise from the result
\begin{equation}
\langle \lambda|(G_1^-)^n (G_{-1}^+)^n |\lambda \rangle = (-1)^n n! \, (-\lambda_1 -{\textstyle \frac{k}{2}} -{\textstyle \frac{3}{2}})_n (\lambda_2 - {\textstyle \frac{3k}{2}} -{\textstyle \frac{5}{2}})_n \,,
\end{equation}
and lead to the following contribution to (\ref{whit})
\begin{equation}\label{x2}
\sum_{n=0}^{\infty}\frac{ 1 }{ (-\lambda_1-{\textstyle \frac{k}{2}} - {\textstyle \frac{3}{2}})_n (\lambda_2 - {\textstyle \frac{3k}{2}} - {\textstyle \frac{5}{2}})_n } \frac{(-x_2)^n}{n!}\,.
\end{equation}
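To make the structure of these two closed-form contributions concrete, the following plain-Python sketch evaluates truncations of (\ref{x1}) and (\ref{x2}); the helper names, the truncation order and the sample point are our own illustrative choices.
\begin{verbatim}
# Truncations of the series (x1) and (x2); `lam1`, `lam2` are the
# components of the highest weight lambda, `k` the algebra parameter.
from math import factorial

def pochhammer(x, n):
    # rising factorial (x)_n = x (x+1) ... (x+n-1)
    result = 1.0
    for i in range(n):
        result *= x + i
    return result

def x1_series(x1, lam1, lam2, k, nmax=10):
    return sum(x1**n / (pochhammer(lam1 - k/2 + 0.5, n)
                        * pochhammer(-lam2 + k/2 + 1.5, n)
                        * factorial(n)) for n in range(nmax))

def x2_series(x2, lam1, lam2, k, nmax=10):
    return sum((-x2)**n / (pochhammer(-lam1 - k/2 - 1.5, n)
                           * pochhammer(lam2 - 3*k/2 - 2.5, n)
                           * factorial(n)) for n in range(nmax))

# evaluation at a generic sample point (arbitrary numbers)
print(x1_series(0.1, lam1=0.3, lam2=0.7, k=-1.3))
print(x2_series(0.1, lam1=0.3, lam2=0.7, k=-1.3))
\end{verbatim}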
It is also possible to compute subleading terms. As an example, we consider the terms of the form $x_1^{n+1} \, x_2$. The relevant block of the Gram matrix involves descendants of the form
\begin{eqnarray}
&& |1\rangle= G_{-1}^{+} (G_0^{-} )^{n+1} | \lambda \rangle \,, \qquad |2\rangle=G_{-1}^{-} (G_0^{-} )^{n-1} | \lambda \rangle \,, \nonumber \\
&& |3\rangle=J_{-1} (G_0^{-} )^n | \lambda \rangle \,, \qquad \quad \, |4\rangle=L_{-1} (G_0^{-} )^n | \lambda\rangle \,.
\end{eqnarray}
For any $n\geq 1$ these states generate a $4{\times}4$ sub-block $X_{r,s}= \langle r | s\rangle$ with $r,s=1,\ldots,4$ of the Gram matrix:\footnote{When $n=0$, the block reduces to a $3{\times}3$ block (obtained from (\ref{G44}) by removing the 2nd row and column and setting $n=0$).}
\begin{equation} \label{G44}
X_{r,s}= \left( \begin{array}{cccc} P_1(\lambda) M(n{+}1)& 0 &M(n{+}1) & {\textstyle \frac{3}{2}} M(n{+}1) \\
0& P_2(\lambda) M(n{-}1) & -M(n) & {\textstyle \frac{3}{2}}M(n) \\
M(n{+}1) & -M(n)&{\textstyle \frac{(2k+3)}{3}}M(n) & [q(\lambda)-n] M(n) \\
{\textstyle \frac{3}{2}} M(n{+}1) & {\textstyle \frac{3}{2}}M(n) &[q(\lambda)-n] M(n) &2 \Delta(\lambda)M(n) \end{array} \right)
\end{equation}
with
\begin{eqnarray}
\!\!\!\!\!\!P_1(\lambda)&\!\! =\!\! & -\frac{3(k{+}1)(2k{+}3)}{8} +(k{+}3)\Delta(\lambda) +3(k{+}1)[\Upsilon(\lambda){-}n{-}1]-3[\Upsilon(\lambda){-}n{-}1]^2 \,, \nonumber \\
\!\!\!\!\!\!P_2(\lambda) &\!\! =\!\! & \frac{3(k{+}1)(2k{+}3)}{8} -(k{+}3)\Delta(\lambda) +3(k{+}1)[\Upsilon(\lambda){-}n{+}1]+3[\Upsilon(\lambda){-}n{+}1]^2 \,,
\end{eqnarray}
where $\Delta(\lambda)$ denotes the eigenvalue of $L_0$ in (\ref{eigenvals}), $\Upsilon(\lambda)$ denotes the $J_0$ eigenvalue, and
\begin{equation}
M(n)\equiv {\langle \lambda | (G_0^{+} )^{n} (G_0^{-} )^{n} | \lambda \rangle }= n! \, (\lambda_1 -{\textstyle \frac{k}{2}} +{\textstyle \frac{1}{2}})_n (-\lambda_2 + {\textstyle \frac{k}{2}} +{\textstyle \frac{3}{2}})_n \,.
\end{equation}
Inverting (\ref{G44}) and selecting the 1,1 component in accordance with the general result (\ref{whit}), gives a closed expression for all $x_1^{n+1} \, x_2$ terms. However, as this expression is somewhat unwieldy we only give the coefficient of the $x_1 x_2$ term:
\begin{equation} \label{x1x2}
\frac{ 8 (9 {+} 6 k {+} k^2 {+} 12[1 {+} k] \lambda_1 {+} 4 k^2 \lambda_1 {-} 2 [1{+} k] \lambda_1^2 {+} 8 k \lambda_2 {+} 4 k^2 \lambda_2 {-} 8 \lambda_1 \lambda_2 {-} 4 k \lambda_1 \lambda_2 {-}
2 [1{+} k] \lambda_2^2)}
{ (k{+}3) (k{-}1 {-} 2 \lambda_1) (k{+}3 {+} 2 \lambda_1) (k{+}3 {-} 2 \lambda_2) (3k{+}5 {-} 2 \lambda_2) (2k{+}3 {-} \lambda_1 {-} \lambda_2) (1 {+}
\lambda_1 {+} \lambda_2)}.
\end{equation}
So far we have focused on $\mathcal{W}_3^{(2)}$ quantities that on the gauge theory side correspond to the (non-conformal) pure $\mathrm{SU}(3)$ theory. It should also be possible to consider conformal $\mathrm{SU}(3)$ gauge theories. For instance, from our proposal and standard AGT-type arguments it follows that the four-point $\mathcal{W}_3^{(2)}$ conformal block on the sphere should equal (possibly up to a prefactor) the instanton partition function for the $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(3)$ theory with $N_f=6$ and a simple surface operator insertion. It seems natural to assume that the primary field corresponding to the state $|\lambda\rangle$ can be expressed as $V_{\lambda}(x,z)$, where $x$ is an isospin variable and $z$ denotes the worldsheet coordinate. In the standard decomposition, the four-point conformal block can then be written
\begin{equation}\label{4pt}
\!\!\! \!\!\! \!\!\! \sum_{{\bf n}; {\bf p} }
\frac{ \langle \lambda_1| V_{\xi_2}(1,1) |{\bf n}; \lambda\rangle
X^{-1}_{\lambda}( {\bf n} ; {\bf m } )
\langle {\bf m }; \lambda | V_{\xi_3}(x,z) |\lambda_4 \rangle }{ \langle \lambda_1| V_{\xi_2}(1,1) | \lambda \rangle \langle \lambda | V_{\xi_3}(x,z) |\lambda_4 \rangle } \,.
\end{equation}
As in \cite{Wyllard:2009,Kozcaz:2010b} the $\xi_i$ should be special (restricted) momenta which should lead to crucial simplifications. To compute (\ref{4pt}) one would in particular need to know the commutation relations between the generators of the $\mathcal{W}_3^{(2)}$ algebra and the $V_{\xi_i}$'s. As in the $\widehat{\mathrm{sl}}_3$ case it is natural to expect that these commutation relations can be expressed in terms of differential operators acting on the isospin (and worldsheet) variables (as in the $\mathcal{W}_3$ case there can also be pieces that can not be expressed as differential operators). One encouraging result is that the zero-mode part of the $\mathcal{W}_3^{(2)}$ algebra (i.e.~the finite $\mathcal{W}_3^{(2)}$-algebra) can be realised in terms of differential operators as (see also the discussion in section 6 of \cite{deBoer:1992})
\begin{eqnarray} \label{Dzs}
&& D_0^+ = -x \left[ \frac{(k{+}1)(2k{+}3)}{8} + (k{+}3)x [\Delta + z\partial_z] - 3 \Upsilon^2\right] - x^2(3\Upsilon-{\textstyle \frac{3}{2}} )\frac{{\rm d}}{{\rm d} x} + x^3 \frac{{\rm d}^2}{{\rm d} x^2} \,, \nonumber \\
&&
D_0^- = \frac{{\rm d}}{{\rm d} x} \,, \qquad \quad D_0 = \Upsilon - x \frac{{\rm d}}{{\rm d} x} \,, \qquad \quad
\mathcal{D}_0 = \Delta + z \partial_z \,,
\end{eqnarray}
where $\Delta$ denotes the conformal dimension (the eigenvalue of $L_0$ in (\ref{eigenvals})), and $\Upsilon$ denotes the $J_0$ eigenvalue. (Note that the algebra (\ref{Dzs}) also closes if one omits the $z\partial_z$ terms.)
We should also mention that in the $\widehat{\mathrm{sl}}_N$ computations in \cite{Alday:2010,Kozcaz:2010b} additional operator insertions in the conformal blocks were crucial to obtain agreement with the instanton computations. Similar insertions are probably also required in the $\mathcal{W}_3^{(2)}$ case.
As there are several unsolved (technical) problems associated with the computations of conformal blocks for the $\mathcal{W}_3^{(2)}$ algebra we postpone a full discussion to future work.
\subsection{ $\mathcal{W}_3$ degenerate fields and simple surface operators } \label{sdeg}
Instanton partition functions for $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(3)$ gauge theories can be obtained from the $\mathcal{W}_3$ algebra \cite{Wyllard:2009,Taki:2009} (see also \cite{Mironov:2009a,Kanno:2009}). The addition of a certain simple surface operator can be interpreted as the insertion of a degenerate field in the $2d$ CFT \cite{Alday:2009b,Drukker:2010,Kozcaz:2010,Dimofte:2010}. For the pure $\mathrm{SU}(3)$ theory the relevant quantity is
\begin{equation} \label{W3whit}
\langle y ;\alpha |V_{-b\Lambda_1}(x) |y ; \alpha \rangle ,
\end{equation}
where (in our notation) $ |y ; \alpha \rangle$ is the $\mathcal{W}_3$ (Whittaker) state constructed in \cite{Taki:2009} and $V_{-b\Lambda_1}$ is a degenerate field of the $\mathcal{W}_3$ algebra. For the conformal $\mathrm{SU}(3)$ theory with $N_f=6$ the relevant quantity is a particular five-point $\mathcal{W}_3$ conformal block where two of the insertions are special (cf.~\cite{Wyllard:2009}) and one of the insertions is the degenerate field $V_{-b\Lambda_1}$.
An alternative to the $\mathcal{W}_3$ degenerate field approach is to use the (B or A model) topological string description of a simple surface operator \cite{Kozcaz:2010,Dimofte:2010}, or the gauge theory method in \cite{Kozcaz:2010} which uses a combination of the conjectures in \cite{Alday:2009a} and \cite{Alday:2009b} and corresponds to a geometric transition in the topological string language \cite{Dimofte:2010,Taki:2010}.
We first briefly describe the $\mathcal{W}_3$ approach. Primary fields associated with $\mathcal{W}_3$ are denoted $V_\alpha(z)$ where $\alpha = \alpha^1 \Lambda_1 + \alpha^2 \Lambda_2$, and the corresponding state is denoted $|\alpha\rangle$. By inserting two complete sets of states the five-point $\mathcal{W}_3$ conformal block mentioned above can be written (we suppress the three-point factors in the denominator)
\begin{eqnarray} \label{4+1}
\!\! \!\! \sum_{{\bf n},{\bf n}',{\bf m},{\bf m}'} \!\! \langle \alpha_1| V_{\chi_2}(1) |{\bf n};\alpha \rangle X^{-1}_{ \bf n; \bf n'}(\alpha)
\langle {\bf n}';\alpha | V_{-b\Lambda_1}(x) |{\bf m};\tilde{\alpha} \rangle X^{-1}_{ \bf m; \bf m'}(\tilde{\alpha })
\langle {\bf m}';\tilde{\alpha } | V_{\chi_3}(z) |\alpha_4 \rangle ,
\end{eqnarray}
where $\chi_i = \kappa_i \Lambda_1$, $|{\bf n}; \alpha \rangle$ is short-hand notation for the descendants of the primary state $|\alpha \rangle$, $X^{-1}_{ \bf n; \bf n'}(\alpha )$ is the inverse of the Gram matrix, and the sums run over all descendants. The terms in (\ref{4+1}) with $|{\bf m}; \alpha \rangle =|{\bf m'}; \alpha \rangle =| \alpha \rangle$ depend only on $x$ and after summing over ${\bf n}$ and ${\bf n}'$ reduce to
\begin{eqnarray} \label{xterms}
\langle \alpha_1| V_{\chi_2}(1) V_{-b \Lambda_1 }(x) |\tilde{\alpha }\rangle \propto \,{}_3 F_2(A_1,A_2,A_3;B_1,B_2;x)\,,
\end{eqnarray}
where we used the results in \cite{Fateev:2005}. This result has also been obtained from the dual gauge theory \cite{Mironov:2009a} and was discussed in \cite{Schiappa:2009} using the matrix model approach \cite{Dijkgraaf:2009} (see also \cite{Cheng:2010}). The hypergeometric function in (\ref{xterms}) is defined in the neighbourhood of $x=0$ and has the series expansion
\begin{equation} \label{3F2}
{}_3F_2(A_1,A_2,A_3;B_1,B_2;x) = \sum_{n=0}^{\infty} \frac{(A_1)_n (A_2)_n(A_3)_n}{(B_1)_n(B_2)_n}\frac{x^n}{n!} \,,
\end{equation}
with
\begin{equation}
A_i = b( {\textstyle \frac{1}{3}}\kappa_2 - {\textstyle {\frac{2}{3} }b + \langle \tilde{\alpha } {-}Q \rho ,h_1}\rangle - \langle \alpha_1{-} Q \rho ,h_i\rangle) \,, \quad B_i = 1+b\langle \tilde{\alpha }{-}Q\rho,h_1{-}h_{i+1}\rangle \,.
\end{equation}
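For reference, here is a short plain-Python sketch of the truncated series (\ref{3F2}); the arguments in the example call are arbitrary illustrative numbers rather than the special $A_i$, $B_i$ above, and the output can be cross-checked against a library implementation such as \texttt{mpmath.hyp3f2}.
\begin{verbatim}
from math import factorial

def pochhammer(x, n):
    # rising factorial (x)_n = x (x+1) ... (x+n-1)
    result = 1.0
    for i in range(n):
        result *= x + i
    return result

def hyp3f2(A, B, x, nmax=20):
    # truncation of 3F2(A1,A2,A3; B1,B2; x) as a power series around x = 0
    return sum(pochhammer(A[0], n) * pochhammer(A[1], n) * pochhammer(A[2], n)
               / (pochhammer(B[0], n) * pochhammer(B[1], n))
               * x**n / factorial(n) for n in range(nmax))

print(hyp3f2([0.3, -0.5, 1.2], [0.9, 1.7], 0.1))
\end{verbatim}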
Similarly, the terms with $|{\bf n}; \alpha \rangle =|{\bf n'}; \alpha \rangle =| \alpha \rangle $ depend only on $\frac{z}{x}$ and reduce to
\begin{equation} \label{2ndcase}
\!\! \!\! \langle \alpha | V_{-b \Lambda_1 }(x) V_{\chi_3}(z) |\alpha_4 \rangle \propto {}_3 F_2(C_1,C_2,C_3;D_1,D_2 ;\frac{z}{x}) \,,
\end{equation}
where
\begin{equation} \label{cd}
C_i = b( {\textstyle \frac{1}{3}}\kappa_3 - {\textstyle {\frac{2}{3} }b + \langle \alpha_4 {-}Q \rho ,h_i}\rangle - \langle \alpha {-} Q \rho ,h_1\rangle) \,, \quad D_i = 1-b\langle \alpha{-}Q\rho,h_1{-}h_{i+1}\rangle\,.
\end{equation}
The above expressions correspond on the gauge theory side to the conformal $\mathrm{SU}(3)$ theory with $N_f=6$; the expressions relevant to the pure $\mathrm{SU}(3)$ theory can be obtained by taking the ``non-conformal limit'', i.e.~by replacing the $(A_i)_n$ and $(C_i)_n$ factors by 1. Alternatively, one can analyse (\ref{W3whit}) directly using the method in \cite{Awata:2009,Awata:2010}.
By comparing the non-conformal version of the above two results to the corresponding results in the previous subsection (\ref{x1}), (\ref{x2}) we see that they agree provided we make the identifications
\begin{equation}
x_1 = x\,, \quad x_2 = -\frac{z}{x} \,,\quad k{+}3 = -b^2\,, \quad \lambda_1 = b \,\alpha_1 -\frac{b^2}{2} -2\,,\quad \lambda_2 = -b(\alpha_1{+}\alpha_2)+\frac{b^2}{2} +1\,.
\end{equation}
Furthermore, $\tilde{\alpha }=\alpha + b\Lambda_1$, which is simply
the degenerate fusion rule.
We have also analysed a class of subleading terms. These can be obtained from CFT considerations as above, but we found it more convenient to use the method in section 6 of \cite{Kozcaz:2010}. In this method the partition function of the $\mathrm{SU}(3)$ gauge theory with a simple surface operator is obtained from an $\mathrm{SU}(3){\times}\mathrm{SU}(3)$ quiver gauge theory (with instanton expansion parameters $y_1$ and $y_2$) by imposing certain restrictions, which are simply the degenerate field and fusion requirements translated into gauge theory language using the AGT relation.
Using this method the coefficient in front of the $y_1y_2$ term in the instanton partition function for the pure $\mathrm{SU}(3)$ theory with a simple surface operator becomes (here $\epsilon \equiv\epsilon_1+\epsilon_2$)
\begin{equation} \label{y1y2}
\frac{(-6 a_1^2 \epsilon_1 {-} 6 a_1 a_2 \epsilon_1 {-} 6 a_2^2 \epsilon_1 {+} 6 \epsilon_1^3 {-} a_1^2 \epsilon_2{ -} 4 a_1 a_2 \epsilon_2 {-} 4 a_2^2 \epsilon_2 {+} 3 a_1 \epsilon_1 \epsilon_2 {+} 10 \epsilon_1^2 \epsilon_2 {+}
5 \epsilon_1 \epsilon_2^2 {+} \epsilon_2^3)}
{\epsilon_1^2 \epsilon_2 ( \epsilon_1{+}a_1 {-} a_2) (\epsilon_1 {+}2 a_1 {+} a_2 ) (\epsilon{-}a_1 {-} 2 a_2) ( \epsilon {-}2 a_1 {-} a_2) ( \epsilon{-}a_1 {+} a_2 ) (\epsilon{+}a_1 {+} 2 a_2 )},
\end{equation}
where $a_{1,2}$ are the $\mathrm{SU}(3)$ Coulomb parameters. The result (\ref{y1y2}) matches (\ref{x1x2}) provided that
\begin{equation}
\! x_1 = y_1\,, \quad x_2 = -y_2 \,, \quad k{+}3 = -\frac{\epsilon_2}{\epsilon_1} \,, \quad \lambda_1 = \frac{a_2{-}a_1}{\epsilon_1} +\frac{1}{2} \frac{\epsilon_2}{\epsilon_1}-1 \,, \quad \lambda_2 = \frac{2a_1{+} a_2}{\epsilon_1} -\frac{3}{2} \frac{\epsilon_2}{\epsilon_1}-1\,.
\end{equation}
The leading $y_1^n$ and $y_2^n$ terms of course also match, as do higher-order $y_1^n y_2$ terms. The non-trivial agreement of these infinite sets of terms supports our idea that instanton partition functions in $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(3)$ gauge theories with a simple surface operator should be computable from the $\mathcal{W}_3^{(2)}$ algebra.
As a byproduct of our analysis we find relations between the $\mathcal{W}_3^{(2)}$ and $\mathcal{W}_3$ algebras. For the non-conformal case the conjecture is that (\ref{whit}) is equal to (\ref{W3whit}); more generally there should also be relations between $\mathcal{W}_3^{(2)}$ conformal blocks and $\mathcal{W}_3$ conformal blocks with an additional degenerate field insertion, e.g.~we expect that (\ref{4pt}) and (\ref{4+1}) should be equal (possibly up to a prefactor).
\setcounter{equation}{0}
\section{Discussion} \label{sdisc}
In this paper we argued that there is a general connection between $\mathcal{W}$-algebras and instanton partition functions in $\mathcal{N}\,{=}\,2$ gauge theories with surface operators (similar ideas were discussed in \cite{Braverman:2010}). This proposal is very natural from the viewpoint in \cite{Gaiotto:2009a} which uses the $6d$ $(2,0)$ theory formulated on $\mathbb{R}^4{\times}C$, where an $\mathcal{N}\,{=}\,2$ $\mathrm{SU}(N)$ gauge theory lives on $\mathbb{R}^4$ and the $2d$ conformal field theory lives on the Riemann surface $C$. As discussed in \cite{Alday:2010}, one way a surface operator can arise is from a $4d$ defect spanning a $2d$ submanifold of $\mathbb{R}^4$ and wrapping $C$. In \cite{Gaiotto:2009a} it was argued that for the $\mathrm{SU}(N)$ theories, the $4d$ defects of the $(2,0)$ theory are classified by Young tableaux or equivalently by partitions of $N$, so the class of surface operators constructed from $4d$ defects should also be classified by partitions. Thus in this construction it should be possible to describe a general surface operator. Our proposal can be viewed as a prescription for how the symmetry algebra of the $2d$ theory is changed when a general $4d$ defect wraps $C$.
It is also possible to describe surface operators using $2d$ defects spanning a submanifold inside $\mathbb{R}^4$ and intersecting $C$ at a point. This construction leads to the interpretation of a simple surface operator, i.e.~a surface operator corresponding to the partition $N=(N{-}1)+1$, in terms of degenerate fields in the $A_{N-1}$ Toda theory as first proposed in \cite{Alday:2009a}. It is less clear (at least to us) if one can describe general surface operators using only $2d$ defects. But at least for a simple surface operator there are two descriptions, in terms of $4d$ or $2d$ defects. There are certain differences between the simple surface operators that arise from these two constructions, but computations in \cite{Awata:2010,Kozcaz:2010} and the results in this paper indicate that the instanton partition function is not sensitive to these differences (at least for some theories). For this reason we have not used a nomenclature which emphasises the differences, but this point should be kept in mind in future applications.
Our analysis is far from complete and there are many unsolved problems. It would be desirable to have additional checks (or perhaps even proofs) of the general proposal. One immediate extension is to develop the technology needed to compute conformal blocks for theories with $\mathcal{W}_3^{(2)}$ symmetry and to compare the results with the proposed dual gauge theory expressions. The Whittaker vectors and the conformal blocks only depend on the symmetry algebra, but just as for the original AGT conjecture \cite{Alday:2009a} it seems plausible that there is an extension to a relation between correlation functions in the $2d$ CFT and gauge theory partition functions of some (modified) version of the type studied in \cite{Pestun:2007}. In the general case the relevant $2d$ CFT is probably a generalised Toda theory (see e.g.~\cite{Feher:1992}), but unfortunately such theories have not been much studied in the literature.
Another subject that we did not discuss, but where surface operators appear to be important, is the connections to quantum-mechanical integrable systems. In addition to papers already mentioned this is discussed in e.g.~\cite{Negut:2008}.
\section*{Acknowledgements}
I would like to thank Can Koz\c{c}az, Sara Pasquetti and Filippo Passerini for collaboration on \cite{Kozcaz:2010b} which formed the basis of the present work. I would also like to thank Nadav Drukker for some useful comments and the string theory group at Queen Mary, University of London for hospitality during the final stages of this work.
We now extend the set of dependent variables $f$, $b$, and $w$ by the
symmetry generators $F$, $B$, and $W$, which satisfy the linearized
versions of the flows of the respective initial super\/-\/fields.
In this setting, we obtain the recursion
\begin{equation}\label{RecBurgers}
\mathcal{R}_{[1]}=\binom{F_x-\dd_\theta{f}\,F+f_x\,W}%
{B_x-\dd_\theta{f}\,B+b_x\,W}\ \Longleftrightarrow\ %
R=\begin{pmatrix}
D_x-\dd_\theta{f}+f_x\,\partial_\theta^{-1} & 0 \\
b_x\,\partial_\theta^{-1} & D_x-\dd_\theta{f}
\end{pmatrix}
\end{equation}
of weight $[s_R]=-1$.
In agreement with~\cite{TMPhGallipoli}, the above recursion is
{weakly non\/-\/local}~\cite{Novikov}.
We recall that a recursion operator $R$ is \emph{weakly non\/-\/local}
if each nonlocality $\partial_\theta^{-1}$ is preceded by a (shadow
\cite{JKKersten} of a nonlocal) symmetry $\varphi_\alpha$ and is followed
by the gradient $\psi_\alpha$ of a conservation law:
$R=\text{local
part}+\sum_\alpha\varphi_\alpha\cdot\partial_\theta^{-1}\circ\psi_\alpha$.
From \cite{TMPhGallipoli} it follows that this property is
automatically satisfied by all recursion operators which are
constructed using one layer of the nonlocal variables assigned to
conservation laws.
Recursion~\eqref{RecBurgers} generates two sequences of higher
symmetries for system~\eqref{BurgSystem}:
\begin{equation}\label{SymBurg}
\binom{f_t}{b_t}\mapsto
\binom{\dd_\theta{b_x}-\dd_\theta{f}\dd_\theta{b}-f_xb}{\dd_\theta{f_x}-(\dd_\theta{f})^2-b^2\dd_\theta{f}+bb_x}
\mapsto\cdots,\
\binom{f_x}{b_x}\mapsto
\binom{f_{xx}-2\dd_\theta{f}f_x}{b_{xx}-2\dd_\theta{f}b_x} \mapsto \cdots.
\end{equation}
Also, recursion~\eqref{RecBurgers} produces two infinite sequences
of supersymmetries for~\eqref{BurgSystem}:
\begin{equation}\label{BurgSSym}
\binom{\dd_\theta{f}}{\dd_\theta{b}} \mapsto
\binom{\dd_\theta{f_x}-(\dd_\theta{f})^2-f_xf}{\dd_\theta{b_x}-\dd_\theta{f}\,\dd_\theta{b}-b_xf}
\mapsto\cdots, \qquad
\binom{f\dd_\theta{b}-b\,\dd_\theta{f}+b_x}{b\dd_\theta{b}-f\,\dd_\theta{f}+f_x-fb^2}
\mapsto\cdots.
\end{equation}
\begin{rem}
System~\eqref{BurgSystem} is not a supersymmetric extension of
the Burgers equation;
it is a supersymmetric representation of
the Burgers equation.
However, symmetries~\eqref{SymBurg} and~\eqref{BurgSSym}
do \emph{not} reduce to the
purely bosonic $(x,t)$-independent symmetries~\cite{Lychagin}
of the Burgers equation (particularly, owing to
the interchanged role of the variables $x$ and~$t$).
We finally recall that the Burgers equation
has infinitely many higher symmetries that depend explicitly on the base
coordinates $x$, $t$ but fall outside the set of axioms~\cite{Kiev2005}
we use.
\end{rem}
Two supersymmetric generalizations ($N=0$ and $N=2$)
of the Burgers equation are constructed
in~\cite{Kiev2005}. The $N=0$ extension relates it with integrable
flows on associative algebras. The $N=2$ Burgers equation contains a
KdV\/-\/type component and admits an $N=2$ modified KdV equation as a
symmetry flow.
\paragraph*{2. A system with nonlocal recursions.}
The second system,
\begin{equation}\label{DoubleLayer}
f_t=\dd_\theta{b}+fb,\qquad b_t=\dd_\theta{f},
\end{equation}
is also homogeneous w.r.t.\ a unique set of weights
$[f]=[b]=\tfrac{1}{2}$, $[t]=-\tfrac{1}{2}$, $[x]=-1$.
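This homogeneity is straightforward to verify mechanically; the small Python bookkeeping sketch below does so, under the assumption (forced by the second equation) that $\dd_\theta$ carries weight $\tfrac{1}{2}$, so that $\partial_t$ and $\partial_x$ raise the weight by $\tfrac{1}{2}$ and $1$, respectively.
\begin{verbatim}
# weights of the atomic factors: [f] = [b] = 1/2, and the derivatives
# d_t, d_x, D_theta raise the weight by 1/2, 1, and 1/2 respectively
WEIGHT = {'f': 0.5, 'b': 0.5, 'd_t': 0.5, 'd_x': 1.0, 'D_theta': 0.5}

def weight(monomial):
    # total weight of a product of atomic factors, e.g. ('D_theta', 'b')
    return sum(WEIGHT[a] for a in monomial)

# f_t = D_theta b + f b : every term must have weight [f] + 1/2 = 1
assert all(weight(m) == weight(('f', 'd_t'))
           for m in [('D_theta', 'b'), ('f', 'b')])

# b_t = D_theta f : weight 1 on both sides
assert weight(('D_theta', 'f')) == weight(('b', 'd_t'))

print('system is homogeneous w.r.t. the chosen weights')
\end{verbatim}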
Similarly to the supersymmetric representation~\eqref{BurgSystem}
for the Burgers equation,
Eq.~\eqref{DoubleLayer} admits
symmetries $(f_s$, $b_s)$ for all weights~$[s]\leq-\tfrac{1}{2}$.
We conjecture that system~\eqref{DoubleLayer}
has only one conservation law that
defines the fermionic variable~$w$ of weight~$0$ by
$
w_t=f$, $\dd_\theta{w}=b$.
Then, many nonlocal conservation laws and hence many new
variables appear. We use the fermionic variable~$v$ whose
weight $[v]=\tfrac{3}{2}$ is minimal: we set
$v_t=\dd_\theta{b}\cdot wfb+f_xwf$ and
$\dd_\theta{v}=-\dd_\theta{b}\cdot fb+\dd_\theta{f}\cdot\dd_\theta{b}\cdot w+b_xwf$.
Now, there are nontrivial solutions to the determining equations
for recursion operators. First, we obtain the recursion of
zero differential order with nonlocal coefficients:
\[
R_{[-1\tfrac{1}{2}]} =
\binom{-\dd_\theta{b}\cdot wf B + wvF + v\cdot B}%
{\dd_\theta{b}\, wf F - vF + vw\cdot B}.
\]
Also, we get a nonlocal operator with nonlocal coefficients,
\[
R_{[-2]} =
\binom{\dd_\theta{b}\,Vw-\dd_\theta{f}\,\dd_\theta{B}\,wf-\dd_\theta{f}\,\dd_\theta{b}\,Wf+\dd_\theta{f}\,\dd_\theta{b}\,Fw
+\dd_\theta{f}\,V+Vwfb}%
{\dd_\theta{B}\,\dd_\theta{b}\,wf+\dd_\theta{b}\,V-\dd_\theta{b}\,Fwfb+\dd_\theta{f}\,\dd_\theta{b}\,Vwf+\dd_\theta{f}\,Vw-Vfb}.
\]
The coefficients of the recursions found for
$[s_R]=-2\tfrac{1}{2}$ and $[s_R]=-3$ are also nonlocal.
\paragraph*{3. A triplet of super\/-\/systems.}
Finally, we consider the three systems
\begin{equation}\label{Quad}
f_t=-\alpha fb,\qquad b_t=b^2+\dd_\theta{f}
\end{equation}
which differ by the values $\alpha=1$, $2$, and $4$ of the
parameter~$\alpha$ and therefore exhibit rather different properties.
The weights for the above equation are multiply defined, and we
choose the tuple $[f]=[b]=\tfrac{1}{2}$, $[t]=-\tfrac{1}{2}$, $[x]=-1$ to be the primary
`reference system.'
\subparagraph*{Case $\alpha=2$.}
First, we fix $\alpha=2$
and consider Eq.~\eqref{Quad}: we get
$
f_t=-2fb$, $b_t=b^2+\dd_\theta{f}$.
The weights for symmetries are $[s]=-\tfrac{1}{2}$, $[s]=-1$, and then
Eq.~\eqref{Quad} admits a continuous chain of symmetry flows
for all (half-)integer weights $[s]\leq-2\tfrac{1}{2}$.
Surprisingly, no nonlocalities are needed to construct the recursion
operators, although there are many conservation laws for this system.
We obtain purely local recursion operators~$\mathcal{R}$ that
proliferate the symmetries:
$\varphi=(F,B)\mapsto\varphi'=\mathcal{R}(\varphi)$ for any symmetry $\varphi$.
The recursion
\[
\mathcal{R}_{[-2]} =
\begin{pmatrix}
\tfrac{11}{2}\dd_\theta{F}\,\dd_\theta{f}\,f + 11\dd_\theta{F}\,fb^2 + \tfrac{3}{2}(\dd_\theta{f})^2 F +
3\dd_\theta{f}\,Fb^2 + \tfrac{1}{2}f_xFf \\
\begin{gathered}[t] {} \\
11\dd_\theta{B}\,fb^2 + 8\dd_\theta{b}\,Fb^2 + 22\dd_\theta{b}\,fBb + 7(\dd_\theta{f})^2 B +{}\\
14\dd_\theta{f}\,Bb^2 + \tfrac{11}{2}\dd_\theta{f}\,\dd_\theta{B}\,f +
\tfrac{5}{2}\dd_\theta{f}\,\dd_\theta{b}\,F +
\tfrac{1}{2}b_xFf + f_x F b + 5 f_xfB
\end{gathered}
\end{pmatrix},
\]
of weight $[s_R]=-2$ is triangular since $\mathcal{R}^f$ does not contain~$B$.
Also, we obtain the recursion of weight~$[s_R]=-2\tfrac{1}{2}$; its components are
\begin{align*}
\mathcal{R}_{[-2\tfrac{1}{2}]}^f &=
- 2\dd_\theta{b}\,Ffb^2 - \dd_\theta{F}\,\dd_\theta{f}\,fb - \dd_\theta{F}\,fb^3 - \tfrac{1}{2}f_xFfb
- 2\dd_\theta{f}\,fBb^2,\\
\smash{\mathcal{R}_{[-2\tfrac{1}{2}]}^b} &=
\dd_\theta{B}\,fb^3 + \dd_\theta{b}\,Fb^3 + \dd_\theta{b}\,fBb^2 + \tfrac{1}{8}\dd_\theta{f_x}Ff +{}\\
&\quad+ \tfrac{1}{2}\dd_\theta{F}b^4 + \tfrac{1}{2}\dd_\theta{F}(\dd_\theta{f})^2
+ \dd_\theta{F}\,\dd_\theta{f}\,b^2 + \tfrac{1}{8}\dd_\theta{F}\,f_xf + (\dd_\theta{f})^2Bb +{}\\
&\quad+ \dd_\theta{f}\,Bb^3 + \dd_\theta{f}\,\dd_\theta{B}\,fb + \dd_\theta{f}\,\dd_\theta{b}\,Fb
+ \dd_\theta{f}\,\dd_\theta{b}\,fB + \tfrac{3}{8}\dd_\theta{f}\,F_xf + {}\\
&\quad+\tfrac{1}{4}\dd_\theta{f}\,f_xF
+ \tfrac{1}{2}b_xFfb + \tfrac{1}{2}F_xfb^2 + \tfrac{1}{4}f_xFb^2
+ \tfrac{1}{2}f_xfBb.
\end{align*}
Further, we get
a triangular nilpotent operator of weight $-3$ such that
$\mathcal{R}_{[-3]}^f=0$ and $\mathcal{R}_{[-3]}^b=
{(\dd_\theta{f})^3fF+6(\dd_\theta{f})^2fb^2F+12\dd_\theta{f}\,fb^4F+8fb^6F}$.
The above recursion is a recurrence relation~\cite{Kiev2005}
which is well\/-\/defined for all symmetries of Eq.~\eqref{Quad}.
Another local
recursion for $[s]=-3$ is huge and therefore omitted.
For $\alpha=2$, system~(\ref{Quad})
admits at least three super\/-\/recursions
${}^t(R^f$, $R^b)$ such that the parities of $R^f$ and
$R^b$ are opposite to the odd parity for $f$ (and hence for $F$) and to
the even parity of $b$ and~$B$. This property
is possible owing to the presence of the \emph{odd} variable~$s_R$.
The triangular zero\/-\/order super\/-\/recursions are
${\bar{\mathcal{R}}}_{[-2]}^f =
{4\dd_\theta{f}\,Ffb+8Ffb^3}$, ${\bar{\mathcal{R}}}_{[-2]}^b=
{-4\dd_\theta{b}\,Ffb+2(\dd_\theta{f})^2F+6\dd_\theta{f}\,Fb^2+4\dd_\theta{f}\,fBb-f_xFf+4Fb^4+8fBb^3}$
and
\[
{\bar{\mathcal{R}}}_{[-2\tfrac{1}{2}]} =
\binom{-\dd_\theta{f}\,f_xF-2f_xFb^2}%
{\dd_\theta{b}\,f_xF-\dd_\theta{f}\,b_xF+\dd_\theta{f}\,f_xB-2b_xFb^2+2f_xBb^2}
\]
for weights $[s_R]=-2$ and $[s_R]=-2\tfrac{1}{2}$, respectively;
the third super\/-\/recursion found
for $[s_R]=-2\tfrac{1}{2}$ is very large.
Quite naturally, system~\eqref{Quad} has infinitely many
supersymmetries if~$\alpha=2$.
\subparagraph*{Case $\alpha=1$.}
For $\alpha=1$ from~\eqref{Quad} we obtain the system
$
f_t=-fb$, $b_t=b^2+\dd_\theta{f}$.
The default set of weights is the same as above:
$[f]=[b]=\tfrac{1}{2}$, $[t]=-\tfrac{1}{2}$, and $[x]=-1$.
The sequence of symmetries is not continuous
and starts later than for the chain in the case $\alpha=2$.
We find that symmetry flows exist for $[s]=[t]=-\tfrac{1}{2}$
(the equation itself), for $[s]=[x]=-1$ (the translation along~$x$),
and for all (half-)\/in\-te\-ger weights $[s]\leq-3\tfrac{1}{2}$,
where a continuous chain starts.
Similarly to the previous case, no nonlocalities are needed to
construct the recursions, which therefore are purely local.
The recursion operator
$\mathcal{R}_{[-2\tfrac{1}{2}]}^f=0$, $\mathcal{R}_{[-2\tfrac{1}{2}]}^b=
{(\dd_\theta{f})^2\,Ff+3\dd_\theta{f}\,Ffb^2+\tfrac{9}{4}Ffb^4}$
of maximal weight $[s_R]=-2\tfrac{1}{2}$ is nilpotent: $\mathcal{R}^2=0$.
For the succeeding weight $[s_R]=-3$,
we obtain a nilpotent local recursion with components
\begin{align*}
\mathcal{R}_{[-3]}^f&=
\tfrac{5}{3}\dd_\theta{F}\,(\dd_\theta{f})^2f+\tfrac{5}{2}\dd_\theta{F}\,\dd_\theta{f}\,fb^2
-\tfrac{5}{3}(\dd_\theta{f})^3F-\tfrac{5}{2}(\dd_\theta{f})^2Fb^2 +{}\\
&+{5}\dd_\theta{f}\,\dd_\theta{b}\,Ffb+\tfrac{20}{3}\dd_\theta{f}\,f_xFf
+\tfrac{15}{2}f_xFfb^2,\\
\mathcal{R}_{[-3]}^b&=
\dd_\theta{f_x}\,Ffb-\tfrac{105}{2}\dd_\theta{F}\,\dd_\theta{b}\,fb^2
-\tfrac{160}{3}\dd_\theta{F}\,\dd_\theta{f}\,\dd_\theta{b}\,f+{11}\dd_\theta{F}\,f_xfb+{}\\
&+\tfrac{5}{3}(\dd_\theta{f})^2\dd_\theta{B}f+\tfrac{5}{3}(\dd_\theta{f})^2\dd_\theta{b}\,F
+\tfrac{5}{2}\dd_\theta{f}\,\dd_\theta{B}\,fb^2+\tfrac{5}{2}\dd_\theta{f}\,\dd_\theta{b}\,Fb^2-{}\\
&-{55}\dd_\theta{f}\,\dd_\theta{b}\,fBb+\tfrac{17}{3}\dd_\theta{f}\,b_xFf
+\dd_\theta{f}\,f_xfB+\tfrac{23}{2}b_xFfb^2+\tfrac{183}{2}f_xfBb^2.
\end{align*}
It generates symmetries of system~(\ref{Quad});
the differential order of $\mathcal{R}_{[-3]}$ is positive.
\subparagraph*{Case $\alpha=4$.}
Finally, let $\alpha=4$; then system~\eqref{Quad}
acquires the form
$
f_t=-4fb$, $b_t=b^2+\dd_\theta{f}.$
Again, the basic set of weights is
$[f]=[b]=\tfrac{1}{2}$, $[t]=-\tfrac{1}{2}$, $[x]=-1$, and system~\eqref{Quad}
admits the symmetries $(f_s$, $b_s)$ such that their weights are
$[s]=-\tfrac{1}{2}$, $-1$ or $[s]\leq-3\tfrac{1}{2}$ w.r.t.\ the basic set.
This situation coincides with the case~$\alpha=1$.
Again, no nonlocalities are needed for constructing the recursion
of minimal weight $[s_R]=-3\tfrac{1}{2}$:
\begin{align*}
\smash{\mathcal{R}_{[-3\tfrac{1}{2}]}^f}&=
-12\dd_\theta{b}\,Ffb^4-\dd_\theta{F}\,(\dd_\theta{f})^2fb
-{4}\dd_\theta{F}\,\dd_\theta{f}\,fb^3 - 3\dd_\theta{F}\,fb^5-{}\\
&-{4}(\dd_\theta{f})^2fBb^2-{4}\dd_\theta{f}\,\dd_\theta{b}\,Ffb^2
-\tfrac{2}{3}\dd_\theta{f}\,f_xFfb-12\dd_\theta{f}\,fBb^4-{2}f_xFfb^3,\\
\smash{\mathcal{R}_{[-3\tfrac{1}{2}]}^b}&=
3\dd_\theta{B}fb^5+3\dd_\theta{b}\,Fb^5+9\dd_\theta{b}\,fBb^4
+\tfrac{1}{9}\dd_\theta{f_x}\,\dd_\theta{f}\,Ff-\tfrac{1}{3}\dd_\theta{f_x}\,Ffb^2+\tfrac{3}{4}\dd_\theta{F}\,b^6+{}\\
&+\dd_\theta{F}\,\dd_\theta{b}\,fb^3
+\tfrac{1}{4}\dd_\theta{F}\,(\dd_\theta{f})^3+\tfrac{5}{4}\dd_\theta{F}\,(\dd_\theta{f})^2b^2
+\tfrac{7}{4}\dd_\theta{F}\,\dd_\theta{f}\,b^4+\dd_\theta{F}\,\dd_\theta{f}\,\dd_\theta{b}\,fb+{}\\
&+\tfrac{5}{18}\dd_\theta{F}\,\dd_\theta{f}\,f_xf
+\tfrac{1}{2}\dd_\theta{F}\,f_xfb^2+(\dd_\theta{f})^3Bb
+{4}(\dd_\theta{f})^2Bb^3+(\dd_\theta{f})^2\dd_\theta{B}\,fb+{}\\
&+(\dd_\theta{f})^2\dd_\theta{b}\,Fb+(\dd_\theta{f})^2\dd_\theta{b}\,fB
+\tfrac{2}{9}(\dd_\theta{f})^2F_xf+\tfrac{1}{6}(\dd_\theta{f})^2f_xF+3\dd_\theta{f}\,Bb^5+{}\\
&+{4}\dd_\theta{f}\,\dd_\theta{B}\,fb^3
+{4}\dd_\theta{f}\,\dd_\theta{b}\,Fb^3 + {10}\dd_\theta{f}\,\dd_\theta{b}\,fBb^2+\tfrac{2}{3}\dd_\theta{f}\,b_xFfb
+\dd_\theta{f}\,F_xfb^2+{}\\
&+\tfrac{2}{3}\dd_\theta{f}\,f_xFb^2
+\tfrac{5}{3}\dd_\theta{f}\,f_xfBb
+{2}b_xFfb^3+F_xfb^4+\tfrac{1}{2}f_xFb^4+f_xfBb^3.
\end{align*}
No nilpotent recursion operators were found for system~(\ref{Quad})
if~$\alpha=4$.
\begin{rem}\label{InfByNilpotent}
We discovered that an essential part of the recursion operators for
supersymmetric PDEs is nilpotent.
At present, it is not clear how the nilpotent recursion operators
contribute to the integrability of supersymmetric systems and what
invariants they describe or indicate.
Further, we emphasize that this property does not always originate from
the rule `$f\cdot f=0$'; rather, it is an intrinsic feature of the
symmetry algebras.
More generally, the nilpotent recursions are quite natural in the
bosonic sector, too. We have
\end{rem}
\begin{exampleNo}[I. S. Krasil'shchik, private communication]
Consider a system of linear ordinary differential equations
$\dot{\boldsymbol{x}}=A(t)\,\boldsymbol{x}$. Then any nilpotent
constant matrix $R$ that commutes with the matrix~$A$ is a recursion.
\end{exampleNo}
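For concreteness, the following numeric sketch illustrates this example on a toy instance of our own choosing, $A=\mathbf{1}+N$ with $N$ nilpotent and $R=N^2$; the script checks that $R$ maps the flow of $\dot{\boldsymbol{x}}=A\boldsymbol{x}$ to another solution and that $R^2=0$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])   # nilpotent: N^3 = 0
A = np.eye(3) + N              # coefficient matrix of xdot = A x
R = N @ N                      # nilpotent (R^2 = 0), commutes with A

x0 = np.array([1., 2., 3.])
t, dt = 0.7, 1e-6

x = lambda s: expm(A * s) @ x0   # the flow of the linear system
y = lambda s: R @ x(s)           # the recursion applied to the flow

ydot = (y(t + dt) - y(t - dt)) / (2 * dt)   # finite-difference derivative
print(np.allclose(ydot, A @ y(t)))   # True: R x(t) is again a solution
print(np.allclose(R @ R, 0))         # True: the recursion is nilpotent
\end{verbatim}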
It would be of interest to construct an equation~$\mathcal{E}$ that admits
nilpotent differential recursion operators $\{R_1,\ldots\mid
R_i^{n_i}=0\}$ which generate an infinite sequence of symmetries
$\varphi$, $R_{i_1}(\varphi)$, $R_{i_2}\circ R_{i_1}(\varphi)$, $\ldots$
for~$\mathcal{E}$. Here we assume that
at least two operators (without loss of generality, $R_1$ and $R_2$)
do not commute and hence the flows never become zero.
\paragraph*{Acknowledgements}
The authors thank I.\,S.\,Krasil'shchik, A.\,S.\,Sorin,
and A.\,M.\,Ver\-bo\-vet\-sky for stimulating discussions.
\section{Introduction}
\label{sec:intro}
The past few years have witnessed significant advances in speech synthesis and voice conversion technologies, as well as recently emerged adversarial attacks, such that even humans may not be capable of distinguishing real users' speech from synthesised speech \cite{wu2015asvspoof,kinnunen2017asvspoof,todisco2019asvspoof, yamagishi2021asvspoof, Yi2022ADD, wu2020defense_2, wu2020defense, peng2021pairing, wu2021adversarial, liu2019adversarial, li2021replay, li2021channel, wu2021voting, wu2021improving, wu2021spotting, wu2015spoofing,wu2014study,kamble2020advances,das2020assessing, chenglong2021global,ma2021continual,yi2021half,wang2021comparative}.
Such technologies can undermine the robustness of broadly implemented biometric identification models, e.g. automatic speaker verification (ASV) models, and can be harnessed by in-the-wild attackers for criminal usage.
For instance, an attacker can generate fake audios to manipulate the voiceprint-based security entrance system to accept the attacker falsely, and get access to normally protected information and valuables.
Additionally, an imposter can call the bank center, fool the biometric identification system to accept him/her as a registered user, and transfer money to the imposter's account.
Considering the severe harm caused by synthesized fake audio, it is critical to devise methods to tackle such threats.
The ASVspoof challenge \cite{wu2015asvspoof,kinnunen2017asvspoof,todisco2019asvspoof, yamagishi2021asvspoof}, a community-led initiative, has drawn attention from both industry and academia to tackling spoofed audio in both physical access and logical access scenarios.
In logical access, attacks mainly come from audio synthesized by advanced speech synthesis and voice conversion models, while in physical access, replayed audio is adopted as the attack.
The challenge has attracted many international teams, and various high-performance anti-spoofing models have been proposed to address the two kinds of attacks mentioned above.
The adversarial attacks for ASV and anti-spoofing models have been well investigated \cite{wu2020defense_2, wu2020defense, peng2021pairing, wu2021adversarial,wu2021improving, wu2021spotting}.
To address more challenging attack situations in realistic applications, the first Audio Deep Synthesis Detection challenge (ADD 2022) \cite{Yi2022ADD} extends the attack scenarios for fake audio detection.
They consider fake audios perturbed by diverse background noise, as well as attacks from the latest speech synthesis and voice conversion models.
Additionally, the organizers propose a partially fake audio detection track, where the attacks are constructed by hiding small fake clips inside real speech.
Partially fake audios are dangerous, and ADD 2022 is the first challenge attempting to tackle this brand new type of attack, which remains an open question and is the focus of this paper.
\begin{figure*}[ht]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{figs/ADD_frawework.png}}
\vspace{0.2cm}
\caption{The proposed framework. $X,Z,H,A$ are the acoustic features, hidden features, bottleneck features and the output for Question-answer layer, respectively. $f$ and $g$ are the SENet feature extractor and self-attention layer, corresponding to (1)-(8) and (9) in Table~\ref{tab:model}, respectively.
QA and AF are the question-answering (fake span discovery) and anti-spoofing layers with loss calculation procedures respectively.}
\label{fig:framework}
\end{figure*}
During the generation of partially fake audio, only small clips of synthetic speech are inserted, so the fake audio contains a large proportion of the genuine user's audio.
Through experimentation, we find it is hard to distinguish the fake and real audios by directly implementing the previous state-of-the-art spoofing countermeasure models, such as Light Convolutional Neural Network (LCNN) \cite{lavrentyeva2017audio} and Squeeze-and-Excitation Network (SENet) \cite{lai2019assert}.
To allow the model to discover the small anomalous clip in real speech, we design a proxy task to make the model answer ``where are the start and end points'' of such anomalous clips.
During training, the anti-spoofing model not only has to predict the fake or real label for each utterance, but also to find the start and end positions of the fake clips within the utterance.
Identifying the time segments of the fake clips is similar to \textit{extraction-based question-answering}, which determines the answer span in a document.
Also, to further improve the capacity of the anti-spoofing model to tackle the ``question-answering'' task, we introduce the self-attention \cite{vaswani2017attention} strategy.
The experimental results illustrate the effective discrimination capacity of our proposed method between real and partially fake audios.
Our main contributions are two-fold:
\begin{itemize}
\item We proposed a brand new framework, inspired by the extraction-based question-answering strategy, to locate the fake regions within the overall input audio and thereby improve the performance of partially fake audio detection.
\newpage
\item We further equipped the fake span discovery strategy with the self-attention mechanism to get a better detection capacity.
\end{itemize}
Also, our submission ranked second in the partially fake audio detection track of ADD 2022.
This paper is organized as follows:
Section 2 introduces the proposed method, namely the self-attention-based question-answering framework for partially fake audio detection.
Section 3 presents the experimental setups, followed by Section 4 reporting on experimental results and analysis.
Section 5 presents the conclusion.
\section{Methodology}
\label{sec:method}
In this section, we will introduce the anti-spoofing method equipped with the proposed question-answering strategy and self-attention mechanism.
We first present the details of the proposed framework,
and then clarify the rationale behind it.
\subsection{Proposed anti-spoofing model}
We adopt the base model SENet \cite{lai2019assert}, which is a variant of ResNet \cite{he2016deep} equipped with squeeze-and-excitation networks \cite{hu2018squeeze}, and we perform some modifications to that model.
The modified model architecture is shown in Table~\ref{tab:model}.
Let $X=[x_{1},x_{2},...x_{T}]$ denote the $T$ frames of input acoustic features.
The hidden features extracted by the SENet feature extractor are denoted as $f(X)=Z = [z_{1}, z_{2}, ...z_{T}]$, where $f$ comprises layers (1)-(8) in Table~\ref{tab:model}.
The bottleneck features are denoted as $g(Z)= H = [h_{1}, h_{2}, ...h_{T}]$, where $g$ is the self-attention layer, i.e.~layer (9) in Table~\ref{tab:model}, and $h_{t} \in R^{n}$.
The self-attention layer is one layer of transformer \cite{vaswani2017attention}.
The question-answering layer (a) is a fully-connected layer whose input dimension is $n$, the dimension of $h_{t}$, and whose output dimension is 2.
The 2 dimensions represent how likely $h_{t}$ is to be the start or the end position of the fake clip.
Given $H$ as the input, the question-answering layer will output $A = [a_{1}, a_{2}, ...a_{T}]$, where $a_{t} \in R^{2}$.
The question-answering loss $L_{qa}$ is denoted as:
\begin{equation}
L_{qa}=-(log \frac{exp(a_{s}^{1})}{\sum_{t=1}^{T} exp(a_{t}^{1})} + log \frac{exp(a_{e}^{2})}{\sum_{t=1}^{T} exp(a_{t}^{2})}),
\label{eq:qa-loass}
\end{equation}
where $s$ and $e$ are the start and end positions of the fake clip, and $a_{t}^{1}$ and $a_{t}^{2}$ are the values of the first and second dimensions of $a_{t}$ at the $t^{th}$ frame.
We do not incorporate $L_{qa}$ when training with real utterances.
For the pooling layer (b), there are three pooling strategies in this paper, average pooling, self-attentive pooling (SAP) \cite{bhattacharya2017deep} and attentive statistics pooling (ASP) \cite{okabe2018attentive}.
Based on the bottleneck features $H$, the pooling layer (b) followed by the prediction layer (c) will output $S=[s_{0}, s_{1}]$, indicating whether the utterance is fake or real.
The anti-spoofing loss $L_{af}$ is denoted as:
\begin{equation}
L_{af}= -log \frac{exp(s_{l})}{\sum_{j=0}^{1} exp(s_{j})},
\label{eq:af-loass}
\end{equation}
where $l \in \{0,1\}$ is the target label.
The final loss is
\begin{equation}
L=L_{qa}+L_{af}.
\label{eq:final-loss}
\end{equation}
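For readers who prefer code, the following PyTorch sketch summarises layers (9) and (a)--(c) of Table~\ref{tab:model} together with the losses (\ref{eq:qa-loass})--(\ref{eq:final-loss}); module and variable names are illustrative, and details such as the number of attention heads may differ from our actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class QAHead(nn.Module):
    def __init__(self, dim=384):
        super().__init__()
        # layer (9): one transformer (self-attention) layer over the frames
        self.self_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, batch_first=True)
        self.qa = nn.Linear(dim, 2)    # layer (a): start/end logits per frame
        self.pred = nn.Linear(dim, 2)  # layer (c): real/fake logits

    def forward(self, z):              # z: (batch, T, dim), i.e. f(X)
        h = self.self_attn(z)          # bottleneck features H = g(Z)
        a = self.qa(h)                 # A in Eq. (1): (batch, T, 2)
        s = self.pred(h.mean(dim=1))   # layer (b): average pooling, then (c)
        return a, s

def total_loss(a, s, start, end, label, is_fake):
    # Eq. (1) per utterance; `is_fake` (0/1 floats) masks out real
    # utterances, for which L_qa is not incorporated
    l_start = F.cross_entropy(a[..., 0], start, reduction='none')
    l_end = F.cross_entropy(a[..., 1], end, reduction='none')
    l_qa = (is_fake * (l_start + l_end)).mean()
    l_af = F.cross_entropy(s, label)   # Eq. (2)
    return l_qa + l_af                 # Eq. (3)

model = QAHead()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

z = torch.randn(4, 501, 384)           # stand-in for SENet features
a, s = model(z)
loss = total_loss(a, s,
                  start=torch.randint(0, 501, (4,)),
                  end=torch.randint(0, 501, (4,)),
                  label=torch.randint(0, 2, (4,)),
                  is_fake=torch.ones(4))
loss.backward()
\end{verbatim}
The SAP or ASP poolings described in Section~\ref{sec:setup} would replace the average-pooling line in this sketch.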
\begin{table}[t!]
\centering
\small
\caption{Proposed anti-spoofing model.}
\vspace{0.3cm}
\begin{tabular}{cc|c|l}
\toprule
layer & Type & Filter / Stride & Output shape \\
\cmidrule(r){1-4}
(1) & Conv & $7\times7 / 1\times2$ & $16\times501\times40$ \\
(2) & BatchNorm & $-$ & $-$ \\
(3) & ReLU & $-$ & $-$ \\
(4) & MaxPool & $3\times3 / 1\times2$ & $16\times501\times20$ \\
\hline
(5) & SEResNet Module$\times3$ & $-$ & $16\times501\times20$ \\
\hline
(6) & SEResNet Module$\times4$ & $-$ & $32\times501\times10$ \\
\hline
(7) & SEResNet Module$\times6$ & $-$ & $64\times501\times5$ \\
\hline
(8) & SEResNet Module$\times3$ & $-$ & $128\times501\times3$ \\
\hline
(9) & Self-attention & $-$ & $501\times384$ \\
\toprule
(a) & Question-answering & $-$ & $501\times2$ \\
\hline
(b) & Pooling & $-$ & $384$ \\
\hline
(c) & Prediction & $-$ & $2$ \\
\bottomrule
\end{tabular}
\label{tab:model}
\end{table}
\subsection{Rationale}
\label{subsec:model}
In the partially fake audio detection track, there is only a small proportion of fake audio frames in the overall piece of input speech.
Previous state-of-the-art anti-spoofing models \cite{lai2019assert,lavrentyeva2017audio} tackle the problem of identifying whether a whole audio utterance is real or fake.
Hence, previous strategies are not designed to identify anomalous regions within one utterance.
Thus the previous models intuitively attain the ability to distinguish between utterances, but there is no guarantee that such models can discover the abnormal regions within a single utterance.
To evaluate the performance of previous state-of-the-art anti-spoofing models, we directly train binary-classification anti-spoofing models for the partially fake audio detection task, following previous papers.
We discover that these well-trained models lack discriminative ability on the adaptation set provided by the organizers of ADD 2022.
A reasonable explanation is that the models may have learned some shortcuts to differentiate the audios with real and fake labels in the training set, but what the models have learned cannot generalize to the adaptation set.
In other words, the models cannot discover the fake regions for fake audio detection.
Thus, to regularize the model to learn to distinguish between the real and partially fake audios, we propose a proxy task to let the model discover the abnormal parts within a piece of partially fake audio.
The proposed anti-spoofing model has to predict not only whether the input utterance is real or fake, but also output the start and end of each anomalous region.
We name this proxy task question-answering, or the fake span discovery proxy task, in which the model has to answer ``where is the fake clip'' in a piece of partially fake audio.
The extraction-based question-answering models in natural language processing (NLP) often take a question and a passage as input, build representations for the passage and the question respectively, match the question and passage embeddings, and output the start and end positions within the passage as the answer.
We adopt the analogy of extraction-based question-answering here.
The passage is the partially fake utterance, and the answer span is the time span of the fake clip.
Through the question-answering proxy task, the model can learn to find the fake clips within an utterance, which helps the model distinguish between audios with and without fake clips.
Moreover, the self-attention module followed by the question-answering layer encourages the model to attend to the fake regions, helping reduce the question-answering loss and resulting in better discrimination capacity between real and partially fake audios.
\section{Experimental setup}
\label{sec:setup}
\subsection{Data preparation}
\subsubsection{Dataset construction}
The training set and dev set provided by the organizers of ADD 2022, which are based on the publicly available Mandarin corpus AISHELL-3 \cite{shi2020aishell}, cannot be directly adopted for the partially fake audio detection track, as each utterance in them is entirely real or entirely fake.
During the training phase, we construct partially fake audio by inserting a clip of audio into a real utterance.
The inserted clips are derived from three sources: 1) fake audios in the training and dev sets provided by ADD 2022; 2) real audios other than the victim audio in the training and dev sets; 3) audios re-synthesised by the traditional vocoders Griffin-Lim \cite{griffin1984signal} and WORLD \cite{morise2016world} from the real audios in the training and dev sets.
It is hard to train text-to-speech (TTS) or voice conversion (VC) models on the limited real data provided by the organizers, so we choose the traditional vocoders, namely Griffin-Lim and WORLD, to increase the diversity of fake audios.
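The following sketch illustrates how such a partially fake training example can be assembled: a clip is spliced into a real waveform at a random position, and the corresponding start and end \emph{frame} indices (with hop size 128, matching the feature extraction below) are kept as question-answering targets. Function names and the stand-in signals are illustrative only.
\begin{verbatim}
import numpy as np

def make_partially_fake(real_wav, clip, hop=128, rng=np.random):
    # insert `clip` into `real_wav` at a random sample position and
    # return the audio plus start/end frame labels for the QA task
    pos = rng.randint(0, len(real_wav))
    fake_wav = np.concatenate([real_wav[:pos], clip, real_wav[pos:]])
    start_frame = pos // hop
    end_frame = (pos + len(clip)) // hop
    return fake_wav, start_frame, end_frame

real = np.random.randn(64000).astype(np.float32)  # stand-in for real speech
clip = np.random.randn(8000).astype(np.float32)   # stand-in for a fake clip
audio, s, e = make_partially_fake(real, clip)
print(audio.shape, s, e)
\end{verbatim}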
As for the validation set, we adopt the adaptation set, consisting of partially fake audios synthesised by ADD 2022, for model selection.
We report the equal error rate (EER) for the testing set released by the organizer, as EER is the main evaluation metric for the partially fake audio detection track.
\begin{table}[htb]
\caption{The EERs with (w/) or without (w/o) self-attention.}
\vspace{0.3cm}
\centering
\begin{tabular}{cccc}
\toprule
FFT window size & w/o attention & w/ attention \\
\cmidrule(r){1-3}
384 & 23.6\% & 14.3\% \\
768 & 22.0\% & 17.9\% \\
\bottomrule
\end{tabular}
\label{tab:eer for self-attn}
\end{table}
\subsubsection{Input representations}
Mel-spectrograms based on the short-time Fourier transform (STFT), with the fast Fourier transform (FFT) window size varied from 384 to 768, a hop size of 128, and 80 output bins, are used as input features for most of our experiments; they are denoted by MSTFT in the following sections.
Besides spectral features, some extra experiments are conducted on cepstral and NN-based features to increase diversity and thereby achieve better performance in the fusion stage.
The FFT window size, hop size, and number of output bins are fixed to 384, 128, and 80, respectively, for Mel-frequency cepstral coefficients (MFCC), linear frequency cepstral coefficients (LFCC), and SincNet \cite{ravanelli2018speaker}, as we find that an FFT window size of 384 performs well, as shown in Table~\ref{tab:eer for mstft}.
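A minimal \texttt{torchaudio} sketch of this MSTFT front-end is given below; we assume a 16\,kHz sampling rate here, and details such as windowing and normalisation may differ from our exact pipeline. With these settings, a 4-second waveform yields the 501 frames of Table~\ref{tab:model}.
\begin{verbatim}
import torch
import torchaudio

mstft = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000,   # assumed sampling rate
    n_fft=384,           # FFT window size, varied from 384 to 768
    hop_length=128,      # hop size
    n_mels=80)           # number of output bins

wav = torch.randn(1, 64000)   # stand-in 4-second waveform
feats = mstft(wav)            # shape: (1, 80, 501)
print(feats.shape)
\end{verbatim}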
\subsubsection{Data augmentation}
We perform on-the-fly data augmentation by adding noise from MUSAN dataset \cite{snyder2015musan}, performing room impulse response (RIR) simulation \cite{ko2017study} and applying codec algorithms (a-law and $\mu$-law) \cite{recommendation1988pulse}.
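Two of these augmentations are sketched below: additive noise at a chosen SNR and a $\mu$-law companding round trip standing in for codec distortion. In the real pipeline the noise would be drawn from MUSAN and the RIR convolution applied on top; here random tensors serve as stand-ins.
\begin{verbatim}
import torch
import torchaudio.functional as AF

def add_noise(wav, noise, snr_db):
    # scale `noise` to the requested signal-to-noise ratio and add it
    p_wav = wav.pow(2).mean()
    p_noise = noise.pow(2).mean()
    scale = torch.sqrt(p_wav / (p_noise * 10 ** (snr_db / 10)))
    return wav + scale * noise

def mu_law_roundtrip(wav, q=256):
    # encode/decode with mu-law to emulate codec distortion
    return AF.mu_law_decoding(AF.mu_law_encoding(wav, q), q)

wav = torch.randn(1, 64000)
wav = wav / wav.abs().max()            # peak-normalise into [-1, 1]
noisy = add_noise(wav, torch.randn_like(wav), snr_db=15.0)
noisy = noisy / noisy.abs().max()      # keep within [-1, 1] for mu-law
aug = mu_law_roundtrip(noisy)
print(aug.shape)
\end{verbatim}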
\subsection{Implementation details}
The backbone model is shown in Table~\ref{tab:model}.
Three kinds of pooling, namely average pooling (Avg), attentive statistics pooling (ASP), and self-attentive pooling (SAP), are adopted in the experiments.
All the models are optimized by Adam with the learning rate of 0.001 and weight decay as $1e^{-4}$.
\begin{table*}[htb]
\caption{The EERs using MSTFT features. w/ and w/o denote with and without. w/ and w/o re-synthesis correspond to using, or not using, the audios re-synthesised by Griffin-Lim and WORLD.}
\vspace{0.5cm}
\centering
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{feature} & \multirow{2}{*}{FFT window size} & \multirow{2}{*}{pooling method} & \multicolumn{2}{c}{w/o augmentation} & \multicolumn{2}{c}{w/ augmentation} \\
\cmidrule(r){4-5}\cmidrule(r){6-7}
& & & w/o re-synthesis & w/ re-synthesis & w/o re-synthesis & w/ re-synthesis \\
\cmidrule(r){1-3}\cmidrule(r){4-5}\cmidrule(r){6-7}
\multirow{4}{*}{MSTFT} & 384 & Avg & 14.3\% & 19.9\% & 11.9\% & 14.2\% \\
& 512 & Avg & 13.2\% & 20.5\% & 13.0\% & 14.8\% \\
& 640 & Avg & 18.5\% & 19.9\% & 18.9\% & 13.3\% \\
& 768 & Avg & 17.9\% & 16.8\% & 14.8\% & 12.6\% \\
\cmidrule(r){1-3}\cmidrule(r){4-5}\cmidrule(r){6-7}
\multirow{4}{*}{MSTFT} & 384 & SAP & 16.9\% & 17.5\% & 15.6\% & 12.6\% \\
& 512 & SAP & 17.0\% & 18.0\% & 13.9\% & 12.5\% \\
& 640 & SAP & 12.1\% & 15.3\% & 15.3\% & 11.1\% \\
& 768 & SAP & 15.2\% & 17.8\% & 11.7\% & 14.8\% \\
\cmidrule(r){1-3}\cmidrule(r){4-5}\cmidrule(r){6-7}
\multirow{4}{*}{MSTFT} & 384 & ASP & 17.3\% & 15.9\% & 14.9\% & 11.9\% \\
& 512 & ASP & 14.9\% & 15.8\% & 12.9\% & 11.1\% \\
& 640 & ASP & 17.5\% & 15.9\% & 15.8\% & 11.2\% \\
& 768 & ASP & 14.8\% & 17.9\% & 14.5\% & 22.1\% \\
\bottomrule
\end{tabular}
\label{tab:eer for mstft}
\end{table*}
\section{Experimental results and analysis}
\label{sec:expt}
\begin{table}[htb]
\caption{The EERs for three different features.}
\vspace{0.3cm}
\centering
\begin{tabular}{cccc}
\toprule
feature & MFCC & LFCC & SincNet \\
EER & 12.5\% & 11.1\% & 16.1\% \\
\bottomrule
\end{tabular}
\label{tab:eer for input}
\end{table}
First of all, we show that the question-answering (QA) strategy drastically decreases the EERs.
We conducted experiments with and without the QA strategy.
The models trained without the QA strategy attain EERs of around 40\%, which indicates that such models cannot distinguish the partially fake audios from the genuine audios.
Given their poor performance on the adaptation set, we did not submit the results of the models without the QA strategy to the leaderboard, in order to conserve our limited submission quota.
Next, we verify the effectiveness of the self-attention layer in Table~\ref{tab:eer for self-attn}.
As the feature dimensions before and after the self-attention layer are the same, the model without the self-attention layer is constructed by directly removing layer (9) in Table \ref{tab:model}.
From here on, the performances on the testing set are reported directly.
Due to space limitations we show the EERs under two settings of the FFT window size; the other settings follow the same trend.
Table \ref{tab:eer for self-attn} shows significant improvements for both window sizes.
The EERs decrease by 9.3\% and 4.1\% absolute after adding self-attention for the FFT window sizes of 384 and 768, respectively, illustrating the clear benefit of the self-attention layer.
Therefore, the model with self-attention is adopted in the following experiments, unless specified otherwise.
In the main experiments, shown in Table~\ref{tab:eer for mstft}, the input representations are MSTFTs with a hop size of 128, 80 output bins, and FFT window sizes ranging from 384 to 768.
Table \ref{tab:eer for mstft} covers the experimental settings under four window sizes and three pooling strategies, with and without data augmentation, and with and without the fake audios re-synthesised by Griffin-Lim and WORLD.
We have the following observations.
First, the EERs improve with data augmentation in most of the setups.
Secondly, enlarging the training set with the re-synthesised data usually improves the EERs when data augmentation is applied.
Lastly, SAP and ASP pooling significantly improve the EERs when both data re-synthesis and augmentation are applied.
We can also observe that the best EER for a single model is 11.1\%, as shown in Table~\ref{tab:eer for mstft}.
In order to increase the diversity of models, and thereby achieve better performance in the model fusion stage, we further take MFCC, LFCC, and SincNet as input features to train the models.
We cannot exhaust all settings with our limited computing resources, so we refer to Table~\ref{tab:eer for mstft} to select the settings for these experiments.
We fix the FFT window size to 384, apply only ASP pooling, and adopt both data augmentation and the re-synthesised data.
We observe from Table~\ref{tab:eer for input} that the LFCC feature attains an EER of 11.1\%, matching the best single-model performance in our experimental settings.
In future work, we plan to explore the potential of different front-end features to obtain better performance.
For the fusion method, we tried average fusion, weighted-average fusion, min fusion, and max fusion.
The best submission, fused by averaging the scores of the top 5 models, achieves a 7.9\% EER and ranks second in the partially fake audio detection track.
\section{conclusion}
\label{sec:conclusion}
Inspired by extraction-based question answering, this paper proposes a self-attention-based fake span discovery strategy.
The proposed strategy tasks the anti-spoofing model with predicting the start and end positions of the fake clip within the partially fake audio, directs the model's attention to discovering the fake spans rather than other, less generalizable patterns, and finally equips the model with the capacity to discriminate between real and partially fake audios.
Our final submitted model achieved a 7.9\% EER and ranked 2nd in the partially fake audio detection track of ADD 2022.
The strategy is both model-agnostic and feature-agnostic.
Our future work will explore the potential of the proposed strategy by adopting other backbone anti-spoofing models and front-end features.
\section{acknowledgement}
\label{sec:acknowledge}
This research is partially funded by the Centre for Perceptual and Interactive Intelligence, an InnoCentre of The Chinese University of Hong Kong.
This work was done while Haibin Wu was a visiting student at Human-computer Communications Laboratory, The Chinese University of Hong Kong.
\section{Introduction} \label{sec:intro}
The paradigm of accretion in young stellar objects (YSO) has shifted from a
model of constant mean accretion rate to one favouring short events of intense
accretion \citep{vorbas06,vorbas15,zhu09}.
This shift largely addresses the `protostellar luminosity problem' \citep{keny90, ken95, dun14}. A variety of models, including turbulent or competitive accretion and accretion regulated by the core, disk, or feedback, have been invoked to understand the deviation from the idealized isothermal-sphere case (see \citet{keny90}, \citet{mcke10}, \citet{myer10}, \citet{vorbas08}, \citet{dun12}, \citet{dun14} and references therein). Most of these models, however, share a variable accretion component, albeit differing in the various mass regimes. The accumulated observational evidence appears to favour variable accretion over constant mean accretion scenarios \citep{dun14}.
Photometric variability of YSOs can be related to their natal environment, their accretion physics, or a combination of both (\citet{contreras17}, \citet{kesseli16}, \citet{meyerMNRAS2017} and references therein). Some of the variability can be caused by cold and hot spots formed on the surface of the YSO by infalling material from the disc. Dust clumps in the circumstellar medium surrounding the YSO can cause variable extinction of the star-light as it passes along the observer's line of sight (e.g. \citet{herbst99}, \citet{eiro02} among others).
FUors (FU Orionis objects) and EXors (EX Lupi objects) are examples of high-amplitude photometric variability resulting from variable accretion, with events lasting from a few years to a few months, respectively. These objects are known to be low-mass YSOs, although similar counterparts in the higher mass range have been found \citep{kumar2016,garat17}.
\citet{kumar2016} used highly variable light curves (LCs) of massive young stellar object (MYSO) candidates from the Vista Variables in the Via Lactea (VVV) survey \citep{vvv2010}, arguing that they were signposts of ongoing episodic accretion. Photometric and spectroscopic variability in a 20 \mbox{$\rm M_\odot\,$} MYSO was used by \citet{garat17} to conclude that disk-mediated accretion bursts are a common mechanism across stellar masses. ALMA observations were used by \citet{hunter17} as evidence that sudden accretion is responsible for the growth of a massive protostar. These findings suggest that episodic accretion may be a common mechanism in star formation, independent of mass. Computational models predict luminous flares in MYSOs that are morphologically similar to those of FUors and EXors \citep{meyerMNRAS2017}.
The findings in \citet{kumar2016} raise the question of the overall nature of
variability in massive YSOs. In this paper, we attempt to examine the
variability phenomena in known extended green objects (EGOs) \citep{cyg08} and
IR sources, deeply embedded in clumps identified by the APEX Telescope Large
Area Survey of the Galaxy (ATLASGAL) \citep{schull09}.
They represent unbiased, large samples of point-like massive young stellar candidates, therefore allowing us to use point source photometry to examine variability. We surmise that the RMS and UCHII regions also represent an important MYSO sample; however, examining their variability requires larger-aperture photometry of extended objects, which we postpone to a different study.
Employing point source photometry requires that the targets are point-like in MIPS, have associated high-mass star-forming signposts, and are also point-like in the $K_s$ band. The selection of such targets is described in Section \ref{sec:surveys}.
In Sect. \ref{sec:results} we describe the results obtained and discuss their implications
in Sect. \ref{sec:discussion}.
\section{Target sample, data, and methods}\label{sec:surveys}
Identification of point-like MYSO targets is based on the Spitzer
GLIMPSE and MIPSGAL surveys \citep{car09}, the ATLASGAL survey \citep{schull09}, and
the VVV survey \citep{vvv2010}. These different surveys are highly complementary, covering
much of the same area but at different wavelengths (from $\sim 1.2 - 870$ \mbox{$\rm \mu m\,$}). We searched for: a) driving
sources of EGOs \citep{cyg08, chen2013, chen2013pt2}
and b) luminous MIPS $24 \ \mbox{$\rm \mu m\,$}$ point sources embedded in ATLASGAL
clumps.
The two samples are expected to roughly represent two early evolutionary phases of massive stars; the EGOs, with an active phase of mass ejection, and non-EGOs which are likely yet to begin outflow activity.
\subsection{MYSO sample}\label{sec:iding}
\subsubsection{EGO sample}
EGOs are objects with extended emission in the Spitzer $4.5 \mbox{$\rm \mu m\,$}$ band
(IRAC 2). This band is of particular significance since it contains both
$H_2$, and $CO$ lines, which can be excited by shocks when outflows and jets
interact with the interstellar medium (ISM). This is particularly the case
when the extended emission in the $4.5 \ \mbox{$\rm \mu m\,$}$ band is in excess with respect to
emission in the other IRAC bands.
They were first catalogued by \citet{cyg08},
and later the catalogue was extended by \citet{chen2013,chen2013pt2}.
EGOs are thought to represent the $H_{2}$ flows driven by MYSOs \citep{cyg08}
or MYSO outflow cavities \citep{taka12}.
A total of 270 unique EGO targets have been catalogued so far. By the original classification \citep{cyg08} these targets have a MIPS 24 \mbox{$\rm \mu m\,$} detection, usually representing the driving source of the outflow.
In order to find the near-infrared counterparts of these driving sources we searched for 2 $\mu$m sources in the VVV catalogue within radii of 1.0\hbox{$^{\prime\prime}$} and 0.5\hbox{$^{\prime\prime}$} of the known EGO positions, finding 187 and 153 driving sources, respectively. We allowed sources classified as both point-like and extended to be selected, although 80\% of the detected sources were point-like. Young stellar objects with disk and outflow activity are often surrounded by circumstellar nebulae in the near-infrared, leading to a classification as extended; these objects were kept in the sample list. Additionally,
three colour composites, shown in Fig. \ref{fig:erup1}, were used to
visually examine whether the identified point sources are good representations
of an outflow driving source. This examination led us to retain the 153 sources
which clearly represent $2 \ \mbox{$\rm \mu m\,$}$ counterparts of the $24 \ \mbox{$\rm \mu m\,$}$ source, hence the
putative driving source of the EGO target.
\subsubsection{Non-EGO sample}
\citet{kumar2016} identified a sample of highly variable VVV objects and
found MYSO counterparts in ATLASGAL clumps \citep{contreras13}. Here an inverse
approach is used.
Using ATLASGAL, \citet{contreras13} and \citet{urquhartcsc14} built the Compact
Source Catalogue (CSC) which identified $\sim 10000$ dense clumps.
The mass, density, and distance to these clumps are provided by \citet{urqu17}
and they are believed to represent active sites of high-mass star formation.
Assuming that ATLASGAL clumps host MYSOs, we searched
for MIPSGAL point-like
sources that matched with ATLASGAL CSC sources within a radius of 5\hbox{$^{\prime\prime}$}.
This ensured that we matched red point-like sources in $24 \ \mu m$ band with
the peak emission in the 870 $\mu m$ observations of ATLASGAL. 873 point sources
were found with this search.
When there were multiple matches we chose the object with the closest centroid distance.
The MIPS FWHM is 6\hbox{$^{\prime\prime}$}; therefore, a further search of the 873 targets with a matching radius of 5\hbox{$^{\prime\prime}$} was performed against the VVV catalogue, allowing us to find 574 $K_s$-band targets. These 574 targets can display more than one $K_s$-band source within the 5\hbox{$^{\prime\prime}$} radius.
In the next step, the point sources closest to the MIPS peaks were searched for by constraining the search radius to 1.0\hbox{$^{\prime\prime}$}, for the 574 targets only. This retrieved a list of 2171 sources from the VVV source catalogue. Out of these, any source with fewer than ten non-saturated epochs (over the full five-year period) was removed. This left us with 367 single detections and 147 multiple detections in the centroid search with $r \leq 1 \hbox{$^{\prime\prime}$}$.
The multiple-detection targets were examined visually, considering the source magnitude, colour, and centroid distance, on the basis of which 66 of the 147 sources were rejected, retaining 81 sources.
These 448 (367 + 81) sources are, therefore, the $K_s$-band point sources representing the MYSO candidates at the peaks of ATLASGAL clumps, with spectral energy distributions (SEDs) that can be assembled from at least $2 \ \mbox{$\rm \mu m\,$}$ up to $870 \ \mbox{$\rm \mu m\,$}$.
The final MYSO sample we produced to study the variability is, therefore, composed of
153 EGO and 448 non-EGO sources, resulting in 601 targets. We note that
66 of the 153 EGO targets also lie within the ATLASGAL clumps, the non-EGO
sources being exclusively those that coincide with the peak of ATLASGAL clumps.
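As a sketch of the positional matching used above, the snippet below performs a nearest-neighbour cross-match with astropy and keeps counterparts within 1\hbox{$^{\prime\prime}$}; the coordinate arrays are illustrative placeholders for the target positions and the VVV catalogue, not the actual input lists.
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

# Hypothetical target and VVV catalogue positions (degrees)
targets = SkyCoord(ra=np.array([195.10156, 238.70640]) * u.deg,
                   dec=np.array([-63.54177, -53.72800]) * u.deg)
vvv = SkyCoord(ra=np.array([195.10161, 238.70700]) * u.deg,
               dec=np.array([-63.54175, -53.72810]) * u.deg)

# Nearest VVV neighbour of each target, kept if within 1 arcsec
idx, d2d, _ = targets.match_to_catalog_sky(vvv)
matched = d2d < 1.0 * u.arcsec
print(matched.sum(), "of", len(targets), "targets have a counterpart")
\end{verbatim}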
\subsection{VVV survey data}
The VISTA Variables in the Via Lactea (VVV) survey has obtained
photometric observations in the near-infrared (NIR) passbands (0.9-2.5 \mbox{$\rm \mu m\,$}),
covering multiple epochs, spread over five years (from 2010 to 2014) and covering a 520 $deg^2$ area
of the inner Galactic plane (see Fig. \ref{fig:galaxy}) \citep{vvv2010}.
The survey data are made publicly available through the Cambridge Astronomical Survey Unit (CASU) as photometry obtained on the final combined `tile' images. A tile image incorporates multiple `pawprints', which are single exposures on sky. The pawprint data (available to the VVV team and also made public at the VISTA Science Archive in Edinburgh) are the basic product of the observations and often hold better photometric and seeing information, as they are better calibrated and tend to have sharper image profiles than the tile data. In this work we exploit this full potential by using the pawprint photometry.
\begin{figure*}
\includegraphics[width=\textwidth]{{./survey-area-tile-nbrs-copy}.jpg}
\caption{VVV survey area.}
\label{fig:galaxy}
\end{figure*}
\subsection{Processing of the pawprint photometry}
The pawprint photometry and photometric classification used are standard
pipeline products from the Cambridge Astronomical Survey Unit (CASU), as
detailed in \citet{lewis2010}. Matching and combining detections between multiple
pawprints were made following the approach detailed by \citet{smith2017}.
Sources are classified according to their morphology and flagged as
1, 0, -1, -2, -3, -7, -9, respectively as; a galaxy, noise, stellar,
probably stellar, probable galaxy, bad pixel within 2\hbox{$^{\prime\prime}$} aperture, and
saturated.
The pawprint observing pattern of dithers and overlaps implies that each source might have between two and six image frames for the same observing epoch; sources can also be detected only once along certain edges of the survey. Since these observations are close in time, we chose to bin them together and compute the median magnitude of all observations in intervals of half a day. This binning prevents the detection of variations with timescales smaller than half a day, but reduces the level of scatter on short timescales. The gain in photometric sensitivity is thus a function of the number of observations binned, scaling as $\sqrt{n}$, where $n$ is the number of binned observations. The typical error in the photometry is $K_{\rm err} \leq 0.05$ mag, which allows for the detection of low-level variability. Following the reasoning explained in \citet{smith2017}, we employ aperMag2 ($r \sim 0.71\hbox{$^{\prime\prime}$}$) as the $K_s$ magnitude for all analyses.
For each source we assembled a database that contains: a unique identification, median co-ordinates in the
ICRS, median magnitude (over half a day) in the K-band, the median absolute
deviation (MAD), the standard deviation, the inter-quartile range (IQR),
the number of pawprints in which the source was observed, the number of total
observed epochs, the modal class, the number of epochs classified with each
flag, the K-band magnitude, the quality classifier of the photometry, and the
modified julian date (MJD) of the observation.
The median of the co-ordinates and magnitude, and the modal class were computed for all
pawprint observations. The MAD and IQR were computed for each source
as they are robust statistical indicators to
measure the amplitude and dispersion of the variability
\citep{hamp74, upton96,soko17}. These parameters are less sensitive to
outliers than the standard deviation. A high value of the MAD or IQR
can be a good indicator of the inherent variability of the source.
The IQR is the difference between the third and first quartiles (Q3 and Q1) of a distribution, in this case the distribution of
magnitudes:
\begin{equation}
IQR = Q3-Q1.
\end{equation}
The MAD is the
median of the absolute differences between each data point and the median, as
shown by the following equation:
\begin{equation}
MAD = median(\left | K_i-median(K) \right |)
\end{equation}
in which $K_i$ is a single observation and $K$ represents the full set of
observations.
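A minimal sketch of these two statistics for one source's set of $K_s$ magnitudes (the magnitude values are mock epochs, chosen only for illustration):
\begin{verbatim}
import numpy as np

def variability_stats(k_mags):
    """Robust spread estimators of a 1-D array of magnitudes."""
    q1, q3 = np.percentile(k_mags, [25, 75])
    iqr = q3 - q1
    mad = np.median(np.abs(k_mags - np.median(k_mags)))
    return iqr, mad

k = np.array([15.21, 15.30, 15.05, 14.60, 15.25, 15.18])  # mock epochs
iqr, mad = variability_stats(k)
\end{verbatim}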
\subsection{Light curves and their reliability}\label{subsec:lcs}
The light curves (LCs) of the 448 non-EGO and 153 EGO targets were produced by querying the database assembled above, in the following way.
First, we queried a set of co-ordinates and a search radius on the database. Secondly, we built a list of all observations that matched the query. Thirdly, we excluded all saturated observations (modal class = -9). Fourthly, we produced a LC for each target using the difference from the median ($K_{\rm median}-K_i$).
In the top panel of Fig. \ref{fig:erup1} we show an example of a LC. To ascertain that the
photometry of the target at a given time epoch is not affected by poor
observing conditions, flat field errors, improper photometry, or poor seeing,
we performed a few tests.
\begin{figure}
\includegraphics[width=\columnwidth]{{./MG303.9304-00.6879_combined}.pdf}
\caption{LC of an eruptive event. Top panel shows the LC of the source, the
error bars represent MAD($\Delta S_{i_{mjd}}$), the bottom plot shows the
RGB image of the source using the Spitzer IRAC 3.6 \mbox{$\rm \mu m\,$}, IRAC 4.0 \mbox{$\rm \mu m\,$}, and the 24\mbox{$\rm \mu m\,$} MIPS band
as blue, green and red, respectively. The VVV source is indicated by the blue
circle and the green cross represents the MIPS co-ordinates. The contours of
the RGB are in the interval of [Peak-$5\sigma$, Peak] from the ATLASGAL
observation at $850$ \mbox{$\rm \mu m\,$}.}
\label{fig:erup1}
\end{figure}
\subsubsection{Identifying the variable source}
Stellar sources within an annulus with inner and outer radii of r=1\hbox{$^{\prime\prime}$} and r=60\hbox{$^{\prime\prime}$} from the target were selected. Typically 100-200 sources were found by this selection. For each such source $S_i$, the magnitude deviation ($\Delta S_{i_{mjd}}$) from its median value ($\widetilde{S_{i_{mjd}}}$) over all epochs was computed. The median value $\widetilde{\Delta S_{i_{mjd}}}$ over all sources in the annulus is a representation of the photometric deviation (if any) of the individual epoch over the time-line.
For each source, at each epoch, the offset $\widetilde{\Delta S_{i_{mjd}}}$ was added to $S_{i_{mjd}}$ to produce the corrected light curve.
The MAD of the deviations of all selected sources, MAD($\Delta S_{i_{mjd}}$), is used as an approximation of the $1\sigma$ photometric error of a 1\hbox{$^\prime$} \ field around the target for a given epoch, and is shown as the error bars in Fig. \ref{fig:erup1}.
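The following numpy sketch summarizes this per-epoch correction; the array layout (one row per comparison star, one column per epoch, NaN for missing epochs) is an assumption made for illustration.
\begin{verbatim}
import numpy as np

def epoch_offsets(mags):
    """mags: (n_stars, n_epochs) magnitudes of annulus stars."""
    # Deviation of each star from its own median over all epochs
    # (median minus observed, so the offset is *added* to correct)
    dev = np.nanmedian(mags, axis=1, keepdims=True) - mags
    # Median deviation per epoch = systematic offset of that epoch
    offset = np.nanmedian(dev, axis=0)
    # MAD of the deviations per epoch ~ 1-sigma error of the field
    err = np.nanmedian(np.abs(dev - offset), axis=0)
    return offset, err
\end{verbatim}
The target light curve is then corrected by adding \texttt{offset} to the target magnitudes, and \texttt{err} provides the per-epoch error bars.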
\begin{figure}
\includegraphics[width=\columnwidth]{{./leigh_2dplot_std_all_vs_1mags}.pdf}
\caption{Systematic errors as a function of the magnitude of each target.
The red points represents the case where we only consider the stellar sources with $\pm 1$ mag around our targets.
The blue points represent the case in which we consider all stellar sources in the vicinity of our targets.}
\label{fig:2d_mags}
\end{figure}
\subsubsection{Influence of magnitude on variability}
Next we assessed the influence of using all sources inside the annulus, versus only those with magnitudes comparable to that of the target, for computing the $1\sigma$ error. For this purpose we filtered the sources to within $\pm 1$ mag of the target's magnitude, which decreased the number of sources by a factor of approximately ten. Figure
\ref{fig:2d_mags} illustrates the results of this test. It can be seen
that the difference in $1\sigma$ error by using the two comparison samples
is $K_s \sim 0.0018 -0.0031$ mag, which is 1-2 orders of magnitude below
the typical $1\sigma$ errors in the target fields.
\subsubsection{Control field test}
The targets of study are found in the midst of star-forming regions, often deeply embedded in dark clouds, leading to a reduced and non-uniform source distribution. Also, YSOs in general are known to be variable objects, so many sources in a given field may be variable. To address the influence of these effects, we used a control field region randomly selected to be 5\hbox{$^\prime$} away from the target field.
For each control field, the steps explained in the two subsections above were executed. We find that the control field variability is very similar to the MAD($\Delta S_{i_{mjd}}$) computed above.
\subsection{Periodograms, false alarm probability, and their aliases}
Once the LCs were assembled, we computed the Lomb-Scargle periodogram, identified the maximum-power frequency component, and used it to produce a phase-folded LC.
\citet{scargle82} defined the false alarm probability (FAP) as a measurement
of the probability of a signal without any periodic component to have a peak
amplitude. The predictive power of the FAP decreases in the presence of
correlated noise, non-Gaussian errors, and highly non-sinusoidal variability.
The $90\%$, $95\%$, and $99\%$ FAP levels have been computed for periodograms
of each target.
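A minimal sketch of this step with astropy's Lomb-Scargle implementation, run on a mock light curve (the period, noise level, and sampling are illustrative, not taken from any target):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

t = np.sort(np.random.uniform(55200.0, 57000.0, 60))   # mock MJDs
y = (0.3 * np.sin(2 * np.pi * t / 546.0)
     + np.random.normal(0.0, 0.05, t.size))            # mock magnitudes

ls = LombScargle(t, y)
freq, power = ls.autopower(maximum_frequency=1.0)      # cycles per day
levels = ls.false_alarm_level([0.10, 0.05, 0.01])      # 90/95/99% FAP
best_period = 1.0 / freq[np.argmax(power)]
phase = (t / best_period) % 1.0                        # for phase-folding
\end{verbatim}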
A given periodicity can, by a compound effect of binning, observational
window, and noise, produce harmonics of itself, which appear in the
periodograms as additional peaks, or aliases \citep{vanderplas17}. In an
effort to verify if the peaks determined were in fact real signals or their
aliases, an additional verification step was added. The highest peak of the
periodograms and the following 10 highest peaks were identified.
Aliases were searched by examining: a) multiples in the frequency range; b) multiples
in the period range; and c) solving the following equation:
\begin{equation}
f_i = f_t + n \, f_w
\end{equation}
where $f_i$ is the frequency of the alias, $f_t$ is the true frequency, $n$ is
an integer, and $f_w$ is a frequency window, using the windows of 1 year
($0.0027 \ day^{-1}$), one day ($1 \ day^{-1}$), and a sidereal day
($1.0027 \ day^{-1}$), as these are the most common aliases for Earth-based
telescopes \citep{vanderplas17}.
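A small sketch of this alias test, flagging candidate peaks consistent with the relation above for the three windows; the tolerance and the range of $n$ are illustrative choices that should, in practice, reflect the frequency resolution of the periodogram.
\begin{verbatim}
import numpy as np

def is_alias(f_peak, f_true, tol=1e-3):
    """True if f_peak matches f_true + n*f_w for a common window f_w."""
    windows = [0.0027, 1.0, 1.0027]    # 1/yr, 1/day, 1/sidereal day
    for f_w in windows:
        for n in range(-3, 4):
            if n != 0 and abs(f_peak - (f_true + n * f_w)) < tol:
                return True
    return False
\end{verbatim}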
\subsection{SED analysis}\label{sec:sedanalysis}
The target samples are generally considered to represent MYSOs based on signposts
of high mass star formation and survey shallowness. To better understand the
nature of the sources studied here in detail for variability, we have analysed their
$1.2 \ \mbox{$\rm \mu m\,$}$ - $870 \ \mbox{$\rm \mu m\,$}$ spectral energy distributions (SEDs).
\begin{table}
\centering
\caption{Filters and apertures used for building the SEDs.}
\label{tab:sed_apertures}
\begin{tabular}{lcc}
\hline
\hline
Filter & Wavelength & Aperture \\
& ($\mbox{$\rm \mu m\,$}$) & (\hbox{$^{\prime\prime}$})\\
\hline
J & 1.235 & 3 \\
H & 1.662 & 3 \\
$K_{s}$ & 2.159 & 3 \\
IRAC1 & 3.6 & 4 \\
IRAC2 & 4.5 & 4 \\
IRAC3 & 5.8 & 4 \\
IRAC4 & 8.0 & 4 \\
MIPS24 & 24 & 6 \\
PACS70 & 70 & 5.6 \\
PACS160 & 160 & 10.7\\
SPIRE250 & 250 & 17 \\
SPIRE350 & 350 & 24 \\
SPIRE500 & 500 & 35 \\
AGAL870 & 870 & 19.2\\
\hline
\end{tabular}
\end{table}
The Python version of the SED fitting tool \citep{rob10} was used to
model SEDs of the target sources. The photometric bands, filter, and apertures
used to construct the SEDs can be found on Table \ref{tab:sed_apertures}.
The photometric data used to construct the SEDs was obtained from querying the public
online archives of 2MASS, SPITZER, ATLASGAL, and Herschel \citep{masssurv,car09,schull09,herschel10}.
Our SED fitting follows the method detailed in \citet{grave09}.
A uniform photometric error
of $10\%$ was assumed.
Longer wavelength data were usually set as upper limits, because their large beams include emission from multiple sources, sometimes small clusters, even for targets that are well resolved at $24 \ \mbox{$\rm \mu m\,$}$ and below. Data at wavelengths shorter than $24 \ \mbox{$\rm \mu m\,$}$ were set as data points.
However, for the EGO sample, the $4.5 \ \mbox{$\rm \mu m\,$}$
IRAC band data was set as upper limit by default, as their main characteristic is
to have excess emission in that band.
We used an extinction range of Av = 0-50 mag for all targets.
Distances are available \citep{urqu17} for 105 targets (non-EGO and EGO); they were used with an uncertainty of $\pm 1$ kpc while fitting.
For the remaining 102 targets we allowed the full plausible range of d = 1-13 kpc.
For each target all the models which have a $\chi^2 - \chi_{best}^2 <3$ were
used, and the parameters of the source were computed by performing a weighted
mean, weighted by the inverse $\chi^2$ as described in \citet{grave09}.
The observational data used to construct the SEDs are listed in Table
\ref{tab:all_targets_sed_input_summary_first_filters}.
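A minimal sketch of this inverse-$\chi^2$ weighted estimate for a single parameter (e.g. stellar mass), assuming simple arrays of model values and their $\chi^2$:
\begin{verbatim}
import numpy as np

def weighted_parameter(values, chi2):
    """Inverse-chi^2 weighted mean over models with chi2-chi2_best < 3."""
    values, chi2 = np.asarray(values), np.asarray(chi2)
    good = (chi2 - chi2.min()) < 3.0
    w = 1.0 / np.maximum(chi2[good], 1e-12)   # guard against chi2 = 0
    return np.sum(w * values[good]) / np.sum(w)
\end{verbatim}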
\section{Results}\label{sec:results}
The LCs of the 601 targets (448 non-EGO + 153 EGO) were visually examined and compared with the source IQR, considering the deliberations made in Sect. \ref{subsec:lcs}.
We consistently find that an $IQR>0.05$ is associated with more than 20\% of the data points in the light curve lying above the $1\sigma$ error of the field, as shown by the error bars for each source.
\begin{figure*}
\includegraphics[width=\textwidth]{{./grip_all}.png}
\caption{Some of the clearer LCs, periodic (left column) and aperiodic (right column). Each figure shows the LC of the source, error bars represent
MAD($\Delta S_{i_{mjd}}$). The vertical axis represents the variability from the median normalized by $max(\left | K_i-median(K) \right | )$.}
\label{fig:fullpageoflcs}
\end{figure*}
\begin{table*}
\centering
\caption{Source co-ordinates, photometric data and variability}
\label{tab:sources_and_lcs}
\begin{tabular}{lcccccccc}
\hline
\hline
Source & RA & DEC & $K_s$ & MAD & IQR & $\Delta K_s$ & Class & Period\\
& (deg) & (deg) & (mag) & (mag) & (mag) & (mag) & & (day)\\
\hline
MG303.9304-00.6879 & 195.10156 & -63.54177 & 15.21 & 0.15 & 0.33 & 1.28 & Erup & NA \\
MG328.0494-00.0487 & 238.7064 & -53.7280 & 12.28 & 0.149 & 0.278 & 1.83 & Fad & NA \\
MG352.2452-00.0636 & 261.5178 & -35.5005 & 15.95 & 0.079 & 0.166 & 0.53 & STV & 29.4 \\
MG354.4384+00.4185 & 262.5086 & -33.4088 & 14.66 & 0.091 & 0.523 & 0.89 & Dip & NA \\
G309.91+0.32 & 207.7246 & -61.7394 & 13.65 & 0.204 & 0.383 & 0.81 & LPV-yso & 545.9 \\
G335.59-0.29 & 247.7437 & -48.7308 & 13.16 & 0.097 & 0.348 & 0.61 & low-Erup & NA \\
G351.78-0.54 & 261.6775 & -36.1536 & 14.46 & 0.06 & 0.12 & 0.38 & STV & 18.3 \\
G343.50-0.47 & 255.3267 & -42.8267 & 15.38 & 0.10 & 0.18 & 0.86 & LPV-yso & 1156.3 \\
\hline
\end{tabular}
\begin{tablenotes}
\item For full table check the online data.
\end{tablenotes}
\end{table*}
These selection criteria resulted in 51 (of the 448) non-EGO and 139 (of the 153) EGO targets being classified as variable sources. They are listed in Table \ref{tab:sources_and_lcs}, along with the LC classification. In Fig. \ref{fig:fullpageoflcs} we display some of the clearest LCs, of both periodic and aperiodic nature. For each source (see Fig. \ref{fig:lpv}), the LC, periodogram, phase-folded LC, and a three-colour composite image of the target are made available.
\subsection{Light curve classification}\label{subs:classification}
Light curves can be classified based on their behaviour, and such a classification often represents a close connection with certain physical processes. A classification scheme similar to the one used in \citet{contreras17} was followed here, and the LCs were divided into: a) long period variables (LPV-yso); b) short timescale variables (STV); c) dippers and faders; and d) eruptive.
In defining periodic variables, \citet{contreras17} only included periods of the highest power, while we include all significant periods.
Four LCs for which we have only short time coverage were considered unclassified.
Long period variables (LPV-yso) are defined in \citet{contreras17} as sources
with periodic photometric variability and periods larger than $P>100$ days.
LPV-ysos have periods larger than the stellar rotation or
inner disc orbits of young stellar objects, which are typically $P<15$
days. Figure \ref{fig:lpv} shows examples of two LPV-ysos, the sources G309.91+0.32 and G343.50-0.47, which have periods of $\sim 545$ and $\sim 1156$ days, respectively.
The RGB image of source G309.91+0.32 reveals distinct extended green
emission, a signpost of the presence of an outflow, its periodogram shows a
prominent signal well above the $99\%$ FAP level. The source
has a median brightness of Ks = 13.65 mag in the VVV and the amplitude
between the brightest and dimmest point of its LC is $\sim 0.81$ mag.
The other prototypical LPV-yso selected, G343.50-0.47, is part of a complex of three MIPS-bright sources. It is a source with $K_s$=15.38 mag, the amplitude of its variability is close to $\sim 0.86$ mag, and the periodogram of the source shows a distinct peak well above the $99\%$ FAP level. There are no aliases in the periodograms of either of these sources.
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{{./G309.91+0.32_combined}.pdf} &
\includegraphics[width=0.5\textwidth]{{./G343.50-0.47_combined}.pdf} \\
\end{tabular}
\caption{Prototypical LPV-yso sources: Top panel for each figure shows the LC of the source, error bars represent
MAD($\Delta S_{i_{mjd}}$), the left middle panel shows the
corresponding periodogram in logarithmic scale (also plotted are the $99\%$, $95\%$, and $90\%$ false probability levels,
respectively: the green dot-dashed line, the cyan full line, and the red dashed line), the bottom left panel shows the phase-folded
light curve of the source using the best period fitted (also shows the corresponding value in days), the bottom right
plot shows the
RGB image of the source using the Spitzer IRAC 3.6 \mbox{$\rm \mu m\,$}, IRAC 4.0 \mbox{$\rm \mu m\,$}, and the 24\mbox{$\rm \mu m\,$} MIPS band
as blue, green and red, respectively. The VVV source is indicated by the blue circle and the green cross represents the MIPS co-ordinates.
The contours of the RGB are in the interval of [Peak-$5\sigma$, Peak] from the ATLASGAL observation at $850$ \mbox{$\rm \mu m\,$}.}
\label{fig:lpv}
\end{figure*}
Short timescale variables are objects with short timescales of periodic variability ($P<100$ days), or without an apparent periodicity. Periods larger than the stellar rotation or inner disc orbits, $15<P<100$ days, can be explained by phenomena such as obscuration by a circumbinary disc or by variable accretion \citep{contreras17, bouv03}.
Sources MG352.2452-00.0636 and G351.78-0.54, shown in Fig. \ref{fig:stv}, are typical examples of STVs, with periods of approximately 29 and 18 days, respectively. Both sources match well ($r<2\hbox{$^{\prime\prime}$}$) with the $870 \ \mbox{$\rm \mu m\,$}$ emission peak; additionally, MG352.2452-00.0636 coincides with an IRDC filament, and G351.78-0.54 is close ($r<2\hbox{$^{\prime\prime}$}$) to the VLA1a source studied by \citet{zapa2008} as part of a compact cluster of MYSOs. G351.78-0.54 also coincides with a highly variable maser studied by \citet{goed2014}.
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{{./MG352.2452-00.0636_combined}.pdf} &
\includegraphics[width=0.5\textwidth]{{./G351.78-0.54_combined}.pdf}\\
\end{tabular}
\caption{Prototypical STV sources: Top panel for each figure shows the LC of the source, error bars represent MAD($\Delta S_{i_{mjd}}$), the left middle panel shows the
corresponding periodogram in logarithmic scale (also plotted are the $99\%$, $95\%$, and $90\%$ false probability levels,
respectively: the green dot-dashed line, the cyan full line, and the red dashed line), the bottom left panel shows the phase-folded
light curve of the source using the best period fitted (also shows the corresponding value in days), the bottom right
plot shows the RGB image of the source using the Spitzer IRAC 3.6 \mbox{$\rm \mu m\,$}, IRAC 4.0 \mbox{$\rm \mu m\,$}, and the 24\mbox{$\rm \mu m\,$} MIPS band
as blue, green and red, respectively. The VVV source is indicated by the blue circle and the green cross represents the MIPS co-ordinates.
The contours of the RGB are in the interval of [Peak-$5\sigma$, Peak] from the ATLASGAL observation at $850$ \mbox{$\rm \mu m\,$}.}
\label{fig:stv}
\end{figure*}
Faders and dippers are two classes of LCs with aperiodic photometric variability. Dippers are characterized by long-lasting (months to years) dimming events followed by a return to normal brightness; while the same terminology is found in the works of the YSOVAR team \citep{moralesysovar}, they use it to classify phenomena on shorter timescales (hours to days). Faders show light curves that slowly decline over time, or a period of continuous brightness followed by a sudden decrease sustained over a year. Dippers are often associated with increased extinction from surrounding material. Faders can be caused either by a return to a quiescent accreting phase or by a long-lasting increase in extinction. It should be noted that dippers and faders share common LC morphologies and can easily be mistaken for one another: a snapshot of a dipper event that has not yet returned to normal brightness can be mistaken for a fader.
Source MG328.0494-00.0487 (Fig. \ref{fig:fader}) is the prototypical example
of a fader event. It matches the peak emission of the ATLASGAL observations, and is
an extended object in the $8 \ \mbox{$\rm \mu m\,$}$ \textit{Spitzer} band, it has a close-by
companion. There is no clear peak in its periodogram and the LC shows some
periodic variability until around MJD 56500, at which point there is a drop in
brightness of close to $\Delta K \sim 1.4$ mag.
The morphology of Dipper events is typified by source MG354.4384+00.4185 which
is plotted in Fig. \ref{fig:dipper}. There is no clear peak in its periodogram, and its
light curve could be considered as non-variable, except for an abrupt drop in
brightness of $\Delta K \sim 0.8$ until the target recovers more than half its
brightness about 750 days later.
\begin{figure}
\includegraphics[width=\columnwidth]{{./MG328.0494-00.0487_combined}.pdf}
\caption{Typical fader event. Colours and symbols are the same as in Fig. \ref{fig:lpv}.}
\label{fig:fader}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{{./MG354.4384+00.4185_combined}.pdf}
\caption{LC of a dipper event. Colours and symbols are the same as in Fig. \ref{fig:lpv}.}
\label{fig:dipper}
\end{figure}
Eruptive LCs are also aperiodic, but they are characterized by outbursts and increases in brightness, typically over periods of months or years but, in some cases, lasting only a few weeks. Objects with increases in their luminosity, likely from ongoing accretion events, produce such LCs.
FUors and EXors are the classic examples showing eruptive morphologies.
Similarly to \citet{med2018}, we employ a subdivision of the eruptive class, `low-amplitude eruptives', for sources with $\Delta K<1.0$ mag. This distinction is made to emphasize that such variations are much less extreme than those of FUors and EXors at optical wavelengths, and are more similar in amplitude to common short-term variability. Nevertheless, for certain disk geometries and high extinction it is possible for a FUor- or EXor-like eruption to appear as low-amplitude variability in the NIR. A low-amplitude eruptive LC can therefore correspond either to a genuinely low-amplitude variable source or to a high-amplitude source with a geometry and extinction combination such that it appears as low-amplitude in the $K_s$ band. Overall, there are 26 low-amplitude eruptives and 15 normal eruptives in our samples. While this identification is presented in Table \ref{tab:sources_and_lcs}, they were not treated as separate classes in the analysis.
One source that features an ongoing eruptive event is MG303.9304-00.6879, plotted in Fig. \ref{fig:erup1}, showing multiple stages of increased brightness over the entire time-line, with two large-amplitude brightness changes over years.
Figure \ref{fig:erup2} shows an example of a low-amplitude eruptive LC. This source, G335.59-0.29, displays C-shaped green emission, characteristic of jet emission. The main feature of this LC is its sustained increase in brightness over time, with a total amplitude of $\Delta K \sim 0.68$ mag.
Overall, the variable sources can be split into
the periodic category, composed of LPV-yso and STVs, or the aperiodic category,
including faders, dippers, and eruptive sources.
Their detailed distributions represented by the different MYSO samples can be found in
Table \ref{tab:egosandagalsummary_lc}.
Analysis of periodogram aliases (see Sect. \ref{subsec:lcs}) indicates that 1 and 15 members of the
non-EGO + EGO samples, respectively, could be classified differently (see also Sect. \ref{sec:discussion}).
\begin{figure}
\includegraphics[width=\columnwidth]{{./G335.59-0.29_combined}.pdf}
\caption{Example LC of a low-amplitude eruptive event. Colours and symbols are the same as in Fig. \ref{fig:lpv}.}
\label{fig:erup2}
\end{figure}
\begin{table}[]
\centering
\caption{Observed parameters of LC classes, for both EGO and non-EGO samples.}
\label{tab:egosandagalsummary_lc}
\begin{tabular}{lccc}
\hline
\hline
LC classification & EGO & non-EGO & Total \\
\hline
Periodic & 90 ($\sim65\%$) & 21 ($\sim41\%$) & 111\\
Aperiodic & 49 ($\sim35\%$) & 30 ($\sim59\%$) & 79\\
LPV-yso & 53 ($\sim38\%$) & 9 ($\sim18\%$) & 62\\
STV & 37 ($\sim27\%$) & 12 ($\sim23\%$) & 49\\
Dipper & 15 ($\sim11\%$) & 5 ($\sim10\%$) & 20\\
Fader & 13 ($\sim9\%$) & 5 ($\sim10\%$) & 18\\
Eruptive & 21 ($\sim15\%$) & 20 ($\sim39\%$) & 41\\
\hline
\end{tabular}
\end{table}
The sources MG300.3241-00.1985, MG322.4833+00.6447, MG342.3189+00.5876 have also been
studied as highly variable objects \citep{contreras17,kumar2016}.
Of these, MG300.3241-00.1985 was studied spectroscopically by \citet{contrerasspec}
and classified as an eruptive MNor, an object with a mixture of characteristics
from FUors and EXors.
We note that other $\Delta K >1$ mag sources listed here were not found in
\citet{contreras17} because they were not highly variable in the 2010-2012 period.
\subsection{Variable source SEDs}
The goal of SED fitting was to test if the variable targets indeed represent MYSOs.
The SEDs of the variable sources were fitted by YSO models \citep{robmodels} (see Sect. \ref{sec:sedanalysis})
allowing us to constrain the
properties of these objects. The results of this fitting procedure can be
found in Table \ref{tab:all_targets_sed_fit_summary}.
This table contains the full sample of variables with 190 entries. However,
as mentioned in Sect. \ref{sec:sedanalysis} only 105 targets have known distances,
where the SEDs can be reasonably constrained. We note that in
those cases where distances are not available, fitting with the full range of
1-13 kpc has resulted in some model fits that outputs
sub-stellar masses. This result is likely to be a consequence of
unknown distance rather than the true nature of the source because
the indicators of
high-mass star formation used in the
original selection are more reliable.
In Fig. \ref{fig:best_seds}, the data and model fits can be visualized
for the example targets with different LC classes mentioned in the previous
section. The masses of these example targets range from $1.84$ to
$10.30$ \mbox{$\rm M_\odot\,$}, with luminosities between $57$ and $6918 \mbox{$\rm L_\odot\,$}$, representing evolutionary ages
between $10^4$ to $10^6$ yrs.
Table \ref{tab:sed_mass_bins} summarizes the SED results by listing various
properties of the sources grouped in mass ranges roughly separating the
low, intermediate, and high-mass sources. It can be seen that about $\sim 35\%$
of the targets are modelled in the 4-8 \mbox{$\rm M_\odot\,$} range and only $6\%$ representing
$\geq 8 \ \mbox{$\rm M_\odot\,$}$ objects. A large fraction ($\sim 60\%$) are fitted with YSO models representing sources
with $M<4\mbox{$\rm M_\odot\,$}$.
\begin{table*}
\centering
\caption{SED results by mass bin.}
\label{tab:sed_mass_bins}
\begin{tabular}{lccccccccc}
\hline
\hline
M & Sources & L & L & $\dot{M}_{env}$ & $\dot{M}_{env}$ & $\dot{M}_{disk}$ & $\dot{M}_{disk}$ & $A_{{\rm V}_{circum}}$ & $A_{{\rm V}_{circum}}$ \\
(\mbox{$\rm M_\odot\,$}) & ($\%$) & (\mbox{$\rm L_\odot\,$}) & (\mbox{$\rm L_\odot\,$}) & ($\mbox{$\rm M_\odot\,$} \ yr^{-1}$) & ($\mbox{$\rm M_\odot\,$} \ yr^{-1}$) & ($\mbox{$\rm M_\odot\,$} \ yr^{-1}$) & ($\mbox{$\rm M_\odot\,$} \ yr^{-1}$) & & \\
Range & Ratio & Range & Median & Range & Median & Range & Median & Range & Median \\
\hline
$M<4$ & $\sim 59$ & [4.0E-1,9.0E2] & 5.0E1 & [0,4E-4] & 1.3E-5 & [-8E-3,4E-5] & 2E-7 & [6E-1,6E5] & 74 \\
$4\leq M < 6$ & $\sim 21$ & [8.8E1,1.2E3] & 2.9E2 & [0,4E-4] & 7.8E-5 & [-4E-2,9E-6] & 6E-7 & [2E0,2E4] & 56 \\
$6\leq M < 8$ & $\sim 14$ & [2.9E2,5.1E3] & 9.3E2 & [0,6E-4] & 2.0E-4 & [-2E-1,3E-5] & 2E-6 & [5E0,1E5] & 66 \\
$8\leq M$ & $\sim 6$ & [1.3E3,3.7E4] & 3.0E3 & [1E-4,4E-3] & 2.8E-4 & [-1E0,4E-6] & -2E-3 & [4E1,4E5] & 228 \\
\hline
\end{tabular}
\end{table*}
The 4-8 \mbox{$\rm M_\odot\,$} sources display $\dot{M}_{env} \sim 10^{-4} \ \mbox{$\rm M_\odot\,$} yr^{-1}$, $\dot{M}_{disk} \sim 10^{-6} \ \mbox{$\rm M_\odot\,$} yr^{-1}$, and a few hundred solar luminosities. The numbers of EGO and non-EGO sources fitted as low-, intermediate-, and high-mass stars are 87, 45, 10 and 25, 21, 1, respectively.
It is worth noting that all but one of the sources fitted by models of $\geq 8 \mbox{$\rm M_\odot\,$}$ are EGO objects.
These SEDs are well-fitted by MYSO models similar to those represented in \citet{grave09}.
Four of the 11 objects ($\geq 8 \ \mbox{$\rm M_\odot\,$}$) are included in the 6.7 GHz class II methanol maser surveys and show emission; these four also show class I methanol maser emission.
\begin{figure*}
\begin{tabular}{ccc}
\subfloat{\includegraphics[width = 0.3\textwidth]{{./MG303.9304-00.6879}.pdf}} &
\subfloat{\includegraphics[width = 0.3\textwidth]{{./MG328.0494-00.0487}.pdf}} &
\subfloat{\includegraphics[width = 0.3\textwidth]{{./MG352.2452-00.0636}.pdf}} \\
\subfloat{\includegraphics[width = 0.3\textwidth]{{./MG354.4384+00.4185}.pdf}} &
\subfloat{\includegraphics[width = 0.3\textwidth]{{./G309.91+0.32}.pdf}} &
\subfloat{\includegraphics[width = 0.3\textwidth]{{./G335.59-0.29}.pdf}} \\
\subfloat{\includegraphics[width = 0.3\textwidth]{{./G351.78-0.54}.pdf}} &
\subfloat{\includegraphics[width = 0.3\textwidth]{{./G343.50-0.47}.pdf}} &
\end{tabular}
\caption{Grid of SEDs for our prototypical sources. The dark line corresponds to the best fit model. The grey lines correspond to other $\chi^2 - \chi_{best}^2 <3$ models.}
\label{fig:best_seds}
\end{figure*}
\section{Discussion}\label{sec:discussion}
The results show that 139 of the 153 ($\sim$91\%) EGO targets present variability, in contrast to 51 of the 448 ($\sim$11\%) non-EGO targets, implying that variability is strongly correlated with the outflow activity in MYSOs. Table \ref{tab:egosandagalsummary_lc} summarizes the variability statistics. More than half ($\sim$65\%) of the variable EGOs are classified as periodic, whereas more than half ($\sim$59\%) of the variable non-EGO sample are classified as aperiodic.
Table \ref{tab:egosandagalsummary_sed} allows us to discern the differences between the EGO and non-EGO samples, and between sources classified as periodic or otherwise. It can be seen from Fig. \ref{hist:deltaK} and Table \ref{tab:egosandagalsummary_sed} that the amplitude range of variation of the non-EGOs is roughly twice that of the EGOs. Of the modelled parameters, the circumstellar extinction ($A_{{\rm V}_{circum}}$) of the non-EGO targets clearly stands out, at twice the median value of the EGOs. It also appears that the non-EGO variable sources may simply be more luminous objects located at slightly larger distances. Together, the $\Delta K_s$, $A_V$, and L comparisons indicate that the non-EGO variable sources are relatively more embedded objects compared to the EGOs.
The results of the search for aliases among the ten frequencies with greater
power, found aliases for the highest peak of the periodogram in five non-EGO targets
($\sim 9\%$), and 22 ($\sim 15\%$) EGO targets. Of these, only 1 ($\sim 2\%$) of the non-EGO
targets would change their classification from LPV-yso to STV, while, for the
EGO sample 15 ($\sim 10\%$) of the targets could change from LPV-YSO to STV or
vice-versa. Therefore, these aliases would not change any periodic to aperiodic
source, as
period length is not the only condition defining a periodic source (LC
morphology is also one of the main factors).
\begin{table}
\centering
\caption{Summary of the median fit parameters, for both EGO and non-EGO samples divided by periodicity.}
\label{tab:egosandagalsummary_sed}
\begin{tabular}{lcccc}
\hline
\hline
Parameter & EGO & non-EGO & Periodic & Aperiodic \\
\hline
$\Delta K_s$ (mag) & 0.52 & 1.02 & 0.58 & 0.69 \\
Period (days) & 312 & 416 & 126 & - \\
M (\mbox{$\rm M_\odot\,$}) & 3.2 & 3.8 & 3.2 & 3.6 \\
$\dot{M}$ ($\mbox{$\rm M_\odot\,$} yr^-1$) & 4E-5 & 6E-6 & 4E-5 & 2E-5\\
$\dot{M}_{disk}$ ($\mbox{$\rm M_\odot\,$} yr^-1$) & 3E-7 & 7E-7 & 3E-7 & 6E-7 \\
L (\mbox{$\rm L_\odot\,$}) & 125 & 212 & 125 & 190 \\
Age (Myr) & 5.0 & 5.6 & 5.0 & 5.0 \\
T (K) & 4841 & 7795 & 4857 & 5990 \\
$A_{{\rm V}_{circum}}$ & 61 & 125 & 71 & 54 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=0.45\columnwidth]{{./testinghist2}.png} &
\includegraphics[width=0.45\columnwidth]{{./testinghist}.png} \\
\end{tabular}
\caption{Histogram of $\Delta K$ divided by sample and periodicity.
EGO and non-EGO sources are shown, respectively, on the right and left plots.}
\label{hist:deltaK}
\end{figure}
It can be seen from Fig. \ref{fig:MvsMdot} that the envelope accretion rate (see also Table \ref{tab:egosandagalsummary_sed}) of the non-EGO sources is an order of magnitude smaller than that of the EGO sources. The same effect is seen for the aperiodic sources (bottom-right panel). We note that the non-EGOs are dominated by aperiodic sources, which likely drives the differences observed between the top and bottom panels of Fig. \ref{fig:MvsMdot}. Aperiodic LCs, represented by the eruptive, dipper, and fader classes, are thought to trace objects with a low level of quiescent accretion that undergo short periods of intense accretion. The lower level of accretion found in these objects can therefore be explained by this behaviour.
\begin{figure}
\includegraphics[width=\columnwidth]{{./mdotvsm_assembled_2}.png}
\caption{Mass versus envelope-accretion rate for the fitted SEDs of EGO and non-EGO sources, in logarithmic scale. EGO, non-EGO, periodic, and aperiodic, are plotted at the top left, top right, bottom left, and
bottom right, respectively.}
\label{fig:MvsMdot}
\end{figure}
We compared the SED-fitted model properties with the amplitude of variation and did not find any correlations that would explain the variability as a function of mass, accretion rate, luminosity, or temperature.
Figure \ref{fig:hrdiagram} shows an HR diagram, plotting the luminosity versus temperature of all variable sources as derived from the SED fitting. The zero-age main sequence (ZAMS) curve \citep{siess00} is shown by a solid curve. The seven dashed curves display the pre-main-sequence tracks (also from \citet{siess00}, for solar metallicity) for objects of 1-7 \mbox{$\rm M_\odot\,$} in steps of 1 \mbox{$\rm M_\odot\,$}.
The EGOs are concentrated closer to the putative birth-line position of massive stars and are also largely lower mass objects ($<4$ \mbox{$\rm M_\odot\,$}). The precursor of a high-mass star is considered to be a lower mass object that continues to accrete material for more than half of its life until it contracts onto the main sequence. In view of that conjecture, it is not surprising that the majority of the EGO driving sources are modelled as young low- to intermediate-mass stars.
Furthermore, the HR diagram, with the associated PMS tracks, validates the fitted masses.
\begin{figure}
\includegraphics[width=\columnwidth]{{./hrdiagram3}.png}
\caption{HR diagram for our sources. Symbol size corresponds to $M<4$, $4\leq M <6$, $6\leq M <8$, $M \geq 8 $ \mbox{$\rm M_\odot\,$}, from smaller to larger, respectively. The dashed lines(from bottom to top) are the PMS tracks for 1, 2 , 3, 4, 5, 6, and 7 \mbox{$\rm M_\odot\,$}, the filled line is the ZAMS. Blue and red symbols are, respectively, EGOs and non-EGOs. }
\label{fig:hrdiagram}
\end{figure}
Most parameters resulting from SED fitting are model dependent, with known correlations within the model grid between the age, mass, and accretion rates \citep{robmodels}. However, the observed data are scaled to match the luminosity and temperature of the selected models from the grid; these are therefore the more reliably fitted parameters. Unlike for the lower mass stars, the luminosity and temperature differences prominently distinguish the sparsely populated massive stellar models in the grid. These two parameters are used for comparison in this analysis, to ensure that the inferences made are reasonably free of biases in the grid models.
There is an apparent concentration of non-EGO objects, probably with slightly higher masses, on the ZAMS. It was previously noted that the non-EGO targets are significantly more embedded objects displaying larger $\Delta K_s$ compared to the EGOs. The objects located closer to the ZAMS may therefore be candidate sources with which to test the hypothesis of bloated and pulsating young massive stars. \citet{hosok10} argue that young high-mass stars are bloated objects. Such objects are also thought to be pulsationally unstable, or at least to go through a period of significant pulsations as they settle down on the ZAMS \citep{ina13}.
\citet{contreras17} indicate that eruptive variable behaviour is more common, or recurs more frequently, at earlier stages of the stellar PMS.
The analysis in this work shows that $\sim 70\%$ of the eruptive variables are concentrated on the birthline and the ZAMS, in roughly equal proportions. Protostellar envelopes are a prominent feature of objects located on the birthline, suggesting that most eruptive MYSOs are indeed the result of envelope accretion. High-mass protostellar objects ingesting a burst of accreted matter enter a `bloated phase' before re-adjustment and contraction \citep{hosok10}. This could be the case for those eruptive sources located on the ZAMS.
In Table \ref{tab:masers} we list all the variable sources (32 targets) with a known 6.7 GHz class II methanol maser detection, along with the simultaneous detection of class I methanol masers; sources with only class I methanol maser detections are not listed. The detection of a class II methanol maser is considered a strong signpost of high-mass star formation, especially of massive outflow activity \citep{devill15}. Of the 32 sources, only two are non-EGOs, reinforcing the association of class II methanol masers with MYSO outflow activity. \citet{goed2014} studied the variability of methanol masers, and two of the infrared variable sources presented here, G351.78-0.54 and G298.26+0.74, were analysed in that study: G351.78-0.54 is considered a highly variable maser, while G298.26+0.74 does not present maser variability above the instrumental noise.
Our selection criterion for the non-EGO sources, 24 \mbox{$\rm \mu m\,$} MIPS sources matching ATLASGAL CSC objects within $r<5$\hbox{$^{\prime\prime}$}, might cause us to miss some of the most important sources in the clumps.
Since the most luminous source inside a clump can be offset by more than 5\hbox{$^{\prime\prime}$}, we may have missed a number of MYSOs in these regions. The criterion used ensures that the targets are good MYSO candidates, but the most luminous FIR sources and their counterparts will be examined in a future work.
\section{Summary}
This study has investigated the nature of near-infrared variability in MYSOs, focusing on the driving sources of EGOs and on luminous 24 \mbox{$\rm \mu m\,$} point sources coinciding within 5\hbox{$^{\prime\prime}$} of the massive star-forming clumps mapped at 870 \mbox{$\rm \mu m\,$} by ATLASGAL. The search led us to examine the $K_s$-band light curves of 601 point sources (153 EGO and 448 non-EGO targets).
\begin{itemize}
\item 190 sources (139 EGOs and 51 non-EGOs) were found to be variable, with IQR$>0.05$ and $\Delta K_s >0.15$;
111 and 79 of these objects are classified as periodic and aperiodic, respectively.
\item The 2\mbox{$\rm \mu m\,$} - 870\mbox{$\rm \mu m\,$} spectral energy distribution of the variable point sources were assembled and fitted
with YSO models. 47 and 6 sources were modelled as $\geq4$ \mbox{$\rm M_\odot\,$} and $\geq8$ \mbox{$\rm M_\odot\,$}, respectively.
\item On an HR diagram, most lower mass EGO sources concentrate along a putative birth-line.
\item A high rate of detectable variability in EGO targets (139 out of 153 searched) implies that
near-infrared variability in MYSOs is closely linked to the accretion phenomenon and outflow activity.
\end{itemize}
Further to the discovery of a dozen high-amplitude variable MYSOs \citep{kumar2016}, this is the first large scale systematic study of near-infrared variability in MYSOs. The variable sources identified in this work are excellent targets with which to undertake follow-up studies to understand the circumstellar environment of MYSOs in detail.
\begin{table*}
\centering
\caption{EGO and non-EGO MYSO candidates with nearby methanol masers.}
\label{tab:masers}
\begin{threeparttable}
\begin{tabular}{lcccccc}
\hline
\hline
Source & $\widetilde{K}$ & IQR & Distance & Class & Class II & Class I \\
& (mag) & (mag) & (kpc) & & Maser & Maser \\
\hline
MG003.5016-00.2020 & 16.07 & 0.23 & 5.0 & Erup & Y & \\
MG006.9222-00.2512 & 14.38 & 0.26 & 3.0 & Erup & Y & Y\\
MG332.3652+00.6046 & 14.17 & 0.09 & 2.7 & Fad & Y & Y\\
MG333.0294-00.0149 & 15.24 & 0.18 & 4.0 & Dip & Y & N\\
MG339.2939+00.1387 & 15.63 & 0.41 & 4.8 & STV & Y & \\
MG339.5843-00.1282 & 13.16 & 0.16 & 2.6 & Dip & Y & Y\\
MG345.5764-00.2252 & 15.33 & 0.3 & 7.9 & Erup & Y & \\
MG352.6040-00.2253 & 15.38 & 0.22 & 7.6 & Erup & Y & \\
MG358.4604-00.3929 & 16.03 & 0.16 & 5.0 & LPV-yso & Y & Y\\
G9.62+0.20 & 14.38 & 0.11 & 5.2 & STV & Y & Y\\
G6.19-0.36 & 14.52 & 0.09 & 5.1 & STV & Y & Y\\
G5.62-0.08 & 15.43 & 0.07 & 5.1 & LPV-yso & Y & Y\\
G359.44-0.10 & 14.99 & 0.13 & & LPV-yso & Y & Y\\
G358.84-0.74 & 13.82 & 0.12 & 6.8 & LPV-yso & Y & Y\\
G358.46-0.39(b) & 15.45 & 0.16 & 2.9 & STV & Y & Y\\
G358.39-0.48 & 13.93 & 0.19 & 2.4 & Erup & Y & Y\\
G358.26-2.06 & 12.26 & 0.08 & 3.0 & Fad & Y & \\
G355.54-0.10 & 14.08 & 0.15 & 3.0 & LPV-yso & Y & Y\\
G355.18-0.42 & 14.98 & 0.08 & 1.2 & Erup & Y & Y\\
G353.46+0.56 & 13.18 & 0.1 & 11.2 & LPV-yso & Y & Y\\
G352.63-1.07 & 14.56 & 0.14 & 0.9 & STV & Y & Y\\
G352.58-0.18 & 15.62 & 0.09 & 5.1 & LPV-yso & Y & \\
G352.13-0.94 & 12.79 & 0.1 & 2.3 & LPV-yso & Y & Y\\
G351.78-0.54 & 14.46 & 0.12 & 0.7 & STV & Y & Y\\
G351.69+0.17 & 14.91 & 0.05 & 12.1 & STV & Y & \\
G351.38-0.18 & 15.8 & 0.07 & 5.6 & STV & Y & N\\
G351.16+0.69 & 10.4 & 0.15 & 1.8 & STV & Y & Y\\
G350.52-0.35 & 15.02 & 0.17 & 3.1 & Erup & Y & N\\
G350.36-0.07 & 14.31 & 0.09 & 11.2 & Fad & Y & \\
G2.54+0.20 & 12.71 & 0.09 & 4.0 & LPV-yso & Y & N\\
G2.14+0.01 & 13.03 & 0.03 & 11.2 & Non-var & Y & \\
G0.09-0.66 & 13.87 & 0.08 & 8.2 & STV & Y & Y\\
\hline
\end{tabular}
\end{threeparttable}
\end{table*}
\begin{acknowledgements}
G.D.C.T. is supported by an FCT/Portugal PhD grant PD/BD/113478/2015.
MSNK acknowledges the support from Funda\c{c}\~ao para a Ci\^encia e Tecnologia (FCT)
through Investigador FCT contracts IF/00956/2015/CP1273/CT0002, and the H2020 Marie-Curie
Intra-European Fellowship project GESTATE (661249).
Support for JB is provided by the Ministry of Economy, Development, and
Tourism's Millennium Science Initiative through grant IC120009, awarded to
The Millennium Institute of Astrophysics, MAS.
A.C.G. has received funding
from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 743029).
PWL acknowledges the support of consolidated grants (ST/R000905/1
and ST/M001008/1) funded by the UK Science and Technology Facilities Research Council.
CCP acknowledges support from the Leverhulme Trust.
JFG is supported by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT) through national funds (UID/FIS/04434/2013)
and by FEDER through COMPETE2020 (POCI-01-0145-
FEDER-007672).
\end{acknowledgements}
\section{Introduction}
\setcounter{equation}{0}
\label{intro}
In recent years there has been great interest in finding alternative theories to General Relativity (GR) \cite{beyond, capo1}, mainly due to the inability of the
latter to explain satisfactorily some fundamental issues associated with the gravitational interaction, such as the dark matter and dark energy problems, as well as the impossibility of reconciling GR with the Standard Model of particle physics. Extra-dimensional theories,
which are mostly inspired by String/M-theory, are among the theories that lead to gravity beyond GR. One of
these extra-dimensional theories is the Braneworld (BW) proposed by Randall and Sundrum (RS) \cite{RS}, which has been studied extensively and which addresses
one of the fundamental problems of physics, namely the hierarchy problem (see also the ADD model \cite{ADD} and \cite{AADD}). Because of this, its study and its impact on GR are fully justified and of great importance
\cite{maartRev2004}.
Even though we have a covariant approach that is useful to study many fundamental aspects of the RS BW theory
\cite{SMS}, we are still far from fully understanding its
impact on gravity, mainly due to the lack of the complete five-dimensional solution (bulk plus brane), which could help to explain certain key issues
that remain unresolved, such as the existence of black holes in the RS BW \cite{FW11}-\cite{kanti2013} and the bulk effects on stellar configurations \cite{germ}. Since the complete five-dimensional solution remains unknown so far, finding exact solutions to the four-dimensional effective Einstein field equations on the
brane is a convenient way to clarify some aspects of the five-dimensional geometry, essentially because we could use the Campbell-Magaard
theorems \cite{campbell, sss} to extend the brane solution through the bulk,
locally at least. However, GR, during its almost century of history, has taught us that finding a physically acceptable exact solution of Einstein's field equations
is an extremely difficult task \cite{Stephani}, mainly due to the complexity of the field equations. If we deal with interior stellar
solutions \cite{lake03herrera08}, the task is much more complicated, and in fact just a few interior solutions are known \cite{lake98}.
On the other hand, in the context of the BW, two important features, completely new and different from GR, greatly complicate
the search for solutions to the 4-dimensional Einstein field equations in the astrophysical scenario:
1) the system remains indefinite due to nonlocal corrections from the five-dimensional bulk; 2) nonlinear
terms in the matter fields appear due to high-energy corrections \cite{maartRev2004, SMS}. Because of the latter,
finding exact and physically acceptable interior stellar solutions to the effective
4-dimensional Einstein field equations seems an impossible task to carry out. However, these two problems can be solved {\it simultaneously on the brane}
when a GR solution is considered by using the minimal geometric deformation principle (MGD) \cite{jovalle2009}. Indeed, by using this approach,
an exact and physically acceptable solution on the brane was found in Ref. \cite{jovalle207}. The MGD has allowed, among other things, to generate physically acceptable interior
solutions for stellar systems \cite{jovalleBWstars}, to express the tidally charged exterior solution found in Ref. \cite{dadhich} in terms of the
ADM mass, to study (micro) black hole solutions \cite{covalle1, covalle2}, as well as to help elucidate the role of exterior Weyl stresses
from bulk gravitons on compact stellar distributions \cite{olps2013} and the behaviour of black string models with variable brane
tension \cite{cor2013}.
In this paper, an analytical solution to the Einstein field equations for a non-uniform stellar structure is
found on the brane, and used to elucidate the effects of bulk gravitons on compact stellar structures.
The MGD approach will be used to modify the perfect fluid solution represented by a well
known general relativistic solution, namely the Tolman IV solution \cite{tolman}, thus generating its braneworld version in an exact analytical form.
The reason to investigate the Tolman IV solution in the braneworld context by using the MGD approach is quite obvious: among hundreds of known exact solutions
in GR, the Tolman IV solution is one of the few with physical meaning \cite{phymeaning}, and this physical relevance is naturally inherited by its braneworld version.
This paper is organized as follows.
In Section {\bf 2} the Einstein field equations on the brane for a spherically symmetric and static distribution of density $\rho$ and pressure $p$
are recalled. In Section {\bf 3} the MGD approach is discussed, as well as the general matching conditions between an interior deformed
metric and the exterior one associated with a Weyl fluid with dark pressure ${\cal P}^+$ and dark
radiation ${\cal U}^+$. In Section {\bf 4} an analytical interior stellar solution to the effective 4-dimensional Einstein field
equations is generated from the well known Tolman IV GR solution through the MGD approach. In Section {\bf 5} the far-field correction to the Newtonian potential in the BW is used to construct an exterior geometry associated with this potential; in this approximation, the bulk effects on stellar configurations are elucidated. In the last section
the conclusions are presented.
\section{General framework}
In the context of the braneworld, five-dimensional gravity produces a modification of Einstein's field equations in our (3+1)-dimensional observable universe,
the so-called brane, which effectively can be written as follows
\begin{equation}
\label{einst}
G_{\mu\nu}=
-k^2\,T_{\mu\nu}^{T}-\Lambda\, g_{\mu\nu}
\ ,
\end{equation}
where $k^2=8\,\pi\,G_{\rm N}$ and $\Lambda$ is the cosmological constant
on the brane. These modifications can be seen through the effective energy-momentum tensor $T_{\mu\nu}^{T}$,
which has new terms carrying five-dimensional consequences onto the brane:
\begin{equation}\label{tot}
T_{\mu\nu}\rightarrow T_{\mu\nu}^{\;\;T}
=T_{\mu\nu}+\frac{6}{\sigma}S_{\mu\nu}+\frac{1}{8\pi}{\cal
E}_{\mu\nu}+\frac{4}{\sigma}{\cal F}_{\mu\nu},
\end{equation}
where $\sigma$ is the brane tension, with $S_{\mu\nu}$ and
$\cal{E}_{\mu\nu}$ the high-energy and non-local (from the point of view of a brane observer) corrections respectively, and ${\cal F}_{\mu\nu}$ a term which depends on all stresses in the bulk but the cosmological constant. In this paper, only the cosmological constant will be considered in the bulk, hence ${\cal F}_{\mu\nu}=0$, which implies there will be no exchange of energy between the bulk and the brane, and therefore $\nabla^\nu\,T_{\mu\nu}=0$.
The high-energy $S_{\mu\nu}$
and Kaluza-Klein $\cal{E}_{\mu\nu}$ corrections are given by
\begin{eqnarray}
\label{s}
S_{\mu\nu}=
\frac{T\,T_{\mu\nu}}{12}
-\frac{T_{\mu\alpha}\,T^\alpha_{\ \nu}}{4}
+\frac{g_{\mu\nu}}{24}
\left[3\,T_{\alpha\beta}\,T^{\alpha\beta}-T^2\right]
\end{eqnarray}
where $T=T_\alpha^{\ \alpha}$, and
\begin{eqnarray}
\label{e}
k^2\,{\cal E}_{\mu\nu}
=
\frac{6}{\sigma}\left[{\cal U}\left(u_\mu\,u_\nu+\frac{1}{3}\,h_{\mu\nu}\right)
+{\cal P}_{\mu\nu}+{\cal Q}_{(\mu}\,u_{\nu)}\right]
\ ,
&&
\end{eqnarray}
with ${\cal U}$, ${\cal P}_{\mu\nu}$ and
${\cal Q}_\mu$ the bulk Weyl scalar, the anisotropic stress and the energy flux, respectively,
and $u^\mu$ the four-velocity, with $h_{\mu\nu}=g_{\mu\nu}-u_{\mu}u_{\nu}$ the projection tensor.
In this paper we will consider spherically symmetric static distributions,
hence $Q_\mu =0$ and
\begin{equation}
{\cal P}_{\mu\nu}
={\cal P}\left(r_\mu\, r_\nu+\frac{1}{3}\,h_{\mu\nu}\right)
\ ,
\end{equation}
where $r_\mu$ is a unit radial vector. Furthermore, the line element will be given by Schwarzschild-like coordinates
\begin{equation}
\label{metric}ds^2=e^{\nu(r)} dt^2-e^{\lambda(r)} dr^2-r^2\left( d\theta
^2+\sin {}^2\theta d\phi ^2\right)\, ,
\end{equation}
where $\nu=\nu(r)$ and $\lambda=\lambda(r)$ are functions of
the areal radius $r$, which ranges from $r=0$ (the star's centre)
to $r=R$ (the star's surface). In this paper we will focus on BW consequences for perfect fluids, hence
the energy-momentum tensor $T_{\mu\nu}$ in Eq. (\ref{tot}) corresponds
to a perfect fluid, given by
\begin{eqnarray}
\label{perfect}
T_{\mu\nu}
=
(\rho+p)\,u_\mu\,u_\nu-p\,g_{\mu\nu}
\ ,
\end{eqnarray}
where $u^\mu=e^{-\nu/2}\,\delta_0^\mu$ is the fluid four-velocity field in the
reference frame where the metric takes the form in Eq.~(\ref{metric}) (for early works on astrophysics in the braneworld context,
see for instance Refs. \cite{CFMsolution}-\cite{Gregory2006}).
\par
The metric (\ref{metric}) must satisfy the effective 4-D Einstein field
equations (\ref{einst}), which, for $\Lambda=0$, explicitly read (for details, see Ref. \cite{covalle2})
\begin{eqnarray}
\label{ec1}
&&
k^2
\left[ \rho
+\strut\displaystyle\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\frac{6}{k^4}\,\cal{U}\right)
\right]
=
\strut\displaystyle\frac 1{r^2}
-e^{-\lambda }\left( \frac1{r^2}-\frac{\lambda'}r\right)
\\
\nonumber
\\
&&
\label{ec2}
k^2
\strut\displaystyle
\left[p+\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\rho\, p
+\frac{2}{k^4}\,\cal{U}\right)
+\frac{4}{k^4}\frac{\cal{P}}{\sigma}\right]
=
-\frac 1{r^2}+e^{-\lambda }\left( \frac 1{r^2}+\frac{\nu'}r\right)
\\
\nonumber
\\
&&
\label{ec3}
k^2
\strut\displaystyle\left[p
+\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\rho\, p
+\frac{2}{k^4}\cal{U}\right)
-\frac{2}{k^4}\frac{\cal{P}}{\sigma}\right]
=
\frac 14e^{-\lambda }\left[ 2\,\nu''+\nu'^2-\lambda'\,\nu'
+2\,\frac{\nu'-\lambda'}r\right]
\ .
\nonumber
\\
\end{eqnarray}
Moreover,
\begin{eqnarray}
\label{con1}
p'=-\strut\displaystyle\frac{\nu'}{2}(\rho+p)
\ ,
\end{eqnarray}
where $f'\equiv \partial_r f$.
We then note that four-dimensional GR equations are formally
recovered for $\sigma^{-1}\to 0$, and the conservation equation~(\ref{con1})
then becomes a linear combination of Eqs.~(\ref{ec1})-(\ref{ec3}).
\par
The Israel-Darmois matching conditions~\cite{israel} at the stellar surface
$\Sigma$ of radius $r=R$ give
\begin{eqnarray}
\label{matching1}
\left[G_{\mu\nu}\,r^\nu\right]_{\Sigma}=0
\ ,
\end{eqnarray}
where $[f]_{\Sigma}\equiv f(r\to R^+)-f(r\to R^-)$.
Using Eq.~\eqref{matching1} and the general field equations~\eqref{einst},
we find
\begin{eqnarray}
\label{matching2}
\left[T^{T}_{\mu\nu}\,r^\nu\right]_{\Sigma}=0
\ ,
\end{eqnarray}
which in our case leads to
\begin{eqnarray}
\label{matching3}
\left[
p+\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\rho\, p
+\frac{2}{k^4}\,\cal{U}\right)+\frac{4}{k^4}\,\frac{\cal{P}}{\sigma}
\right]_{\Sigma}=0
\ .
\end{eqnarray}
Since we assume the distribution is surrounded by a Weyl fluid characterized by ${\cal U}^+$ and ${\cal P}^+$,
with $p=\rho=0$ for $r>R$, this matching condition
takes the final form
\begin{eqnarray}
\label{matchingf}
p_R+\frac{1}{\sigma}\left(\frac{\rho_R^2}{2}+\rho_R\, p_R
+\frac{2}{k^4}\,{\cal U}_R^-\right)
+\frac{4}{k^4}\frac{{\cal P}_R^-}{\sigma}
=
\frac{2}{k^4}\frac{{\cal U}_R^+}{\sigma}+\frac{4}{k^4}\frac{{\cal P}_R^+}{\sigma}
\ ,
\end{eqnarray}
where $f_R^\pm\equiv f(r\to R^\pm)$, with $p_R\equiv p_R^-$
and $\rho_R\equiv \rho_R^-$.
\par
Eq.~\eqref{matchingf} gives a general matching condition
for any static spherical BW star~\cite{germ,gergely2006}, i.e., the second fundamental form.
In the limit $\sigma^{-1}\rightarrow 0$, we obtain the well-known GR
matching condition $p_R =0$ at the star surface.
In the particular case of the Schwarzschild exterior,
${\cal U}^+={\cal P}^+ =0$, the matching condition~\eqref{matchingf}
becomes
\begin{eqnarray}
\label{matchingfS}
p_R+\frac{1}{\sigma}\left(\frac{\rho_R^2}{2}+\rho_R\, p_R
+\frac{2}{k^4}\,{\cal U}_R^-\right)
+\frac{4}{k^4}\frac{{\cal P}_R^-}{\sigma} = 0
\ .
\end{eqnarray}
This clearly shows that, because of the presence of
${\cal U}_R^-$ and ${\cal P}_R^-$, the matching conditions
do not have a unique solution in the BW.
\par
\section{Star interior and geometric deformation}
Two important aspects regarding the system of Eqs. (\ref{ec1})-(\ref{con1}) are worth highlighting. First of all,
it represents an indefinite system of
equations on the brane, an open problem whose solution requires more information about the bulk geometry and a
better understanding of how our four-dimensional spacetime is embedded in the bulk \cite{FW11}-\cite{LBH13}, \cite{cmazza}-\cite{darocha2012}. Secondly, finding exact and physically acceptable analytic functions $(\rho, p, \lambda, \nu, {\cal U}, {\cal P})$ solving the system
(\ref{ec1})-(\ref{con1}) seems an impossible task. Even though the second point is quite obvious, we will see that it is possible
to build an exact and physically acceptable solution by using the MGD approach \cite{jovalle2009}. In order to accomplish this, the first
step is to rewrite the field equations~\eqref{ec1}-\eqref{ec3} as follows
\begin{eqnarray}
\label{usual}
&&
e^{-\lambda}
=
1-\frac{k^2}{r}\int_0^r
x^2
\left[
\rho+\frac{1}{\sigma}\left(\frac{\rho^2}{2}+\frac{6}{k^4}\,\cal{U}\right)
\right]
dx\ ,
\\
\label{pp}
\nonumber
\\
&&
\frac{1}{k^2}\,\frac{{\cal P}}{\sigma}
=
\frac{1}{6}\left(G_{\ 1}^{1}-G_{\ 2}^2\right)\ ,
\\
\label{uu}
\nonumber
\\
&&
\frac{6}{k^4}\,\frac{{\cal U}}{\sigma}
=
-\frac{3}{\sigma}\left(\frac{\rho^2}{2}+\rho\,p\right)
+\frac{1}{k^2}\left(2\,G_{\ 2}^2+G_{\ 1}^1\right)-3\,p
\ ,
\end{eqnarray}
with
\begin{eqnarray}
\label{g11}
G_{\ 1}^1
=
-\frac 1{r^2}+e^{-\lambda }\left( \frac 1{r^2}+\frac{\nu'}r\right)\ ,
\end{eqnarray}
and
\begin{eqnarray}
\label{g22}
G_{\ 2}^2
=
\frac 14\,e^{-\lambda }\left( 2\,\nu''+\nu'^2-\lambda'\,\nu'+2 \frac{\nu'-\lambda'}r
\right)
\ .
\end{eqnarray}
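For the reader's convenience, we make explicit how Eqs. (\ref{pp}) and (\ref{uu}) arise from the field equations: subtracting Eq. (\ref{ec3}) from Eq. (\ref{ec2}) eliminates the dark radiation and isolates the anisotropy, whereas the combination $G_{\ 1}^1+2\,G_{\ 2}^2$ eliminates the anisotropy and isolates the dark radiation,
\begin{eqnarray}
G_{\ 1}^1-G_{\ 2}^2
=
\frac{6}{k^2}\,\frac{\cal P}{\sigma}
\ ,
\qquad
G_{\ 1}^1+2\,G_{\ 2}^2
=
k^2\left[3\,p+\frac{3}{\sigma}\left(\frac{\rho^2}{2}+\rho\,p\right)\right]
+\frac{6}{k^2}\,\frac{\cal U}{\sigma}
\ ,
\nonumber
\end{eqnarray}
which reproduce Eqs. (\ref{pp}) and (\ref{uu}) upon solving for ${\cal P}$ and ${\cal U}$.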
\par
Now, by using Eq.~\eqref{uu} in Eq.~\eqref{usual} an integro-differential equation for the function
$\lambda=\lambda(r)$ is found, something completely different from the GR case,
and a direct consequence of the non-locality of the BW equations.
The only general solution known for this equation is given by~\cite{jovalle2009}
\begin{eqnarray}
\label{edlrwss}
e^{-\lambda}
&\!\!=\!\!&\underbrace{
{1-\frac{k^2}{r}\int_0^r
x^2\,\rho\,dx}}_{\rm GR-solution}
+\underbrace{e^{-I}\int_0^r\frac{e^I}{\frac{\nu'}{2}+\frac{2}{x}}
\left[H(p,\rho,\nu)+\frac{k^2}{\sigma}\left(\rho^2+3\,\rho \,p\right)\right]
dx+\beta(\sigma)\,e^{-I},}_{\rm Geometric\ deformation}
\nonumber
\\
&\!\!\equiv\!\!&
\mu(r)+f(r)
\ ,
\end{eqnarray}
where
\begin{eqnarray}
\label{finalsol}
H(p,\rho,\nu)
\equiv
3\,k^2\,p
-\left[\mu'\left(\frac{\nu'}{2}+\frac{1}{r}\right)
+\mu\left(\nu''+\frac{\nu'^2}{2}+\frac{2\nu'}{r}+\frac{1}{r^2}\right)
-\frac{1}{r^2}\right]
\ ,
\end{eqnarray}
and
\begin{eqnarray}
\label{I}
I
\equiv
\int\frac{\left(\nu''+\frac{{\nu'}^2}{2}+\frac{2\nu'}{r}+\frac{2}{r^2}\right)}
{\left(\frac{\nu'}{2}+\frac{2}{r}\right)}\,dr
\ ,
\end{eqnarray}
with $\beta(\sigma)$ a function of the brane tension $\sigma$ which must be zero in the GR limit. In the case of interior solutions,
the condition $\beta(\sigma)=0$ has to be imposed to avoid singular solutions at the center $r=0$.
Note that the function
\begin{eqnarray}
\label{standardGR}
\mu(r)
\equiv
1-\frac{k^2}{r}\int_0^r x^2\,\rho\, dx
=1-\frac{2\,m(r)}{r}
\end{eqnarray}
contains the usual GR mass function $m$,
whereas the function $H(p,\rho,\nu)$ encodes anisotropic effects due to the consequences of bulk gravity on $p$,
$\rho$ and $\nu$.
\par
A crucial observation is now that, when a given (spherically symmetric) perfect fluid solution in GR is considered
as a candidate solution for the BW system of Eqs.~\eqref{ec1}-\eqref{con1}
[or, equivalently, Eq.~\eqref{con1} along with Eqs.~\eqref{usual}-\eqref{uu}],
one obtains
\begin{eqnarray}
H(p,\rho,\nu)=0
\ ,
\label{H=0}
\end{eqnarray}
therefore every (spherically symmetric) perfect fluid solution in GR
will produce a {\it minimal} deformation on the radial metric component (\ref{edlrwss}), given by
\begin{eqnarray}
\label{fsolutionmin}
f^{*}(r)
=
\frac{2\,k^2}{\sigma}\,
e^{-I(r)}\int_0^r
\frac{x\,e^{I(x)}}{x\,\nu'+4}\left(\rho^2+3\,\rho\, p\right)
dx
\ .
\end{eqnarray}
The expression given by Eq.~(\ref{fsolutionmin}) represents a minimal
deformation in the sense that all sources of the deformation in (\ref{edlrwss}) have been removed,
except for those produced by the density and pressure, which will always
be present in a realistic stellar distribution~\footnote{There is a MGD solution
in the case of a dust cloud, with $p=0$, but we will not consider it in the present work.}.
It is worth emphasising that the geometric deformation $f(r)$ shown in
Eq.~\eqref{edlrwss} indeed ``distorts'' the GR solution given in Eq.~\eqref{standardGR}.
The function $f^{*}(r)$ shown in Eq.~\eqref{fsolutionmin} will therefore produce,
from the GR point of view, a ``minimal distortion'' of any GR solution
one wishes to consider, this distortion $f^{*}(r)$ being the source of the anisotropy induced on the brane,
whose explicit form may be found through Eq. (\ref{pp}), leading to
\begin{eqnarray}
\label{ppf3}
\frac{48\pi}{k^4}\frac{{\cal P}}{\sigma} =
\bigg(\frac{1}{r^2}+\frac{\nu'}{r}\bigg)f^{*}
-\frac{1}{4}\bigg(2\nu''+{\nu'}^2+2\frac{\nu'}{r}\bigg)f^{*}-\frac{1}{4}\bigg(\nu'+\frac{2}{r}\bigg){(f^{*})}'.
\nonumber \\
\end{eqnarray}
It is clear that this minimal deformation will produce a minimal anisotropy on the brane.
In this approach, the interior stellar geometry is generically described by the MGD metric, which explicitly reads
\begin{equation}
\label{mgdmetric}
ds^2=e^{\nu(r)} dt^2-\frac{dr^2}{\left(1-\frac{2\,m(r)}{r}+f^*(r)\right)}-r^2\left( d\theta
^2+\sin {}^2\theta d\phi ^2\right)\, .
\end{equation}
As shown by Eq. (\ref{fsolutionmin}), the geometric deformation $f^{*}(r)$ in Eq. (\ref{mgdmetric}) satisfies $f^{*}(r)\geqslant\,0$, hence it always reduces the effective interior mass,
as seen further below in Eqs. (\ref{reglambda}) and (\ref{massfunction}).
\subsection{Matching conditions: interior MGD metric and exterior Weyl fluid.}
The MGD metric in (\ref{mgdmetric}), characterizing the stellar interior $r<R$, must be matched with an exterior
solution associated with the Weyl fluid ${\cal U}^+$, ${\cal P}^+$,
with $p=\rho=0$ for $r>R$, which can be written generically as
\begin{equation}
\label{genericext}ds^2=e^{\nu^+(r)} dt^2-e^{\lambda^+(r)} dr^2-r^2\left( d\theta
^2+\sin {}^2\theta d\phi ^2\right)\, ,
\end{equation}
therefore the continuity of the first fundamental form at the stellar surface $r=R$
\begin{equation}
\label{match1}
\left[ds^2\right]_{\Sigma}=0
\end{equation}
leads to
\begin{eqnarray}
\label{ffgeneric1}
e^{\nu^-(R)}&=&e^{\nu^+(R)}\ ,
\\
\label{ffgeneric2}
1-\frac{2\,M}{R}+f^*_R&=&e^{-\lambda^+(R)}\ ,
\end{eqnarray}
whereas the second fundamental form (\ref{matchingf}) leads to
\begin{equation}
\label{sfgeneric}
p_R+\frac{f^*_R}{8\pi}\left(\frac{\nu'_R}{R}+\frac{1}{R^2}\right)= \frac{2}{k^4}\frac{{\cal
U}_R^+}{\sigma}+\frac{4}{k^4}\frac{{\cal P}_R^+}{\sigma}\ .
\end{equation}
The expressions given by Eqs. (\ref{ffgeneric1})-(\ref{sfgeneric}) are the necessary and sufficient conditions for the matching of the MGD metric to a spherically symmetric ``vacuum'' filled by a BW Weyl fluid.
\section{An interior solution.}
As already mentioned, the system of Eqs. (\ref{ec1})-(\ref{con1}) [or, equivalently, Eq.~\eqref{con1} along with Eqs.~\eqref{usual}-\eqref{uu}] represents an indefinite system of
equations on the brane, an open problem whose answer requires the complete five-dimensional solution. Given that there is no such solution, the first obvious question is to ask which restrictions we should impose on the brane to close the system of Eqs. (\ref{ec1})-(\ref{con1}). However, it is not necessary to
impose any restriction at all when a given GR perfect fluid solution is considered as a candidate solution for Eqs. (\ref{ec1})-(\ref{con1}). In this case,
the geometric deformation is minimal and the open system of Eqs. (\ref{ec1})-(\ref{con1}) will be automatically
satisfied; in consequence, a BW version of the given GR solution will be automatically generated. The virtue of the MGD approach lies in
this fundamental fact, and its usefulness is obvious when physically acceptable GR solutions are investigated in the BW context, as we will see next.
Let us start by considering the Tolman IV solution for a perfect fluid in general relativity $(\nu,\lambda,\rho, p)$, which now is {\it deformed} by five-dimensional effects through $f^{*}(r)$
\begin{equation}\label{tolman00}
e^{\nu}=B^2\,\left(1+\frac{r^2}{A^2}\right),
\end{equation}
\begin{equation}\label{tolman11}
e^{-\lambda}=\frac{\left(1-\frac{r^2}{C^2}\right)\left(1+\frac{r^2}{A^2}\right)}{1+\frac{2\,r^2}{A^2}}+f^{*}(r),
\end{equation}
\begin{equation}\label{tolmandensity}
\rho(r) =\frac{3A^4+A^2\left(3C^2+7r^2\right)+2 r^2 \left(C^2+3 r^2\right)}{8{\pi}C^2\left(A^2+2r^2\right)^2},
\end{equation}
and
\begin{equation}
\label{tolmanpressure} p(r)=\frac{C^2-A^2-3r^2}{8{\pi}C^2\left(A^2+2r^2\right)}.
\end{equation}
In GR, i.e. when $f^{*}(r)=0$, the constants $A$, $B$ and $C$ have specific values written in terms of the compactness of the distribution, that is, in terms of $M/R$,
with $M$ and $R$ the mass and radius of the distribution, which are free parameters satisfying the constraint $M/R<4/9$ [see Eqs. (\ref{A})-(\ref{C}) further below]. However, as is well known,
in the braneworld scenario the matching conditions are modified; consequently there are five-dimensional effects on these constants which must be considered.
Indeed, in the MGD approach, $A$, $B$ and $C$ are in general functions of the brane tension $\sigma$, with the $\sigma$ dependence determined by the matching
conditions. We want to stress that, as long as the brane tension $\sigma$ remains constant, $A$, $B$ and $C$ will not be functions of
the spacetime but functions of the parameters $M$, $R$ and $\sigma$. On the other hand, in general relativity the second fundamental form, which leads to
$p(r)\mid_{r=R}\,=0$ at the stellar surface $r=R$, produces
\begin{equation}
\label{ABC}
C^2=A^2+3\,R^2.
\end{equation}
We will keep the physical pressure vanishing on the surface, even though this condition may be dropped in the braneworld scenario
\cite{gergely2007}.
\par
From the point of view of a brane observer, the geometric deformation $f^{*}(r)$ in Eq. (\ref{tolman11}) produced by five-dimensional effects modifies
the perfect fluid solution [represented by Eqs. (\ref{tolman00})-(\ref{tolmanpressure}) when $f^{*}(r)=0$], thus introducing imperfect fluid effects through the braneworld solution for the geometric function $\lambda(r)$,
which is obtained using Eqs. (\ref{tolman00}), (\ref{tolmandensity}) and (\ref{tolmanpressure}) in Eq. (\ref{edlrwss}), leading to
\begin{equation}\label{reglambda}
e^{-\lambda(r)}=1-\frac{2\tilde{m}(r)}{r},
\end{equation}
where the interior mass function $\tilde{m}$ is given by
\begin{equation}
\label{massfunction}
\tilde{m}(r)=m(r)-\frac{r}{2}\,f^{*}(r),
\end{equation}
with $f^{*}(r)$ the {\it minimal geometric deformation} for the Tolman IV solution, given by Eq. (\ref{fsolutionmin}), whose explicit form is obtained using
Eqs. (\ref{tolman00}), (\ref{tolmandensity}) and (\ref{tolmanpressure}) in Eq. (\ref{fsolutionmin}), hence
\begin{eqnarray}
\label{gr} f^{*}(r)&=&-\frac{1}{\sigma}\frac{1}{384\pi r(A^{2}+3R^{2})^{2}(2A^{2}+3r^{2})^{3/2}}\left\lbrace (A^{2}+r^{2})\left[ \frac{36\,r\, \sqrt{2A^{2}+3r^{2}}}{(A^{2}+2r^{2})^{3}}\left\lbrace 5A^{8}+7A^{6}r^{2}+10A^{2}r^{6} \right. \right. \right. \nonumber \\
&+&12r^{8}+4(6A^{6}+10A^{4}r^{2}-3A^{2}r^{4}-6r^{6})R^{2}+2(15A^{4}+35A^{2}r^{2}+18r^{4})R^{4}\left. \left. \left. \right\rbrace \right. \right. \nonumber \\
&-&216(A^{2}+2R^{2})^{2} \arctan \left( \frac{r}{\sqrt{2A^{2}+3r^{2}}} \right)-48\,\sqrt{3}(A^{2}+3R^{2})^{2} \log(3r+\sqrt{6A^{2}+9r^{2}}) \left. \left. \right] \right\rbrace\, .
\end{eqnarray}
The function $m(r)$ in Eq. (\ref{massfunction}) is the GR mass function,
given by the standard form
\begin{equation}
\label{regularmass2} m(r)=\int_0^r 4\pi
x^2{\rho}dx=\frac{r^{3}(2A^{2}+3R^{2}+r^{2})}{2(A^{2}+3R^{2})(A^{2}+2r^{2})},
\end{equation}
hence the total GR mass is obtained
\begin{equation}
\label{regtotmass} M\equiv
m(r)\mid_{r=R}\,=\frac{R^{3}}{A^{2}+3R^{2}}.
\end{equation}
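As a consistency check, and to make this step explicit, note that the numerator of Eq. (\ref{regularmass2}) factorizes at $r=R$, since $2A^{2}+3R^{2}+R^{2}=2\,(A^{2}+2R^{2})$, so that
\begin{eqnarray}
m(R)
=
\frac{2\,R^{3}\,(A^{2}+2R^{2})}{2\,(A^{2}+3R^{2})(A^{2}+2R^{2})}
=
\frac{R^{3}}{A^{2}+3R^{2}}
\ ,
\nonumber
\end{eqnarray}
in agreement with Eq. (\ref{regtotmass}).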
Finally, the Weyl functions ${\cal P}$ and ${\cal U}$ associated with the geometric deformation shown in Eq. (\ref{gr}), are written as
\begin{equation}
\label{tolmanP}
\frac{\cal P}{\sigma}=\frac{4\,\pi}{3}\frac{(A^4+2 A^2 r^2+2 r^4)}{r^2 (A^2+r^2)^2}\,f^{*}(r)\, ,
\end{equation}
\begin{eqnarray}
\label{tolmanU}
\frac{\cal U}{\sigma}&=&\frac{4\pi (A^4+8 A^2 r^2+5 r^4)}{3 r^2 (A^2+r^2)^2}\,f^{*}(r)
\nonumber \\&&
-\frac{1}{\sigma}\frac{9(2 A^4+3 A^2 r^2+2 r^4+3 A^2 R^2+2 r^2 R^2)(2 A^4+A^2 r^2-2 r^4+5 A^2 R^2+6 r^2 R^2)}{4 (A^2+2 r^2)^4 (A^2+3 R^2)^2}\, . \nonumber \\
\end{eqnarray}
The expressions in Eqs. (\ref{tolman00})-(\ref{tolmanpressure}), along with Eqs. (\ref{tolmanP}) and (\ref{tolmanU}), represent an {\it exact analytic solution}
to the system of Eqs. (\ref{ec1})-(\ref{con1}). We want to emphasize that the expressions for $p$, $\rho$ and $\nu$ in our solution are the same as those
of the Tolman IV solution; in consequence, when these expressions are used in Eq. (\ref{finalsol}), the condition in Eq. (\ref{H=0}) is obtained.
It can be seen from Eq. (\ref{gr}) that the geometric deformation $f^{*}(r)$
depends only on the parameter $A$, which has a well defined expression in terms of the compactness $M/R$ in GR, as will be shown in the next section.
\section{Analysis of the solution}
\subsection{GR case}
In order to see the physical consequences due to the BW, let us first recall the GR case, that is, the matching of the Tolman IV solution to the exterior Schwarzschild solution.
The exterior metric in (\ref{genericext}) will be the Schwarzschild one
\begin{equation}
e^{\nu^+}=e^{-\lambda^+}=1-\frac{2\,M}{r}\ ,
\end{equation}
therefore at the stellar surface $r=R$ we have
\begin{equation}\label{matchS1}
B^2\,\left(1+\frac{R^2}{A^2}\right)=1-\frac{2\,M}{R}\ ,
\end{equation}
\begin{equation}\label{matchS2}
\frac{\left(1-\frac{R^2}{C^2}\right)\left(1+\frac{R^2}{A^2}\right)}{1+\frac{2\,R^2}{A^2}}=1-\frac{2\,M}{R}\ ,
\end{equation}
whereas the condition (\ref{sfgeneric}) with $f_R^*={\cal U}_R^+={\cal P}_R^+=0$ leads to the expression in Eq. (\ref{ABC}).
Now by using Eq. (\ref{ABC}) along with Eqs. (\ref{matchS1})-(\ref{matchS2}) the constants $A$, $B$ and $C$ are found in terms of the compactness of the stellar distribution, as shown below
\begin{eqnarray}
\label{A}
A^2/R^2&=&\frac{1-3\,M/R}{M/R}\ ,
\\
\label{B}
B^2&=&1-3\,M/R\ ,
\\
\label{C}
C^2/R^2&=&(M/R)^{-1}\ .
\end{eqnarray}
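As an illustrative check of Eqs. (\ref{A})-(\ref{C}) (the value $M/R=1/4$ is chosen here purely for definiteness), a configuration with compactness $M/R=1/4$ yields
\begin{eqnarray}
A^2/R^2=\frac{1-3/4}{1/4}=1
\ ,
\qquad
B^2=\frac{1}{4}
\ ,
\qquad
C^2/R^2=4
\ ,
\nonumber
\end{eqnarray}
which indeed satisfies the constraint (\ref{ABC}), namely $C^2=A^2+3\,R^2$.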
The values given by Eqs. (\ref{A})-(\ref{C}) guarantee the geometric continuity at $r=R$ when this boundary is crossed, for instance, from the interior geometry,
described by the Tolman IV solution (\ref{tolman00})-(\ref{tolman11}) (with $f^*=0$), to the exterior Schwarzschild geometry, which represents the unique exterior solution for a spherically symmetric distribution in GR. Next we will see that the BW case is quite different.
\subsection{Brane World case}
When a spherically symmetric and static self-gravitating system of radius $r=R$ is considered in the RS BW theory, the space-time $r>R$ surrounding the stellar system, contrary to GR, is not empty but filled by the so-called dark radiation ${\cal U}^+$
and dark pressure ${\cal P}^+$. It is well known that, from the point of view of a brane observer, extra-dimensional effects modify the Schwarzschild solution through these fields. However, the effects of bulk gravitons on self-gravitating structures
are not well understood so far \cite{olps2013}.
On the other hand, the gravitational collapse of spherically symmetric stellar distributions on the brane could lead to a non-static exterior
\cite{nogo}-\cite{dad}, at least when standard matter is the source of the gravitational field and the configuration has vanishing surface pressure.
Even in this extreme scenario without a Birkhoff theorem, a static exterior is eventually expected. The reason is
that the (unknown) non-static exterior solution should be transient \cite{nogo}, \cite{ssm}; hence it
is reasonable to assume that the exterior metric will be static at late times and tend to
Schwarzschild at large distances. However, using more general assumptions than the Oppenheimer-Snyder BW model used in Ref. \cite{nogo},
it was found that a static exterior can exist for a collapsing star in the radiative bulk scenario \cite{pal}, and also for a near
dust-like perfect fluid \cite{gergely2007}. Moreover, recently it was proven that a realistic interior solution having a
very thin dark energy atmosphere can be matched consistently to a
Schwarzschild exterior \cite{OCG2013}. In summary, the above shows that the presence or absence of static black holes remains an open issue in the BW.
Since the exterior spacetime of a spherically symmetric configuration remains unknown on the brane due to the lack of a 5-dimensional solution, there are many ways to modify the Schwarzschild solution, i.e.,
there are many black hole solutions for a spherically symmetric static ``vacuum'' in 5-dimensional
gravity in the RS BW scenario \cite{FW11}-\cite{LBH13}, \cite{dadhich}, \cite{CFMsolution}. Next, regarding the weak-field limit in the BW, an approximate exterior solution is developed and considered in the matching conditions at the stellar surface $r=R$.
\subsection {Far-field correction to the Newtonian potential}
As we have mentioned, the Weyl stresses imply that the exterior solution for a
spherically symmetric distribution is no longer the Schwarzschild metric, and therefore there are many possible solutions to the effective four-dimensional
vacuum Einstein equations~\cite{SMS}, namely any metric such that
\begin{eqnarray}
\label{generalvacuum}
R_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}\,R^\alpha_{\ \alpha}
=
\mathcal{E}_{\mu\nu}
\qquad
\Rightarrow
\qquad
R^\alpha_{\ \alpha}
=0
\ .
\end{eqnarray}
The solution to Eq. (\ref{generalvacuum}) must satisfy the weak-field limit \cite{tanaka}, which is given by
\begin{equation}
\label{newton}
\Phi \sim\, -\frac{G\,{\cal M}}{r}\left(1+\frac{2\, \ell^2}{3\,r^2}\right)\ ,
\end{equation}
where $\ell$ is the curvature radius of $AdS_5$. Unfortunately, none of
the few known analytical solutions to Eq. (\ref{generalvacuum}) satisfies the weak-field limit in Eq. (\ref{newton}), and therefore none can describe the end-state of collapse. Indeed, to our knowledge, an exact spherically symmetric
exterior solution on the brane satisfying the weak-field limit in Eq. (\ref{newton}) remains unknown so far. For instance, while it is true that the well known tidally charged metric found in Ref. \cite{dadhich} shows the correct 5D behaviour of the potential at short distances, and therefore could be a good
approximation in the strong-field regime for micro black holes \cite{covalle1}, \cite{covalle2}, the astrophysical scenario is quite different. Likewise,
although the vacuum braneworld solution found by Casadio, Fabbri
and Mazzacurati in Ref. \cite{CFMsolution} is tremendously useful to elucidate the specific role played by
both Weyl functions \cite{olps2013}, it does not satisfy the limit in Eq. (\ref{newton}). Moreover, its condition of null dark energy
is too strong, and therefore this solution should be considered just a useful (unphysical) toy model in the astrophysical scenario. (For a recent study regarding this solution in the bulk, see Ref. \cite{roldaocqg2013}.)
Since we want to elucidate the effects of bulk gravitons on stellar structure, and we lack an exact exterior solution, the potential in Eq. (\ref{newton}) could be
helpful to obtain some relevant information. For instance, it is reasonable to assume that the unknown 4-dimensional solution should be
close to the solution associated with the potential in Eq. (\ref{newton}) ($G=1$)
\begin{equation}
\label{aprox00}
g^{+}_{00} \sim\, 1-2\,{\cal M}\left(\frac{1}{r}+\frac{2 \ell^2}{3\,r^3}\right)\ ,
\end{equation}
\begin{equation}
\label{aprox11}
\left(g^{+}_{11}\right)^{-1} \sim\, 1-2\,{\cal M}\left(\frac{1}{r}+\frac{ \ell^2}{3\,r^3}\right)\ .
\end{equation}
In this approximation, the exterior Weyl fluid behaves as
\begin{equation}
\label{approxpp}
\frac{6\,{\cal P}^+}{k^2\sigma} \sim \frac{{\cal M}\ell^2}{3r^5}\left[\frac{84({\cal M}/r)^2-91({\cal M}/r)+25}{(1-2{\cal M}/r)^2}\right]\ ,
\end{equation}
\begin{equation}
\label{approaxuu}
\frac{6\,{\cal U}^+}{k^2\sigma} \sim -\frac{2{\cal M}\ell^2}{3r^5}\left[\frac{36({\cal M}/r)^2-37({\cal M}/r)+10}{(1-2{\cal M}/r)^2}\right]\ ,
\end{equation}
where the expressions in Eqs. (\ref{aprox00})-(\ref{aprox11}) have been used in Eqs. (\ref{pp})-(\ref{uu}) [with $p = \rho = 0$].
Therefore, when the deformed Tolman IV interior solution, given by Eqs. (\ref{tolman00}) and (\ref{reglambda}), is used
along with the exterior solution in Eqs. (\ref{aprox00})-(\ref{aprox11}) in the matching conditions (\ref{ffgeneric1}) and (\ref{ffgeneric2}), we have
\begin{equation}
\label{aproxmatch00}
B^2\,\left(1+\frac{R^2}{A^2}\right)\sim\,1-\frac{2\cal{M}}{R}-\frac{4\,{\cal M}\ell^2}{3\,R^3}\ ,
\end{equation}
\begin{equation}\label{aproxmatch11}
{\cal M} \sim \frac{M-\frac{R}{2}f^{*}_R}{1+\frac{\ell^2}{3\,R^2}} \sim {M} -\frac{R}{2}\,f^{*}_R-\frac{M\ell^2}{3R^2}\ ,
\end{equation}
where in Eq. (\ref{aproxmatch11}) the approximation $f^{*}_R\, (\ell/R)^2\sim\,\sigma^{-1}(\ell/R)^2\sim\,0$ has been used.
Then when Eq. (\ref{aproxmatch11}) is used in the condition (\ref{aproxmatch00}), we obtain
\begin{equation}
\label{sup1}
B^2\,\left(1+\frac{R^2}{A^2}\right) \sim 1-\frac{2\,{M}}{R}+{\bar f}^*_R\ ,
\end{equation}
where the bar-function
\begin{equation}
\label{fbarr}
{\bar f}^*_R \equiv\, f^*_R-\frac{2\,{M}\ell^2}{3\,R^3}
\end{equation}
represents the bulk effects on the right-hand side of Eq. (\ref{matchS1}). These effects can be written in terms of the brane tension $\sigma$ or the curvature radius of the bulk $\ell$ when the second fundamental form (\ref{sfgeneric}) is used. Indeed, by using Eqs. (\ref{approxpp})-(\ref{approaxuu}) and $p_R = 0$ in the condition (\ref{sfgeneric}), a relationship between the geometric deformation $f^{*}(r)$ and the curvature radius of the bulk $\ell$ is found at the stellar surface $r=R$,
\begin{equation}
\label{sfgenericapproxim}
{f^*_R} \sim\frac{10 {\cal M} \ell^2}{3 R^3 }\left(\frac{1-2M/R}{1-2{\cal M}/R}\right)
\left(1-\frac{8{\cal M}}{5R}\right)+{\cal O}(\ell^4/R^4)
\sim\frac{10 {M} \ell^2}{3 R^3 }\left(1-\frac{8{M}}{5R}\right)\ ,
\end{equation}
in consequence the expression in Eq. (\ref{fbarr}) may be written as
\begin{equation}
\label{fbarr2}
{\bar f}^*_R \sim \frac{8}{3}\left(1-\frac{2\,M}{R}\right)\frac{M \ell^2}{R^3}
\sim \frac{4}{5}\frac{\left(1-\frac{2M}{R}\right)}{\left(1-\frac{8M}{5R}\right)}f^{*}_R\ ,
\end{equation}
showing that the bar-function in Eq. (\ref{fbarr2}) is always positive.
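To make the algebra behind Eq. (\ref{fbarr2}) explicit, the substitution of Eq. (\ref{sfgenericapproxim}) into the definition (\ref{fbarr}) gives
\begin{eqnarray}
{\bar f}^*_R
\sim
\frac{M\ell^2}{3R^3}\left[10\left(1-\frac{8M}{5R}\right)-2\right]
=
\frac{8}{3}\left(1-\frac{2\,M}{R}\right)\frac{M \ell^2}{R^3}
\ ,
\nonumber
\end{eqnarray}
and eliminating $M\ell^2/R^3$ in favour of $f^{*}_R$ through Eq. (\ref{sfgenericapproxim}) produces the second form quoted in Eq. (\ref{fbarr2}); the positivity then follows from $M/R<1/2$.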
The expression (\ref{sup1}) clearly shows that the values of $A$ and $B$ cannot
be those from Eqs. (\ref{A}) and (\ref{B}), because those values would lead to the trivial Schwarzschild condition ${\bar f}^*_R=0$.
Therefore the GR values of $A$ and $B$, hereafter called $A_{0}$ and $B_{0}$, are modified by five-dimensional effects and
cannot be constants anymore, but become functions of the brane tension $\sigma$ [or of the curvature radius of the bulk $\ell$, according to Eq. (\ref{fbarr2})],
as will be shown next.
The expression in (\ref{sup1}) represents
a single condition which must be used to find two unknown functions $A$ and $B$; hence the problem at the surface seems not closed and
additional information would seem to be required. However, we will see that no additional information is needed, as explained below.
First of all, the BW effects on the GR condition (\ref{matchS1}) are explicitly shown in the right-hand side of Eq. (\ref{sup1}), showing
that these modifications are proportional to the geometric deformation $f^*_R$, which is proportional to $\sigma^{-1}$ [see Eq. (\ref{gr})]. Therefore the unknown functions $A$ and $B$ can be written as
\begin{eqnarray}
\label{dA}
A = A_{0}+\delta\,A\ ,
\\
\label{dB}
B = B_{0}+\delta\,B\ ,
\end{eqnarray}
where $\delta$ represents the modification due to five-dimensional effects, which is a function of the brane tension $\sigma$.
Hence, the problem is reduced to
finding the unknown $\delta$ functions in Eqs. (\ref{dA})-(\ref{dB}) by using the condition in Eq. (\ref{sup1}). On the other hand, since the
constants shown in Eqs. (\ref{A})-(\ref{C}) are modified by bulk gravity effects, this must occur through a change in the compactness $M/R$, as the
right-hand side of Eqs. (\ref{A})-(\ref{C}) clearly shows; but $R$ is a constant free parameter, therefore the five-dimensional effects on $A$ and $B$ are produced by
the bulk gravity effect $\delta\,M$ on the GR mass $M_0$
\begin{eqnarray}
\label{dM}
M = M_{0}+\delta\,M\ ,
\end{eqnarray}
hence $\delta\,A$ and $\delta\,B$ have the {\it same source} and are not independent. Therefore all we need to do is to find $A=A(B)$ by using the compactness as a common variable, as shown
by Eqs. (\ref{A}) and (\ref{B}), hence
\begin{equation}
\label{AB}
A^2=3\,R^2\frac{B^2}{(1-B^2)}\ ,
\end{equation}
in consequence the problem
at the surface is closed. Having clarified this point, the next step is to examine the five-dimensional effects on the physical variables.
For instance, to see the consequences of bulk gravity on the pressure $p$, all
we have to do is to use Eq. (\ref{dA}) in Eq. (\ref{tolmanpressure}); hence the modification $\delta\,p$ will be found. Therefore we need to
determine $\delta\,A$ in Eq. (\ref{dA}), as shown below.
Using Eqs. (\ref{dA})-(\ref{dB}) in (\ref{sup1}), yields
\begin{equation}
\label{sup2}
(B_{0}+\delta\,B)^2\,\left[1+\frac{R^2}{(A_{0}+\delta\,A)^2}\right] \sim\, 1-\frac{2\,{M_0}}{R}+{\bar f}^*_R\ ,
\end{equation}
where $M_0$ is used to stress that the $M$ in Eq. (\ref{sup1}) is actually the GR value of $M$. Now, keeping in
Eq. (\ref{sup2}) only terms linear in $\delta$, we have
\begin{equation}\label{sup3}
B_0^2\,\left(1+\frac{R^2}{A_0^2}\right)+2\,B_0\left(1+\frac{R^2}{A_0^2}\right)\delta\,B-2\frac{\,B_0^2\,R^2}{A_0^3}\delta\,A \sim\, 1-\frac{2\,M_0}{R}+{\bar f}^*_R\ ,
\end{equation}
and by using Eq (\ref{matchS1}) [GR case, where $B=B_0$] in Eq. (\ref{sup3}) we obtain
\begin{equation}\label{sup4}
2\,B_0\left(1+\frac{R^2}{A_0^2}\right)\delta\,B-2\frac{\,B_0^2\,R^2}{A_0^3}\delta\,A \sim\, {\bar f}^*_R\ .
\end{equation}
In order to find $\delta\,A$ in Eq. (\ref{sup4}), $\delta\,B$ must be determined. To accomplish this, the expression in Eq. (\ref{AB}) is used, yielding
\begin{equation}
\label{dAB}
\delta\,A=\frac{3\,R^2}{A}\frac{B}{(1-B^2)^2}\,\delta\,B \ .
\end{equation}
Using Eq (\ref{dAB}) in Eq. (\ref{sup4}) leads to
\begin{equation}
\label{sup5}
\delta\, A(\sigma)\sim\,\frac{A_0^3}{4\,R^2\,B_0^4}\,{\bar f}^*_R\ ,
\end{equation}
therefore the function $A$ in Eq. (\ref{dA}) is written as
\begin{equation}
A(\sigma)\sim\,A_0+\frac{A_0^3}{4\,R^2\,B_0^4}\,{\bar f}^*_R(\sigma)+{\cal O}(\sigma^{-2})\ .
\end{equation}
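For completeness, we spell out the intermediate algebra leading from Eq. (\ref{sup4}) to Eq. (\ref{sup5}): Eq. (\ref{dAB}), together with $1-B_0^2=3\,R^2B_0^2/A_0^2$ [a rewriting of Eq. (\ref{AB})], gives $\delta\,B=\left(3\,R^2B_0^3/A_0^3\right)\delta\,A$; inserting this into Eq. (\ref{sup4}) and using $B_0^2\left(1+R^2/A_0^2\right)=1-2\,M/R$ from Eq. (\ref{matchS1}), one finds
\begin{eqnarray}
\frac{2\,B_0^2\,R^2}{A_0^3}\left[3\,B_0^2\left(1+\frac{R^2}{A_0^2}\right)-1\right]\delta\,A
=
\frac{4\,B_0^4\,R^2}{A_0^3}\,\delta\,A
\sim
{\bar f}^*_R
\ ,
\nonumber
\end{eqnarray}
since $3\,(1-2M/R)-1=2\,(1-3M/R)=2\,B_0^2$, which is precisely Eq. (\ref{sup5}).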
At this stage, we have all the necessary tools needed to examine the five-dimensional effects on the physical variables.
For instance, to see bulk gravity consequences on the pressure $p(r,\sigma)$, we rewrite Eq. (\ref{tolmanpressure}) as
\begin{equation}
\label{tolmanpressure2}
p(r,\sigma)=\frac{3(R^2-r^2)}{8{\pi}\left(A^2+3\,R^2\right)\left(A^2+2r^2\right)}\ .
\end{equation}
Since the bar-function ${\bar f}^*_{R}$ in Eq. (\ref{fbarr2}) is positive, from Eq. (\ref{sup5}) we can see that $\delta\,A>0$;
in consequence, it is straightforward to see that the pressure in Eq. (\ref{tolmanpressure2}) is always reduced by five-dimensional effects.
Finally, by using Eqs. (\ref{A})-(\ref{B}) and Eq. (\ref{dAB}) in Eq. (\ref{sup5}), $\delta\,M$ may be written as
\begin{equation}
\label{deltaM}
\delta\,M(\sigma) \sim\, -\frac{R}{2}\, {\bar f}^*_R\ ,
\end{equation}
hence the bulk effects on the compactness $M/R$ may be expressed as
\begin{equation}
\label{deltaMR}
\delta\,[M(\sigma)/R]\sim\,-{\bar f}^*_R/2\ ,
\end{equation}
or, according to Eq. (\ref{fbarr2}), in terms of the brane tension $\sigma$ or curvature radius of the bulk $\ell$
\begin{equation}
\label{jl}
\delta\,[M(\sigma)/R] \sim -\frac{4}{3}\left(1-\frac{2\,M}{R}\right)\frac{M \ell^2}{R^3}
\sim -\frac{2}{5}\frac{\left(1-\frac{2M}{R}\right)}{\left(1-\frac{8M}{5R}\right)}f^{*}_R\ .
\end{equation}
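As a rough numerical illustration (the values $M/R=1/4$ and $\ell/R=1/10$ are chosen here purely for definiteness, and do not refer to any specific astrophysical system), Eq. (\ref{jl}) gives
\begin{eqnarray}
\delta\,[M(\sigma)/R]
\sim
-\frac{4}{3}\left(1-\frac{1}{2}\right)\frac{1}{4}\cdot\frac{1}{100}
\approx
-1.7\times 10^{-3}
\ ,
\nonumber
\end{eqnarray}
that is, a relative reduction of the compactness of order $0.7\%$, growing quadratically with $\ell/R$.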
The pressure in (\ref{tolmanpressure2}) may be written in terms of the compactness as
\begin{equation}
\label{tolmanpressure3}
p(r,\sigma)=\frac{3(1-r^2/R^2)}{8{\pi}\,R^2\left[1-\left(3-2r^2/R^2\right)\frac{M(\sigma)}{R}\right]}\left[\frac{M(\sigma)}{R}\right]^2\ ,
\end{equation}
where $M(\sigma)$ in Eq. (\ref{tolmanpressure3}) is given by
\begin{equation}
M(\sigma) = M_0 + \delta\,M(\sigma)\ .
\end{equation}
The expressions in Eqs. (\ref{deltaMR}) and (\ref{tolmanpressure3}) clearly show that the BW consequences on the pressure occur through the effects of bulk gravitons on the compactness of the stellar structure. The result shown by Eq. (\ref{jl}) strongly suggests that the effects of bulk gravitons on stellar configurations
act in such a way that they always reduce the compactness, in agreement with the results found in Ref. \cite{germ}.
\section{Conclusions}
In this paper, an exact analytic interior solution to the four-dimensional effective Einstein field equations
for a non-uniform stellar structure was found in the context of the Randall-Sundrum braneworld.
By using this analytic solution, an exhaustive analysis of the extra-dimensional consequences on realistic stellar interiors was developed,
finding strong evidence in favor of the hypothesis that compactness and pressure are always reduced due to bulk effects on stellar configurations.
The interior solution was constructed from a well known
spherically symmetric stellar solution for a perfect fluid in GR, namely, the Tolman IV solution.
In order to produce the braneworld version containing the anisotropic effects necessary for realistic stellar models,
the MGD approach was used to modify the perfect fluid solution represented by the Tolman IV solution,
thus generating exact analytic expressions for the Weyl fields on the brane, namely, the scalar ${\cal U}$ and anisotropy ${\cal P}$.
Since the Tolman IV solution is a solution to the $4D$ Einstein field equations in GR, it removes all the
non-local sources from the geometric deformation $f(\nu,\rho,p)$ in the generic expression given by Eq. (\ref{edlrwss}), leaving only the high-energy terms shown explicitly in Eq. (\ref{fsolutionmin}), which are quadratic in the density and pressure. Hence the higher the density, the more geometric deformation will be produced, and as a consequence the induced anisotropy will be higher for more compact
distributions, as can easily be seen through Eq. (\ref{tolmanP}). Finally, we want to stress that, while it is true that both the pressure and density in (\ref{tolmandensity}) and (\ref{tolmanpressure}) are modified through the change $A\rightarrow\;A(\sigma)$, their physical acceptability is not lost, given that it is inherited from the Tolman IV solution. In other words, the deformation undergone by the density and pressure is not enough to jeopardize the physical acceptability of the BW system.
On the other hand, since we lack an exact exterior solution,
the far-field correction to the Newtonian potential in the BW was used to construct an exterior geometry associated with this potential.
In this approximation, it was found that bulk effects always reduce the compactness of stellar configurations, in agreement with the hypothesis conjectured in Ref. \cite{germ}.
The analytic four-dimensional solution developed in this paper represents the point of view of a
brane observer; hence it is not known whether the bulk eventually constructed from it will be free of singularities.
Despite the above, it was found that the MGD principle represents a powerful tool in the search for analytic
solutions in the braneworld context. Hence it could be useful in
the study of the five-dimensional geometry. Indeed, we could use the Campbell-Magaard theorems \cite{campbell, sss}
to extend the generic solution represented by the MGD metric in (\ref{mgdmetric}) through the bulk, locally at least.
One could also investigate the consequences of the MGD metric in (\ref{mgdmetric}) on the five-dimensional
bulk by introducing an extra-dimension $y$ dependence in the MGD metric, similar to the study developed in Ref. \cite{kanti2002}. All this certainly deserves further investigation \cite{kan}.
\section{Introduction}
The study of fundamental forces such as gravity is one of the main subjects of elementary particle physics, which helps us understand nature and hence the laws of physics. As we know, gravity in (2+1)-dimensional space-time is a very important topic of theoretical physics, usually considered as a toy model. These studies began in the early 1980s \cite{1,2,3,4}, and with the discovery of the BTZ \cite{5} and MTZ \cite{6} black holes it became clear that three-dimensional solutions offer considerable advantages. The charged black hole with a scalar field in (2+1) dimensions was already studied in Ref. \cite{7}. In that case, the scalar field couples to gravity, and it couples to itself through the self-interacting potential. A similar black hole with a rotational parameter was then constructed in Ref. \cite{8} and developed in Ref. \cite{9}. In that case, the rotating charged hairy black hole in (2+1) dimensions was considered to study the Klein-Gordon equation \cite{9-1}. Also, some thermodynamical studies of this kind of black hole may be found in Refs. \cite{10, 11}.
It has been shown that the entropy of large black holes is proportional to the horizon area \cite{12, 13}. It is important to find out what happens when the black hole size is reduced, where thermal fluctuations of statistical physics yield logarithmic corrections. These are indeed first-order corrections, valid when the canonical ensemble is stable \cite{14}. The general form of the corrections and their dependence on
the horizon area is a universal feature, which appears in almost all approaches to quantum gravity. The main differences between the various approaches are in the correction coefficients. Several works have already been done to find the effect of the first-order correction on small black hole thermodynamics \cite{NPB}.
The logarithmic corrections to black hole entropy were already obtained by counting microstates in non-perturbative quantum gravity \cite{15, 16}, and also by using the Cardy formula \cite{17,18,19}. Moreover, there are other methods where the corrected entropy is given by \cite{20,21,22,23,24,25},
\begin{equation}\label{1}
S= S_0 -\frac{\alpha}{2} \ln{S_{0}T_{0}^{2}},
\end{equation}
where $T_{0}$ is the Hawking temperature, which will be corrected later, and $\alpha$ is a dimensionless parameter, which was introduced for the first time in Ref. \cite{26}; also, $S_0$ is the Bekenstein-Hawking (BH) entropy. We can trace the effect of the correction using $\alpha$ and reproduce ordinary thermodynamics when $\alpha = 0$. There is also another logarithmically corrected entropy, given by \cite{27},
\begin{equation}\label{2}
S= S_0 -\frac{1}{2} \ln{C_{0} T_{0}^{2}},
\end{equation}
where $C_{0}$ is the ordinary specific heat, which will also be corrected due to the thermal fluctuations.
It is interesting to check whether relation (\ref{1}) with $\alpha=1$ and relation (\ref{2}) yield similar results for the thermodynamic quantities. This was already applied to asymptotically $AdS$ black holes \cite{28}. In Ref. \cite{29} the modified thermodynamics of a black saturn was studied by using the corrected entropy (\ref{2}). Then, the corrected thermodynamics of a charged dilatonic black saturn was investigated in Ref. \cite{30} by using both (\ref{1}) and (\ref{2}), and similar results were found from both relations.\\
There is also another logarithmically corrected entropy, given by,
\begin{equation}\label{S}
S= S_0 -\frac{\alpha}{2} \ln{S_{0}},
\end{equation}
which will be considered in this paper. If we assume $\alpha=3$, we recover the corrected entropy given in \cite{15,20}, where the rotating BTZ black hole in the Chern-Simons formulation \cite{15} and the four-dimensional Schwarzschild black hole in loop quantum gravity \cite{20} were considered.
In general, it is possible to state that all the different approaches to quantum gravity generate
logarithmic corrections (at the first-order approximation) to the area-entropy law of a black hole. It should be noted that, even though
the leading-order corrections to this area-entropy law are logarithmic, the
coefficient of such a term depends on the approach to quantum gravity. Since
the values of the coefficients depend on the chosen approach to quantum
gravity, we can say that such terms are generated from quantum fluctuations of the space-time
geometry rather than from matter fields on that space-time. Hence, we consider the general form given by (\ref{S}), including the free parameter $\alpha$, which depends on the given theory. However, it is also possible to consider higher-order corrections to the black hole entropy \cite{h1,h2,h3,h4,h5,h6}.\\
If the BH entropy is corrected, then the other thermodynamic quantities are corrected as well \cite{31}.
As we know, the thermodynamic stability was analyzed in Refs. \cite{29,30,31,32} under the assumption of fixed temperature. However, it is possible to consider the corrected Hawking temperature as \cite{33,34,35,36,37,38},
\begin{equation}\label{T}
T= T_0 (1+\frac{\alpha}{2 S_0}),
\end{equation}
where $T_0$ is the ordinary Hawking temperature. We note here that the phase transitions and critical points of black holes have been investigated by several researchers \cite{PV}. The behavior of a black hole compared to a van der Waals fluid was studied in Refs. \cite{39,40,41,42,42-1}. In that case, holographic principles are used to study the charged hairy black hole via a van der Waals fluid.
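One simple way to motivate Eq. (\ref{T}), which we sketch here under the assumption that the first law $dM=T\,dS$ continues to hold for the corrected quantities, is to differentiate Eq. (\ref{S}) with respect to $S_{0}$,
\begin{equation}
T=\frac{dM}{dS}=\frac{dM}{dS_{0}}\left(\frac{dS}{dS_{0}}\right)^{-1}
=T_{0}\left(1-\frac{\alpha}{2 S_{0}}\right)^{-1}
\simeq T_{0}\left(1+\frac{\alpha}{2 S_{0}}\right),
\end{equation}
which reproduces Eq. (\ref{T}) to first order in $\alpha$.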
We use the above motivations to investigate the logarithmically corrected entropy and temperature of a charged hairy black hole. It is important to note that such thermal fluctuations may be considered as a quantum gravity effect \cite{q1,q2,q3}.\\
All the above information motivates us to organize the paper as follows. In section 2, we review charged hairy black holes. In section 3, we consider the effects of quantum corrections on the thermodynamics of charged BTZ black holes and study the effect of the first-order correction on the thermodynamic quantities and the stability of the black holes. In section 4, we consider an uncharged hairy AdS black hole and derive the corrected thermodynamics. We also study the global and local stability of the uncharged hairy AdS black hole.
In section 5, we discuss a conformally dressed AdS black hole and derive the corrected thermodynamics due to the thermal fluctuations. We also study the global and local stability of the conformally dressed AdS black hole. Finally, we summarize our results with concluding remarks in the last section.
\section{Charged hairy black holes in (2+1) dimensions}
In this section, we consider the solution of Einstein-Maxwell theory minimally coupled to a scalar field in (2+1) dimensions. A hairy black hole is such a solution, and there is an extensive literature on the corresponding theory \cite{44,45,46,47,48,49,50,51,52,53,54,55}.
On the other hand, the scalar field may be coupled minimally or nonminimally to gravity. Here, the self-interacting potential $V(\varphi)$ also plays an important role in such a model. The above-mentioned coupled scalar field leads us to write the corresponding action as,
\begin{equation}
S = \int{d^{3} x \sqrt{-g}\left[ \frac{R}{2} - \frac{1}{2} g^{\alpha \beta} \nabla_{\alpha} \varphi \nabla_{\beta} \varphi - \frac{1}{2} \varepsilon R \varphi^{2} - V(\varphi) - \frac{1}{8} F_{\alpha \beta} F^{\alpha \beta} \right]},
\end{equation}
where $\varepsilon = \frac{1}{8}$ is a constant which determines the coupling strength between gravity and the scalar field. The metric function is given by the following expression,
\begin{equation}\label{5}
f(r) = \frac{r^{2}}{\ell^{2}} + 3 \beta -\frac{ Q^{2} }{ 4} + (2 \beta - \frac{ Q^{2} }{ 9}) \frac{B}{r} - Q^2 (\frac{1}{2} + \frac{B}{3 r}) \ln(r),
\end{equation}
where $Q$ is the electric charge, and $\beta$ is a combination of the black hole charge and mass,
\begin{equation}
\beta = \frac{1}{3} (\frac{Q^2}{4} - M).
\end{equation}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=60 mm]{1a.eps}\includegraphics[width=60 mm]{1b.eps}\\
\includegraphics[width=60 mm]{1c.eps}\includegraphics[width=60 mm]{1d.eps}
\end{array}$
\end{center}
\caption{Horizon structure of charged hairy black holes in (2+1) dimensions for $\ell=1$. (a) $Q=B=1$ (b) $Q=0$, $B=1$ (c) $Q=B=0$ (d) $Q=1$, $B=0$.}
\label{fig:1}
\end{figure}
The constant $\ell$ is related to the cosmological constant by $\Lambda = - \frac{1}{\ell^2}$. It is negative because smooth black hole horizons can exist only in the presence of a negative cosmological constant in (2+1) dimensions, and $r$ denotes the radial coordinate. The relation between $B$ and the scalar field is as follows,
\begin{equation}
\varphi (r) = \pm \sqrt{\frac{8 B}{r + B}}.
\end{equation}
Graphically, we can see that there are two horizons, depending on the values of $Q$, $M$, and $B$. In Fig. \ref{fig:1} (a) we represent the typical behavior for $Q=B=\ell=1$ and see that the extremal solution is given by $M=0.467$. For $M<0.467$ there is no event horizon and we have only a bare singularity. In the case of $M=1.2$ the event horizon is obtained as $r_{+}=1.4$. In Figs. \ref{fig:1} (b) and (c) we can see that the uncharged black hole has only one event horizon.\\
The black hole mass is given by,
\begin{equation}\label{mass}
M =\frac{18r_{+}^{3}+B\ell^{2}Q^{2}-3(3r_{+}+2B)\ell^{2}Q^{2}\ln(r_{+})}{6\ell^{2}(3r_{+}+2B)}.
\end{equation}
In Fig. \ref{fig:2} we show the behavior of the black hole mass as a function of the event horizon radius for the four situations of Fig. \ref{fig:1}.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=70 mm]{2.eps}
\end{array}$
\end{center}
\caption{Typical behavior of the black hole mass for $\ell=1$. (solid red) $Q=B=1$ (dotted blue) $Q=0$, $B=1$ (dash dotted orange) $Q=B=0$ (dashed green) $Q=1$, $B=0$.}
\label{fig:2}
\end{figure}
We will use this relation between the black hole mass and the event horizon in the thermodynamics study. It will help us to study the modified thermodynamics due to the first-order correction of the black hole entropy and temperature.
\section{Corrected thermodynamics for charged BTZ black hole}
If we set $B = 0$ in equation (\ref{5}), we obtain a black hole solution without a scalar field, and equation (\ref{5}) reduces to
\begin{equation}\label{8}
f(r) = \frac{r^{2}}{\ell^{2}} - M -\frac{ Q^{2} }{ 2} \ln(r).
\end{equation}
This is indeed the charged BTZ black hole \cite{BTZ1,BTZ2,BTZ3}. In Fig. \ref{fig:1} (d) we can see the horizon structure of this case. For $Q=1$ the extremal case corresponds to $M=0.6$, while for $M=1.4$ the event horizon is located at about $r_{+}=1.2$.
Therefore, the black hole mass is obtained as follows,
\begin{equation}\label{9}
M =\frac{r^{2}_{+} }{\ell^{2}} - \frac{ Q^{2} }{ 2} \ln(r_{+}),
\end{equation}
which corresponds to the dashed green line of Fig. \ref{fig:2}. Now, using equations (\ref{8}) and (\ref{9}), the Hawking temperature is calculated as,
\begin{equation}
T_{0} = \frac{1}{4 \pi}\left.\frac{df}{dr}\right|_{r = r_{+}} = - \frac{Q^2}{8 \pi r_{+}} + \frac{r_{+}}{2 \pi \ell^2}.
\end{equation}
The charged BTZ black hole entropy is given by,
\begin{equation}\label{12}
S_{0} = 4 \pi r_{+}.
\end{equation}
Here, the negative cosmological constant can be interpreted as a positive thermodynamic pressure,
\begin{equation}
P = - \frac{\Lambda}{8 \pi} = \frac{3}{8 \pi \ell^{2}}.
\end{equation}
Utilizing the relations (\ref{S}) and (\ref{T}), the first-order corrected entropy and temperature for the charged BTZ black hole are obtained as,
\begin{equation}
S= 4 \pi r_{+} - \frac{\alpha}{2} \ln{ \left( 4 \pi r_{+} \right) },
\end{equation}
and,
\begin{equation}
T= - \frac{Q^2}{8 \pi r_{+}} + \frac{r_{+}}{2 \pi \ell^2} - \frac{ \alpha Q^2}{(8 \pi r_{+})^2 } + \frac{\alpha}{(4 \pi \ell)^{2}}.
\end{equation}
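One can check that the corrected temperature above is consistent with the compact form $T = T_{0}\left(1 + \frac{\alpha}{2 S_{0}}\right)$; assuming this form of the correction, a short symbolic verification (our own sketch) reads:
\begin{verbatim}
# Symbolic check (sketch): the corrected T above equals
# T0*(1 + alpha/(2*S0)) with S0 = 4*pi*r and T0 as above.
import sympy as sp

r, l, Q, alpha = sp.symbols('r l Q alpha', positive=True)
S0 = 4*sp.pi*r
T0 = -Q**2/(8*sp.pi*r) + r/(2*sp.pi*l**2)
T  = (-Q**2/(8*sp.pi*r) + r/(2*sp.pi*l**2)
      - alpha*Q**2/(8*sp.pi*r)**2 + alpha/(4*sp.pi*l)**2)
print(sp.simplify(T - T0*(1 + alpha/(2*S0))))  # -> 0
\end{verbatim}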
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=60 mm]{3a.eps}\includegraphics[width=60 mm]{3b.eps}
\end{array}$
\end{center}
\caption{Typical behavior of the corrected Hawking temperature of charged BTZ black hole for $Q=\ell=1$. (a) in terms of $r_{+}$ (b) in terms of correction coefficient.}
\label{fig:3}
\end{figure}
The plots of Fig. \ref{fig:3} show that the Hawking temperature is an increasing function of $\alpha$, which means that thermal fluctuations increase the value of the Hawking temperature.\\
From equation (\ref{9}), the corrected physical mass for the charged BTZ black hole is obtained as,
\begin{equation}
M = \left(\frac{ 8 \pi r_{+} - \alpha \ln (4 \pi r_{+} ) }{ 8 \pi \ell} \right)^{2} + \frac{Q^{2}}{2} \ln{(8 \pi)} - \frac{Q^{2}}{2} \ln{(8 \pi r_{+} - \alpha \ln (4 \pi r_{+} ) )}.
\end{equation}
This expression coincides with equation (\ref{9}) after the replacement $r_{+} \rightarrow \frac{S}{4\pi} = r_{+} - \frac{\alpha}{8\pi}\ln(4\pi r_{+})$, as verified symbolically below. We plot the corrected mass in Fig. \ref{fig:4}, in terms of the horizon radius in Fig. \ref{fig:4} (a) and in terms of the correction coefficient in Fig. \ref{fig:4} (b). For the selected values ($Q=\ell=1$) we know that $r_{+}\geq0.5$ (see Fig. \ref{fig:1} (d)), hence the black hole mass is a decreasing function of $\alpha$.\\
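A minimal symbolic check of this substitution (our sketch, not part of the original text):
\begin{verbatim}
# Sketch: substituting r -> S/(4*pi) into M(r) of Eq. (9)
# reproduces the corrected mass expression above.
import sympy as sp

r, l, Q, alpha = sp.symbols('r l Q alpha', positive=True)
A = 8*sp.pi*r - alpha*sp.log(4*sp.pi*r)  # A = 2S, so S/(4*pi) = A/(8*pi)
M_sub = (A/(8*sp.pi))**2/l**2 - (Q**2/2)*sp.log(A/(8*sp.pi))
M_txt = ((A/(8*sp.pi*l))**2 + (Q**2/2)*sp.log(8*sp.pi)
         - (Q**2/2)*sp.log(A))
print(sp.simplify(sp.expand_log(M_sub - M_txt, force=True)))  # -> 0
\end{verbatim}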
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=60 mm]{4a.eps}\includegraphics[width=60 mm]{4b.eps}
\end{array}$
\end{center}
\caption{Typical behavior of the charged BTZ black hole mass for $Q=\ell=1$. (a) in terms of $r_{+}$ (b) in terms of correction coefficient.}
\label{fig:4}
\end{figure}
The first law of thermodynamics for the black hole reads,
\begin{equation}\label{fl}
dM = T dS + V dP + \phi dQ,
\end{equation}
where $V$ is the thermodynamic volume, and $\phi$ is the electric potential. In that case, using the corrected entropy and temperature \cite{EPJC}, one can obtain the Helmholtz free energy as,
\begin{eqnarray}
F = -\int{S dT} = &-& \frac{Q^2}{2} \ln(r_+) - \frac{r^{2}_{+} }{\ell^{2}}\nonumber\\
&+&\frac{\alpha}{16 \pi^{2} \ell^2 r_{+}}[\pi Q^2 \ell^2 (1-\ln(4 \pi r_{+})) + 4 \pi r^{2}_{+} \ln(4 \pi r_{+}) - r^{2}_{+} ]\nonumber\\
&-& \frac{\alpha^{2} Q^2 }{(16 \pi)^2 r^{2}_{+}}[1 + 2 \ln(4 \pi r_{+})].
\end{eqnarray}
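The integration $F=-\int S\, dT$ can be carried out in closed form; a symbolic sketch of ours (the result agrees with the expression above up to a possible integration constant) is:
\begin{verbatim}
# Sketch: F = -int S dT = -int S (dT/dr) dr for the corrected
# S(r) and T(r) given above.
import sympy as sp

r, l, Q, alpha = sp.symbols('r l Q alpha', positive=True)
S = 4*sp.pi*r - (alpha/2)*sp.log(4*sp.pi*r)
T = (-Q**2/(8*sp.pi*r) + r/(2*sp.pi*l**2)
     - alpha*Q**2/(8*sp.pi*r)**2 + alpha/(4*sp.pi*l)**2)
F = -sp.integrate(S*sp.diff(T, r), r)
print(sp.simplify(F))
\end{verbatim}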
Using the definition $E = F + TS$, the corrected internal energy is calculated as,
\begin{eqnarray}
E &=& \frac{r^{2}_{+} }{\ell^{2}} - \frac{Q^2}{2}( 1 + \ln(4 \pi r_+) ) - \frac{\alpha}{16 \pi^2 \ell^2 } (1 - 4 \pi) r_{+}\nonumber\\
&+& \frac{\alpha^{2} }{(16 \pi \ell)^2 r^{2}_{+}}[\ell^2 Q^2(1 - 2 \ln(4 \pi r_{+})) - 8 r^{2}_{+} \ln(4 \pi r_{+})].
\end{eqnarray}
Also, one can obtain the enthalpy as follows,
\begin{eqnarray}
H &=& E + P V = \frac{r^{2}_{+} }{\ell^{2}} - \frac{r^{3}_{+} }{2 \ell^{2}} - \frac{Q^2}{2}( 1 + \ln(4 \pi r_+) )- \frac{\alpha}{16 \pi^2 \ell^2 } (1 - 4 \pi) r_{+}\nonumber\\
&+& \frac{\alpha^{2} }{(16 \pi \ell)^2 r^{2}_{+}}[\ell^2 Q^2(1 - 2 \ln(4 \pi r_{+})) - 8 r^{2}_{+} \ln(4 \pi r_{+})].
\end{eqnarray}
Finally, the quantum corrected Gibbs free energy is given by,
\begin{eqnarray}\label{G}
G &=& H - ST = - \frac{r^{2}_{+} }{\ell^{2}} - \frac{r^{3}_{+} }{2 \ell^{2}} - \frac{Q^2}{2} \ln(4 \pi r_+)\nonumber\\
&+& \frac{\alpha}{16 \pi^2 \ell^2 r_{+}} [\pi \ell^2 Q^2 (1 - \ln(4 \pi r_{+})) + 4 \pi r^{2}_{+} \ln(4 \pi r_{+}) - r^{2} _{+}]\nonumber\\
&-& \frac{\alpha^{2} Q^2 }{(16 \pi r_{+})^2} [1 + 2 \ln(4 \pi r_{+})].
\end{eqnarray}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{5a.eps}\includegraphics[width=50 mm]{5b.eps}\\
\includegraphics[width=50 mm]{5c.eps}\includegraphics[width=50 mm]{5d.eps}
\end{array}$
\end{center}
\caption{Typical behavior of (a) Helmholtz free energy (b) internal energy (c) enthalpy (d) Gibbs free energy in terms of $r_{+}$ for the charged BTZ black hole with $Q=\ell=1$.}
\label{fig:5}
\end{figure}
In the plots of Fig. \ref{fig:5} we show the typical behavior of the Helmholtz free energy, internal energy, enthalpy and Gibbs free energy in order to identify the effect of the quantum correction. From Fig. \ref{fig:5} (a) one finds that the Helmholtz free energy is a decreasing function of the correction coefficient. In Fig. \ref{fig:5} (b) we can see that the internal energy has a minimum corresponding to the extremal black hole; hence, for $r_{+}\geq0.5$ the internal energy is an increasing function of the correction parameter. Fig. \ref{fig:5} (c) illustrates that the corrected enthalpy has a maximum (with negative value) and that the enthalpy is increased by the thermal fluctuations; the extremum of the enthalpy may signal the stability of the system. Finally, Fig. \ref{fig:5} (d) represents the variation of the Gibbs free energy with the event horizon, and we can see that the net value of the Gibbs free energy increases for positive $\alpha$.\\
We will discuss the critical points and the stability of the charged BTZ black hole using the Gibbs free energy and the specific heat in the next subsection.
\subsection{The critical points and stability of charged BTZ black hole}
Now, we are going to consider the stability conditions for the corresponding system. For this purpose, we need two quantities that play an important role in the study of the stability of the system: the Gibbs free energy and the heat capacity.
The critical points $(r_{+,c}, P_{c}, T_{c})$ in the phase transition of the charged BTZ black hole with the corrected entropy and temperature are obtained from the following conditions,
\begin{equation}\label{Con}
\frac{\partial T}{\partial r_{+}} = \frac{\partial ^{2}T}{\partial r_{+}^{2}} = 0,
\end{equation}
which yield the following relations,
\begin{equation}
\frac{4 \pi r_{+} \ell^{2} Q^{2} + \alpha\ell^{2} Q^{2} +16 \pi r_{+}^{3}}{32 \pi^{2} \ell^{2} r_{+}^{3}} = 0,
\end{equation}
and,
\begin{equation}
\frac{ 8 \pi r_{+} + 3 \alpha }{32 \pi^{2} r_{+}^{4} } = 0.
\end{equation}
Using these conditions, one obtains the critical values,
\begin{equation}
r_{+,c} = \frac{-3 \alpha }{8\pi},
\end{equation}
and,
\begin{equation}
P_{c} = \frac{2 \pi Q^{2}}{9 \alpha^{2} }, \qquad T_{c} = \frac{4 \pi Q^{2}}{27 \alpha^{2}},
\end{equation}
and the expression for the specific volume is given by,
\begin{equation}
\nu_{c} = 6 \frac{V}{A} = \frac{3 \alpha }{8\pi}.
\end{equation}
Now, the critical ratio is calculated as,
\begin{equation}
\frac{P_{c} \nu_{c} }{T_{c}} = \frac{3 \alpha}{32 \pi}.
\end{equation}
As we can see, $\nu_{c}$ and $\frac{P_{c} \nu_{c}}{T_{c}}$ increase while $T_{c}$ and $P_{c}$ decrease with increasing $\alpha$. We also observe that when $\alpha$ is negative, $P_{c}$ and $T_{c}$ do not change (they depend only on $\alpha^{2}$), while $\nu_{c}$ and $\frac{P_{c} \nu_{c}}{T_{c}}$ decrease.
If $\alpha = 4 \pi$, the above ratio reduces to the usual relation $\frac{P_{c} \nu_{c} }{T_{c}} = \frac{3 }{8}$, corresponding to $\nu_{c} = \frac{3}{2}$ for the charged BTZ black hole. The critical radius can also be verified symbolically, as sketched below.\\
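A short symbolic sketch of ours (note that $r_{+,c}>0$ requires negative $\alpha$):
\begin{verbatim}
# Sketch: the critical radius follows from d^2T/dr^2 = 0 for
# the corrected temperature T(r) above.
import sympy as sp

r, l, Q, alpha = sp.symbols('r l Q alpha')
T = (-Q**2/(8*sp.pi*r) + r/(2*sp.pi*l**2)
     - alpha*Q**2/(8*sp.pi*r)**2 + alpha/(4*sp.pi*l)**2)
print(sp.solve(sp.diff(T, r, 2), r))  # -> [-3*alpha/(8*pi)]
\end{verbatim}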
There is another method to study the critical behavior in the extended phase space, which is based on the Gibbs free energy \cite{57,58}.
When the Gibbs free energy is negative $(G < 0)$, the system has global stability. To discuss such global stability of the black hole, we need the Gibbs free energy in the presence of the quantum corrected entropy and temperature, which is given by equation (\ref{G}). In Fig. \ref{fig:5} (d) we can see that the Gibbs free energy is entirely negative for the selected values of the correction parameter, hence global stability is established even in the presence of the logarithmic correction.\\
The specific heat is an important measurable physical quantity that also determines the local thermodynamic stability of the system. It is related to the corrected entropy $S$ and temperature $T$ by,
\begin{equation}
C = T \left(\frac{dS}{dT} \right).
\end{equation}
For $C > 0$ ($C < 0$) the black hole is in a stable (unstable) phase. Moreover, $C = 0$ together with asymptotic behavior corresponds to the phase transition of a van der Waals fluid. Using the above equation, we obtain the specific heat as,
\begin{equation}
C = \frac{(\alpha - 8 \pi r_{+} )}{2}\left[ \frac{ \ell^{2}Q^{2}(16 \pi r_{+} + \alpha) - 48 \pi r^{3}_{+} - 4 \alpha r^{2}_{+}}{\ell^{2}Q^{2}(8 \pi r_{+} + \alpha) + 64 \pi r^{3}_{+} } \right].
\end{equation}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=70 mm]{6.eps}
\end{array}$
\end{center}
\caption{Typical behavior of specific heat of the charged BTZ black hole mass with $Q=\ell=1$.}
\label{fig:6}
\end{figure}
In Fig. \ref{fig:6}, we observe the behavior of the heat capacity for different values of the correction parameter $\alpha$. Here, we can see the effects of the logarithmically corrected entropy and temperature on the stability of the charged BTZ black hole. Small positive values of $\alpha$ have no significant effect on the specific heat, and hence on the stability of the black hole. However, for negative values of the correction coefficient we observe a phase transition, which corresponds to the asymptotic behavior (dashed green line of Fig. \ref{fig:6}).
\section{Corrected thermodynamics for uncharged hairy AdS black hole}
Now, if we set $Q = 0$ in equation (\ref{5}), the metric function reduces to the following expression,
\begin{equation}
f(r) = - M(1 +\frac{2 B}{3 r} ) + \frac{r^{2}}{\ell^{2}},
\end{equation}
where the physical mass is,
\begin{equation}\label{31}
M =\frac{r^{2}_{+} }{\ell^{2} (1 +\frac{2 B}{3 r_{+}} )} .
\end{equation}
The horizon structure of this case is plotted in Fig. \ref{fig:1} (b).
The thermodynamic volume can be calculated as,
\begin{equation}
V =\left( \frac{\partial M }{\partial P} \right) _{S} = - \frac{ 8 \pi r_{+} ^{3}}{(3 r_{+} + 2 B)}.
\end{equation}
The corresponding temperature is then,
\begin{equation}
T_{0} = \frac{9 S_0^2(S_0 + 4 \pi B )}{8 \pi^2 \ell^2 (3 S_0 + 8 \pi B )^2},
\end{equation}
and the entropy of the uncharged hairy black hole is again given by (\ref{12}).
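The temperature above is consistent with $T_{0} = \partial M/\partial S_{0}$ applied to the mass (\ref{31}) with $S_{0} = 4\pi r_{+}$; assuming that definition, a symbolic check (our sketch) reads:
\begin{verbatim}
# Sketch: T0 = dM/dS0 with M from Eq. (31) and S0 = 4*pi*r.
import sympy as sp

r, l, B = sp.symbols('r l B', positive=True)
M  = r**2 / (l**2 * (1 + 2*B/(3*r)))
T0 = sp.diff(M, r) / (4*sp.pi)        # dM/dS0 = (dM/dr)/(4*pi)
S0 = 4*sp.pi*r
target = 9*S0**2*(S0 + 4*sp.pi*B) / (8*sp.pi**2*l**2
                                     * (3*S0 + 8*sp.pi*B)**2)
print(sp.simplify(T0 - target))       # -> 0
\end{verbatim}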
Utilizing the relations (\ref{S}) and (\ref{T}), the first-order corrected entropy and temperature for the uncharged hairy AdS black hole are computed as,
\begin{equation}
S= 4 \pi r_{+} - \frac{\alpha}{2} \ln{\left( 4 \pi r_{+} \right)},
\end{equation}
and,
\begin{equation}
T= \frac{ 9 S_0 (2 S_{0} + \alpha )(S_0 + 4 \pi B )}{16 \pi^2 \ell^2 (3 S_0 + 8 \pi B )^2}.
\end{equation}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=60 mm]{7a.eps}\includegraphics[width=60 mm]{7b.eps}
\end{array}$
\end{center}
\caption{Typical behavior of the corrected Hawking temperature of uncharged hairy AdS black hole for $B=\ell=1$. (a) in terms of $r_{+}$ (b) in terms of correction coefficient.}
\label{fig:7}
\end{figure}
In Fig. \ref{fig:7} (a) we show the shift of the Hawking temperature due to the quantum correction. From Fig. \ref{fig:7} (b) we can see that the Hawking temperature is an increasing function of the correction coefficient, which means that the Hawking temperature of the uncharged hairy AdS black hole is enhanced by the thermal fluctuations.\\
From equation (\ref{31}), the corrected physical mass for the uncharged hairy AdS black hole is,
\begin{equation}
M =\frac{3 \left( 8 \pi r_{+} - \alpha \ln (4 \pi r_{+} ) \right)^{3} }{ 64 \pi ^{2}\ell^{2} \left( 24 \pi r_{+} - 3 \alpha \ln (4 \pi r_{+} ) + 16 \pi B \right)},
\end{equation}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=60 mm]{8a.eps}\includegraphics[width=60 mm]{8b.eps}
\end{array}$
\end{center}
\caption{Typical behavior of uncharged hairy AdS black hole mass for $B=\ell=1$. (a) in terms of $r_{+}$ (b) in terms of correction coefficient.}
\label{fig:8}
\end{figure}
In Fig. \ref{fig:8} (a) we show the shift of the black hole mass due to the quantum correction. Fig. \ref{fig:8} (b) shows that the uncharged hairy AdS black hole mass is a strictly decreasing function of the correction parameter. This means that there is an upper bound on the correction coefficient for the mass to remain positive, which is about $\alpha\approx10$ for $r_{+}\approx1$.\\
The first law of thermodynamics for this black hole is given by equation (\ref{fl}).
One can then obtain the Helmholtz free energy for the uncharged hairy AdS black hole, which takes the following form,
\begin{eqnarray}
F &=& \frac{2 B}{(12 \pi \ell)^2} \left[ 24 \pi B \ln(u) - u + \frac{3 (8 \pi B)^2}{u} -\frac{(8 \pi B)^3}{2 u^2} \right]\nonumber\\
&+&\frac{\alpha B}{ 3 (4 \pi \ell)^2}\left[ \frac{(\ln(u))^2}{2} - \ln(u) + \frac{(8 \pi B)^2}{4 u^2} [1 - 2 \ln(u)] + 16 \pi B \frac{\ln(u)}{u}\right]\nonumber\\
&+&\frac{\alpha^2 B}{ 2 (4 \pi \ell)^2}\left[ \frac{(2 \pi B)}{ u^2} - \frac{1}{u}(1 + \ln(u)) + (4 \pi B)\frac{\ln(u)}{u^{2}} \right],
\end{eqnarray}
where we have defined $u=3S_{0}+8\pi B$.\\
The internal energy $ E $ can be obtained as,
\begin{eqnarray}
E &=& \frac{2 B}{(12 \pi \ell)^2}\left[ 24 \pi B \ln(S_{0}) - u + \frac{3 (8 \pi B)^2}{u} -\frac{(8 \pi B)^3}{2 u^2} + 81 \frac{S_{0}^{3}}{ B} \left( \frac{( S_0 + 4 \pi B)}{u^{2}}\right) \right]\nonumber\\
&+& \frac{\alpha B}{ 6 (4 \pi \ell)^2} \left[ (\ln(u))^2 - [ \frac{54 S_{0} }{B} (S_{0}^{2} + 4 \pi B) + ( 8 \pi B)^2] \frac{\ln(u)}{u^2} - 2 \ln(u) + (32 \pi B ) \frac{\ln(u)}{u}\right]\nonumber\\
&+& \frac{\alpha B}{ 6 (4 \pi \ell)^2} \left[ 54 \frac{S_{0}^{2}}{ B} \left( \frac{( S_0 + 4 \pi B)}{u^{2}}\right)+ 32 \frac{( \pi B)^2}{ u^2} \right]\nonumber\\
&+&\frac{\alpha^2 B}{ 2 (4 \pi \ell)^2}\left[ \frac{(2 \pi B)}{ u^2} - \frac{1}{u}(1 + \ln(u)) + (4 \pi B)\frac{\ln(u)}{u} - \frac{9 S_0}{B} (S_0 + 4 \pi B) \frac{\ln(S_0 )}{u^2} \right].
\end{eqnarray}
Moreover, one can obtain the enthalpy as follows,
\begin{eqnarray}
H &=& \frac{2 B}{(12 \pi \ell)^2}\left[ 24 \pi B \ln(S_{0}) - (3 S_0 + 9 \frac{S_{0}^{3}}{16 \pi B} + 8 \pi B) + \frac{3 (8 \pi B)^2}{(3 S_0 + 8 \pi B)} \right]\nonumber\\
&+&\frac{2 B}{(12 \pi \ell)^2} \left[ -\frac{(8 \pi B)^3}{2 u^2} + 81 \frac{S_{0}^{3}}{ B} \frac{( S_0 + 4 \pi B)}{u^{2}} \right]\nonumber\\
&+&\frac{\alpha B}{ 6 (4 \pi \ell)^2} \left[ (\ln(u))^2 - [ \frac{54 S_{0} }{B} (S_{0}^{2} + 4 \pi B) + ( 8 \pi B)^2] \frac{\ln(u)}{u^2} - 2 \ln(u) + 32 \pi B \frac{\ln(u)}{u}\right]\\
&+&\frac{\alpha B}{ 6 (4 \pi \ell)^2} \left[32 \frac{( \pi B)^2}{ u^2} + 54 \frac{S_{0}^{2}}{ B} \frac{( S_0 + 4 \pi B)}{u^{2}} \right]\nonumber\\
&+&\frac{\alpha^2 B}{ 2 (4 \pi \ell)^2}\left[ \frac{2 \pi B}{ u^2} - \frac{1}{u}(1 + \ln(u)) + 4 \pi B \frac{\ln(u)}{u} - \frac{9 S_0}{B} (S_0 + 4 \pi B) \frac{\ln(S_0 )}{u^2} \right].
\end{eqnarray}
Finally, the corrected Gibbs free energy is given by,
\begin{eqnarray}
G &=& \frac{2 B}{(12 \pi \ell)^2} \left[ 24 \pi B \ln(u) - (3 S_0 + 9 \frac{S_{0}^{3}}{16 \pi B} + 8 \pi B) + \frac{3 (8 \pi B)^2}{u} -\frac{(8 \pi B)^3}{2 u^2} \right]\nonumber\\
&+& \frac{\alpha B}{ 3 (4 \pi \ell)^2}\left[ \frac{(\ln(u))^2}{2} - \ln(u) + \frac{(8 \pi B)^2}{4 u^2} [1 - 2 \ln(u)] + 16 \pi B \frac{\ln(u)}{u}\right]\nonumber\\
&+& \frac{\alpha^2 B}{ 2 (4 \pi \ell)^2}\left[ \frac{(2 \pi B)}{ u^2} - \frac{1}{u}(1 + \ln(u)) + (4 \pi B)\frac{\ln(u)}{u} \right].
\end{eqnarray}
We analyze the above quantities graphically in order to extract the effects of the quantum correction (see Fig. \ref{fig:9}).
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{9a.eps}\includegraphics[width=50 mm]{9b.eps}\\
\includegraphics[width=50 mm]{9c.eps}\includegraphics[width=50 mm]{9d.eps}
\end{array}$
\end{center}
\caption{Typical behavior of (a) Helmholtz free energy (b) internal energy (c) enthalpy (d) Gibbs free energy for uncharged hairy AdS black hole with $B=\ell=1$.}
\label{fig:9}
\end{figure}
In Fig. \ref{fig:9} (a) we draw the Helmholtz free energy in terms of the event horizon radius and see that there is a maximum, which corresponds to the black hole stability. As illustrated in Fig. \ref{fig:9} (a), this maximum is affected by the quantum corrections: the Helmholtz free energy is increased by them.\\
Figs. \ref{fig:9} (b) and (c) show that the effect of the correction on the internal energy and the enthalpy is infinitesimal; nevertheless, both of them are increasing functions of $\alpha$. Fig. \ref{fig:9} (d) shows that the effect of the quantum correction on the Gibbs free energy is important when $r_{+}$ is small, meaning that the Gibbs free energy of a large black hole feels no effect of the thermal fluctuations.\\
We will discuss the critical points and the stability of the uncharged hairy AdS black hole in the next subsection.
\subsection{Stability of uncharged hairy AdS black hole}
The critical point $(r_{+,c}, P_{c}, T_{c})$ of the uncharged hairy AdS black hole corresponding to the phase transition can be obtained from the conditions (\ref{Con}), which yield the following critical radius,
\begin{equation}
r_{+,c} = \frac{- 1 \pm \sqrt{65} }{8} B.
\end{equation}
Now, we want to discuss the global stability condition for the corresponding system. The graphical analysis of the Gibbs free energy with the quantum corrected entropy and temperature for the uncharged hairy AdS black hole can be seen in Fig. \ref{fig:9} (d), where we fix the constants $B$ and $\ell$. In Fig. \ref{fig:9} (d), we observe that the correction terms have little effect on the Gibbs free energy, but the Gibbs free energy exhibits global stability for sufficiently large event horizon radius. For the selected values of the model parameters $B=\ell=1$, we have global stability ($G<0$) if $r_{+}>0.9$, which means $M>1$ (see Fig. \ref{fig:1} (b)).\\
In order to discuss the local stability of the black hole we need to calculate the specific heat at constant pressure which is given by,
\begin{equation}
C_{P} = \frac{ 32 \pi ^{2} r_{+}^{2} (3 r_{+} + 2 B) [8 \pi r_{+}^{2} + 8 \pi r_{+} B + \alpha (r_{+} + B)]}{\left( 24 \pi r_{+}^{3} +24 \pi B r_{+}^{2} + 2 B^{2} (8 \pi r_{+} +\alpha ) + \alpha B r_{+}\right) (8 \pi r_{+} - \alpha) }.
\end{equation}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=60 mm]{10a.eps}\includegraphics[width=60 mm]{10b.eps}
\end{array}$
\end{center}
\caption{Typical behavior of specific heat at constant pressure of uncharged hairy AdS black hole with $B=\ell=1$.}
\label{fig:10}
\end{figure}
We draw $C_P$ in terms of $r_{+}$ for fixed values of the parameter $B$ and the quantum correction coefficient $\alpha$ for the uncharged hairy AdS black hole in Fig. \ref{fig:10}. Both the $r_{+}$- and $\alpha$-dependent plots (Figs. \ref{fig:10} (a) and (b)) show that ignoring the thermal fluctuations ($\alpha=0$) yields a completely stable black hole. The quantum correction with positive $\alpha$ reduces the value of the specific heat, but it remains positive and the black hole stays stable. However, a negative correction coefficient yields a stable/unstable phase transition. The dashed green line of Fig. \ref{fig:10} (a) shows a phase transition at about $r_{+}\approx1.5$. We can see that if $r_{+}<1.5$ (choosing $B=\ell=1$), which means $M<1.5$ (see Fig. \ref{fig:1} (b)), then the black hole is in the unstable phase, while if $r_{+}>1.5$ ($M>1.5$) the black hole is thermodynamically stable. The important consequence of the quantum correction is that the black hole is unstable for some values of negative $\alpha$ and stable for positive $\alpha$. There is thus a stable region $r_{+} \geq r_{m}$ corresponding to $\alpha \geq 0$, where $r_{+} = r_{m}$ is the minimum value of the black hole horizon radius at which the phase transition happens.\\
The heat capacity at constant volume is,
\begin{equation}
C_{V} = \left[ \frac{32 \pi^{2} r_{+}^{2} (r_{+} + B) (3 r_{+} + 2 B)}{24 \pi r_{+}^{3} +24 \pi r_{+}^{2} B + \alpha B (r_{+} +2 B)} \right] ,
\end{equation}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=65 mm]{11.eps}
\end{array}$
\end{center}
\caption{Typical behavior of specific heat at constant volume of uncharged hairy AdS black hole with $B=\ell=1$.}
\label{fig:11}
\end{figure}
We draw $C_V$ in terms of $r_{+}$ for different values of the quantum correction coefficient $\alpha$ for the uncharged hairy AdS black hole in Fig. \ref{fig:11}.
We can see that the effect of the quantum corrections is important when the event horizon radius is small. For a large black hole, the thermal fluctuations have no important effect on the specific heat at constant volume.\\
The local stability equilibrium condition for a thermodynamic system reads as,
\begin{equation}
C_P \geq C_V \geq 0,
\end{equation}
which yields,
\begin{equation}
(8 \pi r_{+} + \alpha) [8 \pi r_{+}^{2} + 8 \pi r_{+} B + \alpha (r_{+} + B)] \geq 3 (r_{+} + B) (8 \pi r_{+} + \alpha).
\end{equation}
Therefore, we can find,
\begin{equation}
\alpha \geq 0,
\end{equation}
as expected. This is in complete agreement with the choice $\alpha=1$ \cite{59}.
\section{Corrected thermodynamics for conformally dressed AdS black hole }
If we set $\beta = -\frac{B^{2}}{\ell^{2}}$ (together with $Q=0$) in equation (\ref{5}), we obtain the conformally dressed AdS black hole solution.
In this section, we will use the corrections to the entropy and the temperature of a conformally dressed AdS black hole to obtain an explicit expression for various thermodynamic quantities.
Then, we will use these explicit values to study phase transition of this system. So,
\begin{equation}\label{50}
f(r) = \frac{r^{2}}{\ell^{2}} - 3 \frac{B^{2}}{\ell^{2}} - 2 \frac{B^{3}}{\ell^{2} r}.
\end{equation}
Since $\beta = - \frac{M}{3}$, the mass will be,
\begin{equation}\label{51}
M =\frac{3 r^{2}_{+} }{4 \ell^{2}} = 2 \pi P r^{2}_{+}.
\end{equation}
Now, exploiting relations (\ref{50}) and (\ref{51}), the Hawking temperature of the event horizon can be calculated as,
\begin{equation}
T_{0} = \frac{3 S_{0}}{32 \pi^{2} \ell^{2} }.
\end{equation}
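This is consistent with $T_{0} = \partial M/\partial S_{0}$ for $S_{0}=4\pi r_{+}$; assuming that definition, a one-line symbolic check (our sketch) is:
\begin{verbatim}
# Sketch: T0 = dM/dS0 with M = 3 r^2/(4 l^2) and S0 = 4*pi*r
# gives 3*S0/(32*pi^2*l^2), as above.
import sympy as sp

r, l = sp.symbols('r l', positive=True)
T0 = sp.diff(3*r**2/(4*l**2), r) / (4*sp.pi)
print(sp.simplify(T0 - 3*(4*sp.pi*r)/(32*sp.pi**2*l**2)))  # -> 0
\end{verbatim}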
The entropy of the conformally dressed AdS black hole is given by (\ref{12}). Hence, the first-order corrected entropy and temperature for the conformally dressed AdS black hole are computed as
\begin{equation}
S= 4 \pi r_{+} - \frac{\alpha}{2} \ln{ \left( 4 \pi r_{+}\right) }
\end{equation}
and,
\begin{equation}
T= \frac{3}{(8 \pi \ell)^{2}} (8 \pi r_{+} + \alpha ).
\end{equation}
It is clear that the quantum correction increases the value of the temperature for positive $\alpha$ and decreases it for negative $\alpha$.\\
From equation (\ref{51}), the corrected physical mass for the conformally dressed AdS black hole is,
\begin{equation}
M = \frac{3}{4 (8 \pi \ell)^{2}} \left( 64 \pi^{2} r^{2}_{+} +\alpha^{2} \left(\ln( 4 \pi r_{+}) \right)^{2} - 16 \alpha \pi r_{+} \ln(4 \pi r_{+} ) \right),
\end{equation}
We find that the effect of the thermal fluctuations on the black hole mass is infinitesimal.\\
The thermodynamic first law of the black hole reads,
\begin{equation}
dM = T dS + V dP,
\end{equation}
We can obtain the Helmholtz free energy for the conformally dressed AdS black hole as follows,
\begin{equation}
F = - \frac{3 r^{2}_{+} }{4 \ell^{2}} + \left( \frac{3 \alpha r_{+} }{(8 \pi \ell)^{2}} \right) \left( 4 \pi \ln(4 \pi r_{+}) -1\right).
\end{equation}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{12a.eps}\includegraphics[width=50 mm]{12b.eps}\\
\includegraphics[width=50 mm]{12c.eps}\includegraphics[width=50 mm]{12d.eps}
\end{array}$
\end{center}
\caption{Typical behavior of (a) Helmholtz free energy (b) internal energy (c) enthalpy (d) Gibbs free energy for conformally dressed AdS black hole with $r_{+}=\ell=1$.}
\label{fig:12}
\end{figure}
The effect of the first-order correction on the Helmholtz free energy can be seen in Fig. \ref{fig:12} (a): as the $\alpha$ parameter increases, the Helmholtz free energy increases.\\
The internal energy is calculated as,
\begin{equation}
E = \frac{3 r^{2}_{+} }{ \ell^{2}} + \left( \frac{3 \alpha r_{+} }{(8 \pi \ell)^{2}} \right) \left( 4 \pi -1 \right) - \alpha^{2} \left( \frac{3 }{ 2 (8 \pi \ell)^{2}} \right)
\ln(4 \pi r_{+}),
\end{equation}
We have plotted the internal energy in Fig. \ref{fig:12} (b) and see that it is increased for any value of $\alpha$. The enthalpy, shown in Fig. \ref{fig:12} (c), exhibits the same behavior and is given by
\begin{equation}
H = \frac{15 r^{2}_{+} }{ 4 \ell^{2}} + \left( \frac{3 \alpha r_{+} }{(8 \pi \ell)^{2}} \right) \left( 4 \pi -1 \right) - \alpha^{2} \left( \frac{3 }{ 2 (8 \pi \ell)^{2}} \right)
\ln(4 \pi r_{+}).
\end{equation}
We can obtain the Gibbs free energy as the following expression,
\begin{equation}
G = \frac{3 \alpha r_{+} }{(8 \pi \ell)^{2}} \left[ 4 \pi \ln(4 \pi r_{+}) -1\right],
\end{equation}
whose behavior with the correction parameter is illustrated in Fig. \ref{fig:12} (d). We will discuss the critical points and the stability of the conformally dressed AdS black hole in the next subsection.
\subsection{Phase transition for conformally dressed AdS black hole}
Fig. \ref{fig:12} (d) shows the graphical analysis of the Gibbs free energy with the quantum corrected entropy and temperature for the conformally dressed AdS black hole. We observe that the correction terms increase the Gibbs free energy of this black hole when $\alpha$ is positive, so in this case the Gibbs free energy does not exhibit global stability. On the other hand, the Gibbs free energy does exhibit global stability for negative $\alpha$.
The specific heat at constant pressure with the corrected entropy and temperature can be obtained as follows,
\begin{equation}
C = \frac{(8 \pi r_{+} - \alpha ) (8 \pi r_{+} + \alpha )}{16 \pi r_{+}}.
\end{equation}
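Note that this expression equals $\frac{(8\pi r_{+})^{2}-\alpha^{2}}{16\pi r_{+}}$ and therefore depends on $\alpha$ only through $\alpha^{2}$; a symbolic check of ours that $C = T\, dS/dT$ reproduces it reads:
\begin{verbatim}
# Sketch: C = T dS/dT with the corrected S(r) and T(r) above
# reproduces (8*pi*r - alpha)(8*pi*r + alpha)/(16*pi*r).
import sympy as sp

r, l, alpha = sp.symbols('r l alpha', positive=True)
S = 4*sp.pi*r - (alpha/2)*sp.log(4*sp.pi*r)
T = 3*(8*sp.pi*r + alpha) / (8*sp.pi*l)**2
C = T * sp.diff(S, r) / sp.diff(T, r)
target = (8*sp.pi*r - alpha)*(8*sp.pi*r + alpha)/(16*sp.pi*r)
print(sp.simplify(C - target))  # -> 0
\end{verbatim}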
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=60 mm]{13a.eps}\includegraphics[width=60 mm]{13b.eps}
\end{array}$
\end{center}
\caption{Typical behavior of specific heat at constant pressure of conformally dressed AdS black hole with $\ell=1$.}
\label{fig:13}
\end{figure}
In Fig. \ref{fig:13} (a), we observe the behavior of the heat capacity at constant pressure for several values of the $\alpha$ parameter. It is clear that negative and positive $\alpha$ yield the same result, since $C$ depends only on $\alpha^{2}$.
In this diagram, we can see that small values of $\alpha$ have little effect on the graph ($\alpha \approx 0$), while for larger $|\alpha|$ the corrected heat capacity decreases.\\
Fig. \ref{fig:13} (b) shows that the black hole may be unstable for small $r_{+}$, which means that the thermal fluctuations become important when the size of the black hole decreases due to Hawking radiation.
\section{Conclusion}
In this paper, we have analyzed the thermodynamics of hairy black holes containing a scalar field coupled minimally or nonminimally to gravity.
First of all, we considered different cases of the charged hairy black holes.
We have studied the effects of the logarithmically corrected entropy and temperature on the thermodynamics of hairy black holes. Taking these corrections into account, the specific heat and other thermodynamic quantities were calculated for the charged BTZ, uncharged hairy AdS, and conformally dressed AdS black holes.
Moreover, we calculated the internal energy, enthalpy, Helmholtz free energy and Gibbs free energy for the mentioned black holes and analyzed the effects of the corrected entropy and temperature.\\
Critical values of the event horizon radii for the phase transitions are shown to be shifted due to the corrections to the entropy and temperature.
This shift also appears in the physical mass, the specific heat and the enthalpy of the charged BTZ black hole.\\
Our main goal has been to determine the effect of the corrections on the thermodynamic quantities.
Corrections exist for any black hole, but they are important for small black holes and negligible for large ones. The advantage of the charged BTZ black hole is its holographic picture, which is a van der Waals fluid. We have shown that in the presence of the corrections there is still a van der Waals fluid as a dual picture; we should only fix the black hole charge and $\ell$, which correspond to the electric charge of the van der Waals fluid. We obtained thermodynamic quantities like the Gibbs and Helmholtz free energies and showed that the corrections do not have important effects on large black holes.\\
For small black holes, on the other hand, they are important and play a crucial role. Using the specific heat, we found that the corrections reduce the stable regions of the black hole.
However, there are still large enough stable regions to see quantum gravity effects before the phase transition of the van der Waals fluid. This means that there is a minimum radius, which we call the critical radius, above which the black hole is stable in the presence of the corrections, and in this region the charged BTZ black hole with corrections remains dual to a van der Waals fluid.\\
It would also be interesting to study the effect of such logarithmic corrections on a hyperscaling-violating black hole background \cite{60} and to discuss the corresponding stability conditions.
Coherence refers to the properties of a text that indicate how meaningful (sub-)sentential constituents are connected to convey document-level meaning. Different theories have been proposed to describe the properties that contribute to discourse coherence and some have been integrated with computational models for empirical evaluation. A popular approach is the entity-based model which hypothesizes that coherence can be assessed in terms of the distribution of and transitions between entities in a text -- by constructing an entity-grid (Egrid) representation~\cite{Barzilay:2005,Barzilay2008}, building on Centering Theory~\cite{Grosz1995}. Subsequent work has adapted and further extended Egrid representations~\cite{filippova-strube-2007-extending,Burstein2010,Elsner2011,Guinaudeau2013}.
Other research has focused on syntactic patterns that co-occur in text~\cite{louis-nenkova-2012-coherence} or semantic relatedness between sentences~\cite{Lapata:2005:AET:1642293.1642467,soricut-marcu-2006-discourse,somasundaran-etal-2014-lexical} as key aspects of coherence modeling.
There have also been attempts to model coherence by identifying rhetorical relations that connect textual units~\cite{mann1988rhetorical,lin-etal-2011-automatically,Feng2014} or capturing topic shifts via Hidden Markov Models~\cite[HMM,][]{barzilay-lee-2004-catching}. Other work has combined approaches to study whether they are complementary~\cite{elsner-etal-2007-unified,Feng2014}.
More recently, neural networks have been used to model coherence. Some models utilize structured representations of text~\cite[e.g. Egrid representations,][]{Dat2017,Joty2018} and others operate on unstructured text, taking advantage of neural models' ability to learn useful representations for the task~\cite{Li2017,Logeswaran2018,farag-yannakoudakis-2019-multi,xu-etal-2019-cross,moon-etal-2019-unified}.
Coherence has typically been assessed by a model's ability to rank a well-organized document higher than its noisy counterparts created by corrupting sentence order in the original document (\textit{binary discrimination task}), and neural models have achieved remarkable accuracy on this task. Recent efforts have targeted additional tasks such as recovering the correct sentence order~\cite{Logeswaran2018,Cui2018}, evaluating on realistic data~\cite{Lai2018,farag-yannakoudakis-2019-multi} and focusing on open-domain models of coherence~\cite{Li2017,xu-etal-2019-cross}.
However, less attention has been directed to investigating and analyzing the properties of coherence that current models can capture, or to what knowledge is encoded in their representations and how it might relate to aspects of coherence.
In this work, we systematically examine what properties of discourse coherence current coherence models can capture. We devise two datasets that exhibit various kinds of incoherence and analyze model ability to capture syntactic and semantic aspects of text implicated in discourse organisation.
We furthermore investigate a set of probing tasks to better understand the information that is encoded in their representations and how it might relate to aspects of coherence.
We hope this study will provide further insight into how to frame the task and how to further improve models of coherence assessment.
Finally, we release our evaluation datasets as a resource for the community to use to test discourse coherence models.\footnote{\url{https://github.com/Youmna-H/coherence-analysis}}
\section{Neural Coherence Models}
\label{models}
We experiment with a number of existing and state-of-the-art neural approaches to coherence assessment, that have publicly available implementations, and present details of the models below. Across all the BERT-based models, we use bert-large-uncased and layer $16$ following \citet{liu2019linguistic} and~\citet{hewitt2019structural}.
\noindent{\bf Multi-task learning}~\cite[\textbf{MTL},][]{farag-yannakoudakis-2019-multi}: The model applies a Bi-LSTM on input GloVe word embeddings~\cite{pennington2014glove} followed by attention to build sentence representations; then builds a second Bi-LSTM with attention to compose a document vector. A linear operation followed by a sigmoid function is applied to the document representation to predict an overall coherence score as the main objective. Inspired by the Egrid approaches, the model is also optimized to predict the grammatical roles of the input words at the bottom layer of the network as an auxiliary task.
\noindent{\bf MTL with BERT embeddings (MTL$_{bert}$)}: We replicate the previous MTL model but now use BERT embeddings \cite{devlin2018bert} to initialize the input words.
\noindent{\bf Single-task learning}~\cite[\textbf{STL},][]{farag-yannakoudakis-2019-multi}: This model has the same architecture as MTL but only performs the coherence prediction task, excluding the grammatical role auxiliary objective.
\noindent{\bf STL with BERT (STL$_{bert}$)}: This is the same as STL but uses BERT embeddings.
\noindent{\bf Local Coherence Discriminator with Language modeling}~\cite[\textbf{LCD$_{rnnlm}$},][]{xu-etal-2019-cross}: The model generates sentence representations via an RNN language model, where word embeddings are initialized using GloVe.
It then generates a representation for two consecutive sentences via concatenating the output of a set of linear transformations applied to the two sentences: concatenation, element-wise difference, element-wise product and absolute value of element-wise difference. This representation is fed to an MLP layer to predict a \textit{local} coherence score.\footnote{Gold local scores $\in \{0,1\}$ represent whether a sequence of two sentences is coherent (i.e. extracted from a coherent document) or not (i.e. created via negative sampling).}
The overall coherence of a document is the average of its local scores.
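To make the pairwise representation concrete, a minimal PyTorch-style sketch of these transformations follows (our illustration; variable names and dimensions are our assumptions, not taken from the released implementation):
\begin{verbatim}
# Sketch of the LCD pairwise features (illustrative only).
import torch
import torch.nn as nn

def pair_features(s1, s2):
    # s1, s2: sentence vectors of shape (batch, dim)
    return torch.cat([s1, s2,             # concatenation
                      s1 - s2,            # element-wise difference
                      s1 * s2,            # element-wise product
                      torch.abs(s1 - s2)  # abs. of the difference
                      ], dim=-1)

dim = 300                                 # assumed embedding size
scorer = nn.Sequential(nn.Linear(5 * dim, 500), nn.ReLU(),
                       nn.Linear(500, 1)) # local coherence score
local = scorer(pair_features(torch.randn(8, dim),
                             torch.randn(8, dim)))
doc_score = local.mean()  # document score = mean of local scores
\end{verbatim}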
\noindent{\bf LCD with BERT (LCD$_{bert}$)}: We create a variant of the LCD$_{rnnlm}$ model where instead of using an RNN language model encoder, we encode each sentence as the average BERT vectors of the words it contains. Everything else remains the same.
\noindent{\bf Local Coherence}~\cite[LC,][]{Li2017}: The model generates sentence vectors via an LSTM over GloVe-initialized word embeddings; then a window approach is applied over adjacent sentences to get embeddings of groups of sentences and predict local coherence scores. The final document score is calculated by averaging its local scores.
\noindent{\bf Egrid CNN}~\cite[Egrid$_{cnn}$,][]{Dat2017}: The model applies a CNN over Egrid representations across groups of consecutive sentences; the CNN slides multiple filters of weights to extract feature maps that represent high-level entity-transition features, followed by a max pooling function to focus on the important features. Furthermore, additional entity-related features are integrated such as salience, proper mentions and named entity type.
\begin{table*}[]
\centering
\scalebox{0.7}{
\begin{tabular}{|cccccccc|} \hline
MTL & MTL$_{bert}$ & STL & STL$_{bert}$ & LCD$_{rnnlm}$ & LCD$_{bert}$ & LC & Egrid$_{cnn}$ \\ \hline
93.2 & 96.1 & 87.7 & 95.4 & 94.5 & \textbf{97.1} & 74.1 & 87.6 \\ \hline
\end{tabular}}
\caption{PRA results of coherence models based on the binary discrimination task on the WSJ.}
\label{table1}
\end{table*}
\section{Binary Discrimination Task}
\label{bintask}
Binary discrimination is a typical approach to assessing neural coherence models where a well-organized document should be ranked higher than its permuted counterparts created by corrupting sentence order. Following previous work, we train and test\footnote{All models are run $5$ times and the test predictions are averaged across the runs.} the coherence models on the WSJ\footnote{We use the same train and test splits as~\citet{Dat2017} and the same test set permuted counterparts as~\citet{farag-yannakoudakis-2019-multi}.} and evaluate them using Pairwise Ranking Accuracy (PRA), which is calculated based on the fraction of correct pairwise rankings between a coherent document and its incoherent counterparts.
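Concretely, PRA can be computed as follows (a small sketch of ours, not taken from any of the released implementations):
\begin{verbatim}
# Sketch: PRA = fraction of (coherent, permuted) pairs where
# the model scores the coherent document strictly higher.
def pra(pairs):
    # pairs: list of (score_coherent, score_permuted) tuples
    wins = sum(1 for s_pos, s_neg in pairs if s_pos > s_neg)
    return wins / len(pairs)

print(pra([(0.9, 0.2), (0.7, 0.8), (0.6, 0.1)]))  # -> 0.666...
\end{verbatim}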
In Table ~\ref{table1}, we present the performance of all coherence models. The high accuracy of the models demonstrates their efficacy for the task of selecting a maximally coherent sentence order from a set of candidate permutations. We note that the LCD and MTL BERT variants achieve a new state-of-the-art on the WSJ. The remarkable accuracy on this task may render this problem fully solved.
Herein, we seek to investigate how well these models of coherence can capture aspects of text implicated in discourse organisation. We devise a set of datasets and systematically test model susceptibility to syntactic or semantic changes.
\begin{table*}[]
\centering
\scalebox{0.7}{
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{Type of reference} & \multicolumn{1}{c|}{Example} \\ \hline
Pronominal Reference & Rich was a musician. \underline{He} made a few hit songs. \\ \hline
Proper Name & Dan's parents were overweight. \underline{Dan} was overweight as well. \\ \hline
Nominal Substitution & My dog hates his treats. I decided to go buy some new \underline{ones}. \\ \hline
Demonstrative Reference & \begin{tabular}[c]{@{}l@{}}My daughter wants to take her toddler to the Enchanted Village.\\ \underline{This} is a puppet show featuring early 20th century figurines.\end{tabular} \\ \hline
\end{tabular}}
\caption{Examples of first two sentences extracted from the ROCStories Cloze dataset with different referential types (referring word underlined).}
\label{table2}
\end{table*}
\section{Cloze Coherence (CC) Dataset }
\label{ch4:directevaluation}
We compile a large-scale dataset, to which we refer as Cloze Coherence (CC), of coherent and incoherent examples, where the former are intact well-written texts while the latter are the result of applying syntactic or semantic perturbations to the coherent ones.
\subsection{Coherent examples}
For the sake of specifically testing for coherence, we avoid complex linguistic structures. Specifically, we focus on coherent examples that consist of two short sentences that are coreferential and exhibit a rhetorical relation (such properties can be manipulated to create incoherent counterparts). Furthermore, we focus on examples that are self-contained, meaning that they do not reference or rely on an outer context to be interpreted. We find that narrative texts are good candidates to satisfy these criteria and therefore create our coherent examples from the ROCStories Cloze dataset\footnote{\url{https://www.cs.rochester.edu/nlp/rocstories/}}~\cite{mostafazadeh-etal-2016-corpus}.
ROCStories Cloze contains short stories of $5$ sentences manifesting a sequence of causal or temporal events that have a shared protagonist. A story usually starts by introducing a protagonist in the first sentence, then subsequent sentences describe events that happen to them in a logical / rhetorically plausible manner. The dataset was designed for commonsense reasoning by testing the ability of machine learning models to select a plausible ending for the story out of two alternative endings. Here, our main aim is to challenge the models and investigate whether they truly understand
inter-sentential relations and coherence-related features. We specifically utilize the first two sentences in the stories to compose the coherent examples in our dataset.\footnote{We use NLTK for word tokenization; sentence boundaries are already marked in the stories.}
Selecting the first two sentences helps make the examples self-contained since there is no preceding context to refer to, and no cataphoric relations to consequent sentences. Regarding rhetorical relations in these sentences,~\citet{mostafazadeh-etal-2016-corpus} conducted a temporal analysis to investigate the logical order of the events presented in a story, demonstrating, among others, that the first and second sentences in the stories are presented in a commonsensical temporal manner with logical links between them.
In order to examine coreferential relations between the two sentences in each extracted pair, we gather a set of statistics. We adopt a heuristic approach\footnote{We initially used the spaCy and Stanford coreference resolution systems~\cite{clark2016deep}, but found their performance unreliable for the purposes of this experiment after manual inspection.} by simply counting the number of second sentences that contain at least one third person pronoun (either personal or possessive) and find that they constitute $80\%$ of the examples.\footnote{If we exclude `it' the percentage becomes $76\%$.} Third person pronouns anaphorically refers to preceding items in text, which could occur in the same sentence or the previous one (i.e., the first sentence). We, therefore, randomly select, and manually inspect, $500$ examples that contain third person pronouns in their second sentence and find that in $95\%$ of them the referenced entity appears in the first sentence. Furthermore, third person pronouns are not the only coreferential relations in the examples. For instance, we find that $90\%$ of the second sentences contain a personal or possessive pronoun (whether it is first, second or third person), which could also signal coreference, e.g., `\textit{I was walking to school. Since I wasn't looking at my feet I stepped on a rock.}' There are also other coreferential devices such as: demonstrative references (e.g., ‘this’ and ‘there’), ‘the’ + noun, proper names or nominal substitutions (e.g., ‘one’ or ‘ones’) to name a few~\cite{halliday}, so the true proportion of coreferential pairs will be higher. Table~\ref{table2} presents examples of different referential relations in our dataset.
We use the same train/dev/test splits provided with ROCStories Cloze but only keep the first two sentences in each story. We exclude cases with erroneous sentence boundaries,\footnote{The training stories are in CSV format (separating sentences by comma delimiters) and we parse them using the Python CSV parser. We exclude the stories where the parser fails to detect $5$ sentences.} yielding $97,903$ examples for training, $1,871$ for development, and $1,871$ for testing, and a training vocabulary size of $29,596$ tokens. Each instance in our dataset contains two sentences that represent a coherent pair.
\begin{table*}[ht]
\centering
\scalebox{0.9}{
\small
\begin{tabularx}{\textwidth}{
| >{\raggedright}X
| >{\raggedright}X
| >{\raggedright\arraybackslash}X | }
\hline
\multicolumn{1}{|c|}{Coherent example} & \multicolumn{1}{c|}{Incoherent example from cloze\_swap} & \multicolumn{1}{c|}{Incoherent example from cloze\_rand}
\\ \hline
Tyrese joined a new gym. The membership allows him to work out for a year. & The membership allows him to work out for a year. Tyrese joined a new gym. & Tyrese joined a new gym. As children they hated being dressed alike. \\ \hline
Jasmine doesn't know how to play the guitar. She asked her dad to take her to guitar class. & She asked her dad to take her to guitar class. Jasmine doesn't know how to play the guitar. & Jasmine doesn't know how to play the guitar. May thought her milk was no good. \\ \hline
I wanted to play an old game one day. When I looked in the game's case the CD was missing. & When I looked in the game's case the CD was missing. I wanted to play an old game one day. & I wanted to play an old game one day. Jason pressed the buzzer since he knew the answer. \\ \hline
\end{tabularx}}
\caption{Examples of coherent and incoherent pairs from the cloze\_swap and cloze\_rand datasets.}
\label{table3}
\end{table*}
\subsection{Incoherent examples}
To assess model susceptibility to syntactic or semantic alterations, we construct incoherent examples by applying two different transformations to each coherent pair resulting in two different sets of data.
\noindent{\bf cloze\_swap} We create incoherent examples by swapping the two sentences in a coherent pair. This mostly breaks the coreference relation between them and/or the rhetorical relation (e.g. temporal or causal) by reversing the event sequence. The dataset, referred to as \texttt{cloze\_swap}, is balanced, i.e., the number of incoherent examples is the same as the number of the coherent ones above. The way cloze\_swap is created corrupts the syntactic patterns that co-occur in coherent texts (e.g. $\text{S} \rightarrow \text{NP-SBJ VP } | \text{ NP-SBJ} \rightarrow \text{PRP}$) as demonstrated by~\citet{louis-nenkova-2012-coherence}.
\noindent{\bf cloze\_rand} Here we create incoherent examples by keeping the first sentence of a coherent pair intact and replacing the second with a randomly selected second sentence from (the same split of) our set of coherent examples.
This dataset, referred to as \texttt{cloze\_rand}, is also balanced (for each coherent pair, we compose one incoherent counterpart), and constitutes examples with changed semantics but with the main syntactic pattern intact.
As the randomly-created pair may still be coherent, we address this by: 1) constraining random selection of the second sentence to not begin with the same word as the second sentence in the original pair, or with the pronoun `he' if the original starts with `she', and vice-versa\footnote{We do not find instances of `they' as a third-person singular pronoun.} (we note $70\%$ of the second sentences in ROCStories Cloze start with a pronoun); 2) using human evaluation to further assess the validity of this data and get an estimate of upper-bound performance on the task. Specifically, we randomly select $100$ coherent sentence pairs from our test split along with their own incoherent counterparts and ask two annotators (who are not authors of this paper), with high English proficiency levels, to \textit{rank} each set of coherent--incoherent examples based on which one they considered to be more coherent and plausible. The average PRA of the annotators is $94.5\%$.
Table~\ref{table3} shows examples from cloze\_swap and cloze\_rand. As our datasets are balanced (one incoherent counterpart per coherent pair), we have a total number of $195,806$, $3,742$ and $3,742$ instances in the train, dev, and test splits respectively for each cloze dataset (cloze\_swap and cloze\_rand have the same coherent examples, and the same number of coherent and incoherent examples).
We note that the gold labels in this data are not to be interpreted as (overall) binary indicators of coherence. We rather use these to test model performance using PRA, i.e. we only compare a coherent pair with \textit{its own} incoherent counterpart.
\begin{table*}[]
\centering
\scalebox{0.55}{
\begin{tabular}{|c|l|} \hline
Original & \begin{tabular}[c]{@{}l@{}}A government paper on Monday found UK and EU firms would be faced with a "a significant new and ongoing administrative burden" in the event of a no-deal Brexit.\\ It found large firms importing and exporting at scale would need to fill in forms taking one hour 45 minutes on average and cost £28 per form for each load imported.\end{tabular} \\ \hline
Swap & \begin{tabular}[c]{@{}l@{}}It found large firms importing and exporting at scale would need to fill in forms taking one hour 45 minutes on average and cost £28 per form for each load imported.\\ A government paper on Monday found UK and EU firms would be faced with a "a significant new and ongoing administrative burden" in the event of a no-deal Brexit.\end{tabular} \\ \hline
Random & \begin{tabular}[c]{@{}l@{}}1- A government paper on Monday found UK and EU firms would be faced with a "a significant new and ongoing administrative burden" in the event of a no-deal Brexit.\\ She spent over a decade at Swiss investment bank UBS before joining the UK Treasury's council of economic advisers in 1999.\\ 2- Lady Vadera was born in Uganda and moved to the UK as a teenager.\\ It found large firms importing and exporting at scale would need to fill in forms taking one hour 45 minutes on average and cost £28 per form for each load imported.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Lexical\\ Substitution\end{tabular} & \begin{tabular}[c]{@{}l@{}}The paper found large firms importing and exporting at scale would need to fill in forms taking one hour 45 minutes on average and cost £28 per form for\\each load imported.\\ A government paper on Monday found UK and EU firms would be faced with a "a significant new and ongoing administrative burden" in the event of a no-deal Brexit.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Prefix\\ Insertion\end{tabular} & \begin{tabular}[c]{@{}l@{}}More Specifically, it found large firms importing and exporting at scale would need to fill in forms taking one hour 45 minutes on average and cost £28 per form\\ for each load imported.\\ A government paper on Monday found UK and EU firms would be faced with a "a significant new and ongoing administrative burden" in the event of a no-deal Brexit.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Lexical\\ Perturbations\end{tabular} & \begin{tabular}[c]{@{}l@{}}A government paper on Monday found UK and EU firms would be faced with a "a significant new and ongoing administrative burden" in the event of a no-deal Brexit.\\ It found large firms importing and exporting at scale would need to fill in cups taking one hour 45 minutes on average and cost £28 per cup for each load imported.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Corrupt\\ Pronoun\end{tabular} &
\begin{tabular}[c]{@{}l@{}}A government paper on Monday found UK and EU firms would be faced with a "a significant new and ongoing administrative burden" in the event of a no-deal Brexit.\\He found large firms importing and exporting at scale would need to fill in forms taking one hour 45 minutes on average and cost £28 per form for each load imported.\end{tabular} \\ \hline
\end{tabular}}
\caption{Examples from our manually constructed CLA dataset. For `Random' we create two incoherent instances: one where the first sentence is unchanged and the second is randomly selected (1-); and another where the first sentence is randomly selected and the second is kept intact (2-).}
\label{table5}
\end{table*}
\section{Controlled Linguistic Alterations (CLA) Dataset }
In order to further understand the properties of coherence that current coherence models capture, we manually construct a dataset of controlled sets of linguistic changes. We first identify a set of coherent, well-written texts of two consecutive sentences from business and financial articles in the BBC, the Independent and Financial Times (this allows us to stay in the same domain as the one used for training the models -- the WSJ). We focus on sentence pairs where the subject of the first sentence is pronominalized in the second, and the second sentence begins with this pronoun. We select the examples so that they are self-contained and do not reference an outer context.
We then manually create incoherent counterparts by modifying the coherent examples in a constrained way in order to systematically examine model performance. Specifically, we apply the following sets of perturbations to our set of coherent sentence pairs, examples of which are presented in Table~\ref{table5}.
\noindent{\bf{Swap.}} We simply swap the two sentences.
\vspace{0.1cm} \\
\noindent{\bf{Random.}} We keep the first sentence intact and select a second sentence randomly from our set of coherent examples. We constrain the selection so that the subject pronoun is different from the subject pronoun in the original sentence.\footnote{We also take into account that some subjects could be referred to by `he', `she' or `they' and thus factor that into the selection.} We also create another random pair with the same constraint but now changing the first sentence. Thus each original coherent example has two incoherent counterparts.
\vspace{0.1cm} \\
\noindent{\bf{Lexical Substitution.}} We swap the two sentences in a coherent pair but replace the subject pronoun in the second sentence with \textit{the + a general noun} that substitutes the subject in the first sentence (e.g. the company, the woman, etc.).
\vspace{0.1cm} \\
\noindent{\bf{Prefix Insertion.}} We analyze the WSJ training data and find that the average number of times the first sentence in a document starts with a pronoun is $0.02$ (and never with `he' or `she') which is significantly less than the average number of times a sentence starts with a pronoun (regardless of its position) which is $0.07$. This difference is not maintained in the randomly ordered documents in the WSJ training set and so this might give a signal to the models to detect that a swapped pair that starts with a pronoun is less coherent. To see if such positional information plays a role in model prediction, we insert, after swapping the sentences, a phrase before the subject pronoun that does not change the propositional content (e.g. `More specifically', `However', etc.). We can then observe whether this insertion changes the prediction of the model.
\vspace{0.1cm} \\
\noindent{\bf{Lexical Perturbation.}} We investigate the robustness of the models to minor lexical changes that result in incoherent meaning, by replacing one word in either of the two sentences (if the word is repeated, we change that too). We choose a replacement word from the training vocabulary of the WSJ with the same part-of-speech tag. For example, in Table~\ref{table5} `form' is replaced with `cup' and `forms' with `cups'.
\vspace{0.1cm} \\
\noindent{\bf{Corrupt Pronoun.}} We replace the subject pronoun in the second sentence with another pronoun that cannot reference anything in the first sentence. With this method, we test whether the models are capable of resolving coreferences or just rely on syntactic patterns.
Our dataset contains a total of $240$ examples of coherent and incoherent pairs of sentences ($30$ coherent examples and $210$ incoherent counterparts).
Our constrained set of modifications ensures that all coherent examples are more coherent than any of the incoherent counterparts in the data.
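For concreteness, the following Python sketch approximates the POS-constrained substitution used in the Lexical Perturbation setting. The perturbations themselves were applied manually, so this is only an illustration: the NLTK tagger and the \texttt{wsj\_vocab\_by\_pos} mapping are our assumptions, and inflectional variants (e.g. `form'/`forms') were handled by hand rather than by this code.

\begin{verbatim}
import random
import nltk  # assumes the 'averaged_perceptron_tagger' model is available

def lexically_perturb(tokens, wsj_vocab_by_pos, rng=random):
    """Replace one content word (and its exact repetitions) with a
    same-POS word drawn from the WSJ training vocabulary."""
    tagged = nltk.pos_tag(tokens)
    candidates = [i for i, (_, tag) in enumerate(tagged)
                  if tag.startswith(("NN", "VB", "JJ"))]
    if not candidates:
        return tokens
    i = rng.choice(candidates)
    word, tag = tagged[i]
    replacement = rng.choice([w for w in wsj_vocab_by_pos[tag] if w != word])
    # if the word is repeated, all its occurrences are replaced
    return [replacement if t == word else t for t in tokens]
\end{verbatim}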
\begin{table*}[]
\centering
\scalebox{0.7}{
\begin{tabular}{|c|c|c|cccccccc|} \hline
& \multirow{2}{*}{Dataset} & \multirow{2}{*}{\# comparisons} & \multicolumn{8}{c|}{Models} \\
& & & MTL & MTL$_{bert}$ & STL & STL$_{bert}$ & LC & LCD$_{rnnlm}$ & LCD$_{bert}$ & Egrid$_{cnn}$ \\ \hline
\multirow{4}{*}{CC} & cloze\_swap & 1,871 & 69.3 & 73.5 & 74.2 & 75.3 & 70.7 & 74.5 & 75.4 & \textbf{84.6} \\
& fine-tuned & 1,871 & 88.8 & 88.5 & 83.5 & 84.7 & 76.3 & 88.4 & \textbf{96.7} & 88.1 \\ \cline{2-11}
& cloze\_rand & 1,871 & 51.3 & 53.3 & 48.5 & 52.5 & 50.5 & 54.5 & \textbf{71.0} & 53.4 \\
& fine-tuned & 1,871 & 65.7 & 54.2 & 53.7 & 56.1 & 51.3 & 65.2 & \textbf{94.8} & 68.8 \\ \hline \hline
\multirow{8}{*}{CLA} & \multicolumn{1}{c|}{ Swap} & \multicolumn{1}{c|}{30} & 90.0 & \textbf{93.3} & 83.3 & 90.0 & 80.0 & \textbf{93.3} & 86.6 & 83.3 \\
& \multicolumn{1}{c|}{Random} & \multicolumn{1}{c|}{60} & 56.6 & 45.0 & 50.0 & 51.6 & 51.6 & 61.6 & \textbf{78.3} & 71.6 \\
& \multicolumn{1}{c|}{Lexical Substitution} & \multicolumn{1}{c|}{30} & 83.3 & \textbf{93.3} & 80.0 & 90.0 & 86.6 & 83.3 & 86.6 & 76.6 \\
& \multicolumn{1}{c|}{Prefix Insertion} & \multicolumn{1}{c|}{30} & 83.3 & \textbf{96.6} & 76.6 &90.0 & 76.6 & 86.6 & 93.3 & 80.0 \\
& \multicolumn{1}{c|}{Lexical Perturbations} & \multicolumn{1}{c|}{30} & 56.6 & 46.6 & 46.6 & 63.3 & 50.0 & 53.3 & \textbf{80.0} & 53.3 \\
& \multicolumn{1}{c|}{Corrupt Pronoun} & \multicolumn{1}{c|}{30} & 70.0 & 53.3 & 63.3 & 63.3 & 53.3 & 60.0 & \textbf{76.6} & 56.6 \\
\cline{2-11} & \multicolumn{1}{c|}{All data} & \multicolumn{1}{c|}{210} & 70.9 & 67.6 & 64.2 & 71.4 & 64.2 & 71.4 & \textbf{82.8} & 70.4 \\
\cline{2-11}& \multicolumn{1}{c|}{All data (TPRA)} & \multicolumn{1}{c|}{6,300} & 69.9 & 71.3 & 61.8 & 71.6 & 66.0 & 69.1 & \textbf{72.2} & 65.8 \\ \hline
\end{tabular}}
\caption{PRA performance on the CLA (bottom) and CC datasets (top; `fine-tuned' shows results for models tuned on the respective cloze training sets).}
\label{table6}
\end{table*}
\section{Experiments}
Table \ref{table6} (top) presents the PRA performance of the models trained on the WSJ (Section \ref{bintask}) when they are evaluated on the test sets of the CC datasets (rows `cloze\_swap' and `cloze\_rand'). We find that, overall, models are good at detecting syntactic alterations (cloze\_swap; PRA ranging from $69.3$ to $84.6$), even though the test data come from a domain different from the training one.
However, most models perform poorly on semantic alterations (cloze\_rand; PRA ranging from $48.5$ to $54.5$), the only exception being LCD$_{bert}$, which achieves a PRA of $71$. Specifically, models that use RNN-based sentence encoders (the first six models), even when initialised with BERT, or that apply a CNN to capture entity transitions, fall short in capturing semantic changes, despite the fact that cloze\_rand is from the same domain as cloze\_swap. In contrast, LCD$_{bert}$, which builds sentence representations by averaging BERT vectors and then applies a set of linear transformations to increase its expressive power, is more capable of detecting semantic changes, surpassing its RNN-based counterpart (LCD$_{rnnlm}$) by $16.5$ points on cloze\_rand. Additionally, across models, we observe that the use of contextualized (BERT) embeddings consistently improves performance on both cloze tasks, although performance on semantic alterations remains close to random.
We investigate domain shift effects by fine-tuning the WSJ-trained models on each of the cloze\_swap and cloze\_rand training sets (Section \ref{ch4:directevaluation}) and re-evaluating performance on the respective test sets. Specifically, we use an MLP layer over the models' pre-prediction representation, followed by a sigmoid non-linearity. The models are optimized using the mean squared error between the gold labels (0 or 1) and the predicted scores.\footnote{We use Adam~\citep{KingmaB14}, batch size $64$, and L$2$ regularization with a penalty rate tuned over $\{0.00001, 0.0001, 0.001, 0.01\}$. We use early stopping and stop training if PRA does not improve on the dev set over $5$ epochs (max epochs $200$). The MLP hidden unit size is $100$.} In this setup, only the MLP layer is fine-tuned, not the whole coherence model, which gives us a fast, efficient evaluation framework that can be applied as a further examination step after coherence models have been developed and tuned on their respective datasets, instead of retraining the models from scratch. The results of the fine-tuned models are presented in Table~\ref{table6} (CC; rows `fine-tuned'). Although we can see some domain effect, the results confirm our earlier observation: performance on semantic alterations is, overall, poor, in contrast to syntactic ones (cloze\_swap).
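A minimal PyTorch sketch of this evaluation head is shown below. The hidden size ($100$) and the MSE objective follow the footnote above, while the Tanh activation, the representation size, and the specific weight-decay value are our assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class CoherenceHead(nn.Module):
    """MLP + sigmoid trained on top of a frozen coherence model's
    pre-prediction representation."""
    def __init__(self, rep_dim=768, hidden=100):  # rep_dim depends on the model
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(rep_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, reps):  # reps: detached representations (batch, rep_dim)
        return self.mlp(reps).squeeze(-1)

head = CoherenceHead()
optimizer = torch.optim.Adam(head.parameters(), weight_decay=1e-4)  # L2 penalty
loss_fn = nn.MSELoss()  # squared error against the 0/1 gold labels
\end{verbatim}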
In Table \ref{table6} (bottom), we can observe model performance (PRA) on our constrained set of manually devised examples (CLA). We observe a similar pattern: across RNN-based models, performance is particularly low on Random examples, which suggests that they struggle to detect topical or rhetorical shifts and unresolved references when the main syntactic pattern is maintained.
The exception is LCD$_{bert}$, which is again the best performing model (PRA $78.3$).
We furthermore observe that now Egrid$_{cnn}$ is the second best model on CLA Random (PRA $71.6$). A sparser entity grid where entities in the two sentences are different allows the model to detect such cases (e.g.~in the example in Table~\ref{table5}, `firms' is mentioned in the two sentences, while in the two random examples, it is only mentioned in one). However, its substantial difference in PRA on CLA Random compared to cloze\_rand ($53.4$) suggests that the lower performance observed in the latter is due to domain shift effects, something which we do not observe (to the same extent) with LCD$_{bert}$.
Regarding the CLA Swap results, we can again confirm the models' capability of detecting corrupted syntactic constructions. We furthermore observe that they maintain good performance in the cases where a prefix is inserted (`Prefix Insertion') or the subject pronoun is substituted with a lexical item (`Lexical Substitution'). This suggests that they can capture the relevant syntactic patterns and do not rely solely on positional features.
\begin{table*}[]
\centering
\scalebox{0.75}{
\begin{tabular}{|c|cccccccl|c|} \hline
\multirow{2}{*}{Task} & \multicolumn{8}{c|}{Models} & \multirow{2}{*}{Human} \\
& MTL & MTL$_{bert}$ & STL & STL$_{bert}$ & LC & LCD$_{rnnlm}$ & LCD$_{bert}$ & Best from~\citet{conneau-etal-2018-cram} & \\ \hline
SubjNum & 64.9 & 75.4 & 62.2 & 71.5 & 52.7 & 71.2 & \textbf{88.0} & 95.1 (Seq2Tree)& 88.0 \\ \hline
ObjNum & 64.5 & 72.1 & 61.1 & 70.7 & 54.5 & 65.0 & \textbf{86.5} & 95.1 (Seq2Tree) &86.5 \\ \hline
CoordInv & 58.5 & 63.4 & 53.0 & {63.7} & 53.0 & 56.6 & \textbf{78.4} & 76.2 (NMT En-De) &85.0 \\ \hline
CorruptAgr & 53.2 & 69.7 & 57.7 & 68.6 & 52.2 & 64.2 & \textbf{94.3} & - & - \\ \hline
\end{tabular}
}
\caption{Classification accuracy on probing tasks. `Human' shows the human upper bound on the task. }
\label{table4}
\end{table*}
Performance is overall low on lexical perturbations and corrupt pronouns, which suggests that the models are not sensitive to minor lexical changes even when they result in implausible meaning, and that they also struggle to resolve pronominal references. The exception, again, is LCD$_{bert}$ (with PRA $80$ on Lexical Perturbation and $76.6$ on Corrupt Pronoun), suggesting a better ability to capture semantics and resolve references.
Across all six CLA datasets (`All data'; Table \ref{table6}), we find that, overall, LCD$_{bert}$ is the top performing model (average PRA). The `All data' row reports the result of comparing a coherent example against its incoherent counterparts across the different alterations (i.e., in Table~\ref{table5}, the original example is compared against all the examples in the table and this is applied to all the original examples in the dataset). If we furthermore compare all the coherent examples against the incoherent ones in the whole dataset (rather than against their own incoherent counterparts), we find that a similar performance pattern is maintained (row `All data (TPRA)', i.e., all data Total Pairwise Ranking Accuracy).
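The two metrics can be stated compactly as follows (a sketch; with $30$ coherent examples and $7$ incoherent counterparts each, PRA uses $30 \times 7 = 210$ comparisons, while TPRA uses $30 \times 210 = 6{,}300$):

\begin{verbatim}
from itertools import product

def pra(examples):
    """examples: list of (coherent_score, [scores of its incoherent
    counterparts]); a comparison counts when the coherent example wins."""
    pairs = [(c, i) for c, counterparts in examples for i in counterparts]
    return sum(c > i for c, i in pairs) / len(pairs)

def tpra(coherent_scores, incoherent_scores):
    """Total PRA: every coherent example against every incoherent one."""
    pairs = list(product(coherent_scores, incoherent_scores))
    return sum(c > i for c, i in pairs) / len(pairs)
\end{verbatim}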
\section{Probing Coherence Embedding Space}
\label{probe}
Inspired by previous work ~\cite{conneau-etal-2018-cram}, and to better understand the information that is encoded in the representations of coherence models, we investigate probing tasks that can capture coherence-related features.
We experiment with the following set of sentence-level tasks that are relevant to discourse coherence: 1) the subject number (SubjNum) task that detects the number of the subject of the main clause; 2) the object number (ObjNum) task that detects the number of the direct object of the main clause; 3) the coordination inversion (CoordInv) task that contains sentences consisting of two coordinate clauses, where the two clauses are inverted in half of the sentences and kept intact in the other half (the task is to detect whether a sentence is modified or not); 4) the corrupt agreement (CorruptAgr) task where sentences are corrupted by inverting the verb number (the task is to identify corrupted sentences).
Tasks 1, 2 and 4 align with Centering theory as they probe for subject and object relevant information; the theory suggests that subject and object roles are indicators of entity salience. On the other hand, task 3 tests whether the models can capture intra-sentential coherence. For these tasks, we use the datasets from \citet{conneau-etal-2018-cram} (tasks $1$,$2$ and $3$) and \citet{linzen-etal-2016-assessing} (task $4$).
\vspace{-0.2cm}
\paragraph{Probing model} We adopt the SentEval framework of~\citet{conneau-etal-2018-cram}.
Our probing model consists of an MLP layer over the models' sentence representations, followed by a sigmoid non-linearity. We use the same training parameters as~\citet{conneau-etal-2018-cram}.\footnote{\url{https://github.com/facebookresearch/SentEval}}
\vspace{-0.2cm}
\paragraph{Results} Table~\ref{table4} presents the results.\footnote{Egrid$_{cnn}$ is based on entity transitions across sentences and therefore we cannot probe sentence representations.}
Overall, we observe that models are better at detecting SubjNum and ObjNum (accuracy of at least $61$\% for all models except LC, which is the odd one out) compared to CorruptAgr and CoordInv, with the last two being particularly challenging for most models (minimum accuracy of $53$\%, excluding LC). For SubjNum and ObjNum the models can find hints in words other than the target word, as the majority of nouns in a sentence tend to have the same number: $75.9$\% of SubjNum test sentences and $78.7$\% of ObjNum ones contain only nouns of the same number~\cite{conneau-etal-2018-cram}.
On the other hand, CorruptAgr examples are longer and with more syntactic variations and require the models to detect the dependency between verbs and their subjects. CoordInv is also a difficult task for the models particularly since they are pre-trained on the WSJ to focus on the order of sentences, not clauses.
Across all tasks, we find that LCD$_{bert}$ achieves the best performance,
outperforming all other approaches.
We note, however, that LCD$_{bert}$ does not fine-tune its sentence representations during coherence training on the WSJ; they are fixed and based on the average of BERT-based word embeddings (Section \ref{models}). This means that the probing model is trained on top of averaged BERT-based word embeddings rather than on sentence representations shaped by the LCD coherence model.
Therefore, the level of performance observed is not representative of the maximum performance coherence models can achieve on these tasks.\footnote{Nevertheless, we observe that LCD$_{bert}$ outperforms the best reported result on CoordInv by \citet{conneau-etal-2018-cram}.}
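For concreteness, such a fixed sentence representation can be computed as follows (a sketch using the HuggingFace transformers library; the specific BERT variant is our assumption):

\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def avg_bert_sentence_rep(sentence):
    """Fixed sentence vector: mean of BERT token embeddings, no fine-tuning."""
    batch = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**batch).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)          # (768,)
\end{verbatim}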
We surmise that the comparatively lower performance observed with MTL$_{bert}$ and STL$_{bert}$ (whose sentence representations are fine-tuned during coherence training) is due to their coherence training objective. The models are optimized on the binary discrimination task, i.e., learning to rank a well-organized document higher than its permuted counterparts. This is an overly simplistic approach to coherence modeling that may cause the models (and their representations) to lose useful linguistic information. Nevertheless, MTL$_{bert}$, which receives a direct training signal with respect to the words' grammatical roles, is able to alleviate this issue to an extent, and is the next best performing model on SubjNum, ObjNum and CorruptAgr.
Across tasks, LC is the odd one out and the worst performing model. This can be explained partly by its comparatively lower performance on the simpler binary discrimination task (Table~\ref{table1}), and partly by the simplicity of the approach: LC employs no attention mechanism, as the MTL and STL families of models do, nor transformations as expressive as those of LCD$_{rnnlm}$.
\section{Discussion}
Our evaluation experiments on two coherence datasets reveal that RNN- or EGrid-based coherence models are able to detect syntactic alterations that undermine coherence, but are less effective at detecting semantic ones, even after fine-tuning on the latter.
We furthermore find that they particularly struggle with recognizing minor lexical changes, even when they result in implausible meaning, and with resolving pronominal references.
On the other hand, these models are particularly good at detecting cases where a prefix is inserted or the subject pronoun is substituted with a lexical item, suggesting that they are capable of capturing the relevant syntactic patterns and do not solely rely on positional features.
We find that the best performing model overall is LCD$_{bert}$, which does not use an RNN sentence encoder but rather builds sentence representations by averaging BERT embeddings and then applies a number of linear transformations over adjacent sentences to facilitate learning richer representations.
Our probing experiments reveal that models are better at encoding information regarding subject and object number
followed by verb number (CorruptAgr). These probing tasks align with Centering theory as they probe for subject- and object-relevant information. The task that tests for knowledge of coordination inversion is the lowest performing one overall, suggesting that the models have little capacity for capturing information related to intra-sentential coherence. Excluding LCD$_{bert}$, MTL$_{bert}$ is the best performing model; nevertheless, there is still scope for substantial improvement across all probing tasks, particularly on CoordInv and CorruptAgr.
\section{Conclusion}
We systematically studied how well current models of coherence capture aspects of text implicated in discourse organisation. We devised datasets of various kinds of incoherence and examined model susceptibility to syntactic and semantic alterations. Our results demonstrate that the models are robust with respect to corrupted syntactic patterns, prefix insertions and lexical substitutions. However, they fall short in capturing rhetorical and semantic corruptions, lexical perturbations and corrupt pronouns. We furthermore find that the discourse embedding space encodes subject- and object-relevant information; however, there is scope for substantial improvement in terms of encoding linguistic properties relevant to discourse coherence. Experiments on coordination inversion further suggest that current models have little capacity for encoding information related to intra-sentential coherence.
We hope this study will provide further insight into how to frame the task of coherence modeling and how to further improve model performance. Finally, we make our datasets publicly available for researchers to use in testing coherence models.
\section{Introduction}
Designing effective test suites requires a significant effort that dramatically affects both testing and development costs, with a relevant impact on the software development schedule, the project deadlines, and ultimately the quality of the final product.
\begin{change}The recent advances in automated test case generation and the new tools for automatically generating test suites can reduce the effort required to generate effective test suites, and thus improve the cost-effectiveness of test case generation activities. \end{change}
Several popular research prototypes
(for instance, Randoop~\cite{Pacheco:Randoop:ICSE:2007}, JBSE~\cite{Braione:enhancing:esecfse:2013}, Evosuite~\cite{Fraser:whole:2013} and SUSHI~\cite{Braione:SUSHI:ISSTA:2017})
and commercial products
(for instance, PEX~\cite{Tillmann:pex:TAP:2008} and Parasoft C/C++test~\cite{Parasoft:C-C++Test:Web})
provide good support for automatically generating \emph{unit} test suites.
The increasing success of test generation tools for unit testing has already produced studies about the issues associated with their adoption in industry~\cite{Fraser:UnitTestsEmpirical:TOSEM:2015}.
\begin{change2}
Tools for generating \emph{system} test cases can address applications in both critical~\cite{wang:automated:iciest:2018}
and non-critical domains~\cite{Almasi:industria:icse-seip2017,Arcuri:Evomaster:tosem:2019}.
The industrial practice for automatically generating system test cases for GUI applications offers tools that capture, generalize, and execute behaviors observed when users interact with the system~\cite{tosca}.
\end{change2}
Studies of research prototype tools for generating system test cases provide evidence of the effectiveness of interesting approaches on medium-scale and small-scale open source software systems,
notably the work on ABT~\cite{Mariani:GUI:STVR:2014}, Exsyst~\cite{Gross:Exsyst:ISSTA:2012}, WATEG~\cite{Thummalapenta:GuidedTestWeb:ICSE:2013}, GUITAR~\cite{Yuan:StateFeedback:TSE:2010}, and Gazoo~\cite{arlt:GAZOO:ISSRE:2012}.
\begin{change2}
However, introducing automated system testing solutions available in the form of research prototypes in industrial software development processes\end{change2} challenges automated test case generators with problems that can hardly be experienced with medium-scale and small-scale open source software~\cite{Braione:TestGenerator:SwQuality:2014}.
\begin{change2}Some experiences reported so far refer to the successful introduction of automatic test case generation tools for unit testing~\cite{Almasi:industria:icse-seip2017} and REST API testing in industrial settings~\cite{Arcuri:Evomaster:tosem:2019}, but there is little attention to the specific issues that must be addressed to effectively introduce automatic GUI test case generators within commercial organizations.
\end{change2}
\begin{change}
This paper reports an exploratory study that investigates the issues that arise when introducing a leading-edge-research GUI test generator, $ABT$\xspace~\cite{Mariani:Autoblacktest:ICST:2012}, in a commercial organisation.
$ABT$\xspace generates system test cases by sampling possible GUI interaction sequences. $ABT$\xspace relies on Q-Learning, a machine learning algorithm, to steer the test generation process towards relevant functionalities of the target application~\cite{Sutton:IRL:1998}.
\end{change}
In this paper we
\begin{inparaenum} [(i)]
\item introduce the relevant issues that led a software house to consider an automated approach for generating system test suites,
\item present the sometimes unexpected technical and
organisational obstacles that prevented the straightforward adoption of
\begin{change}
$ABT$\xspace
\end{change}
for automatically generating system test suites,
\item discuss the technical improvements that we made to overcome those obstacles, and that led to $ABT_{2.0}$\xspace, a system test case generator \begin{change} that extends $ABT$\xspace to address\end{change}
the specific industrial process,
and
\item illustrate the lessons that we learned with our experience and that may generalise to other commercial applications that share the characteristics of our study.
\end{inparaenum}
The \begin{change} study\end{change} reported in this paper is the result of a one-year \begin{change}pilot\end{change} technology-transfer project.
\begin{change}
It follows an exploratory design
because of the initial lack of clear propositions and hypotheses on the potential constraints that could limit the effectiveness of $ABT$\xspace in the considered industrial setting.
\end{change}
During the project, we faced several challenges and learned several lessons
about \emph{scalability}, \emph{test reporting} and \emph{oracles} that are specific to the problem of automatically generating test cases, and that represent major barriers to the adoption of automated system test case generation tools.
Interestingly, we also found that we could largely or totally address the most relevant issues with domain-tailored solutions that adapt well to the specific business oriented applications considered in our project, thus reducing the need for difficult-to-find general solutions that would have significantly delayed the adoption of the test generator.
In the light of the issues that we observed during the project, we extended the tool $ABT$\xspace with new solutions that exploit the domain specific characteristics of the applications under test, to produce an effective test generation tool, $ABT_{2.0}$\xspace, tailored for testing business oriented applications.
Based on the results of our experiments with an ERP application considered in our project, \begin{change}the study reported in\end{change} this paper provides \begin{change}initial\end{change} empirical evidence that our extensions are the key ingredients to turn \begin{change}our\end{change} academic test case generation tool \begin{change}$ABT$\xspace\end{change} from an ineffective to an effective solution for testing industrial business oriented applications.
Our experience refers to a specific system testing technology and a specific product under test, \begin{change} and cannot be directly generalised. However,\end{change}
the challenges that we faced and the solutions that we designed can be generally relevant for both researchers and practitioners, who might think to apply similar strategies in different contexts to make automatic system testing more effective.
In a nutshell, this paper contributes to the state of the art by:
\begin{itemize}
\item identifying some important challenges to automatically generating system test suites in the context of industrial software systems;
\item proposing original extensions of the test generator $ABT$\xspace that improve the effectiveness of the approach for automatic system testing of industrial business oriented applications. \begin{change}While the core algorithm of $ABT$\xspace refers to our previous work~\cite{Mariani:Autoblacktest:ICST:2012}, the extensions reported in this paper are entirely novel contributions that we present for the first time (Sections~\ref{sec:strategy} and~\ref{sec:oracles}). \end{change}
The proposed extensions
improve the scalability of the GUI exploration strategy of $ABT$\xspace, produce effective test reports during the test generation process, and support the definition of domain specific test oracles;
\item presenting the experience of exploiting the proposed extensions in the context of a case study concerned with generating tests for a business oriented application developed by our industrial partner.
\end{itemize}
This paper is organized as follows. \begin{change}Section~\ref{sec:abt} overviews $ABT$\xspace, the technology that we identified as a viable test case generation tool for the target industrial case.\end{change}
Section~\ref{sec:challenges} discusses the main challenges in introducing automatic test case generation in an industrial development process by referring to our experience with a medium-size company that produces software solutions on demand. The section describes the industrial context of the technology transfer project, the challenges that we faced, and the domain-specific opportunities offered by the application under test.
Sections~\ref{sec:strategy} and~\ref{sec:oracles} discuss the solutions to the technical and organisational challenges, respectively. These solutions led to $ABT_{2.0}$\xspace, and to the extension of $ABT$\xspace to business oriented applications that meet the industrial requirements of our project. The sections present the new features, and discuss experimental evidence of their effectiveness.
Section~\ref{sec:results} distills some key lessons learned that can be useful to researchers and practitioners working on automatic system testing. \begin{change2}Section~\ref{sec:threats} discusses the main threats to validity.\end{change2}
Section~\ref{sec:related} surveys the related research efforts, and Section~\ref{sec:conclusions} summarizes the contributions of the paper and outlines our current research plans. \section{AutoBlackTest}
\label{sec:abt}
AutoBlackTest ($ABT$\xspace) is the test generation technology that we referred to in our project. In the first part of \begin{change}our exploratory case study\end{change}, we used $ABT$\xspace to gather initial data about the extent of the applicability and the limitations of a representative state-of-the-art test generator when challenged with the industrial applications considered in the \begin{change}pilot\end{change} project. \begin{change}Later in the project, we used $ABT$\xspace\end{change} as a platform to concretely develop and experiment with new test generation strategies that we designed to exploit the opportunities identified in the project.
Our initial experience with introducing $ABT$\xspace in the industrial context highlighted both the challenges and the opportunities that we discuss in Section~\ref{subsec:challenges}, and led to $ABT_{2.0}$\xspace, which enriches $ABT$\xspace with new functionalities that meet both technical and organisational industrial requirements.
In this section we introduce $ABT$\xspace to make this paper self-contained. The interested readers can refer to~\cite{Mariani:Autoblacktest:ICSE:2011,Mariani:Autoblacktest:ICST:2012,Mariani:GUI:STVR:2014} for a comprehensive discussion of $ABT$\xspace.
$ABT$\xspace generates system test cases for applications that rely on GUI driven interactions based on Q-Learning~\cite{Sutton:IRL:1998}, a learning algorithm that $ABT$\xspace uses to steer the testing activity towards the most relevant functionalities of an application.
In this paper, we refer to the $ABT$\xspace Q-Learning exploration strategy as \emph{Reinforcement Learning based Strategy (RLS)}.
Q-Learning is extensively used to address the problem of agents that learn how to interact with unknown environments.
Q-learning agents learn how to interact with the environment by iteratively selecting and executing actions, and updating a model of the system that estimates the utility of the actions according to the reactions of the environment.
Q-learning agents
interleave random and model-based selection of actions, to take advantage of the incrementally built models, and increasingly explore relevant portions of the environment.
The $ABT$\xspace Q-learning agent computes the utility of actions as the impact of the actions on the GUI, based on the intuition that the impact of an action on the state of the application should produce observable effects.
Actions that produce relevant unseen transitions in the GUI, for instance actions that lead to successfully submitting a complete registration form,
likely correspond to important interactions, and are thus given high utility values. In contrast, actions that produce negligible or already seen transitions, for instance actions that repetitively lead to incomplete forms or to error windows that bring the application back to its initial state, likely correspond to scarcely relevant interactions, and are thus given low utility values.
By rewarding actions based on their impact on the GUI, the $ABT$\xspace Q-learning agent steers the testing activity towards combinations of actions that correspond to important interactions, and reduces the amount of repetitive actions with respect to selecting actions with a purely random process.
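A schematic Python sketch of the resulting update rule is given below. The actual reward function of $ABT$\xspace estimates the impact of an action on the GUI state~\cite{Mariani:Autoblacktest:ICST:2012}; the widget-difference proxy, the learning rate, and the discount factor used here are our simplifications.

\begin{verbatim}
from collections import defaultdict

Q = defaultdict(dict)  # Q[state][action] -> estimated utility of the action

def gui_difference(widgets_before, widgets_after):
    """Reward proxy: fraction of visible widgets that are new after the action."""
    new = set(widgets_after) - set(widgets_before)
    return len(new) / max(len(widgets_after), 1)

def q_update(state, action, next_state, alpha=0.9, gamma=0.5):
    """One Q-learning step; states are hashable snapshots of the visible widgets."""
    reward = gui_difference(state, next_state)
    best_next = max(Q[next_state].values(), default=0.0)
    old = Q[state].get(action, 0.0)
    Q[state][action] = old + alpha * (reward + gamma * best_next - old)
\end{verbatim}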
The $ABT$\xspace testing process initializes the Q-learning model to the home page of the application, and builds interactions (i.e., test cases) as sequences of episodes, where the number of sequences and the length of each sequence are parameters.
Each episode starts from a random state of the current Q-learning model, and executes a fixed number of actions that are selected according to the $\epsilon$-greedy policy.\footnote{Q-Learning supports several policies; the $\epsilon$-greedy policy has been demonstrated to be effective for test case generation~\cite{Mariani:GUI:STVR:2014}.}
The $\epsilon$-greedy policy alternates exploration and exploitation: exploration selects actions never executed before, while exploitation executes the most useful action according to the knowledge accumulated so far by the agent.
In particular, when $ABT$\xspace is in a state not in the Q-learning model yet, the policy selects a random action (exploration).
When $ABT$\xspace is in a state already in the Q-learning model, the policy executes a random action with probability $\epsilon$ (exploration) and the action with the highest Q-value according to the model with probability $1-\epsilon$ (exploitation), where $\epsilon$ is a user-provided parameter.
Executing actions may require some data, for instance, editing a textfield requires identifying the data to be entered in the field.
$ABT$\xspace determines values by using a catalog of values that associates labels, such as \texttt{email} and \texttt{date}, with a set of values, such as \texttt{[email protected]} and \texttt{20-04-2019}. When interacting with an input widget, $ABT$\xspace analyzes the GUI to determine the label that best describes the values expected by the input widget~\cite{Becce:Widget:FASE:2012}. For instance, $ABT$\xspace can determine that an email address should be entered in an input field from the presence of the label \texttt{email} next to the input widget. When entering data, $ABT$\xspace randomly selects a value from the set of values associated with the label, \texttt{email} in the example. When the label is not present in the catalog, $ABT$\xspace selects a string from a default set of values.
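The catalog lookup can be sketched as follows; the substring matching is a simplification of the GUI analysis of~\cite{Becce:Widget:FASE:2012}, and the fallback values are our choice.

\begin{verbatim}
import random

CATALOG = {                        # label -> admissible values
    "email": ["[email protected]"],
    "date":  ["20-04-2019"],
}
DEFAULT_VALUES = ["foo", "a1b2c3"]  # used when no label matches

def pick_input_value(widget_label, rng=random):
    """Choose a value for an input widget from the label-indexed catalog."""
    for label, values in CATALOG.items():
        if label in widget_label.lower():  # simplified label matching
            return rng.choice(values)
    return rng.choice(DEFAULT_VALUES)
\end{verbatim}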
Software applications may include complex functionalities that are hard to successfully execute using random exploration only. For instance, an application may include a form that can be successfully submitted only by filling several fields. An execution that enters values in all the fields of the form and then submits the form has little probability to be produced randomly. To address these cases, $ABT$\xspace supports the execution of complex actions, which are short workflows that execute a sequence of actions according to a specific strategy. For instance, when an application displays a large form, a complex action may facilitate submitting the form by entering a value in each field and then clicking the submit button.
$ABT$\xspace considers the execution of complex actions before applying the standard \mbox{$\epsilon$-greedy} policy. In particular, if the current state of the application allows to execute a complex action, $ABT$\xspace selects it with probability $\textit{pcomplex}$. If the complex action is not executed, $ABT$\xspace applies the regular \mbox{$\epsilon$-greedy} policy.
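Putting the pieces together, the action-selection policy can be sketched as follows. The functions that enumerate the enabled simple and complex actions are placeholders, while $\epsilon=0.7$ and $\textit{pcomplex}=0.5$ match the settings reported below.

\begin{verbatim}
import random

def select_action(state, Q, enabled, enabled_complex,
                  epsilon=0.7, p_complex=0.5, rng=random):
    """ABT-style selection: try a complex action first, then epsilon-greedy."""
    if enabled_complex and rng.random() < p_complex:
        return rng.choice(enabled_complex)        # e.g., fill-and-submit a form
    if state not in Q or rng.random() < epsilon:  # exploration: random action
        return rng.choice(enabled)
    return max(Q[state], key=Q[state].get)        # exploitation: highest Q-value
\end{verbatim}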
The $ABT$\xspace tool is implemented in Java and integrates IBM Functional Tester~\cite{IBM:FuncTester:WEBSITE}, a commercial capture-and-replay tool, to interact with the GUI of an application. $ABT$\xspace supports the same GUI frameworks as IBM Functional Tester. The experience reported in this paper exploits the support for .NET applications. $ABT$\xspace also uses the free library TeachingBox~\cite{Ertel:TeachingBox:ICAR:2009} to handle the Q-learning model.
In our industrial experience, we executed $ABT$\xspace using the $\epsilon$-greedy policy with \mbox{$\epsilon=0.7$} and with \mbox{$\textit{pcomplex}=0.5$}.
We initialized the catalog of input values with \mbox{35 entries} comprising valid user names, emails and data item identifiers, for instance, the identifier of an invoice, to satisfy the input fields with domain specific constraints, and used random strings for the unconstrained input fields.
The testing process consisted of \mbox{50 episodes}, each one composed of 30 actions. \begin{change2}We determined the values of the parameters based on both our experience with the tool~\cite{Mariani:GUI:STVR:2014} and a trial and error fine-tuning process executed on the target industrial application.\end{change2}
In the next sections, we discuss how we adapted
\begin{change}$ABT$\xspace to face the challenges of business oriented applications, \end{change}
ending up with extending $ABT$\xspace to $ABT_{2.0}$\xspace by redesigning the GUI exploration strategy and extending its test reporting capabilities.
\section{Testing Business Oriented Applications: Challenges and Strategies}
\label{sec:challenges}
In a recent joint project between the LTA research laboratory\footnote{LTA -- Laboratory for software Testing and Analysis, Universit\`a degli studi di Milano Bicocca} and an industrial partner\footnote{A medium company producing customized software solutions on demand operating in the area of North Italy, whose identity is not disclosed for non-disclosure agreement policies}, we evaluated the benefits of automating the generation of system test suites for business applications that rely on GUI-driven interactions.
In this section we report the technical obstacles that derive from the complexity of the target application, and the organizational issues due to the specificity of the software process of the industrial partner.
We discuss obstacles and opportunities offered by the distinctive nature of the business oriented application targeted in the project, by focusing on scenarios that are common to many diverse applications and domains, and that can be exploited beyond the scope of our project to effectively automate the generation of system test cases.
We introduce the business oriented application that we targeted in the project, by presenting an essential overview and discussing the challenges and opportunities for automatically generating system test suites that are effective in the business context.
\subsection{Business Oriented Applications: A Sample}
\label{sec:challenges:subject}
We conducted our \begin{change}exploratory \end{change} study on an application that our industrial partner selected as representative of typical customized software applications developed on demand for third parties.
In this section, we outline the distinctive characteristics of the selected application, which we refer to as $ERP$\xspace here on. In a nutshell, $ERP$\xspace is an application that handles the commercial process of a company by managing data entities like orders, invoices, shipping activities, and maintenance requests.
We illustrate our industrial experience using a simplified GUI with the same characteristics of the GUI of the original application, and using examples with realistic although not real data, due to confidentiality reasons that do not allow us to disclose the original application.
We present the application $ERP$\xspace through its GUI and the corresponding test plans as designed by the company testers according to the customers' requirements, to illustrate the requirements for the automatic test generator.
The GUI of $ERP$\xspace is organized by data entities, following a typical interaction with the GUI, where users first select the type of entity, such as orders or invoices, and then manipulate it.
Figure~\ref{fig:mockup}(a) shows the set of six main data entities through the (simplified and anonymised) $ERP$\xspace home page.
Buttons in the top of the home page lead to entity specific pages that offer the first tier action sets for the entity types.
Figure~\ref{fig:mockup}(b) shows the \emph{Invoices} page with the top tier actions for entity Invoices; these actions lead to other pages when selected, for instance the \emph{Edit Invoice} page illustrated in Figure~\ref{fig:mockup}(c).
$ERP$\xspace manages entities that include several data fields editable across multiple tabs, and editing an $ERP$\xspace entity may require long GUI interaction sequences that can terminate successfully (\emph{Save} button) or can abort (\emph{Close} button). $ERP$\xspace manages six types of entities, with an average of 20 fields per entity. Editing an entity requires interacting with five tabs on average.
\begin{figure}[ht!]
\center
\includegraphics[scale=.29]{figures/Mockup_a}\\
(a) Home page\\~\\
\includegraphics[scale=.29]{figures/Mockup_b}\\
(b) Invoice page\\~\\
\includegraphics[scale=.29]{figures/Mockup_c}\\
(c) Edit Invoice page
\caption{GUI screens of the sample application (adapted and simplified from the original user interface)}
\label{fig:mockup}
\end{figure}
New $ERP$\xspace releases are delivered with test plans composed of test objectives: behaviors to be exercised during testing, each consisting of interactions executed against the target application along with corresponding checks on the validity of the test outputs.
Test plans are organized in sections, one for each tested entity type.
A sample test objective for entity invoices is
\begin{quotation}
``when the \textit{new invoice} button is pressed, a form with five tabs and only empty fields must be shown to the user''
\end{quotation}
Test plans are stored as spreadsheets with a sheet for each entity type.
A test objective corresponds to a row in a sheet with three fields (columns): identifier, GUI interactions, and test checks. The identifier uniquely identifies the test case.
The GUI interactions are the set of interactions that characterise the test, for instance ``press the \textit{new invoice} button, \ldots''.
The test checks are a set of checks to determine the correctness of the results, for instance ``check that a form with five tabs and all empty fields is shown''.
The $ERP$\xspace test plans resemble the structure of many test plans used in small and medium size software companies that work with internally established though informally defined test procedures, and handle the test documentation with common back office tools, like spreadsheets.
\subsection{Challenges and Opportunities}
\label{subsec:challenges}
While integrating the test case generator $ABT$\xspace (AutoBlackTest~\cite{Mariani:Autoblacktest:ICST:2012}, the test generator that we described in Section~\ref{sec:abt}) in the process of our industrial partner,\footnote{
\begin{change}
Our target application, $ERP$\xspace, is developed in C\# with Microsoft graphical libraries.
$ABT$\xspace supports different programming languages, including C\# and the Microsoft graphical libraries used in $ERP$\xspace.\end{change}
} we faced both technical and organisational challenges:
\begin{inparaenum}[(i)]
\item the \emph{size} of the $ERP$\xspace GUI opens a variety of interaction choices, far more than the classic benchmarks used in
\begin{change} our previous\end{change}
academic validation, with a strong impact on the effectiveness of $ABT$\xspace,
\item many failures derive from incorrect results that can be observed only with non trivial \emph{test oracles},
\item the commercial practice of the industrial partners requires \emph{test reports} that consistently match the spreadsheet-style test plans maintained by the organization, which are not directly compatible with the outputs produced by $ABT$\xspace.
\end{inparaenum}
While size and oracles are technical challenges, generating test reports that meet the customers' standards is mainly an organisation-specific issue,
\begin{change}
which
nonetheless must be taken into careful consideration to successfully transfer research to industry.
\end{change}
\begin{change} These
\end{change}
challenges generalize to many industrial applications beyond the application domain considered in this paper:
\begin{inparaenum}[(i)]
\item the difficulty of test generators in coping with huge, deep and non-uniform program state spaces is well-known,
\item automatic test generators suffer from the test oracle problem, since they are seldom able to automatically determine the exact expected behavior for the functionalities tested by each test case,
\item automatic test generators often fail to meet the required test documentation standards.
\end{inparaenum}
\begin{change}
This section contextualises the impact of these challenges in our project and discusses some opportunities that can be exploited to address them. In fact,\end{change}
though general solutions seldom exist, industrial business oriented applications can be addressed with domain specific solutions that derive from recurring design patterns, which relate the GUI structure to both the business logic and the test requirements of these applications.
\paragraph{Size of the Interaction Space}
The classic structure of business oriented applications and the many choices that they offer at each interaction step produce an interaction space that is both extremely large and strictly structured.
As an example, at each interaction step $ERP$\xspace users can interact with~$95$ different menus and buttons available in the top menu bar, and with many widgets that become available in the many displayed windows. Since the top menu bar is continuously available for interactions, the \emph{space of possible interactions grows exponentially} with the length of the interaction sequence, making a dense exploration of the execution space impossible.
The~$95$ permanently enabled actions in the $ERP$\xspace top bar alone produce a space of test sequences of at most five steps that contains more than 7~billion cases ($95^5 \approx 7.7 \times 10^9$), and a space several orders of magnitude larger when also considering the actions that become available in the different windows after selecting the menu items.
Interaction sequences with more than five steps, which are common in $ERP$\xspace, exponentially widen the already giant execution space.
An effective test generation strategy must select a relevant subset of test cases.
\begin{change}
The main problem of testing GUIs is the huge space of interaction sequences that derives from the combination of windows and actions~\cite{banerjee:gui-testing:ist:2013}. Current approaches address this problem by relying on the availability of some GUI models to assist the identification of relevant interaction sequences. For example, White and Almezen concentrate on \emph{complete interaction sequences}, defined as sequences that correspond to activities of direct interest to the user, in turn defined as activities that produce observable effects on the surrounding environment of the GUI, such as changes in memory, in the behaviours of some peripheral devices, or in the underlying system or application software~\cite{white:responsabilities:issre:2000}.
White and Almezen's approach assumes that testers provide a finite-state machine model that identifies the sets of complete interaction sequences on which testing shall focus separately. Similarly, Paiva et al.\ require hierarchical state machines to steer the test selection strategy~\cite{paiva:hierarchical:asm:2005}.
Yet, Memon et al.\ require testers to annotate GUI events with pre- and post-conditions, and identify the initial and goal states that the test generator shall focus on~\cite{memon:hierarchical:tse:2001}.
Finally, Saddler et al.\ also rely on pre-defined constraints to direct the test case generation process~\cite{Saddler:EventFlowSllider:ATEST:2016}.
In our project, we could not adopt these solutions, due to the expertise, cost, and time required to define the models that they rely on.
On the other hand, we successfully took advantage of the characteristics of the GUIs of the target applications, GUIs that are organised by data entities, as discussed in the previous section.
We rely on the characteristic organisation of the target GUIs to automatically partition the interaction sequences into relevant and irrelevant sequences.
The solutions that we discuss in this paper explicitly exploit the specific characteristics of the application domain, and as such, our work radically differs from the above-surveyed approaches, which generate test cases for general purpose productivity applications, like Microsoft Windows and the GVISUAL multimedia database~\cite{white:responsabilities:issre:2000}, Microsoft WordPad~\cite{memon:hierarchical:tse:2001}, and Microsoft Notepad~\cite{paiva:hierarchical:asm:2005}.
\end{change}
We defined an industrially-acceptable solution for the considered application domain, by relying on both the structured organization of the GUI and by carefully addressing operations that depend on long interaction sequences in forms with many input widgets.
The \emph{strictly structured organization of the GUI} drives the common interactions with the application, thus providing important guidelines to generate meaningful test sequences, if properly handled.
For instance, $ERP$\xspace users follow goal-oriented interaction patterns that correspond to the GUI organization.
The top menus partition the interaction space according to the primary goal of the interactions: for instance, users add orders by navigating across windows enabled after selecting the order menu, and are unlikely to move to windows entered through top tier actions other than the order menu before completing the overall add operation.
Thus the execution space contains many interaction sequences that do not correspond to reasonable interactions, and test case generators that explore the execution space without considering this information produce enormous amounts of almost useless test cases.
\newtcolorbox{opportunity}{
colback=gray!5,
colframe=gray!70,
boxrule=0.4mm,
left=1.5mm,
right=1.5mm,
top=1.5mm,
bottom=1.5mm,
fonttitle=\bfseries,
title={Opportunity}
}
\medskip\begin{opportunity}
\paragraph{Graphical menus partition the functional logic}
In most business oriented applications, top tier menus gather the operations that manipulate each type of data entity, and thus the menus partition the functional logic of the application into areas.
For example, the menus in Figure~\ref{fig:mockup} indicate that the functionalities to manipulate invoices are accessed with action sequences that start with menu \emph{Invoices}.
This common type of GUI structure is designed to facilitate the users to easily navigate and operate on different data entities.
Test generators can exploit this structure to mitigate the combinatorial explosion of interaction sequences, by both focusing on interaction sequences that start from distinct graphical menus and ignoring sequences that jump across different menus before completing any operation.
This simple heuristic, sketched right after this box, can largely reduce both the size and the complexity of the execution spaces to be sampled.
\end{opportunity}
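A minimal sketch of this pruning heuristic follows; the \texttt{top\_menu} attribute, which records the top tier menu an action belongs to, is our assumption about how actions could be annotated.

\begin{verbatim}
def within_single_menu(action_sequence):
    """Keep only sequences that never jump across top-tier menus."""
    menus = {a.top_menu for a in action_sequence if a.top_menu is not None}
    return len(menus) <= 1
\end{verbatim}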
We frame our second observation
by defining the concept of \emph{minimal interaction sequence} for a meaningful use of a system functionality as the sequence of GUI actions that must necessarily be executed to exercise a functionality relevant to the user.
The goal of testing is to maximise the set of user-relevant functionalities exercised when executing the test suite.
State-of-the-art randomized test generation strategies, \begin{change}which range from purely random strategies~\cite{Android:Monkey:WebSite,Bertolini:GuiTestingEvaluation:ICST:2010} to approaches interleaving random decisions with heuristic decisions~\cite{Mariani:Autoblacktest:ICST:2012},\end{change}
are less and less effective in producing test cases that exercise all relevant functionalities when the length of the corresponding minimal interaction sequences grows.
For example, state-of-the-art randomized test generation strategies can effectively generate test cases that exercise the functionalities that correspond to minimal interaction sequences of two actions, for instance simple CRUD operations like deleting an invoice. They are much less effective in generating test cases that exercise functionalities that correspond to long minimal interaction sequences, for instance CRUD operations like modifying an invoice.
For instance, the minimal interaction sequence for deleting an invoice consists of selecting the menu \emph{Invoices} in the screen in Figure~\ref{fig:mockup}(a), and then clicking \emph{Delete} in the screen in Figure~\ref{fig:mockup}(b).
In contrast, the minimal interaction sequence for modifying an invoice consists of selecting the menu \emph{Invoices} in the screen in Figure~\ref{fig:mockup}(a), clicking \emph{Edit} in the screen in Figure~\ref{fig:mockup}(b), executing a sufficient number of \emph{fill-in} interactions with the input fields, and then clicking \emph{Save} in the screen in Figure~\ref{fig:mockup}(c).
In a nutshell, state-of-the-art randomized test generation strategies can effectively produce test cases for the 'delete invoice' operation, but not for the 'modify invoice' operation.
To give a concrete feeling of the impact, the length of the minimal interaction sequence of operations that modify data in $ERP$\xspace ranges from 8 to 38 steps.
When the probability of picking up a specific action is close to 1\%, such interaction sequences are singularities in the execution space, hard if not impossible to exhaustively cover with randomized strategies.
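A back-of-the-envelope estimate (ours, under the uniform-selection assumption above) makes the point explicit:
\[
P(\text{specific minimal sequence of length } m) \approx p^{m},
\qquad p \approx 0.01 \;\Rightarrow\; P \approx 10^{-16} \ \text{for } m = 8 .
\]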
For instance, a ten-hour run of our test generator $ABT$\xspace against $ERP$\xspace generated test cases that exercised most of the short minimal interaction sequences, but hardly executed any operation whose minimal interaction sequence involved more than three actions.
In particular, $ABT$\xspace easily explored different selection orders of the graphical menus, but never executed the interaction sequences to modify the data entities in $ERP$\xspace.
\medskip\begin{opportunity}
\paragraph{Long interaction sequences derive from many input widgets}
Operations that depend on long interaction sequences, such as operations to create and modify data entities, are very challenging for automatic test case generators.
Long interaction sequences often derive from forms with many input widgets to be filled-in. For instance, the simple input form to create a new invoice of Figure~\ref{fig:mockup} (c) includes eight input fields, and input forms with tens of input fields are common in industrial business oriented applications.
Handling input forms as special entities (for instance with a fill-and-submit action like the one sketched after this box) can largely improve the effectiveness of test case generators, by increasing the probability that a form is completely filled-in before being submitted.
\end{opportunity}
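A fill-and-submit complex action can be sketched as follows, reusing the catalog lookup sketched in Section~\ref{sec:abt}; the form API is hypothetical.

\begin{verbatim}
import random

def fill_and_submit(form, rng=random):
    """Complex action: enter a value in every input field, then submit.
    pick_input_value is the catalog lookup sketched earlier."""
    for widget in form.input_widgets:
        widget.set_text(pick_input_value(widget.label, rng))
    form.submit_button.click()
\end{verbatim}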
\paragraph{Test Oracles}
Effective test cases pair test inputs with relevant test oracles that can detect misbehaviours with respect to specific users' expectations.
\begin{change}
Several state-of-the-art test case generators, such as Pex~\cite{Tillmann:pex:TAP:2008}, JBSE~\cite{Braione:enhancing:esecfse:2013}, Evosuite~\cite{Fraser:whole:2013}, BiTe~\cite{Baluda:BidirectionalSymbExe:TSE:2016}, and AFL~\cite{Gutmann:afl:login:2016}, focus on computing test suites that exercise the set of executions of the applications under test as thoroughly as possible, mostly referring to code coverage metrics as indicators of testing thoroughness.
These approaches do not generate test oracles, and can thus reveal only \emph{blatant failures}, that is, uncaught exceptions and system hangs, which represent
\end{change}
a small fraction of the failures that software companies aim to reveal with in-house testing. \begin{change2}Some techniques generate assertions to reveal regression failures in future releases of the software. Such assertions are useful for regression testing, but not for testing new and extended functionalities.\end{change2}
For example, the large majority of the test requirements in the original $ERP$\xspace test plan include checks of the correctness of the execution that require inspecting the GUI and the database state, and cannot be identified by simply detecting uncaught exceptions and system hangs.
Common examples of checks included in the original $ERP$\xspace test suite are: to make sure that graphical menus, buttons, and text fields appear with correct labels and can or cannot be modified in given screens and application states; to ensure that data change operations result in correct updates of the corresponding tables in the database; to check that the application visualizes the expected subsets of data items from the many items in the database, according to filters set in the input forms.
Checks like these go beyond the revealing capabilities of simple implicit oracles that reveal only uncaught exceptions and system hangs.
\begin{change}
The problem of designing cost-effective test oracles has been largely investigated in the last decades~\cite{Barr:Oracle:TSE:2015}.
Oracles are often provided as assertions embedded either in the test scripts, as in Junit, or in the code in the form of contracts~\cite{Meyer:Eiffel:jss:1988,Briand:Contracts:spe:2003}.
Many approaches generate oracles from formal or semi-formal specifications, when available~\cite{Spivey:Z:Prentice-Hall:1989}.
Yet other approaches infer test oracles from informal specification in natural language~\cite{blasi:javadoc:issta:2018,Bottger:Reconciling:2001}.
Regression testing derives oracles by monitoring the execution of the software under test~\cite{Xie:Regression:ecoop:2006}, while dynamic analysis infers invariants~\cite{Ernst:daikon:icse:1999} or models~\cite{Lorenzoli:bct:icse:2008} that can be used as test oracles.
Unfortunately none of these approaches applied well to our context: code was given without either embedded assertions or any formal specifications, regression oracles were not sufficient, and dynamic analysis oracles were largely insufficient to represent the properties of the test plans of our industrial partner.
Nonetheless, although we could not automate the test oracles,
we mitigated the cost of manually checking the test results while executing the test suites, by automatically producing test reports with data about the effects on both the GUI widgets and database tables. These data were relevant for the manual inspection of the software requirements, and for post mortem analysis of the failures.
\end{change}
\medskip\begin{opportunity}
\paragraph{Small sets of outputs capture most relevant classes of non-blatant failures}
There often exists a well-identified and relatively small set of output data that captures the most relevant classes of \emph{non-blatant} failures. In fact, many test requirements are concerned with checking the correctness of either attributes of widgets that frequently appear in the GUI after some operation, for instance graphical menus, buttons, text fields and data grids, or database changes produced as a result of an operation.
For example, an entry of the $ERP$\xspace test plan requires checking that selecting the menu \emph{Invoices} leads the GUI to display an editing panel \emph{Invoices} that visualizes all the buttons shown in Figure~\ref{fig:mockup} (b), and that filling the input form of Figure~\ref{fig:mockup} (c) with valid data and saving it leads to inserting a new record in the database table \emph{INVOICES}, with data consistent with the data in the form.
Thus, augmenting the output of the test generator with custom test reports that include the relevant data from both affected GUI widgets and affected database tables
could effectively assist testers in the identification of the relevant failures.
\end{opportunity}
\paragraph{Test Documentation}
In the industrial context of customised software developed on demand, such as the $ERP$\xspace case, a fundamental requirement for the test reports is to map the test results to the test objectives in the test plans (which in turn map to the requirements).
In fact, in many industrial sectors, software companies ask for test reports that help testers easily identify both tested and not-yet-tested functionalities, to efficiently plan the test campaign and the deployment.
Reports that include \emph{detailed information about the tested functionalities} are necessary to integrate a test generator in an industrial testing process.
\begin{change}
The traceability between requirements and test cases is well supported when test cases are derived from the requirements~\cite{Ramesh:traceability:re:1993,Lago:Traceability:jss:2009,Winkler:traceability:sosym:2010}, but becomes a hard problem when test cases are automatically generated by sampling the input space on the implementation.
\end{change}
State-of-the-art test case generators document the generated test suites as
\begin{inparaenum}[(i)]
\item executable test scripts that instantiate the test inputs,
\item uncaught exceptions and system hangs experienced while executing the test scripts, and
\item statement coverage.
\end{inparaenum}
Unfortunately, mapping the generated test cases to the system functionality by manually replaying each test script almost nullifies the benefits of using a test generator, and deducing the missed functionality by identifying inputs that lead to uncovered code can be extremely hard.
\medskip\begin{opportunity}
\paragraph{The mapping between actions on widgets and test requirements suggests a mapping from test results to system functionalities}
The neat mapping between actions executed on the widgets in the GUI, such as clicks on graphical menus and buttons, and relevant classes of test requirements, such as validating GUI screens and database changes, provides useful information for test reports. Pairing the actions executed on GUI widgets with relevant results on both other widgets and database tables simplifies the identification of the tested and untested test objectives, as well as the comparison between test results and test oracles.
\end{opportunity}
\bigskip
\begin{change}
In summary,
\end{change}
our experience highlights that
\begin{inparaenum}[(i)]
\item graphical menus partition the functional logic of many business oriented applications,
\item long interaction sequences often derive from forms with many input widgets to be filled in,
\item often a small set of output data captures the most relevant classes of \emph{semantic} failures, and
\item the mapping between actions on widgets and test requirements provides useful information for mapping test results to system functionalities.
\end{inparaenum}
\section{GUI Exploration Strategy}
\label{sec:strategy}
We addressed the main technical challenges of introducing automatic test case generation in industrial settings by exploiting the opportunities offered by
\begin{inparaenum}[(i)]
\item the partition of the functional logic induced by the menus defined in the GUI,
\item the relations among action sequences induced by complex input forms,
\item the relation between outputs and semantic faults, and
\item the mapping between test results and system functionalities induced by the widget--test requirements mapping.
\end{inparaenum}
In this section, we present the new strategy that we defined to efficiently explore the execution space through the GUI, by exploiting the first two opportunities listed above, and discuss experimental results that confirm the effectiveness of the new strategy for the considered class of applications. Section~\ref{sec:oracles} presents the contributions related to the other opportunities.
\subsection{The SSRLS GUI Exploration Strategy}
$ABT_{2.0}$\xspace extends $ABT$\xspace with a new \emph{Semi Systematic RLS (SSRLS) exploration strategy} that substitutes the $ABT$\xspace RLS strategy with
a systematic exploration of the GUI in two cases:
\begin{itemize}
\item \emph{Menu-driven constraints:} SSRLS constrains RLS to enforce a \emph{systematic partitioning}~\cite{Yilmaz:Combinatorial:tse:2013,Yilmaz:combinatorial:computer:2014} of the generated test cases in groups, based on the items in the graphical menus of the GUI, such that each group of test cases starts by accessing a graphical menu and then exercises only actions associated with the initially-accessed menu. Thus, SSRLS prevents the useless and time consuming exploration of the many irrelevant interaction sequences that jump across different graphical menus.
\item \emph{Input-form constraints:} When dealing with input forms, SSRLS supersedes RLS and executes complex actions that properly fill out and submit the input forms, thus increasing the coverage of the functional logic of the application under test, and avoiding the useless and time consuming exploration of too many incorrect completions of forms before correct submissions.
\end{itemize}
In a nutshell, SSRLS combines the ability of reinforcement learning to quickly reach many different GUI states with both the optimal partitioning of the execution space that systematic testing offers and the handling of the implicit dependencies of form-based windows that complex actions offer.
Algorithm~\ref{algo:ssrls} presents the SSRLS exploration strategy by highlighting the steps added to the $ABT$\xspace RLS strategy:
Light grey (steps~\ref{algo:ssrls:ifMenuAction},~\ref{algo:ssrls:firstAction},~\ref{algo:ssrls:sys1:disable} and~\ref{algo:ssrls:sys1:enable}) and dark grey background (steps~\ref{algo:ssrls:sys2:check} and~\ref{algo:ssrls:sys2:action}) highlight the statements that implement the partitioning of the execution space and the ad-hoc handling of input forms, respectively.
The main loop (steps \ref{algo:ssrls:mainLoop:begin}--\ref{algo:ssrls:mainLoop:end}) generates an episode (a test case) per iteration until the testing budget is over.
The algorithm initializes episodes with a sequence of actions that reach a randomly selected state of the model learned so far, that is, a state observed while executing a previous episode (step~\ref{algo:ssrls:initState}).
In particular, at the first iteration of the loop the model is empty, and the algorithm generates an episode from the main page of the application.
In this initial state, no menu action has been executed yet, making the condition of step~\ref{algo:ssrls:ifMenuAction} evaluate to true, and the algorithm augments the episode with a menu action (step~\ref{algo:ssrls:firstAction}) followed by a sequence of GUI actions selected among the ones that become dynamically available on the GUI (loop at steps \ref{algo:ssrls:episodeLoop:begin}--\ref{algo:ssrls:episodeLoop:end}).
\begin{algorithm}[!th]
\scriptsize
\caption{\small SSRLS: the Semi Systematic RLS}
\label{algo:ssrls}
\begin{algorithmic}[1]
\Require RLS: the Reinforcement Learning based Strategy
\While{the testing budget is not over}\label{algo:ssrls:mainLoop:begin}
\State $current\_state = $ \Call{RLS}{goToARandomState}\Comment{reach a random state of the Q-learning model}\label{algo:ssrls:initState}
\If{\colorbox{light-gray}{MenuAction not done yet}} \label{algo:ssrls:ifMenuAction}
\State \colorbox{light-gray}{$current\_state = $ \Call{RLS}{doMenuAction}}\Comment{select a menu}\label{algo:ssrls:firstAction}
\EndIf
\State \colorbox{light-gray}{\Call{RLS}{disableMenuActions}}\Comment{actions on graphical menus cannot be anymore selected by RLS}\label{algo:ssrls:sys1:disable}
\For{1 $\leadsto$ number of actions per episode}\label{algo:ssrls:episodeLoop:begin}
\If{\colorbox{dark-gray}{$current\_state$ is InputForm AND with some probability $\pi$}}\label{algo:ssrls:sys2:check}
\State \colorbox{dark-gray}{$current\_state =$ do\emph{FillAndSubmit}}\Comment{Fill and submit the input form}\label{algo:ssrls:sys2:action}
\Else \State $current\_state =$ \Call{RLS}{doAction}\Comment{Execute next action}\label{algo:ssrls:rlsAction}
\EndIf
\EndFor\label{algo:ssrls:episodeLoop:end}
\State \colorbox{light-gray}{\Call{RLS}{enableMenuActions}}\Comment{actions on graphical menus can again be selected by RLS}\label{algo:ssrls:sys1:enable}
\EndWhile\label{algo:ssrls:mainLoop:end}
\end{algorithmic}
\vspace{10pt}
Support to the partitioning of the execution space (light gray background): An episode starts with a menu action and then continues only with actions associated with the selected graphical menu.\\\\
Support to input forms (dark gray background): The input forms are entirely filled in and submitted with probability $\pi$.
\end{algorithm}
The new steps~\ref{algo:ssrls:sys1:disable} and~\ref{algo:ssrls:sys1:enable} of the algorithm disable the possibility of executing actions on the graphical menus immediately before the main \textit{for} loop, and re-enable the menu actions at the end of an episode, respectively.
These new steps guarantee that each generated test case starts with an access to a given graphical menu and exercises only actions associated with that graphical menu. If step~\ref{algo:ssrls:initState} of the algorithm executes an initial sequence of actions that already includes a menu action, step~\ref{algo:ssrls:ifMenuAction} of the algorithm prevents including additional menu actions in the current episode.
Thus, step~\ref{algo:ssrls:initState} of the algorithm can select the initial sequence of actions of an episode by navigating the current model, without any additional constraint, since the algorithm maintains the invariant that the current model contains only sequences of actions that previous episodes selected according to the SSRLS strategy.
The algorithm generates episodes of a fixed parametric length, and selects actions according to the original RLS strategy (step~\ref{algo:ssrls:rlsAction}), unless differently driven (the grey background steps that we discuss next).
Steps~\ref{algo:ssrls:sys2:check} and~\ref{algo:ssrls:sys2:action} of the algorithm control the generation of sequences of actions in the presence of input forms.
Step~\ref{algo:ssrls:sys2:check} of the algorithm checks if the current GUI state corresponds to a screen with an input form, and if so, it executes step~\ref{algo:ssrls:sys2:action} with some probability $\pi$. Step~\ref{algo:ssrls:sys2:action} of the algorithm fills out all the input fields of the form with predefined values before clicking on the submit button.
Step~\ref{algo:ssrls:sys2:action} exploits the $ABT$\xspace capability to execute a set of actions.
The probability $\pi$ is a parameter of the algorithm.
In the experiments reported in this paper, we set $\pi =0.5$, after an empirical evaluation of different values for $\pi$.
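To make the control flow of Algorithm~\ref{algo:ssrls} concrete, the following Python sketch mirrors its structure; the \texttt{rls} and \texttt{budget} objects and their methods are hypothetical stand-ins for the corresponding $ABT$\xspace components, not the actual implementation.
\begin{verbatim}
import random

def ssrls(rls, budget, actions_per_episode, pi=0.5):
    """Semi Systematic RLS: one episode (test case) per iteration.

    `rls` is a hypothetical interface to the reinforcement-learning
    strategy, exposing the operations used in Algorithm 1.
    """
    while not budget.is_over():
        # Reach a randomly selected state of the Q-learning model
        state = rls.go_to_a_random_state()
        # Menu-driven constraint: each episode starts from a menu
        if not state.menu_action_done:
            state = rls.do_menu_action()
        rls.disable_menu_actions()   # stay within the selected menu
        for _ in range(actions_per_episode):
            # Input-form constraint: fill and submit with probability pi
            if state.is_input_form() and random.random() < pi:
                state = state.fill_and_submit()  # complex action
            else:
                state = rls.do_action()          # plain RLS step
        rls.enable_menu_actions()    # menus selectable again next episode
\end{verbatim}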
\subsection{SSRLS effectiveness}
We extended the $ABT$\xspace prototype with SSRLS to validate the new approach on the industrial business oriented application made available by our project partner.
We investigated two research questions related to the effectiveness of SSRLS:
\begin{itemize}
\item RQ1: Does SSRLS \emph{thoroughly exercise} the functionality of the business oriented application under test?
We evaluate the capability of SSRLS to generate test cases that thoroughly exercise the functionality of the application under test both in absolute terms and in comparison to RLS. We measure the effectiveness of the test cases generated with both SSRLS and RLS based on
\begin{change}
the frequency with which they execute different classes of actions,
with particular attention to actions that depend on long interaction sequences.
\end{change}
\item RQ2: Does SSRLS contribute to satisfy the \emph{test objectives}?
We evaluate the quality of the generated test cases with respect to the test plan, both in absolute terms and in comparison to RLS. We measure the significance of the generated test cases as the percentage of test objectives
\begin{change}
(and thus corresponding requirements)
\end{change}
that the automatically generated test cases exercise.
\end{itemize}
We mitigated the impact of randomness by repeating each experiment five times, and reporting \begin{change2}mean values\end{change2}\begin{change}.
We could not negotiate a higher number of repetitions with our industrial partner, due to the cost of the experiments and the limited access to the ERP application.
The stability of the results across the repeated experiments convinced us of the significance of the results in the scope of our exploratory study, but the limited number of repetitions does not allow strong claims that our results generalise.
\end{change}
\subsubsection*{RQ1: Does SSRLS \emph{thoroughly exercise} the functionality of the business oriented application under test?}
We measure the effectiveness of SSRLS test suites by comparing the number of actions of each type that the test cases generated with SSRLS and RLS exercise, respectively. We identify five main \begin{change}classes of\end{change} actions that reflect the nature of GUI interaction:
\begin{itemize}
\item \emph{Menu} actions that interact with menus;
\item \emph{CRUD} actions that initiate a CRUD (Create, Read, Update, Delete) operation;
\item \emph{Input} actions that enter data in input fields;
\item \emph{SaveKO} actions that cancel submissions or submit invalid forms;
\item \emph{SaveOK} actions that submit valid forms.
\end{itemize}
To thoroughly exercise the functionalities of the application under test, it is important to execute both \emph{SaveKO} and \emph{SaveOK} actions, many of which are executed only with non-trivial combinations of \emph{Menu}, \emph{CRUD} and \emph{Input} actions that reach specific windows and states.
We measure the effectiveness of RLS and SSRLS test cases as the number of executed actions of each type, with specific attention to \emph{SaveKO} and \emph{SaveOK} actions.
We study the impact of the individual optimizations introduced in this paper, by considering three configurations for SSRLS:
\begin{itemize}
\item \emph{SSRLS-partitioning} that uses only the strategy that restricts the access to the menus; it corresponds to Algorithm~\ref{algo:ssrls}, but without the statements with a dark grey background;
\item \emph{SSRLS-fillForms} that uses the complex action specifically designed to deal with forms; it corresponds to Algorithm~\ref{algo:ssrls}, but without the statements with a light grey background;
\item \emph{SSRLS} that exploits both optimizations; it corresponds to Algorithm~\ref{algo:ssrls} with both the statements with light and dark grey background.
\end{itemize}
\begin{figure}[ht!]
\center
\begin{scriptsize}
\includegraphics[scale=.385]{figures/RLS}\\
\vspace{-50pt}
(a) RLS\\
\includegraphics[scale=.51]{figures/SSRLS_no_complex}\\
\vspace{-50pt}
(b) SSRLS-partitioning\\
\includegraphics[scale=.385]{figures/SSRLS_no_menu}\\
\vspace{-5pt}
(c) SSRLS-fillForms\\
\includegraphics[scale=.51]{figures/SSRLS}\\
\vspace{-25pt}
(d) SSRLS\\
\end{scriptsize}
\caption{Execution frequency of interaction sequences of different length}
\label{fig:interactions}
\end{figure}
We generated test cases in overnight sessions \begin{change}of six hours each, which is a practical timeframe for companies that use continuous testing solutions\end{change}, using each of the four strategies: ABT with RLS, and ABT with the three variants of the SSRLS strategy. Figure~\ref{fig:interactions} illustrates the actions executed with the different test suites as finite state models that show the sequences of GUI actions that the different test suites execute.
The states identify the type of executed action, and the transitions represent consecutively executed action types. The weights of the states indicate how often the test cases executed that action type, and the weights of the transitions indicate how often the test suite executed consecutive action pairs.
For instance, the weight $610$ of state Menu in Figure~\ref{fig:interactions}~(a) indicates that the RLS test suite executed Menu actions $610$ times, and the weight of the transition from state Menu to state CRUD indicates that the test suite executed CRUD actions immediately after Menu actions $227$ times.
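Models of this kind can be derived mechanically from the executed test suites. The following sketch, which assumes for illustration only that each test case is available as a list of action-type labels, counts the state and transition frequencies:
\begin{verbatim}
from collections import Counter

def action_type_model(test_suite):
    """Count how often each action type and each consecutive pair
    of action types occurs across a test suite.

    `test_suite` is a list of test cases, each a list of labels
    such as ["Menu", "CRUD", "Input", "SaveOK"].
    """
    states, transitions = Counter(), Counter()
    for test_case in test_suite:
        states.update(test_case)
        transitions.update(zip(test_case, test_case[1:]))
    return states, transitions

# Toy example with two test cases
suite = [["Menu", "CRUD", "Input", "SaveKO"], ["Menu", "Menu", "CRUD"]]
s, t = action_type_model(suite)
# s["Menu"] == 3, t[("Menu", "CRUD")] == 2
\end{verbatim}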
\begin{change2}
Table~\ref{tab:interactions:stddev} completes the data in Figure~\ref{fig:interactions} by reporting the mean (columns \emph{m} -- values reported in Figure~\ref{fig:interactions}) and the standard deviation (columns \emph{s}) of the execution frequency for the four considered strategies (rows \emph{RLS}, \emph{SSRLS-partitioning}, \emph{SSRLS-fillForms}, \emph{SSRLS}) and the five classes of actions (columns \emph{Menu}, \emph{CRUD}, \emph{Input}, \emph{SaveOK}, \emph{SaveKO}).
\end{change2}
\begin{table}[h]
\begin{change2}
\center
\caption{Mean (m) and standard deviation (s) of the execution frequency for the classes of actions}
\begin{small}
\begin{tabular}{l | rr | rr | rr | rr | rr}
& \multicolumn{2}{c|}{\bf Menu} & \multicolumn{2}{c|}{\bf CRUD}& \multicolumn{2}{c|}{\bf Input}& \multicolumn{2}{c|}{\bf SaveOK}& \multicolumn{2}{c}{\bf SaveKO}\\
\bf Strategy & m&s& m&s& m&s& m&s& m&s\\\hline
RLS & 610 & 53 & 378& 60 & 87& 31 & 0& 0 & 7& 3\\
SSRLS-partitioning & 43& 13 & 484& 210 & 477& 38 & 0& 0 & 49& 22\\
SSRLS-fillForms & 580& 106 & 351& 103 & 55& 37 & 3& 4 & 36& 23\\
SSRLS & 66& 19 & 575& 170 & 251& 105 & 32& 10 & 116& 40 \\
\end{tabular}
\end{small}
\label{tab:interactions:stddev}
\end{change2}
\end{table}
Figure~\ref{fig:interactions}~(a) shows the limitations of the RLS strategy that we discussed in Section~\ref{sec:challenges}:
RLS test cases execute many irrelevant length~1 sequences of \emph{Menu} actions, and miss many relevant functional behaviors that depend on long interaction sequences.
RLS test cases execute an average of $610 + 378 + 87 + 7 = 1,082$ actions, more than half of which (610) exercise only irrelevant length~1 sequences of \emph{Menu} actions.
Figure~\ref{fig:interactions}~(a) also indicates that the longer the sequence of actions required to execute an action of a given type, the less frequently the RLS test suites execute actions of that type.
RLS test suites execute \emph{CRUD} and \emph{Input} actions, which (in $ERP$\xspace) depend on interaction sequences of length at least~2, a mean of 378 and 87 times, respectively, execute \emph{SaveKO} actions, which depend on interaction sequences of length at least~3, only 7 times, and never execute \emph{SaveOK} actions, which depend on sequences of length at least~8.
\begin{change2}
Table~\ref{tab:interactions:stddev} shows that the relative magnitude of the execution frequencies remains stable across the considered classes of actions, even though the data for each class are subject to some variance across the repetitions of the RLS experiment.
\end{change2}
The poor results of the RLS test suites depend on the GUI structure of the subject application, a structure that misleads the RLS strategy into executing many actions that seldom produce significant computations.
In fact, we observe that
\begin{inparaenum}[(i)]
\item RLS reinforcement learning assigns high reward values to interactions that cause a strong discontinuity in the GUI state,
\item selecting one of the main data entries (top-level menu buttons) reaches windows that strongly differ from the currently visualized ones, and
\item the main data entries (top-level menu buttons) are always executable in the GUI of the subject application (see for example Figure~\ref{fig:mockup}).
\end{inparaenum}
As a consequence, reinforcement learning always favors selecting the main data entries (top-level menu buttons) over other actions, thus wasting most of the testing budget in interactions with the top-level menu buttons, and missing most of the application logic.
\smallskip
Figures~\ref{fig:interactions}~(b) and~(c) show the individual contributions of the two SSRLS systematic components.
SSRLS-partitioning largely reduces the time that RLS wastes in interacting with top-level menus (SSRLS-partitioning visits the Menu state $43$ times compared to $610$ RLS visits) in favour of a high rate of \emph{SaveKO} actions (SSRLS-partitioning visits the SaveKO state $49$ times compared to $7$ RLS visits).
Although SSRLS-partitioning improves over RLS, SSRLS-partitioning test suites still miss many functionalities that require long interaction sequences: SSRLS-partitioning test suites execute \emph{SaveKO} actions that depend on interaction sequences up to length $3$, but not \emph{SaveOK} actions that depend on longer sequences.
SSRLS-fillForms also improves over RLS, but it is still quite ineffective in executing actions that require long interaction sequences.
In fact, SSRLS-fillForms test suites waste a lot of time interacting with menus (they visit the Menu state $580$ times) and rarely complete longer interaction sequences (they reach the \emph{SaveKO} and \emph{SaveOK} states $36$ and $3$ times, respectively). However, SSRLS-fillForms test suites improve over both RLS and SSRLS-partitioning with respect to executing \emph{SaveOK} actions, thanks to being specifically designed to interact with forms. The analysis of the 3 cases in which SSRLS-fillForms executed \emph{SaveOK} actions indicates that it
can successfully execute operations that require interaction sequences up to length $5$ in $ERP$\xspace.
Figure~\ref{fig:interactions}~(d) shows that, by combining the two optimization strategies, the test suite executes many relevant application actions with a significant number of valid and invalid cases:
SSRLS test cases reach the \emph{SaveKO} and \emph{SaveOK} states $116$ and $32$ times, respectively, thus showing a reasonable capability to complete long and useful interaction sequences.
The two strategies are synergistic: the SSRLS-partitioning strategy reduces the time wasted in executing action sequences with only menu actions, thus enabling the SSRLS-fillForms strategy to explore long and relevant interaction sequences.
\begin{change2}
Table~\ref{tab:interactions:stddev} further supports that SSRLS is the only strategy that can steadily complete interaction sequences that lead to executing \emph{SaveOK} actions.
We used the (paired, one tail) Wilcoxon test to evaluate the statistical significance of the alternative hypothesis that each of the strategies SSRLS-partitioning, SSRLS-fillForms and SSRLS improves the execution frequency of each action class with respect to RLS, based on the results of our experiments. Table~\ref{tab:interactions:wilcoxon} reports the $p$-values of each test, with reference to a considered action class (in the columns of the table) and strategy (in the rows). $p$-values lower than 0.05 (in bold in the table) indicate statistically significant improvements; a sketch of the computation is given after the table.
The table indicates that none of the strategies outperforms RLS with respect to the execution frequency of \emph{Menu} actions. This is expected and not particularly relevant, because the execution frequency of \emph{Menu} actions is reasonably high with any strategy, and interacting with menu items does not imply exercising relevant functionalities per se.
The relevant observation is the substantial improvement of the SSRLS strategy over RLS with respect to the actions of all other classes, and for \emph{CRUD} and \emph{SaveOK} actions when compared to SSRLS-partitioning and SSRLS-fillForms.
\end{change2}
\begin{table}[h]
\begin{change2}
\center
\caption{Wilcoxon (paired, one tail) test}
\begin{small}
\begin{tabular}{l | r | r | r | r | r}
Alternative HP that RLS $<$ \dots & \bf Menu & \bf CRUD& \bf Input& \bf SaveOK& \bf SaveKO\\\hline
\bf SSRLS-partitioning & 0.98 & 0.16 & \bf 0.03 & 1 & \bf 0.03\\
\bf SSRLS-fillForms &0.71& 0.82& 0.98& 0.05& 0.05\\
\bf SSRLS & 1& \bf 0.03& \bf 0.03& \bf 0.03& \bf 0.03\\
\end{tabular}
\\
\begin{center}
\emph{The Wilcoxon (paired, one tail) test indicates the statistically significant values in support of the alternative hypothesis that the strategies SSRLS-partitioning, SSRLS-fillForms and SSRLS improve on RLS for the classes of actions}
\end{center}
\end{small}
\label{tab:interactions:wilcoxon}
\end{change2}
\end{table}
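The $p$-values in Table~\ref{tab:interactions:wilcoxon} can be computed with standard statistical libraries. The following sketch shows the intended computation with \texttt{scipy}, using made-up per-repetition counts (the real data are those summarised in Table~\ref{tab:interactions:stddev}); note that with five repetitions the smallest attainable one-tailed exact $p$-value is $1/2^5 \approx 0.03$, which matches the strongest entries in the table.
\begin{verbatim}
from scipy.stats import wilcoxon

# Hypothetical per-repetition SaveKO counts (5 repetitions each)
rls_saveko   = [5, 9, 6, 10, 5]
ssrls_saveko = [80, 150, 100, 130, 120]

# Paired, one-tailed test of the alternative hypothesis RLS < SSRLS
res = wilcoxon(ssrls_saveko, rls_saveko, alternative="greater")
print(res.pvalue)  # 0.03125 = 1/2**5, the minimum with n = 5 pairs
\end{verbatim}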
\smallskip
The overall results show that SSRLS can execute not only operations that require short interaction sequences, such as deleting entities, but also operations that require long interaction sequences, such as filling in complex forms, thus significantly outperforming RLS.
\subsubsection*{RQ2: Does SSRLS contribute to satisfy the \emph{test objectives}? }
\label{sec:strategy:testplan:coverage}
We measure the thoroughness of SSRLS test suites as the percentage of test objectives that test cases satisfy over the test objectives in the test plan that the testers of our industrial partner provided to us.
The test objectives in the test plan represent the set of behaviors that are relevant to test according to the test engineers of the application.
Table~\ref{tab:testPlanCoverage} reports
\begin{inparaenum}[(i)]
\item
the functional areas of the application (column \emph{functional area}), where a functional area is a collection of functionalities designed to handle a same type of data entities,
\item the number of test objectives per functional area (column \emph{Test Objectives}),
\item the number and percentage of test objectives that SSRLS and RLS test suites executed (columns \emph{Satisfied w/ SSRLS} and \emph{Satisfied w/ RLS}), respectively.
\end{inparaenum}
The test engineers identified $350$ test objectives; the RLS and SSRLS test cases exercised 42\% and 72\% of them, respectively. This result is a clear indicator of the progress of SSRLS over RLS.
The higher effectiveness of the SSRLS strategy over the RLS strategy is confirmed in every functional area.
In our project we observed that, while test engineers were reluctant to consider the original version of $ABT$\xspace based on RLS to automatically generate test suites, because they considered its relatively low coverage of the test plan insufficient to reduce the overall cost of the testing tasks, they warmly welcomed $ABT_{2.0}$\xspace, which achieves reasonable functional adequacy figures thanks to SSRLS.
Overall, the results of the case study show that SSRLS is significantly more effective than RLS: it fosters the exploration of deeper regions of the execution space, and generates test cases that cover a significant portion of the functional logic of the application under test.
\begin{table}[h]
\center
\caption{Coverage of the test plan}
\begin{small}
\begin{tabular}{c | c | c | c |}
\bf Functional area & \bf Test objectives (\#) & \bf Satisfied w/ SSRLS (\#) & \bf Satisfied w/ RLS (\#)\\\hline
Projects & 73 & 51 (70\%) & 18 (25\%) \\
Orders & 119 & 82 (69\%) & 39 (33\%) \\
Invoices & 52 & 32 (62\%) & 29 (56\%)\\
Tickets & 21 & 20 (95\%) & 18 (90\%)\\
Modules & 10 & 9 (90\%) & 6 (67\%) \\
Offers & 75 & 57 (76\%) & 38 (51\%) \\\hline
Total & 350 & 251 (72\%) & 148 (42\%)\\
\end{tabular}
\end{small}
\label{tab:testPlanCoverage}
\end{table}
\section{Test Reports}
\label{sec:oracles}
In Section~\ref{sec:challenges} we discussed two important challenges related to the inspection of the output generated by the automatic tests: the need to support the semi-automatic validation of the test oracles specifically designed to detect relevant classes of failures in the target application domain, and the need to facilitate the evaluation of the test cases with respect to the test requirements of the applications under test. Solving these challenges is crucial to make $ABT$\xspace effective in industrial testing processes.
In this section, we present the new test reports that $ABT_{2.0}$\xspace can produce and we discuss their ability to assist test analysts in the validation of the test outcomes. We first exemplify the structure of a sample test report, and discuss the design and the structure of the test reports in detail, then we present early empirical results on the effectiveness of the test reports.
\subsection{Structure and Content of the Test Reports}
$ABT_{2.0}$\xspace generates the new test reports in tabular form for the sake of readability. The test reports indicate, for each generated test case, the actions executed on the GUI and the corresponding effect on both the GUI and the database.
For example, Figure~\ref{fig:report} shows a test report that refers to a test case generated with $ABT_{2.0}$\xspace for the application presented in Section~\ref{sec:challenges:subject}. Each row in the table represents an executed operation, with:
a sequential number that identifies the operation in the scope of the test case (column \emph{ID});
the actions executed on the GUI (column \emph{Actions});
the effect (i.e., the changes) produced by the operation on the GUI and the database (column \emph{Outputs}).
Each individual change reported in the output starts with a tag that indicates whether the item that has been changed is an item in the GUI (tag \emph{GUI}) or in the database (tag \emph{DB}).
In detail, the test report in Figure~\ref{fig:report} specifies the flow of test case \emph{T3}, which starts by selecting the graphical menu \emph{Invoices} (\emph{T3.1--Actions}), continues by clicking on the button \emph{New invoice} (\emph{T3.2--Actions}) that has appeared in the GUI as a result of the previous operation (\emph{T3.1--Outputs}), and then concludes by filling out and saving (\emph{T3.3--Actions}) the input form visualized at the previous step (\emph{T3.2--Outputs}), which in the end causes the insertion of a new record in the database table \emph{INVOICES} (\emph{T3.3--Outputs}).
\begin{figure}[h]
\center
\begin{scriptsize}
\newcommand{\listActions}[1]{\pbox{5.5cm}{\vspace{3pt} #1 \vspace{4pt}} }
\newcommand{\listOutputs}[1]{\pbox{7cm}{\vspace{3pt} #1 \vspace{4pt}} }
\begin{tabular}{|p{.6cm} | p{5.5cm} | p{7cm} |}\hline
\multicolumn{3}{|c|}{\bf Test report of test case T3}\\\hline
\bf ID & \bf Actions & \bf Outputs\\\hline
T3.1 & Select menu ''Invoices'' & \listOutputs{GUI: Window "Invoices" in foreground \\
GUI: Button "New Invoices" enabled\\
GUI: Grid with columns "ID", "Name", "Data", "Action" as 3 items\\
GUI: Button "View" enabled\\
GUI: Button "Edit" enabled\\
GUI: Button "Delete" enabled}\\\hline
T3.2 & Click button ''New Invoice'' &
\listOutputs{GUI: Window "Invoice" in foreground \\
GUI: Window "Resources" in background \\
GUI: Button "Save" enabled\\
GUI: Button "Close" enabled\\
GUI: Text field ''Invoice Number'' as $\langle$empty$\rangle$ enabled\\
GUI: Text field ''Invoice Name'' as $\langle$empty$\rangle$ enabled\\
GUI: List field "State" ("not Sent", "Sent", "Replied")\\
$~~~~$ as "Sent" enabled\\
GUI: Calendar ''Date" as 05/06/2015 enabled\\
GUI: Text field ''Client Data - Name" as $\langle$empty$\rangle$ enabled\\
GUI: Text field ''Client Data - Surname" as $\langle$empty$\rangle$ enabled\\
GUI: Text field ''Client Data - Email" as $\langle$empty$\rangle$\\
$~~~~$ enabled\\
GUI: Text field ''Client Data - Country" as $\langle$empty$\rangle$ enabled}\\\hline
T3.3 & \listActions{Fill field ''Invoice Number'' as "2015.2"\\
Fill field ''Invoice Name'' as "Payment"\\
Fill field ''Client Data - Name'' as "Paul"\\
Fill field ''Client Data - Surname'' as "Red"\\
Fill field ''Client Data - Email' as\\
$~~~~$ "[email protected]"\\
Fill field ''Client Data - Country'' as "Italy"\\
Click button ''Save'' } & DB: new record in Table INVOICES $\langle$NUMBER=2015.2, LABEL=Payment, NAME=Paul, SURNAME=Red, [email protected], COUNTRY=Italy$\rangle$ \\\hline
\end{tabular}
\end{scriptsize}
\caption{A test report generated by $ABT_{2.0}$\xspace for the case study application introduced in Section~\ref{sec:challenges}}
\label{fig:report}
\end{figure}
A test report like this one has four main qualities:
\begin{description}
\item[Intelligible:] The descriptions reported under the column \emph{Actions} consist of labels that represent high-level domain operations. In some cases, multiple GUI actions are grouped into the same logical action to favour conciseness and abstraction, and produce reports that testers can more easily interpret.
\item[Easy to Validate:] The descriptions reported under the column \emph{Outputs} are designed to include the items that testers usually want to check to detect failures, while balancing completeness and conciseness.
\item[Use of Domain Terminology:] Test reports favour the use of domain terminology over low-level technical terminology, to simplify the comprehension of the reports and the comparison of the test results with the test oracles.
\item[Efficient to Inspect:] The structure of the test report facilitates the matching between the test results and the corresponding test requirements.
\end{description}
To achieve these goals, $ABT_{2.0}$\xspace exploits domain specific information in several ways. To generate test reports that are concise and \emph{intelligible},
$ABT_{2.0}$\xspace groups consecutive GUI actions executed on input fields as part of a same logical operation, thus acknowledging the common characteristic of business oriented applications that rely on input forms to gather the inputs required for their operations.
For example, with reference to the test report in Figure~\ref{fig:report}, executing the operation \emph{T3.3}, which adds a new Invoice, requires first filling in six input fields and then clicking on the \emph{Save} button. The report collapses all these fine-grained and correlated actions into a single action of the test report.
To generate test reports that are \emph{easy to validate}, $ABT_{2.0}$\xspace documents the changes that occur in the GUI and in the database during the execution of the test cases. These events are currently detected and reported as specified in the column \emph{Outputs} of Table~\ref{tab:outputs}. Column \emph{Source} indicates the GUI widget or the database event that may originate an entry in the test report. Column \emph{Outputs} lists the information that is extracted from the element indicated in column \emph{Source} (each variable is assigned the value specified to the right of the $\leftarrow$ symbol). Column \emph{Format} specifies the entry that is added to the test report. Note that the entry is parametric, and some values are filled in with the information extracted at runtime during test case execution.
The extracted data generally consist of the values stored in the titles and the body of the most common types of widgets, as well as the changes produced by database operations. Our experience with industrial business oriented applications suggests that these output data are in most cases sufficient to validate business oriented applications against several classes of failures. The collected output data can of course be adapted to the specific needs of an application.
We remark that in general, as the data in the test reports are intentionally incomplete, validating some test oracles may require manual inspection of the results while replaying the generated test cases. For instance, to avoid the excessive bloating of the reports in the presence of execution states that may contain massive amounts of data, we decided to omit the content of data grids from the report and only include the titles of their columns. Oracles that check the content of the grid may require replaying a generated test case to manually inspect the grid.
To generate reports that \emph{use the domain terminology}, $ABT_{2.0}$\xspace describes the observed outputs with domain specific terms that facilitate the comprehension of the report, as shown by the formats reported in column \emph{Format} of Table~\ref{tab:outputs} and illustrated by the sketch that follows the table.
\begin{table}[h]
\center
\caption{Outputs in the test reports}
\begin{scriptsize}
\newcommand{\listOutputs}[1]{\pbox{5.5cm}{\vspace{3pt} #1 \vspace{4pt}} }
\begin{tabular}{l | l | p{5cm} |}
\bf Source & \bf Outputs & \bf Format \\\hline
Graphical menu & \listOutputs{$\langle M\rangle \leftarrow$ the menu title label\\
$\langle S\rangle \leftarrow$ the menu state \emph{enabled}/\emph{disabled}} & GUI: Menu $\langle M\rangle$ in state $\langle S\rangle$ \\\hline
Button & \listOutputs{$\langle B\rangle \leftarrow$ the button title label\\
$\langle S\rangle \leftarrow$ the button state \emph{enabled}/\emph{disabled}} & GUI: Button $\langle B\rangle$ in state $\langle S\rangle$\\\hline
Text field & \listOutputs{$\langle T\rangle \leftarrow$ the field title label\\
$\langle I\rangle \leftarrow$ the initial value, if any, or $\langle empty\rangle$\\
$\langle S\rangle \leftarrow$ the field state \emph{editable}/\emph{blocked}}
& GUI: Text field $\langle T\rangle$ as $\langle I\rangle$ in state $\langle S\rangle$\\\hline
List field & \listOutputs{$\langle L\rangle \leftarrow$ the field title label\\
$\{\langle V\rangle\} \leftarrow$ the possible values\\
$\langle I\rangle \leftarrow$ the initial value\\
$\langle S\rangle \leftarrow$ the field state \emph{selectable}/\emph{blocked}}
& GUI: List field $\langle L\rangle$ with values $\{\langle V\rangle\}$ as $\langle I\rangle$ in state $\langle S\rangle$\\\hline
Combo-box field & \listOutputs{$\langle C\rangle \leftarrow$ the field title label\\
$\{\langle V\rangle\} \leftarrow$ the markable values\\
$\{\langle I\rangle\} \leftarrow$ the initial values, if any, or $\langle empty\rangle$\\
$\langle S\rangle \leftarrow$ the field state \emph{selectable}/\emph{blocked}}
& GUI: Combo-box field $\langle C\rangle$ with values $\{\langle V\rangle\}$ marked at $\{\langle I\rangle\}$ in state $\langle S\rangle$\\\hline
Data grid & \listOutputs{ $\{\langle C\rangle\} \leftarrow$ the column title labels}
& GUI: Grid with columns $\{\langle C\rangle\}$\\\hline
Window & \listOutputs{ $\langle W\rangle \leftarrow$ the window title label\\
$\langle S\rangle \leftarrow$ window in \emph{foreground}/\emph{background}}
& GUI: Window $\langle W\rangle$ in $\langle S\rangle$\\\hline
Database insert & \listOutputs{ $\langle T\rangle \leftarrow$ the table name\\
$\langle R\rangle \leftarrow$ the inserted record}
& DB: new record in table $\langle T\rangle$ as $\langle R\rangle$\\\hline
Database delete & \listOutputs{ $\langle T\rangle \leftarrow$ the table name\\
$\langle R\rangle \leftarrow$ the deleted record}
& DB: deleted record in table $\langle T\rangle$ was $\langle R\rangle$\\\hline
Database update & \listOutputs{ $\langle T\rangle \leftarrow$ the table name\\
$\langle R'\rangle \leftarrow$ the record before the update event\\
$\langle R\rangle \leftarrow$ the updated record}
& DB: update in table $\langle T\rangle$ as $\langle R'\rangle \rightarrow \langle R\rangle$\\\hline
\end{tabular}
\end{scriptsize}
\label{tab:outputs}
\end{table}
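As a rough illustration of the formats in Table~\ref{tab:outputs}, the following sketch renders a few of the entries; the widget and event representations are hypothetical simplifications of the data that $ABT_{2.0}$\xspace extracts at runtime.
\begin{verbatim}
def format_output(source, data):
    """Render a tracked change as a test-report line, following
    the formats of the outputs table (a subset, for illustration)."""
    if source == "button":
        return f'GUI: Button {data["label"]} in state {data["state"]}'
    if source == "text_field":
        return (f'GUI: Text field {data["label"]} as {data["value"]} '
                f'in state {data["state"]}')
    if source == "db_insert":
        return f'DB: new record in table {data["table"]} as {data["record"]}'
    raise ValueError(f"untracked source: {source}")

print(format_output("button", {"label": '"Save"', "state": "enabled"}))
# GUI: Button "Save" in state enabled
\end{verbatim}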
Finally, to generate reports that are \emph{efficient to inspect}, $ABT_{2.0}$\xspace produces browsable test reports. The browsing capability consists of the generation of a list with all the unique operations executed by the test cases included in a given test report, organized according to the graphical menus of the application. Figure~\ref{fig:report:browser} shows the unique operation list produced by $ABT_{2.0}$\xspace for a test report generated for the sample application considered in this paper. It shows the number of test cases executed for each graphical menu, the number of occurrences of each operation in the test cases, and the individual tests that executed each operation.
The unique operation list facilitates the tester in the inspection of the testing activity that has been automatically performed on each area of the application.
The unique operation list in Figure~\ref{fig:report:browser} shows that $15$, $18$, $27$ and $21$ test cases have been executed on the \textit{Projects}, \textit{Offers}, \textit{Orders}, and \textit{Invoices} menus, respectively. The operations in a same menu have been covered with different frequencies. For instance, the operation \textit{Invoices.View} has been executed $12$ times, while the operation \textit{Invoices.Invoice} only twice. Finally, the unique operation list also indicates which operations of which test cases executed a specific menu action. In the example, \textit{Invoices.Save} has been executed $4$ times in the $21$ test cases that covered the \textit{Invoices} menu. The \textit{Invoices.Save} operation occurred as the third operation of test case \emph{T3} (shown in Figure~\ref{fig:report}), as the eighth operation of the test case \emph{T6}, and so forth. When clicking on the identifiers, testers are redirected to the corresponding item of the test report.
Since a test generator may generate many more test cases than the ones that could be manually inspected, the test reports tend to grow quickly. The unique operation list produced by $ABT_{2.0}$\xspace facilitates testers in the inspection of the result produced by relevant operations, for instance by selectively inspecting the outputs produced by the execution of a same operation in many different test cases. The unique operation list also provides a concise reference to quickly identify the functional logic missed by the test generator and optimize the allocation of additional testing effort.
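A possible construction of the unique operation list is sketched below. The sketch assumes that each report entry carries its test case identifier, its step number, the menu it belongs to, and an operation label; this reflects our description of the reports rather than the exact internal representation of $ABT_{2.0}$\xspace.
\begin{verbatim}
from collections import defaultdict

def unique_operation_list(report_entries):
    """Group report entries by (menu, operation) and record where
    each operation occurred, e.g. ("T3", 3) for step T3.3.

    `report_entries` is an iterable of tuples
    (test_id, step, menu, operation).
    """
    index = defaultdict(list)
    for test_id, step, menu, operation in report_entries:
        index[(menu, operation)].append((test_id, step))
    return index

entries = [("T3", 3, "Invoices", "Save"), ("T6", 8, "Invoices", "Save")]
idx = unique_operation_list(entries)
# idx[("Invoices", "Save")] == [("T3", 3), ("T6", 8)]
\end{verbatim}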
\begin{figure}[h]
\center
\begin{scriptsize}
\includegraphics[scale=.30]{figures/ReportBrowser}
\end{scriptsize}
\caption{The unique operation list of the test reports of $ABT_{2.0}$\xspace: excerpt out of a test report generated for the sample application of our case study}
\label{fig:report:browser}
\end{figure}
The implementation of the test report functionality required three main extensions to ABT: the ability to track GUI output data, the ability to track database changes, and the ability to generate browsable reports. We implemented the ability to track output data by simply adapting the GUI exploration component of $ABT_{2.0}$\xspace. We implemented the ability to track database changes by adding a monitoring layer that relies on the standard change tracking functionality available in database servers\footnote{The interested reader may refer to the change tracking functionality of Microsoft SQL Server documented at https://technet.microsoft.com/en-us/library/cc280462(v=sql.105).aspx [Last read February 2016].}. When change tracking is active, the database server records change data in additional tables of the database, and $ABT_{2.0}$\xspace simply retrieves data from these tables. We implemented the generation of the reports as the generation of Excel workbooks that mimic the format of the test plan that the test analysts are familiar with. The test report shows the data that refer to distinct graphical menus of the GUI on separate spreadsheets\footnote{The possibility of formatting the reports on a per graphical menu basis is enabled by the SSRLS GUI exploration strategy described in Section~\ref{sec:strategy} that fosters the explicit mapping between test cases and graphical menus.}.
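As a rough illustration of how the monitoring layer can retrieve change data, the following sketch queries the SQL Server change tracking tables through \texttt{pyodbc}; the connection string, the table name, and the primary key column are placeholders, and the actual queries of $ABT_{2.0}$\xspace may differ.
\begin{verbatim}
import pyodbc

def fetch_invoice_changes(conn_str, last_version):
    """Retrieve the rows of the INVOICES table changed since
    `last_version`, using SQL Server change tracking; parameter
    binding inside CHANGETABLE is assumed to be supported by
    the driver."""
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT CT.SYS_CHANGE_OPERATION, I.* "
            "FROM CHANGETABLE(CHANGES dbo.INVOICES, ?) AS CT "
            "LEFT JOIN dbo.INVOICES AS I ON I.ID = CT.ID",
            last_version)
        return cursor.fetchall()
\end{verbatim}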
\begin{change}
We designed the features for tracking GUI output data and database changes, and producing browsable reports, by taking advantage of the characteristics of the application domain.
Although designed for the specific domain, these features exploit common characteristics, and can be easily adapted to other environments and processes.
Independently of the generalisability of the overall approach, some components of the approach, such as grouping actions to foster intelligibility, can be easily reused in other contexts.
\end{change}
\subsection{Effectiveness of the Test Reports}
We experimented with the test reports of $ABT_{2.0}$\xspace while working with the industrial business oriented application made available by our project partner. In this section we discuss the empirical results on the effectiveness of the test reports.
In the light of the challenges stated in Section~\ref{sec:challenges}, we evaluate the effectiveness of the test reports by investigating the following research questions:
\begin{itemize}
\item RQ3: Do test reports \emph{suffice to verify} test oracles of business oriented applications?
\item RQ4: Do test reports facilitate \emph{scheduling additional testing activities}?
\end{itemize}
To answer RQ3, we compare the test oracles manually defined by the testers in the official test plan to the information tracked in the test reports produced by $ABT_{2.0}$\xspace, and we quantify the portion of oracles that can be verified uniquely based on the information in the reports.
To answer RQ4, we qualitatively discuss how the test reports that $ABT_{2.0}$\xspace generates support the definition of additional testing tasks to increase functional coverage.
\subsubsection{RQ3: Do test reports suffice to verify test oracles of business applications?}
\label{sec:reports:testplan:coverage}
We quantify the effectiveness of the test reports by measuring the number of oracles \begin{change2}that belong to the test plan and\end{change2} that can be checked using the information in the test reports. In particular, we measure the number of \emph{verifiable oracles}, that is the number of oracles defined in the test plan that can be at least potentially verified based on automatically generated test reports, and the number of \emph{verified oracles}, that is the number of oracles that can be verified based on the actual test reports of our experiments.
\begin{change2}As we discuss in Section~\ref{sec:challenges:subject}, the test plan classifies the test objectives by entity type. Figure~\ref{fig:test-objective-with-checks} shows an excerpt of the test plan that exemplifies a test objective for the \emph{Invoice} entities.
A test objective corresponds to a row in a spreadsheet with three fields (columns): the identifier of the test objective, the actions that the tester shall perform to satisfy the test objective, and the checks that the tester shall perform correspondingly. The checks represent the test oracles. In our experiments, we count each individual check as an oracle. With reference to the example in Figure~\ref{fig:test-objective-with-checks}, to satisfy the test objective \emph{8.3: Correctness of the form when adding new invoices}, the tester shall \emph{click on the button "New Invoice"}, and check that (i) \emph{the foreground window is "Invoice"} and (ii) \emph{the names of the fields are} as expected. Thus, we count two oracles for this test objective. In this case, these oracles are verifiable, since they can be crosschecked on the outputs of our test reports, such as the test report in Figure~\ref{fig:report}.
\end{change2}
\begin{figure}[h]
\center
\begin{scriptsize}
\newcommand{\listOutputs}[1]{\pbox{6cm}{\vspace{3pt} #1 \vspace{4pt}} }
\begin{change2}
\begin{tabular}{|p{3cm} | p{1.5cm} | p{6cm} |}\hline
\multicolumn{3}{|l|}{\bf Test plan of application $ERP$\xspace}\\\hline
\bf Test objective identifier & \bf Actions & \bf Checks\\\hline
\dots & \dots & \dots\\\hline
8.3: Correctness of the form when adding new invoices & Click button "New Invoice" & \listOutputs{
- The foreground tab is "Invoice"\\
- The name of the fields are as in Table 12 of the requirements}\\\hline
\dots & \dots & \dots\\
\end{tabular}
\end{change2}
\end{scriptsize}
\caption{Excerpt of the test plan of $ERP$\xspace: A sample test objective and corresponding checks (oracles)}
\label{fig:test-objective-with-checks}
\end{figure}
Table~\ref{tab:testOraclesCoverage} shows the \begin{change2}mean\end{change2}
results obtained for the different functional areas encompassed in the test plan. The table reports both the number and percentage of verified oracles. The data indicate that we have been able to verify the large majority of the test oracles, that is, a total of $310$ out of the $408$ (76\%) test oracles.
\begin{table}[h]
\center
\caption{Test oracle satisfaction rates}
\begin{small}
\begin{tabular}{c | c | r r|}
\bf Functional area & \bf Verifiable oracles (\#) & \multicolumn{2}{c|}{\bf Verified oracles (\#)} \\\hline
Projects & 81 & 65 & (80\%) \\
Orders & 132 &100 & (76\%) \\
Invoices & 56 & 41 & (73\%) \\
Tickets & 38 & 28 & (74\%) \\
Modules & 16 & 9 & (56\%) \\
Offers & 85 & 67 & (79\%) \\\hline
Total & 408 & 310 & (76\%) \\
\end{tabular}
\end{small}
\label{tab:testOraclesCoverage}
\end{table}
Investigating the missed oracles in further detail, we found that 13\% ($53$ out of $408$) of the oracles could not be verified because they map onto execution data that $ABT_{2.0}$\xspace does not currently track in the reports, that is, the content of data grids ($35$ oracles), graphical attributes, such as the color of GUI widgets ($6$ oracles), and database data not involved in change events ($12$ oracles). These oracles may indicate directions to extend the scope of the tracked data, though the extensions must be carefully evaluated with respect to the balance between thoroughness and conciseness, which we have discussed as an important aspect of the reports.
The remaining 11\% ($45$ out of $408$) of the oracles express test requirements that map onto execution data that do not explicitly belong to the GUI or to the database, such as verifying that an email has been sent or that some data have been written to a file. Although the test oracles that could not be checked directly using the test reports require additional testing effort to replay the test cases generated with $ABT_{2.0}$\xspace and manually check their results, we believe that this could be an acceptable cost if limited to a small percentage of the automatic tests, as in our case where only 24\% of the test oracles cannot be checked directly from the reports. Note that the number of test cases that must be replayed is often significantly smaller than the number of unchecked test oracles, because a single test case may be used to check multiple test oracles.
\subsubsection{RQ4: Do test reports facilitate scheduling additional testing activities?}
To answer this research question, we qualitatively evaluated the cost of pairing the results in the test reports with the corresponding test requirements in the test plan, so that additional activities can be suitably defined to cover the missed items. The procedure we experienced consists of the following steps: we scan the test plan sequentially and, for each test oracle defined in the plan, we exploit the browsing capability of the test reports to identify whether the operation referred to in the oracle has been executed by some test case generated with $ABT_{2.0}$\xspace. For each reached oracle, we further exploit the browsing capability to expand the list of test actions that executed the operation associated with the oracle, and use the first action in the list to access the output data that $ABT_{2.0}$\xspace tracked after the execution of the test action, and manually verify the oracle.
This procedure is similar to other methods proposed in the scientific literature to select representative samples out of the massive amount of test results that a test generator can produce, for example in contexts where testers need to inspect the generated test cases. For instance, a commonly used strategy is to retain only the test cases that cover additional branches in the code, based on the order in which the test generator computes them~\cite{Tillmann:pex:TAP:2008,Braione:TestGenerator:SwQuality:2014}. Similarly, our experimental procedure samples the test report by pairing the test results in the reports with the operations that map to test requirements in the test plan, and selecting the results that correspond to the first occurrence of each operation.
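The pairing step can be partly automated. The following sketch pairs each oracle in the test plan with the first report entry that executed the corresponding operation, reusing the unique operation index sketched in the previous section; the test-plan and report structures are simplified assumptions.
\begin{verbatim}
def sample_for_inspection(test_plan, operation_index):
    """For each oracle in the test plan, return the first report
    entry (test_id, step) that executed the associated operation,
    or None if no generated test case reached it.

    `test_plan` is a list of (oracle_id, menu, operation) triples;
    `operation_index` maps (menu, operation) to occurrence lists,
    as produced by unique_operation_list above.
    """
    schedule = {}
    for oracle_id, menu, operation in test_plan:
        occurrences = operation_index.get((menu, operation), [])
        schedule[oracle_id] = occurrences[0] if occurrences else None
    return schedule
\end{verbatim}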
The results reported in the paper (see Tables~\ref{tab:testPlanCoverage} and~\ref{tab:testOraclesCoverage}) indicate that, out of the test objectives and the test oracles defined in the test plan, the test cases and the test reports that $ABT_{2.0}$\xspace generates
successfully exercise more than 70\% of the test objectives, and allow for verifying more than 70\% of the oracles related to these test objectives, respectively.
By inspecting the test reports we have been able to schedule the manual creation of a set of test cases necessary to cover the test objectives that were not covered automatically, and a set of automatic test cases that need to be replayed to check the test oracles that could not be verified from the data in the test reports.
Overall, the test reports enabled the scheduling of new focused test activities with
significantly reduced effort. In particular, we reduced the test effort from
the \emph{manual design, execution and inspection} of a set of test cases to cover the $350$ test objectives in the test plan, and verify the corresponding test oracles,
to:
\begin{itemize}[noitemsep]
\item the \emph{manual design, execution and inspection} of a set of test cases to cover only $99$ test objectives, since the test cases that $ABT_{2.0}$\xspace yields automatically already exercise $251$ of the $350$ test objectives in the test plan (Table~\ref{tab:testPlanCoverage}),
\item the \emph{manual verification} of $98$ additional test oracles, by means of the test cases generated with $ABT_{2.0}$\xspace: the generated test cases hit test objectives that relate to $408$ oracles, and $310$ of these oracles can be directly verified in the test reports (Table~\ref{tab:testOraclesCoverage}), thus only $98$ oracles actually require replaying test cases,
\item the validation of $310$ test oracles by simply browsing the test reports that $ABT_{2.0}$\xspace produces automatically.
\end{itemize}
Based on these data, we claim that $ABT_{2.0}$\xspace has been able to significantly reduce the effort necessary to design and execute the test cases, and has also simplified the checking process for a large portion of the test oracles. We interpret these results as a positive indication of the possibility to optimize testing activities using a technique like $ABT_{2.0}$\xspace.
\section{Lessons learned}
\label{sec:results}
In this section, we report the main insights that we gained with the project activity described in this paper. We believe these insights can be helpful to drive future research effort in system testing and its application to industrial contexts. We describe our insights as a list of lessons learned.
\begin{itemize}
\item \textbf{Lesson Learned 1 - Automation is necessary but not sufficient}. Automation is highly appreciated in industry because it is a key factor to reduce development effort and costs. However, automation alone is insufficient to address the real needs of complex projects and large organizations. In particular, generating test cases that cover many of the functionalities in an application is useful only if testers can understand and interpret the testing activity that has been performed with respect to the test objectives. This is necessary to identify the functional areas that need to be further tested and checked manually (see the experience about test reports and oracles reported in Section~\ref{sec:oracles}). Merely generating tests and discovering failures is useful but of limited value in industrial contexts, where the adequate validation of all the functionalities of an application is the priority.
\item \textbf{Lesson Learned 2 - Domain-specific approaches might scale to industrial systems} Automatically testing an application without any information about its structure and semantics is extremely challenging, and outside the capability of current automatic system testing techniques. However, we can successfully exploit domain-specific characteristics to dramatically increase the effectiveness of testing tools and make them effective in specific domains. \begin{change}
For instance, we can use EventFlowSlicer~\cite{Saddler:EventFlowSlicer:ASE:2017} that exploits application-specific knowledge provided by testers for generating effective test cases, or Augusto~\cite{Mariani:Augusto:ICSE:2018} that exploits abstract domain knowledge to efficiently test some features of the target application.
\end{change}
In our experience, we found it useful to exploit the organization of the GUI in functional areas to make ABT more effective, up to the level of being useful in business oriented applications (see the extension to the GUI exploration strategy proposed in Section~\ref{sec:strategy}). In the future, domain specific solutions should receive greater attention from researchers and practitioners.
\item \textbf{Lesson Learned 3 - Manually-specified oracles can dramatically increase the effectiveness of automated test cases} Automatically generated test cases use implicit oracles to detect failures, that is, they can only detect crashes, uncaught exceptions and hangs\begin{change}~\cite{Barr:Oracle:TSE:2015}\end{change}. The lack of powerful oracles is a major limit to failure detection. Our experience showed that the failure detection capability of the generated tests can be dramatically improved by manually specifying a few automatic oracles that can be checked at runtime (see results for RQ3). \begin{change}Our experience confirms Mesbah, van Deursen, and Roest's results about
the importance of human-defined oracles to improve the effectiveness of automated testing~\cite{Mesbah:InvariantBased:TSE:2012}.\end{change}
Although this could sometimes be perceived as extra effort for test engineers, we noticed that once a tool is perceived as useful, for instance because it can automatically cover many test objectives that previously had to be covered manually, industry people are willing to invest their effort in the definition of program oracles that can increase the effectiveness of the synthesized test cases. Although our experience has been positive, identifying proper classes of system-level oracles that automatically generated test cases can exploit still deserves further research.
\item \textbf{Lesson Learned 4 - Cost effective definition of system-level automated oracles is still an open challenge}. While there are many languages and approaches to define unit-level oracles, such as program assertions and invariants, there is a lack of languages suitable for system-level oracles. Capture and replay tools usually support the concept of checkpoint~\cite{IBR:RFT:2015}, \begin{change} and some recent studies investigate the manual specification of automatic oracles~\cite{Mesbah:InvariantBased:TSE:2012,Memon:EventFlow:STVR:2007}, but there is no complete approach for conveniently and cost-effectively specifying system-level oracles\end{change}. In fact, specifying an oracle for a test case requires either the execution of a complex sequence of interactions with the system under test or the definition of complex expressions that refer to GUI widgets. Our experience confirmed that designing system-level oracles could be cumbersome. Defining more effective specification methods for the definition of system-level oracles is an open challenge for the future.
\item \textbf{Lesson Learned 5 - Inflexible outputs might be a barrier to tool adoption, regardless of effectiveness}. In our experience, we obtained the most from ABT once it had been integrated with the testing process of the organization ($ABT_{2.0}$\xspace). To enable the integration, it was of critical importance to produce browsable outputs that could be exploited to guide the definition of the test strategy and plan the test effort. Without this integration, ABT would not have been considered for adoption. To make system testing technology successful, it is thus important to produce solutions that can flexibly integrate with an organization's process, also in terms of their input/output behavior and reporting capabilities.
\end{itemize}
\section{Threats to Validity}
\label{sec:threats}
\begin{change2}
The main threats to the internal validity of the results reported in this paper concern the procedures to collect and analyze data. While the tool was experimented with in an industrial context by professional developers and the data are the direct consequence of such activities, the authors of the paper collected and analyzed the data. To mitigate the possibility of introducing any bias in this process, we defined the procedures and the analyses as objectively as possible, severely limiting subjective judgment. The paper describes the processes to allow third parties to replicate them for different tools and subject applications.
Due to constraints imposed by the industrial context, it was impossible to replicate each experiment more than 5 times per configuration. Although a higher number of repetitions may generate results with higher stability, we observed an already good level of stability of the results across executions, so this limitation is not likely to affect the main conclusions of our study.
The nature of our study introduces some straightforward threats to the external validity of the results. Our goal was to investigate the adoption of $ABT$\xspace in an industrial scenario, and the experience reported in the paper is specific to the considered context: the company, the test case generation tools, and the software products. Still, our experience generated interesting insights that, combined with similar studies, can build useful knowledge about the challenges that must be faced when a GUI system testing technology is used in industrial projects.
\end{change2} \section{Related Work}
\label{sec:related}
System test case generation techniques can automatically generate test cases that stimulate an application under test using its GUI. These approaches can address a range of platforms and GUI technologies, including desktop, Web, and mobile applications.
The test case generation process can be guided by different strategies. Several techniques are model-based, that is, they first generate a state-based model of the GUI of the system under test and then generate test cases that cover the model according to a criterion. The model is usually extracted by ripping the GUI of the application~\cite{Yuan:StateFeedback:TSE:2010,Xun:GIT:IEEEtrans:2011}, and the test cases can be generated according to different criteria, such as covering sequences of events~\cite{Memon:GUIRsmoke:IEEEtrans:2005}, sequences of interacting events~\cite{Xun:GIT:IEEEtrans:2011}, or data-flow relations among event handlers~\cite{arlt:GAZOO:ISSRE:2012}.
Other techniques use search-based solutions to generate test cases that are optimal according to a given goal, for example satisfying a code coverage criterion~\cite{Gross:Exsyst:ISSTA:2012}. Other approaches simply generate test cases randomly or by combining random choices with strategies that influence randomness~\cite{Mariani:GUI:STVR:2014,Bertolini:GuiTestingEvaluation:ICST:2010}. Our research prototype, $ABT$\xspace, uses Q-learning to steer the testing activity, which would be random otherwise, towards the most interesting areas of the application. In this paper, we have discussed our experience with introducing ABT
in the software process of a medium-sized company that develops software for third parties. We selected $ABT$\xspace because we wanted to avoid model-based solutions, which can generate many infeasible test cases when the functionalities under test require long interaction sequences to be executed~\cite{Bae:Comparison:JSSS:2014}, like several functionalities in our commercial system. Moreover, among the techniques that do not generate tests by covering a model, our past results suggested that ABT was an effective solution~\cite{Mariani:GUI:STVR:2014,Mariani:Autoblacktest:ICST:2012}.
\smallskip
So far, only a few studies have considered commercial applications and the integration of automatic GUI testing solutions with the development process of a professional organization. A study that considers commercial software is the one by Bertolini et al.~\cite{Bertolini:GuiTestingEvaluation:ICST:2010}, where the fault-revealing effectiveness of several GUI testing techniques is compared using Motorola smartphones as subject applications. Wang et al. also empirically investigate the effectiveness of several Android testing techniques with a number of commercial apps~\cite{Wang:CommercialApp:ASE:2018}. These studies focus on the comparison among techniques and do not consider the issue of introducing these techniques into the production process of an organization. In contrast, our experience reports challenges and insights about the industrial exploitation of a GUI testing solution. Although our observations cannot be generalized to every organization and every application, and data from many other similar experiences are necessary to better understand the difficulties of introducing automatic GUI testing in industry, the experience reported in this paper represents a first step towards understanding the relationship between industry and automatic GUI testing.
\smallskip
In our experience we faced both challenges that are well-known to the scientific community and challenges that gained little attention so far. In particular, we faced the problem of dealing with the explosion of the execution space, which is a problem present in almost every non-trivial application, but is exacerbated by the size and structure of commercial applications. We found that exploiting explicit information about the GUI and the structure of the application under test might improve the scalability of existing approaches, as reported also by Saddler and Cohen~\cite{Saddler:EventFlowSllider:ATEST:2016}.
We also faced the oracle problem~\cite{Barr:Oracle:TSE:2015}, which is a hot research topic. While research is mostly focusing on automatically generating program oracles~\cite{Carzaniga:CrossCheck:ICSE:2014}, in our experience we realized that manually specifying the oracles might be cost-effective, but we also realized the lack of languages and approaches to cost-effectively specify them.
Finally, the effective integration with the development process requires tools that can both produce proper outputs and suitably document the performed activity. \begin{change2}Prior work on automatically documenting test cases has focused on the generation of code-level documentation for unit test cases~\cite{panichella:summaries:icse:2016,li:documenting:icst:2016}. In this work, we explored \end{change2}the generation of reports that can be easily interpreted by testers in terms of functional requirements that are/are not adequately tested. \begin{change2}The solution that we defined is deemed relatively cost-effective in the specific context of our experimentation. Indeed, designing automated approaches that further improve efficiency and efficacy\end{change2} is an open challenge that deserves great attention in the future.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we describe our experience in introducing a leading-edge research technology for automatically generating system tests in a business oriented organisation, by discussing the introduction of $ABT$\xspace for automatically testing a business application. As a result of our experience, we identify several open challenges that must be faced to effectively address large industrial applications. The most relevant ones are
\begin{inparaenum}[(i)]
\item scalability, that is automatic system testing techniques must scale to large applications composed of many windows and functionalities that require complex input data to be executed,
\item reporting, that is automatic system testing techniques must generate test reports that can be easily interpreted according to the test requirements of the application, and
\item oracles, that is automatic system testing must be able to detect wrong outputs in addition to crashes.
\end{inparaenum}
Our experience indicates that it is possible to tailor effective system testing solutions to the characteristics of the application under test.
In our industrial context, we exploited information about the structure of both the GUI and the test plan to extend $ABT$\xspace to cope with the identified challenges, leading to the development of $ABT_{2.0}$\xspace. Our results show that $ABT_{2.0}$\xspace can reduce the effort necessary to test the system, by overcoming the main limitations that we faced with the original version of $ABT$\xspace.
\begin{change}We illustrate the elements that led us to identify and implement the improvements needed for introducing the approach in production, and discuss the open challenges towards routinely automating the generation of system-level test suites. \end{change}
\begin{change}
An exploratory case study is a preliminary step toward general solutions, which serves to generate research hypotheses for specific and focused causal research.
Thus, we do not claim that our current results directly generalise to all contexts.
Nonetheless, the experience that we report in this paper provides important empirical evidence about effective automation of system testing in industrial settings.
The industrial study that we discuss in the paper indicates that oracles, efficient exploration of the interaction space, and generation of useful reports are critical enabling factors for transferring leading-edge approaches for automatic test case generation into the production line.
The solutions that we developed to address these issues pave the way towards architecting
industrial-strength system test case generation approaches.
\end{change}
\begin{change}
Currently our industrial partner can autonomously run $ABT_{2.0}$\xspace on the ERP application considered in the study, even though they can hardly address new applications without our support, due to the difficulty in setting up new application-specific configurations and deploying the tool in the context of new software projects.
Based on the results of the experience that we discuss in this paper,
we are now working with an industrial partner to productise $ABT_{2.0}$\xspace, which is still in a prototype stage and needs to be properly embedded in a commercially usable solution.
\end{change}
\begin{change}
Our current research agenda aims to study new solutions towards tailorable system testing approaches that can be easily adapted to the characteristics of specific classes of applications.
We are collaborating with new industrial partners to collect additional evidence of the effectiveness of the approach discussed in this paper with new business oriented applications.
We aim to investigate the practicality of the tool and the actionability of the generated outputs for different scenarios, aiming to assess the general validity of the results reported in this paper.
\end{change}
We are also actively conducting research on developing effective automatic test generation approaches that address scalability, reporting and oracles.
\section*{Acknowledgements}
This work has been partially supported by the H2020 ERC Proof of Concept project AST (grant agreement n. 824939) and by the SISMA national research project (MIUR, PRIN 2017, Contract 201752ENYB).
\bibliographystyle{wileyj}
Whistler waves are ubiquitous in the space environment and have been observed in the magnetotail
\cite{VKV14}, ionosphere \cite{MS16}, solar wind \cite{GSP94},
other planets \cite{HMH95,OR95} and numerous laboratory experiments \cite{Stenzel99}.
Some of the earliest observations of whistler waves were correlated with lightning strikes in which
the whistler wave is guided by an ionospheric duct \cite{Smith1961}. An important
aspect of whistler waves is that they are known to cause pitch angle scattering of highly
energetic electrons, for example, in Earth's radiation belt \cite{MAM16}. Furthermore,
EM whistler waves weakly decay from their source region and
travel great distances along the background magnetic field. The group velocity
(and hence the energy) of the EM whistler travels in a cone with a peak angle of $19.5^o$ w.r.t.
Earth's magnetic field, which is known as the shadow boundary and is determined by the long
wavelength inflection point in the dispersion relation \cite{FKS87}.
Therefore, as whistler waves propagate great distances along
Earth's magnetic field, they carry with them energy that pitch angle scatters highly
energetic particles, causing these particles to violate the frozen-in condition.
One method for generating whistler waves in a cold, magnetized plasma is
with a magnetic or electric loop antenna driven in the frequency range
$\omega_{LH} < \omega_{VLF} \ll \omega_{ce}$ \cite{FG71,WB72}, which we call a VLF antenna;
here $\omega_{ce}$ is the electron cyclotron frequency and $\omega_{LH}$ is the lower hybrid frequency.
In a magnetic loop antenna \cite{Karpman86} the charge density in the antenna ($\rho_{ant}$)
equals 0 and the current density varies with time at frequency $\omega$.
In an electric loop antenna $\rho_{ant} \not= 0$ and the charge density varies
spatially with frequency $\omega$. It has been shown \cite{Karpman86} that the two are
equivalent and both result in singularities in the electric field within a
cone of angle $\theta_c$ measured off the magnetic field direction,
though the electric field singularity is stronger in the electric loop antenna.
In a plasma with no dissipation, the resonance cones form at an angle given by
\begin{equation}\label{cone_angle}
sin^2\left(\theta_c\right) = \frac{\omega^2_{VLF}\left(\omega^2_{pe}+\omega^2_{ce}-\omega^2_{VLF}\right)}
{\omega^2_{pe}\omega^2_{ce}}
\end{equation}
In a plasma with dissipation, the singularities become finite within the angle $\theta_c$
in a spatially localized resonance cone. As noted in previous work, much of
the source power due to a VLF antenna is radiated as electrostatic Lower Oblique
Resonance (LOR) modes [also referred to as quasi-electrostatic whistler waves]
which decay as $R^{-1}$ ($R$ is the distance from the antenna)
away from the source antenna, whereas the EM whistler wave decays as $R^{-1/2}$ \cite{FKS87}.
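As an illustrative numerical check (a minimal sketch, assuming SI constants and the plasma parameters quoted later in this Letter: $n_e = 10^5$ cm$^{-3}$, $B_0 = 0.30$ Gauss, hydrogen ions, and $\omega_{VLF} = 1.31\times 10^6$ rad/s), Equation \ref{cone_angle} can be evaluated directly:
\begin{verbatim}
import numpy as np

# SI constants
e, me, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
mi = 1836.0 * me                      # hydrogen

# plasma parameters quoted in the text (assumed here)
ne = 1.0e11                           # m^-3  (10^5 cm^-3)
B0 = 0.30e-4                          # T     (0.30 Gauss)
w_vlf = 1.31e6                        # rad/s

w_pe = np.sqrt(ne * e**2 / (eps0 * me))   # electron plasma frequency
w_ce = e * B0 / me                        # electron cyclotron frequency
w_lh = np.sqrt(w_ce**2 * me / mi / (1.0 + w_ce**2 / w_pe**2))

sin2 = w_vlf**2 * (w_pe**2 + w_ce**2 - w_vlf**2) / (w_pe**2 * w_ce**2)
theta_c = np.degrees(np.arcsin(np.sqrt(sin2)))

print(w_vlf / w_lh)   # ~11, matching the stated omega_VLF / omega_LH
print(theta_c)        # ~15 degrees
\end{verbatim}
The resulting cone angle of $\sim 15^o$ is consistent with the resonance cone structures discussed below.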
Considerable experimental work has shown that the loop antennas driven
within the frequency range $\omega_{LH} < \omega \ll \omega_{ce}$
form LOR waves as expected \cite{US14}. However, in these efforts,
it is not clear how much of the power is radiated as EM whistler waves compared
with the LOR modes.
One method that has been proposed to increase the wave power in the EM
whistler wave is through a parametric interaction between LOR modes and a low
frequency density perturbation generated by a dipole antenna which excites
ion sound waves \cite{FKS87,SSA93}.
In this paper, we generate a low frequency density perturbation with a loop
antenna and excite ELF waves instead of ion sound waves. We call the low frequency
loop antenna an ELF antenna which is driven at a frequency $\omega_{ELF} < \omega_{LH}$,
and show below that the ELF antenna drives a fast magnetosonic wave which
causes the low frequency density perturbation. We call an antenna consisting of a
combined ELF and VLF antenna
(occupying the same volume but driven at two
different frequencies) a parametric antenna.
We demonstrate in this paper an increase in the EM whistler wave power in
the parametric antenna simulation compared with the simulation of a VLF
antenna alone and attribute
this increase in wave power to a parametric interaction
between the LOR modes and a low frequency density perturbation.
We find in the parametric antenna that
whistler waves are excited on combination frequencies $\omega_{VLF} \pm \omega_{ELF}$,
as expected from theoretical work.
We now describe the 3D simulation set up for three different antennas immersed in a magnetized plasma.
We have performed three different fully kinetic simulations which we call Run 1, Run 2, and Run 3.
Run 1 contains the ELF antenna, Run 2 contains the VLF antenna, and Run 3 contains the parametric
antenna. All three antennas are identical except the frequency at which they are driven.
Besides the different antennas, all three runs are identical.
We present results that demonstrate the formation of resonance cones at angles consistent with
theory and the non-linear excitation of EM whistler waves in the parametric antenna simulation.
Furthermore, we will compare wave spectra with linear theory to demonstrate that the wave structures
that form in the three separate simulations are consistent with theory.
The simulation domain is established in a Cartesian volume such that -600 m $<$ $x,y$ $<$ 600 m and
-750 m $<$ $z$ $<$ 750 m, where $x$ and $y$ are perpendicular to the external magnetic field and z is parallel.
The number of cells used is $n_x=n_y$ = 600 and $n_z$ = 750 so that the grid size in all dimensions is 2 m.
The spatial grid size was chosen based on trial 2D simulations in which we varied
the grid size and compared the evolved field structures. In these different runs, we set the grid size
to 0.50 m, 1 m, 2 m and 4 m. We find good agreement with linear theory
in the different 2D runs up to a grid size of 2 m.
Therefore, for the 3D runs, we chose to use the 2 m grid size due to computational constraints.
For these simulations, we use 8 plasma particles per cell (4 electrons, 4 ions), for a total
of $\sim 2.2 \times 10^9$ particles. The mass ratio of ions to electrons is 1836:1, so that these simulations
are assuming the ion species is hydrogen. We use an implicit, energy conserving algorithm to
provide the Lorentz force particle push and also to solve for the self-consistent EM fields
\cite{WRC04,WRC06}. The time step (dt) used is 3 times the CFL limited time step, which equates to
a value of $dt \approx 11$ ns. This time step was chosen by varying it from 1 to 5 times the
CFL limited time step in otherwise equivalent 2D simulations.
Above 3 times the CFL limited time step, noticeable differences were observed
in the evolved field structures.
The plasma parameters for the simulation results presented in this paper are the following:
The background electron and ion density is $10^5$ cm$^{-3}$. The background magnetic field points
in the z-direction and is 0.30 Gauss. We assume hydrogen ions. The temperature of the plasma is
set to 0 which reduces the numerical noise in the simulation. We have performed simulations
with a finite temperature and achieve similar results to the ones presented here. Furthermore,
by setting the plasma temperature to 0, we do not need to replace particles that can leave the
simulation through the outlet boundaries (discussed in the next paragraph). We have set
$\omega_{VLF}$ = 1.31$\times 10^6$ rad/s $\approx$ 11$\omega_{LH}$ and $\omega_{ELF}$ = 1.04$\times10^5$ rad/s
$\approx 0.88\omega_{LH}$.
The simulation uses outlet boundaries \cite{BL05} which attempt to match the outgoing plasma
waves in the simulation domain with a virtual wave that forms outside the simulation domain.
This matching condition allows the wave to propagate out of the simulation domain and minimizes
reflections back into the simulation domain. We construct a “loop” antenna in the center of the
simulation domain using a “volume” model in LSP. This volume model generates a uniform dipole
current with a set frequency within a fixed volume of space and therefore, via Maxwell's equation,
generates a dipole electric field. However, the volume model only allows
us to define a dipole electric field in a solid region. To construct a dipole loop, we glue four
solid volumes together with each volume composing a side of a cube loop (this is analogous to
a square donut). At the four corners, we superimpose conducting volumes, which we find better reproduces
the Lower Oblique Resonance cones. The antenna is placed in the center of the simulation domain
such that the normal vector to the loop antenna lies in the $z$-direction (which is also the direction of
the external magnetic field). Therefore, the plane of the loop lies in the $x-y$ plane. The square antenna has an
inner width (which represents the hollow portion) of 700 cm and an outer width of 1100 cm so that the thickness
of the antenna is 400 cm. In the $z$-direction, the thickness is 1100 cm.
In Figure \ref{LOR_Cones} we show a 2D slice of the LOR cones. The angle that these structures form is consistent
with theory \cite{FG71} and is found to be $\sim 15^o$ for the plasma parameters that we have used.
We varied $\omega_{VLF}$ in 2D simulations (in the x-z plane) and we find the resonance cones form at a
smaller angle when $\omega_{VLF}$ decreases, consistent with Equation \ref{cone_angle}.
Therefore, it is clear that
we can reproduce the resonance cones discussed in previous experimental and theoretical results.
However, because these waves
have a large electrostatic component, the $\vec{J} \cdot \vec{E}$ power generated
does not propagate far from the antenna. The goal of
this paper is to demonstrate, using a fully kinetic PIC model that we can parametrically couple
the electrostatic LOR waves with the electromagnetic magnetosonic waves and pump additional
power into the electromagnetic whistler waves.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{resonance_cone_plot_small1.eps}
\end{center}
\caption{Magnitude of the electric field early in the parametric antenna simulation
showing the formation of the LOR cones. The black square represents the antenna. Note that only a
portion of the simulation domain is shown in this figure. The magnetic field is in the z-direction.}
\label{LOR_Cones}
\end{figure}
Prior research \cite{FKS87,SSA93} has discussed in greater depth the dispersion curve for the
EM whistler and ES Lower Oblique modes. As discussed in these papers, for values of
$k_{\perp} \ll \omega_{pe}/c$ ($c$ is the speed of light and $k_{\perp}$ is the wave vector
perpendicular to the external magnetic field), the wave is electromagnetic and for
$k_{\perp} \gg \frac{\omega_{pe}}{c}$ the wave is electrostatic.
Therefore, the LOR waves have values of $k_{\perp} \gg \omega_{pe}/c$.
The strict upper limit on the value of $k_{\perp}$ above which the wave
is quasi-electrostatic is known as the shadow boundary which
is the long wavelength inflection point of the refractive index surface.
However, this corresponds to a value of $k_{\perp} \approx 0.1 \omega_{pe}/c$,
and therefore the purely EM whistler mode resides in the range
$0 < k_{\perp} \lesssim 0.1 \omega_{pe}/c$, which requires exceedingly large
computational domains to resolve.
Furthermore, according to Equations (4) and (5) from \textit{Fiala et al.} \cite{FKS87},
the ES portion of the dispersion curve is linear in $k_z$. We find that the linear
portion of the dispersion curve starts near $k_{\perp} = \omega_{pe}/c$.
Therefore, in this paper, we define waves with $k_{\perp} < \omega_{pe}/c$ to be
EM and $k_{\perp} > \omega_{pe}/c$ to be ES, which is obviously an approximation,
but a necessary one due to our limited computing resources.
To calculate the EM wave power (measured in Watts), we have developed a
“k-space filter” which allows us to calculate the EM contribution to the
electric fields ($\vec{E}$) and current density ($\vec{J}$).
Essentially, we calculate the Fourier Transform (FT) of the fields such that
$\textrm{FT}\left\{\vec{E}(x,y,z)\right\}=\mathcal{\vec{E}}(k_x,k_y,k_z)$ and
$\textrm{FT}\left\{\vec{J}(x,y,z)\right\}=\mathcal{\vec{J}}(k_x,k_y,k_z)$. Next
we set $k^2_{\perp}=k^2_x+k^2_y$ and invoke a filter in $k$-space according to the following
equations:
\begin{align} \mathcal{\vec{E}}_{EM}(\vec{k}) =
\begin{cases}
\mathcal{\vec{E}}(\vec{k}) & \text{if $k_{\perp} < \frac{\omega_{pe}}{c}$} \\
0 & \text{if $k_{\perp} > \frac{\omega_{pe}}{c}$}
\end{cases}
\label{filtera}
\end{align}
\begin{align} \mathcal{\vec{E}}_{ES}(\vec{k}) =
\begin{cases}
\mathcal{\vec{E}}(\vec{k}) & \text{if $k_{\perp} > \frac{\omega_{pe}}{c}$} \\
0 & \text{if $k_{\perp} < \frac{\omega_{pe}}{c}$}
\end{cases}
\label{filterb}
\end{align}
where the subscript EM/ES denotes the electromagnetic or electrostatic portion of the field.
The same filter is applied to the self-consistent current density, $\mathcal{\vec{J}}$. Note
that in Equations \ref{filtera} and \ref{filterb}, all values of $k_{\|}$ are included in the EM and ES portions of the
E- and J-fields. We next compute the power due to the EM portion of the fields, which we demonstrate below is
mainly due to the EM whistler wave, based on the good agreement between the whistler dispersion and the Fourier
spectrum of the electric field.
We use the following equation to compute the power:
\begin{equation}
P_{EM}=\frac{1}{2}\int\mathcal{\vec{E}}_{EM}\cdot\mathcal{\vec{J}}^*_{EM}+\mathcal{\vec{E}}^*_{EM}\cdot\mathcal{\vec{J}}_{EM}d^3k
\label{EM_power}
\end{equation}
where the $*$ indicates the complex conjugate. In order to compare Equation \ref{EM_power} with the theoretical output of the antenna,
we have also computed the $\vec{J}\cdot\vec{E}$ power from the antenna in the following way:
\begin{equation}
P_{Ant}=\frac{1}{2}\int\mathcal{\vec{E}}_{EM}\cdot\mathcal{\vec{J}}^*_{Ant}+\mathcal{\vec{E}}^*_{EM}\cdot\mathcal{\vec{J}}_{Ant}d^3k
\label{Ant_power}
\end{equation}
where $\mathcal{\vec{J}}_{Ant}$ is the FT of the current density in the antenna.
Of course, Equations \ref{EM_power} and \ref{Ant_power} are nearly identical. Essentially,
in using Equation \ref{EM_power}, we are only considering the power which is non-linearly pumped
into the EM fields by the plasma currents which act as a large antenna driven by the parametric
instability.
The lower bound on the integrals in Equations \ref{EM_power} and \ref{Ant_power} is limited by the
perpendicular box size, which is 1200 m in
our simulation. Therefore, the lower bound is $\sim 0.08 \omega_{pe}/c$ and the upper bound is $\omega_{pe}/c$
due to Equation \ref{filtera}.
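A minimal NumPy sketch of this $k$-space filter and the discretized power diagnostic is given below (array names, grid spacing, and FFT normalization are illustrative assumptions, not the actual post-processing code used on the LSP output):
\begin{verbatim}
import numpy as np

def split_em_es(F, dx, kc):
    # Apply the k_perp filter of Equations (2)-(3) to one real-space
    # field component F(x, y, z); kc = omega_pe / c.
    nx, ny, nz = F.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    kperp = np.sqrt(kx[:, None, None]**2 + ky[None, :, None]**2)
    Fk = np.fft.fftn(F)
    Fk_em = np.where(kperp < kc, Fk, 0.0)
    return Fk_em, Fk - Fk_em          # EM part, ES part

def em_power(Ek_em, Jk_em, d3k):
    # Discretized Equation (4): P_EM = Re sum_k E_EM . J_EM^* d^3k,
    # summed over the three vector components.
    P = 0.0
    for Ec, Jc in zip(Ek_em, Jk_em):
        P += np.real(np.sum(Ec * np.conj(Jc))) * d3k
    return P
\end{verbatim}
Here \texttt{Ek\_em} and \texttt{Jk\_em} would each hold the three filtered field components, and the identity $\tfrac{1}{2}(EJ^{*}+E^{*}J)=\mathrm{Re}(EJ^{*})$ has been used.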
We have performed two simulations with only ELF antennas, one driven at 1 Amp and the other driven at 3 Amps.
We have also performed a simulation with only a VLF antenna driven at 10 Amps. Finally, we performed two simulations
with a parametric antenna such that the
ELF/VLF currents are 1 Amp/10 Amps and 3 Amps/10 Amps. We find little difference between the 1 Amp/10 Amp parametric
antenna simulation and the 10 Amp VLF antenna simulation, indicating that there is little non-linear interaction
between the LOR modes and density perturbations driven by the ELF antenna in this case.
Therefore, all results
discussed in the remainder of this paper are for the 3 Amp/10 Amp parametric antenna simulation.
We expect the ELF antenna to drive Fast Magnetosonic
(FM) waves, whose dispersion relation is given by Equation (3) in \textit{Sagdeev et al.} \cite{SSS77}.
We show in Figure \ref{ELF_dispersion} the $k$-space spectra from the ELF simulation, with the white curve
representing the solution to the FM dispersion relation in \textit{Sagdeev et al.} \cite{SSS77}.
The good fit between the linear dispersion curve and the power spectra from the simulation
indicates that we are resolving the necessary wave numbers to drive the FM mode.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.75]{ELF_early_dispersion.eps}
\end{center}
\caption{The wave power calculated from $E_x$ through the x=0 plane from the 3 Amp ELF simulation at $\sim$ 0.20
ELF periods. The white curve represents the solution to the FM dispersion relation.}
\label{ELF_dispersion}
\end{figure}
For comparison, the results of the two calculations from Equations \ref{EM_power} and \ref{Ant_power}
are shown in Figure \ref{power}.
We saved the full 3D electric field, magnetic field, and current density every 200 time steps, which allows
us to resolve 2 data points per VLF period and 26 data points per ELF period. We ran the parametric simulation for
about 80000 time steps, which is about 15 ELF periods. The simulations were performed on massively parallel
computing clusters using several thousand processors and took about 15 days.
To compare the parametric run with the 10 Amp/3 Amp antenna,
we also ran a VLF simulation (driven at 10 Amps) and an ELF simulation (driven at 3 Amps) with identical
simulation domain sizes, grid sizes and time steps. This has allowed us to compare the linear and
non-linear evolution of the plasma. Because the calculation from the VLF and
ELF simulations show that the power output from the antenna level off more quickly than the parametric
antenna simulation, we ran
these two simulation for 50000 time steps. We note an oscillatory trend to all three data
sets, with a frequency of $\sim \omega_{ELF}$. However, given that the VLF simulation is
independent of the ELF frequency, we surmise that we are driving a fundamental FM
mode in the plasma nearly independent of the ELF driving frequency. We tested this idea
by performing another parametric simulation in which the ELF antenna is driven at $\sim 0.52\omega_{LH}$
(i.e., $\sim$60\% of the original frequency) and indeed find the same periodicity in the power calculation,
demonstrating that the fundamental frequency is independent of the driving frequency.
We have averaged over 1 ELF period to smooth the oscillation and demonstrate an increase
in average power in the parametric antenna. This is shown as the dashed curves in Figure
\ref{power}. The red curves in Figure \ref{power} represents the superposition of the linear power
generated by the ELF and VLF antennas run independently.
In comparing the power generated by the superposition of the ELF and VLF antennas and the parametric antenna,
we note a factor of 3 increase in power.
Not all the power generated by the antenna radiates away from the antenna. Some of the power
generates ES waves and some of it heats the plasma, neither of which are accounted for in
Equations \ref{EM_power} and \ref{Ant_power}. We have calculated the EM wave power from the VLF
simulation alone using Equation \ref{EM_power} and also show this calculation as the blue curve
in Figure \ref{power} and note a factor of 7 gain between the VLF and parametric antennas.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.5]{power_VLF_ELF_par_avg_long_V2.ps}
\end{center}
\caption{EM power calculation (blue and black curves) using Equation \ref{EM_power} and
the power generated by the antenna (red) using Equation \ref{Ant_power}. The dashed curve
represents the average over one ELF period.}
\label{power}
\end{figure}
To demonstrate that the increase in power observed in Figure \ref{power} is in fact due to EM whistler
waves,
we show wave spectra (FT of field components) in the $y-z$ plane from the 10 Amp VLF simulation,
and the 10 Amp/3 Amp parametric simulation. Based on previously developed theories \cite{FKS87,SSA93}, we infer
that the best explanation for the observed increase in the EM whistler wave power is due to a
parametric interaction between the FM wave and the LOR resonance waves. The whistler dispersion from
\textit{Fiala et al.} \cite{FKS87} is compared with wave power from the VLF and parametric simulations in
all four panels. Panels (a) and (b) represent the FT of $E_x$ in the y-z plane at x = 40 m and
Panels (c) and (d) represent the FT of $E_z$ in the y-z plane at x = 400 m.
The two left panels are from the VLF simulation and the two right panels are from the parametric simulation.
According to \cite{FKS87}, for $k_{\perp} \ll \omega_{pe}/c$,
the wave corresponds to the EM whistler wave and for $k_{\perp} \gg \omega_{pe}/c$, the wave corresponds to the LOR.
Therefore, if we are exciting the EM whistler in the parametric antenna simulation, then we expect
to observe smaller wave numbers excited. In comparing the VLF (left two panels) and parametric simulations
(right two panels)
we observe that both follow the whistler dispersion curve well. However, in the parametric simulation,
we also see that lower wave number modes have a greater wave power compared with the VLF antenna alone.
Close to the antenna at x=40 m [the antenna is placed close to the center of the simulation domain which
is at the coordinates (0,0,0)], we see large EM wave power in both the VLF and parametric antenna. Though
difficult to see from the color scale, the average wave power in the parametric antenna is $\sim$2 times greater
than the VLF antenna. However, it is very obvious that far from the antenna at x=400 m, the waves are dominated
by the EM whistler wave and that the parametric antenna wave power is much larger than the VLF wave power
($\sim$ 10 times greater).
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.4]{wavePOWER_Ez_Ex_VLF_par_46200_x=400_40.eps}
\end{center}
\caption{(a,b) Wave power from the VLF (panel a) and parametric (panel b) simulations computed from $E_x$
at x=40 m. (c,d) Wave power from $E_z$ at x=400 m from the VLF (panel c) and parametric (panel d)
simulations. All data are from time step 46200 ($\sim$ 9 ELF periods). }
\label{spectra}
\end{figure}
Finally, we compare the non-linear wave amplitudes excited in the parametric simulation with
the linear wave amplitudes. We define the linear fields as the superposition of the fields
from the separate ELF and VLF simulation, and the non-linear fields as the total field from
the parametric simulation minus the fields from the separate ELF and VLF simulations:
\begin{equation}
\begin{aligned}
E_L &= E_{VLF}+E_{ELF} \\
E_{NL} &= E_{par}-E_L
\end{aligned}
\label{linear_NL}
\end{equation}
where $E$ denotes the electric field. The same definition can be written for all the field quantities.
We show $E_x$ and $B_y$ for both the linear and non-linear fields in Figure \ref{fig:linear_NL}. Note that
only a portion of the simulation domain is plotted. Notice that in both cases there is
considerable wave activity along the external magnetic field, on the
same axis as the antenna, at the center of the figures.
However, in the NL fields, we notice off-axis waves that fill the simulation domain that are not
present in the linear fields. We attribute these off-axis waves to the EM whistler waves excited
by the parametric interaction between the LOR waves and the FM waves. This
interpretation is consistent with Figure \ref{spectra} which shows that far from the antenna there
is considerable EM whistler wave power in the parametric antenna compared with VLF antenna.
Figure \ref{fig:linear_NL}a demonstrates that significant non-linear wave activity occurs in the parametric antenna
simulation and Figure \ref{fig:linear_NL}aa shows that these waves have a strong magnetic
field component, consistent with the interpretation that these off-axis waves are EM.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.4]{non_linear_Ex_By.eps}
\end{center}
\caption{(a,aa) Non-linear electric and magnetic fields from the parametric antenna simulation
as defined in Equation \ref{linear_NL}. (b,bb) Linear electric and magnetic fields also defined in Equation \ref{linear_NL}.
All panels are plotted at time step 46200 ($\sim$ 9 ELF periods).}
\label{fig:linear_NL}
\end{figure}
In conclusion, we have shown that a parametric interaction between electrostatic
LOR modes excited by a VLF antenna and FM modes excited by an ELF antenna leads to the non-linear
excitation of electromagnetic whistler waves. While a VLF antenna alone also excites EM whistler
waves, the parametric antenna non-linearly pumps more power (measured in Watts) into the EM whistler
mode compared with a VLF antenna alone. We find evidence for the existence of EM whistler modes in
$k$-space and real space. Furthermore, we have also investigated the non-linear excitation
of wave modes in frequency space at $\omega_{VLF} \pm \omega_{ELF}$ at different spatial locations in the
simulation domain. We indeed find evidence that the combination frequency is excited, as expected
from theory \cite{FKS87}. This antenna could be used, for example, to generate EM whistler waves
from a remote antenna to pitch angle scatter high energy particles away from
satellites which may be subjected to such a harsh environment.
\section{Introduction}
Consider the Schr\"odinger equation, $i \partial_{t} u +\Delta u=0$, on $\mathbb{R}^{n+1}$, with initial datum
$u(\cdot, 0) = u_{0}$, and
Carleson's problem of identifying
the exponents $s > 0$ for which
\begin{equation}\label{CarlesonProblem}
\lim_{t\to 0} u(x , t) = u_{0}(x), \qquad \text{a.e.} \quad x \in \mathbb{R}^{n}, \qquad \forall \ u_0\in H^s.
\end{equation}
Here, $H^s$ denotes the inhomogeneous $L^2(\mathbb{R}^n)$--Sobolev space, defined via the Fourier transform as usual.
Carleson \cite{Carl} proved that \eqref{CarlesonProblem} holds as long as
$s \geq 1/4$ in the one-dimensional case, and Dahlberg and Kenig \cite{DahlKenig} showed that this condition is necessary in all dimensions, providing a complete solution for the one-dimensional case.
The higher dimensional problem has since been studied by many authors; see for example
\cite{Cow, Carb, Sjolin, Vega, Bou3, Bou4, MVV1, MVV2, TV, T, GS}. The best known positive result, that (\ref{CarlesonProblem}) holds if
$$s > \frac{1}{2}-\frac{1}{4n},$$ is due to Lee \cite{Lee} when $n=2$ and Bourgain~\cite{Bou2} when $n\ge 3$. Bourgain \cite{Bou2} also showed that $s\ge 1/2-1/n$ is necessary for \eqref{CarlesonProblem} to hold, improving the condition of Dahlberg and Kenig when $n\ge 5$.
Here we improve Bourgain's necessary condition and the condition of Dahlberg and Kenig when $n\ge 3$.
\begin{corollary}\label{ones}
Let $n\ge 3$ and suppose that \eqref{CarlesonProblem} holds. Then $s\ge\frac{1}{2}-\frac{1}{n+2}$.
\end{corollary}
When the initial data $u_{0}$ is a Schwartz function, we can write
\begin{equation}\label{rt}
u(x, t) = e^{i t \Delta}u_{0}(x):=\int_{\mathbb{R}^{n}}\widehat{u}_{0}(\xi)\, e^{2\pi ix\cdot\xi -4\pi^2it | \xi |^{2}} d \xi,
\end{equation}
where $\widehat{u}_0$ denotes the Fourier transform of $u_0$.
By the Niki\v sin--Stein maximal
principle \cite{N, st}, the almost everywhere
convergence \eqref{CarlesonProblem} implies a weak $L^2$-estimate for the
maximal operator, which in turn implies a strong estimate by interpolation with a trivial bound (see for example \cite[Proof of Lemma C.1]{BBCR}). Thus Corollary~\ref{ones} is a consequence of the following theorem.
\begin{theorem}\label{OurBouThm}
Let $n\ge 3$ and suppose that there is a constant $C_s$ such that
\begin{equation}\label{oto}
\left\| \sup_{0< t < 1} \big| e^{it\Delta} f \big| \right\|_{L^{2}(B(0,1))} \le C_s\| f \|_{H^{s}(\mathbb{R}^{n})}
\end{equation}
whenever $f$ is a Schwartz function. Then $s\ge\frac{n}{2(n+2)}$.
\end{theorem}
By the equivalence between local and global estimates \cite{R}, this also yields the following necessary condition for the global maximal estimate.
\begin{corollary}
Let $n\ge 3$ and suppose that there is a constant $C_s$ such that
\begin{equation*}\label{otoGlobal}
\left\| \sup_{0< t < 1} \big| e^{it\Delta} f \big| \right\|_{L^{2}(\mathbb{R}^n)} \le C_s\| f \|_{H^{s}(\mathbb{R}^{n})}
\end{equation*}
whenever $f$ is a Schwartz function. Then $s\ge\frac{n}{n+2}$.
\end{corollary}
The counterexample of Dahlberg and Kenig consists of a concentrated solution, or wave-packet, that travels over a large area, making the left-hand side of \eqref{oto} large. On the other hand, Bourgain considered a sum of data, with different velocities, carefully chosen to create regions of constructive interference, recalling Young's double slit experiment with many slits. Again the regions of coherence travel over a large area, making the left-hand side of the maximal inequality large.
In the light of Bourgain's example, a physical interpretation of Carleson's problem could be to identify the lowest frequency at which an initial state (or configuration of slits) can generate interference patterns, thus obscuring its original state.
Inspired by this, we take a variant of data, previously considered by Barcel\'o, Bennett, Carbery, Ruiz and Vilela \cite{BBCRV}, for which the corresponding solution interferes with itself periodically in time. The difficulty of using their example directly in this context is that the constructive interference reoccurs in the same relatively small regions of space. In order to take advantage of the periodic coherence, we perturb the initial state so that the whole solution travels in a single direction. We then use an ergodicity argument to show that this direction can be taken so that the regions of constructive interference never reappear in exactly the same places, forcing the left-hand side of \eqref{oto} to be large.
\section{The ergodic lemma}
We say that a set $E$ is $\delta$--dense in $F$ if for every point $x\in F$ there is a point $y\in E$ such that $|x-y|< \delta$.
The following lemma is optimal in the sense that the statement fails for larger~$\sigma$. To see this, we can place balls of radius $\varepsilon R^{-1}$ at the points of the set $E_\theta$ and assume that the balls are disjoint. Then the volume of such a set would be of the order $R^{1-(n+2)\sigma}$, a quantity that tends to zero as $R$ tends to infinity when $\sigma > \frac{1}{n+2}$.
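Spelling this out (a bookkeeping step, with $E_\theta$ as defined in the lemma below): the set $E_\theta$ consists of $\sim R^{1-2\sigma}$ time translates of $\sim R^{n(1-\sigma)}$ lattice points, so that
\begin{equation*}
\big|E_\theta + B(0,\varepsilon R^{-1})\big|\,\lesssim\, R^{1-2\sigma}\cdot R^{n(1-\sigma)}\cdot \big(\varepsilon R^{-1}\big)^{n}\,\simeq\, \varepsilon^{n} R^{1-(n+2)\sigma}.
\end{equation*}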
\begin{lemma}\label{Lemma:ToroImpr}
Let $n \geq 3$ and $0<\sigma <\frac{1}{n+2}$. Then for all $\varepsilon>0$, there exists $\theta \in \mathbb{S}^{n-1}$ such that
\begin{equation*}
E_\theta:= \bigcup_{t\in R^{2\sigma-1}\mathbb{Z}\cap(0,1)} \big\{x\in R^{\sigma-1} \mathbb{Z}^{n}\, :\, |x|< 2\big\}+t\theta
\end{equation*}
is $\varepsilon R^{-1}$--dense in $B(0,1/2)$ for all sufficiently large $R >1$.
\end{lemma}
\begin{proof}
By rescaling, the statement of the lemma is equivalent to showing that
\begin{equation}\nonumber
\bigcup_{t \in R^{\sigma} \mathbb{Z}\cap(0,R^{1-\sigma})} \big\{x\in \mathbb{Z}^{n}\, :\, |x|< 2R^{1-\sigma} \big\}+t\theta
\end{equation}
is $\varepsilon R^{-\sigma}$--dense in $B(0,R^{1-\sigma}/2)$ for a certain $\theta \in \mathbb{S}^{n-1}$.
That is to say, for any $x\in B(0,R^{1-\sigma}/2)$ there exists a $y_{x} \in \mathbb{Z}^{n}\cap B(0,2R^{1-\sigma})$ and
$t_{x}\in R^{\sigma} \mathbb{Z}\cap (0,R^{1-\sigma})$ such that
\begin{equation*}\label{EquivTorusImpr}
|x - (y_{x} + t_{x}\theta) |< \varepsilon R^{-\sigma},
\end{equation*}
for a certain $\theta \in \mathbb{S}^{n-1}$, independent of $x$.
By taking the quotient $\mathbb{R}^{n} / \mathbb{Z}^{n} = \mathbb{T}^{n}$, this would follow if for any $[x] \in \mathbb{T}^{n}$ there exists $t_{x} \in R^{\sigma} \mathbb{Z}\cap (0,R^{1-\sigma})$ such that
\begin{equation}\label{TorusRiductionImpr}
|[x] - [t_{x} \theta]| < \varepsilon R^{-\sigma}.
\end{equation}
To see this, assume (\ref{TorusRiductionImpr}) and cover
$B(0,R^{1-\sigma}/2)$ with a family of disjoint copies of axis-parallel $\mathbb{T}^{n}$. Denote
the copy that contains $x$ by $\mathbb{T}^{n}_{x}$,
and let $z_{x}$ be the point in $\mathbb{T}^{n}_{x}$ such that $[z_{x}] = [t_{x} \theta]$.
Then $y_{x}: = z_{x} - t_{x}\theta\in \mathbb{Z}^{n}$ and
by construction
\begin{equation}
|x- (y_{x}+t_{x} \theta)|
= |[x] - [t_{x}\theta]| < \varepsilon R^{-\sigma}.
\end{equation}
Note that we also automatically have that
\begin{equation*}
|y_x|\le |x|+|t_x|+ \varepsilon R^{-\sigma}< \tfrac{1}{2}R^{1-\sigma}+R^{1-\sigma}+\varepsilon R^{-\sigma}<2R^{1-\sigma},
\end{equation*}
and so we recover all of the required properties.
It seems likely that ergodic results, similar to
(\ref{TorusRiductionImpr}), are well known; however, we prove this now using Fourier series.
We write
$x$ in place of $[x]$ from now on.
Let $\phi : \mathbb{T}^{n} \to [0,(2/\varepsilon)^n)$ be smooth, supported in $B(0,\varepsilon/2)$, such that $\int \phi=1$, and set
\begin{equation*}\label{Def:PhiTorusImpr}
\phi_{R}(x) := \phi \big( R^{\sigma}x \big).
\end{equation*}
If we could show that there exists $\theta \in \mathbb{S}^{n-1}$ such that
for all $x \in \mathbb{T}^{n}$ there is a
$t_{x} \in (R^{\sigma} \mathbb{Z}+[-\tfrac{\varepsilon}{2}R^{-\sigma},\frac{\varepsilon}{2}R^{-\sigma}])\cap (0,R^{1-\sigma})$ satisfying
\begin{equation}\label{TorusRiductionBisImpr}
\phi_{R}(x-t_{x} \theta) > 0,
\end{equation}
then (\ref{TorusRiductionImpr}) would follow.
Let $\psi: (-\varepsilon/2,\varepsilon/2) \to [0,2/\varepsilon)$ be a one-dimensional Schwartz function such that $\int \psi=1$,
and define
$$
\eta_{R}(t) := R^{3\sigma-1} \sum_{\substack{j \in \mathbb{Z} \\ 0 < j < R^{1-2\sigma}}} \psi(R^{\sigma}(t - R^{\sigma} j)).
$$
Noting that $\eta_{R}$ is supported in
$R^{\sigma} \mathbb{Z}+[-\frac{\varepsilon}{2}R^{-\sigma},\frac{\varepsilon}{2}R^{-\sigma}]$, we will show that there exists $\theta \in \mathbb{S}^{n-1}$ such that,
for all $x \in \mathbb{T}^{n}$,
\begin{equation*}
\int_{\mathbb{R}} \phi_{R}(x-t \theta) \eta_{R}(t)\, d t >0,
\end{equation*}
which implies (\ref{TorusRiductionBisImpr}).
Expanding in Fourier series;
\begin{equation}\nonumber
\phi_{R}(x-t\theta) =
\widehat{\phi_{R}} (0)
+
\sum_{\substack{k \in \mathbb{Z}^{n} \\ k\neq 0}} \widehat{\phi_{R}} (k) e^{ 2\pi ix \cdot k}e^{-2\pi i t \theta \cdot k}
=: \widehat{\phi_{R}} (0)+ \Gamma(t, x, \theta),
\end{equation}
and noting that $\int_{\mathbb{R}} \eta_{R} \simeq 1$ and
$\widehat{\phi_{R}}(0) = \int_{\mathbb{T}^{n}} \phi_{R} \simeq R^{-n\sigma}$,
it would be sufficient
to find $\theta \in \mathbb{S}^{n-1}$ such that\footnote{We write $A \lesssim B$ if $ A\leq C B$ for some constant $C > 0$ that only depends on unimportant parameters. We also write $A \simeq B$ if $ A \lesssim B$ and $B \lesssim A$.}
\begin{equation}\label{WBS}
\Big| \int_{\mathbb{R}} \Gamma(t, x, \theta)\eta_{R}(t)\,dt \Big|
\lesssim R^{-\gamma},\quad \gamma>n\sigma
\end{equation}
whenever $x \in \mathbb{T}^{n}$.
For the proof of \eqref{WBS}, we note that
\begin{eqnarray}\nonumber
\Big| \int_{\mathbb{R}} \Gamma(t, x, \theta) \eta_{R}(t) dt \Big|
& \leq &
\sum_{\substack{k \in \mathbb{Z}^{n} \\ k\neq 0}} \Big| \widehat{\phi_{R}} (k) \Big|
\ \Big| \int_{\mathbb{R}} e^{-2\pi i t \theta \cdot k} \eta_{R}(t) d t \Big|
\\ \nonumber
&= &
\sum_{\substack{k \in \mathbb{Z}^{n} \\ k\neq 0}} \Big| \widehat{\phi_{R}} (k) \Big|
\ \Big| \widehat{\eta_{R}}( \theta\cdot k) \Big|\\
&\lesssim& \sum_{\substack{k \in \mathbb{Z}^{n} \\ k\neq 0}} \frac{R^{-n\sigma}}{\left( 1 + R^{-\sigma} |k| \right)^{n+1}} \Big| \widehat{\eta_{R}}( \theta\cdot k) \Big|, \label{pull}
\end{eqnarray}
where the final inequality uses the Schwartz decay, which follows by integrating by parts in the formula for the Fourier coefficients. Noting that the right-hand side of \eqref{pull} no longer depends on~$x$, in order to find a $\theta\in \mathbb{S}^{n-1}$ such that (\ref{WBS}) holds for all $x\in \mathbb{T}^{n}$, it will suffice to prove that the right-hand side of \eqref{pull} is similarly bounded after averaging over the sphere. As
$$
\sum_{\substack{k \in \mathbb{Z}^{n} \\ k\neq 0}}
\frac{R^{-n\sigma}}{\left( 1 + R^{-\sigma} |k| \right)^{n+1}}\lesssim \int_{\mathbb{R}^{n}} \frac{R^{-n\sigma}}{\left( 1 + R^{-\sigma} |k| \right)^{n+1}}\, dk\lesssim 1,
$$
by Fubini's theorem, it would suffice to prove that
\begin{equation}\label{ORTSINTEGRAL}
\int_{\mathbb{S}^{n-1}} \Big| \widehat{\eta_{R}}( \theta\cdot k) \Big|\, d \theta \lesssim R^{2\sigma-1} \log R.
\end{equation}
We then use that $\sigma<\frac{1}{n+2}$ so that $1-2\sigma>n\sigma$.
To see \eqref{ORTSINTEGRAL},
we calculate
\begin{eqnarray*}\label{ExplFour}
\widehat{\eta_{R}}(t)
&= &
R^{3\sigma-1} \sum_{\substack{j \in \mathbb{Z} \\ 0 < j < R^{1-2\sigma}}}
\psi\big(R^{\sigma} ( \cdot - R^{\sigma} j)\big)^{\wedge}(t)
\\ \nonumber
& = &
R^{2\sigma-1}
\widehat{\psi}(R^{-\sigma} t)
\sum_{\substack{j \in \mathbb{Z} \\ 0 < j < R^{1-2\sigma}}}
e^{-2\pi i R^{\sigma}j t }
\\ \nonumber
& = &
R^{2\sigma-1}
\widehat{\psi}(R^{-\sigma} t)\frac{e^{2\pi i\lfloor R^{1-2\sigma} \rfloor R^\sigma t}-e^{-2\pi i R^{\sigma} t }}{e^{2\pi iR^\sigma t}-1}.
\end{eqnarray*}
Now since $| \widehat{\psi} \,| \lesssim 1$ this yields
\begin{equation}\nonumber
\int_{\mathbb{S}^{n-1}} \Big| \widehat{\eta_{R}}( \theta \cdot k) \Big|\, d \theta
\lesssim
R^{2 \sigma-1} \int_{\mathbb{S}^{n-1}}
\Big| \frac{\sin(\pi N R^{\sigma} \theta \cdot k)}{\sin (\pi R^{\sigma} \theta \cdot k)} \Big| \, d\theta,
\end{equation}
where $N=\lfloor R^{1-2\sigma} \rfloor+1$.
By the Funk--Hecke theorem (see for example \cite[pp. 35-36]{at}), we have that
\begin{eqnarray*}
\int_{\mathbb{S}^{n-1}}
\Big| \frac{\sin(\pi N R^{\sigma} \theta \cdot k)}{\sin (\pi R^{\sigma} \theta \cdot k)} \Big| \, d\theta&=& |\mathbb{S}^{n-2}| \int_{-1}^1 \Big| \frac{\sin(\pi NR^{\sigma}|k|t )}{\sin (\pi R^{\sigma} |k|t)} \Big| (1-t^2)^{\frac{n-3}{2}} dt\\
&\leq& \frac{|\mathbb{S}^{n-2}|}{R^{\sigma}|k|} \int_{-R^{\sigma}|k|}^{R^{\sigma}|k|} \Big| \frac{\sin(\pi Nt )}{\sin (\pi t)} \Big|\, dt\\
&\lesssim& \log N \ \lesssim \ \log R,
\end{eqnarray*}
where the penultimate inequality is a well-known property of the Dirichlet kernel (see for example \cite[pp. 182]{G}). This completes the proof of
(\ref{ORTSINTEGRAL}) which completes the proof of the lemma.
\end{proof}
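As a quick numerical sanity check of the Dirichlet kernel bound used in the final step (a sketch, not part of the proof):
\begin{verbatim}
import numpy as np

# L^1 norm of D_N(t) = sin(pi N t) / sin(pi t) over one period,
# compared against the classical (4/pi^2) log N growth.
for N in (10, 100, 1000, 10000):
    M = 2_000_000                         # even, so t = 0 is avoided
    t = -0.5 + (np.arange(M) + 0.5) / M   # midpoint grid on [-1/2, 1/2]
    D = np.sin(np.pi * N * t) / np.sin(np.pi * t)
    print(N, np.mean(np.abs(D)), 4.0 / np.pi**2 * np.log(N))
\end{verbatim}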
\section{Proof of Theorem~\ref{OurBouThm}}
\label{counter}
The maximal estimate \eqref{oto} implies the same estimate over a smaller time interval, and so writing $t/(2\pi R)$ in place of $t$, we know that
\begin{equation}\label{otooBis}
\left\| \sup_{0< t < 1} \big| e^{i\frac{t}{2\pi R}\Delta} f \big| \right\|_{L^{2}(B(0,1))} \lesssim R^s\| f \|_{2}
\end{equation}
whenever $\mathrm{supp\,} \widehat{f} \subset B(0,2R)$ and $R>1$. Thus it would suffice to prove that for this to hold it is necessary that
$s \geq \frac{n}{2(n+2)}$. In fact \eqref{otooBis} is equivalent to \eqref{oto}; see~\cite{Lee, LR}, and so we have not thrown anything away here.
Letting
$0 < \sigma < \frac{1}{n+2}$
we define
\begin{equation}\nonumber
\Omega := \big\{ \xi\in R^{1-\sigma} \mathbb{Z}^{n} \,:\, |\xi|< R \big\} + B(0,\rho),
\end{equation}
where $\rho$ is to be chosen later.
Let $\theta \in \mathbb{S}^{n-1}$, and consider initial data $f_\theta$ defined by
\begin{equation*}\label{TestinfFunctBenn}
f_\theta(x) = e^{i\pi R\theta\cdot x} f(x),\quad \text{where}\quad \widehat{f} = \frac{1}{ \sqrt{|\Omega|} }\chi_{\Omega}.
\end{equation*}
Note that $| \mathrm{supp\,} \widehat{f_\theta}\,| = |\Omega| \simeq R^{n\sigma}$, and
$ \| f_\theta \|_{2}= 1$.
In \cite{BBCRV}, it was shown that
\begin{equation}\label{Phase=1}
|e^{i\frac{t}{2\pi R}\Delta} f(x)| \gtrsim \sqrt{|\Omega|}
\quad
\quad \forall \
(x,t) \in \Lambda,
\end{equation}
where, taking $\varepsilon$ sufficiently small, $\Lambda$ is defined by
\begin{equation}\nonumber
\Lambda = \Big( \big\{ x\in R^{\sigma-1}\mathbb{Z}^{n}\,:\, |x|< 2\big\}+ B(0,\varepsilon R^{-1}) \Big) \times \big\{ t\in R^{2\sigma-1}\mathbb{Z}\, :\, 0<t<1\big\}.
\end{equation}
We provide the
proof of this for completeness. The idea is that the phase in the integrand in \eqref{rt} never strays too far from zero modulo~$2\pi i$, and so the different pieces of the integral, corresponding to different pieces of~$\Omega$, cannot cancel each other out.
In \cite{BBCRV} it was in fact proven that the solution is still large in small intervals of time around these lattice times; however, the lattice version above will suffice for our needs.
We start by showing that
\begin{equation}\label{I*}
x\cdot \xi \in \mathbb{Z} +B(0,\tfrac{1}{20}),
\end{equation}
provided that $\xi \in \Omega$ and $x \in R^{\sigma-1}\mathbb{Z}^{n}\cap B(0,2) +B(0,\varepsilon R^{-1})$. To see this, we write
\begin{equation}\nonumber
\xi =R^{1-\sigma} \ell + v, \qquad \text{where}\quad
\ell \in \mathbb{Z}^{n}, \ \ |\ell| < R^{\sigma}, \ \ |v| < \rho
\end{equation}
and
\begin{equation}\nonumber
x = R^{\sigma-1}m + u, \qquad \text{where}\quad
m \in \mathbb{Z}^{n}, \ \ |m| < 2R^{1-\sigma}, \ \ |u| < \varepsilon R^{-1},
\end{equation}
so that
\begin{eqnarray*}
x \cdot \xi
& = &
( R^{\sigma-1}m + u)\cdot( R^{1-\sigma} \ell + v)
\\ \nonumber
& = &
m \cdot \ell
+ R^{\sigma-1} m\cdot v
+ R^{1-\sigma} \ell \cdot u
+u\cdot v
\\
\nonumber
& =: & I_{1}+I_{2}+I_{3}+I_{4}.
\end{eqnarray*}
Since $I_{1} \in \mathbb{Z}$ and
\begin{equation}\nonumber
| I_{2} | < R^{1-\sigma} 2R^{\sigma-1} \rho = 2\rho,
\quad
| I_{3} | < R^{1-\sigma}R^{\sigma} \varepsilon R^{-1} = \varepsilon,
\quad
| I_{4} | < \rho \varepsilon R^{-1},
\end{equation}
we see that (\ref{I*}) holds by taking $\rho$ and $\varepsilon$ sufficiently small. On the other hand, we also have that
\begin{equation}\label{II*}
\frac{t}{R} |\xi|^{2} \in \mathbb{Z} + \left(-\tfrac{1}{20}, \tfrac{1}{20}\right),
\end{equation}
provided that
$
t \in R^{2 \sigma -1} \mathbb{Z} \cap (0,1).
$
To see this, we write
\begin{equation}\nonumber
t = R^{2\sigma -1}k, \qquad \text{where}
\quad
k \in \mathbb{Z},
\ \
0<k < R^{1-2\sigma},
\end{equation}
so that
\begin{eqnarray*}
\frac{t}{R} |\xi|^{2}
& = &
R^{2(\sigma-1)}k| R^{1-\sigma}\ell + v|^{2}
\\ \nonumber
& = &
R^{2(\sigma-1)}k\big( R^{2(1-\sigma)} | \ell|^{2} + |v|^{2} + 2R^{1-\sigma}\ell \cdot v\big)
\\ \nonumber
& =: &
I\! I_{1} + I\! I_{2} + I\! I_{3},\end{eqnarray*}
where
$I\! I_{1} \in \mathbb{Z}$ while
\begin{equation}\nonumber
| I\! I_{2} | \leq R^{2(\sigma - 1)} k |v|^{2} < R^{2(\sigma-1)} R^{1-2\sigma} \rho^{2} = \rho^{2}R^{-1},
\end{equation}
and
\begin{equation}\nonumber
| I\! I_{3} | \leq R^{2(\sigma-1)} k 2 R^{1-\sigma} |\ell \cdot v| \leq 2R^{\sigma-1} k |\ell| |v| <
2R^{\sigma-1} R^{1-2\sigma} R^{\sigma} \rho \leq 2\rho,
\end{equation}
so that (\ref{II*}) is satisfied for sufficiently small $\rho$. Indeed, altogether $\rho, \varepsilon\le \frac{1}{100}$ is sufficient for our purposes.
Now (\ref{I*}) and (\ref{II*})
imply that the phase in
$$
e^{i\frac{t}{2\pi R}\Delta} f(x)=\frac{1}{\sqrt{|\Omega|}}\int_{\Omega} e^{2\pi ix\cdot\xi -2\pi i\frac{t}{R} | \xi |^{2}} d \xi,
$$
is close enough to zero modulo $2\pi i$ as long as $(x,t)\in \Lambda$, yielding (\ref{Phase=1}).
We now consider
$\Lambda_{\theta,t} \subset \mathbb{R}^{n}$ defined by
\begin{equation}\nonumber
\Lambda_{\theta,t} :=
\big\{ x\in R^{\sigma-1}\mathbb{Z}^{n}\,:\, |x|< 2\big\}+ B(t\theta,\varepsilon R^{-1}),
\end{equation}
and note that
\begin{equation}\nonumber
x\in \Lambda_{\theta,t} \quad \text{and} \quad t\in R^{2\sigma - 1} \mathbb{Z}\cap(0,1) \quad \Rightarrow\quad (x-t \theta,t) \in \Lambda.
\end{equation}
Thus, by (\ref{Phase=1}),
we have that
\begin{equation}\nonumber
\sup_{0< t < 1} \big| e^{i\frac{t}{2\pi R}\Delta} f(x -t\theta )| \gtrsim \sqrt{|\Omega|}
\quad
\quad
\forall \
x \in \Lambda_{\theta} :=\bigcup_{t\in R^{2\sigma - 1} \mathbb{Z}\cap(0,1)}\Lambda_{\theta,t}.
\end{equation}
By Galilean invariance, or direct calculation using the formula \eqref{rt}, we have
$$
\sup_{0< t < 1} \big| e^{i\frac{t}{2\pi R}\Delta} f_\theta(x)|=\sup_{0< t < 1} \big| e^{i\frac{t}{2\pi R}\Delta} f(x -t\theta )|,
$$
and we recall that $\|f_\theta\|_2=\|f\|_{2}= 1$.
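For completeness, completing the square in \eqref{rt} yields the identity (recorded here as a convenient check)
\begin{equation*}
e^{i\frac{t}{2\pi R}\Delta} f_\theta(x) = e^{i\pi R\,\theta\cdot x}\, e^{-i\pi R t/2}\, e^{i\frac{t}{2\pi R}\Delta} f(x-t\theta),
\end{equation*}
so that the two moduli indeed agree for every $x$ and $t$.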
Thus, by taking $f_\theta$ in (\ref{otooBis}), we obtain
\begin{equation}\nonumber
\sqrt{|\Omega| |\Lambda_{\theta}|} \lesssim R^{s}.
\end{equation}
Since $\Lambda_\theta$ is nothing more than the $\varepsilon R^{-1}$--neighbourhood of $E_\theta$ from the second section,
we can use Lemma~\ref{Lemma:ToroImpr} to take $\theta\in \mathbb{S}^{n-1}$ so that $|\Lambda_{\theta}| \ge |B(0,1/2)|$ for sufficiently large $R$. As~$|\Omega| \gtrsim R^{n\sigma}$, we let $R$ tend to infinity so that
\begin{equation}\nonumber
s \geq \frac{n\sigma}{2},
\end{equation}
and the proof is completed by letting $\sigma$ tend to $\frac{1}{n+2}$ as we may. \hfill $\Box$
\section{Introduction}
Photodegradation is an inherent problem of materials that are used for high-light-intensity applications such as organic lasing media\cite{dyuma92.01,popov.98.01} or materials for use in nonlinear-optical devices.\cite{zhang98.01,galvan20.01} Peng first observed the recovery of fluorescence after photodegradation of a dye-doped polymer optical fiber.\cite{Peng98.01} Howell observed full recovery after photodegradation of Amplified Spontaneous Emission (ASE) from Disperse Orange 11 (DO11) dye doped in PMMA polymer.\cite{howel01.01,howel02.01} The DO11 molecule was shown not to recover in liquid solution.\cite{howel04.01} An intriguing demonstration that laser cycling could make materials more efficient and robust to future degradation\cite{zhu07.01,kuzyk07.02} motivated further research.
The mechanism responsible for the decay and recovery process is still not well understood.\cite{embay08.01} The purpose of this contribution is to develop a model based on physically realistic assumptions that take into account all observations. The model that we propose correctly predicts all observations, makes new predictions that can be tested, and is expressed in terms of only three parameters that may be calculable from first principles.
\section{Background}
ASE is the most sensitive probe of photodegradation and self healing that has been extensively used to characterize PMMA polymer doped with DO11 chromophores.\cite{embay08.01} While not as well studied, decay and recovery of the AF455 chromophore has been observed using two-photon absorption (TPA) spectroscopy.\cite{zhu07.01,kuzyk07.02} Since ASE and two-photon absorption are nonlinear-optical processes, they are sensitive probes, but this sensitivity can also lead to larger experimental uncertainties. Additionally, identically-prepared polymers tend to vary from sample to sample, so it may be difficult to reproduce data runs with a high degree of precision.
Keeping these difficulties in mind, it is nevertheless possible to determine common features over the full set of observations that hint at the mechanisms responsible. In particular, the ASE studies suggest the following,
\begin{enumerate}
\item{Dye molecules that irreversibly photodegrade in a liquid are found to self heal after photodegradation in a polymer.\cite{howel02.01,howel04.01}}
\item{Full recovery of the ASE signal after self healing is observed only for dye concentrations near the saturation limit in the polymer. The degree of recovery decreases at lower concentrations.\cite{embay08.01}\label{item:SaturationRecovery}}
\item{For a particular concentration, the decay rate depends more or less linearly on the pump intensity but the recovery rate is a constant.\cite{embay08.01}}
\item{The two-photon absorption cross-section of AF455 decays at a rate in proportion to the intensity, but recovers at a single recovery rate provided that the sample is not severely damaged\cite{zhu07.01}.}
\item{Linear dichroism is constant during decay and recovery, suggesting that molecular reorientation is not responsible.\cite{embay08.01}}
\item{Decay of the ASE signal is accompanied by a change in the absorption spectrum with an isosbestic point, suggesting that the decay product is a different species, or a perturbed version of the chromophore, and ruling out the possibility that dye diffusion is responsible,\cite{embay08.01} unlike in some other systems.\cite{lal99.01}}
\item{Evolution of the spatial population profile of a burn mark during recovery does not fit the diffusion model -- additional evidence that diffusion is not the cause of recovery.\cite{ramin11.01}}
\item{At high-enough intensities that result in visible damage (burn marks and laser ablation), the sample {\em appears} to decay irreversibly, but there is some evidence that the recovery times in these cases are much longer than for the regime in which the sample degrades by a small amount.\cite{ander11.01,desau09.01}}
\item{After cycles of photodegradation and recovery, the ASE intensity increases and the decay time constant increases relative to a non-cycled sample.\cite{howel04.01,zhu07.01}}
\item{As a function of temperature, the ASE intensity decreases; but the change in the linear absorption spectrum peaks at a different wavelength than when the material is photodamaged, suggesting that a new species is populated at higher temperature (as we will show below). Furthermore, the species found at elevated temperature recovers more quickly than a photodamaged molecule. Thus, the thermally-populated species, which we call a bystander state, is not a damaged molecule.}
\item{The decay rate is observed to decrease with increasing concentration, as we discuss later (and as shown in Figure \ref{fig:ConcDecayFits}); but the recovery rate increases with concentration.\label{item:DecayRateConcentration}}
\item{The change in the linear absorbance spectrum with concentration peaks at the same wavelength as does the change in the spectrum as the material photodamages. No such change in the normalized linear absorbance spectrum is observed as a function of concentration in MMA liquid as shown in Figure \ref{fig:abschange}. This suggests that the damaged species and dye aggregates may be related.\label{item:AbosrbanceWithDecaySameAsConcetration}}
\end{enumerate}
\begin{figure}
\includegraphics{01changeinOD.eps}
\caption{(a) Change in absorbance as a function of dye concentration with respect to the lowest DO11 dye concentration of DO11 dissolved in the liquid monomer MMA. (b) Same as (a) but for DO11 doped in PMMA polymer. (c) Change in the absorbance due to photodegradation in DO11/PMMA.}
\label{fig:abschange}
\end{figure}
The key observations that drive our model are that (1) full recovery requires high chromophore concentration (Item \ref{item:SaturationRecovery} above), (2) the decay rate decreases with concentration (Item \ref{item:DecayRateConcentration} above), and (3) the optical absorption spectrum evolves as a function of time in the same way that it changes with concentration (Item \ref{item:AbosrbanceWithDecaySameAsConcetration} above). These observations, along with the fact that the polymer plays a critical role, suggest that the interactions between chromophores as mediated by the polymer are responsible for self healing. As such, we call our new model the {\em correlated chromophore model}, which is a generalization of the simpler {\em embedded chromophore model} used in the past.\cite{embay08.01} The role of temperature will be taken into account using the grand canonical partition function, which includes correlation scales that depend on interactions between chromophores.
\section{The Model}
In the following sections, we begin by describing the embedded chromophore model, generalize it to include correlations between chromophores, apply the grand canonical ensemble to get the temperature-dependent distribution of the size of the correlated regions, and combine these into the full theory of the decay and healing process.
\subsection{Embedded Chromophore Model}
Consider a system of $N$ {\em non-interacting molecules}, $n$ of which are undamaged. Assuming that there is only one damaged species, the damaged population is $N-n$. If the recovery rate is $\beta$ and the decay rate proportional to the intensity, $I$, and given by $\alpha I$, then the evolution of the population is given by,
\begin{equation}\label{NonInteractingRates}
\frac {d n } {dt} = - \alpha I n + \beta \left( N-n\right),
\end{equation}
which has a solution for $I \neq 0$ of
\begin{equation}\label{nDecay}
\frac {n(t)} {N} = \frac {\beta} {\beta + \alpha I} + \frac {\alpha I } {\beta + \alpha I} \cdot e^{- \left( \beta + \alpha I \right) t },
\end{equation}
where the sample is assumed to be originally pristine at $t=0$.
The population of undamaged molecules, starting when the pump is turned off at time $t_0$, is given by
\begin{equation}\label{nRecover}
\frac {n(t)} {N} = 1 - \left( 1- \frac {n(t_0)} {N} \right) e^{- \beta \left( t - t_0 \right) },
\end{equation}
where $n(t_0)$ is the initial undamaged population at time $t_0$. This model is consistent with the time evolution of the ASE intensity in DO11\cite{embay08.01} and the TPA cross section of AF455\cite{zhu07.01} chromophores at fixed concentration and temperature. Even though the population model by necessity depends on more than two populations (i.e., excited electronic and vibronic states of each species must play a role), if optical excitations and de-excitations are fast compared with the photodegradation and recovery process, the dynamics of these other states can be ignored. Note that the populations of the chromophore and the decay product can be differentiated by their unique optical properties, such as ASE efficiency and TPA cross section.
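To make these kinetics concrete, the closed-form solutions above can be evaluated directly. The following minimal sketch (written in Python; the rates, intensity, and normalization are illustrative placeholders rather than fitted values) traces a decay curve under pumping followed by a recovery curve after the pump is turned off:
\begin{verbatim}
import numpy as np

# Illustrative placeholder parameters (not fitted values)
alpha, beta, I, N = 1.0, 0.05, 0.5, 1.0

t = np.linspace(0.0, 10.0, 201)        # pump on starting at t = 0
r = beta + alpha * I
n_decay = N * (beta / r + (alpha * I / r) * np.exp(-r * t))

t0, n0 = t[-1], n_decay[-1]            # pump turned off at t0
t_rec = t0 + np.linspace(0.0, 100.0, 201)
n_rec = N * (1.0 - (1.0 - n0 / N) * np.exp(-beta * (t_rec - t0)))
\end{verbatim}
The two arrays reproduce the exponential approach to the steady-state fraction $\beta/(\beta + \alpha I)$ during pumping and the slower exponential recovery toward $n = N$ afterward.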
\subsection{Correlated Chromophore Model}
The noninteracting model given by Equation \ref{NonInteractingRates} can be phenomenologically generalized by allowing the rates $\alpha$ and $\beta$ to depend on intensity -- for example, by expressing them as a series in the intensity if higher-order contributions are small. $\alpha$ is independent of intensity if the optical damage process is dominated by one-photon absorption. If two-photon absorption is important, then a correction term linear in the intensity must be added to the theory. Similarly, it is possible that the recovery rate is accelerated or suppressed in the presence of light, in which case an intensity-dependent correction term needs to be added to $\beta$. The data for DO11/PMMA and AF455/PMMA at fixed concentration and temperature agree with the theory for constant $\alpha$ and $\beta$, so the higher-order correction terms are ignored in this paper, but the theory can easily be generalized if needed.
The observation that decay and recovery kinetics depend on the concentration can be taken into account by making $\alpha$ and $\beta$ a function of the concentration of the chromophores and their decay products. In particular, Figure \ref{fig:abschange} shows that the absorption spectrum changes as a function of DO11 concentration in PMMA polymer, an indication that the DO11 molecules are interacting with each other. The fact that higher concentration samples show accelerated recovery\cite{embay08.01} and decreased decay rates suggests that interactions between molecules may be responsible for self healing. The nature of this interaction is not important in building a phenomenological model that depends on concentration and characteristic energies. Indeed, the phenomenological parameters that we will define in our model would be calculable from first principles as tests of the nature of these interactions.
In the generalized model, we redefine $N$ to be the number of molecules that are associated with each other, be it through physical aggregation of DO11 molecules into dimers or microcrystallites, a correlated region of dyes that interact through the polymer as mediated by phonons, locally oriented domains, or altogether new physics. For the purposes of this paper, we will generically call such a correlated region a domain. Each domain will have its own characteristic decay and recovery rate depending solely on the domain size, and the domain-size distribution will be determined by the grand canonical ensemble. The observed bulk behavior will then be given by the ensemble average. Figure \ref{fig:domain} shows a cartoon representation of a collection of domains with one of the domains specified with $N$ molecules, $n$ undamaged molecules and $N-n$ damaged species.
\begin{figure}
\includegraphics{02domains.eps}
\caption{Domains of dye in polymer.}
\label{fig:domain}
\end{figure}
First, we focus on a single domain of fixed size $N$. We propose that the recovery rate of a damaged species will be accelerated in the presence of undamaged molecules in proportion to the number of undamaged molecules. Thus, in a bigger domain, the recovery rate will be larger than in a small domain. On the other hand, a small domain with mostly undamaged molecules will recover at a faster rate than larger domains that are populated with a preponderance of degraded molecules.
There are several mechanisms that could lead to such behavior. For example, if the undamaged molecules are strongly interacting with each other, forming a damaged species will cost energy. The more neighbors, the higher the energy cost, leading to a slower decay rate and faster recovery rate. The nature of these interactions is not yet understood, but there are many potential candidates, from electric forces that are typically responsible for aggregation into dimers and microcrystallites to spin statistics that cause Bose-Einstein condensation or perhaps new unknown mechanisms. Based on experimental observations, we find that the decay rate depends inversely on the number of molecules in a domain, so that it is given by $\alpha I /N$. These two generalizations of Equation \ref{NonInteractingRates} lead to
\begin{equation}\label{InteractingRates}
\frac {d n } {dt} = \beta n (N - n) - \frac{\alpha I} N n .
\end{equation}
Integrating Equation \ref{InteractingRates} yields
\begin{equation}\label{RecoverN}
n = \frac{\left( N - \alpha I /(\beta N) \right) n_0}{n_0 + \left( N - n_0 - \alpha I /(\beta N) \right) \exp \left[ - \left(N \beta - \alpha I/N \right) t\right]} ,
\end{equation}
where $n_0$ is the initial undamaged population at $t=0$. When the pump is turned off ($I=0$), Equation \ref{RecoverN} approaches $n=N$ at infinite time. With the pump on, in the infinite-time limit, the population $n$ approaches $N - \alpha I / (\beta N)$ provided that the intensity is below the critical intensity $I_C \equiv N^2 \beta/\alpha$, and vanishes at large time when the intensity is above this critical value. Past work showed that for $I<I_C$, the ASE intensity for DO11 in PMMA polymer did indeed asymptotically approach a nonzero value at long times.\cite{embay08.01}
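Explicitly, setting $dn/dt = 0$ in Equation \ref{InteractingRates} for $n \neq 0$ gives
\begin{equation}\nonumber
\beta \left( N - n_{\infty} \right) = \frac{\alpha I}{N}
\quad \Longrightarrow \quad
n_{\infty} = N - \frac{\alpha I}{\beta N},
\end{equation}
which is positive precisely when $I < I_C = N^2 \beta/\alpha$, so larger domains withstand higher intensities before their undamaged population is fully depleted.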
With $I=0$, the population recovers according to
\begin{equation}\label{InteractingRates2}
\frac {d n } {dt} = \beta n (N - n).
\end{equation}
With $n(t_0)$ as the undamaged population after the decay,
\begin{equation}\label{RecoverN2}
n = \frac{N n(t_0)}{n(t_0) + \left[ N - n(t_0)\right] e^{-N \beta t}} ,
\end{equation}
or, equivalently,
\begin{equation}\nonumber
n = \frac{N}{1 + \left[ \frac{N}{n(t_0)} - 1\right] e^{-N \beta t}} .
\end{equation}
\subsection{Bystander state}
\label{sec:bystander}
\begin{figure}
\includegraphics{03AbsTemp.eps}
\caption{(a) Absorbance spectrum of DO11 molecules when doped in PMMA at various temperatures. (b) Difference in absorbance with respect to the absorbance at $10\,^{\circ}$C. As the sample temperature increases, the population of the bystander state increases, as can be seen from the growing peak at 3.0 eV, while the population under the main peak decreases.}
\label{fig:abstemp}
\end{figure}
A plot of the optical absorbance as a function of temperature, as shown in Figure \ref{fig:abstemp}a, shows the main absorption peak decreasing as a new peak grows in proportion to the main peak's decrease. The difference between each spectrum and the initial spectrum at $T=10\,^{\circ}$C is shown in Figure \ref{fig:abstemp}b. The isosbestic point between these two regions is an indication that the decreasing peak reflects a decrease in the population of one species while the growing peak is a sign of its conversion to a new species. This process is found to be associated with a decrease in the ASE peak, an indication that the population of DO11 molecules that generate ASE are being converted to ones that do not contribute to ASE.
The positions of the two peaks in Figure \ref{fig:abstemp}b are at different energies than the corresponding peaks found during photodegradation, as shown in Figure \ref{fig:abschange}c, suggesting that the product formed at higher temperature is not necessarily the damaged state. However, the most convincing evidence that an increase in temperature does not produce a damaged state is that the process is instantaneously reversible, that is, the spectrum change follows the temperature with negligible delay. In contrast, recovery of the damaged species takes several hours, even at elevated temperatures. We take this large difference in time scales as strong evidence that the bystander state is not a damaged species.
\begin{figure}
\includegraphics{04bystanderstate.eps}
\caption{Height of the absorbance peak plotted as a function of temperature. The data are fit to Equation \ref{bystander} to estimate the energy difference between the molecular ground state and the bystander state, which is found to be $\varepsilon_b = 73.4 (\pm 1.1)$~meV.}
\label{fig:bystanderstate}
\end{figure}
The peak at higher temperature is not related to the decay product, and is most likely related to a different form of the DO11 molecule, such as the DO11 molecule in an excited vibronic state of the electronic ground-state manifold, an isomer, a tautomer, or a twisted intramolecular charge transfer (TICT) state.\cite{westf12.01} The important point is that, whatever the species, it is not a decay product, yet it does not contribute to ASE. Given that this product does not participate in the photodegradation process, we refer to it as a bystander state of the undamaged molecule. If the energy difference between the undamaged molecule and the bystander state is $\varepsilon_b$, the temperature dependence of the undamaged population is given by,
\begin{equation}\label{bystander}
N'(T) = \frac {N} {1 + \exp \left[ - \varepsilon_b / kT \right]}
\end{equation}
Figure \ref{fig:bystanderstate} shows a fit to the temperature dependence of the peak of the absorption spectrum shown in Figure \ref{fig:abstemp} from which we get $\varepsilon_b = 73.4 (\pm 1.1) meV$. Note that at room temperature, the bystander population is about 5\%.
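This estimate follows directly from Equation \ref{bystander}: at $T = 295$~K, $kT \approx 25.4$~meV, so the bystander fraction is
\begin{equation}\nonumber
\frac{N - N'}{N} = \frac{e^{-\varepsilon_b/kT}}{1 + e^{-\varepsilon_b/kT}}
\approx \frac{e^{-2.9}}{1 + e^{-2.9}} \approx 0.05 .
\end{equation}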
Generalizing Equation \ref{InteractingRates} to include the bystander state is somewhat complicated by the details of the damage mechanism. For example, does the pump beam damage both the DO11 molecule in its undamaged and bystander state? Is the damaged state also characterized by two species? If the damage process removes molecules from the undamaged population, thermalization will result in an increase in the undamaged population from the reservoir of molecules in the bystander state. Similarly, if a bystander state is associated with the damaged species, thermalization will result in a combination of both. Combinations and permutations of these processes can lead to even more complex behavior.
Because the population of the bystander state is small relative to the undamaged DO11 population, we will generalize Equation \ref{InteractingRates} in the most straightforward way by simply replacing $N$ with $N'$, thus excluding the bystander state from consideration, or
\begin{multline}\label{eq:recoverN6}
n[t; N'(T),I] \\= \frac{\left( N' - \alpha I /(\beta N') \right) n_0}{n_0 + \left( N' - n_0 - \alpha I /(\beta N') \right) \exp \left[ - \left(N' \beta - \alpha I/N' \right) t\right]} .
\end{multline}
By replacing $N$ with $N'$, we are not reducing the domain size but the effective population that participates in decay and recovery of ASE. $n_0$ continues to refer to the initial undamaged population, but excludes DO11 molecules in the bystander state. The theory can be easily generalized to account for complexities associated with individual systems.
The theory embodied in Equation \ref{eq:recoverN6} is general in the sense that it applies to a broader set of systems than those described when motivating the derivation. For example, it has been suggested that the decay product might be a molecule with an expelled electron that is trapped in the polymer matrix,\cite{desau09.01} an aggregate formed between two molecules, or between molecules that are bound upon the exchange of a proton.\cite{embay08.01} All these mechanisms can be equally represented by our model.
\subsection{Distribution of Domains}
The recovery process is based on groupings of molecules into generic domains, and the dynamics were shown to depend on domain size. The domain size will be governed by the competition between attractive forces and thermal disordering, leading to a distribution of domain sizes. Examples of systems with a distribution of domain sizes include ferromagnets,\cite{sakar05.01} micellized surfactant solutions,\cite{gelba96.01} liquid crystals,\cite{colli10.01} and of course dye solutions. In each case, the equilibrium domain-size distribution is associated with a minimum of the free energy. The domain-size distribution can be derived in several ways.\cite{cates90.01,duque97.01,mckit10.01} The simplest and most common method is minimizing the Helmholtz free energy using a grand canonical partition function. We use this approach because our system shares with these others the interaction between entities (in our case molecules, mediated by the polymer) that form a domain and the thermal fluctuations that limit domain size.
The partition function, $z_N$, for a domain with $N$ molecules is given by,
\begin{equation}\label{pfunction}
z_N= \exp{\left[\frac{\lambda (N-1)}{kT}\right]}=\exp{[\gamma(N-1)]}
\end{equation}
where $\lambda$ is the free energy of a single molecule outside of a domain relative to a molecule within a domain, such that the energy associated with a domain of $N$ molecules is $E_N=-\lambda(N-1)$; $k$ is the Boltzmann constant, $T$ is the absolute temperature, and $\gamma=\lambda/kT$. When $\lambda$ is positive, energy is released when a molecule is added to a domain. The global partition function of a collection of domains is then given by,
\begin{equation}\label{gpfunction}
Z=\prod_N\frac{z_N^{\Omega_N}}{\Omega_N!}
\end{equation}
where $\Omega_N$ is the number of domains with $N$ molecules. If our system has unit volume, $\Omega_N$ is the number density of domains of size $N$.
The Helmholtz free energy, $F$, can be obtained from the partition function and simplified using Stirling's approximation,
\begin{align}
\nonumber F &= -kT\ln{Z}\\
\nonumber &= -kT\sum_N [\Omega_N\ln{z_N}-\ln{(\Omega_N!)}]\\
&\approx kT\sum_N \Omega_N \left( \ln \frac {\Omega_N} {z_N} -1 \right) .
\end{align}
The chemical potential of a domain of size $N$ is therefore,
\begin{equation}\label{chempotential}
\mu_N=\frac{\partial F}{\partial \Omega_N}=kT\ln{\frac{\Omega_N}{z_N}}.
\end{equation}
At equilibrium, the chemical potential of each molecule, whether it is a single molecule or part of a domain of any size, is the same. Equating $\mu_N/N$ and $\mu_1$, one obtains a relationship between the number density of domains of size $N$ and the number density of single molecules as,
\begin{align}\label{domainno}
\Omega_N=e^{\gamma (N-1)}\Omega_1^N=\frac 1 {e^{\gamma}} \left[e^{\gamma}\Omega_1 \right]^N .
\end{align}
The total number of molecules in the system is given by,
\begin{align}\label{consmass}
\nonumber \rho&=\sum_{N=1}^{\infty}N\Omega_N\\
\nonumber &=\sum_{N=1}^{\infty}N e^{\gamma (N-1)}\Omega_1^N\\
& =\frac{\Omega_1}{(1-e^{\gamma}\Omega_1)^2},
\end{align}
where $\rho$ can be interpreted as the average number density of molecules for fixed total volume, which is also simply related to the concentration of dye in the system.
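The final equality in Equation \ref{consmass} is the derivative of a geometric series: writing $x = e^{\gamma}\Omega_1$ (with $x < 1$ required for convergence),
\begin{equation}\nonumber
\sum_{N=1}^{\infty} N x^{N-1} = \frac{d}{dx}\sum_{N=0}^{\infty} x^{N} = \frac{1}{(1-x)^{2}},
\end{equation}
so that $\rho = \Omega_1 (1 - e^{\gamma}\Omega_1)^{-2}$.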
After some rearrangement of Equation \ref{consmass}, we get
\begin{equation}\label{volfracx1}
\Omega_1=\frac{(1+2\rho e^{\gamma})-\sqrt{1+4\rho e^{\gamma}}}{2\rho e^{2\gamma}}.
\end{equation}
Substituting Equation \ref{volfracx1} into Equation \ref{domainno}, we can write the number density of domains of size $N$, $\Omega(N)$ as a function of number density, $\rho$,
\begin{align}\label{domainno2}
\nonumber \Omega(N)&=z^{(N-1)}\left[\frac{(1+2\rho z)-\sqrt{1+4\rho z}}{2\rho z^2}\right]^N \\
&= \frac 1 z \left[\frac{(1+2\rho z)-\sqrt{1+4\rho z}}{2\rho z}\right]^N,
\end{align}
where $z=\exp\left(\frac{\lambda}{kT}\right)$.
\begin{figure}
\includegraphics{05DomainDensity.eps}
\caption{Number density of domains $\Omega(N; \rho,T)$ as a function of domain size $N$ at several temperatures (a) and concentrations (b).}
\label{fig:domaindensity}
\end{figure}
In the model, the density, $\rho$, and temperature, $T$, are experimentally controllable, with the free energy of a single molecule outside of a domain relative to a molecule within a domain, $\lambda$, the only free parameter. Figure \ref{fig:domaindensity} shows a plot of the simulated distribution of $\Omega(N; \rho,T)$ at several concentrations and temperatures with the other parameters fixed. At higher temperature, the average domain size decreases as the thermal energy breaks the domains apart.
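The distributions in Figure \ref{fig:domaindensity} follow from a direct evaluation of Equation \ref{domainno2}. A minimal numerical sketch is given below (in Python; the value $\lambda = 0.29$~eV anticipates the fit result of Table \ref{tab:results}, while the value of $\rho$ is an arbitrary illustrative choice):
\begin{verbatim}
import numpy as np

kB = 8.617e-5                     # Boltzmann constant, eV/K

def domain_density(N, rho, T, lam=0.29):
    # closed-form domain-size distribution derived above
    z = np.exp(lam / (kB * T))
    x = (1.0 + 2.0*rho*z - np.sqrt(1.0 + 4.0*rho*z)) / (2.0*rho*z)
    return x**N / z

N = np.arange(1, 301)
Omega = domain_density(N, rho=0.05, T=295.0)
\end{verbatim}
Because the base $x$ of the geometric factor lies strictly between zero and one, $\Omega(N)$ decays geometrically in $N$, with a characteristic domain size that grows with $\rho$ and shrinks with $T$, as in the figure.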
\subsection{The Full Integrated Model}
The theory of correlated molecules that describes the recovery process of a domain is governed by Equation \ref{eq:recoverN6} while the thermodynamic model describes the distribution of domain size as governed by Equation \ref{domainno2}. The effects of correlations and the statistical model for domain size can be combined through an ensemble average to predict the number density of the undamaged population as a function of time, temperature, light exposure, initial non-equilibrium undamaged population, and concentration of chromophores.
In defining the domain size in the thermodynamic model, $N$ refers to the total number of molecules in a domain, including the DO11 molecule, the bystander species and the degraded species. However, the recovery process is hypothesized to be governed by interactions between DO11 molecules with the bystander state removed, so the bystander species must be excluded. The mean number of undamaged molecules in a domain, $ \overline{n}$, is thus given by the ensemble average,
\begin{align}
\overline{n}(t;\rho ,T,I,n_0) &= \sum_{N=1}^{\infty}n(t;N',I)\,\Omega(N;\rho, T) \label{recnwithT2}\\
&\approx \int_1^{\infty} n(t)\,\Omega(N)\,dN \label{IntegrateDistribution}\\
&\equiv \int_1^{\infty} \eta(N, I, t)\,dN, \label{finaln}
\end{align}
where Equations \ref{eq:recoverN6} and \ref{domainno2} are used to evaluate the integral in Equation \ref{IntegrateDistribution}, and $N'$ is given by Equation \ref{bystander}.
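Continuing the numerical sketch above, the ensemble average can be evaluated by direct summation; here $\alpha$ and $\beta$ take the values reported in Table \ref{tab:results}, $I$ is the average pump intensity quoted there, and $\rho$ remains an illustrative choice:
\begin{verbatim}
alpha, beta = 7.09, 3.22e-4      # fitted rates (see the parameter table)
I, T, rho = 0.202, 295.0, 0.05   # W/cm^2, K, illustrative density
eps_b = 0.0734                   # bystander-state energy, eV

def n_domain(t, N):
    # pumped-domain solution above, with initial population n0 = N'
    Np = N / (1.0 + np.exp(-eps_b / (kB * T)))
    K = Np - alpha * I / (beta * Np)     # long-time population
    r = Np * beta - alpha * I / Np       # relaxation rate
    return K * Np / (Np + (K - Np) * np.exp(-r * t))

Ns = np.arange(1.0, 500.0)
w = domain_density(Ns, rho, T)
nbar = [np.sum(n_domain(ti, Ns) * w) for ti in np.linspace(0.0, 60.0, 61)]
\end{verbatim}
Domains with roughly $N^2 \lesssim \alpha I/\beta$ decay essentially completely while larger domains retain most of their population, so the weighted sum reproduces both the partial decay of $\overline{n}$ and the shift of $\eta(N)$ toward large $N$ shown in Figure \ref{fig:etadecay}.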
\begin{figure}
\includegraphics{06DecayEvolution.eps}
\caption{Evolution of $\eta(N)$ as a function of time during continuous pumping.}
\label{fig:etadecay}
\end{figure}
\begin{figure}
\includegraphics{07RecEvolution.eps}
\caption{Evolution of $\eta(N)$ as a function of time during recovery after pumping for 10 minutes.}
\label{fig:etaRecover}
\end{figure}
The distribution function $\eta(N,I,t)$ represents the population of the undamaged species and is thus proportional to the contribution to optical absorbance of the undamaged species from each group of domains of size $N$. The total ASE emission from domain size $N$ is also related to $\eta(N,I,t)$. $\eta(N) \Delta N$ is the fraction of the undamaged population living in domains of size between $N$ and $N + \Delta N$.
Figure \ref{fig:etadecay} shows simulations of the evolution of $\eta(N,I,t)$ over time as the material is pumped. The area under each curve represents the ensemble average $\overline{n}(t)$. Since molecules in smaller domains degrade at a higher rate than molecules in larger domains, the peak in the distribution shifts to the right upon photodegradation, as shown by the dashed curve. At long times, the distribution function converges to an equilibrium shape as the decay and recovery rates balance each other in each domain. When the pump is turned off, the damaged species in the larger domains will recover more quickly. As the larger domains recover, the smaller domains follow, resulting in the peak position shifting back to the left, as shown in Figure \ref{fig:etaRecover}. The peak position as a function of domain size during a run of decay and recovery thus traces a hysteresis loop.
From Equation \ref{finaln}, we can predict the behavior of the decay of dye-doped polymer as a function of dye concentration and temperature. Figure \ref{fig:ASEtempSim} shows a series of simulated curves of the time dependence of ASE as predicted for various temperatures.
\begin{figure}
\includegraphics{08ASEtempSim.eps}
\caption{Simulated undamaged population decay as a function of time at several temperatures, at a particular dye concentration and pump intensity.}
\label{fig:ASEtempSim}
\end{figure}
\section{Testing the Theory}
Experimental details are described elsewhere\cite{ramin11.01}. It is important to note that, due to spatial variations from spot to spot, it is difficult to obtain reproducible data even from a single sample. Typical point-to-point sample variations can yield as much as 30\% variation in the signal, with larger variations from sample to sample. As such, most measurements require multiple runs to be averaged to decrease the effects of such variability.
In determining the change of optical absorbance, the white-light source, which focuses to a circular area, must be overlapped with the region that generates ASE. Since the pump light must be focused to a thin line to optimize the ASE signal, the pump light and the wider white probe beam can never fully overlap. It is also difficult to ensure that the overlap region is optimized, so most of the probe might be missing the damaged area. To make matters worse, the pump beam can have hot spots that evolve over long data runs, which can take up to several days. Thus, some areas along the pump line may be fully damaged while others are close to pristine. This makes it difficult to definitively determine the change in the absorption spectrum for a damaged area. Overcoming these issues requires painstaking adjustments and multiple runs in different spots on one sample, as well as runs on different samples prepared in the same way.
\begin{figure}
\includegraphics{09ASEvsPump.eps}
\caption{ASE intensity plotted as a function of the pump intensity for samples of several concentrations. The inset shows the fit parameters $I_0$ and exponent $p$ determined from the data at each concentration.}
\label{fig:ASEvsPump}
\end{figure}
Empirically we find that the ASE intensity satisfies,
\begin{equation}\label{ASEexpression}
I_{ASE}=\frac{(c/c_0)^q}{1+(I_0/I_{pump})^p},
\end{equation}
and that the undamaged population is governed by the equation
\begin{equation}\label{conc-VS-ASE}
\overline{n} \propto [I_{ASE}]^{1/2.6}
\end{equation}
for a particular pump intensity, where $I_0$, $p$, $c_0$ and $q$ are constants, $c$ is the dopant concentration, and $I_{pump}$ is the intensity of the pump. Figure \ref{fig:ASEvsPump} shows the ASE intensity as a function of the pump intensity for several concentrations of DO11 dye, as well as the fit to Equation \ref{ASEexpression}. The curves are normalized in the figure so that they overlap, showing that they all have the same shape. The shape of the curve for the sample of $5~g/l$ concentration is somewhat anomalous.
\begin{figure}
\includegraphics{10ASEvsConc.eps}
\caption{Averaged ASE intensity at several dye concentrations. The data is fit to $I_{ASE} \propto (c/c_0)^q$, with $c_0 = 5.0(\pm0.6)~g/l$ and $q = 2.6 \pm 0.5$.}
\label{fig:ASEvsConcentration}
\end{figure}
Figure \ref{fig:ASEvsConcentration} shows a plot of the ASE intensity as a function of dye concentration and a fit to the function $I_{ASE} \propto (c/c_0)^q$, which yields $c_0 = 5.0(\pm0.6)~g/l$ and $q = 2.6 \pm 0.5$. Thus, the ASE intensity can be used as a measure of the concentration of undamaged species through this relationship.
As a test of the theory's ability to predict the dependence of the decay of ASE intensity as a function of time, samples of several concentrations are characterized using ASE at fixed temperature and pump intensity. The ASE intensity is converted to population of undamaged DO11 molecules using Equation \ref{conc-VS-ASE}. Initially the data from one concentration is fit to the model given by Equation \ref{finaln} by independently varying all the parameters ($\alpha, \beta, \rho$, and $\lambda$). Subsequently, the rest of the concentrations are fit to the model keeping constant the values obtained for $\alpha$, $\beta$, and $\lambda$ above and allowing only $\rho$, the number density of DO11 molecules, to vary. A value of $\rho$ is determined for each of the concentrations.
Figure \ref{fig:ConcDecayFits} shows representative data and fits to the theory for three concentrations. Six different concentrations are tested, and multiple runs at each concentration yield multiple values of the fit parameter $\rho$, which are averaged. The experimental uncertainty is determined from the spread in the data. Table \ref{tab:results} summarizes the values of the three parameters obtained from the fits. These same values of the three parameters are applied to the full set of data in this paper. $\alpha$, $\beta$ and $\lambda$ are properties of the material, and therefore should be constant for a given polymer and dye combination, though variations in the polymer due to materials processing may result in slight differences in these constants.
\begin{figure}
\includegraphics{11ConcDecayFits2.eps}
\caption{Normalized undamaged population ($\overline{n}$) obtained from the ASE intensity, fit to the correlated chromophore model to obtain the value of the parameter $\rho$ at each concentration.}
\label{fig:ConcDecayFits}
\end{figure}
\begin{figure}
\includegraphics{12ConcRecFits.eps}
\caption{Recovery of undamaged population ($\overline{n}$) and a fit to the correlated chromophore model using the parameters obtained from fits to the decay of population during pumping.}
\label{fig:ConcRecoverFits}
\end{figure}
\begin{center}
\begin{table}
\caption{Parameters determined for DO11 in PMMA using an average pump intensity of $I_p = 0.202 W/cm^2$.\label{tab:results}}
\begin{tabular}{c c c }
\hline
$\alpha (min^{-1}W^{-1}cm^2)$& $\beta (10^{-4}min^{-1}) $ & $ \lambda (eV)$ \\
Decay Rate & Recovery Rate & Free energy/molecule \\ \hline\hline
$7.09 (\pm 0.13)$ & $3.22 (\pm 0.26)$ & $0.29 (\pm 0.01)$ \\ \hline
\end{tabular}
\end{table}
\end{center}
Using the parameters determined from the decay data, the recovery data is predicted without the use of adjustable parameters. Figure \ref{fig:ConcRecoverFits} shows the predicted population as a function of time (curves) during recovery and the measured population of undamaged DO11 molecules as determined from the ASE intensity. We note that different samples were used to study recovery. Even so, the theory's prediction agrees with the data within experimental uncertainties for a range of concentrations.
\begin{figure}
\includegraphics{13ConcLinearFit2.eps}
\caption{Fit parameter $\rho$ from decay fits as a function of dye concentration as determined during sample preparation.}
\label{fig:ConcLinearFit}
\end{figure}
According to the model, $\rho$ is proportional to the concentration of dyes in the polymer. As such, we expect a linear relation between $\rho$ and concentration. Figure \ref{fig:ConcLinearFit} shows a linear fit to a plot of $\rho$ as a function of the concentration of a series of samples as determined from the amount of dye added to the polymer during its preparation. The linear fit is consistent with the data.
\section{Conclusions}
We have presented a simple model for photodegradation and recovery that depends on three parameters: the intensity-dependent decay rate, the recovery rate, and the free energy of a single molecule outside of a domain relative to a molecule within a domain. This model accounts for all observations of ASE and absorption spectroscopy of DO11 dye in PMMA as a function of time, pump intensity and concentration during decay and recovery with one set of these three parameters. Furthermore, the theory predicts the behavior as a function of temperature, which we are in the process of testing experimentally; preliminary data is consistent with the predictions. A more exhaustive experimental study will be presented in the future.
We find that three parameters fully characterize a composite material made from a particular chromophore and host polymer. As such, $\alpha$, $\beta$ and $\lambda$ -- which should be calculable from first principles based on the underlying mechanisms -- hold the key to providing a connection between experiment and more fundamental theoretical considerations. $\alpha$ is related to the damage cross-section of the molecule, which is empirically found to decrease with increased domain size. $\beta$, on the other hand, characterizes the self-healing process, with $n \beta$ a measure of the collective strength of undamaged molecules to heal. The parameter $\lambda$ determines the distribution of domain sizes, which is affected by temperature and by the concentration of the molecules in the sample when it is prepared.
The theory is easily generalizable to account for other observations. For example, all studies to date where self healing is observed conclude that the degradation rate is proportional to the intensity, implying that a linear damage mechanism is responsible. For a nonlinear damage process, Equation \ref{NonInteractingRates} can be generalized by adding a nonlinear function of the intensity. If an irreversible damage mechanism acts along with a reversible one, an additional species can be added to the model.
The mechanism responsible for self-healing appears to lie in interactions between molecules that are somehow mediated by the polymer. Indirect evidence suggests that damage is associated with charge ejection from pristine molecules and the creation of a trapped charge\cite{zimme94.01} density in the polymer.\cite{desau09.01} We propose the hypothesis that in liquid solution, damaged molecules may break apart into fragments (perhaps charged) that, by virtue of mixing in the liquid state, drift too far apart, on average, to recombine over a reasonable time frame. Once the molecular ions are neutralized by charges in the liquid through collisions and mass transport, reconstitution of the original molecule from the fragments is energetically unfavorable, and the process is thus irreversible. In the polymer, by contrast, the ionic fragments separate but remain in close proximity to each other. Electrostatic attraction between the fragments drives recombination. If molecules are associated with others in a domain, the larger fragments remain relatively stationary compared with the lighter ones. Thus, as higher numbers of fragments are produced, the attractive force to the domain increases, enhancing self-healing, as experimentally observed. Dielectric screening due to the polymer may act to lengthen the healing time.
Another possibility, though less likely, is that self healing is a totally new phenomenon analogous to Bose-Einstein condensation, which favors relaxation into a Fock state in which all particles are in the same single-particle state. In this picture, the molecules are correlated due to positive exchange statistics of some sort, where a domain of undamaged molecules induces healing in proportion to their population.
The source of correlations between molecules is not clear, but there are many possibilities. It has been suggested that the association may be in the form of aggregation,\cite{kuzyk06.06,embay08.01} which can originate in electrostatic interactions, hydrogen bonding, or the formation of nanocrystallites. Alternatively, phonons in the polymer chains, which behave as bosons, may result in correlations. Or the mechanism may be an altogether new phenomenon. Any process in which an aggregate is of lower energy, or is more probable due to exchange statistics, is a viable candidate. Independent measurements are needed to sort them out.
There are many experiments that can be brought to bear on the problem of mechanisms. The role of the polymer and its interaction with a guest molecule can be tested by varying the properties of each. For example, the fact that self healing is not observed in liquid monomer but is observed in polymer suggests that there exists a critical level of polymerization that leads to self healing. As such, one could measure the healing rate as a function of polymerization, in situ -- while a dye-doped monomer is in the process of polymerizing -- and correlate molecular weight with healing to determine if each domain is composed of molecules that are associated with a single polymer chain. Alternatively, the material can be heated through the polymer glass transition to determine if increased mobility of the polymer chains interferes with the healing phenomenon. In addition, the polymer host and guest molecule can be changed to determine the importance of structural and chemical properties to healing.
Optical characterization, including various types of imaging, ASE, absorption spectroscopy, and fluorescence can be applied in tandem with other experiments such as photoconductivity to test a given hypothesis. For example, an observation of photoconductivity in coincidence with changes in population of damaged species as probed optically would support the hypothesis that charge ion generation plays a role. Suppression of healing in the presence of a strong electric field would support the mechanism of charged molecular fragments, as would thermally stimulated discharge measurements. Finally, neutron scattering experiments could be used to directly probe correlations and aggregation.
In summary, the model that we present here predicts all of the observations in terms of three parameters and is based on the hypothesis that self healing originates from a collective phenomenon in which pristine molecules induce healing in a damaged molecule in proportion to domain size. Similarly, the decay rate decreases with domain size. Future planned experiments will be used to zero in on the mechanisms.
{\bf Acknowledgements:} We thank the Air Force (Grant No:~FA9550-10-1-0286) for their generous support and Dr.~Fred Gittes for insightful discussions.
\section{Introduction}
Noise from surfaces is a major source of decoherence for quantum systems, including trapped ions~\cite{theBible,Turchette2000,Brownnutt2015}, superconducting qubits~\cite{Wenner2011,Wang2015}, Rydberg atoms~\cite{PhysRevA.88.043429}, nitrogen-vacancy centers in diamond~\cite{Kim2015,PhysRevB.93.024305}, and nanoelectromechanical devices~\cite{yang2011}. In trapped ion systems, electric-field noise from surfaces limits the fidelity of quantum logic operations by heating the ions' motion, presenting a challenge for scalable quantum information processing. It can also introduce systematic shifts in the frequency of trapped-ion atomic clocks~\cite{RevModPhys.87.637,PhysRevLett.118.053002}. The amplitude of the experimentally measured noise is much larger than would be expected from thermal or technical noise produced by the trap electrodes or external sources. Because of this unexplained larger amplitude, ion heating from such noise is termed ``anomalous,'' and understanding or mitigating it is of interest both for basic surface science and for applications including quantum information processing.
The sensitivity of trapped ions to electric-field noise enables their use as exquisitely sensitive surface-science probes. Previous work using trapped ions to sense such noise has shown that treatment of trap-electrode surfaces can reduce the amplitude of the noise at the ion location~\cite{Allcock2011,Hite2012,Daniilidis2014,mcKay2014,McConnel2015}. These treatments include ion milling, where high-energy atomic ions are directed at the surface in a low-pressure environment; plasma treatment, where a low-energy plasma is created at the surface in a higher-pressure environment consisting of the gases ionized to create the plasma; and laser treatment, where a pulsed laser is directed at the surface. The reductions in ion heating rates achieved are factors of up to ${\sim}100$ for ion milling~\cite{Hite2012,Daniilidis2014,mcKay2014}, approximately~$4$ for plasma treatment~\cite{McConnel2015}, and approximately~$2$ for laser treatment~\cite{Allcock2011}. These treatments may be applied \textit{in situ}, i.e. within the same vacuum system as the measurements of noise using individual trapped ions, or \textit{ex situ}, i.e. in a separate system, necessarily requiring a (potentially brief) exposure to ambient atmosphere. Here we focus on the effect of ion milling, as it has been shown to have the most dramatic effects in reducing electric-field noise in trapped-ion experiments. Furthermore, we explore the use of \textit{ex situ} ion milling (ESIM) for trap-electrode treatment in particular, as it has the practical advantage that it can be used to treat technologically relevant surface-electrode ion traps without modifications to existing ultra-high-vacuum (UHV) and/or cryogenic systems.
To probe the mechanisms behind anomalous heating, we vary the temperature of the electrode surface. Prior work has shown a large reduction in anomalous ion heating upon cooling nominally untreated trap surfaces to cryogenic temperatures~\cite{needleTrap,Labaziewicz2008,Chiaverini2014}. Beyond this reduction, measurements of the exact form of the temperature dependence can also help place limits on potential models~\cite{Labaziewicz2008_2,Bruzewicz2015}. For instance, models based on the fluctuation of adatoms, either in position or dipole moment~\cite{PhysRevA.87.023421} (or both simultaneously in a correlated manner~\cite{PhysRevA.95.033407}), predict thermally activated noise amplitude, with Arrhenius-type exponential scaling~\cite{Brownnutt2015}. In contrast, models based on thermal fluctuations of charge carriers or atom-polarization in metals~\cite{Brownnutt2015} or insulators~\cite{kumph_NJP_2016} comprising the surfaces predict power-law scalings. However, to date the temperature dependence of anomalous heating above treated surfaces has not been studied.
In this work, we present measurements of megahertz-frequency electric-field noise above ion-trap electrode surfaces both before and after ESIM as a function of temperature, for two different electrode materials, using a single trapped atomic ion as the sensor. We also present noise measurements as a function of trap frequency and ion-electrode distance after ESIM for niobium traps at room temperature. We find that the temperature scalings of the noise before and after ESIM are markedly different, suggesting different mechanisms for anomalous ion heating in the two cases. With the measured frequency and distance scalings, these data appear to rule out known models for anomalous ion heating (after ESIM) in their current forms.
\begin{figure}[t b !]
\includegraphics[width = 1.0 \columnwidth]{fig1.png}
\caption{Electroplated gold traps used in this work. The figure is a photograph of the 1-cm-square trap chip attached and wire-bonded to the transfer stage, which is then mounted in either the ion-milling or experimental chamber. The aluminum cover, below the level of the trap surface, is meant to reduce sputtering of the trap electrode leads and interposer boards underneath. The inset is a micrograph of the central region of the trap electrodes; the RF-electrode rails are labeled, and all others are DC control electrodes. The ion is trapped $50(1) \unit{\mu m}$ above the center of the linear trap section shown here. The niobium traps used in this work were of the same design.}
\label{fig:trapPics}
\end{figure}
Electric-field noise near the frequency $f$ of a trapped-ion motional mode with average (thermal) excitation $\bar{n}$ leads to an ion heating rate $\dot{\bar{n}}(\omega,T,d)$ proportional to the electric-field noise spectral density at the ion's location $S_{E}(\omega,T,d)$, where $\omega=2\pi\times f$, $T$ is the electrode temperature, and $d$ is the ion-electrode distance, as
\begin{equation}
\dot{\bar{n}}(\omega,T,d)=\frac{q^{2}}{4 m \hbar \omega} S_{E}(\omega, T, d).
\label{eq:HRvNoise}
\end{equation}
\noindent Here $q$ and $m$ are the ion's charge and mass, respectively, and $\hbar$ is the reduced Planck constant. Thus, characterization of the ion's motional-state evolution provides a direct measurement of electric-field noise above the surface. For a single $^{88}$Sr$^{+}$ ion and $f=1.3$~MHz, $S_E\approx [2 \times 10^{-14}$~$(\textrm{V}/\textrm{m})^{2}/\textrm{Hz}\cdot \textrm{s}] \times \dot{\bar{n}}$. Measuring a heating rate with $1$~quantum/s uncertainty therefore corresponds to electric-field sensing at the $140$~$(\textrm{nV}/\textrm{m})/\sqrt{\textrm{Hz}}$ level.
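These conversion factors follow directly from Equation \ref{eq:HRvNoise}; as a check, they can be reproduced with a few lines of Python using standard physical constants:
\begin{verbatim}
import numpy as np
import scipy.constants as sc

m = 88 * sc.atomic_mass             # 88Sr+ mass, kg
omega = 2 * np.pi * 1.3e6           # mode angular frequency, rad/s

# noise density per unit heating rate: ~2e-14 (V/m)^2/Hz per quantum/s
S_E_per_ndot = 4 * m * sc.hbar * omega / sc.e**2

# field sensitivity at 1 quantum/s resolution: ~1.4e-7 V/(m Hz^1/2)
sensitivity = np.sqrt(S_E_per_ndot)
\end{verbatim}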
\section{Experimental System, Surface-Electrode Ion Traps, and Ion-Milling Procedure}
The motional heating measurements are carried out in linear Paul surface-electrode traps using a $^{88}\mathrm{Sr}^{+}$ ion in an apparatus that has been described previously~\cite{Sage2012, Chiaverini2014, Bruzewicz2015}. Ions are trapped in a UHV cryogenic system which does not require baking of the chamber or trap. A weak thermal link between the trap chip and the cryostat cold stage allows the temperature of the trap chip to be continuously varied between $4$~and $295 \unit{K}$. The motional heating rate is measured along the axial direction using sideband spectroscopy~\cite{Chiaverini2014}.
The linear surface-electrode traps are approximately $7$-$\unit{\mu m}$-thick electroplated (EP) gold, or $2$-$\unit{\mu m}$-thick sputtered niobium, on a sapphire substrate. A micrograph of the electrodes near the center of one of the gold traps is shown in the inset of \figRef{fig:trapPics}, where the layout of the electrodes can be seen. After fabrication, the traps are coated with photoresist to protect them during dicing and storage. Traps are rinsed in acetone and isopropyl alcohol, then blown dry with dry nitrogen, prior to wire bonding. A picture of the trap after wire bonding is shown in the main panel of \figRef{fig:trapPics}.
\begin{figure*}[b t !]
\includegraphics[width = 0.68 \columnwidth]{fig2_a.png}
\includegraphics[width = 0.67 \columnwidth]{fig2_b.png}
\includegraphics[width = 0.67 \columnwidth]{fig2_c.png}\\
\includegraphics[width = 0.68 \columnwidth]{fig2_d.png}
\includegraphics[width = 0.67 \columnwidth]{fig2_e.png}
\includegraphics[width = 0.67 \columnwidth]{fig2_f.png}
\caption{X-ray photoelectron spectroscopy (XPS) of the traps used in this work. Each graph shows data for typical solvent-cleaned chips before milling, after milling in the XPS chamber (2 keV Ar$^{+}$ ions, with a flux density of $6.4\times10^{-1}\unit{(C/m^{2})/s}$, for 2~min), and after a 30~min re-exposure to atmosphere. Upper (lower) row is Nb (Au). Left: Nb~3d (Au~4f) peaks; center: C~1s peak; right: O~1s peak. After milling of Nb, the primarily niobium-pentoxide surface is removed, revealing metallic niobium peaks, while the carbon and oxygen present on the surface, both in carbon-containing compounds and in the oxide, are also eliminated. After re-exposure, a mixture of niobium and niobium pentoxide is present, and carbon-containing compounds reappear, but at a lower level (visible in both the center and right panels of the top row). The O peak is the combination of a narrower, lower-binding-energy metallic oxide peak, and a broader higher-binding-energy peak that we associate with hydrocarbons or carbonates. After milling of Au, the carbon and oxygen present on the surface are eliminated. We associate these with carbon-containing compounds in part due to the O~peak shift (cf. the O~peak in niobium, upper right panel). After re-exposure, some contamination returns, but the Au peaks remain single-component in nature, i.e. in contrast to Nb, we see no evidence of oxidation. The peak at slightly higher binding energy than the O~peak is the Au~4p peak. Binding energy is referenced to the adventitious carbon C~1s peak at 284.8~eV. While the parameters of the milling done in the XPS chamber, particularly the higher Ar$^{+}$ ion flux, lead to a higher material removal rate than the ESIM, we believe these spectra are representative of what would be observed after ESIM since the ion energy and dose are essentially equivalent. The higher background levels visible in the right two panels in the upper row are primarily due to photoelectrons from Nb atoms which have lost various amounts of energy due to inelastic scattering on their way out of the sample. There are more such electrons in the ``After milling'' and ``After re-exposure'' cases since there is a higher density of Nb atoms near the surface, due to the smaller amount of oxide and carbon-containing compounds, in these cases. The high-kinetic-energy edge of these broad backgrounds, appearing just to the left of the Nb peaks, can be seen in the upper left panel at high binding energy.}
\label{fig:xps}
\end{figure*}
Previous ion heating measurements in untreated traps made from Au and Nb have shown similar temperature dependence~\cite{Chiaverini2014}. When measured at room temperature, traps made from EP gold and treated with ion milling have shown drastic reductions in heating rate compared to before treatment~\cite{Hite2012,mcKay2014}. While gold does not readily oxidize with exposure to atmosphere, and furthermore has been shown to not gain oxygen after ESIM and air exposure~\cite{mcKay2014}, niobium forms a few-nanometer-thick oxide when exposed to air. X-ray photoelectron spectroscopy (XPS) on a niobium trap chip (performed in a separate, dedicated apparatus~\footnote{XPS parameters were as follows for the measurements presented here. The x-rays are from a monochromated Al-K$\alpha$ source with energy 1486.6~eV; the x-ray beam diameter is 200~$\mu$m. Charge compensation was performed for all scans, and the analyzer pass energy was approximately 60~eV.}) shows that milling produces a pure metallic surface that acquires a partial coverage of niobium pentoxide after a 30~min exposure to air, with a mixture of pairs of metallic and oxide peaks visible in the spectroscopy (See~\figRef{fig:xps}, top row). In contrast, similar XPS measurements on a gold trap chip show only pure metallic components in the region of the gold peaks before milling, after milling, and after re-exposure (See~\figRef{fig:xps}, bottom row). Though carbon and oxygen are present before milling and after re-exposure in this case, this observation is consistent with carbonaceous contaminants and not with a metallic oxide. Our exploration of these two materials in this study is motivated by their similar behavior prior to ion-milling treatment despite their difference in oxidation susceptibility.
The ESIM is carried out in a separate vacuum chamber. Inside the milling chamber, an ion sputtering gun (OCI Vacuum Microengineering~\footnote{Commercial products are identified in this paper for informational purposes only. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the products identified are necessarily the best available for the purpose.}) is mounted perpendicular to the trap surface, so that accelerated Ar$^+$ ions impact the trap chip at normal incidence. The parameters for the ion milling used in this work are $2 \unit{keV}$ ion beam energy, $5 \times 10^{-6} \unit{Torr}$ background partial pressure of Ar, and an ion flux density of $3 \times 10^{-2} \unit{(C/m^{2})/s}$. The ion flux density was determined by measuring the ion current through the trap electrodes. These parameters lead to a material removal rate of approximately 0.64(9)~nm/min as measured via profilometry over a step in the gold film between ion-milled and masked sections. From the expected 2~keV sputter yield~\cite{matsunami_1984} and measured Ar$^{+}$-ion flux density, we calculate an expected material removal rate of 0.61~nm/min, equal, within error, to the measured value. From a similar calculation for niobium, we expect the material removal rate to be 0.24~nm/min, but it was not measured independently. Each trap is treated for a variable amount of time before being exposed briefly to the ambient laboratory air and transferred to the main chamber. The trap is exposed to atmosphere for ${\sim}1 \unit{h}$ during the transfer.
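The expected removal rate is simply the sputter yield times the ion flux, divided by the atomic number density of the film. A short sketch of the estimate follows (in Python; the yield of 3.2 atoms/ion is an assumed value for 2~keV Ar$^{+}$ on Au, chosen to be consistent with Ref.~\cite{matsunami_1984} and to reproduce the quoted 0.61~nm/min):
\begin{verbatim}
import scipy.constants as sc

Y = 3.2                                    # assumed yield, atoms/ion
J = 3e-2 / sc.e                            # ion flux, ions/(m^2 s)
n_Au = 19.3e3 / (196.97 * sc.atomic_mass)  # Au number density, 1/m^3

rate = Y * J / n_Au * 1e9 * 60             # removal rate, nm/min (~0.61)
\end{verbatim}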
After the initial cleaning of the traps with acetone and isopropyl alcohol, no additional such cleaning was performed before each trap was subsequently inserted into the ion trap apparatus or the milling chamber. Initial heating rates of the axial vibration mode at a frequency $f$ of approximately 1.3~MHz were measured using an unmilled trap; the trap was then removed from the main experimental chamber and mounted in the milling chamber. Additional heating rate measurements were performed after transferring the trap back to the experimental chamber. This process of milling and heating-rate measurement was repeated, and subsequent milling treatments were performed with the trap being exposed to atmosphere during each transfer.
\section{Temperature Dependence Before and After Ion Milling}
After confirming that one round of ESIM treatment reduced heating rates at $295 \unit{K}$ for both gold and niobium traps, we performed subsequent treatment on the same traps to map out the change in heating rates with further milling. Concurrently, we measured the effect on the heating rate near $4 \unit{K}$. The results for two different gold traps (labeled A and B) are shown in \figRef{fig:increase}. In both cases, the heating rates plateau after ${\sim} 40$~min of milling. The plateau behavior appears after ${\sim} 80$~min of milling for niobium, not shown (trap~C; see \figRef{fig:beforeAndAfterMilling} for initial and plateau values). Perhaps surprisingly, the amount of time required to reach the plateau region corresponds to significant material removal: approximately 25~and 20~nm for gold and niobium, respectively. Since ESIM roughens the surface while redepositing sputtered material as it proceeds, however, the complete removal of hydrocarbon and oxide layers of 2~to 10~nm in thickness may require substantial additional milling time. For the traps of both materials, the room temperature heating rates at the plateau are lower than the heating rates of untreated traps by a factor of ${\sim} 10$. Interestingly, however, the heating rates near $4 \unit{K}$ are increased in gold traps. The time to reach a plateau was the same for both temperatures, which indicates that the mechanism responsible for the change was the same in both cases.
\begin{figure}[b t !]
\includegraphics[width = 0.97 \columnwidth]{fig3.pdf}
\caption{The heating-rate plateauing behavior after increasing amounts of \textit{ex situ} ion milling for two nominally identical gold traps (labeled A, depicted by the open symbols, and B, depicted by the closed symbols) with electrodes at 295~K and 4~K; here the heating rate is measured on the axial mode at 1.3~MHz. For each time step, each trap was exposed to air and transferred to and from the milling chamber. The milling time represents the total integrated time that the trap was milled. With the exception of duration, every milling step used nominally the same parameters: $5 \times 10^{-6} \unit{Torr}$ $\mathrm{Ar}$, an ion beam energy of $2 \unit{keV}$, and an ion flux density of $3 \times 10^{-2} \unit{(C/m^2)/s}$. The lines connecting data points are intended as a guide to the eye. Similar data (not shown) was acquired for a niobium trap (trap C), with a plateau time in that case of approximately 80~min.}
\label{fig:increase}
\end{figure}
To further investigate this change in temperature dependence, additional traps of each material were used to measure heating rates at various temperatures from $4$~to $295 \unit{K}$ before and after ESIM. The results are shown in \figRef{fig:beforeAndAfterMilling}. The pre-ESIM heating rates ([red] solid, circular points in the top and bottom panels) are fit to a power law~\cite{Labaziewicz2008_2,Bruzewicz2015},
\begin{equation}
\dot{\bar{n}}(T) = \dot{\bar{n}}_0 \left[1+\left(\frac{T}{T_{\mathrm P}}\right)^\beta \right],
\label{eq:em}
\end{equation}
\noindent where $\dot{\bar{n}}_0$ is the temperature-independent heating rate, $T_{\mathrm P}$ is the thermal activation temperature, $\beta$ is the high-temperature power law exponent, and $T$ is the temperature of the electrodes. After lowering the electrode temperature from ${\sim} 295 \unit{K}$ to ${\sim} 4 \unit{K}$, the heating rate is reduced by a factor of ${\sim} 100$, which is typical in our system for a variety of trap materials and fabrication methods~\cite{Chiaverini2014,Bruzewicz2015}. The scaling exponents and activation temperatures are the same within error for the gold and niobium traps, also consistent with previous measurements, e.g.~\cite{Bruzewicz2015}, where power-law scaling exponents in the range of~1.5 to~1.6 were measured.
\begin{figure}[t b !]
\includegraphics[width = 1.01 \columnwidth]{fig4_a.png}\\
\includegraphics[width = 1.01 \columnwidth]{fig4_b.png}
\caption{Comparison of temperature dependence of heating rates measured on the 1.3~MHz axial mode before and after \textit{ex situ} ion milling for both electroplated gold (top, milling time 60~min) and sputtered niobium (bottom, milling time 100~min). The solid (red) lines are fits of the round points to \eqnRef{eq:em}, and the dashed (blue) lines are fits of the triangular points to \eqnRef{eq:ah}. Key fit parameters for the top [bottom] graph: a pre-ESIM scaling exponent $\beta$ of 1.53(6) [1.48(7)], a pre-ESIM power-law thermal-activation temperature scale $T_{\mathrm P}$ of 9(2)~K [10(2)~K], and a post-ESIM Arrhenius activation temperature scale $T_0$ of 41(9)~K [63(4)~K]. The post-ESIM heating rate in the gold trap was also measured at a trap frequency of 660~kHz (not shown), yielding an Arrhenius fit with a temperature scale $T_0$ of 51(11)~K, equal within error to that determined from the 1.3~MHz heating rate data. Initial and ESIM plateau data for traps A and B (C) are also displayed in the top (bottom) figure to show trap-to-trap variability for each material. The right axes are translated to electric-field noise spectral density via \eqnRef{eq:HRvNoise}.}
\label{fig:beforeAndAfterMilling}
\end{figure}
However, after milling, very different behavior is observed (See \figRef{fig:beforeAndAfterMilling}, [blue] solid, triangular points in both panels). First, the functional form is changed; the heating rates appear to approach an asymptote at both high and low temperatures, with the positive curvatures at high temperature pre-ESIM becoming negative. Second, the values of the heating rates of gold and niobium now differ significantly; in the case of the gold trap, the heating rates are higher than the initial measurements for trap temperatures below $50 \unit{K}$, whereas for the niobium trap, the post-ESIM heating rate is lower over the whole temperature range. Moreover, the data after milling do not fit well to \eqnRef{eq:em} with a power law exponent in the range of all previous measurements (i.e. $1.5<\beta<4$)~\cite{Labaziewicz2008_2,Chiaverini2014,Bruzewicz2015}, but rather show an activated behavior characteristic of Arrhenius scaling,
\begin{equation}
\dot{\bar{n}}(T) = \dot{\bar{n}}_0 + \dot{\bar{n}}_T\, e^{-T_0/T}.
\label{eq:ah}
\end{equation}
\noindent Here $\dot{\bar{n}}_T$ is the high-temperature contribution to the heating rate and $T_0$ is the Arrhenius activation temperature.
As comparison of $\chi^{2}$ goodness-of-fit values cannot strictly and generally be used to determine which of multiple models best represents a given data set, we use the Bayesian information criterion (BIC)~\cite{schwarz_1978} for model comparison. The BIC is a score based on the likelihood function and a penalty for the number of parameters used; the latter component serves to avoid over-fitting and to promote model parsimony. The BIC is an increasing function of the error variance and of the number of model parameters. When comparing multiple models, the one with the lowest BIC is preferred, and the larger the difference between the preferred model and the others, the stronger the support for the lowest-BIC model (the posterior probability of the model given the data is proportional to $e^{-\textrm{BIC}/2}$); the difference in BIC can therefore be assessed absolutely, and any difference is positive evidence for the lowest-BIC model. Differences in BIC larger than approximately 6 are considered strong evidence, while differences larger than approximately 10 are considered very strong evidence~\cite{kass_and_raftery}.
In comparing the power-law and Arrhenius models (Eqs.~\ref{eq:em} and~\ref{eq:ah}), there is very strong evidence for power-law behavior in the pre-ESIM data for both materials. On the other hand, the post-ESIM data provides very strong evidence for Arrhenius behavior in niobium and positive evidence for Arrhenius behavior in gold (See Table~\ref{tab:model_comp}~\footnote{BIC Values are calculated using the Mathematica software package, ver. 11 (Wolfram Research, Inc.), via the error variance during nonlinear curve fitting.}). While the evidence for the Arrhenius behavior over power-law behavior in post-ESIM gold is not strong, we point out that the best-fit power-law exponent $\beta$ for these data is 0.36(14), significantly different from the pre-ESIM value and from all measured previously or expected theoretically~\cite{Brownnutt2015}.
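The model comparison described above can be reproduced with a short script that fits both models and scores them via the error variance; the sketch below (Python) uses placeholder data in place of the measured heating rates, and the Gaussian-error form of the BIC is an assumption of the sketch.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def power_law(T, n0, TP, beta):     # Eq. (1)
    return n0 * (1.0 + (T / TP) ** beta)

def arrhenius(T, n0, nT, T0):       # Eq. (2)
    return n0 + nT * np.exp(-T0 / T)

def bic(model, T, ndot, p0):
    """BIC from the Gaussian error variance plus a parameter penalty."""
    popt, _ = curve_fit(model, T, ndot, p0=p0, maxfev=20000)
    resid = ndot - model(T, *popt)
    n, k = len(T), len(popt)
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

# Placeholder data; substitute the measured (T, heating-rate) points.
T = np.array([4.0, 10.0, 30.0, 70.0, 150.0, 295.0])
ndot = np.array([5.0, 8.0, 40.0, 150.0, 300.0, 480.0])

dbic = (bic(power_law, T, ndot, p0=[5.0, 10.0, 1.5])
        - bic(arrhenius, T, ndot, p0=[5.0, 500.0, 50.0]))
print(f"BIC(power law) - BIC(Arrhenius) = {dbic:.1f}")  # > 0 favors Arrhenius
\end{verbatim}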
\begin{table}[b !]
\caption{Bayesian information criterion (BIC) values used for model comparison of temperature dependence in pre- and post-milled heating rate measurements (data from traps D and E in \figRef{fig:beforeAndAfterMilling}). The model with the lower BIC value is preferred (indicated in bold-face type for each condition). The BIC difference value $\Delta$BIC, the score of the lower-BIC model subtracted from that of the higher-BIC model, gives a measure of evidence for the lower-BIC model (probability $\propto e^{-\textrm{BIC}/2}$). In this case, there is very strong evidence for power-law behavior prior to ESIM and slight positive to very strong evidence, depending on material, for Arrhenius behavior after ESIM.}
\begin{ruledtabular}
\begin{tabular}{llccc}
\multicolumn{2}{c}{Condition} & \multicolumn{2}{c}{BIC values} & \multirow{2}{*}{$\Delta$BIC} \\
\multicolumn{2}{c}{and Material} & Power law & Arrhenius & \\
\hline
\multirow{2}{*}{Pre-ESIM} & Au & \textbf{59} & 134 & 75 \\
& Nb & \textbf{55} & 103 & 48 \\
\multirow{2}{*}{Post-ESIM} & Au & 56.8 & \textbf{55.5} & 1.3 \\
& Nb & 95 & \textbf{55} & 40 \\
\end{tabular}
\end{ruledtabular}
\label{tab:model_comp}
\end{table}
For the gold trap we find the best Arrhenius-model fit for $T_0$ is 41(9)~K (see \figRef{fig:beforeAndAfterMilling}, top), while for the niobium trap, $T_0$ is 63(4)~K (both measured at 1.3~MHz trap frequency). We have measured the heating rate in the same gold trap at a 660~kHz trap frequency as well, and in that case we also see Arrhenius behavior with $T_0=51(11)$~K. Detailed temperature dependence at other trap frequencies has not yet been measured in niobium traps after ESIM. However, the data from niobium traps F and G (presented below), which show distance and frequency dependence at room temperature post-ESIM, can be extrapolated to estimate the heating rate at 1.3~MHz and an ion-electrode distance of 50~$\mu$m. The extrapolated values are consistent with the measured room-temperature post-ESIM heating rates for niobium traps C and E (\figRef{fig:beforeAndAfterMilling}b). Moreover, while detailed temperature dependence was measured on only one trap of each material (traps D and E), data taken pre- and post-ESIM at room temperature and near 4~K using the three other traps (A, B, and C) are all consistent with the altered temperature dependence described above.
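A minimal sketch of this extrapolation, assuming the post-ESIM scalings $\dot{\bar{n}} \propto f^{-2.2} d^{-4.0}$ measured below hold across the relevant range (the reference rate here is a placeholder, not a measured value):
\begin{verbatim}
# Rescale a reference heating rate with the post-ESIM power laws
# ndot ~ f^(-2.2) d^(-4.0); the reference rate below is a placeholder.
def extrapolate(ndot_ref, f_ref, d_ref, f, d, a=2.2, g=4.0):
    return ndot_ref * (f / f_ref) ** (-a) * (d / d_ref) ** (-g)

# e.g. from (860 kHz, 64 um) to (1.3 MHz, 50 um):
ndot_est = extrapolate(100.0, 0.86, 64.0, 1.3, 50.0)
\end{verbatim}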
These observations are indicative of different mechanisms for anomalous ion heating before and after ESIM, i.e. for solvent-cleaned compared to milled surfaces. Moreover, the hydrocarbons that adsorb during air exposure after ESIM do not contribute to electric-field noise in the same manner as those present after solvent cleaning; even though the milling is performed \textit{ex situ} in this case, its effect is not nullified by re-adsorption of carbon-containing compounds from the atmosphere. Similarly, re-adsorption of oxygen and carbon in UHV conditions after ion milling has been previously seen to not increase ion heating rates~\cite{Daniilidis2014}. We note that Arrhenius behavior has been observed once before in a single trap~\cite{Labaziewicz2008_2}; in the measurements performed here with ESIM, the temperature-dependent behavior change was observed in all traps studied. Also, the existence of temperature dependence after ESIM in the experiments presented here suggests that they are not limited by technical noise.
Of the leading theoretical models proposed to explain anomalous ion heating, the power law scalings of the temperature dependence for the pre-ESIM measurements follow the lossy dielectric model \cite{kumph_NJP_2016} most closely. Noise, under this hypothesis, originates from the dissipative nature of any dielectric film covering the electrode metal; electric-field noise from this source is distinct from, but analogous to, the Johnson noise of a metal, though here it is based on thermally driven fluctuations in a polarizable material. The model predicts a linear scaling $(\beta = 1)$ of the heating rate with~$T$, while we measure $\beta \approx 1.5$ for both materials. This model also predicts the $1/d^{4}$ distance scaling (for ion-electrode distance $d$) measured in planar surface traps \cite{sedlacek2018,wunderlich2018}, and its $1/f^{2}$ scaling is consistent with widely measured heating-rate frequency scaling~\cite{Brownnutt2015} (c.f. also~\figRef{fig:freqdist}). We note that an extension of the lossy dielectric model to include temperature dependence of the dielectric constant and loss tangent may alter the temperature dependence to agree more closely with our measured scaling; this is plausible given that the loss tangents of many insulators decrease as temperature decreases.
\begin{figure}[t b !]
\includegraphics[width = 0.90 \columnwidth]{fig5_a.png}\\
\includegraphics[width = 0.93 \columnwidth]{fig5_b.png}
\caption{Frequency (top panel) and distance (bottom panel) scaling of ion heating rates in Nb traps at room temperature before and after \textit{ex situ} ion milling (ESIM). The round, solid (red) data points are from a previous measurement~\cite{sedlacek2018} and were taken without ESIM. Using two traps of the same design (labeled F and G) as in that work, data were taken after ESIM [open circles (black) and triangles (green)]. The ion-electrode distance was $64\unit{\mu m}$ for the measurement as a function of frequency (top), and the trap frequency~$f$ was $860 \unit{kHz}$ for the measurement as a function of distance. The lines are power-law fits with exponents as depicted in the legend. The traps used for these measurements are of a different electrode design than those used for the temperature-dependent measurements in this work (see~\cite{sedlacek2018} for details), though they were made in the same process run on the same wafers.}
\label{fig:freqdist}
\end{figure}
Turning now to the post-ESIM measurements, Arrhenius behavior of the temperature scaling is predicted by both the fluctuating dipole (FD) model~\cite{PhysRevA.84.023412,PhysRevA.87.023421} and the adatom diffusion (AD) model~\cite{Brownnutt2015}. The FD model is based on phonon-induced dipole-moment fluctuation of adatoms, and its predictions include heating rate scalings of $1/f$ with frequency (i.e. a $1/f^{0}$ scaling, or frequency independence, of the electric-field noise power spectral density $S_{E}$) in the range relevant to ion trap frequencies (${\sim}1$~MHz), and of $1/d^{4}$ with ion-electrode distance. The Arrhenius-type behavior is predicted at temperatures below an effective temperature $T_{\textrm{FD}}$ set by vibrational modes of adatoms bound to surfaces, estimated to be approximately 50~to 100~K. Above this temperature, the noise is expected to fall as ${\sim}1/T$~\cite{PhysRevA.84.023412}, or to grow as a power law in temperature with an exponent of approximately~$2.5$~\cite{Brownnutt2015}. The AD model, which is based on field-fluctuations due to the dipole moments of adatoms moving along the surface, predicts Arrhenius temperature scaling over the whole temperature range, with frequency and distance heating-rate scalings of $1/f^{3}$ and $1/d^{6}$, respectively. An extension to the AD model (EAD) which considers adatoms diffusing over patches of the surface, where they take on varying dipole moments such that spatial-temporal correlations appear in the noise~\cite{Brownnutt2015}, also predicts Arrhenius temperature scaling, $1/f^{2.5}$ heating-rate frequency scaling, and $1/d^{6}$ distance scaling for motional modes parallel to the planar-trap surface, as in the case of the axial mode measured here. See Table~\ref{tab:theories} for a summary of the model predictions and the scalings observed in this work.
We note that the Arrhenius scaling with temperature predicted by these two models differs at low temperature. While the electric-field noise is expected to be exponentially suppressed for temperatures below $T_{\textrm{FD}}$ under the FD model, the AD and EAD models predict a temperature-independent level of noise at the lowest temperatures, due to diffusion driven by quantum tunneling~\cite{Brownnutt2015}. The post-ESIM data presented here also shows an approach to a temperature-independent level at low temperature.
\section{Trap-Frequency and Ion-Electrode Distance Scalings}
In light of the altered temperature dependence after ESIM, we measured frequency and distance scaling after ESIM using niobium traps at 295~K, both to determine if these scalings are also affected, and to constrain the FD, AD, and EAD models, as their predicted frequency and distance scalings are different. While the frequency dependence was not seen to change after milling in previous work with gold~\cite{Hite2012}, niobium has not been explored, and no measurements of distance scaling after ESIM have been reported previously. Variable-height linear traps~\cite{sedlacek2018} made in the same sputtered-niobium process were used for these post-ESIM measurements; since the multiple zones of this trap design are spread over a region of a few square millimeters around the chip center, these traps were milled for 120~min to ensure every site was milled to the plateau (cf.~\figRef{fig:increase} and surrounding discussion). Results are shown in \figRef{fig:freqdist} where they are plotted with the measurements from~\cite{sedlacek2018}, performed using a niobium trap chip that had not undergone ESIM. Unlike the temperature scaling, neither the $1/f^{{\sim} 2.4}$ frequency scaling nor the $1/d^{{\sim} 4}$ distance scaling measured before ESIM is significantly changed after ESIM.
Thus, while the temperature scaling seen here is supportive of the FD, AD, and EAD models, we see discrepancies with each of them when taking all the ESIM data together (See Table~\ref{tab:theories}). The FD model predicts the observed distance scaling, but does not fit the frequency dependence well---the current theory requires unrealistically heavy or loosely bound adsorbates~\cite{PhysRevA.87.023421} to bring the frequency scaling into the observed range for standard ion trap parameters; in this range, however, the temperature scaling matches well (Arrhenius with a high-$T$ asymptote). The AD and EAD models both make accurate predictions for the frequency scaling behavior, the EAD slightly more so; the distance scaling, however, is not predicted well. In the latter case, where patch geometry is relevant, a more detailed incorporation of the adatom-patch dynamics could potentially lead to different distance dependence. We hope that the material-dependent Arrhenius scaling and additional constraints suggested by these observations will motivate avenues for further understanding of the relevant mechanisms through modification of these, or other, microscopic theories.
\begin{table}[b !]
\caption{Predicted and observed scalings (measured in this work) of ion heating rates for vibrational modes parallel to the surface-electrode trap surface. Electric-field noise scaling is the same as heating-rate scaling except in the case of frequency, where $1$ should be added to the scaling exponent (cf. \eqnRef{eq:HRvNoise}). ($\dagger$) The temperature dependence of the noise in the lossy dielectric model may be strengthened by additional temperature dependence of the material loss tangent, typically an increasing function of temperature in this range. (*) The temperature dependence for the fluctuating dipole model is predicted to be Arrhenius-like up to an effective temperature scale of a few tens of kelvin; above this, the noise is expected to scale either as $1/T$ or $T^{{\sim}2.5}$.}
\begin{ruledtabular}
\begin{tabular}{llll}
\multirow{2}{*}{Model} & \multicolumn{3}{c}{Predicted $\dot{\bar{n}}$ Scalings} \\
& Temperature & Freq. & Distance \\
\hline
Lossy dielectric~\cite{kumph_NJP_2016} & $T$ ($\dagger$) & $f^{-2}$ & $d^{-4}$ \\
Fluct. dipole~\cite{PhysRevA.84.023412,PhysRevA.87.023421} & $e^{-T_0/T}$ (*)& $f^{-1}$ & $d^{-4}$ \\
Adatom diffusion~\cite{Brownnutt2015} & $e^{-T_0/T}$ & $f^{-3}$ & $d^{-6}$ \\
Extension to diffusion~\cite{Brownnutt2015} & $e^{-T_0/T}$ & $f^{-2.5}$ & $d^{-6}$ \\
\\
Condition & \multicolumn{3}{c}{Observed $\dot{\bar{n}}$ Scalings}\\
\hline
Pre-ESIM & $T^{1.51(4)}$ & $f^{-2.4(2)}$ & $d^{-4.0(2)}$ \\
Post-ESIM & $e^{-T_0/T}$ & $f^{-2.2(2)}$ & $d^{-4.0(2)}$ \\
& $T_{0}^{\rm Au}=45(7)$~K & & \\
& $T_{0}^{\rm Nb}=63(4)$~K & & \\
\end{tabular}
\end{ruledtabular}
\label{tab:theories}
\end{table}
\section{Discussion}
The temperature scaling results suggest particular methodologies for mitigation of ion heating rates. In particular, for traps operated at room temperature, ESIM provides approximately a factor of ten reduction in heating rates for gold or niobium; a milling step prior to chamber installation should be performed in this case. For traps operated at low temperatures, ESIM seems useful for niobium, but counterproductive for gold; in the latter case this step should be avoided. One caveat to these general comments is that high-temperature system bakes, which may be required to reach UHV after ESIM and chip installation in non-cryogenic systems, were not performed in this work. Such baking may potentially reduce or alter the effect of ESIM.
Untreated traps have previously been shown to lead to material-independent anomalous heating behavior~\cite{Chiaverini2014}, suggesting that similar contaminants, from processing or solvent cleaning and air exposure, are the dominant sources of electric-field noise across materials. The emergence of material dependence after ESIM, however, gives hope that the exploration of different materials will lead to more basic understanding of the mechanism behind anomalous heating of treated surfaces, since it provides a new experimental variable. In particular, the observed increase in electric-field noise in post-ESIM gold surfaces over untreated surfaces seen here at low temperatures suggests that the temperature-independent component of the underlying noise mechanism in treated gold is not only larger in the megahertz regime than that in niobium, but also larger than the material-independent noise mechanism due to solvent or other hydrocarbon residue seen on as-fabricated samples. Linking this observation to a unique property of gold could be accomplished by comparison of several surface materials after ESIM. Potentially more practically useful in the near term, material dependence suggests further reduction of heating rates through electrode material or morphological choice in combination with ESIM. Both avenues make clear the importance of a re-investigation of electric-field noise as a function of electrode material, with likely impacts beyond trapped ions, touching on many areas where surface-generated noise limits performance. Moreover, our observation of drastically different behavior of electric-field noise before and after surface ion milling reiterates the utility of individual ions as sensors for furthering our understanding of surface phenomena.
\textit{Note added---}During the review process, we became aware of related measurements of ion heating as a function of temperature, in this case above room temperature for unmilled electrodes~\cite{noel2018_arxiv}. We can use a subset of the data analyzed in the present work for comparison to the thermally activated fluctuator (TAF) model. This model is suggested by the authors of~\cite{noel2018_arxiv} to produce frequency-scaling power-law exponents in agreement with their high trap-temperature electric-field noise measurements. Our measurements of ion heating rates as a function of temperature in unmilled Nb traps (the data presented in Fig.~\ref{fig:beforeAndAfterMilling}) can be used to extract the expected frequency dependence. This can be compared to the frequency-scaling exponents of the ion heating rate, also measured in Nb traps at 295~K and at 4~K (Fig.~\ref{fig:freqdist} and~\cite{sedlacek2018}), namely 2.4(2) and 2.3(2), respectively. Following~\cite{noel2018_arxiv}, we calculate heating-rate exponents, as predicted by the TAF model, of 1.95 at 295~K and 2.03 at 4~K. The measured exponents differ significantly from those predicted by the model for our data, taken at room temperature and below, but more precise measurements of the frequency scaling over the entire temperature range of interest would be required to constrain the model further. We note however that the temperature dependence is not predicted independently for the TAF model, and so it is difficult to completely validate it with ion heating-rate data alone. One would ideally require a separate measure of the fluctuator energy-scale distribution from which the temperature dependence can be predicted~\cite{PhysRevLett.43.646}.
\section{Acknowledgments}
We thank Vladimir Bolkhovsky for niobium trap fabrication, George Fitch for layout assistance, and Peter Murphy, Chris Thoummaraj, and Karen Magoon for assistance with chip packaging. We also thank Libby Shaw for help with interpretation of XPS data. We additionally thank K.~McKay and J.~Wu for helpful comments on the manuscript. This work made use of the MRSEC Shared Experimental Facilities at MIT, supported by the National Science Foundation under award number DMR-14-19807. Electroplated Au traps were fabricated in the Boulder Microfabrication Facility at NIST. This work was sponsored by the Assistant Secretary of Defense for Research and Engineering under Air Force contract number FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government. This paper is a partial contribution of NIST and is not subject to US copyright.
\section{Introduction}
\label{introduction}
Colloids and molecules with anisotropic shapes and interactions play a significant role in
condensed matter physics, especially in the design of self-assembled structures~\cite{Roh05, Klapp16}.
Particularly, Janus colloids are characterized as particles composed of at least two physically or chemically
distinct surfaces. They can have different shapes, such as rods, spheres and dumbbells.
These systems have a large range of applications including medicine, self-driven molecules,
catalysis, photonic crystals, stable emulsions, biomolecules and self-healing materials\cite{Cas89, Talapin10, ElL11, TuP13, WaM08,
WaM13, Zhang15, Bic14, Ao15}.
Among the distinct Janus particle shapes,
Janus dumbbells\cite{Yin01, SiC14, Lu02, YoL12} are colloids formed by two spheres
with distinct characteristics,
linked together with a separation that varies from an almost total overlap to one or two monomer diameters.
Due to the resemblance between Janus particles and competing-interaction
systems, Janus dumbbells behave
as surfactants in water-based emulsions due to their amphiphilic
properties~\cite{SoK11, TaI05, Liu13}.
Self-assembled lamellar or micellar phases were observed in these systems due to the competition between attractive
and repulsive forces \cite{Li12, White10, Munao13, Munao14, Munao15b, Avvisati14}.
Recent studies reported the production of silver-silicon (Ag-Si)\cite{SiC14} and silica-polystyrene (SiO$_2$-PS)\cite{Liu09}
Janus dimers. Silica and silicon are classified as anomalous fluids, and therefore have a set of properties that
diverge from those observed for regular materials.
For most fluids,
the diffusion coefficient decreases when the pressure (or density)
increases. However, materials such as water~\cite{Ne02a}, silicon~\cite{Mo05} and silica~\cite{Sa03}
show diffusion anomaly, characterized by a
maximum in the diffusion coefficient at constant temperature. Besides the diffusion (or dynamical) anomaly,
water, silicon, silica and other fluids, the so-called anomalous fluids, also have
other classes of anomalies, such as structural and thermodynamic anomalies. Particularly,
the density anomaly is characterized by the increase of density with temperature
at fixed pressure.
Distinct computational models have been proposed to study the anomalous behavior of fluids.
Among these models, effective two-length-scale (TLS) core-softened shoulder potentials
are an interesting tool to investigate systems with water-like anomalies. Particularly, the
model proposed by Oliveira~
\textit{et. al.}\cite{Ol06a, Ol06b} reproduces qualitatively the diffusion, structural and thermodynamic anomalies,
and has been broadly used to study anomalous systems.
This effective approach to describe anomalous fluids was used to study monomeric and dimeric systems of anomalous
particles~\cite{Ol06a, Ol06b, Ol10, Ga14, Munao16}. The TLS potential was used in our previous works to study the
behavior of Janus dumbbells composed of one anomalous and one non-anomalous monomer
in bulk~\cite{BoK15c, Bordin16a}. Despite the presence of the non-anomalous monomer, the diffusion anomaly was preserved.
Confinement has been proposed as an
approach to tune the self-assembled morphologies. By controlling the confinement
intensity it is possible to create micelles with distinct shapes.
Computational and experimental studies have already explored the confinement
effects on the self-assembly of polyhedral nanoparticles~\cite{Khad16}, patchy spherical colloids~\cite{Iwa16},
asymmetric and symmetric dumbbells~\cite{Lee09, Muang14},
surfactants and polymers~\cite{Kim15, Ro12, Ro11}.
As well, confinement affects the diffusivity of spherical Janus swimmers~\cite{Ao15}.
In fact, confinement strongly affects the behavior of fluids in general. For the case of
anomalous fluids~\cite{KoB15}, new anomalies can be observed due to confinement~\cite{BoK15a},
and even a superdiffusive regime can be induced~\cite{BoK14c}.
The anomalous region in the pressure $\times$ temperature ($PT$) phase diagram
is usually shifted by confinement. This shift can be to higher or lower temperatures,
depending on the nature of the fluid-wall interaction~\cite{Krott14}.
Therefore, the question that arises is how the confinement will affect not only the
self-assembled structures, but also the dynamical and thermodynamical behavior
of the anomalous/non-anomalous Janus dumbbell
system. In this way, we perform intensive Molecular Dynamics (MD)
simulations of Janus nanoparticles composed of anomalous and non-anomalous monomers
confined between two flat plates.
In addition to the myriad of structures tuned by the confinement, we show
how the confinement and the TLS potential lead the system to have not only the diffusion anomaly, but also
the density anomaly, which is not observed in bulk.
The paper is organized as follows: first we introduce the model and describe the methods
and simulation details; next the results and discussion are given; and
then we present our conclusions.
\section{The Model and the Simulation details}
\label{Model}
In this paper all physical quantities are computed in the standard Lennard Jones (LJ) units\cite{AllenTild},
\begin{equation}
\label{red1}
r^*\equiv \frac{r}{\sigma}\;,\quad \rho^{*}\equiv \rho \sigma^{3}\;, \quad
\mbox{and}\quad t^* \equiv t\left(\frac{\epsilon}{m\sigma^2}\right)^{1/2}\;,
\end{equation}
\noindent for distance, density of particles and time, respectively, and
\begin{equation}
\label{rad2}
p^*\equiv \frac{p \sigma^{3}}{\epsilon} \quad \mbox{and}\quad
T^{*}\equiv \frac{k_{B}T}{\epsilon}
\end{equation}
\noindent for the pressure and temperature, respectively, where $\sigma$, $\epsilon$ and $m$
are the distance, energy and mass parameters, respectively.
Since all physical quantities are defined in reduced LJ units,
the $^*$ is omitted, in order to simplify the discussion.
The systems have $N = 1000$ dimeric particles, totaling $2000$ monomers, confined between two smooth and parallel
plates. The Janus dumbbell particles were modeled using two spherical core-softened particles, each one with mass $m$
and effective diameter $\sigma$, rigidly linked at a distance $\lambda$. The dimers are formed by
monomers of type A and type B.
The particles of type A present anomalous behavior and their interaction is given by a two-length-scale potential,
the potential AA, defined as~\cite{Ol06a, Ol06b}
\begin{equation}
\frac{U^{AA}(r_{ij})}{\varepsilon} = 4\left[ \left(\frac{\sigma}{r_{ij}}\right)^{12} -
\left(\frac{\sigma}{r_{ij}}\right)^6 \right] +
u_0 \exp\left[-\frac{1}{c_0^2}\left(\frac{r_{ij}-r_0}{\sigma}\right)^2\right]\;,
\label{AlanEq}
\end{equation}
\noindent where $r_{ij} = |\vec r_i - \vec r_j|$ is the distance between two A particles $i$ and $j$. The first term of the potential
is a standard 12-6 LJ potential~\cite{AllenTild} and the second one is a Gaussian shoulder centered at $r_0$,
with depth $u_0$ and width $c_0$. The parameters used in this work are $u_0 = 5.0$, $c_0 = 1.0$ and $r_0/\sigma = 0.7$.
Both systems, monomeric and dimeric, modeled by this potential present density, diffusion and thermodynamic anomalies,
as observed in water, silica and other anomalous fluids~\cite{Ol06a, Ol06b, Ol10, Kell67,Angell76}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=6cm]{fig1a.eps}
\end{center}
\caption{Interaction potential between particles of type A (dot-dashed blue line),
between particles of type A and B (solid magenta line) and between particles of type B (dashed red line).
The interaction between dimers and confining walls is given by the projection of the potential AB
in the $z$-direction. Inset: Janus dumbbell formed by A-B monomers.}
\label{fig1}
\end{figure}
The interaction between particles of type B, the potential BB, is given by a standard 12-6 LJ potential, like the first term of
Eq.~\ref{AlanEq}, cut and shifted at the cutoff radius $r_c$,
\begin{equation}
\label{LJCS}
U^{\rm{CSLJ}}(r_{ij}) = \left\{ \begin{array}{ll}
U_{{\rm {LJ}}}(r_{ij}) - U_{{\rm{LJ}}}(r_c)\;, \qquad r_{ij} \le r_c\;, \\
0\;, \qquad \qquad \qquad \qquad \quad r_{ij} > r_c\;.
\end{array} \right.
\end{equation}
The BB potential has a cutoff radius of $r_c = 2.5$. Meanwhile, the interaction between A and B particles
is given by the Weeks-Chandler-Andersen (WCA) potential, defined by equation~\ref{LJCS}
with cutoff $r_c = 2^{1/6}$. The
interactions between dimers and walls are given by the projection of the WCA potential in the
$z$-direction. The potentials are illustrated in Fig.~\ref{fig1}.
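For concreteness, the three pair potentials can be evaluated directly from Eqs.~\ref{AlanEq} and~\ref{LJCS} with the parameters quoted above; the following sketch (Python) reproduces the curves of Fig.~\ref{fig1}.
\begin{verbatim}
import numpy as np

def lj(r):                       # standard 12-6 LJ, reduced units
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def u_aa(r, u0=5.0, c0=1.0, r0=0.7):
    """Two-length-scale potential of Eq. (3) for A-A pairs."""
    return lj(r) + u0 * np.exp(-((r - r0) / c0) ** 2)

def u_bb(r, rc=2.5):
    """Cut-and-shifted LJ potential of Eq. (4) for B-B pairs."""
    return np.where(r <= rc, lj(r) - lj(rc), 0.0)

def u_ab(r):
    """WCA potential (Eq. (4) with rc = 2^(1/6)) for A-B pairs."""
    return u_bb(r, rc=2.0 ** (1.0 / 6.0))

r = np.linspace(0.8, 3.0, 400)
# u_aa(r), u_ab(r) and u_bb(r) reproduce the three curves of Fig. 1
\end{verbatim}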
The simulations were performed in the canonical ensemble using the
ESPResSo package~\cite{espresso1, espresso2}.
The number density, defined as $\rho = N/V$, where $V=L^2\times L_z$ is the volume of the
simulation box, was varied from $\rho = 0.05$ to $\rho = 0.50$.
In all simulations, $L_z=4.0$ and $L$ was obtained from $L = [N/(\rho L_z)]^{1/2}$.
Standard periodic boundary conditions are applied in $x$ and $y$-directions.
The temperature was simulated in the interval between $T = 0.05$ and $T = 0.60$.
The system temperature was fixed using the Langevin thermostat
with $\gamma = 1.0$,
and the equations of motion for the fluid particles were integrated
using the velocity Verlet algorithm, with a time step $\delta t = 0.01$.
We performed $1\times10^6$ steps to equilibrate the system.
These steps are then followed by $5\times10^6$ steps for the results
production stage.
To ensure that the system was equilibrated, the pressure, kinetic energy
and potential energy were analyzed as functions of time, as well as several
snapshots at distinct simulation times.
Since confined systems can be sensitive to the number of particles
in the simulation, at some state points we carried out simulations with
5000 and 10000 particles, and
essentially the same results were observed. As well,
we ran some state points with a production time of $1\times10^8$ steps
to test whether the system was well equilibrated,
and the same results were obtained.
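For illustration, one Langevin-thermostatted integration step can be sketched as follows; we show a standard BAOAB splitting in reduced units with $m = 1$, as a minimal stand-in for the thermostatted velocity Verlet scheme used by the simulation package, not the ESPResSo implementation itself.
\begin{verbatim}
import numpy as np

def baoab_step(x, v, force, dt=0.01, gamma=1.0, kT=0.3, rng=np.random):
    """One BAOAB Langevin step in reduced units (m = 1)."""
    v = v + 0.5 * dt * force(x)                    # B: half kick
    x = x + 0.5 * dt * v                           # A: half drift
    c = np.exp(-gamma * dt)                        # O: exact OU velocity update
    v = c * v + np.sqrt((1.0 - c * c) * kT) * rng.standard_normal(v.shape)
    x = x + 0.5 * dt * v                           # A: half drift
    v = v + 0.5 * dt * force(x)                    # B: half kick
    return x, v
\end{verbatim}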
The system dynamics was analyzed using the lateral mean square displacement (LMSD) as function of time, given by
\begin{equation}
\label{r2}
\langle [\vec r_{\parallel\rm cm}(t) - \vec r_{\parallel\rm cm}(t_0)]^2 \rangle =\langle \Delta \vec r_{\parallel\rm cm}(t)^2 \rangle\;,
\end{equation}
\noindent where $\vec r_{\parallel\rm cm}(t_0) = (x_{\rm cm}(t_0), y_{\rm cm}(t_0))$
and $\vec r_{\parallel\rm cm}(t) = (x_{\rm cm}(t), y_{\rm cm}(t))$
denote the parallel coordinates of the nanoparticle center of mass (cm)
at a time $t_0$ and at a later time $t$, respectively. The LMSD is related to the lateral
diffusion coefficient, $D_{\parallel}$, by
\begin{equation}
D_{\parallel} = \lim_{t \rightarrow \infty} \frac{\langle \Delta \vec r_{\parallel\rm cm}(t)^2 \rangle}{4t}\;.
\end{equation}
In systems confined by parallel plates, the pressure is separated into parallel and perpendicular components.
The parallel pressure, $P_{\parallel}$, was obtained from
$$
P_{\parallel} = 0.5(\sigma_{xx} + \sigma_{yy})\;,
$$
\noindent where $\sigma_{xx}$ and $\sigma_{yy}$ are the normal stresses in the $x$ and $y$ directions.
The system structure was analyzed with the lateral radial distribution function (LRDF) $g_{||}(r)$, defined as~\cite{Ku05b}
\begin{equation}
\label{gr_lateral}
g_{||}(r) \equiv \frac{1}{\rho ^2V}
\sum_{i\neq j} \delta (r-r_{ij}) \left [ \theta\left( \left|z_i-z_j\right| + \frac{\delta z}{2}
\right) - \theta\left(\left|z_i-z_j\right|-\frac{\delta z}{2}\right) \right],
\end{equation}
\noindent where $\delta(x)$ is the Dirac $\delta$ function
and the Heaviside function $\theta (x)$ restricts the sum to particle pairs within the same
slab of thickness $\delta z = \sigma$. The lateral radial
distribution function is proportional to the probability of finding a particle
at a distance $r$ from a reference particle inside the slab of thickness
$\delta z$.
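Numerically, Eq.~\ref{gr_lateral} amounts to binning in-plane pair distances restricted to same-slab pairs; the sketch below assumes periodic boundaries in $x$ and $y$ only, and its area-density normalization is one common convention rather than a detail specified in the text.
\begin{verbatim}
import numpy as np

def lateral_rdf(pos, L, dz=1.0, nbins=100):
    """g_par(r) of Eq. (9): in-plane pair distances restricted to
    pairs lying in the same slab of thickness dz."""
    edges = np.linspace(0.0, L / 2.0, nbins + 1)
    hist = np.zeros(nbins)
    N = len(pos)
    for i in range(N - 1):
        d = pos[i + 1:] - pos[i]
        d[:, :2] -= L * np.rint(d[:, :2] / L)      # PBC in x and y only
        mask = np.abs(d[:, 2]) < dz / 2.0          # same-slab pairs
        r = np.hypot(d[mask, 0], d[mask, 1])
        hist += np.histogram(r, bins=edges)[0]
    rho_area = N / L ** 2                          # lateral number density
    shell = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    g = 2.0 * hist / (N * rho_area * shell)        # factor 2: unordered pairs
    return 0.5 * (edges[1:] + edges[:-1]), g
\end{verbatim}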
In order to check whether the Janus system shows the density anomaly we evaluate the
temperature of maximum density (TMD). Using thermodynamic relations, the
TMD can be characterized by the minimum of the pressure versus
temperature along isochores,
\begin{equation}
\left(\frac{\partial P_{||}}{\partial T}\right)_{\rho} = 0\;.
\label{TMD}
\end{equation}
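Numerically, Eq.~\ref{TMD} amounts to locating an interior minimum of $P_{\parallel}(T)$ along each simulated isochore; a minimal sketch is:
\begin{verbatim}
import numpy as np

def tmd_point(T, P):
    """TMD on one isochore: interior minimum of P_par(T), or None
    if the minimum lies at the edge (no anomaly detected)."""
    i = int(np.argmin(P))
    return T[i] if 0 < i < len(T) - 1 else None
\end{verbatim}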
\noindent The fluid, micellar and aggregated regions in the $P_{\parallel} T$ phase diagrams were defined
by analyzing the snapshots of the systems, the lateral diffusion coefficient, $D_{\parallel}$, and
the lateral radial distribution function, $g_{\parallel}(r)$. To assign nanoparticles
to the same aggregate we defined a minimal distance equal to $r_{min} = 1.2$: if
the distance between one monomer of one dimer and a monomer of a distinct dimer is smaller
than $r_{min}$, then both dimers belong to the same cluster.
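This distance criterion maps naturally onto a union-find clustering over monomer pairs; the sketch below is a minimal implementation with placeholder array names, assuming periodicity in $x$ and $y$ only.
\begin{verbatim}
import numpy as np

def find_clusters(monomers, dimer_of, L, r_min=1.2):
    """Label each dimer with a cluster id: two dimers join the same
    cluster if any pair of their monomers is closer than r_min.
    monomers[i] is the position of monomer i, dimer_of[i] its dimer."""
    parent = list(range(int(dimer_of.max()) + 1))

    def root(a):                       # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    N = len(monomers)
    for i in range(N - 1):
        d = monomers[i + 1:] - monomers[i]
        d[:, :2] -= L * np.rint(d[:, :2] / L)      # PBC in x and y
        close = np.where(np.sum(d ** 2, axis=1) < r_min ** 2)[0] + i + 1
        for j in close:
            ra, rb = root(int(dimer_of[i])), root(int(dimer_of[j]))
            if ra != rb:
                parent[ra] = rb
    return np.array([root(k) for k in range(len(parent))])
# <n_c>: counts = np.bincount(labels); counts[counts > 0].mean()
\end{verbatim}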
\section{Results and Discussion}
\label{Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{fig2.eps}
\end{center}
\caption{Aggregates observed in our simulations: (i) trimeric cluster with $n_c = 3 $, (ii) tetrahedral cluster
with $n_c = 4 $, (iii) hexahedral cluster with $n_c = 6 $, (iv) spherical cluster with $n_c = 8 $, (v) spherical cluster with $n_c =19 $,
(vi) elongated cluster with $n_c = 10$, (vii) elongated cluster with $n_c = 20$, (viii)
disoriented rippled lamellae and (ix) oriented rippled lamellae. Blue particles are the A monomers and red the B monomers.
}
\label{fig2}
\end{figure}
We start our discussion showing the distinct micelles observed for the confined system. Depending on
the temperature and density, the Janus dimers aggregate in clusters with distinct numbers of nanoparticles
per cluster, $n_c$. At lower densities, trimeric clusters, with $n_c = 3$
nanoparticles in each aggregate, and tetrahedral clusters, with $n_c = 4$, are more common.
Increasing the density, hexahedral clusters ($n_c = 6$) are observed, as well as spherical and elongated
micelles with distinct $n_c$. The shape of each aggregate is shown in figure~\ref{fig2}. For densities
up to a threshold, a coexistence of two or three of these micelles was observed. Above the threshold,
one single rippled lamellar cluster containing all the nanoparticles is observed. This lamellar phase can have a
disoriented structure, as shown in figure~\ref{fig2}(viii), or an oriented structure, figure~\ref{fig2}(ix).
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{fig3.eps}
\end{center}
\caption{Mean number of dimers in each cluster, $<n_c>$, as a function of density for $T=0.10$. Inset: zoom
of the region $\rho < 0.30$. Error bars smaller than the data points are not shown.}
\label{fig3}
\end{figure}
The region of micelle coexistence and the threshold to the lamellar phase depend on the temperature, so let us take the example of $T=0.10$.
In figure~\ref{fig3}, we show the mean number of dimers in each cluster, $<n_c>$, as a function of the system density.
We can establish a relation between $<n_c>$ and the type of aggregates. For $0.05 < \rho \leq 0.10$,
$<n_c>$ varies between 4 and 5. Analyzing the system snapshots, for these densities we can see a coexistence
of trimeric, tetrahedral and hexahedral clusters. As $\rho$ increases, more hexahedral aggregates
and fewer trimeric aggregates are observed in the solution. This region, with the coexistence of trimeric,
tetrahedral and hexahedral clusters, was labeled region I in figure~\ref{fig3}. For the densities
inside region II we observe a mixture of tetrahedral and hexahedral clusters, in region III
hexahedral and small spherical clusters, and in region IV a coexistence of small spherical and elongated clusters.
All these aggregates were observed in the bulk simulation, with a similar $<n_c>$~\cite{BoK15c}.
However, the confinement frustrates the self-assembly as the density increases. Therefore, while at lower densities
the system aggregates into the same micelles observed in the bulk case, at higher densities new kinds
of self-assembled aggregates should be induced by the confinement. In this way, the spherical and elongated
micelles can have a higher $<n_c>$ compared to the bulk case. Basically, the limited space imposed by the
confinement leads two or more smaller micelles to merge into a larger cluster. This region, with spherical
and elongated micelles formed by more nanoparticles than observed in bulk, was labeled region V.
The size of the micelles grows continually up to $\rho = 0.35$ for $T=0.10$, with approximately 10 aggregates of 100
nanoparticles each, as shown in figure~\ref{fig3}; above this threshold all the particles aggregate
into a lamellar structure, region VI-A of figure~\ref{fig3}.
For the temperature $T=0.10$ and $\rho > 0.40$, the rippled lamellae are disoriented, without a
preferred direction, as shown in figure~\ref{fig2}(viii).
However, for some values of temperature and density, as $T = 0.25$ and $\rho = 0.375$ to $\rho = 0.425$, the lamellar phase
has a directional ordering, as shown in figure~\ref{fig2}(ix). The region where we observe the oriented
rippled lamellae was labeled VI-B. These lamellar structures, usually observed in amphiphilic molecules
as well as in Janus particles~\cite{Khan95, Beltran12, Preisler16},
were not observed in bulk for our model of Janus nanoparticles. Therefore, this new structure is induced by
the geometrical confinement. While in bulk the particles do not have any geometrical restriction, remaining in
the micellar phase for densities up to $\rho = 0.50$~\cite{BoK15c}, the confinement,
associated with a high density, leads the dumbbells to aggregate into the lamellar cluster.
The time evolution of the distinct lamellar phases is shown in figure~\ref{fig4}. For lower temperatures,
the entropic contribution to the free energy is not sufficient to change the initial configuration and the
lamellar structure does not change with time, remaining disoriented, as shown in figure~\ref{fig4}(A). However, for higher temperatures,
the initially disordered configuration changes to the oriented rippled structure, as we can see in figure~\ref{fig4}(B).
This lamellar phase with a preferred orientation is characteristic
of dumbbell systems with one or two monomers that interact through a two-length-scale potential~\cite{Ol10, Bordin16a},
but at lower temperatures it is frustrated by the confinement.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{fig4.eps}
\end{center}
\caption{Lamellar phase for the density $\rho = 0.40$ at temperatures (A) $T = 0.10$ and (B) $T = 0.25$ at three
distinct times: end of equilibration time ($t = 0$), half of the production time ($t = 25000$) and end of simulation
($t=50000$). }
\label{fig4}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{fig5.eps}
\end{center}
\caption{$P_{||}\times T$ phase diagram for the confined Janus dimer system. The distinct micellar and lamellar regions
discussed in the previous figures are indicated. The dashed gray lines are the isochores. The density
anomaly region is delimited by the TMD line, and the diffusion anomaly region by the lines of
diffusion maxima and minima.}
\label{fig5}
\end{figure}
In figure~\ref{fig5} we show the qualitative $P_{||}\times T$ phase diagram for the confined system. The distinct micellar and lamellar
regions are indicated. As discussed previously, new self-assembled structures are induced by the confinement. However,
the aggregation region does not shift to higher or lower temperatures.
Curiously, at the temperatures where we observe the oriented lamellar structure, for densities above $\rho = 0.45$
the system does not have a well-defined micellar structure, and an amorphous phase is observed. This phase is a fluid
with small diffusion. Taking the isotherm $T = 0.25$ as reference, the pressure drops when
$\rho > 0.45$ and the system shows a reentrant fluid region.
The parallel diffusion coefficient as a function of density for some values of $T$ is shown in figure~\ref{fig6}.
As we can see, for $T = 0.25$ the system has $D_{\parallel} \approx 0$ for $\rho = 0.375$, 0.40 and 0.425,
and $D_{\parallel}$ grows when $\rho \ge 0.45$, indicating a melting. The LRDF, shown in figure~\ref{fig7}, also
shows a decrease in the system structure, from the well-structured lamellar phase to the fluid phase.
This melting induced by the increase of density was already observed in colloidal glass systems~\cite{Berthier10, Cos13, Everts16},
but was not observed in our bulk system.
The fluid phase also shows interesting properties, distinct from the bulk case.
The first one is the density anomaly. As we can see, the isochores in figure~\ref{fig5} show a minimum.
The dashed blue line, the so-called TMD line, connects these minima. The density anomaly was observed for pure
anomalous monomers and dimers (AA dimers)~\cite{Ol06a, Ol10}; however, it was not present in the bulk Janus system~\cite{BoK15c}.
The reentrant region, which occurs at higher densities, is located where the TMD line ends
and splits the lamellar phase VI-B in two regions.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{fig6.eps}
\end{center}
\caption{Parallel diffusion $D_{||}$ as function of density $\rho$ for different temperatures.
The red lines indicate the minima and maxima in the diffusion coefficient.}
\label{fig6}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{fig7.eps}
\end{center}
\caption{Lateral radial distribution function (LRDF) $g_{||}(r)$ for $T = 0.25$, showing the melting induced by the density increase.}
\label{fig7}
\end{figure}
The bulk system has only the diffusion anomaly, and it is preserved in the confined case.
However, the diffusion extrema line for the confined system,
shown as the dot-dashed red line in the phase diagram of figure~\ref{fig5},
spans a temperature range larger than in bulk.
Studies of confined anomalous fluids indicate that
the TMD moves to lower temperatures for solvophobic confinement and higher temperatures for
solvophilic confinement~\cite{Krott13, Krott14, cas09}. For the bulk case, our hypothesis
was that the TMD line was absorbed by the micellar region.
Hence, since our confinement is solvophobic (the WCA
potential is purely repulsive), it is surprising that the TMD appears, moving to higher temperatures.
This leads to the question: why are the anomaly lines shifted to higher temperatures?
Gavazzoni and co-authors~\cite{Ga14} showed that anisotropy
can shift the solid-fluid phase boundary of dumbbell systems made only of A monomers.
More than this, that work argues that the kinetic energy and, therefore, the temperature, has two contributions:
a translational temperature and a non-translational temperature. Hence, the contribution
from the dimer rotations to the kinetic energy plays a significant role in this system's behavior.
In bulk, our Janus dumbbell can rotate freely in any direction - the only limitation is collisions with
other dimers. However, the confinement imposes a constraint on the system. In our strongly confined
system, with $L_z=4.0$ and $\lambda = 0.8$, not only is the translation in the $z$-direction limited,
but the combination of confinement with the competing interactions of Janus systems leads to an interesting
phenomenon. As figure~\ref{fig6} shows, the diffusion coefficient is distinct from zero -- so the
particles are moving in the parallel direction. In figure~\ref{fig8}(i) we show a frontal snapshot
of the nanoparticle arrangement. As we can see, the particles are disordered - as expected for fluids.
Notwithstanding, the side snapshot shows that in the $z$-direction the particles have a preferential
position: the attractive B particles stay at the center, and the repulsive A particles are near
the wall. Then, due to the confinement and the Janus characteristics,
the dimers translate in the $xy$-plane, but without rotation.
This places the A monomers side by side in the $xy$-plane, with an internal layer of attractive B monomers.
This internal layer acts similarly to a solvophilic wall, with the B monomers pulling one another.
As a consequence, the behavior is similar to water in hydrophilic confinement, and the anomalous region
shifts to higher temperatures.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{fig8.eps}
\end{center}
\caption{Frontal (i) and side view (ii) of the system in the fluid region at $T = 0.40$ and $\rho = 0.30$.}
\label{fig8}
\end{figure}
\section{Conclusion}
\label{Conclu}
We reported a study of confined Janus nanoparticles. This system is of special interest for the design of
new materials using confinement to control the self-assembled structures. We have found a rich variety
of aggregates and micelles, including large micelles not observed in the bulk system.
As well, two lamellar phases, with distinct orientations, were induced by the confinement.
The oriented lamellar phase region in the $P_{\parallel}T$ phase diagram is split
by a reentrant fluid phase. This melting, induced by increasing the density, was likewise not
present in the bulk system.
Another feature that was not observed in the bulk system is the density anomaly.
The combination of confinement effects with the competing interactions of Janus dimers
shifts the TMD line to higher temperatures, revealing the anomaly
that was hidden inside the aggregation region in the bulk case. As well, the diffusion
anomaly region increases, reaching higher temperatures.
Our results show that materials composed by the association of a monomer that can be modeled by a
two-length-scale potential with a standard LJ monomer have an interesting
and peculiar behavior.
\section{Acknowledgments}
The authors are grateful to Marcia C. Barbosa from Universidade Federal do Rio Grande do Sul for valuable
and critical discussion. JRB thanks the Brazilian agency CNPq for the financial support.
\providecommand*{\mcitethebibliography}{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{60}
\providecommand*{\natexlab}[1]{#1}
\providecommand*{\mciteSetBstSublistMode}[1]{}
\providecommand*{\mciteSetBstMaxWidthForm}[2]{}
\providecommand*{\mciteBstWouldAddEndPuncttrue}
{\def\unskip.}{\unskip.}}
\providecommand*{\mciteBstWouldAddEndPunctfalse}
{\let\unskip.}\relax}
\providecommand*{\mciteSetBstMidEndSepPunct}[3]{}
\providecommand*{\mciteSetBstSublistLabelBeginEnd}[3]{}
\providecommand*{\unskip.}}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}
{(\emph{\alph{mcitesubitemcount}})}
\mciteSetBstSublistLabelBeginEnd{\mcitemaxwidthsubitemform\space}
{\relax}{\relax}
\bibitem[Roh \emph{et~al.}(2005)Roh, Martin, and Lahann]{Roh05}
K.-H. Roh, D.~C. Martin and J.~Lahann, \emph{Nature Materials}, 2005,
\textbf{4}, 759\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Klapp(2016)]{Klapp16}
S.~H.~L. Klapp, \emph{Curr. Opin. in Coll. and Inter. Sci.}, 2016, \textbf{21},
76--85\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Casagrande \emph{et~al.}(1989)Casagrande, Fabre, Rapha\"el, and
Veyssi\'e]{Cas89}
C.~Casagrande, P.~Fabre, E.~Rapha\"el and M.~Veyssi\'e, \emph{Europhys. Lett.},
1989, \textbf{9}, 251--255\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Talapin \emph{et~al.}(2010)Talapin, Lee, Kovalenko, and
Shevchenko]{Talapin10}
D.~M. Talapin, J.-S. Lee, M.~V. Kovalenko and E.~V. Shevchenko, \emph{Chem.
  Rev.}, 2010, \textbf{110}, 389--458\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Elsukova \emph{et~al.}(2011)Elsukova, Li, M{\"o}ller, Spasova, Acet,
Farle, Kawasaki, Ercius, and Duden]{ElL11}
A.~Elsukova, Z.-A. Li, C.~M{\"o}ller, M.~Spasova, M.~Acet, M.~Farle,
M.~Kawasaki, P.~Ercius and T.~Duden, \emph{Phys. Stat. Sol.}, 2011,
\textbf{208}, 2437--2442\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Tu \emph{et~al.}(2013)Tu, Park, and Lee]{TuP13}
F.~Tu, B.~J. Park and D.~Lee, \emph{Langmuir}, 2013, \textbf{29},
12679--12687\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Walther and M{\"u}ller(2008)]{WaM08}
A.~Walther and A.~H.~E. M{\"u}ller, \emph{Soft Matter}, 2008, \textbf{4},
663--668\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Walther and M{\"u}ller(2013)]{WaM13}
A.~Walther and A.~H.~E. M{\"u}ller, \emph{Chem. Rev.}, 2013, \textbf{113},
5194--5261\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhang \emph{et~al.}(2015)Zhang, Luijten, and Granick]{Zhang15}
J.~Zhang, E.~Luijten and S.~Granick, \emph{Annu. Rev. Phys. Chem.}, 2015,
\textbf{66}, 581--600\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bickel \emph{et~al.}(2014)Bickel, Zecua, and W\"urger]{Bic14}
T.~Bickel, G.~Zecua and A.~W\"urger, \emph{Phys. Rev. E}, 2014, \textbf{89},
050303\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ao \emph{et~al.}(2015)Ao, Ghosh, Li, Schmid, H{\"{a}}nggi, and
Marchesoni]{Ao15}
X.~Ao, P.~K. Ghosh, Y.~Li, G.~Schmid, P.~H{\"{a}}nggi and F.~Marchesoni,
\emph{Europhys. Lett.}, 2015, \textbf{109}, 10003\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Yin \emph{et~al.}(2001)Yin, Lu, and Xia]{Yin01}
Y.~Yin, Y.~Lu and X.~Xia, \emph{J. Am. Chem. Soc.}, 2001, \textbf{132},
771--772\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Singh \emph{et~al.}(2014)Singh, Cassidy, Grammatikopoulos,
Djurabekova, Nordlund, and Sowwan]{SiC14}
V.~Singh, C.~Cassidy, P.~Grammatikopoulos, F.~Djurabekova, K.~Nordlund and
M.~Sowwan, \emph{J. Phys. Chem. C}, 2014, \textbf{118}, 13869--13875\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lu \emph{et~al.}(2002)Lu, Yin, Li, and Xia]{Lu02}
Y.~Lu, Y.~Yin, Z.-Y. Li and Y.~Xia, \emph{Nano Lett.}, 2002, \textbf{2},
785--788\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Yoon \emph{et~al.}(2012)Yoon, Lee, Kim, and Weitz]{YoL12}
K.~Yoon, D.~Lee, J.~W. Kim and D.~A. Weitz, \emph{Chem. Commun.}, 2012,
\textbf{48}, 9056--9058\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Song \emph{et~al.}(2011)Song, Klivansky, Liu, and Chen]{SoK11}
Y.~Song, L.~M. Klivansky, Y.~Liu and S.~Chen, \emph{Langmuir}, 2011,
\textbf{27}, 14581--14588\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Takahara \emph{et~al.}(2005)Takahara, Ikeda, Ishino, Tachi, Ikeue,
Sakata, Hasegawa, Mori, Matsumura, and Ohtani]{TaI05}
Y.~K. Takahara, S.~Ikeda, S.~Ishino, K.~Tachi, K.~Ikeue, T.~Sakata,
T.~Hasegawa, H.~Mori, M.~Matsumura and B.~Ohtani, \emph{J. Am. Chem. Soc.},
2005, \textbf{127}, 6271--6275\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \emph{et~al.}(2013)Liu, Liu, Zhang, Sun, and Zao]{Liu13}
J.~Liu, G.~Liu, M.~Zhang, P.~Sun and H.~Zao, \emph{Macromolecules}, 2013,
\textbf{46}, 5974--5984\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \emph{et~al.}(2012)Liu, Li, Perez, Gunton, and Brett]{Li12}
Y.~Liu, W.~Li, T.~Perez, J.~D. Gunton and G.~Brett, \emph{Langmuir}, 2012,
\textbf{28}, 3--9\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Whitelam and Bon(2010)]{White10}
S.~Whitelam and S.~A.~F. Bon, \emph{J. Chem. Phys.}, 2010, \textbf{132},
074901\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Muna\`o \emph{et~al.}(2013)Muna\`o, Costa, Giacometti, Caccamo, and
Sciortino]{Munao13}
G.~Muna\`o, D.~Costa, A.~Giacometti, C.~Caccamo and F.~Sciortino, \emph{Phys.
Chem. Chem. Phys}, 2013, \textbf{15}, 20590\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Muna\`o \emph{et~al.}(2014)Muna\`o, O'Toole, Hudson, and
Sciortino]{Munao14}
G.~Muna\`o, P.~O'Toole, T.~S. Hudson and F.~Sciortino, \emph{Soft Matter},
2014, \textbf{10}, 5269\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Muna\`o \emph{et~al.}(2015)Muna\`o, O'Toole, Hudson, Costa, Caccamo,
Sciortino, and Giacometti]{Munao15b}
G.~Muna\`o, P.~O'Toole, T.~S. Hudson, D.~Costa, C.~Caccamo, F.~Sciortino and
A.~Giacometti, \emph{J. Phys. Cond. Matt.}, 2015, \textbf{27}, 234101\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Avvisati \emph{et~al.}(2015)Avvisati, Visers, and
Dijkstra]{Avvisati14}
G.~Avvisati, T.~Visers and M.~Dijkstra, \emph{J. Chem. Phys}, 2015,
\textbf{142}, 084905\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \emph{et~al.}(2009)Liu, Zhang, Liu, Qu, and Yang]{Liu09}
B.~Liu, C.~Zhang, J.~Liu, X.~Qu and Z.~Yang, \emph{Chem Comm}, 2009,
\textbf{26}, 3871--3873\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Netz \emph{et~al.}(2002)Netz, Starr, Barbosa, and Stanley]{Ne02a}
P.~A. Netz, F.~W. Starr, M.~C. Barbosa and H.~E. Stanley, \emph{Physica A},
2002, \textbf{314}, 470\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Morishita(2005)]{Mo05}
T.~Morishita, \emph{Phys. Rev. E}, 2005, \textbf{72}, 021201\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sastry and Angell(2003)]{Sa03}
S.~Sastry and C.~A. Angell, \emph{Nature Mater.}, 2003, \textbf{2},
739--743\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[de~Oliveira \emph{et~al.}(2006)de~Oliveira, Netz, Colla, and
Barbosa]{Ol06a}
A.~B. de~Oliveira, P.~A. Netz, T.~Colla and M.~C. Barbosa, \emph{J. Chem.
Phys.}, 2006, \textbf{124}, 084505\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[de~Oliveira \emph{et~al.}(2006)de~Oliveira, Netz, Colla, and
Barbosa]{Ol06b}
A.~B. de~Oliveira, P.~A. Netz, T.~Colla and M.~C. Barbosa, \emph{J. Chem.
Phys.}, 2006, \textbf{125}, 124503\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[de~Oliveira \emph{et~al.}(2010)de~Oliveira, Nevez, Gavazzoni,
Paukowski, Netz, and Barbosa]{Ol10}
A.~B. de~Oliveira, E.~Nevez, C.~Gavazzoni, J.~Z. Paukowski, P.~A. Netz and
M.~C. Barbosa, \emph{J. Chem. Phys.}, 2010, \textbf{132}, 164505\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gavazzoni \emph{et~al.}(2014)Gavazzoni, Gonzatti, Pereira, Ramos,
Netz, and Barbosa]{Ga14}
C.~Gavazzoni, G.~K. Gonzatti, L.~F. Pereira, L.~H.~C. Ramos, P.~A. Netz and
M.~C. Barbosa, \emph{J. Chem. Phys.}, 2014, \textbf{140}, 154502\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Saija(2016)]{Munao16}
G.~M.~F. Saija, \emph{Phys. Chem. Chem. Phys.}, 2016, \textbf{18},
9484--9489\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bordin \emph{et~al.}(2015)Bordin, Krott, and Barbosa]{BoK15c}
J.~R. Bordin, L.~B. Krott and M.~C. Barbosa, \emph{Langmuir}, 2015,
\textbf{31}, 8577--8582\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bordin(2016)]{Bordin16a}
J.~R. Bordin, \emph{Physica A}, 2016, \textbf{459}, 1--8\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Khadilkar and Escobedo(2016)]{Khad16}
M.~R. Khadilkar and F.~A. Escobedo, \emph{Soft Matter}, 2016, \textbf{12},
1506\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Iwashita and Kimura(2016)]{Iwa16}
Y.~Iwashita and Y.~Kimura, \emph{Scientific Reports}, 2016, \textbf{6},
27599\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lee \emph{et~al.}(2009)Lee, Fung, Riley, and Liddell]{Lee09}
S.~H. Lee, E.~Y. Fung, E.~K. Riley and C.~M. Liddell, \emph{Langmuir}, 2009,
\textbf{25}, 7193\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Muangnapoh \emph{et~al.}(2014)Muangnapoh, Avenda{\~{n}}o, Escobedo,
and Watson]{Muang14}
K.~Muangnapoh, C.~Avenda{\~{n}}o, F.~A. Escobedo and C.~M.~L. Watson,
\emph{Soft Matter}, 2014, \textbf{10}, 9729\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kim and Yi(2015)]{Kim15}
M.~P. Kim and G.-R. Yi, \emph{Frontiers in Materials}, 2015, \textbf{2},
45\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rosenthal and Klapp(2005)]{Ro12}
G.~Rosenthal and S.~H.~L. Klapp, \emph{Int. J. Mol. Sci.}, 2005, \textbf{13},
9431\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rosenthal and Klapp(2011)]{Ro11}
G.~Rosenthal and S.~H.~L. Klapp, \emph{J. Chem. Phys.}, 2011, \textbf{134},
154707\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Krott \emph{et~al.}(2015)Krott, Bordin, {Barraz Jr.}, and
Barbosa]{KoB15}
L.~B. Krott, J.~R. Bordin, N.~{Barraz Jr.} and M.~C. Barbosa, \emph{J. Chem.
Phys.}, 2015, \textbf{1}, 134502\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Krott \emph{et~al.}(2015)Krott, Bordin, and Barbosa]{BoK15a}
L.~B. Krott, J.~R. Bordin and M.~C. Barbosa, \emph{J. Phys. Chem. B}, 2015,
\textbf{119}, 291--300\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bordin \emph{et~al.}(2014)Bordin, Krott, and Barbosa]{BoK14c}
J.~R. Bordin, L.~B. Krott and M.~C. Barbosa, \emph{J. Chem. Phys.}, 2014,
\textbf{141}, 144502\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Krott and Barbosa(2014)]{Krott14}
L.~B. Krott and M.~C. Barbosa, \emph{Phys. Rev. E}, 2014, \textbf{89},
012110\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Allen and Tildesley(1987)]{AllenTild}
P.~Allen and D.~J. Tildesley, \emph{Computer Simulation of Liquids}, Oxford
University Press, Oxford, 1987\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kell(1967)]{Kell67}
G.~S. Kell, \emph{J. Chem. Eng. Data}, 1967, \textbf{12}, 66\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Angell \emph{et~al.}(1976)Angell, Finch, and Bach]{Angell76}
C.~A. Angell, E.~D. Finch and P.~Bach, \emph{J. Chem. Phys.}, 1976,
\textbf{65}, 3063--3066\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Limbach \emph{et~al.}(2006)Limbach, Arnold, Mann, and Holm]{espresso1}
H.-J. Limbach, A.~Arnold, B.~A. Mann and C.~Holm, \emph{Comput. Phys. Commun.},
2006, \textbf{174}, 704--727\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Arnold \emph{et~al.}(2013)Arnold, Lenz, Kesselheim, Weeber,
Fahrenberger, Roehm, Kosovan, and Holm]{espresso2}
A.~Arnold, O.~Lenz, S.~Kesselheim, R.~Weeber, F.~Fahrenberger, D.~Roehm,
P.~Kosovan and C.~Holm, \emph{Meshfree Methods for Partial Differential
Equations VI}, Springer Berlin Heidelberg, 2013, vol.~89, pp. 1--23\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kumar \emph{et~al.}(2005)Kumar, Buldyrev, Starr, Giovambattista, and
Stanley]{Ku05b}
P.~Kumar, S.~V. Buldyrev, F.~W. Starr, N.~Giovambattista and H.~E. Stanley,
\emph{Phys. Rev. E}, 2005, \textbf{72}, 051503\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Khandpur \emph{et~al.}(1995)Khandpur, Foerster, Bates, Hamley, Ryan,
Bras, Almdal, and Mortensen]{Khan95}
A.~K. Khandpur, S.~Foerster, F.~S. Bates, I.~W. Hamley, A.~J. Ryan, W.~Bras,
K.~Almdal and K.~Mortensen, \emph{Macromolecules}, 1995, \textbf{28},
8796\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Beltran-Villegas \emph{et~al.}(2012)Beltran-Villegas, Schultz, Nguyen,
Glotzer, and Larson]{Beltran12}
D.~J. Beltran-Villegas, B.~A. Schultz, N.~H. Nguyen, S.~C. Glotzer and R.~G.
Larson, \emph{Soft Matter}, 2012, \textbf{10}, 4593\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Preisler \emph{et~al.}(2016)Preisler, Vissers, Smallenburg, and
Sciortino]{Preisler16}
Z.~Preisler, T.~Vissers, F.~Smallenburg and F.~Sciortino, \emph{J. Chem.
Phys.}, 2016, \textbf{145}, 064513\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Berthier \emph{et~al.}(2010)Berthier, Moreno, and Szamel]{Berthier10}
L.~Berthier, A.~J. Moreno and G.~Szamel, \emph{Phys. Rev. E}, 2010,
\textbf{85}, 060501\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Coslovich and Ikeda(2013)]{Cos13}
D.~Coslovich and A.~Ikeda, \emph{Soft Matter}, 2013, \textbf{9}, 6786\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Everts \emph{et~al.}(2016)Everts, Boon, and van Roij]{Everts16}
J.~C. Everts, N.~Boon and R.~van Roij, \emph{Phys. Chem. Chem. Phys.}, 2016,
\textbf{18}, 5211\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Krott and Barbosa(2013)]{Krott13}
L.~B. Krott and M.~C. Barbosa, \emph{J. Chem. Phys.}, 2013, \textbf{138},
084505\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Castrillon \emph{et~al.}(2009)Castrillon, Giovambattista, Aksay, and
Debenedetti]{cas09}
S.~R.-V. Castrillon, N.~Giovambattista, I.~A. Aksay and P.~G.~. Debenedetti,
\emph{J. Phys.Chem. B}, 2009, \textbf{113}, 1438\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
|
\section{Introduction}
\label{sec:1}
\IEEEPARstart{W}{ith} the rapid development of wireless communication networks,
improving spectrum efficiency with limited physical resources has become an
urgent demand for B5G/6G communication systems. One of the key technologies to
address this problem is the non-orthogonal multiple access (NOMA) scheme
\cite{7263349}. Sparse code multiple access (SCMA), a potent code-domain NOMA
scheme, is a strong candidate for future communication systems owing to its
superior overload tolerance and resource reuse capability \cite{6666156}. At
the receiver end, the message passing algorithm (MPA) is adopted to detect
individual user symbols.
However, the SCMA system alone still fails to meet the throughput and reliability requirements of future networks \cite{7973146}. In \cite{9432947}, amalgamation with existing techniques, e.g., channel coding, was introduced to improve the performance of SCMA. So far, turbo coded SCMA (TC-SCMA) \cite{7248770,7848941,9399239} and low-density parity-check (LDPC) coded SCMA (LDPC-SCMA) \cite{7848813,8766141,8344383,8288397} systems have been widely investigated to obtain a coding gain. While these works conceived impressive joint detection and decoding (JDD) algorithms based on the ``turbo principle'' for co-designed systems, the above coding schemes struggle to meet the ultra-reliable transmission requirements of future communication systems and suffer from high computational complexity.
\begin{table*}[!t]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Overview of existing literature on channel coded SCMA systems with
JDD schemes.}
\label{table1}
\resizebox{\textwidth}{!}{
\begin{tabular}{c!{\vrule width0.9pt}c!{\vrule
width0.9pt}cccccccccccc!{\vrule width0.9pt}cc!{\vrule width0.9pt}cccc}
\Xhline{1.2pt}
\multirow{2}{*}{Contributions}& \multirow{2}{*}{This work}&
\multicolumn{12}{c!{\vrule
width0.9pt}}{PC-SCMA}&\multicolumn{2}{c!{\vrule width0.9pt}}{
TC-SCMA} &\multicolumn{4}{c}{LDPC-SCMA} \\ \cline{3-20}
& & \cite{8171004} & \cite{8463448} & \cite{9238845} &
\cite{ZHANG2020102283} & \cite{sym12101624} & \cite{8234623} &
\cite{8661315} & \cite{8543048} & \cite{9285274} & \cite{s20236740}
& \cite{9082596} & \cite{9386114} & \cite{7848941} & \cite{9399239}
& \cite{7848813} & \cite{8766141} & \cite{8344383} & \cite{8288397}
\\ \hline\hline
\rowcolor[gray]{.85}Joint factor graph & \checkmark & \checkmark &
\checkmark & \checkmark & \checkmark & \checkmark & & & &
\checkmark & \checkmark & & \checkmark & \checkmark & & \checkmark
& & \checkmark & \\
Soft-input soft-output & \checkmark & \checkmark & \checkmark & \checkmark &
\checkmark & \checkmark & & & \checkmark & \checkmark & \checkmark
& \checkmark & \checkmark & \checkmark & \checkmark & \checkmark &
\checkmark & \checkmark & \checkmark \\
\rowcolor[gray]{.85}Impact of user load & \checkmark & & & & & &
\checkmark & & & & & & \checkmark & & \checkmark & & & \checkmark &
\\
BER improvement & \checkmark & \checkmark & \checkmark &
\checkmark & \checkmark & & \checkmark & \checkmark & \checkmark &
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark &
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\rowcolor[gray]{.85}Fading channels & \checkmark & & \checkmark & &
& & \checkmark & \checkmark & & \checkmark & \checkmark &
\checkmark & \checkmark & & \checkmark & & & & \\
Complexity reduction & \checkmark & & & \checkmark & \checkmark &
\checkmark & & \checkmark & & \checkmark & \checkmark & &
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark &
\checkmark & \checkmark\\
\rowcolor[gray]{.85}Latency reduction & \checkmark & & & & & &
\checkmark & & & \checkmark & & & & & \checkmark & & & \checkmark &
\\
Early termination & \checkmark & & & & \checkmark & \checkmark & &
& & \checkmark & & & \checkmark & & \checkmark & \checkmark & & & \\
\rowcolor[gray]{.85}Damping technique & \checkmark & & \checkmark &
\checkmark & \checkmark & \checkmark & & & & & & & & \checkmark &
& \checkmark & & & \\
Convergence analysis & \checkmark & & \checkmark & \checkmark &
\checkmark & & & & & & & \checkmark & \checkmark & \checkmark &
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\
\rowcolor[gray]{.85}Effect of channel estimation & & & & & & & & &
& & & \checkmark & \checkmark & & & & & \checkmark & \\
Non-binary coding & \checkmark & & & & & & & & & & & & & & & & &
\checkmark & \checkmark\\
\rowcolor[gray]{.85}Non-binary coding with FOMS & \checkmark & & &
& & & & & & & & & & & & & & & \\
\Xhline{1.2pt}
\end{tabular}}
\end{table*}
As a standard code for the control channels of 5G New Radio \cite{3GPP}, the
polar code proposed by E. Arikan in \cite{5075875} is the first coding scheme that
can provably achieve the capacity of the binary input discrete memoryless
channel with low complexity. In particular, polar codes can provide excellent
error correction capability and higher spectral efficiency, which is
competitive in the scenario of ultra-reliable low-latency communication (URLLC)
\cite{8705373} for future wireless networks.
Benefiting from the above achievements, polar-coded SCMA (PC-SCMA) systems have
been investigated in the literature. A typical JDD scheme directly combines a
soft-input soft-output polar decoder and an SCMA detector
\cite{8171004,8463448,9238845,ZHANG2020102283,sym12101624}. Specifically, the
authors in \cite{8171004} proposed a JDD algorithm amalgamating MPA with belief
propagation decoding to obtain performance gains. As a further development, a
joint iteration detection and decoding (JIDD) receiver using a soft
cancellation (SCAN) decoder was proposed in \cite{8463448} without inner
iteration, which laid a foundation for JDD-based PC-SCMA receivers. In
\cite{9238845,ZHANG2020102283,sym12101624}, some modified versions of JIDD were
proposed to accelerate the convergence and reduce the computational complexity.
However, none of these traditional soft-input soft-output schemes can break the BER performance limit of the SCAN decoder.
In contrast, an alternative JDD scheme amalgamates the soft-input hard-output polar decoder with the SCMA detector \cite{8234623,8661315,8543048,9285274,s20236740,9082596,9386114}. To be more
specific, a sequential user partition based JDD receiver was proposed in \cite{8234623,8661315} with limited receiver performance due to the feedback of hard outputs. The authors in \cite{8543048} first proposed a JDD scheme using a soft-input soft-output based successive cancellation (SC) decoder to achieve performance gains. Furthermore, a JIDD employing an SC-list (SCL) decoder was presented in \cite{9285274,s20236740}, which shows a better BER performance by designing the SCL decoder's extrinsic messages for turbo iteration. A joint channel estimation and detection scheme was also proposed for fading channels \cite{9082596,9386114}.
However, most of the soft-input hard-output schemes still lag behind the latency and BER targets of URLLC. As a promising solution, non-binary polar codes (NB-PCs) can polarize discrete memoryless channels with arbitrary \emph{q}-ary alphabets for ultra-reliable transmission \cite{6303909}. Importantly, NB-PCs can save decoding latency through symbol-level operations instead of bit-level counterparts \cite{arXiv}, providing potential for URLLC applications. Moreover, NB-PCs with different non-binary kernels over $GF(q)$ were investigated in \cite{7752615,8625284,9348796}, which show significant BER gains over their binary counterparts.
The existing studies on coded SCMA systems employing JDD are summarized in
Table \ref{table1}, which facilitates a comparison of this paper's
contributions with other state-of-the-art research. As can be observed, these studies have mainly focused on BER improvement and complexity reduction, while there is a scarcity of literature on latency reduction, overload impact, early termination (ET), damping techniques and non-binary coding. Note that none of these works jointly designed coded SCMA systems with NB-PCs. The high decoding complexity of non-binary coding systems increases the implementation difficulty and resource requirements during hardware design. Besides, the inflexibility of the modulation scheme also hinders the wide application of non-binary coding, since the number of constellation points must equal the cardinality of the input alphabet.
Motivated by the impressive BER and latency performance of NB-PCs, we design a superior non-binary PC-SCMA (NB-PC-SCMA) architecture under a perfect channel state information condition\footnote{In this paper, the effect of channel estimation on the system performance is not considered, as shown in Table \ref{table1}. We assume that the channel state information is perfectly known for both transmitter and receiver.}. Overall, there are still challenges in achieving this target. First, most of the existing non-binary coding systems employ a modulation scheme that cannot be flexibly configured, i.e., the constrained order matching strategy (COMS) \cite{8344383,8288397}, to facilitate the joint design, which considers only the case where the finite field order is identical to the SCMA modulation order. COMS has an inherent weakness since it cannot trade off system throughput and BER performance to fit the scenario. Second, if unconstrained order matching is considered, the receiver must perform likelihood information conversion. Thus, the inner soft message exchange rules and the transmission reliability need to be addressed. Finally, although NB-PCs are expected to improve the latency and BER performance of the coded SCMA system, the receiver suffers an increased decoding complexity, which is unfriendly for hardware implementation. As a result, the receiver also requires a complexity reduction technique.
In this paper, we are engaged in tackling the above challenges to investigate the NB-PC-SCMA system. The main contributions of this paper are outlined as follows.
\begin{itemize}
\item [1)] To the best of our knowledge, this is the first work to propose an NB-PC-SCMA scheme with a free order matching strategy (FOMS). Specifically, we introduce symbol-to-bit conversion at the transmitter, while the receiver applies a novel information exchange rule among the bits, field symbols, and SCMA codewords. With the aid of FOMS, the proposed system can freely select the field and modulation order configurations according to the requirements, without being limited by the NB-PC alphabet size.
\item [2)] Furthermore, we propose a non-binary SCL (NB-SCL) and damping
based JIDD (NSD-JIDD) algorithm for NB-PC-SCMA systems by combining the
factor graphs of the NB-PC decoder and the SCMA detector into a joint
factor graph (JFG). According to the connection in the JFG, the extrinsic
soft information is exchanged and is compressed by damping techniques to
improve error propagation. In addition, a cyclic redundancy check
(CRC)-based ET is applied to eliminate redundant iterations, which
facilitates clock cycle savings for the receiver.
\item [3)] To reduce the complexity of the iterative receiver and enhance the convergence performance, the receiver constituents, i.e., the SCMA detector and the polar decoder, are improved. A lazy-search based NB-SCL (L-NB-SCL) decoding is proposed to avoid redundant path splitting in the decoding process. In addition, update operations that are less dominant for user nodes are removed in the SCMA detection process to make full use of the a priori information. Accordingly, the resultant improved NSD-JIDD (ISD-JIDD) algorithm can significantly decrease the computational complexity in the receiver.
\end{itemize}
The rest of this paper is organized as follows. Section \ref{sec:2} presents the FOMS and the proposed NB-PC-SCMA system model. Following this, a multiuser iterative receiver is designed in Section \ref{sec:3}, while its improved scheme is detailed in Section \ref{sec:4}. Section \ref{sec:5} discusses the performance of our proposed system in terms of BER, complexity, and latency with numerical simulation results. Finally, the conclusions and future research prospects are drawn in Section \ref{sec:6}. In particular, the key abbreviations employed in this paper are summarized in Table \ref{table4} for ease of access.
\begin{table}[!t]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Summary of abbreviations.}
\label{table4}
\begin{tabular}{m{1cm}<{\centering}m{2.7cm}<{\centering}m{1cm}<{\centering}m{2.7cm}<{\centering}@{}}
\toprule[1.2pt]
Acronyms & Full Form & Acronyms & Full Form\\
\midrule[0.6pt]
AWGN & additive white gaussian noise & NB-LDPC-SCMA & NB-LDPC coded SCMA\\
BER & bit error rate & NB-PC & non-binary polar code\\
COMS & constrained order matching strategy & NB-PC-SCMA & non-binary PC-SCMA\\
CRC & cyclic redundancy check & NB-SCL & non-binary SCL\\
ET & early termination & NSD-JIDD & NB-SCL and damping based JIDD\\
FOMS & free order matching strategy & ISD-JIDD & improved NSD-JIDD\\
IN & intermediate node & PC-SCMA & polar-coded SCMA\\
JDD & joint detection and decoding & PN & polar node\\
JFG & joint factor graph & RN & resource node\\
JIDD & joint iteration detection and decoding & SC & successive cancellation\\
LDPC & low-density parity-check & SCAN &soft cancellation\\
LDPC-SCMA & LDPC coded SCMA & SCL & SC-list\\
LLR & log-likelihood ratio & SCMA & sparse code multiple access\\
L-NB-SCL & lazy-search based NB-SCL & TC-SCMA & turbo coded SCMA\\
MPA & message passing algorithm & UN & user nodes\\
NB-LDPC & non-binary LDPC & URLLC & ultra-reliable low-latency communication\\
\bottomrule[1.2pt]
\end{tabular}
\end{table}
\section{System Model}
\label{sec:2}
\begin{figure}[!t]
\centering
\subfloat[COMS ($M = q$)]{\includegraphics[width=1.37in]{CMS.png}}
\label{fig1-a}
\\
\subfloat[FOMS with $M < q$]{\includegraphics[width=1.67in]{FMS1.png}}
\label{fig1-b}
\hfil
\subfloat[FOMS with $M > q$]{\includegraphics[width=1.45in]{FMS2.png}}
\label{fig1-c}
\caption{Different order matching strategies for non-binary coded SCMA.}
\label{fig1}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{system_model.png}
\caption{FOMS based NB-PC-SCMA system.}
\label{fig2}
\end{figure*}
\subsection{Free Order Matching Strategy}
\label{sec:2-1}
Assume that \emph{q}-ary codewords are converted into symbols by \emph{M}-point SCMA modulation at the transmitter. Fig. \ref{fig1} exemplifies the difference between COMS and FOMS in non-binary coded SCMA systems. For COMS with $q = 16$, shown in Fig. \ref{fig1}(a), every four bits are mapped into an SCMA codeword and a Galois field symbol. Thus, SCMA codewords and field symbols form a one-to-one mapping. When $M = 4$ and $q = 16$, e.g., the FOMS case with $M < q$ in Fig. \ref{fig1}(b), a field symbol and an SCMA codeword are associated with 4 bits and 2 bits, respectively. In terms of field symbols, one symbol is mapped into two codewords, i.e., FOMS splits a field symbol into two sub-symbols using bit mapping. For $M = 16$ and $q = 4$, e.g., the FOMS case with $M > q$ in Fig. \ref{fig1}(c), which is a typical multiple-to-one mapping, two field symbols are mapped into one codeword.
For COMS, only the system with $M = q$ is considered, which facilitates the modulation at the transmitter and the detection at the receiver. However, as the SCMA modulation order \emph{M} increases with \emph{q}, the receiver is more prone to misjudgment and converges more slowly \cite{9260130}. Moreover, the system may suffer additional overhead due to the design of a large codebook.
Intuitively, COMS can be understood as a special case of FOMS with $M = q$. For FOMS, we introduce bit mapping at the transmitter, which makes adopting binary codewords for modulation feasible. Since the order matching is not limited, universality is an evident advantage of FOMS. Importantly, given the field order \textit{q} or SCMA modulation order \textit{M}, FOMS can trade off system throughput and reliability according to the scenario, which is demonstrated in Section \ref{sec:5-1}. The message exchange for the FOMS-based receiver is described in Section \ref{sec:3}.
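To make the regrouping concrete, the following minimal Python sketch (our illustration only; the MSB-first bit ordering and the helper names are assumptions of this sketch rather than part of the FOMS specification) unpacks $GF(q)$ symbols into a shared bit stream and regroups it into $M$-ary SCMA labels, reproducing the $M < q$ and $M = q$ cases of Fig. \ref{fig1}.
\begin{verbatim}
# Minimal FOMS bit-mapper sketch: a GF(q) symbol carries p = log2(q)
# bits and an SCMA codeword carries R = log2(M) bits, so FOMS simply
# regroups one shared bit stream (MSB-first ordering assumed here).
import numpy as np

def symbols_to_bits(syms, p):
    """Unpack q-ary symbols (q = 2**p) into a flat bit array."""
    return np.array([(s >> (p - 1 - i)) & 1
                     for s in syms for i in range(p)], dtype=np.uint8)

def bits_to_labels(bits, R):
    """Regroup the bit stream into M-ary SCMA labels (M = 2**R)."""
    assert len(bits) % R == 0
    return np.array([int("".join(map(str, bits[i:i + R])), 2)
                     for i in range(0, len(bits), R)])

syms = [0xA, 0x3]                    # two GF(16) symbols (p = 4)
bits = symbols_to_bits(syms, p=4)    # shared 8-bit stream
print(bits_to_labels(bits, R=2))     # M = 4 < q:  [2 2 0 3]
print(bits_to_labels(bits, R=4))     # M = q = 16: [10  3]
\end{verbatim}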
\subsection{System Model for FOMS Based NB-PC-SCMA}
\label{sec:2-2}
The uplink NB-PC-SCMA system with FOMS is shown in Fig. \ref{fig2}. The data of \emph{J} users are multiplexed on \emph{K} orthogonal resources, giving an overload factor of $\lambda = {J \mathord{\left/
{\vphantom {J K}} \right.\kern-\nulldelimiterspace} K}$. To be more specific, the \emph{A}-length information bits sent by user \emph{j} ($1 \le j \le J$) are denoted as ${{\bm{u}}_j} = \left[ {{u_{j,1}},{u_{j,2}}, \cdots ,{u_{j,A}}} \right]$, which is encoded as ${{\bm{b}}_j} = \left[ {{b_{j,1}},{b_{j,2}}, \cdots ,{b_{j,D}}} \right]$ by CRC encoder. The bits in ${\bm{b}_j}$ are converted to the field symbol over $GF\left( q \right)$ as the $D'$-length uncoded vector ${{\bm{b'\!\!}}_j} = \left[ {{{b'\!\!}_{j,1}},{{b'\!\!}_{j,2}}, \cdots ,{{b'\!\!}_{j,D'}}} \right]$, where field size $q = {2^p}$ and length $D' = {D \mathord{\left/{\vphantom {D p}}\right.\kern-\nulldelimiterspace} p}$. Then, ${{\bm{b'\!\!}}_j}$ are placed at $D'$ information symbol positions of sequence ${{\bm{a'\!\!}}_j} = \left[ {{{a'\!\!}_{j,1}},{{a'\!\!}_{j,2}}, \cdots ,{{a'\!\!}_{j,N'}}} \right]$, which are determined by Monte-Carlo simulation. The remaining positions are filled with the 0-valued frozen symbols. The resultant sequence ${{\bm{a'\!\!}}_j}$ is then encoded into ${{\bm{c'\!\!}}_j}$ containing $N' = {2^\omega }$ symbols by the non-binary polar encoder, which can be expressed as
\begin{equation}
{{\bm{c'\!\!}}_j} = {{\bm{a'\!\!}}_j}{{\bm{G}}_2}^{ \otimes \omega },
\label{1}
\end{equation}
where ${{\bm{G}}_2}^{ \otimes \omega }$ is the generator matrix of NB-PC and $ \otimes $ denotes the Kronecker product. According to \cite{5513568}, the kernel ${{\bm{G}}_2}$ can be achieved by extending the Arikan kernel to the Galois field, which is written as
\begin{equation}
{{\bm{G}}_2} = \left[ {\begin{array}{*{20}{c}}
1&0\\
\gamma &1
\end{array}} \right],
\label{2}
\end{equation}
where $\gamma \in GF\left( q \right)\backslash 0$ represents the non-zero element over $GF\left( q \right)$.
Then, the output ${{\bm{c'\!\!}}_j}$ of the encoder is converted to the bit stream ${{\bm{c}}_j} = \left[ {{c_{j,1}},{c_{j,2}}, \cdots ,{c_{j,N}}} \right]$ by the bit mapper, where $N = pN'$. Here, we define the code rate as ${R_c} = {D \mathord{\left/{\vphantom {D N}} \right.\kern-\nulldelimiterspace} N} = {{D'} \mathord{\left/{\vphantom {{D'} {N'}}} \right.\kern-\nulldelimiterspace} {N'}}$.
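As an illustration of (\ref{1})-(\ref{2}), the sketch below builds ${\bm{G}}_2^{\otimes \omega}$ over $GF(2^p)$ and encodes a symbol vector. The primitive polynomial $x^4 + x + 1$ (0x13, for $GF(16)$) and the default kernel element (the integer 2) are placeholders of this sketch, not prescriptions of the system.
\begin{verbatim}
# Sketch of the non-binary polar encoder c' = a' * G2^{kron omega}
# over GF(2^p); field addition is XOR, and gf_mul reduces products
# modulo an assumed primitive polynomial (0x13 for GF(16)).
def gf_mul(a, b, poly=0x13, p=4):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> p:
            a ^= poly
    return r

def kron_gf(A, B, poly=0x13, p=4):
    """Kronecker product of two matrices over GF(2^p)."""
    return [[gf_mul(A[i][j], B[k][l], poly, p)
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def nb_polar_encode(u, gamma=2, poly=0x13, p=4):
    """Encode len(u) = 2**omega field symbols with the kernel of (2)."""
    G = [[1]]
    while len(G) < len(u):
        G = kron_gf([[1, 0], [gamma, 1]], G, poly, p)
    c = [0] * len(u)
    for i, ui in enumerate(u):
        for j in range(len(u)):
            c[j] ^= gf_mul(ui, G[i][j], poly, p)   # row-vector product
    return c
\end{verbatim}
For instance, with $\omega = 1$ the sketch returns $[u_1 \oplus \gamma u_2, u_2]$, matching the kernel mapping of (\ref{2}).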
To mitigate interference caused by burst errors, ${{\bm{c}}_j}$ is interleaved by a random interleaver, which is expressed as ${{\bm{d}}_j} = \Pi ({{\bm{c}}_j})$, $1 \le j \le J$. ${{\bm{d}}_j}$ is then mapped to a \emph{K}-dimensional complex codeword ${{\bm{x}}_j} = \left[ {{{\bm{x}}_{j,1}},{{\bm{x}}_{j,2}}, \cdots ,{{\bm{x}}_{j,E}}} \right]$ by the SCMA encoder, where the \emph{e}-th ($1 \le e \le E$) codeword is a sparse column vector ${{\bm{x}}_{j,e}} = {\left[ {x_{j,e}^1,x_{j,e}^2, \cdots ,x_{j,e}^K} \right]^T}$. Supposing the modulation order (i.e., codebook cardinality) of SCMA is \emph{M}, the codeword symbol length is $E = {N \mathord{\left/{\vphantom {N R}} \right.\kern-\nulldelimiterspace} R}$, where $R = {\log _2}M$. The resource sharing structure of SCMA can be represented by a $K \times J$ indicator matrix. For example, a case with 4 resources and 6 users is denoted as
\begin{equation}
{\bm{F}} = \left[ {\begin{array}{*{20}{c}}
0&1&1&0&1&0\\
1&0&1&0&0&1\\
0&1&0&1&0&1\\
1&0&0&1&1&0
\end{array}} \right],
\label{3}
\end{equation}
where the rows and columns of $\bm{F}$ represent the subcarrier and user layers, respectively. The \emph{j}-th user occupies the \emph{k}-th subcarrier if and only if the element ${{\bm{F}}_{kj}}$ in the \emph{k}-th row and \emph{j}-th column of $\bm{F}$ is 1. To be more specific, each user's data is assigned to ${d_u}$ (${d_u} \ll K$) resources, while ${d_r}$ (${d_r} \ll J$) users collide over the \emph{k}-th resource. The factor graph corresponding to the indicator matrix in (\ref{3}) is shown in Fig. \ref{fig4}. There are generally two types of nodes in a factor graph, i.e., user nodes (UNs) and resource nodes (RNs). Let UN-\emph{j} ($1 \le j \le J$) and RN-\emph{k} ($1 \le k \le K$) denote the \emph{j}-th UN and the \emph{k}-th RN, respectively.
\begin{figure}[!t]
\centering
\includegraphics[width=2in]{factor_graph.png}
\caption{SCMA factor graph for 6 users multiplexed over 4 resources.}
\label{fig4}
\end{figure}
Here, we assume that all users are time-synchronized and thus the \emph{e}-th received signal is the superposition of all users' signals, which can be expressed as
\begin{equation}
{{\bm{y}}_e} = \sum\limits_{j = 1}^J {\mathrm{diag}} \left( {{{\bm{h}}_{j,e}}} \right){{\bm{x}}_{j,e}} + {{\bm{z}}_e},
\label{4}
\end{equation}
where ${{\bm{y}}_e} = {\left[ {y_e^1,y_e^2, \cdots ,y_e^K} \right]^T}$ is the received signal, ${{\bm{h}}_{j,e}}{\bm{ = }}\left[ {h_{j,e}^1,h_{j,e}^2, \cdots ,h_{j,e}^K} \right]$ is the channel gain vector and ${{\bm{z}}_e}$ is the $K \times 1$ additive Gaussian vector with element-wise 0 mean and covariance matrix ${N_0}{{\bm{I}}_K}$. Here, ${{\bm{I}}_K}$ denotes the $K \times K$ diagonal matrix. Note that $h_{j,e}^k$ in ${{\bm{h}}_{j,e}}$ denotes the channel gain of the \emph{e}-th transmitted codeword between resource \emph{k} and user \emph{j}, which is available to both transmitter and receiver with the assumption of perfect channel state information. Finally, all received signals in a transmission block can be represented as ${\bm{Y}} = \left[ {{{\bm{y}}_1},{{\bm{y}}_2}, \cdots ,{{\bm{y}}_E}} \right]$.
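The superposition in (\ref{4}) is easy to reproduce numerically; in the toy sketch below, the codeword entries, channel realizations, and noise level are placeholders, with only the sparsity pattern taken from the indicator matrix of (\ref{3}).
\begin{verbatim}
# Toy reproduction of the received signal (4) for one codeword index e.
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])          # K x J indicator of (3)
K, J = F.shape
N0 = 0.1

# Sparse user codewords x_{j,e}: nonzero only on the d_u = 2 occupied
# subcarriers of each user (placeholder values, not a real codebook).
x = F.T * (rng.standard_normal((J, K)) + 1j * rng.standard_normal((J, K)))
h = rng.standard_normal((J, K)) + 1j * rng.standard_normal((J, K))
z = np.sqrt(N0 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

y = (h * x).sum(axis=0) + z   # sum_j diag(h_{j,e}) x_{j,e} + z_e
\end{verbatim}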
\section{Multiuser Iterative Receiver Design for NB-PC-SCMA Systems}
\label{sec:3}
In this section, the JFG and receiver for the proposed NB-PC-SCMA system are presented.
\subsection{Joint Factor Graph Model for FOMS Based NB-PC-SCMA}
\label{sec:3-1}
\begin{figure*}[!t]
\centering
\includegraphics[width=5in]{joint_factor_graph.png}
\caption{The JFG for NB-PC-SCMA system.}
\label{fig5}
\end{figure*}
To visualize our proposed receiver, we first introduce the JFG of NB-PC-SCMA, since the message exchange rules in the receiver rely solely on the connection of the JFG. Moreover, the designed JFG also reflects the impact of FOMS on the receiver, corresponding to the FOMS characteristics introduced in Section \ref{sec:2-1}.
Fig. \ref{fig5} exemplifies the JFG with $M = 4$, $q = 16$ and $N' = 8$, where 6 users and 4 resources are considered. Since FOMS allows $M \ne q$, the symbol-level MPA and NB-SCL are carried out over \emph{M}-ary and \emph{q}-ary, respectively. We build the mapping norm between the SCMA detector and the polar decoder, as shown in the transformation level in Fig. \ref{fig5}. Note that we term the node on the side of the polar decoder receiving a priori information as the polar node (PN) and the node at the transformation level introduced by FOMS as the intermediate node (IN). Considering \emph{J} users and \emph{E} SCMA codewords, JFG consists of \emph{J} polar factor graphs, \emph{E} SCMA factor graphs and $JN$ INs. Each polar factor graph contains $N'$ PNs. Let ${{\cal X}_{e,k}}$ and ${{\cal V}_{e,j}}$ denote RN-\emph{k} ($1 \le k \le K$) and UN-\emph{j} ($1 \le j \le J$) of the \emph{e}-th ($1 \le e \le E$) codeword, respectively, and let ${{\cal I}_{n,j}}$ ($1 \le n \le N$) and ${{\cal P}_{n'\!,j}}$ ($1 \le n' \le N'$) denote the \emph{n}-th IN and $n'$-th PN of the \emph{j}-th user, respectively.
Here, each UN and PN is associated with 2 INs and 4 INs, respectively, according to Fig. \ref{fig1}(b). Specifically, each ${{\cal V}_{e,j}}$ is connected to ${{\cal I}_{2e - 1,j}}$ and ${{\cal I}_{2e,j}}$, while each ${{\cal P}_{n'\!,j}}$ is connected to ${{\cal I}_{4n'\! - 3,j}}$, ${{\cal I}_{4n'\! - 2,j}}$, ${{\cal I}_{4n'\! - 1,j}}$, and ${{\cal I}_{4n'\!,j}}$. Therefore, we can imagine that ${{\cal P}_{n'\!,j}}$ is linked to ${{\cal V}_{2n'\! - 1,j}}$ and ${{\cal V}_{2n'\!,j}}$.
We integrate the polar factor graph and SCMA factor graph of different symbol levels into one JFG using IN, where messages are passed iteratively. Thus, the receiver for the proposed NB-PC-SCMA system only requires outer iterations, i.e., the loop of the overall operation. There is no inner iteration in the component SCMA detector or polar decoder. Note that we only give the JFG example for $M < q$. The model for $M > q$ is similar except for the connection pattern of IN. The result for $M > q$ can be easily found by referring to the interpretation for the FOMS shown in Fig. \ref{fig1}(c).
\subsection{NB-SCL and Damping Based Joint Iterative Detection and Decoding}
\label{sec:3-2}
In this section, we introduce the proposed multiuser receiver for NB-PC-SCMA systems, which jointly performs SCMA detection and polar decoding. Specifically, the NSD-JIDD message passing process is exemplified in Fig. \ref{fig6}, which is elaborated in Sections \ref{sec:3-2-1} to \ref{sec:3-2-4}.
\subsubsection{SCMA Detection}
\label{sec:3-2-1}
\
\newline
\indent
The SCMA detection process can be interpreted as message passing on the factor graph. Therefore, the connection of the factor graph can be uniquely defined by a pair of sets ${\cal U}\left( j \right)$ and ${\cal R}\left( k \right)$ as follows,
\begin{subequations}
\begin{equation}
{\cal R}\left( k \right) = \left\{ {\left. j \right|{{\bm{F}}_{k,j}} = 1,1 \le j \le J} \right\},
\tag{5.a}
\label{5.a}
\end{equation}
\begin{equation}
{\cal U}\left( j \right) = \left\{ {\left. k \right|{{\bm{F}}_{k,j}} = 1,1 \le k \le K} \right\}.
\tag{5.b}
\label{5.b}
\end{equation}
\label{5}
\end{subequations}
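For clarity, these neighbor sets can be read directly off the indicator matrix; the sketch below (with 0-indexed users and resources, an implementation convention of ours) constructs them for the ${\bm{F}}$ of (\ref{3}).
\begin{verbatim}
# Neighbor sets (5.a)-(5.b) read directly off the indicator matrix F.
import numpy as np

F = np.array([[0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])
R = {k: np.flatnonzero(F[k]) for k in range(F.shape[0])}     # R(k)
U = {j: np.flatnonzero(F[:, j]) for j in range(F.shape[1])}  # U(j)
# e.g. R[0] = [1, 2, 4] (d_r = 3 users) and U[0] = [1, 3] (d_u = 2)
\end{verbatim}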
According to the max-log-MPA method \cite{8798841}, in the \emph{t}-th iteration, the messages are passed back and forth through the links in the factor graph and can be updated by (\ref{6}-\ref{7}). For the sake of convenience, we remove all subscripts \emph{e} indicating the codeword positions when
describing the SCMA detector, since the SCMA detection operation is identical for all codewords.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{NSD-JIDD.png}
\caption{Message passing in the NSD-JIDD receiver.}
\label{fig6}
\end{figure}
For UN-\emph{j} ($1 \le j \le J$), we have
\begin{equation}
\begin{aligned}
\xi _{j \to k}^{\left( t \right)}\left( {x_j^k = w_{j,k}^m} \right) = &{\cal N}\left( {{\mu ^{\left( {t - 1} \right)}}\left( {x_j^k = w_{j,k}^m} \right)} \right. \\
&\left. { + \sum\limits_{i \in {\cal U}\left( j \right)\backslash k} {\xi _{i \to j}^{\left( {t - 1} \right)}\left( {x_j^i = w_{j,i}^m} \right)} } \right),
\end{aligned}
\label{6}
\end{equation}
while for RN-\emph{k} ($1 \le k \le K$), we have
\begin{equation}
\begin{aligned}
&\xi _{k \to j}^{\left( t \right)}\left( {x_j^k = w_{j,k}^m} \right)\\ = &\mathop {\max }\limits_{\scriptstyle x_i^k \in {{\cal W}_{i,k}},i \in {\cal R}\left( k \right)\backslash j\atop
\scriptstyle x_j^k = w_{j,k}^m} \!\! \left\{ {\psi \left( {{{\bm{x}}_{[k]}}} \right) + \!\!\!\!\!\!\!\! \sum\limits_{x_i^k \in {{\bm{x}}_{[k]}},i \in {\cal R}\left( k \right)\backslash j}\!\!\!\! \!\!{\xi _{i \to k}^{\left( t \right)}\left( {x_i^k} \right)} } \right\},
\end{aligned}
\label{7}
\end{equation}
where $\xi _{k \to j}^{\left( t \right)}( {x_j^k = w_{j,k}^m})$ and $\xi _{j \to k}^{\left( t \right)}( {x_j^k = w_{j,k}^m} )$ are the messages sent from RN-\emph{k} to UN-\emph{j} and from UN-\emph{j} to RN-\emph{k} in the \emph{t}-th ($1 \le t \le T$) iteration given the codeword $x_j^k$, respectively. $w_{j,k}^m \in {{\cal W}_{j,k}}$ denotes the \emph{m}-th ($1 \le m \le M$) SCMA codeword of user \emph{j} transmitted by subcarrier \emph{k} in codebook ${\cal W}$. Here, ${\mu ^{\left( {t - 1} \right)}}( {x_j^k = w_{j,k}^m})$ denotes the a priori symbol log-likelihood information input in the $\left( {t - 1} \right)$-th iteration. ${\cal R}\left( k \right)\backslash j$ and ${\cal U}\left( j \right)\backslash k$ denote the set ${\cal R}\left( k \right)$ excluding \emph{j} and the set ${\cal U}\left( j \right)$ excluding \emph{k}, respectively. In (\ref{6}), ${\cal N}\left( \cdot \right)$ refers to the normalization function which ensures $\sum\limits_{m = 1}^M {\exp \left[ {\xi _{j \to k}^{\left( t \right)}( {x_j^k = w_{j,k}^m})} \right]} = 1$. Assume that
\begin{equation}
\begin{aligned}
{\xi {_{j \to k}^{\left( t \right)} }}\!^ *\!\left( {x_j^k = w_{j,k}^m} \right) =& {\mu ^{\left( {t - 1} \right)}}\left( {x_j^k = w_{j,k}^m} \right)\\
&+ \sum\limits_{i \in {\cal U}\left( j \right)\backslash k} {\xi _{i \to j}^{\left( {t - 1} \right)}\left( {x_j^i = w_{j,i}^m} \right)}.
\end{aligned}
\label{8}
\end{equation}
Then ${\cal N}\left( \cdot \right)$ can be further expressed as
\begin{equation}
\begin{aligned}
\xi _{j \to k}^{\left( t \right)}\left( {x_j^k = w_{j,k}^m} \right) = &{\xi {_{j \to k}^{\left( t \right)} }}\!^ *\!\left( {x_j^k = w_{j,k}^m} \right)\\
&- \mathop {\max }\limits_{w_{j,k}^o \in {{\cal W}_{j,k}}} \left\{ {{\xi {_{j \to k}^{\left( t \right)} }}\!^ *\!\left( {x_j^k = w_{j,k}^o} \right)} \right\}.
\end{aligned}
\label{9}
\end{equation}
In addition, the vector function $\psi \left( {{{\bm{x}}_{[k]}}} \right)$ in (\ref{7}) can be defined as
\begin{equation}
\psi \left( {{{\bm{x}}_{[k]}}} \right) = - \frac{1}{{{N_0}}}{\left\| {{y^k} - \sum\limits_{i \in {\cal R}\left( k \right)} {h_i^kx_i^k} } \right\|^2},
\label{10}
\end{equation}
where ${{\bm{x}}_{[k]}} = {\left[ {x_i^k} \right]_{i \in {\cal R}\left( k \right)}}$ is a vector comprising all symbols transmitted over the \emph{k}-th subcarrier.
Therefore, the soft information output by each user can be calculated as
\begin{equation}
{Q^{\left( t \right)}}\left( {{{\bm{x}}_j} = {\bm{w}}_j^m} \right) = \sum\limits_{i \in {\cal U}\left( j \right)} {\xi _{i \to j}^{\left( t \right)}\left( {x_j^i = w_{j,i}^m} \right)},
\label{11}
\end{equation}
where ${\bm{w}}_j^m \in {{\cal W}_j}$ ($1 \le m \le M$) represents the \emph{m}-th SCMA codeword of user \emph{j} in codebook ${\cal W}$.
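For reference, one full max-log sweep of (\ref{6})-(\ref{11}) may be sketched as follows. The array layout (a $J \times M \times K$ codebook tensor with zeros off each user's support) and the brute-force enumeration of interfering symbol combinations via \texttt{itertools.product} are implementation choices of this sketch, not requirements of the algorithm.
\begin{verbatim}
# One max-log MPA sweep: UN update (6)/(9), RN update (7) with the
# metric psi of (10), and the per-user output beliefs Q of (11).
import numpy as np
from itertools import product

def mpa_sweep(y, h, W, F, N0, mu, xi_ru):
    """y:(K,) h:(J,K) W:(J,M,K) F:(K,J) mu:(J,M) xi_ru:(K,J,M)."""
    K, J = F.shape
    M = W.shape[1]
    U = [np.flatnonzero(F[:, j]) for j in range(J)]
    R = [np.flatnonzero(F[k]) for k in range(K)]
    xi_ur = np.zeros((J, K, M))                 # UN -> RN messages
    for j in range(J):
        for k in U[j]:
            t = mu[j] + sum(xi_ru[i, j] for i in U[j] if i != k)
            xi_ur[j, k] = t - t.max()           # normalization (9)
    new_ru = np.full((K, J, M), -np.inf)        # RN -> UN messages
    for k in range(K):
        for j in R[k]:
            others = [i for i in R[k] if i != j]
            for m in range(M):
                for combo in product(range(M), repeat=len(others)):
                    s = h[j, k] * W[j, m, k]
                    acc = 0.0
                    for i, mi in zip(others, combo):
                        s += h[i, k] * W[i, mi, k]
                        acc += xi_ur[i, k, mi]
                    val = -abs(y[k] - s) ** 2 / N0 + acc
                    new_ru[k, j, m] = max(new_ru[k, j, m], val)
    Q = np.array([sum(new_ru[k, j] for k in U[j]) for j in range(J)])
    return new_ru, Q
\end{verbatim}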
\subsubsection{Conversion and Calculation of LLRs under FOMS}
\label{sec:3-2-2}
\
\newline
\indent
The soft message output from the SCMA detector will be converted to bit level and further to field symbol level, which follows the JFG connection rigidly. Note that hereinafter we remove the superscript \emph{t} as all operations are performed within the same iteration.
To be more specific, the exchange messages are first converted to the available log-likelihood ratio (LLR) form for the polar decoder. The extrinsic bit LLR of the \emph{e}-th ($1 \le e \le E$) detected SCMA codeword for user \emph{j} can be expressed as
\begin{equation}
{L_{ex,scma}}\left( {{{\hat d}_{j,\left( {e - 1} \right)R + r}}} \right) = \ln \frac{{\sum\limits_{s_{j,e}^i \in {\cal S}{_{j,e}},s_{j,e}^r = 0} {\exp \left( {Q\left( {{{\bm{x}}_{j,e}}} \right)} \right)} }}{{\sum\limits_{s_{j,e}^i \in {\cal S}{_{j,e}},s_{j,e}^r = 1} {\exp \left( {Q\left( {{{\bm{x}}_{j,e}}} \right)} \right)} }},
\label{12}
\end{equation}
where ${\cal S}{_{j,e}} = \left\{ {s_{j,e}^1,s_{j,e}^2, \cdots ,s_{j,e}^R} \right\}$ ($1 \le r,i \le R$ and $i \ne r$) represents a bit set that can be mapped to SCMA codeword ${{\bm{x}}_{j,e}}$ by codebook ${{\cal W}_j}$ of user \emph{j}. In other words, each SCMA codeword message corresponds to \emph{R} bit messages, which conforms to the mapping relationship at the transmitter. Using the Jacobi approximation, we can further derive
\begin{equation}
\begin{aligned}
{L_{ex,scma}}\left( {{{\hat d}_{j,\left( {e - 1} \right)R + r}}} \right) =& \mathop {\max }\limits_{s_{j,e}^i \in {\cal S}{_{j,e}},s_{j,e}^r = 0} Q\left( {{{\bm{x}}_{j,e}}} \right)\\
&- \mathop {\max }\limits_{s_{j,e}^i \in {\cal S}{_{j,e}},s_{j,e}^r = 1} Q\left( {{{\bm{x}}_{j,e}}} \right).
\end{aligned}
\label{13}
\end{equation}
After the extrinsic bit LLRs ${{\bm{L}}_{ex,scma}}( {{{{\bm{\hat d}}}_j}}) = [{{L_{ex,scma}}( {{{\hat d}_{j,1}}} ),{L_{ex,scma}}( {{{\hat d}_{j,2}}}), \cdots ,{L_{ex,scma}}( {{{\hat d}_{j,N}}})}]$ of all \emph{E} codewords for user \emph{j} are obtained, de-interleaving is performed to yield the a priori bit LLRs of the NB-SCL decoder, which can be expressed as
\begin{equation}
{{\bm{L}}_{ap,nb - pc}}\left( {{{{\bm{\hat c}}}_j}} \right) = {\Pi ^{ - 1}}\left( {{{\bm{L}}_{ex,scma}}\left( {{{{\bm{\hat d}}}_j}} \right)} \right).
\label{14}
\end{equation}
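The max-log extraction in (\ref{13}) amounts to two masked maximizations of the belief vector per bit, and the de-interleaving in (\ref{14}) is an inverse permutation. The sketch below assumes a natural binary labeling of the $M$ codebook indices, which is a convention of this illustration rather than part of the codebook design.
\begin{verbatim}
# Extrinsic bit LLRs (13) from one codeword belief vector Q_e, plus
# the de-interleaving of (14) as an inverse permutation.
import numpy as np

def codeword_to_bit_llrs(Q_e, R):
    """Q_e: length-M beliefs; returns the R extrinsic bit LLRs."""
    M = len(Q_e)
    llrs = np.empty(R)
    for r in range(R):
        bit = (np.arange(M) >> (R - 1 - r)) & 1   # r-th label bit
        llrs[r] = Q_e[bit == 0].max() - Q_e[bit == 1].max()
    return llrs

def deinterleave(llrs, perm):
    """perm: the interleaver permutation used at the transmitter."""
    out = np.empty_like(llrs)
    out[perm] = llrs      # inverse of interleaved = original[perm]
    return out
\end{verbatim}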
The de-interleaved bit LLRs will be converted into a priori symbol LLRs of NB-PCs and then input to the NB-SCL decoder. Before giving the conversion operation, we explain the symbol LLRs of NB-PCs. Unlike bit-level LLRs, symbol LLRs involve more than two likelihood values and thus cannot be defined by a single ratio. We define the $n'$-th ($1 \le n' \le N'$) a priori symbol LLR vector of the non-binary polar decoder as
\begin{equation}
\setcounter{equation}{15}
\begin{aligned}
{{\bm{L}}'_{ap,nb - pc}}({\hat c'_{j,n'}}) = [{L'_{ap,nb - pc}}&({\hat c'_{j,n'}} \!=\! 0),{L'_{ap,nb - pc}}({\hat c'_{j,n'}} \!=\! 1),\\
&\cdots \!,{L'_{ap,nb - pc}}({\hat c'_{j,n'}} \!=\! {\alpha ^{q - 2}}){]^T}\!,
\end{aligned}
\label{15}
\end{equation}
where $\alpha $ is the primitive element of the finite field and a certain LLR is defined as
\begin{equation}
{L'_{ap,nb - pc}}\left( {{{\hat c'}_{j,n'}} = \theta } \right) = \ln \frac{{\Pr \left( {{{\hat c'}_{j,n'}} = 0} \right)}}{{\Pr \left( {{{\hat c'}_{j,n'}} = \theta } \right)}},\theta \in {\mathbb {F}_q},
\label{16}
\end{equation}
where ${\mathbb {F}_q}$ denotes the set $\left\{ {0,1,\alpha ,{\alpha ^2}, \cdots ,{\alpha ^{q - 2}}} \right\}$ of all elements over $GF\left( q \right)$.
\begin{theorem}
For user \emph{j}, the $n'$-th NB-PC symbol LLR with estimate $\theta $ can be defined as
\begin{equation}
{L'_{ap,nb - pc}}\left( {{{\hat c'}_{j,n'}} = \theta } \right) = {{\cal X}_\theta }{{\cal L}_{j,n'}},
\label{17}
\end{equation}
where ${{\cal X}_\theta } = \left[ {{v_1},{v_2}, \cdots ,{v_p}} \right]$ is a binary row vector satisfying the mapping relationship $\left\{ {f:{{\cal X}_\theta } \to \theta } \right\}$ over $GF\left( q \right)$ and ${{\cal L}_{j,n'}}$ is a column vector of a priori bit LLRs, which can be written as
\begin{equation}
\begin{aligned}
{{\cal L}_{j,n'}} = [{L_{ap,nb - pc}}({\hat c_{j,\left( {n' - 1} \right)p + 1}}),&{L_{ap,nb - pc}}({\hat c_{j,\left( {n' - 1} \right)p + 2}}),\\
&\cdots ,{L_{ap,nb - pc}}({\hat c_{j,n'p}}){]^T}.
\end{aligned}
\label{18}
\end{equation}
\end{theorem}
\emph{Proof:} Typically, the \emph{n}-th ($1 \le n \le N$) a priori bit LLR of user \emph{j} can be defined as
\begin{equation}
{L_{ap,nb - pc}}\left( {{{\hat c}_{j,n}}} \right) = \ln \frac{{\Pr \left( {{{\hat c}_{j,n}} = 0} \right)}}{{\Pr \left( {{{\hat c}_{j,n}} = 1} \right)}}.
\label{19}
\end{equation}
According to the connection of JFG described in Section \ref{sec:3-1}, we can conclude that ${{\cal P}_{n',j}}$ is associated with ${{\cal I}_{\left( {n' - 1} \right)p + 1,j}}$, ${{\cal I}_{\left( {n' - 1} \right)p + 2,j}}$, $\cdots$, ${{\cal I}_{n'p,j}}$, which implies that $\Pr( {{{\hat c'}_{j,n'}}})$ can be expressed as a joint probability mass function with respect to $\Pr( {{{\hat c}_{j,\left( {n' - 1} \right)p + i}}})$. When ${\hat c'_{j,n'}} = \theta $, the function can be written as
\begin{equation}
\Pr \left( {{{\hat c'}_{j,n'}} = \theta } \right) = \prod\limits_{i = 1}^p {\Pr \left( {{{\hat c}_{j,\left( {n' - 1} \right)p + i}} = {v_i}} \right)} ,{v_i} \in {{\cal X}_\theta }.
\label{20}
\end{equation}
Then, by substituting (\ref{20}) into (\ref{16}), we can derive
\begin{equation}
\begin{aligned}
{{L}'_{ap,nb - pc}}\left( {{{\hat c'}_{j,n'}} = \theta } \right)& = \sum\limits_{i = 1}^p {\ln } \frac{{\Pr \left( {{{\hat c}_{j,\left( {n' - 1} \right)p + i}} = 0} \right)}}{{\Pr \left( {{{\hat c}_{j,\left( {n' - 1} \right)p + i}} = {v_i}} \right)}}\\
&= \!\!\sum\limits_{1 \le i \le p,{v_i} \ne 0} \!\!{{L_{ap,nb - pc}}\left( {{{\hat c}_{j,\left( {n' - 1} \right)p + i}}} \right)}.
\end{aligned}
\label{21}
\end{equation}
We can find that the result of (\ref{21}) is equivalent to (\ref{17}).$\hfill\blacksquare$
Note that ${L'_{ap,nb-pc}}\left( {{{\hat c'}_{j,n'}} = 0} \right)$ is always
equal to 0 and thus does not require calculation via (\ref{17}). Finally, we
can obtain all the a priori symbol LLRs of user \emph{j}, which can be
expressed as ${{\bm{L}}'_{ap,nb - pc}}({{\bm{\hat c'}}_j}) =
\left[{{\bm{L}}'_{ap,nb - pc}}({\hat c'_{j,1}}),{{\bm{L}}'_{ap,nb - pc}}({\hat
c'_{j,2}}),\cdots ,{{\bm{L}}'_{ap,nb - pc}}({\hat c'_{j,N'}})\right]$.
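Theorem 1 thus reduces the bit-to-symbol conversion to at most $p$ additions per field element; a direct sketch is given below, again assuming an MSB-first binary image ${\cal X}_\theta$ of each field element (our convention).
\begin{verbatim}
# Bit-to-symbol LLR conversion of Theorem 1 / (17): the LLR of symbol
# value theta is the sum of the bit LLRs at positions where theta's
# binary image has a 1; L[0] = 0 by the definition (16).
import numpy as np

def bit_to_symbol_llrs(bit_llrs):
    """bit_llrs: the p a priori bit LLRs feeding one symbol node."""
    p = len(bit_llrs)
    L = np.zeros(1 << p)
    for theta in range(1, 1 << p):
        for i in range(p):
            if (theta >> (p - 1 - i)) & 1:
                L[theta] += bit_llrs[i]
    return L
\end{verbatim}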
\subsubsection{NB-SCL Decoding and Early Termination Mechanism}
\label{sec:3-2-3}
\
\newline
\indent
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{PFG.png}
\caption{The example of polar factor graph with $N' = 8$.}
\label{fig7}
\end{figure}
Since the polar decoding process is equivalent for each user, we remove the subscript \emph{j} indicating users for brevity when describing the polar decoder. After the transmitted soft information reaches the input of the non-binary polar decoder, i.e., the lower edge of the polar factor graph in the JFG shown in Fig. \ref{fig5}, the receiver will perform NB-SCL decoding.
To be more specific, as shown in Fig. \ref{fig7}, the non-binary polar factor graph with $N' = 8$ contains $\omega + 1$ columns indexed by $\lambda $ and $N'$ rows indexed by $n'$. Each node $\left( {\lambda ,n'} \right)$ stores soft LLR information ${\bm{L}}_\lambda ^{(n')}$ passed to the left and hard symbol estimate $R_\lambda ^{(n')}$ passed to the right, where $1 \le n' \le N'$ and $0 \le \lambda \le \omega$. Moreover, let ${{\bm{L}}_\lambda } = [{\bm{L}}_\lambda ^{(1)},{\bm{L}}_\lambda ^{(2)}, \cdots ,{\bm{L}}_\lambda ^{(N')}]$ and ${{\bm{R}}_\lambda } = [R_\lambda ^{(1)},R_\lambda ^{(2)}, \cdots ,R_\lambda ^{(N')}]$ denote all LLRs and estimates of the $\lambda $-th column, respectively.
To start with, the soft message ${{\bm{L}}_0}$ of each user's polar decoder is initialized as the received prior information ${{\bm{L}}'_{ap,nb-pc}}\left( {{\bm{\hat c'}}} \right)$, representing the rightmost input in Fig. \ref{fig7}. Then, the message will be passed through the basic computational unit in the factor graph, as shown in the red dashed box. We can obtain the update rules (\ref{23}-\ref{26}) for the ${{\bm{G}}_2}$-based basic computational unit by converting the probability-domain based recursive function \cite{arXiv} to the LLR form.
\begin{equation}
\begin{aligned}
L_\lambda ^{(n')}[\theta ] =& \mathop {\max }\limits_{\varphi \in {{\mathbb {F}}_q}} \left\{ { - \sum\limits_{i = 1}^2 {L_{\lambda - 1}^{({n'_i})}[\varpi _{(0,\varphi )}^i]} } \right\}\\
&- \mathop {\max }\limits_{\varphi \in {{\mathbb {F}}_q}} \left\{ { - \sum\limits_{i = 1}^2 {L_{\lambda - 1}^{({n'_i})}[\varpi _{(\theta ,\varphi )}^i]} } \right\},
\label{23}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
L_\lambda ^{(n' + {2^{\omega - \lambda }})}[\theta ] =& \sum\limits_{i = 1}^2 {L_{\lambda - 1}^{({n'_i})}[\varpi _{(R_\lambda ^{(n')},\theta )}^i]}\\
&- \sum\limits_{i = 1}^2 {L_{\lambda - 1}^{({n'_i})}[\varpi _{(R_\lambda ^{(n')},0)}^i]},
\label{24}
\end{aligned}
\end{equation}
\begin{equation}
R_{\lambda - 1}^{\left( {n'} \right)} = R_\lambda ^{\left( {n'} \right)} + \gamma \cdot R_\lambda ^{\left( {n' + {2^{\omega - \lambda }}} \right)},
\label{25}
\end{equation}
\begin{equation}
R_{\lambda - 1}^{\left( {n' + {2^{\omega - \lambda }}} \right)} = R_\lambda ^{\left( {n' + {2^{\omega - \lambda }}} \right)},
\label{26}
\end{equation}
where $L_\lambda ^{(n')}[\theta ]$ ($\theta \in {\mathbb {F}_q}$) in ${\bm{L}}_\lambda ^{(n')}$ denotes the LLR with the estimated value $\theta $, ${n'_1} = n'$ and ${n'_2} = n' + {2^{\omega - \lambda }}$ index the upper and lower input branches of the basic computational unit, respectively, and $\gamma$ is the Galois field element of the kernel ${{\bm{G}}_2}$ defined in (\ref{2}). Besides, $\varpi _{\bm{\alpha}} ^i$ in (\ref{23}-\ref{24}) can be calculated as
\begin{equation}
{{\bm{\varpi}} _{\bm{\alpha}} } = \left[ {\varpi _{\bm{\alpha}} ^1,\varpi _{\bm{\alpha}} ^2} \right] = {\bm{\alpha}} \cdot {{{\bm{G}}}_2},
\label{27}
\end{equation}
where ${\bm{\alpha }} \in {\mathbb {F}}_q^2$.
Note that $R_\lambda ^{(n')}$ in (\ref{25}-\ref{26}) actually represents the result of the re-encoding process for the decision symbols, i.e., the partial sum. In other words, (\ref{25}-\ref{26}) can be interpreted as partial sum update functions over $GF(q)$. The polar decoder updates them after decoding each symbol.
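In LLR form, the basic computational unit therefore consists of a check-type update (\ref{23}) and a variable-type update (\ref{24}); a naive $O(q^2)$ realization is sketched below, keeping the convention $L[x] = \ln \left( {\Pr (0)/\Pr (x)} \right)$ of (\ref{16}). The field multiply (with an assumed primitive polynomial) is restated for self-containedness.
\begin{verbatim}
# LLR-domain f/g updates (23)-(24) of the basic computational unit.
# L_top / L_bot: the two length-q input vectors at nodes
# (lambda-1, n') and (lambda-1, n' + 2^(omega-lambda)).
import numpy as np

def gf_mul(a, b, poly=0x13, p=4):   # GF(2^p) multiply (poly assumed)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> p:
            a ^= poly
    return r

def f_update(L_top, L_bot, gamma, poly=0x13, p=4):
    """Check-type update (23) by max-log enumeration over phi."""
    q = 1 << p
    m = np.full(q, -np.inf)
    for theta in range(q):
        for phi in range(q):
            w1 = theta ^ gf_mul(gamma, phi, poly, p)  # varpi^1 of (27)
            m[theta] = max(m[theta], -L_top[w1] - L_bot[phi])
    return m[0] - m                                   # LLRs of (23)

def g_update(L_top, L_bot, u_hat, gamma, poly=0x13, p=4):
    """Variable-type update (24) given the partial sum u_hat."""
    q = 1 << p
    s = np.array([L_top[u_hat ^ gf_mul(gamma, t, poly, p)] + L_bot[t]
                  for t in range(q)])
    return s - s[0]                                   # LLRs of (24)
\end{verbatim}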
For the path metric of the NB-SCL decoder, we adjust the hardware-friendly function of the NB-PC path metric in \cite{9120673} to the compatible form with the LLR defined in (\ref{16}). Hence, for any path $\ell $ and level $n'$, the path metric $\rho _\ell ^{(n')}$ of the NB-SCL decoder can be calculated recursively as
\begin{equation}
\rho _\ell ^{(n')} = \rho _\ell ^{(n' - 1)} + L_\omega ^{(n')}{\left[ \eta \right]_{\left\langle \ell \right\rangle }} - \mathop {\min }\limits_{\theta \in {\mathbb {F}_q}} L_\omega ^{(n')}{\left[ \theta \right]_{\left\langle \ell \right\rangle }},
\label{28}
\end{equation}
where $L_\omega ^{(n')}{\left[ \cdot \right]_{\left\langle \ell \right\rangle }}$ is the decision LLR for a given path $\ell $, representing the LLR at the decision layer shown in Fig. \ref{fig7}, and $\eta \in {\mathbb {F}_q}$ is the $n'$-th estimate for the $\ell $-th path.
As a result, the decoded sequences with smaller path metrics can survive as candidates. Given a list size \emph{l} for the NB-SCL decoder, the \emph{l} most reliable paths and corresponding metrics of user \emph{j} are expressed as ${{\bm{\Lambda}} '_j} = \left[ {{{{\bm{\hat a}'}}_{{j_1}}},{{{\bm{\hat a}'}}_{{j_2}}}, \cdots ,{{{\bm{\hat a}'}}_{{j_l}}}} \right]$ and ${{\bm{P}}_j} = [\rho _{{j_1}}^{(N')},\rho _{{j_2}}^{(N')}, \cdots ,\rho _{{j_l}}^{(N')}]$, respectively, where ${{\bm{\hat a}'}_{{j_\ell }}}$ and $\rho _{{j_\ell }}^{(N')}$ ($1 \le \ell \le l$) represent the $\ell $-th candidate symbol sequence and path metric, respectively. The codeword symbol sequences of the surviving paths are denoted as ${{\bm{{\cal C}}}'_j} = \left[ {{{{\bm{\hat c}'}}_{{j_1}}},{{{\bm{\hat c}'}}_{{j_2}}}, \cdots ,{{{\bm{\hat c}'}}_{{j_l}}}} \right]$, i.e., the hard estimate information ${{\bm{R}}_0}$ reaching the rightmost side, where ${{\bm{\hat c}'}_{{j_\ell }}}$ is the $\ell $-th codeword symbol sequence. After bit mapping, we can get \emph{l} candidate bit paths ${\bm{\Lambda} _j} = \left[ {{{{\bm{\hat a}}}_{{j_1}}},{{{\bm{\hat a}}}_{{j_2}}}, \cdots ,{{{\bm{\hat a}}}_{{j_l}}}} \right]$ and codeword bits ${\bm{{\cal C}}_j} = \left[ {{{{\bm{\hat c}}}_{{j_1}}},{{{\bm{\hat c}}}_{{j_2}}}, \cdots ,{{{\bm{\hat c}}}_{{j_l}}}} \right]$.
Assume that the $\ell $-th candidate path ${{\bm{\hat a}}_{{j_\ell }}}$ passes the CRC with the smallest path metric; if all paths fail the CRC, ${{\bm{\hat a}}_{{j_\ell }}}$ simply denotes the path with the smallest metric. The NB-SCL decoder then outputs the parameter ${j_\ell }$ for the next stage and performs the CRC check for the next user. In the last iteration, the information bits corresponding to path ${{\bm{\hat a}}_{{j_\ell }}}$ are directly output as the estimated sequence ${{\bm{\hat u}}_j}$ of user \emph{j}. The receiver performs ET and directly outputs the estimated information bits ${{\bm{\hat u}}_1},{{\bm{\hat u}}_2}, \cdots ,{{\bm{\hat u}}_J}$ for all users if and only if the optimal path selected by each user passes the CRC.
\subsubsection{Damping based Extrinsic Message Reconstruction and Prior Information Update}
\label{sec:3-2-4}
\
\newline
\indent
The damping technique is an effective scheme to mitigate the error propagation problem and accelerate convergence \cite{9234100}. In this paper, we introduce a damping factor $\varepsilon \in (0,1]$ to compress the extrinsic message output by the polar decoder.
After NB-SCL decoding, the codeword bits ${{\bm{{\cal C}}}_j}$, the corresponding path metric ${{\bm{P}}_j}$, and the index ${j_\ell }$ of the selected path will be used for extrinsic message reconstruction. In this paper, we use the Bayes rule to calculate the likelihood information of codeword bits. The path metric $\rho _{{j_\ell }}^{(N')}$ of each candidate path is first normalized to
\begin{equation}
{\delta _{{j_\ell }}} = \frac{{\exp \left( { - \rho _{{j_\ell }}^{(N')}} \right)}}{{\sum\limits_{1 \le i \le l} {\exp \left( { - \rho _{{j_i}}^{(N')}} \right)} }}.
\label{29}
\end{equation}
Then, the probability that the \emph{n}-th ($1 \le n \le N$) bit takes $\phi $ ($\phi \in \left\{ {0,1} \right\}$) can be obtained by
\begin{equation}
\Pr \left( {{{\hat c}_{j,n}} = \phi } \right) = \sum\limits_{1 \le \ell \le l,\,{{\hat c}_{{j_\ell },n}} = \phi } {{\delta _{{j_\ell }}}}.
\label{30}
\end{equation}
Therefore, the extrinsic bit LLR of the NB-SCL decoder can be written as
\begin{equation}
{L_{ex,nb-pc}}\left( {{{\hat c}_{j,n}}} \right) = \left\{ {\begin{array}{*{20}{l}}
{ - \infty }&{\Pr \left( {{{\hat c}_{j,n}} = 0} \right) = 0}\\
{\ln \frac{{\Pr \left( {{{\hat c}_{j,n}} = 0} \right)}}{{\Pr \left( {{{\hat c}_{j,n}} = 1} \right)}}}&\begin{array}{l}
\Pr \left( {{{\hat c}_{j,n}} = 0} \right) \ne 0 \\
\&\Pr \left( {{{\hat c}_{j,n}} = 1} \right) \ne 0
\end{array}\\
{ + \infty }&{\Pr \left( {{{\hat c}_{j,n}} = 1} \right) = 0}
\end{array}} \right..
\label{31}
\end{equation}
We can correct the extrinsic bit LLR of user \emph{j} with the selected codeword bit path ${{\bm{\hat c}}_{{j_\ell }}}$, which can be expressed as
\begin{equation}
{L_{ex,nb-pc}}\left( {{{\hat c}_{j,n}}} \right) = \left( {1 - 2{{\hat c}_{{j_\ell },n}}} \right)\left| {{L_{ex,nb-pc}}\left( {{{\hat c}_{j,n}}} \right)} \right|.
\label{32}
\end{equation}
Before being sent to the SCMA detector, the messages in the current iteration are directly damped by
\begin{equation}
{\bm{L}}_{ex,nb - pc}^{(t)}\left( {{{{\bm{\hat c}}}_j}} \right) = \varepsilon {\bm{L}}_{ex,nb - pc}^{(t)}\left( {{{{\bm{\hat c}}}_j}} \right),
\label{33}
\end{equation}
where ${\bm{L}}_{ex,nb - pc}^{(t)}$ denotes the extrinsic message of NB-PCs in the \emph{t}-th iteration.
Note that the extrinsic messages of SCMA are not damped, since damping the extrinsic messages output by either the MPA detector or the polar decoder leads to a similar a priori information behavior.
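For reference, a compact sketch of the damping-based reconstruction (\ref{29}-\ref{33}) is given below; the array shapes and the numerically stabilizing metric shift are assumptions of this sketch.
\begin{verbatim}
import numpy as np

def reconstruct_extrinsic(path_metrics, codeword_bits, best, eps):
    """path_metrics: (l,) metrics rho^{(N')} of the list;
    codeword_bits: (l, N) hard codeword bits per path;
    best: index of the CRC-selected path; eps: damping."""
    # (29): normalize metrics to path posteriors (the shift
    # by the minimum is for numerical stability only)
    delta = np.exp(-(path_metrics - path_metrics.min()))
    delta /= delta.sum()
    # (30): per-bit probability of deciding 0
    p0 = delta @ (codeword_bits == 0)
    # (31): bit LLRs, +/-inf covering the degenerate cases
    with np.errstate(divide="ignore"):
        llr = np.log(p0) - np.log(1.0 - p0)
    # (32): align the sign with the selected path's bits
    llr = (1 - 2 * codeword_bits[best]) * np.abs(llr)
    # (33): damp before feeding the SCMA detector
    return eps * llr
\end{verbatim}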
After the damping-aided extrinsic message reconstruction, we obtain the extrinsic information of the polar decoder, which is then interleaved as the a priori bit LLR of the SCMA detector as
\begin{equation}
{{\bm{L}}_{ap,scma}}({{\bm{{\hat d}}}_j}) = \Pi \left( {{{\bm{L}}_{ex,nb - pc}}\left( {{{{\bm{\hat c}}}_j}} \right)} \right).
\label{34}
\end{equation}
Finally, the interleaved bit LLRs are remapped to the symbol log-likelihood of
SCMA and are subsequently sent into the SCMA detector as the a priori
information for the next iteration, which can be expressed as (\ref{36}) by
employing the Jacobi approximation.
\begin{equation}
\begin{aligned}
\mu \left( {x_{j,e}^k} \right) = &\sum\limits_{s_{j,e}^r \in {\cal
S}{_{j,e}}} {\Big\{ \left( {1 - s_{j,e}^r}
\right){L_{ap,scma}}({{{\hat
d}_{j,\left( {e - 1} \right)R + r}}})}\\
&- \max \left[ {0,{L_{ap,scma}}( {{{\hat d}_{j,\left( {e - 1} \right)R
+ r}}})} \right] \Big\}
\label{36}
\end{aligned}
\end{equation}
Note that the calculation of $\mu \left( {x_{j,e}^k} \right)$ in (\ref{36}) is
identical for $\forall k \in \left[ {1,K} \right]$, which only depends on the
\emph{e}-th codeword mapped by the codebook ${{\cal W}_j}$. The proposed
NSD-JIDD algorithm for the NB-PC-SCMA system is summarized in Algorithm 1.
\begin{algorithm}[t!]
\caption{NB-SCL and Damping Based Joint Iterative Detection and Decoding}
\LinesNumbered
\KwIn{received signals $\bm{Y}$, maximum number of iterations \emph{T} and damping factor $\varepsilon $}
\KwOut{hard decisions of decoded bits ${{\bm{\hat u}}_1},{{\bm{\hat u}}_2}, \cdots ,{{\bm{\hat u}}_J}$}
\textbf{Initialization:} $\xi _{j \to k}^{( 0 )}( {x_j^k = w_{j,k}^m}) = 0$ and ${\mu ^{( 0 )}}( {x_j^k = w_{j,k}^m} ) = \log \frac{1}{M}$ for $\forall w_{j,k}^m \in {{\cal W}_{j,k}}$, $j = 1,2, \cdots ,J$ and $k = 1,2, \cdots ,K$\\
\For{$t = 1,2, \cdots ,T$}
{
Perform SCMA detection using (\ref{6}-\ref{7}) and (\ref{11})\;
\For{$e = 1,2, \cdots ,E$}
{
\For{$j = 1,2, \cdots ,J$}
{
Calculate extrinsic bit LLRs ${{\bm{L}}_{ex,scma}}( {{{{\bm{\hat d}}}_j}})$ of SCMA using (\ref{12});
}
}
\For{$j = 1,2, \cdots ,J$}
{
De-interleave ${{\bm{L}}_{ex,scma}}( {{{{\bm{\hat d}}}_j}})$ to obtain a priori bit LLRs ${{\bm{L}}_{ap,nb - pc}}( {{{{\bm{\hat c}}}_j}})$ of NB-PCs\;
\For{$n' = 1,2, \cdots ,N'$}
{
Calculate the a priori symbol LLRs ${{\bm{L}}'_{ap,nb - pc}}( {{{{\bm{\hat c'}}}_j}})$ of NB-PCs for $\forall \theta \in {{\mathbb F}_q}$ using (\ref{17}) to give the input of the NB-SCL decoder;
}
Perform NB-SCL decoding for user \emph{j} using (\ref{23}-\ref{28}) to obtain ${\bm{\Lambda} _j}$, ${\bm{{\cal C}}_j}$ and ${{\bm{P}}_j}$\;
}
\If {CRC passes}
{
Output the decoded sequences ${\bm{\hat u}_1},{{\bm{\hat u}}_2}, \cdots ,{{\bm{\hat u}}_J}$\;
Break; \ \tcp{Activate ET}
}
\For{$j = 1,2, \cdots ,J$}
{
Perform damping based extrinsic message reconstruction using (\ref{29}-\ref{33})\;
Interleave ${{\bm{L}}_{ex,nb - pc}}\left( {{{{\bm{\hat c}}}_j}} \right)$ to get a priori bit LLRs ${{\bm{L}}_{ap,scma}}( {{{{\bm{\hat d}}}_j}})$ of SCMA\;
}
\For{$e = 1,2, \cdots ,E$}
{
\For{$j = 1,2, \cdots ,J$}
{
Calculate the a priori symbol log-likelihood of SCMA employing (\ref{36});
}
}
}
Get the decoded sequences ${\bm{\hat u}_1},{{\bm{\hat u}}_2}, \cdots ,{{\bm{\hat u}}_J}$\;
\end{algorithm}
\section{Improved Scheme of NSD-JIDD Algorithm}
\label{sec:4}
In this section, we conceive an improved implementation of NSD-JIDD, namely the ISD-JIDD algorithm. To be more specific, the L-NB-SCL decoding algorithm is proposed in Section \ref{sec:4-1} to simplify the path search pattern in NB-SCL decoding, leading to a complexity reduction. Then, in Section \ref{sec:4-2}, we modify the UN update for SCMA detection to mitigate the convergence errors during the iterations.
\begin{table*}[!t]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{The available range of lazy search rate with different code lengths and code rates.}
\label{table2}
\begin{tabular}{cccccccccccccc}
\toprule[1pt]
\multirow{2}{*}{\emph{q}} & \multirow{2}{*}{$\beta$} & \multicolumn{3}{c}{$N' = 64$} & \multicolumn{3}{c}{$N' = 128$} & \multicolumn{3}{c}{$N' = 256$} & \multicolumn{3}{c}{$N' = 512$} \\
\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8}\cmidrule(lr){9-9}\cmidrule(lr){10-10}\cmidrule(lr){11-11}\cmidrule(lr){12-12}\cmidrule(lr){13-13}\cmidrule(lr){14-14}
& & ${R_c} = \frac{1}{3}$ & ${R_c} = \frac{1}{2}$ & ${R_c} = \frac{2}{3}$ & ${R_c} = \frac{1}{3}$ & ${R_c} = \frac{1}{2}$ & ${R_c} = \frac{2}{3}$ & ${R_c} = \frac{1}{3}$ & ${R_c} = \frac{1}{2}$ & ${R_c} = \frac{2}{3}$ & ${R_c} = \frac{1}{3}$ & ${R_c} = \frac{1}{2}$ & ${R_c} = \frac{2}{3}$ \\
\cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}\cmidrule(lr){12-14}
\multirow{2}{*}{4} & $\beta _{\min }^ *$ & 0.524 & 0.536 & 0.512 & 0.581 & 0.578 & 0.565 & 0.718 & 0.711 & 0.684 & 0.825 & 0.816 & 0.763 \\
& $\beta _{\max }^ *$ & 0.667 & 0.656 & 0.628 & 0.767 & 0.734 & 0.729 & 0.859 & 0.836 & 0.801 & 0.942 & 0.918 & 0.850 \\
\midrule[0.7pt]
\multirow{2}{*}{16} & $\beta _{\min }^ *$ & 0.571 & 0.563 & 0.558 & 0.721 & 0.719 & 0.706 & 0.871 & 0.859 & 0.825 & 0.924 & 0.910 & 0.889 \\
& $\beta _{\max }^ *$ & 0.762 & 0.750 & 0.721 & 0.837 & 0.828 & 0.812 & 0.941 & 0.930 & 0.895 & 0.959 & 0.941 & 0.936\\
\bottomrule[1pt]
\end{tabular}
\end{table*}
\begin{figure*}[!t]
\centering
\subfloat[Decoding tree of NB-SCL.]{\includegraphics[width=3.5in]{N-SCL.jpg}}
\label{fig8-a}
\hfil
\subfloat[Decoding tree of L-NB-SCL.]{\includegraphics[width=3.5in]{LN-SCL.jpg}}
\label{fig8-b}
\caption{Example of a non-binary polarized decoding tree, where $N' = 4$ and $l = 2$.}
\label{fig8}
\end{figure*}
\subsection{Lazy-Search Based NB-SCL Decoder}
\label{sec:4-1}
Since NB-SCL decoding adopts a globally optimal search strategy over $GF(q)$, the decoder searches \emph{q} paths at each information position, leading to redundant computation. Monte Carlo simulation shows that the symbols at certain positions are always decoded correctly. Therefore, these highly reliable positions can be decided directly during decoding. In other words, we introduce a local lazy search for NB-SCL decoding. The resultant decoding algorithm, i.e., L-NB-SCL decoding, only searches paths at unreliable positions identified by Monte Carlo simulations.
After the Monte Carlo simulation, let the set ${\cal B}$ denote the positions where the decision can be made directly, i.e., the highly reliable positions. To be more specific, ${g_i}$ denotes the error rate of the \emph{i}-th ($1 \le i \le N'$) symbol in the Monte Carlo simulation. Then, we define the set of lazy positions as ${\cal B} = \left\{ {\left. i \right|{g_i} \le {g_{th}}} \right\}$ with cardinality $\chi = \left| {\cal B} \right|$, where ${g_{th}}$ is the selected threshold with the constraint $0 \le {g_{th}} \le {10^{ - 3}}$ (assuming 10,000 Monte Carlo simulations are performed). Naturally, the set ${\cal B}$ is a subset of the information position set ${\cal A}$. The lazy search rate is then defined as $\beta = {\chi\left/\right.{D'}}$. In particular, the lazy search rates at ${g_{th}} = 0$ and ${g_{th}} = {10^{ - 3}}$ are denoted as $\beta _{\min }^ *$ and $\beta _{\max }^ *$, respectively. Therefore, we have the bound $\beta \in \left[ {\beta _{\min }^ * ,\beta _{\max }^ * } \right]$.
Taking the $GF(4)$ and $GF(16)$ based NB-PC as an example, Table \ref{table2} gives the thresholds of $\beta$ for different code lengths $N'$ and code rates ${R_c}$, where Monte Carlo simulations are carried out at a signal-to-noise ratio of 2 dB. To obtain complexity reduction while maintaining performance, an appropriate $\beta$ is required to balance the trade-off between complexity and BER performance.
Different from the NB-SCL decoder, the L-NB-SCL decoder performs a lazy search, i.e., a hard decision that reduces path splitting, if $n' \in {\cal B}$. The hard decision function for the estimate ${\hat a'_{\ell ,n'}}$ in the $\ell $-th path can be expressed as
\begin{equation}
{\hat a'_{\ell ,n'}} = \mathop {\arg \min }\limits_{\theta \in {{\mathbb {F}}_q}} L_\omega ^{({n'})}{\left[ \theta \right]_{\left\langle \ell \right\rangle }}.
\label{37}
\end{equation}
Note that if $n' \in {\cal B}$, an update of the path metric is no longer required, i.e., $\rho _\ell ^{({n'})} = \rho _\ell ^{({n' - 1})}$, since the selected path complies with the SC decision, which contributes a penalty of 0 in (\ref{28}).
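A minimal sketch of the lazy mechanism is given below, with the Monte Carlo error rates $g_i$ supplied as a list (names are illustrative):
\begin{verbatim}
import numpy as np

def lazy_set(error_rates, g_th):
    """Set B: positions whose Monte Carlo symbol error
    rate g_i does not exceed the threshold g_th."""
    return {i for i, g in enumerate(error_rates, start=1)
            if g <= g_th}

def lazy_decision(llr):
    """Eq. (37): direct hard decision at a position in B;
    the path metric is left unchanged because the chosen
    symbol contributes a zero penalty in (28)."""
    return int(np.argmin(llr))
\end{verbatim}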
The path search process in SCL decoding can be characterized by a decoding tree. Fig. \ref{fig8}(a) and Fig. \ref{fig8}(b) show the $GF(4)$-based NB-SCL decoding tree and L-NB-SCL decoding tree, respectively. Typically, the $N'$-length $GF(q)$-based NB-PC decoding tree is a \emph{q}-ary tree with depth $N'$. Here, the topmost root node denotes a null state and the number adjacent to each node denotes the corresponding path metric. As shown in Fig. \ref{fig8}, compared with NB-SCL decoding, L-NB-SCL decoding splits only one search path when the level $n' \in {\cal B}$. Generally, L-NB-SCL decoding reaches list saturation more slowly and avoids redundant global search.
Assume that the list is not full, with ${l_{pre}}$ paths surviving at the previous level. If the current stage belongs to ${\cal B}$, the number of search paths for L-NB-SCL and NB-SCL is ${l_{now}} = {l_{pre}}$ and ${l_{now}} = q{l_{pre}}$, respectively. Then, at the next information level belonging to ${{\cal B}^c}$ (the complement of the set ${\cal B}$), the numbers of search paths are ${l_{now}} = q{l_{pre}}$ and ${l_{now}} = {q^2}{l_{pre}}$, respectively. Thus, the calculation of $\left( {{q^2} - q} \right){l_{pre}}$ LLRs and path metrics is saved by L-NB-SCL decoding; for instance, with $q = 16$, each such position saves $240{l_{pre}}$ LLR and path metric computations.
The L-NB-SCL decoding is summarized in Algorithm 2, where ${{\bm{R}}_{\lambda \left\langle \ell \right\rangle }} = [ {{R{_\lambda ^{\left( 1 \right)}}}\!_{\left\langle \ell \right\rangle },R{{_\lambda ^{\left( 2 \right)}}\!_{\left\langle \ell \right\rangle }}, \cdots ,R{{_\lambda ^{( {N'})}}\!_{\left\langle \ell \right\rangle }}}]$ ($0 \le \lambda \le w$) denotes the right message of the $\lambda $-th column in the $\ell $-th factor graph. Note that for simplicity, the subscript \emph{j} representing user is omitted in Algorithm 2.
\begin{algorithm}[t!]
\caption{Lazy-Search Based NB-SCL Decoding}
\LinesNumbered
\KwIn{maximum search width \emph{l}, information position set ${{\cal A}}$, high-reliable position set ${\cal B}$ and LLR initial value ${{\bm{L}}_0}$}
\KwOut{path metrics ${\bm{P}} = [ {\rho _1^{(N')},\rho _2^{( {N'} )}, \cdots ,\rho _l^{({N'})}} ]$, \emph{l} bit paths ${{\bm{\hat a}}_\ell } = \left[ {{{\hat a}_{\ell ,1}},{{\hat a}_{\ell ,2}}, \cdots ,{{\hat a}_{\ell ,N'}}} \right]$ and codeword bits ${{\bm{\hat c}}_\ell } = \left[ {{{\hat c}_{\ell ,1}},{{\hat c}_{\ell ,2}}, \cdots ,{{\hat c}_{\ell ,N'}}} \right]$}
\textbf{Initialization:} ${\mathbb {L}} = \left\{ 1 \right\}$, $\rho _1^{\left( 0 \right)} = 0$\\
{
\For{$n' = 1,2, \cdots ,N'$}
{
Update left message $L_\omega ^{({n'})}{[\eta ]_{\left\langle \ell \right\rangle }}$ for $\forall \ell \in {\mathbb {L}}$ and $\eta \in {{\mathbb F}_q}$ using (\ref{23}-\ref{24})\;
\eIf{$n' \notin {\cal A}$}
{
${\hat a'_{\ell ,n'}} \leftarrow 0$ for $\forall \ell \in {\mathbb {L}}$\;
Calculate the path metric $\rho _\ell ^{({n'})}$ for $\forall \ell \in {\mathbb {L}}$ using (\ref{28})\;
}
{
\eIf{$n' \in {\cal B}$}
{
Estimate ${\hat a'_{\ell ,n'}}$ for $\forall \ell \in {\mathbb {L}}$ using (\ref{37})\;
$\rho _\ell ^{( {n'})} \leftarrow \rho _\ell ^{( {n' - 1} )}$ for $\forall \ell \in {\mathbb {L}}$\;
}
{
Calculate temporary metric $\rho _{\ell ,\eta }^{temp}$ for $\forall \ell \in {\mathbb {L}}$ and $\eta \in {{\mathbb F}_q}$ using (\ref{28})\;
Select $\min \left\{ {l,q\left| {\mathbb L} \right|} \right\}$ smallest $\rho _{\ell ,\eta }^{temp}$ and get survived symbol set ${{\mathbb Y}_\ell }$ of each path\;
\For{$\ell \in {\mathbb {L}}$}
{
\eIf{$\rho _{\ell ,\eta }^{temp}$ is selected}
{
$( {{{\hat a'}_{\ell ,n'}},\rho _\ell ^{( {n'})}}) \leftarrow ( {\eta ,\rho _{\ell ,\eta }^{temp}})$\;
\If{$\left| {{{\mathbb Y}_\ell }} \right| > 1$}
{
\For{$\theta \in {{\mathbb Y}_\ell }\backslash \eta $}
{
Clone the path $\ell $ to a new path $\ell '$, ${\mathbb {L}} \leftarrow {\mathbb {L}} \cup \ell '$\;
$( {{{\hat a'}_{\ell ',n'}},\rho _{\ell '}^{( {n'} )}} ) \leftarrow ( {\theta ,\rho _{\ell,\theta }^{temp}} )$\;
}
}
}
{
Kill the path $\ell $ as ${\mathbb {L}} \leftarrow {\mathbb {L}} \backslash \ell $\;
}
}
}
}
${{R_0^{(n')}}_{\left\langle \ell \right\rangle}} \leftarrow {\hat a'_{\ell ,n'}}$\;
Update right message employing (\ref{25}-\ref{26})\;
}
\For{$\ell = 1,2, \cdots ,l$}
{
${{\bm{\hat c'}}_\ell } \leftarrow {{\bm{R}}_{0\left\langle \ell \right\rangle }}$\;
Map ${{\bm{\hat a'}}_\ell }$ and ${{\bm{\hat c'}}_\ell }$ to ${{\bm{\hat a}}_\ell }$ and ${{\bm{\hat c}}_\ell }$, respectively
}
}
\end{algorithm}
\subsection{Modified MPA Detection}
\label{sec:4-2}
During the UN update of SCMA detection, the NSD-JIDD scheme employs the SCMA a priori information from the decoder and the information passed by RN in the prior iteration. However, as another constituent of the receiver, the SCMA detector outputs unreliable likelihood information at low ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$. To be specific, the information passed by RN commonly suffers a convergence error when ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$ is low, which leads to a BER degradation.
Explicitly, the dominant belief information passed by the UN should be the a priori information ${\mu ^{\left( {t - 1} \right)}}$ in (\ref{6}) since the latest a priori information from the non-binary polar decoder provides high reliability. By contrast, the passed information $\xi _{i \to j}^{\left( {t - 1} \right)}$ in the prior iteration is not an instant message and performs weakly at low ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$. After $\xi _{i \to j}^{\left( {t - 1} \right)}$ is employed to update ${{\bm{L}}_{ex,scma}}$ and ${{\bm{L}}_{ap,nb - pc}}$, the information ${\mu ^{\left( {t - 1} \right)}}$ will contain the main messages in $\xi _{i \to j}^{\left( {t - 1} \right)}$ and will be updated again.
As such, we modify the MPA detection in NSD-JIDD by removing the second term $\xi _{i \to j}^{\left( {t - 1} \right)}$ in (\ref{6}). In addition, the a priori information ${\mu ^{\left( {t - 1} \right)}}$ is strictly normalized since ${{\bm{L}}_{ex,nb - pc}}$ is constructed by the Bayes rule. Then, the ${\cal N}\left( \cdot \right)$ operation can be discarded, thus saving calculations. The modified UN update can be expressed as
\begin{equation}
\xi _{j \to k}^{\left( t \right)}\left( {x_j^k = w_{j,k}^m} \right) = {\mu ^{\left( {t - 1} \right)}}\left( {x_j^k = w_{j,k}^m} \right).
\label{38}
\end{equation}
Then, we can obtain the modified RN update rule by substituting (\ref{38}) into (\ref{7}) as follows,
\begin{equation}
\begin{aligned}
&\xi _{k \to j}^{\left( t \right)}\left( {x_j^k = w_{j,k}^m} \right)\\ = &\mathop {\max }\limits_{\scriptstyle x_i^k \in {{\cal W}_{i,k}},i \in {\cal R}\left( k \right)\backslash j\atop
\scriptstyle x_j^k = w_{j,k}^m} \!\! \left\{ {\psi \left( {{{\bm{x}}_{[k]}}} \right) + \!\!\!\!\!\!\!\!\!\!\!\! \sum\limits_{x_i^k \in {{\bm{x}}_{[k]}},i \in {\cal R}\left( k \right)\backslash j}\!\!\!\! \!\!\!\!\!\!{{\mu ^{\left( {t - 1} \right)}}\left( {x_i^k} \right)} } \right\}.
\end{aligned}
\label{39}
\end{equation}
Therefore, the UN update in the modified MPA can be merged into the a priori information update. In this way, ISD-JIDD does not include a separate UN update process, and the MPA detection stage can be represented by (\ref{39}).
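A brute-force sketch of the modified RN update (\ref{39}) in the max-log domain is given below; the tensor \texttt{psi} holding $\psi({\bm{x}}_{[k]})$ for every codeword combination and the dictionary \texttt{mu\_prev} are assumptions of this sketch, and the exhaustive enumeration is for clarity rather than efficiency.
\begin{verbatim}
import itertools
import numpy as np

def modified_rn_update(psi, mu_prev, users, j, M):
    """Eq. (39): message from RN k to UN j.
    psi: d-dim array, psi[m_1,...,m_d] for the d users
    colliding on resource k (ordered as in `users`);
    mu_prev[i][m]: a priori log-likelihood of user i."""
    others = [i for i in users if i != j]
    msg = np.full(M, -np.inf)
    for m_j in range(M):
        for combo in itertools.product(range(M),
                                       repeat=len(others)):
            idx = {j: m_j, **dict(zip(others, combo))}
            val = psi[tuple(idx[i] for i in users)]
            val += sum(mu_prev[i][idx[i]] for i in others)
            msg[m_j] = max(msg[m_j], val)
    return msg
\end{verbatim}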
The partial message passing for NSD-JIDD and ISD-JIDD is described in Fig. \ref{fig9}(a) and Fig. \ref{fig9}(b), respectively, where the same parameters as in Fig. \ref{fig5} are considered. For NSD-JIDD, the messages passed by each RN are stored for the UN update in the next iteration. For ISD-JIDD, in contrast, each UN is integrated with \emph{R} INs instead of storing previous messages, which promptly takes advantage of the updated messages.
\begin{figure}[!t]
\centering
\subfloat[NSD-JIDD.]{\includegraphics[width=3in]{MP_of_JIDD2.png}}
\label{fig9-a}
\subfloat[ISD-JIDD.]{\includegraphics[width=3in]{MP_of_OJIDD2.png}}
\label{fig9-b}
\caption{Partial message passing on the factor graph.}
\label{fig9}
\end{figure}
The overall steps of the ISD-JIDD algorithm are identical to the NSD-JIDD algorithm, only with the improvement of SCMA detection and polar decoding. Specifically, if we replace step 3 in Algorithm 1 by (\ref{39}) and (\ref{11}) to update the UNs and calculate soft messages, respectively, and step 11 performs the L-NB-SCL decoding in Algorithm 2, then the procedures complete the ISD-JIDD algorithm.
\section{Results and Discussions}
\label{sec:5}
Simulation results of the proposed NB-PC-SCMA system over additive white Gaussian noise (AWGN) and Rayleigh fading channels are given and analyzed in this section. In particular, the BER performance, computational complexity, and latency are characterized in Sections \ref{sec:5-1} to \ref{sec:5-3}, respectively.
\subsection{BER Performance}
\label{sec:5-1}
In this section, the error performance of the proposed NB-PC-SCMA system is evaluated. The SCMA codebook used in the simulation is designed according to \cite{6966170}. Here, we define the \emph{M}-dimension SCMA codebook with \emph{J} users and \emph{K} resources as $\left( {J,K,M} \right)$. For the receiver, the number of inner iterations for max-log-MPA detection is set to 1. NSD-JIDD and ISD-JIDD algorithms with 5 outer iterations are employed for multi-user detection. The NB-PCs transmitted over AWGN and Rayleigh fading channels are constructed by the Monte Carlo method at ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}} = 2$ dB and ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}} = 4$ dB, respectively, where the kernel parameter $\gamma$ is set according to \cite{8625284}. Unless otherwise specified, the field order \emph{q} of the NB-PCs is 16. In addition, 16-CRC and 24-CRC are employed for the NB-PCs with $N = 256$ and $N = 1024$, respectively. To limit the computational complexity, the list size \emph{l} is set to 8.
\begin{figure}[!t]
\centering
\subfloat[AWGN channels.]{\includegraphics[width=3in]{DF1.png}}
\label{fig10-a}
\subfloat[Rayleigh fading channels.]{\includegraphics[width=3in]{DF2.png}}
\label{fig10-b}
\caption{BER performance with different $\varepsilon $, where $N = 256$ and ${R_c} = {1 \mathord{\left/{\vphantom {1 2}} \right.\kern-\nulldelimiterspace} 2}$.}
\label{fig10}
\end{figure}
Fig. \ref{fig10}(a) and Fig. \ref{fig10}(b) characterize the BER performance of NSD-JIDD with different $\varepsilon $ over AWGN and Rayleigh fading channels, respectively, where the codebook $\left( {6,4,4} \right)$ is considered. It can be seen that $\varepsilon = 0.4$ and $\varepsilon = 0.6$ achieve the best performance over AWGN and Rayleigh fading channels, respectively, and hence they are adopted for the following simulations.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{FOMSvsCOMS.png}
\caption{The BER of NB-PC-SCMA systems with COMS and FOMS, where $J = 6$, $K = 4$, $N = 256$ and ${R_c} = {1 \mathord{\left/{\vphantom {1 2}} \right.\kern-\nulldelimiterspace} 2}$.}
\label{fig11}
\end{figure}
Fig. \ref{fig11} shows the BER comparison between FOMS and COMS over Rayleigh fading channels, where 4-point, 8-point, and 16-point SCMA modulation are considered. Here, the throughput in terms of bits per symbol \cite{9399239} is defined as $\lambda {R_c}{\log _2}M$; for example, the $\left( {6,4,4} \right)$ codebook with ${R_c} = 1/2$ yields $1.5 \times 0.5 \times 2 = 1.5$ bits per symbol. It can be observed that the BER performance of both COMS and FOMS degrades as the throughput increases. Particularly, FOMS obtains 0.46 dB, 0.64 dB, and 0.97 dB gains versus COMS at the BER of ${10^{ - 4}}$ when bits per symbol are 1.5, 2.25, and 3, respectively. When $q = 16$, FOMS at $M = 4$ outperforms COMS at $M = 16$ by about 4.23 dB at the cost of reduced throughput. Thus, we can adjust the order matching to trade off throughput against reliability, endowing the NB-PC-SCMA system with flexibility.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{gth.png}
\caption{The relationship between the threshold ${g_{th}}$ and BER of the ISD-JIDD algorithm with different \emph{N} and ${R_c}$, where ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.
\kern-\nulldelimiterspace} {{N_0}}}$ is set to 3 dB and 5 dB for AWGN and Rayleigh fading channels when $N = 256$, respectively and where ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$ is set to 2.5 dB and 4.3 dB for AWGN and Rayleigh fading channels when $N = 1024$, respectively.}
\label{fig12}
\end{figure}
Before quantifying the performance of the ISD-JIDD algorithm for NB-PC-SCMA systems, we investigate the effect of ${g_{th}}$ in L-NB-SCL decoding on the BER performance in Fig. \ref{fig12}, where the codebook $\left( {6,4,4} \right)$ is considered. Note that ${g_{th}} = - {10^{ - 4}}$ represents the case when NB-SCL decoding is employed. We can see that within the limit of the threshold, especially at high ${R_c}$, there is no performance degradation when L-NB-SCL decoding is adopted at the receiver. Based on Fig. \ref{fig12}, we select the optimal ${g_{th}}$ for different configurations and calculate the corresponding $\beta $ from Monte Carlo simulation results, as shown in Table \ref{table3}.
\begin{figure}[!t]
\centering
\subfloat[BER performance.]{\includegraphics[width=3in]{OSDvsNSD.png}}
\label{fig13-a}
\subfloat[convergence performance.]{\includegraphics[width=3in]{convergence.png}}
\label{fig13-b}
\caption{Performance comparison of ISD-JIDD and NSD-JIDD algorithms with different overloads, where $N = 256$ and ${R_c} = {1 \mathord{\left/{\vphantom {1 2}} \right.\kern-\nulldelimiterspace} 2}$}
\label{fig13}
\end{figure}
\begin{table}[!t]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Simulation parameters of ISD-JIDD algorithm for different configurations.}
\label{table3}
\begin{tabular}{@{}cccccccc@{}}
\toprule[1pt]
\multirow{2}{*}{Channel} & \multirow{2}{*}{Parameters} & \multicolumn{3}{c}{$N = 256$} & \multicolumn{3}{c}{$N = 1024$} \\
\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(l){8-8}
& & $\frac{1}{3}$ & $\frac{1}{2}$ & $\frac{2}{3}$ & $\frac{1}{3}$ & $\frac{1}{2}$ & $\frac{2}{3}$ \\
\cmidrule(r){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-5}\cmidrule(l){6-8}
\multirow{2}{*}{AWGN} & ${g_{th}}$ & 0.1 & 0.5 & 0.8 & 0.2 & 0.7 & 0.8 \\
& $\beta$ & 0.714 & 0.719 & 0.698 & 0.871 & 0.906 & 0.871 \\
\midrule[0.7pt]
\multirow{2}{*}{\makecell[c]{Rayleigh\\Fading}} & ${g_{th}}$ & 0.1 & 0.5 & 0.7 & 0.2 & 0.6 & 0.8 \\
& $\beta$ & 0.714 & 0.719 & 0.674 & 0.859 & 0.898 & 0.871 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{figure}[!t]
\centering
\subfloat[AWGN channels.]{\includegraphics[width=3in]{AWGN_cmp.png}}
\label{fig14-a}
\subfloat[Rayleigh fading channels.]{\includegraphics[width=3in]{Rayleigh_cmp.png}}
\label{fig14-b}
\caption{BER comparison of the proposed NB-PC-SCMA to other state-of-the-art coded SCMA schemes, where $N = 256$ and ${R_c} = {1 \mathord{\left/{\vphantom {1 2}} \right.\kern-\nulldelimiterspace} 2}$.}
\label{fig14}
\end{figure}
The performance comparison of ISD-JIDD and NSD-JIDD with different overloads in terms of BER and convergence is given in Fig. \ref{fig13}(a) and (b), respectively. For $\lambda = 150\% $, $\lambda = 200\% $, and $\lambda = 300\% $, the user number \emph{J} is 6, 12, and 24, respectively, where the corresponding resource number \emph{K} is 4, 6, and 8, respectively. The modulation order \emph{M} is set to 4. Here, the NSD-JIDD without ET and the ISD-JIDD with $\beta = 0$ are also plotted as benchmarks.
As seen in Fig. \ref{fig13}(a), a higher user overload $\lambda $ always leads to poorer BER performance over both AWGN and Rayleigh fading channels, despite the improved throughput. ET has no influence on the detection results. Moreover, ISD-JIDD shows better performance than NSD-JIDD at low ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$ since ISD-JIDD avoids convergence errors with the aid of the modified MPA. Overall, ISD-JIDD lags NSD-JIDD by only about 0.06 to 0.09 dB at the BER of ${10^{ - 4}}$. Fig. \ref{fig13}(b) shows that both NSD-JIDD and ISD-JIDD require two extra iterations to converge when $\lambda = 200\% $. Evidently, the convergence performance of ISD-JIDD does not suffer, which is attributed to the ET and damping mechanisms.
\begin{table*}[!t]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{The computational complexity of different PC-SCMA schemes.}
\label{table5}
\begin{threeparttable}
\begin{tabular}{m{0.5cm}<{\centering}!{\vrule width0.9pt}m{3.8cm}<{\centering}|m{3.8cm}<{\centering}!{\vrule
width0.9pt}m{3.8cm}<{\centering}|m{3.8cm}<{\centering}}
\Xhline{1.2pt}
\multirow{2}{*}{}& \multicolumn{2}{c!{\vrule width0.9pt}}{Proposed NB-PC-SCMA$^{\dagger}$} & \multicolumn{2}{c}{PC-SCMA} \\ \cline{2-5}
& NSD-JIDD & ISD-JIDD & JIDD \cite{8463448} & CAJIDS \cite{9285274} \\ \hline\hline
\rule{0pt}{12pt}
ADD
\rule{0pt}{12pt}& $E\left[ {2K{d_r}^2{M^{{d_r}}} + } \right.JM({d_u}^2 + \left. {{d_u} - 1)} \right] + J\left[\left( {2q + 1} \right)lN'{\log _2}N' + 2{{{\cal T}_{p_1}}\!\!^{\ddagger}}\right]$ & $E\left[ 2K{d_r}^2{M^{{d_r}}}\right. + \left.JM\left( {{d_u} - 1} \right) \right] + J\left[\left( {2q + 1} \right)lN'{\log _2}N' + 2{{\cal T}_{p_2}}\!\!^{\ddagger}\right]$ & $EK{d_r}\left[ {{M^{{d_r}}}\left( {{d_r} + 2} \right) - 1} \right] + 2JN{\log _2}N$ & $EK{d_r}\left[ {{M^{{d_r}}}\left( {{d_r} + 2} \right) - 1} \right] + J\left(\frac{1}{2}lN{\log _2}N + {{\cal T}_{p_0}}\!\!^{\ddagger}\right)$ \\\hline
\rule{0pt}{12pt}
MUL
\rule{0pt}{12pt}& $EK{d_r}{M^{{d_r}}}\left( {{d_r} + 3} \right)$ & $EK{d_r}{M^{{d_r}}}\left( {{d_r} + 3} \right)$ & $E\left\{ K{d_r}\left[ M\left( {{d_u} - 1} \right) \right.\right.+ {M^{{d_r}}}\cdot \left.\left. \left( {2{d_r} + 4} \right) \right] + JM\left( {{d_u} - 1} \right) \right\} + 4JN{\log _2}N$ & $E\left\{ K{d_r}\left[ M\left( {{d_u} - 1} \right) \right.\right. + {M^{{d_r}}}\cdot\left.\left.\left( {2{d_r} + 4} \right) \right] + JM\left( {{d_u} - 1} \right) \right\} + JlN{\log _2}N$ \\\hline
\rule{0pt}{12pt}
CMP
\rule{0pt}{12pt} & $EK{d_r}M\left( {{M^{{d_r} - 1}} + M - 2} \right) + J\left[2\left( {q - 1} \right)lN'{\log _2}N'\right. + \left.(q - 1){{\cal T}_{p_1}}\right]$ & $EK{d_r}M\left( {{M^{{d_r} - 1}} - 1} \right) + J\left[2\left( {q - 1} \right)lN'{\log _2}N'\right. + \left.(q - 1){{\cal T}_{p_2}}\right]$ & $6JN{\log _2}N$ & $J\left(2lN{\log _2}N + {{\cal T}_{p_0}}\right)$\\\hline
\rule{0pt}{0.5pt}
XOR
\rule{0pt}{0.5pt} & $\frac{1}{2}JlN'{\log _2}N'$ & $\frac{1}{2}JlN'{\log _2}N'$ & 0 & $\frac{1}{2}JlN{\log _2}N$ \\\hline
\rule{0pt}{0.5pt}
EXP
\rule{0pt}{0.5pt} & 0 & 0 & $EK{d_r}{M^{{d_r}}}$ & $EK{d_r}{M^{{d_r}}}$ \\
\Xhline{1.2pt}
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[$\dagger$] The element-wise MUL of non-binary polar decoders are evaluated as ADD since such operations are represented as the addition of exponents.
\item[$\ddagger$] ${\cal T}_{p_0}$, ${\cal T}_{p_1}$, and ${\cal T}_{p_2}$ denote the number of search paths in the SCL, NB-SCL, and L-NB-SCL decoding, respectively.
\end{tablenotes}
\end{threeparttable}
\end{table*}
Finally, we compare the BER performance of the proposed NB-PC-SCMA with ISD-JIDD to other state-of-the-art schemes, where the codebook $\left( {6,4,4} \right)$ is considered. To be more specific, LDPC-SCMA with JDD \cite{7848813}, non-binary LDPC (NB-LDPC) coded SCMA (NB-LDPC-SCMA) with joint trellis based joint decoding and detection (JTDD) \cite{8344383}, PC-SCMA with JIDD \cite{8463448}, and PC-SCMA with CRC aided joint iterative detection and SCL decoding (CAJIDS) \cite{9285274} are simulated as counterparts. In particular, the PC-SCMA schemes with JIDD or CAJIDS can also be considered as benchmarks for different left-to-right strategies during polar decoding. The LDPC and NB-LDPC codes in the simulations are constructed as in \cite{4432803} and \cite{4155118}, respectively, where min-sum and extended min-sum decoders with 30 inner iterations are employed at the receiver, respectively. In addition, the non-iterative NB-PC-SCMA scheme is included to analyze the iterative gain, where the result is based on the traditional COMS.
As demonstrated in Fig. \ref{fig14}(a), the NB-PC-SCMA system with ISD-JIDD outperforms PC-SCMA with CAJIDS and PC-SCMA with JIDD about 0.32 dB and 0.86 dB at the BER of ${10^{ - 4}}$ over AWGN channels when $q = 4$. If the field order \emph{q} is set to 16 with the aid of FOMS, the performance gains increase to 0.80 dB and 1.34 dB, respectively. Explicitly, an iterative gain of at least 2.84 dB can be observed when compared to the non-iterative counterpart. As for another non-binary coding scheme, i.e., NB-LDPC-SCMA, a 0.59 dB performance gain is attained by NB-PC-SCMA with $q = 4$. In contrast to the COMS-based NB-LDPC-SCMA in \cite{8344383}, our proposed FOMS-based NB-PC-SCMA achieves up to 1.07 dB gain when $q = 16$.
For Rayleigh fading channels, the BER performance of all schemes is inferior to that over AWGN channels, while similar comparison results can be observed in Fig. \ref{fig14}(b). To be specific, the iterative gain of the ISD-JIDD achieves at least 2.98 dB. When $q = 16$, the NB-PC-SCMA system with ISD-JIDD obtains 0.67 dB and 1.18 dB gain compared to the binary counterparts with CAJIDS and JIDD, respectively. Furthermore, our proposed FOMS-based NB-PC-SCMA with $q = 16$ outperforms the COMS-based NB-LDPC-SCMA about 0.83 dB. Overall, we can find that the proposed NB-PC-SCMA system with ISD-JIDD always shows the best error correction performance.
\subsection{Complexity}
\label{sec:5-2}
This section evaluates the computational complexity of the receiver for the proposed NB-PC-SCMA scheme, which mainly lies in the SCMA detector and the polar decoder. Here, the computational complexity is quantified by the number of floating-point operations per second per outer iteration. Specifically, addition, multiplication, comparison, exclusive or, and exponential arithmetic operations are counted, which are represented by ADD, MUL, CMP, XOR, and EXP, respectively.
Typically, the complexity of the polar decoder is less than that of the LDPC decoder \cite{7980697}. Thus, other outstanding PC-SCMA schemes, e.g., \cite{8463448} and \cite{9285274}, are considered as comparison benchmarks. The computational complexity of different PC-SCMA schemes with various receiver strategies is presented in Table \ref{table5}. It can be seen that neither NSD-JIDD nor ISD-JIDD includes EXP operations since the max-log-MPA algorithm is applied. As \emph{q} and \emph{N} increase, the exponentially-growing ${\cal T}_{p_1}$ leads to a high computational complexity of NSD-JIDD. By contrast, the ISD-JIDD receiver employs L-NB-SCL decoding to simplify the path search pattern, which yields a much lower ${\cal T}_{p_2}$ than ${\cal T}_{p_1}$. In addition, partial ADD and CMP operations of NSD-JIDD are also reduced by the modified MPA detection in the ISD-JIDD.
Fig. \ref{fig15} illustrates the computational complexity per outer iteration of the proposed NB-PC-SCMA system and the benchmarks with different code lengths, where the field size $q = 4$ and the codebook $\left( {6,4,4} \right)$ are considered. We can see that when $N = 256$, NSD-JIDD incurs 48\% and 36\% higher complexity than JIDD and CAJIDS, respectively. In comparison, the complexity of ISD-JIDD is 31\% less than that of NSD-JIDD and only 7\% more than that of JIDD. When $N = 1024$, a 39\% complexity reduction compared to NSD-JIDD can be observed.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{complexity_comparison.png}
\caption{The computational complexity per outer iteration of different PC-SCMA schemes with codebook $\left( {6,4,4} \right)$, where $q=4$ is considered for NB-PC-SCMA.}
\label{fig15}
\end{figure}
\subsection{Latency}
\label{sec:5-3}
Here, clock cycles are used to measure the latency gain of the NB-PC-SCMA system compared to the binary counterpart. Suppose that all parallelizable instructions are carried out in one clock cycle. SCMA detection then takes only three cycles, corresponding to the RN update, the UN update, and the calculation of soft information, respectively. The latency of SCMA detection is therefore negligible compared to polar decoding, so we focus on the impact of polar decoding on the receiver latency.
The cycles of the binary SCL decoder can be calculated as ${\Gamma _{SCL}} = 2N - 2 + D$ \cite{7742998}, with \emph{D} cycles for path sorting and $2N - 2$ cycles for SC decoding. The latency of the SCAN decoder can be expressed as ${\Gamma _{SCAN}} = {T_{SCAN}}\left( {2N - 2} \right)$, where ${T_{SCAN}}$ is the number of SCAN inner iterations. The latency of the L-NB-SCL decoder is written as ${\Gamma _{L-NB - SCL}} = 2N' - 2 + \left( {1 - \beta } \right)D'$. Thus, the latency for the JIDD in \cite{8463448}, which employs a single SCAN inner iteration (${T_{SCAN}} = 1$), can be expressed as
\begin{equation}
{\Gamma _{JIDD}} = T \cdot {T_{SCAN}}\left( {2N - 2} \right) = T\left( {2N - 2} \right).
\label{43}
\end{equation}
According to \cite{9285274}, the latency for the CAJIDS can be calculated as
\begin{equation}
{\Gamma _{CAJIDS}} = \sum\limits_{t = 1}^T {2{N_t} - 2 + {D_t}},
\label{44}
\end{equation}
where ${N_t}$ and ${D_t}$ denote the number of decoded bits and information bits before ET in the \emph{t}-th iteration for the distributed CRC aided SCL decoding, respectively. The latency for our proposed ISD-JIDD can be expressed as
\begin{equation}
\begin{aligned}
{\Gamma _{ISD - JIDD}} &= {T_a}\left( {2N' - 2 + \left( {1 - \beta } \right)D'} \right)\\
& = \frac{{{T_a}}}{p}\left[ {2N - 2p + \left( {1 - \beta } \right)D} \right],
\end{aligned}
\label{45}
\end{equation}
where ${T_a}$ denotes the average number of iterations for the ISD-JIDD. To make a fair comparison, we employ the latency of a multiuser receiver with conventional SCL decoding as the benchmark. Then, the latency gain for algorithm $\bm{{\rm{X}}}$ can be written as
\begin{equation}
{{\cal G}_{latency,{\bm{{\rm{X}}}}}} = \frac{{T \cdot {\Gamma _{SCL}} - {\Gamma _{\bm{{\rm{X}}}}}}}{{T \cdot {\Gamma _{SCL}}}},
\label{46}
\end{equation}
where ${\Gamma _{\bm{{\rm{X}}}}}$ denotes the latency for a given algorithm $\bm{{\rm{X}}}$, i.e., ${\Gamma _{\bm{{\rm{X}}}}} \in \left\{ {{\Gamma _{JIDD}},{\Gamma _{CAJIDS}},{\Gamma _{ISD - JIDD}}} \right\}$.
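As a worked sketch, the latency gain of ISD-JIDD in (\ref{45}-\ref{46}) can be computed as follows, where \texttt{p} denotes the assumed bit-to-symbol ratio with $N = pN'$ and $D = pD'$:
\begin{verbatim}
def isd_jidd_latency_gain(N, D, p, beta, T, Ta):
    """Eqs. (45)-(46): gain over T iterations of
    conventional SCL decoding (2N - 2 + D cycles each)."""
    gamma_scl = 2 * N - 2 + D
    gamma_isd = (Ta / p) * (2 * N - 2 * p + (1 - beta) * D)
    return (T * gamma_scl - gamma_isd) / (T * gamma_scl)
\end{verbatim}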
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{average_iteration_number.png}
\caption{The average number of iterations for ISD-JIDD over AWGN channels, where $N = 256$ and ${R_c} = {1 \mathord{\left/{\vphantom {1 2}} \right.\kern-\nulldelimiterspace} 2}$.}
\label{fig16}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{latency_gain.png}
\caption{Comparison of latency gain over AWGN channels, where $N = 256$ and ${R_c} = {1 \mathord{\left/
{\vphantom {1 2}} \right.\kern-\nulldelimiterspace} 2}$.}
\label{fig17}
\end{figure}
Before characterizing the latency performance, ${T_a}$ versus ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$ over AWGN channels is presented in Fig. \ref{fig16}. It can be observed that with the improvement of ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$, the ET mechanism leads to a decrease of ${T_a}$. Especially at ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}} = 5$ dB, the ISD-JIDD with $q = 16$ converges and activates ET after 1.55 average iterations, which contrasts significantly with the maximum number of iterations $T = 5$.
The latency gain performance for different schemes is presented in Fig. \ref{fig17}, where the same parameters as Fig. \ref{fig14} are considered. In particular, both PC-SCMA and NB-PC-SCMA with non-iterative designs are plotted as the baseline for the iteration scheme. It can be found that for the PC-SCMA system, JIDD has a 20\% latency gain regardless of ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$, while CAJIDS exhibits a weak latency gain at high ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$. By contrast, for the proposed NB-PC-SCMA system, ISD-JIDD obtains about 57\% to 92\% latency gain, attributed to the characteristics of NB-PC, ET mechanism and L-NB-SCL decoding.
Observe also from Fig. \ref{fig17} that the non-iterative scheme achieves up to 90\% latency reduction since the receiver does not consider a feedback iteration mechanism. At low ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}}$, ISD-JIDD with $q = 4$ takes approximately twice the latency of the non-iterative PC-SCMA. However, the BER performance of the non-iterative system deteriorates significantly as shown in Fig. \ref{fig14}. Benefiting from FOMS, the latency gain of the proposed system becomes larger as \emph{q} increases. Note that when ${{{E_b}} \mathord{\left/{\vphantom {{{E_b}} {{N_0}}}} \right.\kern-\nulldelimiterspace} {{N_0}}} = 5$ dB, a latency saving of 92\% can be observed by ISD-JIDD with $q = 16$, which even outperforms that of non-iterative NB-PC-SCMA systems.
\section{Conclusion}
\label{sec:6}
In this paper, we have presented the design of the FOMS-based NB-PC-SCMA system for the first time. Specifically, an NSD-JIDD receiver has been proposed to guarantee the BER performance. Moreover, the ISD-JIDD algorithm with L-NB-SCL decoding and modified MPA has been proposed to reduce the computational complexity and convergence errors. Simulation results show that the proposed NB-PC-SCMA system achieves the best BER performance with up to 92\% latency gain compared to other state-of-the-art architectures. Furthermore, ISD-JIDD achieves at least 31\% complexity reduction over NSD-JIDD without significant BER loss. In our future work, we will extend the proposed NB-PC-SCMA scheme to scenarios with imperfect channel state information, where channel estimation technologies are considered.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Thoughts for future work}
The appropriate choice of a metric to evaluate a rollout is crucial for learning the intention of the user. A poor choice of metric may lead to inherited bias and the possibility of reward hacking by the policy. One option to mitigate this would be to use humans, or a hybrid of metric and humans, to choose between a pair of rollouts. On the flip side, the use of humans might be expensive and in some cases defeat the sample efficiency this work strives for. We leave this thought for future work to ponder.
\section{Conclusion}
In this work we introduced a fine-grained reward learning process that uses an under-specified metric function and expert demonstrations for efficiently learning task-oriented dialogue. We demonstrated the efficacy of our method on the MultiWoz2.0 dataset by outperforming the existing state-of-the-art method with only 10\% of the data. We believe the method is generic and can be extended to other NLP tasks.
\subsection{Dataset}
We evaluate our proposed method on the Multi-domain Wizard-of-Oz (MultiWoz) \cite{multiwoz} dataset. It is a large-scale multi-domain, task-oriented dataset generated by human-to-human conversations, where one participant plays the role of a user while the other plays the agent. The conversations are between a tourist and a clerk at an information center and span 7 domains including attraction, hospital, hotel, police, restaurant, taxi and train. Each dialogue is generated by users with a defined goal which may cover 1-5 domains, with a maximum of 13 turns in a conversation. The dataset has 10438 dialogues, split into 8438 dialogues for the training set and 1000 dialogues each for the validation and test sets.
\subsection{Prepossessing}
We represent DB results as one-hot vectors as proposed by \cite{dbonehot}. To reduce surface-level variability in the responses, we use the domain-adaptive delexicalization preprocessing proposed in \cite{delex}. As proposed in \cite{damd}, we generate delexicalized responses with placeholders for specific values, which can be filled with information in the DST and database.
\section{Experimental Settings}
\begin{table*}
\centering
\begin{tabular}{llllllllll}
\hline
\textbf{Model} & \multicolumn{3}{c}{\textbf{5\%}} & \multicolumn{3}{c}{\textbf{10\%}} & \multicolumn{3}{c}{\textbf{20\%}} \\
& \textbf{Inform} & \textbf{Success} & \textbf{BLEU} & \textbf{Inform} & \textbf{Success} & \textbf{BLEU} & \textbf{Inform} & \textbf{Success} & \textbf{BLEU} \\
\hline
MD-Sequicity & \small\verb|49.40| & \small\verb|19.70| & \small\verb|10.30| & \small\verb|58.10| & \small\verb|34.70| & \small\verb|11.40| & \small\verb|64.40| & \small\verb|42.10| & \small\verb|13.00| \\
\hline
DAMD & \small\verb|56.60| & \small\verb|24.50| & \small\verb|10.60| & \small\verb|62.00| & \small\verb|39.40| & \small\verb|14.50| & \small\verb|68.30| & \small\verb|42.90| & \small\verb|11.80| \\
\hline
MinTL & \small\verb|75.48| & \small\verb|60.96| & \small\textbf{13.98} & \small\verb|78.08| & \small\verb|66.87| & \small\textbf{15.46} & \small\verb|82.48| & \small\verb|68.57| & \small\verb|13.00| \\
\hline
CASPI(MinTL), $M_{soft}(resp)$ & \small\verb|87.69| & \small\textbf{71.17} & \small\verb|13.51| & \small\verb|82.08| & \small\verb|72.27| & \small\verb|14.10| & \small\verb|89.39| & \small\verb|78.58| & \small\textbf{15.16} \\
\hline
CASPI(MinTL), $M_{hard}(resp)$ & \small\textbf{89.69} & \small\verb|69.47| & \small\verb|13.33| & \small\textbf{92.59} & \small\textbf{78.58} & \small\verb|14.48| & \small\textbf{94.19} & \small\textbf{83.28} & \small\verb|13.65| \\
\end{tabular}
\caption{\label{sampleEff1}
Comparison of results for the end-to-end task of MultiWoz2.0 in the low-resource setting.
}
\end{table*}
\input{model}
\input{dataset}
\input{metrics}
\section{Introduction}
\begin{figure}
\scalebox{.3}{
\centering
\includegraphics{figs/example_tod.png}}
\caption{A typical task-oriented dialogue conversation in the MultiWoz2.0 dataset}
\label{fig:todEx}
\end{figure}
Offline task-oriented dialogue systems involve solving the disparate tasks of belief state tracking, dialogue policy management, and response generation. In this work we strive to improve the performance of dialogue policy management. Sample efficiency is key for learning an offline task-oriented dialogue system, as the available data are finite and expensive. Recent advancements in off-policy reinforcement learning (Batch-RL) methods, which use historical annotated data instead of a simulator, have proven to be sample efficient and help in safe policy improvement for generalizable policies. The effective use of these techniques is hindered by the nature of dialogue policy learning. For example, off-policy learning often requires an estimate of the behaviour policy for a given state of the Markov Decision Process (MDP). In real life, the belief state does not capture the true state of the MDP; latent states such as prosody, among others, induce stochasticity in the agent's response at each turn. Then there is the issue of loss of semantic information from the dialogue act to the generated natural language text, as demonstrated by Fig:\ref{fig:todEx}. Mere policy imitation of dialogue acts falls short of reasoning about the outcome and instead focuses on each constituent of a composite action equally. In Fig:\ref{fig:todEx}, Turns \#2 and \#3 are rich in semantic information and Turn \#3 is key to the transaction of the booking process, while Turn \#4, though of least use to the success of the conversation, gets equal weight to the semantically rich turns; worse, such turns appear more often than specifics like Turns \#2 and \#3, thereby clogging the gradient budget. These relative importances are lost in imitation policy learning.
\begin{figure*}
\scalebox{.35}{
\centering
\includegraphics{figs/pairwise_flow.png}}
\caption{Process flow of pairwise causal reward learning}
\label{fig:pairwise_flow}
\end{figure*}
The main contributions of this work are as follows. We introduce safe policy improvement in a batch reinforcement learning setting for dialogue policy learning, with performance guarantees. We introduce pairwise causal reward learning to shape a reward that reasons about the intention behind human utterances instead of mimicking the demonstration. By use of these two off-policy methods we demonstrate sample efficiency.
\section{Method}
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth\relax]{figs/pairwise_model4.png}
\caption{Pairwise causal reward learning network architecture}
\label{fig:pairwise_model}
\end{figure*}
\begin{figure}
\scalebox{.37}{
\centering
\includegraphics{figs/beliefstate_stoch.png}}
\caption{Stochasticity of the dialogue act given the belief state in the MultiWoz2.0 dataset}
\label{fig:stoch_belief}
\end{figure}
\subsection{Preliminaries}
We model task-oriented dialogue as a Markov decision process (MDP) \cite{sutton2018reinforcement} with a set of states $S$ and actions $A$. The agent at time step $t$ with state $s_t$ performs a composite action $a_t$ as per a target policy $\pi_e(a_t|s_t)$ on the environment, with transition probabilities to the next state $P(s_{t+1}|s_t,a_t)$ and a latent reward function $R(s_t, a_t)$ with discount factor $\gamma \in [0,1]$. The objective is then to optimize for the target policy $\pi_e$ that maximizes the discounted sum of future rewards on the MDP, given the state-action value function $Q^{\pi_e}(a_t,s_t) = \mathop{\mathbb{E}}_{a_t \sim \pi_e, s_t \sim P}[\sum_{t'=t}^{T}\gamma^{t'-t}R(s_{t'},a_{t'})]$.
In offline Batch-RL, the agent does not get to interact with the environment. Instead, we are provided with offline data $D$ logged by human agents performing actions based on a latent stochastic behaviour policy $\pi_b$, where $\tau^i \in D$ is a rollout of a dialogue consisting of $\tau^i = ((o_0^i,a_0^i),...,(o_{T-1}^i,a_{T-1}^i))$. Here $o_t$ is the observation at turn $t$, consisting of $o_t=(b_t,u_t^u,u_{t-1}^a)$, where $b_t$ is the belief state of the agent at turn $t$, and $u_t^u$ and $u_{t-1}^a$ are the user and agent utterances at time $t$ and $t-1$, respectively.
\subsection{Safe policy improvement}
Batch-RL entails training a policy on rollouts generated by the latent behaviour policy. Directly optimizing on the rollouts generated by another policy leads to large bias in the value function estimation, poor generalization characteristics, and sample inefficiency \cite{batchRLsampleeff}. Safe policy improvement ensures the new policy's performance is bounded with respect to the old policy, in this case the behaviour policy. This is given by:
\begin{equation*}
Pr(V^{\pi_e} \geq V^{\pi_b} - \zeta) \geq 1-\delta,
\end{equation*}
where $V^{\pi_e}$ and $V^{\pi_b}$ are the value functions of the target and behaviour policies, respectively. Here $1-\delta$ and $\zeta$ are the high-probability and approximation meta-parameters, respectively. \citep{trpo} provide such an update mechanism, \eqref{trpoeqn}, whose errors are bounded as long as the constraints of \eqref{trpoeqn} are met, where $D_{KL}(.||.)$ is the KL divergence and $\eta$ is a hyper-parameter.
\begin{equation}\label{trpoeqn}
\begin{split}
L_{sto}(\theta) = \min - \mathop{\mathbb{E}}_{s \sim P^{\pi_{b}}, a \sim \pi_b} \left[\frac{\pi_e(a|{b_t;\theta})}{\pi_b(a|{b_t})}Q^{\pi_{b}}(b_t,a_t)\right] \\
s.t. \mathop{\mathbb{E}}_{s \sim P^{\pi_b}} [D_{KL}(\pi_b(.|{b_t})||\pi_e(.|{b_t}))] \leq \eta
\end{split}
\end{equation}
Use of this update rule requires access to the behaviour policy $\pi_b(a_t|s_t)$, which is intractable to estimate, and learnt estimates might be biased. Using them to perform bias correction such as Importance Sampling \cite{importantsampling} might lead to a worse policy. Instead we estimate the behaviour policy conditioned on the belief state $b_t$, as against $s_t$ in \eqref{trpoeqn}, which results in a stochastic behaviour policy. The belief state $b_t$ is part of the observation $o_t$ at turn $t$. The actions are stochastic in nature given just the belief state, as demonstrated by Fig:\ref{fig:stoch_belief}. We posit that, given more evidence of the observation $o_t$ (besides $b_t$), the mode of the policy collapses to a near-deterministic action. To factor this into the policy learning, we add the loss:
\begin{equation}
L_{det}(\theta)=\min - \mathop{\mathbb{E}}_{(o_t,a_t) \sim D}[G(\tau,t)\log\pi_e(a_t|o_t)],
\end{equation}
where $G(\tau, t)=\sum_{t' = t}^{T} \gamma^{t'-t} R(s_{t'},a_{t'},g)$ is the discounted sum of future rewards for a rollout $\tau$ with goal $g$. Hence the policy optimization loss function is given by:
\begin{equation}\label{totalLoss}
L(\theta) = L_{sto}(\theta) + L_{det}(\theta)
\end{equation}
We achieve this by doing two forward passes on the policy network: first with only the belief state as the input, and a second pass with all the observation information to obtain the action distribution. The first pass captures the stochasticity of the policy conditioned only on the belief state $b_t$, and the second pass collapses the mode given the other latent information of the state, such as $u_{t}^u$ and $u_{t-1}^a$.
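A minimal PyTorch-style sketch of the combined objective \eqref{totalLoss} is given below. It assumes the policy network accepts either the belief state alone or the full observation, that $Q^{\pi_b}$ is approximated by the observed return $G(\tau,t)$, and that the KL constraint of \eqref{trpoeqn} is folded in as a penalty; all names and shapes are illustrative.
\begin{verbatim}
import torch.nn.functional as F

def caspi_policy_loss(policy, belief, obs, action, ret,
                      pi_b, eta):
    """belief/obs: batched inputs for the two passes;
    action: (B,1) indices; ret: (B,1) returns G(tau,t);
    pi_b: (B,A) estimated behaviour policy given b_t."""
    # Pass 1: belief state only -> stochastic surrogate loss
    pi_e = F.softmax(policy(belief), dim=-1)
    ratio = pi_e.gather(-1, action) / pi_b.gather(-1, action)
    l_sto = -(ratio * ret).mean()
    # KL(pi_b || pi_e): the trust-region constraint as a penalty
    kl = F.kl_div(pi_e.log(), pi_b, reduction="batchmean")
    # Pass 2: full observation -> return-weighted NLL (L_det)
    logp = F.log_softmax(policy(obs), dim=-1).gather(-1, action)
    l_det = -(ret * logp).mean()
    return l_sto + l_det + eta * kl
\end{verbatim}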
\subsection{Pairwise causal reward learning}
\begin{algorithm}[tb]
\caption{CASPI}
\label{alg:caspiAlgo}
\begin{algorithmic}
\STATE \textbf{Input}: Dialogue dataset $D$ and evaluation metric $M$
\STATE Sub-sample K-folds of train and val set $(D_T,D_V) \sim D$
\FOR{ $\forall (D_T,D_V)$ }
\STATE Learn ToD in supervised setting by optimizing objective:
\STATE $\min - \mathop{\mathbb{E}}_{a,s \sim D_T} \log(\pi_m(\hat{a}|s))$
\FOR{ $\forall$ epoch}
\STATE Predict on the valset $D_V$ and add it to the dataset, $D_P$ for pairwise causal learning
\STATE $D_P=D_P\cup\tau|\tau \sim \pi_m$
\ENDFOR
\ENDFOR
\REPEAT
\STATE Sample pair of rollouts $(\tau^1,\tau^2) \sim D_P$
\STATE Learn the $R(\cdot)$ network by optimizing the objective in Eqn: \ref{pairwiseobj}
\UNTIL{Convergence using data $D_P$}
\REPEAT
\STATE Optimize for policy $\pi_e$ using objective \ref{totalLoss}
\UNTIL{Convergence using data $D$}
\end{algorithmic}
\end{algorithm}
\label{pairwiseMethod}
\begin{table*}
\centering
\begin{tabular}{llllllll}
\hline
\textbf{Model} & \textbf{Belief} & \multicolumn{2}{c}{\textbf{System Action}} & \textbf{Inform} & \textbf{Success} & \textbf{BLEU} & \textbf{Combined} \\
& \textbf{State Type} & \textbf{Type} & \textbf{Form} & \textbf{(\%)} & \textbf{(\%)} & & \textbf{Score} \\
\hline
HDSA \citep{HDSA} & \small\verb|Oracle| & \small\verb|generated| & \small\verb|graph| & \small\verb|82.9| & \small\verb|68.9| & \small\textbf{23.6} & \small\verb|99.50| \\
\hline
LaRL\citep{larl} & \small\verb|Oracle| & \small\verb|generated| & \small\verb|graph| & \small\verb|82.8| & \small\verb|79.2| & \small\verb|12.8| & \small\verb|93.80| \\
\hline
DAMD\citep{damd} & \small\verb|Oracle| & \small\verb|generated| & \small\verb|span| & \small\verb|89.2| & \small\verb|77.9| & \small\verb|18.6| & \small\verb|102.15|\\
\hline
SOLOIST\citep{damd} & \small\verb|Oracle| & \small\verb|generated| & \small\verb|span| & \small\verb|89.6| & \small\verb|79.3| & \small\verb|18.3| & \small\verb|102.75|\\
\hline
MarCo\citep{damd} & \small\verb|Oracle| & \small\verb|generated| & \small\verb|span| & \small\verb|92.3| & \small\verb|78.6| & \small\verb|20.02| & \small\verb|105.47|\\
\hline
HDNO\citep{damd} & \small\verb|Oracle| & \small\verb|generated| & \small\verb|span| & \small\verb|96.4| & \small\verb|84.7| & \small\verb|18.85| & \small\verb|109.40|\\
\hline
CASPI(DAMD), $M_{soft}(act)$ & \small\verb|Oracle| & \small\verb|generated| & \small\verb|span| & \small \textbf{96.8} & \small\textbf{87.3} & \small\verb|19.10| & \small\textbf{111.15}\\
\hline
\end{tabular}
\caption{\label{context2resp}
Comparison of results for the dialogue-context-to-text generation task of MultiWoz2.0. The use of ground truth or generated results is denoted as Oracle and generated, respectively.
}
\end{table*}
The policy optimization objective introduced in the previous section requires access to
a per-time-step reward $R(s_t,a_t,g)$. To this end, we provide a mechanism to learn a reward that is causally reasoned from the intention of the human demonstrator.
Dialogue policy learning is usually accompanied by metrics $M$ to evaluate the performance of the learnt policy. Though these metrics could serve as a proxy for a reward function, using them directly is challenging. These metric functions usually return a score for the entire dialogue. Given the complex state-action space of the dialogue management system, such dialogue-level feedback is under-specified for rewarding an action performed at each turn.
To address this under-specified feedback, we adapt the preference learning introduced by \cite{humanprefernce} from an online to an offline setting. We parametrize reward for every timestep $t$, as $R(s_t,a_t, g)$. Given a pair of rollouts $\tau^1,\tau^2 \in D$ with actions for each state in the rollouts sampled from the different learnt policies $\pi^1_e$ and $\pi^2_e$ respectively. Let $P[\tau^1 \succ \tau^2]$ be the probabilistic measure that captures the preference for policy $\pi^1_e$ over policy $\pi^2_e$. This preference is true when the sum of rewards of each dialogue of the two rollouts is such that. $\sum_{t = 0}^{T} R(s_t,a_t|(s_t,a_t)\in\tau^1) >\sum_{t = 0}^{T} R(s_t,a_t,g|(s_T,a_t)\in\tau^2)$. We henceforth we refer $\sum_{t = 0}^{T} R(s_t,a_t,g|(s_T,a_t)\in\tau)$ as $R(\tau)$ Then the preferential probability can be represented by:
\begin{equation*}
P[\tau^1 \succ \tau^2] = \frac{\phi(R(\tau^1))}{\phi(R(\tau^1))+\phi(R(\tau^2))}
\end{equation*}
Here $\phi(.)$ could either be $\exp(.)$ or the identity $\mathds{1}(.)$; in our experiments the latter works best. We optimize for the reward $R(s_t,a_t,g)$ by minimizing the binary cross-entropy loss between the preference probability and the normalized metric score $\mu(\tau)$ of a pair of rollouts.
\begin{equation}
\begin{split}
L(\theta) = \min -\mathop{\mathbb{E}}_{\tau^1,\tau^2 \sim \Pi} [\mu(\tau^1) \log P[\tau^1 \succ \tau^2] \\ + \mu(\tau^2) \log P[\tau^2 \succ \tau^1]]
\end{split}
\label{pairwiseobj}
\end{equation}
where,
\begin{equation}
\mu(\tau^1)=\frac{M(\tau^1)}{M(\tau^1)+M(\tau^2)}
\label{normalizedScore}
\end{equation}
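For concreteness, a short PyTorch sketch of this objective follows, assuming the per-dialogue rewards $R(\tau)$ (sums of per-turn rewards) and metric scores $M(\tau)$ are precomputed; all names are illustrative.
\begin{verbatim}
import torch

def pairwise_reward_loss(R1, R2, M1, M2, use_exp=True):
    # Preference probability P[tau1 > tau2]: phi(.) = exp(.)
    # gives a softmax; phi(.) = identity gives plain normalization
    # (rewards are non-negative, being sums of sigmoid outputs).
    r = torch.stack([R1, R2], dim=-1)
    p = torch.softmax(r, -1) if use_exp else r / r.sum(-1, keepdim=True)
    mu1 = M1 / (M1 + M2)  # normalized metric score mu(tau^1)
    # Binary cross-entropy against the normalized metric score
    return -(mu1 * torch.log(p[..., 0])
             + (1.0 - mu1) * torch.log(p[..., 1])).mean()
\end{verbatim}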
Learning a policy in a sparse-reward MDP is a hard problem \cite{explorego}. In online learning, the agent can interact with and explore the environment. It has the liberty to sample an arbitrarily large number of rollouts from the environment, yet may still fail \cite{explorego} to learn an effective policy in a sparse-reward MDP with a large state-action space, as the chance of encountering a non-zero reward grows exponentially smaller with the episode length.
This is exacerbated in offline settings, as we are forced to learn an optimal policy from finite data. Some successes have been achieved with guided exploration \cite{hardVideo,sparseRewardGuided1,sparseRewardGuided2}, where expert demonstrations are used to guide the exploration. This strategy improves the chance of encountering the sparse reward, as it restricts the state-action space to regions where non-zero reward exists.
We observe that the dialogue roll-outs are generated by an expert latent policy: the data (dialogue rollouts) are distributed as per the optimal latent policy and transition probability. We propose that predictions made by a policy, while it is learning to maximize the likelihood of the data, form a good curriculum for exploring the state-action space for pairwise reward learning. This is the key insight of this work.
We formalize this insight into the method depicted in Fig.~\ref{fig:pairwise_flow} and Algo.~\ref{alg:caspiAlgo}, sketched in code after this paragraph. The (train) dataset is subsampled into $K$-fold train \& val sets. $K$ baseline models are trained to fit the data distribution generated by experts using a cross-entropy loss. While fitting the data distribution, the still-learning $K$ policies are used to predict on their corresponding $K$-fold valsets at every epoch of training, and each of these dialogues is scored by the chosen dialogue-level metric. On convergence of the supervised learning process, the pairs of dialogue predictions generated by the above process, along with their corresponding metric scores, are used to train for the preferential optimization objective Eqn.~\ref{pairwiseobj}, which in turn learns a fine-grained reward $R(a,s,g;\theta)$. The use of $K$-fold subsampling and $K$ baseline models helps generate stochasticity in the samples; it also uses the data effectively and makes the method sample efficient.
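A compact sketch of this $K$-fold rollout-generation loop is shown below; the callables \texttt{train\_one\_epoch}, \texttt{predict} and \texttt{score} are assumptions standing in for the supervised trainer, the policy decoder and the dialogue-level metric.
\begin{verbatim}
import random

def generate_rollouts(dialogues, train_one_epoch, predict, score,
                      K=10, n_epochs=10):
    random.shuffle(dialogues)
    folds = [dialogues[i::K] for i in range(K)]
    D_P = []  # dataset for pairwise causal reward learning
    for k in range(K):
        val = folds[k]
        train = [d for j, f in enumerate(folds) if j != k for d in f]
        model = None
        for _ in range(n_epochs):
            model = train_one_epoch(model, train)  # supervised MLE
            # predictions of the still-learning policy on its valset
            for d in val:
                tau = predict(model, d)
                D_P.append((tau, score(tau)))
    return D_P
\end{verbatim}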
\begin{table*}
\centering
\begin{tabular}{lccccc}
\hline
\textbf{Model} & \textbf{Pre-trained model} & \textbf{Inform \%} & \textbf{Success \%} & \textbf{BLEU} & \textbf{Combined Score} \\
\hline
DAMD & \small\verb|No| & \small\verb|72.79| & \small\verb|60.45| & \small\verb|16.93| & \small\verb|83.55|\\
\hline
DAMD + multi-action & \small\verb|No| & \small\verb|76.33| & \small\verb|64.35| & \small\verb|17.96| & \small\verb|88.30|\\
\hline
SimpleTOD & \small\verb|Yes| & \small\verb|84.4| & \small\verb|70.10| & \small\verb|15.01| & \small\verb|92.26|\\
\hline
SOLOIST & \small\verb|Yes| & \small\verb|85.5| & \small\verb|72.90| & \small\verb|16.54| & \small\verb|95.74|\\
\hline
MinTL-BART & \small\verb|Yes| & \small\verb|84.88| & \small\verb|74.91| & \small\verb|17.89| & \small\verb|97.79|\\
\hline
CASPI(DAMD), $M_{soft}(act)$ & \small\verb|No| & \small\verb|89.1| & \small\verb|76.1| & \small\textbf{18.08} & \small\verb|100.68|\\
\hline
CASPI(MinTL), $M_{soft}(act)$ & \small\verb|Yes| & \small\textbf{94.59} & \small\textbf{85.59} & \small\verb|17.96| & \small\textbf{108.05}\\
\hline
CASPI(MinTL), $M_{hard}(act)$ & \small\verb|Yes| & \small\verb|93.79| & \small\verb|84.88| & \small\verb|17.47| & \small\verb|106.81|\\
\hline
\end{tabular}
\caption{\label{end2endResults}
Comparison of results on the end-to-end task of Multiwoz2.0.
}
\end{table*}
\subsection{Sample weights for policy optimization}
\begin{equation}
\theta := \theta - R_{caspi}(s,a)\, \nabla_{\theta}\!\left(-\log \pi_{blackbox}(a|s;\theta)\right)
\label{sampleWt}
\end{equation}
The learnt reward is akin to a sample weight for each instance of the data, which helps redistribute the gradient update budget among the samples based on their contribution to the overall success of the task-oriented dialogue system. To this end, we propose that the learnt reward can be used as a sample weight in any existing ToD system to reap the benefit of the sample efficiency it brings. We demonstrate this by adapting two existing ToD systems with the learnt reward; more about this in section \ref{model}.
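As a hedged sketch of how the learnt reward acts as a per-turn sample weight on an underlying supervised loss (tensor shapes and names are assumptions):
\begin{verbatim}
import torch
import torch.nn.functional as F

def caspi_weighted_loss(logits, targets, turn_reward):
    # logits: (batch, seq_len, vocab); targets: (batch, seq_len);
    # turn_reward: (batch,) learnt reward R_caspi(s, a) per turn.
    ce = F.cross_entropy(logits.transpose(1, 2), targets,
                         reduction="none")  # (batch, seq_len)
    # Scaling each turn's token losses by its reward gives
    # high-reward turns a larger share of the gradient budget.
    return (turn_reward.unsqueeze(1) * ce).mean()
\end{verbatim}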
\subsection{Metrics}
\subsubsection{Evaluation}
Since the focus of this work is the sample efficiency of dialogue policy learning, we use the context-to-response generation task of Multiwoz2.0~\cite{multiwoz} and its evaluation metrics to measure the quality of the responses as our primary objective; for completeness we also evaluate the performance of our method on the end-to-end dialogue modeling task. Both of these settings use three evaluation metrics: 1) inform rate - the fraction of dialogues in which the system has provided the correct entity, 2) success rate - the fraction of dialogues in which the system has answered all the requested information and 3) BLEU \cite{bleu} - which measures the fluency of the generated response. We also report the combined score $(Inform + Success) \times 0.5 + BLEU$ proposed by \citet{slrl}. All CASPI numbers reported in this work are medians of 5 runs with different seeds.
\subsubsection{Training}
For the metric $M$ used in pairwise causal reward learning, we use the following:
\begin{equation}
M := Inform + Success + \lambda \times BLEU
\label{metricTrain}
\end{equation}
This is very similar to the combined score used in evaluation, and the two are equivalent when $\lambda = 2$. We introduced the hyperparameter $\lambda$ to normalize the achievable scale of $BLEU$. We observe that the success rate, if used as is, results in a non-Markovian and stochastic per-turn reward function, since the reward of the current state depends on the performance of future states. Hence, we also use a soft version of the metric, $M_{soft}$, where the success rate measures the fraction of requested information provided in a dialogue. We refer to the original metric that uses the discrete variant of the success rate as $M_{hard}$. The choice of action in the reward function $R(s_t,a_t,g)$ can either be the dialogue act or the generated response; we refer to the corresponding variants of the metric as $M(act)$ and $M(resp)$. To demonstrate the versatility of the method to adapt to different metrics, we use all the discussed variants.
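A minimal sketch of this training metric for a single dialogue follows; deriving success from raw slot counts and the default value of $\lambda$ are illustrative assumptions.
\begin{verbatim}
def training_metric(inform, n_provided, n_requested, bleu,
                    lam=0.5, soft=True):
    # M_soft: success is the fraction of requested slots provided;
    # M_hard: success is 1 only if everything requested was provided.
    if soft:
        success = n_provided / max(n_requested, 1)
    else:
        success = float(n_provided == n_requested)
    return inform + success + lam * bleu
\end{verbatim}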
\subsection{Baselines}
DAMD: Introduced by \cite{damd}, DAMD is a domain-aware multi-decoder network. The method also exploits the stochastic nature of the dialogue act through a data-augmentation technique called multi-action data augmentation; DAMD with this augmentation is denoted here as DAMD + multi-action.
HDSA \cite{HDSA} proposes to use a hierarchical graph representation for the dialogue act. It uses a pre-trained 12-layer BERT model (Devlin et al., 2019) to represent the dialogue act. The predicted dialogue act is transformed into the hierarchical graph structure using a disentangled self-attention model, a 3-layer self-attention model (Vaswani et al., 2017).
SOLOIST \cite{soloist} and SimpleTOD \cite{simpletod} use pretrained GPT-2-based methods. These methods are trained on turn-level data without generated belief state and system act in the dialogue history.
MinTL-BART \cite{mintl} introduced the Levenshtein belief spans framework, which predicts only the incremental change in the dialogue state per turn. It leverages the pretrained T5 and BART \cite{bart} as backbones for the model architecture.
HDNO, proposed by \cite{hdno}, is a dialogue policy learning method for the context-to-response generation task of Multiwoz2.0 \cite{multiwoz}. It exploits the hierarchical nature of the dialogue act and response generation tasks by proposing an option-based framework of hierarchical RL and a variational model to learn a latent dialogue act that corresponds to the natural language response. Unlike our method, HDNO, though it highlights the risk of using a sparse metric such as the success rate as the reward function, resorts to shaping a proxy reward function: a Markov language model, learnt independently of the metric function. Our method refrains from reward shaping and is independent of the nature of any under-specified metric function. Since we learn fine-grained, turn-specific credit assignment, our solution can adapt to other metric functions as long as the pairwise reward network is rich enough to factorize them.
\subsection{Model}
\label{model}
\subsubsection{CASPI(.)}
We believe our pairwise causal reward learning and the associated sample-efficiency improvement are independent of the model architecture used for learning task-oriented dialogue systems. As argued in the previous section, our approach can provide sample weights for any existing method. To this end we choose two ToD methods that sit at opposite ends of the model-architecture spectrum: 1) one uses a lightweight custom model and 2) the other uses a large, standard, pre-trained out-of-the-box universal language model. We demonstrate the ease of integrating CASPI with these methods, and the resulting improvements in performance and sample efficiency.
\subsubsection{CASPI(DAMD)}
In this setting, we use the neural model proposed by \cite{damd}, without their key contribution of data augmentation, as the baseline for our experiments. DAMD is composed of three $seq2seq$ generative models using GRUs, one each for the belief state, dialogue act and response generation modules. An attention layer is then used to attend over the outputs of the $seq2seq$ models with the context vector of the previous turn for the copy-over mechanism. The outputs are then used as representations for predicting the series of tokens of their respective modules. For more details on the model architecture and parameter settings refer to \cite{damd}. In this setting we use both the stochastic, $L_{sto}$, and deterministic, $L_{det}$, loss functions on the dialogue act. For DST and response generation, we retain the cross-entropy loss as is from DAMD \cite{damd}.
\subsubsection{CASPI(MinTL)}
On the other extreme of model complexity, we use the task-oriented dialogue model MinTL \cite{mintl}. MinTL uses the large pretrained language model BART \cite{bart}. BART uses a standard encoder-decoder transformer architecture with a bidirectional encoder and an auto-regressive decoder. It is pre-trained on the task of denoising corrupt documents, and is trained using a cross-entropy loss between the decoder output and the original document. For more details on the model architecture and parameter settings, we refer the reader to \cite{mintl,bart}.
MinTL doesn't explicitly predict the dialogue act. Hence we only use the deterministic loss, $L_{det}$, directly on the generated response, and for DST we retain the loss as is from MinTL \cite{mintl}.
\subsubsection{Pairwise Causal Learning Network}
Fig.~\ref{fig:pairwise_flow} describes the process flow of pairwise causal reward learning. We chose the DAMD \cite{damd} model, for its light weight, to train the $K$ baseline models and, in the process of training, generate rollouts for pairwise causal reward learning. In all our experiments, we use $K=10$.
Fig.~\ref{fig:pairwise_model} illustrates the pairwise causal reward learning network. We use three single bi-LSTM layers, one each to encode the goal, belief state and dialogue act (or response) sequences at each dialogue turn of each of the sampled roll-out pairs, $\tau^1$ and $\tau^2$. The three encoded representations are concatenated and fed through a couple of feed-forward layers before making a bounded reward prediction $R(s_t, a_t, g)$ for each turn using a sigmoid function. The per-turn rewards are summed to form a global reward $R(\tau)$ for the roll-out $\tau$. Using a pair of dialogue rewards $R(\tau^1)$ and $R(\tau^2)$, we compute the probabilistic preference between the roll-outs, $P[\tau^1 \succ \tau^2]$, either by standard normalization or a softmax function. The output is optimized using the cross-entropy loss described in Eqn.~\ref{pairwiseobj}.
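A condensed PyTorch sketch of this architecture is given below; the shared embedding, layer sizes and omission of padding/masking are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class PairwiseRewardNet(nn.Module):
    def __init__(self, vocab_size, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hid)
        # one bi-LSTM each for goal, belief state, act/response
        self.enc = nn.ModuleList(
            [nn.LSTM(hid, hid, bidirectional=True, batch_first=True)
             for _ in range(3)])
        self.ff = nn.Sequential(nn.Linear(6 * hid, hid), nn.ReLU(),
                                nn.Linear(hid, 1), nn.Sigmoid())

    def forward(self, goal, belief, act):
        feats = []
        for enc, seq in zip(self.enc, (goal, belief, act)):
            _, (h, _) = enc(self.emb(seq))              # h: (2, B, hid)
            feats.append(h.transpose(0, 1).flatten(1))  # (B, 2*hid)
        # concatenated encodings -> bounded per-turn R(s_t, a_t, g)
        return self.ff(torch.cat(feats, dim=-1))
\end{verbatim}
The per-turn outputs are then summed over a roll-out to obtain $R(\tau)$ before applying the preference objective.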
\section{Related Works}
With the release of the multi-domain, multi-turn MultiWoz2.0 dataset~\cite{multiwoz}, there has been a flurry of recent works, of which \cite{damd} uses data augmentation.
\citet{schemaGuided} and \citet{simpletod} frame dialogue policy learning as a language modeling task. Among the works that use reinforcement learning, \citet{slrl} uses supervised learning to bootstrap followed by RL fine-tuning, whereas \cite{larl} uses policy gradient on a latent action space as opposed to handcrafted ones. To the best of our knowledge, \cite{wayOffPolicy} and \cite{hdno} are the only other works that use batch RL for dialogue policy learning. Recently there has been a proliferation of large pretrained language model based systems such as \cite{simpletod,mintl,HDSA}.
The line of inverse RL used in this work can be traced back to \citet{maxEntropyIRL}, which proposes that roll-outs from expert demonstrations should have rewards exponentially higher than any other arbitrary roll-outs. This method requires a normalizing constant that integrates across roll-outs, which is challenging. \citet{humanprefernce} and \citet{rankingIRL} propose a relative comparison of two roll-outs, thereby eliminating the need for the normalization constant, and demonstrate this in an online setting.
\section{Result}
\begin{table}
\centering
\begin{tabular}{llll}
\hline
\textbf{Train data} & \textbf{Inform \%} & \textbf{Success \%} & \textbf{BLEU} \\
\hline
100\% & 96.8 & 87.3 & 19.1 \\
75\% & 94.2 & 81.4 & 19.2 \\
50\% & 91.2 & 76.6 & 17.7 \\
25\% & 91.5 & 68.3 & 15 \\
\hline
\end{tabular}
\caption{\label{sampleEff2}
Sample efficiency study of CASPI(DAMD) on context-to-response generation task of MultiWoz2.0}
\end{table}
We first compare our method against the current state-of-the-art methods on the context-to-response generation task defined by MultiWoz2.0 \cite{multiwoz}. The results are tabulated in Table~\ref{context2resp}. We use the CASPI adaptation of DAMD, CASPI(DAMD), for this task. CASPI(DAMD) performs better than the other methods on three of the four performance criteria, i.e.\ success rate, inform rate and combined score. HDSA \cite{HDSA} has a better BLEU score; this rich expressiveness of natural language by HDSA stems from its use of the large 12-layer BERT \cite{bert} model.
Secondly, we compare both adaptations of our method, CASPI(DAMD) and CASPI(MinTL), on the end-to-end dialogue task defined by MultiWoz2.0 \cite{multiwoz}. The results are tabulated in Table~\ref{end2endResults}. CASPI(DAMD), with its lightweight model architecture and no pretraining on any external corpus, was able to outperform all previous methods on all evaluation criteria. This goes to show that using CASPI to shepherd the gradient update process, via sample weights for each dialogue turn, leads to a model that is well aligned with the true objective of the task. CASPI(MinTL), with its robust pretrained model, outperforms CASPI(DAMD) by a large margin, which demonstrates the ease of adapting existing methods with CASPI.
\subsection{Sample Efficiency}
\begin{figure}
\scalebox{.3}{
\centering
\includegraphics{figs/HITL_learning.png}}
\caption{Mixed Human-in-the-loop and automatic evaluation metric scores for pairwise causal reward learning}
\label{fig:HITLLearning}
\end{figure}
Inverse reinforcement learning, coupled with off-policy policy learning and evaluation, has proven to be sample efficient \cite{batchRLsampleeff}. We argue CASPI is competitive with other sample-efficiency techniques, such as data augmentation and transfer learning as performed by \cite{damd} and \cite{mintl} respectively. To test this hypothesis, we evaluate our method against the baselines in a low sample complexity regime. For the experimental setup, we adopt the low-resource testing strategy from \cite{mintl}. We train our model on 5\%, 10\%, and 20\% of the training data and compare with other baselines on the end-to-end dialogue and context-to-response generation tasks; Tables \ref{sampleEff1} and \ref{sampleEff2} list the results. In the end-to-end task, CASPI(MinTL) trained on only 10\% of the data was able to outperform the previous state-of-the-art method, MinTL trained on 100\% of the data, on two of the three performance metrics. On the context-to-response generation task, CASPI(DAMD) trained on 75\% of the data was able to match the 100\%-data performance of HDNO. This goes to show that having the right reward function to guide the budget of the gradient update process toward the true objective is important in an extremely low-resource setting.
\subsection{Human Evaluation}
\begin{figure}
\centering
\begin{subfigure}
\centering
\includegraphics[width=0.5\textwidth]{figs/HumaEvalIllustration.png}
\caption{Example of generated responses by different ToD models}
\label{fig:HumanEvalIllustration}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[width=0.5\textwidth]{figs/HumanEval.png}
\caption{Human evaluation on the criteria of appropriateness and fluency}
\label{fig:HumanEvalResults}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[width=0.5\textwidth]{figs/CaspiHITL.png}
\caption{Human evaluation of Human in the loop training of CASPI(MinTL) on 5\% of Multiwoz2.0 dataset}
\label{fig:HITLHumanEvaluation}
\end{subfigure}
\end{figure}
Automatic evaluation metrics have their own biases. The true objective of ToD is the human experience while interacting with the dialogue system, which automatic evaluation metrics may fall short of capturing. To this end we conduct a human evaluation of the quality of the generated responses. We define quality by the following criteria:
1) Appropriateness: Are the generated responses appropriate for the given context in the dialogue turn?
2) Fluency: Are the generated responses coherent and comprehensible?
A dialogue turn in the test set is randomly picked. The human evaluators were shown the context leading up to that turn. The predictions for the turn by the different models were anonymized and displayed to the evaluators, as illustrated in Fig.~\ref{fig:HumanEvalIllustration}. The human evaluators were asked to give a score between 1 and 5 for appropriateness and fluency, with a score of 5 being the best and 1 the worst. 100 randomly selected dialogue turns were presented to 10 participants. We report the mean and variance of the scores. We compare our model's performance against MinTL \cite{mintl}, SimpleTOD \cite{simpletod} and DAMD \cite{damd}. Fig.~\ref{fig:HumanEvalResults} shows the results of the evaluation. CASPI(MinTL) outperforms all other models in the appropriateness score, while the fluency scores of CASPI(MinTL), MinTL and SimpleTOD are comparable to each other.
\subsection{Human in the loop training}
\label{hitlSec}
In the previous section we argued that automatic dialogue evaluation metrics are biased and do not truly reflect the human objective; yet our method uses these very same dialogue evaluation metrics to learn the reward $R(s_t,a_t,g)$.
To bridge this gap, we performed the following human-in-the-loop (HITL) experiment. We first trained a pair of CASPI(MinTL) models, with different seeds, on 5\% of the Multiwoz2.0 dataset. We then used this pair of models to predict on 0.5\% of the Multiwoz2.0 train data (40 dialogues) and had a human score these pairs of generated responses relative to each other. We then trained for the reward $R(s_t,a_t,g)$ using pairwise causal reward learning as described in Sec.~\ref{pairwiseMethod}, where the examples of each mini-batch are randomly sampled either from the human-scored examples or from those scored by the automatic evaluation metric, as shown in Fig.~\ref{fig:HITLLearning}. We then trained a fresh CASPI(MinTL) model on the original 5\% of the data and the learnt $R(s_t,a_t,g)$. We performed a human evaluation of the trained model on 24 dialogues from the test set using 3 participants. Fig.~\ref{fig:HITLHumanEvaluation} shows the performance.
Though CASPI(MinTL) using just 5\% of the data outperforms DAMD trained on 100\% of the data in 2 out of the 3 automatic evaluation metrics shown in Tables~\ref{end2endResults} and \ref{sampleEff1}, it performs poorly on the human appropriateness score. With the HITL scores included in reward learning, we see a boost in performance on both human evaluation criteria: appropriateness and fluency. The 5\%-data CASPI(MinTL)'s human appropriateness score is now comparable to 100\%-data DAMD. This goes to show the versatility of pairwise causal reward learning: with enough richness in the neural network used, it can generalize to unknown dialogue evaluation criteria.
\subsection{Analysis}
\begin{figure}
\scalebox{.27}{
\centering
\includegraphics{figs/reward_analysis.png}}
\caption{Example of reward learning process}
\label{fig:learntreward}
\end{figure}
\begin{figure}
\centering
\scalebox{.45}{
\includegraphics[width=\textwidth]{figs/TypeOfAgents.png}}
\caption{Example of agent behaviour in the low sample regime.}
\label{fig:agentType}
\end{figure}
\subsubsection{Rewards}
In this section we qualitatively analyze the results of pairwise causal reward learning. Fig.~\ref{fig:learntreward} shows the same conversation between a tourist and an information center agent that we introduced earlier; now we have the reward $R(s_t,a_t,g)$ that pairwise causal reward learning has predicted against each turn. We observe that Turn\#3 has received the highest reward; retrospectively we realize that this is the turn in which the transaction happens, a crucial and risk-averse turn in a dialogue, which is captured by the success rate of the automatic evaluation metric. Turn\#2 gets the next best reward, as it captures crucial information needed for the transaction in Turn\#3 to happen. Turn\#4 gets a reward an order of magnitude lower than Turns\#3 and \#2 because, other than nicety, it doesn't contribute much to the success of the conversation. It should be noted that a turn like Turn\#4 appears in almost all conversations, so in supervised learning it would receive the highest share of the gradient. The learnt reward redistributes the gradient budget in a way that is aligned with the success of the dialogue objective.
\subsubsection{Type of agents}
In this section we analyze the types of behaviour CASPI agents sometimes exhibit, especially when trained in the low sample regime.
Greedy agent: In certain domains, the agent has a tendency to book a service before it has gathered all the required information, or before the user has requested or agreed to book the service. The first example in Fig.~\ref{fig:agentType} demonstrates this behaviour: the user has requested a taxi and, before enough information such as the destination or time of departure is gathered, the agent books the taxi. This happens because there are gaps in the automatic evaluation metrics. A low BLEU score and relatively high inform and success rates might indicate greedy agent behaviour. Other reasons for a low BLEU score include a lack of diversity in the responses or malformed responses.
Cautious agent: The agent tends to be cautious by providing long-winded replies packed with more information than needed. The agent tends to do this so as not to run the risk of losing reward through the inform rate. This behaviour is demonstrated in the second example in Fig.~\ref{fig:agentType}.
These subtle behaviours demonstrate gaps in the automatic evaluation metrics; they could be weeded out using human-in-the-loop training as described in Sec.~\ref{hitlSec}.
\section{Introduction}
This paper aims at presenting the shared tasks and the datasets of the eighth BioASQ challenge in 2020, as well as at providing an overview of the participating systems and their performance.
Towards this direction, in section~\ref{sec:tasks} we provide an overview of the shared tasks, that took place from February to May 2020, and the corresponding datasets developed for the challenge.
In section~\ref{sec:participants}, we present a brief overview of the systems developed by the participating teams for the different tasks.
Detailed descriptions for some of the systems are available in the proceedings of the lab.
In section~\ref{sec:results}, we focus on evaluating the performance of the systems for each task and sub-task, using state-of-the-art evaluation measures or manual assessment.
Finally, in section~\ref{sec:conclusion}, we sum up this version of the BioASQ challenge.
\section{Overview of the Tasks}
\label{sec:tasks}
This year, the eighth version of the BioASQ challenge comprised three tasks: (1) a large-scale
biomedical semantic indexing task (task 8a), (2) a biomedical question answering task (task 8b), both considering documents in English, and (3) a new task on medical semantic indexing in Spanish (task MESINESP). In this section we provide a brief description of the two established tasks with focus on differences from previous versions of the challenge~\cite{Nentidis2019}. A detailed overview of these tasks and the general structure of BioASQ are available in \cite{Tsatsaronis2015}. In addition, we describe the new MESINESP task on semantic indexing of medical content written in Spanish (medical literature abstracts, clinical trial summaries and health-related project descriptions), which was introduced this year~\cite{Krallinger2020}, providing statistics about the dataset developed for it.
\subsection{Large-scale semantic indexing - Task 8a}
\begin{table}[!htb]
\centering
\begin{tabular}{M{0.1\linewidth}M{0.15\linewidth}M{0.3\linewidth}M{0.3\linewidth}}\hline
\textbf{Batch} & \textbf{Articles} & \textbf{Annotated Articles} & \textbf{Labels per Article} \\ \hline
\multirow{5}{*}{1} & 6510 & 6487 & 12.49 \\
& 7126 & 7074 & 12.27 \\
& 10891 & 10789 & 12.55 \\
& 6225 & 6182 & 12.28 \\
& 6953 & 6887 & 12.75 \\ \hline
Total & 37705 & 37419 & 0.99 \\ \hline
\multirow{5}{*}{2} & 6815 & 6787 & 12.49 \\
& 6485 & 6414 & 12.52 \\
& 7014 & 6975 & 11.92 \\
& 6726 & 6647 & 12.90 \\
& 6379 & 6246 & 12.45 \\ \hline
Total & 33419 & 33069 & 0.99 \\ \hline
\multirow{5}{*}{3} & 6842 & 6601 & 12.70 \\
& 7212 & 6456 & 12.37 \\
& 5430 & 4764 & 12.59 \\
& 6022 & 4858 & 12.33 \\
& 5936 & 3999 & 12.21 \\ \hline
Total & 31442 & 26678 & 0.85 \\ \hline
\end{tabular}
\caption{Statistics on test datasets for Task 8a.}\label{tab:a_data}
\end{table}
In Task 8a the aim is to classify articles from the PubMed/MedLine\footnote{https://pubmed.ncbi.nlm.nih.gov/} digital library into concepts of the MeSH hierarchy. In particular, new PubMed articles that are not yet annotated by the indexers in NLM are gathered to form the test sets for the evaluation of the participating systems.
Some basic details about each test set and batch are provided in Table \ref{tab:a_data}.
As done in previous versions of the task, the task is divided into three independent batches of 5 weekly test sets each, providing an on-line and large-scale scenario, and the test sets consist of new articles without any restriction on the journal in which they were published.
The performance of the participating systems is calculated using standard flat information retrieval measures, as well as hierarchical ones, when the annotations from the NLM indexers become available.
As usual, participants have 21 hours to provide their answers for each test set.
However, as it has been observed that new MeSH annotations are released in PubMed earlier than in previous years, we shifted the submission period accordingly to avoid having some annotations available from NLM while the task is still running.
For training, a dataset of 14,913,939 articles with 12.68 labels per article, on average, was provided to the participants.
\subsection{Biomedical semantic QA - Task 8b}
Task 8b aims at providing a realistic large-scale question answering challenge offering to the participating teams the opportunity to develop systems for all the stages of question answering in the biomedical domain. Four types of questions are considered in the task: “yes/no”, “factoid”, “list” and “summary” questions \cite{balikas13}.
A training dataset of 3,243 questions annotated with golden relevant elements and answers is provided for the participants to develop their systems.
Table \ref{tab:b_data} presents some statistics about the training dataset as well as the five test sets.
\begin{table}[!htb]
\centering
\begin{tabular}{M{0.1\linewidth}M{0.08\linewidth}M{0.08\linewidth}M{0.09\linewidth}M{0.1\linewidth}M{0.16\linewidth}M{0.16\linewidth}M{0.16\linewidth}}\hline
\textbf{Batch} & \textbf{Size} & \textbf{Yes/No} &\textbf{List} &\textbf{Factoid} &\textbf{Summary}& \textbf{Documents} & \textbf{Snippets} \\ \hline
Train & 3,243 & 881 &644 &941 &777 & 10.15 & 12.92 \\
Test 1 & 100 & 25 &20 &32 &23 & 3.45 & 4.51 \\
Test 2 & 100 & 36 &14 &25 &25 & 3.86 & 5.05 \\
Test 3 & 100 & 31 &12 &28 &29 & 3.35 & 4.71 \\
Test 4 & 100 & 26 &17 &34 &23 & 3.23 & 4.38 \\
Test 5 & 100 & 34 &12 &32 &22 & 2.57 & 3.20 \\ \hline
\textbf{Total} & 3,743 & 1033 &719 &1092 &899 & 9.23 & 11.78 \\ \hline
\end{tabular}
\caption{Statistics on the training and test datasets of Task 8b. The numbers for the documents and snippets refer to averages per question.}\label{tab:b_data}
\end{table}
As in previous versions of the challenge, the task is structured into two phases that focus on the retrieval of the required information (phase A) and answering the question (phase B).
In addition, the task is split into five independent bi-weekly batches and the two phases for each batch run during two consecutive days. In each phase, the participants receive the corresponding test set and have 24 hours to submit the answers of their systems.
In particular, in phase A, a test set of 100 questions written in English is released and the participants are expected to identify and submit relevant elements from designated resources, including PubMed/MedLine articles, snippets extracted from these articles, concepts and RDF triples.
In phase B, the manually selected relevant articles and snippets for these 100 questions are also released and the participating systems are asked to respond with \textit{exact answers}, that is entity names or short phrases, and \textit{ideal answers}, that is natural language summaries of the requested information.
\subsection{Medical semantic indexing in Spanish - MESINESP8}
There is a pressing need to improve access to the information contained in health and biomedicine related documents, not only for professional medical users but also for researchers, public healthcare decision makers, the pharma industry and, particularly, patients. Currently, most biomedical NLP and IR research is done on content in English, despite the fact that a large volume of medical documents is published in other languages, including Spanish. Key resources like PubMed focus primarily on data in English, but provide outlinks to articles originally published in Spanish. MESINESP attempts to promote the development of systems for automatic indexing with structured medical vocabularies (DeCS terms) of healthcare content in Spanish: IBECS\footnote{\footnotesize IBECS includes bibliographic references from scientific articles in health sciences published in Spanish journals. \url{http://ibecs.isciii.es}}, LILACS\footnote{\footnotesize LILACS is the most important and comprehensive index of scientific and technical literature of Latin America and the Caribbean. It includes 26 countries, 882 journals and 878,285 records, 464,451 of which are full texts \url{https://lilacs.bvsalud.org}}, REEC
\footnote{\footnotesize Registro Español de Estudios Clínicos, a database containing summaries of clinical trials \url{https://reec.aemps.es/reec/public/web.html}} and FIS-ISCIII
\footnote{\footnotesize public healthcare project proposal summaries (Proyectos de Investigación en Salud, diseñado por el Instituto de Salud Carlos III, ISCIII) \url{https://portalfis.isciii.es/es/Paginas/inicio.aspx}}. The main aim of MESINESP is to promote the development of semantic indexing tools of practical relevance for non-English content, determining the current state of the art, identifying challenges and comparing the strategies and results to those published for English data. This task was organized within the framework of the Spanish Government's Plan for Promoting Language Technologies (Plan TL), which aims to promote the development of natural language processing, machine translation and conversational systems in Spanish and co-official languages.
A training dataset of 369,368 articles manually annotated with DeCS codes (\emph{Descriptores en Ciencias de la Salud}, derived and extended from MeSH terms)\footnote{\footnotesize 29,716 come directly from MeSH and 4,402 are exclusive to DeCS } was released. 1,500 articles were manually annotated and verified by at least two human experts (from a pool of 7 annotators), and from them a development set and a gold standard for evaluation were generated. A further background dataset was produced from diverse sources, including machine-translated text.
Consistently, the different collections averaged, per document, around 10 sentences, 13 DeCS codes, and 300 words, of which between 130 and 140 were unique ones.
In order to explore the diversity of content in this dataset, we generated clusters of semantically similar records from the training dataset's titles by first creating a Doc2Vec model with the gensim library,\footnote{\footnotesize \url{https://radimrehurek.com/gensim/}} and then feeding the resulting document-similarity information to an unsupervised DBSCAN algorithm from the sklearn python package,\footnote{\footnotesize \url{https://scikit-learn.org/}} which creates clusters from high-density samples. The resulting 27 clusters were visualized with the libraries from the Carrot Workbench project\footnote{\footnotesize \url{https://project.carrot2.org/}}
(Figure~\ref{fig:02}).
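A minimal sketch of this clustering pipeline (gensim 4.x API; the Doc2Vec and DBSCAN hyperparameters are placeholders, not the values behind the 27 reported clusters) is:
\begin{verbatim}
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import DBSCAN

def cluster_titles(titles):
    # Embed each title with Doc2Vec
    docs = [TaggedDocument(t.lower().split(), [i])
            for i, t in enumerate(titles)]
    model = Doc2Vec(docs, vector_size=100, min_count=2, epochs=20)
    vectors = [model.dv[i] for i in range(len(titles))]
    # DBSCAN builds clusters from high-density samples;
    # label -1 marks noise points outside any cluster.
    return DBSCAN(eps=0.4, min_samples=5,
                  metric="cosine").fit_predict(vectors)
\end{verbatim}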
\begin{figure*}[!htb]
\centerline{\includegraphics[width=1\textwidth]{figures/mesinesp_clusters_removed2dbscan.png}}
\caption{Content visualization of the MESINESP training dataset using clustering techniques. Among the subjects shown: clinical cases, non-Spanish languages, medication and device reviews, health care management, etc. This reflects the DeCS extension from MeSH terms to other subjects, such as public health issues.}\label{fig:02}
\end{figure*}
\section{Overview of participation}
\label{sec:participants}
\subsection{Task 8a}
This year, 7 teams participated in this eighth edition of the task, submitting predictions from 16 different systems in total. Here, we provide a brief overview of those systems for which a description was available,
stressing their key characteristics. A summary of the participating systems and corresponding approaches is presented in Table~\ref{tab:a_sys}.
\begin{table}[!htb]
\centering
\begin{tabular}{M{0.3\linewidth}M{0.6\linewidth}}\hline
\textbf{System} & \textbf{Approach} \\ \hline
X-BERT BioASQ & X-BERT, Transformers ELMo, MER \\\hline
NLM CNN & SentencePiece, CNN, embeddings, ensembles\\\hline
dmiip\_fdu & d2v, tf-idf, SVM, KNN, LTR, DeepMeSH, AttentionXML, BERT, PLT \\\hline
Iria & Lucene index, k-NN, stem bigrams, ensembles, UIMA ConceptMapper\\\hline
\end{tabular}
\caption{Systems and approaches for Task 8a. Systems for which no description was available at the time of writing are omitted. }\label{tab:a_sys}
\end{table}
This year, the LASIGE team from the University of Lisboa, in its ``X-BERT BioASQ'' system, proposes a novel approach for biomedical semantic indexing, combining a solution based on Extreme Multi-Label Classification (XMLC) with a Named Entity Recognition (NER) tool.
In particular, their system is based on X-BERT~\cite{chang2019x}, an approach to scale BERT~\cite{Devlin2018} to XMLC, combined with the use of the MER~\cite{Couto2018} tool to recognize MeSH terms in the abstracts of the articles.
The system is structured into three steps: the first step is the semantic indexing of the labels into clusters using ELMo~\cite{Peters2018}; the second step matches the indices using a Transformer architecture; and the third step ranks the labels retrieved from the previous indices.
Other teams improved upon existing systems already participating in previous versions of the task.
Namely, the National Library of Medicine (NLM) team, in its ``\textit{NLM CNN}'' system, enhances the previous version of their ``\textit{ceb}'' systems \cite{Rae2019}, based on an end-to-end Deep Learning (DL) architecture with Convolutional Neural Networks (CNN), with SentencePiece tokenization~\cite{Kudo2018}.
The Fudan University team also builds upon their previous ``\textit{AttentionXML}''~\cite{You2018} and ``\textit{DeepMeSH}''~\cite{peng2016} systems, as well as their new ``\textit{BERTMeSH}'' system, which are based on document-to-vector (d2v) and tf-idf feature embeddings, learning to rank (LTR), DL-based extreme multi-label text classification, attention mechanisms and Probabilistic Label Trees (PLT)~\cite{Jain2016}.
Finally, this year's versions of the ``\textit{Iria}'' systems~\cite{Ribadas2015} are based on the same techniques used in previous versions of the challenge, which are summarized in Table~\ref{tab:a_sys}.
Similarly to previous versions of the challenge, two systems developed by NLM to facilitate the annotation of articles by indexers in MedLine/PubMed were available as baselines for the semantic indexing task: MTI \cite{morkBioasq2014}, as enhanced in \cite{zavorin2016}, and an extension based on features suggested by the winners of the first version of the task \cite{tsoumakasBioasq}.
\subsection{Task 8b}
This version of Task b was tackled by 94 different systems in total, developed by 23 teams. In particular, 8 teams participated in the first phase, on the retrieval of relevant material required for answering the questions, submitting results from 30 systems.
In the second phase, on providing the exact and ideal answers for the questions, 18 teams participated with 72 distinct systems.
Three of the teams participated in both phases.
An overview of the technologies employed by the teams is provided in Table \ref{tab:b_sys} for the systems for which a description was available. Detailed descriptions for some of the systems are available in the proceedings of the workshop.
\begin{table}[!htb]
\centering
\begin{tabular}{M{0.2\linewidth}M{0.1\linewidth}M{0.55\linewidth}}\hline
\textbf{Systems} & \textbf{Phase}& \textbf{Approach} \\ \hline
pa & A, B &
BM25, BERT,
Word2Vec,
SQuAD, PubMedQA,
BioMed-RoBERTa
\\\hline
bio-answerfinder & A, B &
Bio-AnswerFinder, LSTM, ElasticSearch, BERT,
Electra, BioBERT, SQuAD, wRWM
\\\hline
Google & A & BM25, BioBERT, Synthetic Query Generation, BERT, reranking \\\hline
bioinfo & A & BM25, ElasticSearch, distant learning, DeepRank \\\hline
KU-DMIS & B &
BioBERT, NLI, MultiNLI, SQuAD,
BART, beam search, BERN, language\_check
\\\hline
NCU-IISR & B & BioBERT, logistic regression, LTR \\\hline
UoT & B & BioBERT, multi-task learning, BC2GM \\\hline
BioNLPer & B & BioBERT, multi-task learning, NLTK, ScispaCy \\\hline
LabZhu & B & BERT, BioBERT, XLNet, SpanBERT, transfer learning, SQuAD, ensembling\\\hline
MQ & B & Word2Vec, BERT, LSTM, Reinforcement Learning (PPO) \\\hline
DAIICT & B & textrank, lexrank, UMLS \\\hline
sbert & B & Sentence-BERT, BioBERT, SNLI, MutiNLI, multi-task learning, MQU \\\hline
\hline
\end{tabular}
\caption{Systems and approaches for Task8b. Systems for which no information was available at the time of writing are omitted.}\label{tab:b_sys}
\end{table}
The ``\textit{ITMO}'' team participated in both phases of the task, experimenting in its ``pa'' systems with different solutions across the batches. In general, for document retrieval the systems follow a two-stage approach: first, they identify initial candidate articles based on BM25, and then they re-rank them using variations of BERT~\cite{Devlin2018}, fine-tuned for the binary classification task with the BioASQ dataset and pseudo-negative documents. They extract snippets from the top documents and re-rank them using biomedical Word2Vec, based on cosine similarity with the question. To extract exact answers they use BERT fine-tuned on the SQuAD~\cite{rajpurkar2016squad} and BioASQ datasets, employing post-processing to split the answers of list questions and additional fine-tuning on PubMedQA~\cite{jin2019pubmedqa} for yes/no questions.
Finally, for ideal answers they generate some candidates from the snippets and their sentences and rerank them using the model used for phase A. In the last batch, they also experiment with generative summarization, developing a model based on BioMed-RoBERTa~\cite{gururangan2020don} to improve the readability and consistency of the produced ideal answers.
Another team participating in both phases of the task is the ``\textit{UCSD}'' team with its ``bio-answerfinder'' system. In particular, for phase A they rely on the previously developed Bio-AnswerFinder system~\cite{ozyurt2020bio}, which is also used as a first step in phase B for re-ranking the sentences of the snippets provided in the test set.
For identifying the exact answers for factoid and list questions they experimented with fine-tuning Electra~\cite{clark2020electra} and BioBERT~\cite{lee2019biobert} on the SQuAD and BioASQ datasets combined.
The answer candidates are then scored considering the classification probability, the top ranking of the corresponding snippets, and the number of occurrences. Finally, a normalization and filtering step is performed and, for list questions, an enrichment step based on coordinated phrase detection.
For yes/no questions they fine-tune BioBERT on the BioASQ dataset and use majority voting.
For summary questions, they employ hierarchical clustering, based on weighted relaxed word mover's distance (wRWMD) similarity~\cite{ozyurt2020bio}, to group the top sentences, and select the sentences ranked highest by Bio-AnswerFinder to be concatenated to form the summary.
In phase A, the ``\textit{Google}'' team participated with four distinct systems based on different approaches. In particular, they used a BM25 retrieval model, a neural retrieval model, initialized with BioBERT and trained on a large set of questions developed through Synthetic Query Generation (QGen), and a hybrid retrieval model~\footnote{https://ai.googleblog.com/2020/05/an-nlu-powered-tool-to-explore-covid-19.html} based on a linear blend of BM25 and the neural model~\cite{ma2020zero}. In addition, they also used a reranking model, rescoring the results of the hybrid model with a cross-attention BERT rescorer~\cite{pappas2019}.
The team from the University of Aveiro also participated in phase A with its ``bioinfo'' systems, which consist of a fine-tuned BM25 retrieval model based on ElasticSearch~\cite{gormley2015elasticsearch}, followed by a neural re-ranking step. For the latter, they use an interaction-based model inspired by the DeepRank~\cite{pang2017deeprank} architecture, building upon previous versions of their system~\cite{almeida2020calling}. The focus of the improvements was on the sentence-splitting strategy, the extraction of multiple relevance signals, and the independent contribution of each sentence to the final score.
In phase B, this year the ``\textit{KU-DMIS}'' team participated in both exact and ideal answers.
For exact answers, they build upon their previous BioBERT-based systems~\cite{Yoon2019} and try to adapt the sequential transfer learning of Natural Language Inference (NLI) to biomedical question answering. In particular, they investigate whether learning knowledge of entailment between two sentence pairs can improve exact answer generation, enhancing their BioBERT-based models with alternative fine-tuning configurations based on the MultiNLI dataset~\cite{williams2017broad}.
For ideal answer generation, they develop a deep neural abstractive summarization model based on BART~\cite{lewis2019bart} and beam search, with particular focus on pre-processing and post-processing steps. In particular, alternative systems were developed either considering the answers predicted by the exact answer prediction system in their input or not.
In the post-processing step, the generated candidate ideal answers for each question were scored using the predicted exact answers and grammar scores provided by the language\_check tool\footnote{https://pypi.org/project/language-check/}.
For factoid and list questions in particular, the BERN~\cite{kim2019neural} tool was also employed to recognize named entities in the candidate ideal answers for the scoring step.
The ``\textit{NCU-IISR}'' team also participated in both parts of phase B, constructing two BioBERT-based models for extracting the exact answer and ranking the ideal answers respectively. The first model is fine-tuned on the BioASQ dataset formulated as a SQuAD-type QA task that extracts the answer span. For the second model, they regard the sentences of the provided snippets as candidate ideal answers and build a ranking model with two parts. First, a BioBERT-based model takes as input the question and one of the snippet sentences and provides their representation. Then, a logistic regressor, trained on predicting the similarity between a question and each snippet sentence, takes this representation and outputs a score, which is used for selecting the final ideal answer.
The ``\textit{UoT}'' team participated with three different DL approaches for generating exact answers. In their first approach, they fine-tune separately two distinct BioBERT-based models extended with an additional neural layer depending on the question type, one for yes/no and one for factoid and list questions together. In their second system, they use a joint-learning setting, where the same BioBERT layer is connected with both the additional layers and jointly trained for all types of questions.
Finally, in their third system they propose a multi-task model to learn recognizing biomedical entities and answers to questions simultaneously, aiming at transferring knowledge from the biomedical entity recognition task to question answering. In particular, they
extend their joint BioBERT-based model with simultaneous training on the BC2GM dataset~\cite{smith2008overview} for recognizing gene and protein entities.
The ``\textit{BioNLPer}'' team also participated in the exact answers part of phase B, focusing on factoids. They proposed 5 BioBERT-based systems, using external feature enhancement and auxiliary task methodologies.
In particular, in their ``factoid qa model'' and ``Parameters retrained'' systems they consider the prediction of answer boundaries (start and end positions) as the main task and the whole answer content prediction as an auxiliary task.
In their ``Features Fusion'' system they leveraged external features, including NER and part-of-speech (POS) tags extracted with the NLTK~\cite{loper2002nltk} and ScispaCy~\cite{neumann2019scispacy} tools, as additional textual information, and fused them with the pre-trained language model representations to improve answer boundary prediction.
Then, in their ``BioFusion'' system they combine the two methodologies together.
Finally, their ``BioLabel'' system employed the general and biomedical domain corpus classification as the auxiliary task to help answer boundary prediction.
The ``LabZhu'' systems also participated in phase B, with focus on exact answers for the factoid and list questions. They treat answer generation as an extractive machine comprehension task and explore several different pretrained language models, including BERT, BioBERT, XLNet~\cite{DBLP:journals/corr/abs-1906-08237} and SpanBERT~\cite{joshi2020spanbert}. They also follow a transfer learning approach, training the models on the SQuAD dataset, and then fine-tuning them on the BioASQ datasets. Finally, they also rely on voting to integrate the results of multiple models.
The ``\textit{MQ}'' team, as in past years, focused on ideal answers, approaching the task as query-based summarisation. In some of their systems they retrain their previous classification and regression approaches~\cite{molla2019classification} on the new training dataset. In addition, they employ reinforcement learning with Proximal Policy Optimization (PPO)~\cite{schulman2017proximal} and two variants of input-feature representation, namely Word2Vec-based and BERT-based embeddings.
The ``\textit{DAIICT}'' team also participated in ideal answer generation, using the standard extractive summarization techniques textrank~\cite{mihalcea2004textrank} and lexrank~\cite{erkan2004lexrank} as well as sentence selection techniques based on their similarity with the query. They also modified these techniques investigating the effect of query expansion based on UMLS~\cite{bodenreider2004unified} for sentence selection and summarization.
Finally, the ``\textit{sbert}'' team also focused on ideal answers. They experimented with different embedding models and multi-task learning in their systems, using parts of previous ``\textit{MQU}'' systems for the pre-processing of the data and for the prediction step based on classification and regression~\cite{molla2019classification}.
In particular, they used a Universal Sentence Embedding Model~\cite{conneau2017supervised} (BioBERT-NLI~\footnote{https://huggingface.co/gsarti/biobert-nli}) based on a version of BioBERT fine-tuned on the SNLI~\cite{bowman2015large} and MultiNLI datasets, as in Sentence-BERT~\cite{reimers2019sentence}.
The features were fed to either a single logistic regression or a classification model to derive the ideal answers. Additionally, in a multi-task setting, they trained the model on both the classification and regression tasks, selecting one of them for the final prediction.
In this challenge too, the open-source OAQA system proposed by \cite{yang2016learning} served as the baseline for phase B exact answers. This system, which achieved among the highest performances in previous versions of the challenge, remains a strong baseline for the exact answer generation task. The system is developed on the UIMA framework. ClearNLP is employed for question and snippet parsing. MetaMap, TmTool \cite{Wei2016}, C-Value and LingPipe \cite{baldwin2003lingpipe} are used for concept identification, and UMLS Terminology Services (UTS) for concept retrieval. The final steps include the identification of concept, document and snippet relevance based on classifier components, followed by scoring and ranking techniques.
\subsection{Task MESINESP8}
\begin{table}[!htb]
\centering
\begin{tabular}{M{0.3\linewidth}M{0.5\linewidth}}\hline
\textbf{System} & \textbf{Approach} \\ \hline
Iria & bigrams, Lucene index, k-NN, ensembles, UIMA ConceptMapper\\\hline
Fudan University & AttentionXML with multilingual-BERT \\\hline
Alara (UNED) & Frequency graph matching \\\hline
Priberam & BERT based classifier, and SVM-rank ensemble\\\hline
LASIGE & X-BERT, Transformers ELMo, MER \\\hline
\end{tabular}
\caption{Systems and approaches for Task MESINESP8. Systems for which no description was available at the time of writing are omitted. }\label{tab:mesinesp_sys}
\end{table}
For the newly introduced MESINESP8 task, 6 teams from China, India, Portugal and Spain participated, and results from 24 different systems were submitted.
The approaches were similar to those used in the comparable English task, and included k-NN and Support Vector Machine classifiers, as well as deep learning frameworks like X-BERT and multilingual BERT, already described in subsection 3.1.
A simple lookup system was provided as a baseline for the MESINESP task. This system extracts terms from an annotated list and then checks whether these annotations are present in a set of text documents; essentially, it computes the intersection between the tokens of the annotations and the tokens of the documents. This simple approach obtains a MiF of 0.2695.
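A hedged reconstruction of this lookup baseline, assuming the annotated list maps each DeCS label to its tokens, is only a few lines of Python:
\begin{verbatim}
def lookup_baseline(document, annotations):
    # Assign every label whose tokens all occur in the document:
    # the token-intersection lookup described above.
    doc_tokens = set(document.lower().split())
    return [label for label, tokens in annotations.items()
            if set(tokens) <= doc_tokens]
\end{verbatim}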
\section{Results}
\label{sec:results}
\subsection{Task 8a}
\begin{table*}[!htbp]
\centering
\begin{tabular}{M{0.3\linewidth}M{0.1\linewidth}M{0.1\linewidth}M{0.1\linewidth}M{0.1\linewidth}M{0.1\linewidth}M{0.1\linewidth}}\hline
\textbf{System} & \multicolumn{2}{c}{\textbf{Batch 1}} & \multicolumn{2}{c}{\textbf{Batch 2}} & \multicolumn{2}{c}{\textbf{Batch 3}} \\ \hline
& MiF & LCA-F & MiF & LCA-F & MiF & LCA-F \\ \cline{2-7}
deepmesh\_dmiip\_fdu & \textbf{1.25} & \textbf{2.25} & 1.875 & 3.25 & 2.25 & 2.25 \\
deepmesh\_dmiip\_fdu\_ & 2.375 & 3.625 & \textbf{1.25} & \textbf{1.25} & 1.75 & 2 \\
attention\_dmiip\_fdu & 3 & \textbf{2.25} & 3.5 & 3.125 & 3 & 3.25 \\
Default MTI & 4.75 & 3.75 & 6 & 5.25 & 6 & 5.5 \\
MTI First Line Index & 5.5 & 4.5 & 6.75 & 5.875 & 5.75 & 5.25 \\
dmiip\_fdu & - & - & 2.375 & 1.625 & \textbf{1.5} & \textbf{1.25} \\
NLM CNN & - & - & 5 & 6.75 & 5.5 & 7 \\
iria-mix & - & - & - & - & 8.25 & 8.25 \\
iria-1 & - & - & - & - & 9.25 & 9.25 \\
X-BERT BioASQ & - & - & - & - & 10.75 & 10.75 \\
\hline
\end{tabular}
\caption{Average system ranks across the batches of task 8a. A dash (-) is used whenever the system participated in fewer than 4 test sets in the batch. Systems participating in fewer than 4 test sets in all three batches are omitted.}\label{tab:a_res}
\end{table*}
In Task 8a, each of the three batches were independently evaluated as presented in Table~\ref{tab:a_res}.
Standard flat and hierarchical evaluation measures \cite{balikas13} were used for measuring the classification performance of the systems. In particular, the micro F-measure (MiF) and the Lowest Common Ancestor F-measure (LCA-F) were used to identify the winners for each batch \cite{kosmopoulos2015evaluation}.
As suggested by Demšar \cite{Demsar06}, the appropriate way to compare multiple classification systems over multiple datasets is based on their average rank across all the datasets.
In this task, the system with the best performance in a test set gets rank 1.0 for this test set, the second best rank 2.0 and so on.
In case two or more systems tie, they all receive the average rank.
Then, according to the rules of the challenge, the average rank of each system for a batch is calculated based on the four best ranks of the system in the five test sets of the batch.
The average rank of each system, based on both the flat MiF and the hierarchical LCA-F scores, for the three batches of the task are presented in Table~\ref{tab:a_res}.
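For concreteness, the per-batch ranking procedure described above can be sketched as follows; the array layout is an assumption.
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def batch_average_rank(scores):
    # scores: (n_systems, n_test_sets) MiF or LCA-F values.
    # Rank 1 is best; ties get the average rank (rankdata default).
    ranks = np.apply_along_axis(lambda c: rankdata(-c), 0, scores)
    # Average each system's four best (lowest) ranks in the batch.
    return np.sort(ranks, axis=1)[:, :4].mean(axis=1)
\end{verbatim}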
\indent The results in Task 8a show that in all test batches and for both flat and hierarchical measures, the best systems outperform the strong baselines. In particular, the ``\textit{dmiip\_fdu}'' systems from the Fudan University team achieve the best performance in all three batches of the task. More detailed results can be found in the online results page\footnote{\footnotesize \url{http://participants-area.bioasq.org/results/8a/}}.
Comparing these results with the corresponding results from previous versions of the task suggests that both the MTI baseline and the top performing systems keep improving through the years of the challenge, as shown in Figure~\ref{fig:01}.
\begin{figure*}[!htb]
\centerline{\includegraphics[width=1\textwidth]{figures/MiF8a.png}}
\caption{The micro f-measure (MiF) achieved by systems across different years of the BioASQ challenge. For each test set the MiF score is presented for the best performing system (Top) and the MTI, as well as the average micro f-measure of all the participating systems (Avg). }\label{fig:01}
\end{figure*}
\subsection{Task 8b}
\textbf{Phase A}:
In the first phase of Task 8b, the systems are ranked according to the Mean Average Precision (MAP) measure for each of the four types of annotations, namely documents, snippets, concepts and RDF triples.
This year, the calculation of Average Precision (AP) in MAP for phase A was reconsidered as described in the official description of the evaluation measures for Task 8b\footnote{http://participants-area.bioasq.org/Tasks/b/eval\_meas\_2020/}.
In brief, since BioASQ3, the participating systems have been allowed to return up to 10 relevant items (e.g. documents), and the calculation of AP was modified to reflect this change. However, in recent years the number of golden relevant items has been observed to be lower than 10 in some cases, resulting in relatively small AP values even for submissions containing all the golden elements. For this reason, this year we modified the MAP calculation to consider both the limit of 10 elements and the actual number of golden elements.
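In code, the modified AP for a single question can be sketched as follows (our illustrative implementation, not the official evaluation code):
\begin{verbatim}
def average_precision(retrieved, golden, limit=10):
    # Precision is accumulated at the rank of every relevant item among
    # the (at most `limit`) returned ones; the denominator is the
    # smaller of `limit` and the number of golden items, as above.
    golden = set(golden)
    hits, prec_sum = 0, 0.0
    for k, item in enumerate(retrieved[:limit], start=1):
        if item in golden:
            hits += 1
            prec_sum += hits / k
    denom = min(limit, len(golden))
    return prec_sum / denom if denom else 0.0
\end{verbatim}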
In Tables \ref{tab:bA_res_doc} and \ref{tab:bA_res_sni}
some indicative preliminary results from batch 2 are presented. The full results are available on the online results page of Task 8b, phase A\footnote{\footnotesize \url{http://participants-area.bioasq.org/results/8b/phaseA/}}. The results presented here are preliminary, as the final results for Task 8b will be available after the manual assessment of the system responses by the BioASQ team of biomedical experts.
\begin{table*}[!htbp]
\centering
\begin{tabular}{M{0.3\linewidth}M{0.13\linewidth}M{0.13\linewidth}M{0.13\linewidth}M{0.13\linewidth}M{0.12\linewidth}}\hline
\textbf{System} & \textbf{Mean Precision} & \textbf{Mean Recall} & \textbf{Mean F-measure} & \textbf{MAP} & \textbf{GMAP} \\ \hline
pa & \textbf{0.1934} & 0.4501 & \textbf{0.2300} & \textbf{0.3304} & 0.0185\\
AUEB-System1 & 0.1688 & \textbf{0.4967} & 0.2205 & 0.3181 & 0.0165\\
bioinfo-3 & 0.1500 & 0.4880 & 0.2027 & 0.3168 & \textbf{0.0223}\\
bioinfo-1 & 0.1480 & 0.4755 & 0.1994 & 0.3149 & 0.0186 \\
bioinfo-4 & 0.1500 & 0.4787 & 0.2002 & 0.3120 & 0.0161 \\
AUEB-System2 & 0.1618 & 0.4864 & 0.2126 & 0.3103 & 0.0149 \\
bioinfo-2 & 0.1420 & 0.4648 & 0.1914 & 0.3084 & 0.0152 \\
bioinfo-0 & 0.1380 & 0.4341 & 0.1830 & 0.2910 & 0.0117 \\
AUEB-System5 & 0.1588 & 0.4549 & 0.2057 & 0.2843 & 0.0116 \\
Ir\_sys4 & 0.1190 & 0.4179 & 0.1639 & 0.2807 & 0.0056 \\
Google-AdHoc-MAGLEV & 0.1310 & 0.4364 & 0.1770 & 0.2806 & 0.0109 \\
Ir\_sys2 & 0.1190 & 0.4179 & 0.1639 & 0.2760 & 0.0055 \\
Google-AdHoc-BM25 & 0.1324 & 0.4222 & 0.1758 & 0.2718 & 0.0088 \\
AUEB-System3 & 0.1688 & \textbf{0.4967} & 0.2205 & 0.2702 & 0.0146 \\
Ir\_sys3 & 0.1325 & 0.3887 & 0.1730 & 0.2678 & 0.0045 \\
\hline
\end{tabular}
\caption{Results for document retrieval in batch 2 of phase A of Task 8b.
Only the top-15 systems are presented.
}\label{tab:bA_res_doc}
\end{table*}
\begin{table*}[!htbp]
\centering
\begin{tabular}{M{0.3\linewidth}M{0.14\linewidth}M{0.12\linewidth}M{0.14\linewidth}M{0.12\linewidth}M{0.12\linewidth}}\hline
\textbf{System} & \textbf{Mean Precision} & \textbf{Mean Recall} & \textbf{Mean F-measure} & \textbf{MAP} & \textbf{GMAP} \\ \hline
AUEB-System1 & \textbf{0.1545} & 0.2531 & \textbf{0.1773} & \textbf{0.6821} & 0.0015 \\
AUEB-System2 & 0.1386 & 0.2260 & 0.1609 & 0.6549 & 0.0011 \\
pa & 0.1348 & \textbf{0.2578} & 0.1627 & 0.3374 & \textbf{0.0047} \\
bioinfo-4 & 0.1308 & 0.2009 & 0.1413 & 0.2767 & 0.0016 \\
bioinfo-1 & 0.1373 & 0.2103 & 0.1461 & 0.2721 & 0.0016 \\
bioinfo-2 & 0.1299 & 0.2018 & 0.1408 & 0.2637 & 0.0011 \\
bioinfo-3 & 0.1321 & 0.2004 & 0.1404 & 0.2607 & 0.0014 \\
MindLab QA System & 0.0811 & 0.1454 & 0.0916 & 0.2449 & 0.0005 \\
MindLab Red Lions++ & 0.0830 & 0.1469 & 0.0932 & 0.2394 & 0.0005 \\
AUEB-System5 & 0.0943 & 0.1191 & 0.0892 & 0.2217 & 0.0011 \\
MindLab QA Reloaded & 0.0605 & 0.1103 & 0.0691 & 0.2106 & 0.0002 \\
Deep ML methods for & 0.0815 & 0.0931 & 0.0811 & 0.2051 & 0.0001 \\
bioinfo-0 & 0.1138 & 0.1617 & 0.1175 & 0.1884 & 0.0009 \\
MindLab QA System ++ & 0.0639 & 0.0990 & 0.0690 & 0.1874 & 0.0001 \\
AUEB-System3 & 0.0966 & 0.1285 & 0.0935 & 0.1556 & 0.0011 \\
bio-answerfinder & 0.0910 & 0.1617 & 0.1004 & 0.1418 & 0.0008 \\
AUEB-System4 & 0.0080 & 0.0082 & 0.0077 & 0.0328 & 0.0000 \\
\hline
\end{tabular}
\caption{Results for snippet retrieval in batch 2 of phase A of Task 8b.
}\label{tab:bA_res_sni}
\end{table*}
\textbf{Phase B}:
In the second phase of task 8b, the participating systems were expected to provide both exact and ideal answers.
Regarding the ideal answers, the systems will be ranked according to manual scores assigned to them by the BioASQ experts during the assessment of system responses~\cite{balikas13}.
For the exact answers, which are required for all questions except the summary ones, the measure considered for ranking the participating systems depends on the question type.
For the yes/no questions, the systems were ranked according to the macro-averaged F1-measure on the prediction of yes and no answers.
For factoid questions, the ranking was based on the mean reciprocal rank (MRR), and for list questions on the mean F1-measure.
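As an illustration, the MRR over a set of factoid questions can be sketched as follows (a simplified implementation; each question's golden entry is taken here to be a set of acceptable answer strings):
\begin{verbatim}
def mean_reciprocal_rank(predictions, golden):
    # predictions: ranked candidate answers per question;
    # golden: set of acceptable answers per question.
    total = 0.0
    for candidates, gold in zip(predictions, golden):
        for k, answer in enumerate(candidates, start=1):
            if answer in gold:
                total += 1.0 / k
                break
    return total / len(predictions)
\end{verbatim}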
Some indicative results for exact answers for the third batch of Task 8b are presented in Table~\ref{tab:bB_res}. The full results of phase B of Task 8b are available online\footnote{\footnotesize \url{http://participants-area.bioasq.org/results/8b/phaseB/}}. These results are preliminary, as the final results for Task 8b will be available after the manual assessment of the system responses by the BioASQ team of biomedical experts.
\begin{table*}[!htbp]
\centering
\begin{tabular}
{M{0.205\linewidth}M{0.0852\linewidth}M{0.0852\linewidth}M{0.105\linewidth}M{0.11\linewidth}M{0.0852\linewidth}M{0.0852\linewidth}M{0.0852\linewidth}M{0.0852\linewidth}}
\hline
\textbf{System} & \multicolumn{2}{c}{\textbf{Yes/No}} & \multicolumn{3}{c}{\textbf{Factoid}} & \multicolumn{2}{c}{\textbf{List}} \\
\hline
& Acc. & F1 & Str. Acc. & Len. Acc. & MRR & Prec. & Rec. & F1 \\ \cline{2-9}
Umass\_czi\_5 & \textbf{0.9032} & 0.8995 & 0.2500 & 0.4286 & 0.3030 & \textbf{0.7361} & 0.4833 & \textbf{0.5229} \\
Umass\_czi\_1 & 0.8065 & 0.8046 & 0.2500 & 0.3571 & 0.2869 & 0.6806 & 0.4444 & 0.4683 \\
Umass\_czi\_2 & 0.8387 & 0.8324 & 0.2500 & 0.3571 & 0.2869 & 0.6806 & 0.4444 & 0.4683 \\
pa-base & \textbf{0.9032} & 0.8995 & 0.2500 & 0.4643 & 0.3137 & 0.5278 & 0.4778 & 0.4585 \\
pa & \textbf{0.9032} & 0.8995 & 0.2500 & 0.4643 & 0.3137 & 0.5278 & 0.4778 & 0.4585 \\
Umass\_czi\_4 & \textbf{0.9032} & 0.9016 & \textbf{0.3214} & 0.4643 & 0.3810 & 0.6111 & 0.4361 & 0.4522 \\
KU-DMIS-1 & \textbf{0.9032} &\textbf{0.9028} & \textbf{0.3214} & 0.4286 & 0.3601 & 0.6583 & 0.4444 & 0.4520 \\
KU-DMIS-4 & 0.8387 & 0.8360 & 0.2857 & 0.4286 & 0.3357 & 0.6167 & 0.4444 & 0.4490 \\
KU-DMIS-5 & \textbf{0.9032} & \textbf{0.9028} & \textbf{0.3214} & 0.4643 & 0.3565 & 0.6167 & 0.4444 & 0.4490 \\
KU-DMIS-2 & 0.8710 & 0.8697 & \textbf{0.3214} & 0.4286 & 0.3446 & 0.6028 & 0.4444 & 0.4467 \\
KU-DMIS-3 & 0.8387 & 0.8360 & 0.2500 & 0.4643 & 0.3357 & 0.6111 & 0.4444 & 0.4431 \\
UoT\_allquestions & 0.5806 & 0.3673 & \textbf{0.3214} & 0.3929 & 0.3423 & 0.5972 & 0.4111 & 0.4290 \\
UoT\_baseline & 0.5806 & 0.3673 & \textbf{0.3214} & 0.3929 & 0.3512 & 0.4861 & 0.4056 & 0.4214 \\
Best factoid & 0.5806 & 0.4732 & 0.2857 & 0.3929 & 0.3333 & 0.5208 & 0.4056 & 0.4107 \\
bio-answerfinder & 0.8710 & 0.8640 & \textbf{0.3214} & 0.4286 & 0.3494 & 0.3884 & \textbf{0.5083} & 0.4078 \\
FudanLabZhu2 & 0.7419 & 0.6869 & \textbf{0.3214} & 0.5357 &\textbf{0.3970} & 0.5694 & 0.3583 & 0.3988 \\
FudanLabZhu3 & 0.7419 & 0.6869 &\textbf{0.3214} & 0.4643 & 0.3655 & 0.5583 & 0.3472 & 0.3777 \\
FudanLabZhu4 & 0.7419 & 0.6869 & 0.2857 & \textbf{0.5714} & 0.3821 & 0.5583 & 0.3472 & 0.3777 \\
FudanLabZhu5 & 0.7419 & 0.6869 & \textbf{0.3214} & 0.4286 & 0.3690 & 0.5583 & 0.3472 & 0.3777 \\
UoT\_multitask\_l. & 0.5161 & 0.3404 & \textbf{0.3214} & 0.4286 & 0.3643 & 0.5139 & 0.3556 & 0.3721 \\
BioASQ\_Baseline & 0.5161 & 0.5079 & 0.0714 & 0.2143 & 0.1220 & 0.2052 & 0.4833 & 0.2562 \\
\hline
\end{tabular}
\caption{Results for batch 3 for exact answers in phase B of Task 8b.
Only the performance of the top-20 systems and the BioASQ Baseline are presented.
\label{tab:bB_res}}
\end{table*}
Figure {\ref{fig:Exact}} presents the performance of the top systems for each question type in exact answers during the eight years of the BioASQ challenge.
The diagram reveals that this year the performance of systems in the yes/no questions keeps improving. For instance, in batch 3 presented in Table \ref{tab:bB_res}, various systems manage to outperform by far the strong baseline, which is based on a version of the OAQA system that achieved top performance in previous years.
Improvements are also observed in the preliminary results for list questions, whereas the top system performance in factoid questions fluctuates in the same range as last year.
In general, Figure {\ref{fig:Exact}} suggests that for the latter types of question there is still more room for improvement.
\begin{figure*}[!htbp]
\centerline{\includegraphics[width=1\textwidth]{figures/8bBExact.png}}
\caption{
The official evaluation scores of the best performing systems in Task B, Phase B, exact answer generation, across the eight years of the BioASQ challenge.
Since BioASQ6 the official measure for Yes/No questions is the macro-averaged F1 score (macro F1), but accuracy (Acc) is also presented as the former official measure. The results for BioASQ8 are preliminary, as the final results for Task 8b will be available after the manual assessment of the system responses. }\label{fig:Exact}
\end{figure*}
\subsection{Task MESINESP8}
The task proved to be a challenging one, but overall the results were quite good. Compared to the setting for English, the dataset was significantly smaller, and the evaluation covered not only medical literature but also clinical trial summaries and healthcare project summaries. Moreover, the literature databases providing the training data followed two different indexing approaches: IBECS relies on a more centralized manual indexing contracting system, whereas a number of LILACS records were indexed through a distributed community effort of human indexers. The training set contained 23,423 unique codes, while the 911 articles in the evaluation set contained almost 4,000 correct DeCS codes. The best predictions, by the Fudan University team, achieved a micro F-measure (MiF) of 0.4254 using their AttentionXML system with multilingual BERT, compared to the baseline score of 0.2695. In fact, the five best-scoring runs all came from this team. Table \ref{tab:mesinesp_res} shows the results of the runs for this task.
\begin{table}[!htb]
\centering
\begin{tabular}{M{0.31\linewidth}M{0.105\linewidth}M{0.105\linewidth}M{0.105\linewidth}M{0.105\linewidth}M{0.105\linewidth}M{0.105\linewidth}}
\hline
\textbf{System} & MiF & MiP & MiR & EBF & MaF & Acc. \\
\hline
Model 4 & \textbf{0.4254} & 0.4374 & \textbf{0.4140} & \textbf{0.4240} & \textbf{0.3194} & \textbf{0.2786} \\
Model 3 & 0.4227 & 0.4523 & 0.3966 & 0.4217 & 0.3122 & 0.2768 \\
Model 1 & 0.4167 & 0.4466 & 0.3906 & 0.4160 & 0.3024 & 0.2715 \\
Model 2 & 0.4165 & 0.4286 & 0.4051 & 0.4150 & 0.3082 & 0.2707 \\
Model 5 & 0.4130 & 0.4416 & 0.3879 & 0.4122 & 0.3039 & 0.2690 \\
PriberamTEnsemble & 0.4093 & 0.5336 & 0.3320 & 0.4031 & 0.2115 & 0.2642 \\
PriberamSVM & 0.3976 & 0.4183 & 0.3789 & 0.3871 & 0.2543 & 0.2501 \\
iria-mix & 0.3892 & 0.5353 & 0.3057 & 0.3906 & 0.2318 & 0.2530 \\
PriberamBert & 0.3740 & 0.4293 & 0.3314 & 0.3678 & 0.2009 & 0.2361 \\
iria-1 & 0.3630 & 0.5024 & 0.2842 & 0.3643 & 0.1957 & 0.2326 \\
iria-3 & 0.3460 & 0.5375 & 0.2551 & 0.3467 & 0.1690 & 0.2193 \\
iria-2 & 0.3423 & 0.4590 & 0.2729 & 0.3408 & 0.1719 & 0.2145 \\
PriberamSearch & 0.3395 & 0.4571 & 0.2700 & 0.3393 & 0.1776 & 0.2146 \\
iria-4 & 0.2743 & 0.3068 & 0.2481 & 0.2760 & 0.2619 & 0.1662 \\
BioASQ\_Baseline & 0.2695 & 0.2337 & 0.3182 & 0.2754 & 0.2816 & 0.1659 \\
graph matching & 0.2664 & 0.3501 & 0.2150 & 0.2642 & 0.1422 & 0.1594 \\
exact matching & 0.2589 & 0.2915 & 0.2328 & 0.2561 & 0.0575 & 0.1533 \\
LasigeBioTM TXMC F1 & 0.2507 & 0.3559 & 0.1936 & 0.2380 & 0.0858 & 0.1440 \\
Anuj\_Ensemble & 0.2163 & 0.2291 & 0.2049 & 0.2155 & 0.1746 & 0.1270 \\
Anuj\_NLP & 0.2054 & 0.2196 & 0.1930 & 0.2044 & 0.1744 & 0.1198 \\
NLPUnique & 0.2054 & 0.2196 & 0.1930 & 0.2044 & 0.1744 & 0.1198 \\
X-BERT BioASQ F1 & 0.1430 & 0.4577 & 0.0847 & 0.1397 & 0.0220 & 0.0787 \\
LasigeBioTM TXMC P & 0.1271 & 0.6864 & 0.0701 & 0.1261 & 0.0104 & 0.0708 \\
Anuj\_ml & 0.1149 & \textbf{0.7557} & 0.0621 & 0.1164 & 0.0006 & 0.0636 \\
X-BERT BioASQ & 0.0909 & 0.5449 & 0.0496 & 0.0916 & 0.0045 & 0.0503 \\ \hline
\end{tabular}
\caption{ Final scores for MESINESP task submissions, including the official MiF metric in addition to other complementary metrics.
\label{tab:mesinesp_res}}
\end{table}
Although MiF represents the official competition metric, other metrics are provided for completeness. It is noteworthy that another team (Anuj\_ml, from India), while not among the highest scoring on MiF, scored considerably higher than the other teams on precision metrics such as Example Based Precision (EBP), Macro Precision (MaP) and Micro Precision (MiP). Unfortunately, at this time we have not received details on their system implementation.
One problem with medical semantic concept indexing in Spanish, at least for diagnosis- or disease-related terms, is the uneven distribution and high variability of the terms~\cite{almagro2020}.
\section{Conclusions}
\label{sec:conclusion}
This paper provides an overview of the eighth BioASQ challenge. This year, the challenge consisted of three tasks: The two tasks on biomedical semantic indexing and question answering in English, already established through the previous seven years of the challenge, and the new MESINESP task on semantic indexing of medical content in Spanish, which ran for the first time.
The addition of the new challenging task on medical semantic indexing in Spanish, revealed that in a context beyond the English language, there is even more room for improvement, highlighting the importance of the availability of adequate resources for the development and evaluation of systems to effectively help biomedical experts dealing with non-English resources.
The overall shift of participant systems towards deep neural approaches, already noticed in the previous years, is even more apparent this year.
State-of-the-art methodologies have been successfully adapted to biomedical question answering and novel ideas have been investigated.
In particular, most of the systems adopted neural embedding approaches, notably based on BERT and BioBERT models, for all tasks of the challenge.
In the QA task in particular, different teams attempted transferring knowledge from general domain QA datasets, notably SQuAD, or from other NLP tasks such as NER and NLI, also experimenting with multi-task learning settings.
In addition, recent advancements in NLP, such as XLNet~\cite{DBLP:journals/corr/abs-1906-08237}, BART~\cite{lewis2019bart} and SpanBERT~\cite{joshi2020spanbert} have also been tested for the tasks of the challenge.
Overall, as in previous versions of the challenge, the top performing systems were able to advance the state of the art, outperforming the strong baselines on the challenging shared tasks offered by the organizers.
Therefore, we consider that the challenge keeps meeting its goal to push the research frontier in biomedical semantic indexing and question answering.
The future plans for the challenge include the extension of the benchmark data through a community-driven acquisition process.
\section{Acknowledgments}
Google was a proud sponsor of the BioASQ Challenge in 2019. The eighth edition of BioASQ is also sponsored by Atypon Systems Inc.
BioASQ is grateful to NLM for providing the baselines for task 8a and to the CMU team for providing the baselines for task 8b.
The MESINESP task is sponsored by the Spanish Plan for advancement of Language Technologies (Plan TL) and the Secretaría de Estado para el Avance Digital (SEAD).
BioASQ is also grateful to LILACS, SCIELO, the Biblioteca Virtual en Salud and the Instituto de Salud Carlos III for providing data for the BioASQ MESINESP task.
\bibliographystyle{splncs04}
\section{Introduction}
Mass, charge radius, lifetime, electric (magnetic) transition probability, and deformation are among the most fundamental observables for the many-body nuclear system.
A systematic analysis of these data has been successful in bringing forth a global picture of the atomic nuclei.
For example, experimental binding energies or charge radii for neighboring nuclei do not differ much
except in a few specific regions, such as near closed shells or at the onset of deformation.
On the other hand, comparing these observables for atomic nuclei that differ by one or a few neutrons or protons
has yielded many empirical relations or filters for special interaction strengths between the valence nucleons.
Of them, the Garvey-Kelson relations for nuclear binding energies are probably among the best-known
examples~\cite{GK1966PRL16,GK-np2001,GK2008PhysRevC.77.041304,GK2011PhysRevC.83.054309,GK2013-PhysRevC.87.057304,GK2013PhysRevC.87.014313,GK2013PhysRevC.88.064325,GK2014PhysRevC.89.061304}.
The validity of this relation in the nuclear charge radius has also been examined recently~\cite{Piekar2010EPJA46}.
Considering the nucleus as a liquid drop with the protons homogeneously distributed over its volume,
the radius of the charge (proton) distribution can be assumed to be equal to that of the nuclear mass distribution.
Although not quantitatively accurate, this simple liquid-drop picture can serve as a guide, and the interesting physics
can be found in local deviations from the global behavior.
In our recent work~\cite{Sun2014PhysRevC.90.054318,Sun2015PhysRevC.91.019902}
we proposed a set of nuclear charge radius relations $\delta R_{ip-jn}(Z,N)$,
\begin{eqnarray}
\delta R_{ip-jn}(Z,N)&= &R(Z,N)-R(Z,N-j) \nonumber \\
& &-[R(Z-i,N)-R(Z-i,N-j)] \nonumber \\
&\simeq & 0 \;,
\label{eq1}
\end{eqnarray}
where $R(Z,N)$ is the root-mean-square (rms) charge radius of the nucleus with $N$ neutrons and $Z$ protons. $i$ and $j$ are integers.
The validity of such a relation is a consequence of the smooth evolution of nuclear structure that is often found when
going from a nucleus to its neighbors. Eq.~(\ref{eq1}) holds precisely over almost the whole nuclear chart
except in a few regions characterized by shape transitions and shape coexistence at, e.g., $N \sim 60$, $N \sim 90$ and $Z \sim 80$.
These exceptions raise the possibility that more accurate local systematics may be developed from the experimental data.
One simple case connecting only even-even nuclei is
\begin{eqnarray}
\delta R_{2p-2n}(Z,N)&= &R(Z,N)-R(Z,N-2) \nonumber \\
& &-[R(Z-2,N)-R(Z-2,N-2)] \nonumber \\
&\simeq & 0 \;.
\label{eq2}
\end{eqnarray}
The term $R(Z,N)-R(Z,N-2)$, the so-called isotope shift,
involves the variation of the charge distribution when only two neutrons are added to the system.
In this sense, $\delta R_{2p-2n}(Z,N)$ is nothing but
the difference of isotope shifts for two neighboring isotopic chains. Hereafter we simply
rewrite $\delta R_{2p-2n}(Z,N)$ as $\delta R(Z,N)$.
In this work, we aim to examine and quantify the correlation between the local charge radius relations $\delta R(Z,N)$
and those of the deformation data. The correlation is established from cases in which both charge radius and quadrupole deformation data are experimentally available.
This then leads to an improved relation, obtained by correcting for the contribution of the quadrupole deformation of atomic nuclei.
Moreover, this new relation can be naturally extended to the reduced electric quadrupole transition probability $B(E2)$ between the first $2^+$ state and the $0^+$ ground state,
and the mean lifetime $\tau$ of the first 2$^+$ state.
\section{Shape effect on charge radii}\label{sec1}
For spherical nuclei, the rms charge radius can be empirically described by
\begin{equation}\label{eq:Alaw}
R(Z,N)=\sqrt{3/5} r_0 A^{1/3} \;,
\end{equation}
where $A$ is the mass number and $r_0$ is fixed to 1.2 fm throughout this paper.
Thus $\delta R(Z,N)$ for the even-even isotopes is
\begin{equation}
\delta R(Z,N) \equiv \delta R(Z,N)_{\mbox{sph}} = \sqrt{\frac{3}{5}}r_0\delta(A^{1/3}) \;.
\label{eq4}
\end{equation}
Numerically it is easy to see that $\delta R(Z,N)_{\mbox{sph}}$ goes down to a few times $10^{-4}$ fm with increasing mass number.
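This can be verified with a short illustrative script (our own check of Eq.~(\ref{eq4}), not part of the original analysis):
\begin{verbatim}
# delta R_sph of Eq. (4) for even-even nuclei: A, A-2, A-2, A-4.
r0 = 1.2  # fm
R_sph = lambda A: (3 / 5) ** 0.5 * r0 * A ** (1 / 3)
for A in (40, 100, 200):
    print(A, R_sph(A) + R_sph(A - 4) - 2 * R_sph(A - 2))
# prints roughly -1.9e-03, -4.0e-04, -1.2e-04 fm: the magnitude
# drops to a few times 1e-4 fm as A grows.
\end{verbatim}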
For a deformed nucleus, the charge radius can be decomposed into a spherical part and a deformation part.
For the important case of an axially symmetric shape,
and neglecting higher-order corrections, one can express this approximately as:
\begin{equation}\label{eq.5}
R(Z,N)=R_0(Z,N)\left[1+\frac{5}{8\pi}\beta_2^2(Z,N)\right] \;
\end{equation}
where $\beta_2(Z,N)$ is the rms quadrupole deformation for the nuclide ($Z,N$). As will be described later, it is derived experimentally from the reduced electric quadrupole transition probability $B(E2)$.
$R_0(Z,N)$ corresponds to the charge radius of the volume-conserving spherical nucleus and is defined by Eq.~(\ref{eq:Alaw}).
Eq.~(\ref{eq.5}) may be generalized to include higher order multipoles or triaxial shape~\cite{Bohr1969book,Greiner1996book}.
Accordingly, the experimental four-point relation $\delta R(Z,N)$ can be expressed in terms of two variables representing the spherical equivalent radius and the deformation.
This corresponds to a two-parameter description of $\delta R(Z,N)$,
\begin{eqnarray}
\delta R(Z,N) & = & \delta R(Z,N)_{\mbox{sph}} + \delta R(Z,N)_{\mbox{def}} \nonumber \\
& = & \delta R(Z,N)_{\mbox{sph}} + \frac{5}{8\pi}\delta(R_0\beta_2^2) \;,
\label{eq6}
\end{eqnarray}
where $ \delta R(Z,N)_{\mbox{sph}}$ is defined in Eq.~(\ref{eq4}), and $\delta R(Z,N)_{\mbox{def}}$ comes from the variance of deformation in the relevant nuclei,
\begin{eqnarray}
\delta R(Z,N)_{\mbox{def}} & \equiv& \frac{5}{8\pi}\delta(R_0\beta_2^2) \nonumber \\
&=& \sqrt{\frac{3}{5}}\frac{5}{8\pi}r_0[A^{1/3}\beta_2^2(Z,N) \nonumber \\
&& + (A-4)^{1/3}\beta_2^2(Z-2,N-2) \nonumber \\
&& -(A-2)^{1/3}\beta_2^2(Z-2,N) \nonumber \\
&& - (A-2)^{1/3}\beta_2^2(Z,N-2)] \nonumber \\
&\approx& \sqrt{\frac{3}{5}} \frac{5}{8\pi}r_0A^{1/3}\delta\beta_2^2 \;.
\label{eq7}
\end{eqnarray}
The approximation in the last line is especially accurate for heavier systems.
Because of the negligible contribution of $\delta R(Z,N)_{\mbox{sph}}$, the resulting $\delta R(Z,N)$ is mostly determined by the terms relevant to nuclear deformation.
Although dynamic deformations and higher-order multipoles are not included in this equation,
they can be subsumed in principle under the deformation term in which $\delta\beta_2^2$ is replaced by $\sum_i\delta\beta_i^2$.
\section{Correlation between charge radius and quadrupole deformation}
\subsection{Experimental data}
\begin{figure}[htbp]\noindent
\centering
\includegraphics[width=0.45\textwidth]{fig1-2016.eps}
\caption{(Color online) $\delta R(Z,N)$ as a function of $\delta R(Z,N)_{\mbox{def}}$ for all experimentally known cases. The linear fit is indicated by the dashed line.}\label{fig.1}
\end{figure}
We can now examine the correlation between the experimental $\delta R(Z,N)$ and $\delta R(Z,N)_{\mbox{def}}$. The resulting correlation plot is shown in Fig.~\ref{fig.1}.
The experimental charge radius and deformation data are from the latest evaluations~\cite{Angeli2013,beta2016}. There are in total 149 even-even nuclei from Ne to Cm.
It is seen that almost all the data follow a linear trend within 1 standard deviation ($\sigma$). A slope of 0.29(6) is determined, with a reduced $\chi^2$ of 0.8.
This indicates that the experimental data on charge radii and quadrupole deformation are well consistent with each other.
Specifically, this correlation can be seen for the Sr isotopes in Fig.~\ref{fig.2}.
The sudden onset of the shape transition at $N=60$ is reflected distinctly by both $\delta R(Z,N)_{\mbox{def}}$ and $\delta R(Z,N)$.
The deformation parameters of the four relevant nuclei, $^{98}_{38}$Sr, $^{96}_{38}$Sr, $^{96}_{36}$Kr and $^{94}_{36}$Kr, are 0.40(1), 0.175(6), 0.25(3) and 0.19(1), respectively, resulting in a
$\delta R(Z,N)_{\mbox{def}}$ of 0.088(14) fm for $^{98}$Sr.
Therefore, once the deformation correction is taken into account, the large $\delta R(Z,N)$ value at $N=60$, a well-known region of shape phase transitions, is largely diminished.
A similar correlation has been observed at $N\sim 90$ for the Nd isotopes. Unfortunately, it is not yet possible to test the region at $Z\sim80$, where the deformation data are still missing experimentally.
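As a cross-check, the quoted value for $^{98}$Sr can be reproduced directly from Eq.~(\ref{eq7}) with the central $\beta_2$ values above (a short illustrative script of ours):
\begin{verbatim}
from math import pi

r0 = 1.2                                    # fm
pref = (3 / 5) ** 0.5 * 5 / (8 * pi) * r0
beta2 = {(38, 60): 0.40, (38, 58): 0.175,   # 98Sr, 96Sr
         (36, 60): 0.25, (36, 58): 0.19}    # 96Kr, 94Kr
term = lambda Z, N: (Z + N) ** (1 / 3) * beta2[(Z, N)] ** 2
dR_def = pref * (term(38, 60) + term(36, 58)
                 - term(36, 60) - term(38, 58))
print(dR_def)   # ~0.088 fm, the central value quoted above
\end{verbatim}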
\begin{figure}[htbp]\noindent
\centering
\includegraphics[width=0.45\textwidth]{Z=38.eps}
\caption{(Color online) Experimentally known $\delta R(Z,N)$ (filled circles) and $\delta R(Z,N)_{\mbox{def}}$ (open squares) for the Sr isotopic chain.
}\label{fig.2}
\end{figure}
Five cases, $^{20}_{10}$Ne, $^{46}_{20}$Ca, $^{46}_{22}$Ti, $^{76}_{34}$Se, and $^{78}_{36}$Kr, show deviations from the linear trend of more than 2$\sigma$.
Of them, $^{20}_{10}$Ne is the lightest nuclide with available charge radius and deformation data. It is already known
that the precision of the charge radius formula deteriorates with decreasing mass number (in particular for $A<60$)~\cite{Sun2014PhysRevC.90.054318}.
This may be understood from the fact that the collective ``deformation'' picture is better suited to heavy nuclei than to lighter ones.
A further check of the mass dependence confirms this argument: 8 of the 10 cases with $A<60$ deviate from the linear trend by more than 1$\sigma$.
It should be noted that 1$\sigma$ here is as small as 0.0040 fm.
These few cases may be (partially) related to the various
sources of the charge radius and deformation data, given the different systematic errors associated with different measurement techniques.
We note that a recent analysis~\cite{BE2-Birch2016145} of $B(E2)$ measurements, from which $\beta_2$ is derived, concluded that the most prevalent methods of measuring $B(E2)$
values are equivalent.
Such a comparison is not yet available for charge radius data across the entire chart of nuclides.
In any case, a consistent and equivalent set of nuclear charge radii and $B(E2)$ data is clearly crucial.
A combined analysis of the cases for $^{46}_{20}$Ca and $^{46}_{22}$Ti shows that
increasing the charge radius by 0.3\% or decreasing the $\beta_2$ by about 40\% (unlikely) for $^{44}_{20}$Ca
would bring their $\delta R(Z,N)$ values into better agreement with the linear trend.
Similar arguments also hold for the cases of $^{76}_{34}$Se and $^{78}_{36}$Kr.
Therefore, the correlation identified here may provide a very accurate way
to investigate the consistency of the charge radius and deformation surfaces.
In Ref.~\cite{Sun2014PhysRevC.90.054318}, it was found that Eq.~(\ref{eq1}) is remarkably successful even at nuclei with magic neutron and/or proton numbers.
This can be easily understood from the correlation identified here. Nuclei with magic numbers of neutrons and/or protons are mostly spherical, i.e.,
their $\beta_2$ values are less than 0.1, which naturally leads to a vanishing $\delta R(Z,N)$.
This is very different from the counterpart in nuclear masses, the valence proton-neutron interaction $\delta V_{pn}$~\cite{Brenner2006PhysRevC.73.034315,Cakirli2005PhysRevLett.94.092501,Chen2009Phys.Rev.Lett.122503}, which depends strongly on the spatial overlap of the valence orbits and varies dramatically when crossing neutron shell closures.
\subsection{Correlations in nuclear models}
Experimental knowledge of both charge radii and deformations is still very limited.
It would therefore be very useful if nuclear models could provide these data, either as absolute values or as differences, with reasonable precision. For example,
when an experimental deformation parameter is missing, one can resort to theoretical predictions from nuclear models.
Care should be taken that quantities like $B(E2)$ refer to the charge (proton) distribution in the nucleus and that, in particular, $\beta$ is
the charge deformation related to this distribution. This should be kept in mind when comparing experimental results to nuclear models.
\begin{figure}[htbp!]\noindent
\centering
\includegraphics[width=0.45\textwidth]{fig1-RMFbeta2.eps}
\includegraphics[width=0.45\textwidth]{fig1-RMFbeta.eps}
\caption{(Color online) Same as Fig.~\ref{fig.1} but using the predicted $\delta R(Z,N)$ and $\delta R(Z,N)_{\mbox{def}}$ of the RMF model (a), and
using the experimental $\delta R(Z,N)$ together with the predicted $\delta R(Z,N)_{\mbox{def}}$ of the RMF model (b). Error bars are not shown for the experimental
data.}\label{fig.3}
\end{figure}
Global nuclear models can provide both charge radius and deformation data self-consistently. We choose the Hartree-Fock-Bogoliubov (HFB-24)
model~\cite{Goriely2013PhysRevC.88.024308} and the relativistic mean-field (RMF) model~\cite{Geng2005PTP} to check the same correlation. As shown in Fig.~\ref{fig.3}(a),
a linear correlation between $\delta R(Z,N)$ and $\delta R(Z,N)_{\mbox{def}}$ is also predicted by the RMF.
The slope is determined to be 0.60, about a factor of two larger than that of the experimental data.
The difference from the experimental trend should be related to the fact that all nuclei are treated as axially symmetric in the RMF approach.
The same correlation is found in HFB-24, but with a different coefficient (0.85).
However, such a correlation vanishes once the theoretical $\beta_2$ values are used instead of the experimental ones.
As an example, $\delta R(Z,N)$ vs. $\delta R(Z,N)_{\mbox{def}}$ calculated using the $\beta_2$ of the RMF is shown in Fig.~\ref{fig.3}(b).
This indicates that current nuclear models are not yet accurate enough to reproduce the correlation seen in experiment.
\section{Discussion}
\subsection{Improved charge radius formula}
As verified in the previous section, $\delta R(Z,N)$ can be quantitatively reproduced by $\delta R(Z,N)_{\mbox{def}}$ for the existing data.
This leads to the following improved charge radius formula:
\begin{equation}\label{eq8}
\delta R(Z,N)^{\mbox{corr}}=\delta R(Z,N)-\mbox{C}\cdot \delta R(Z,N)_{\mbox{def}} \approx 0 \;,
\end{equation}
where $\mbox{C}=0.29(6)$ is determined from Fig.~\ref{fig.1}. $\delta R(Z,N)$ and $\delta R(Z,N)_{\mbox{def}}$ are given in Eq.~(\ref{eq2}) and Eq.~(\ref{eq7}), respectively.
The cases with experimental data are used to check the accuracy of the relation with and without the deformation correction.
The weighted mean value of $\delta R(Z,N)^{\mbox{corr}}$ amounts to only $-8\times 10^{-4}$ fm, with a weighted standard deviation of $5\times 10^{-3}$ fm.
This is about a 15\% improvement in precision compared with the uncorrected relation (i.e., $\delta R(Z,N)$).
The significance is that Eq.~(\ref{eq8}) can be applied even to cases where sudden changes occur in nuclear shapes.
\subsection{Correlation of charge radius with $B(E2)$ and $\tau$}
Experimental quadrupole deformation values are derived from the model-independent
experimental values of the reduced electric quadrupole transition probability $B(E2)$, between
the $0^+$ ground state and the first $2^+$ state in even-even nuclides, using the semi-empirical approach,
\begin{equation}
\beta_2 = \frac{4\pi}{3ZR_0^2}[\frac{B(E2)}{\mbox{e}^2}]^{1/2} \;.
\label{eq:be2}
\end{equation}
Here $R_0=r_0A^{1/3}=1.2A^{1/3}$ fm and $B(E2)$ is in units of e$^2$b$^2$.
This assumes a uniform charge distribution out to the distance $R_0$ and zero charge beyond~\cite{Greiner1996book,RAMAN2001ADNDT78}.
The $B(E2)$ values are fundamentally important quantities for determining the collectivity in nuclei.
Moreover, $B(E2)$ are related to the mean lifetime $\tau$ of the first $2^+$ state through
\begin{equation}
\tau(1+\alpha) = 40.81\times 10^{13}E^{-5}[\frac{B(E2)}{\mbox{e}^2\mbox{b}^2}]^{-1} \;,
\end{equation}
where $E$ is the excitation energy of the first $2^+$ state (in units of keV) and $\tau$ is in ps.
The total internal conversion coefficient $\alpha$ at the given $E$ is needed for the correction.
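Both conversions are easy to script; the following minimal sketch (central values only, with the fm-to-barn conversion made explicit) implements Eq.~(\ref{eq:be2}) and the lifetime relation above:
\begin{verbatim}
from math import pi, sqrt

def beta2_from_BE2(Z, A, BE2):             # BE2 in e^2 b^2
    R0sq_b = (1.2 * A ** (1 / 3)) ** 2 / 100.0   # fm^2 -> b
    return 4 * pi / (3 * Z * R0sq_b) * sqrt(BE2)

def tau_from_BE2(E_keV, BE2, alpha=0.0):   # returns tau in ps
    return 40.81e13 * E_keV ** -5 / BE2 / (1 + alpha)

# e.g. beta2_from_BE2(62, 154, 4.36) ~ 0.34 (literature values for
# 154Sm, quoted here only for illustration).
\end{verbatim}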
$\delta R(Z,N)_{\mbox{def}}$ can be accordingly rewritten in terms of $B(E2)$ and $\tau$,
\begin{eqnarray}
\delta R(Z,N)_{\mbox{def}} &=& 4.35\times 10^3 \delta(\frac{B(E2)/ \mbox{e}^2\mbox{b}^2}{Z^2A})\mbox{b}^{1/2} \nonumber \\
&=& 1.77\times 10^{21}\delta\left(\frac{E^{-5}Z^{-2}A^{-1}}{\tau(1+\alpha)}\right) {\mbox{b}^{1/2}} \;,
\label{eq9}
\end{eqnarray}
where all the quantities $E$, $B(E2)$, $Z$, $A$ and $\alpha$ in the $\delta$ term are for the four neighboring even-even nuclei.
In the absence of an abrupt shape transition, $\delta R(Z,N)_{\mbox{def}} \simeq 0$, and the following relation, again involving four neighboring doubly even nuclei, should hold well,
\begin{eqnarray}
\delta(A^{1/3}\beta_2^2 ) \simeq \delta(\frac{B(E2)}{Z^2A}) \simeq \delta\left(\frac{E^{-5}Z^{-2}A^{-1}}{\tau(1+\alpha)}\right) \simeq 0 \;.
\end{eqnarray}
For heavy nuclear systems, where the differences in ${Z^2A}$ can be safely neglected, we then obtain the relation
\begin{eqnarray}
\delta B &= &B(E2)(Z,N)-B(E2)(Z,N-2) \nonumber \\
& &-B(E2)(Z-2,N) +B(E2)(Z-2,N-2) \nonumber \\
&\simeq & 0 \;.
\label{eq11}
\end{eqnarray}
The same relation was proposed independently in Ref.~\cite{BE2relation-PhysRevC.12.2038}.
To examine the validity of Eq.~(\ref{eq11}), we use the same data set as in Fig.~\ref{fig.1}.
The ``theoretical'' $B(E2)$ value of a given nucleus $(Z,N)$, $B(E2)_{\mbox{pred}}$, is calculated in
terms of experimental $B(E2)$ of its three neighboring nuclei $(Z-2,N)$, $(Z,N-2)$, $(Z-2,N-2)$. Fig.~\ref{fig.4} shows the
relative differences of the predictions defined as $[B(E2)_{\mbox{pred}}-B(E2)_{\mbox{exp}}]/B(E2)_{\mbox{exp}}$.
It is seen that the $B(E2)$ values can be predicted within an accuracy of $\pm$25\%, and often better.
The largest deviations appear at $N\sim 60$ and 90. It should be noted that the precision of the experimental data also deteriorates in these regions.
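For concreteness, the prediction underlying Fig.~\ref{fig.4} can be sketched as follows (illustrative code; \texttt{be2} is a hypothetical dictionary of experimental values):
\begin{verbatim}
# Four-point estimate from delta B ~ 0: predict B(E2) of (Z, N) from
# its three even-even neighbors.  be2 maps (Z, N) -> B(E2)_exp.
def be2_pred(be2, Z, N):
    return be2[(Z, N - 2)] + be2[(Z - 2, N)] - be2[(Z - 2, N - 2)]

def rel_diff(be2, Z, N):       # the quantity plotted in Fig. 4
    return (be2_pred(be2, Z, N) - be2[(Z, N)]) / be2[(Z, N)]
\end{verbatim}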
\begin{figure}[htbp]\noindent
\centering
\includegraphics[width=0.45\textwidth]{fig-BE2.eps}
\caption{(Color online) Relative differences in $B(E2)$ between the predictions of Eq.~(\ref{eq11}) and experimental data. The $\pm$25\% accuracy is indicated by two dotted lines.}\label{fig.4}
\end{figure}
Inserting Eq.~(\ref{eq9}) into Eq.~(\ref{eq8}), we obtain a correlation between the four-point charge radius relation and that of $B(E2)$ or $\tau$.
This new relation should, in principle, be more accurate for predicting unknown $B(E2)$ values than Eq.~(\ref{eq11}), because
possible shape-transition effects can be at least partially compensated by the relevant charge radius data.
When seven of the eight quantities, e.g., four charge radii and three $B(E2)$ values, are known, the remaining $B(E2)$ can be calculated.
Unfortunately, the resulting uncertainty is mostly too large to make meaningful predictions.
The uncertainty is mainly propagated from the charge radius data and is typically about one order of magnitude larger than that from Eq.~(\ref{eq11}).
\section{Conclusion}
With the available experimental data, a linear correlation has been found between the charge radius relation $\delta R(Z,N)$ and
the corresponding quadrupole deformation (and thus $B(E2)$) relation $\delta R(Z,N)_{\mbox{def}}$. This correlation provides a consistency check for experimental data on charge radii and deformations.
In the near future it will also be interesting to see whether the linear coefficient 0.29(6) persists for more exotic nuclei, especially in shape-transitional regions.
The large deviations of the four-point charge radius relation $\delta R(Z,N)$ in shape-transitional regions can be quantitatively reproduced
by $\delta R(Z,N)_{\mbox{def}}$ when experimental data are available.
This in turn yields an improved charge radius formula, which is very useful for making reliable short-range extrapolations of
charge radii over the nuclear chart. The same correlation has been found in global nuclear models, but so far the models themselves are not accurate enough to reproduce the experimental data. Moreover, the relation can be generalized to a new relation between the charge radius and $B(E2)$ or $\tau$. A simple four-point $B(E2)$ relation can reproduce experimental $B(E2)$ values within an accuracy of about $\pm$25\%.
Finally, we would like to mention that a consistent description~\cite{Wood1999Nucl.Phys.A323,E0-PhysRevLett.101.022502,E0-PhysRevC.85.034331,PhysRevC.79.054301,PhysRevC.80.061301,PhysRevC.85.034321,Li2013866,Zhao2014Phys.Rev.C11301}
of the radii and transition probabilities of atomic nuclei is important for understanding their correlation and thus for a better
interpretation of experimental results. A recent example is $^{111-129}$Cd~\cite{Yordanov2016PhysRevLett.116.032501},
in which the parabolic behavior of the charge radii was traced back to the linear trend of the quadrupole deformation.
\section{Acknowledgments}
This work has been supported by the NSFC
under No. 11235002, 11475014 and National Program on Key Basic
Research Project (2016YFA0400502). The authors thank Z. P. Li, Z. M. Niu, P.W. Zhao, and L. H. Zhu for useful comments.
\input{v1.bbl}
\end{document}
\subsection{Quark masses}
\label{sec:qm}
\newcommand{\ensuremath{f_{\pi, \rm PDG}}}{\ensuremath{f_{\pi, \rm PDG}}}
\newcommand{\text{MeV}}{\text{MeV}}
In the Standard Model (and many extensions), quark masses and the CKM matrix all stem from Higgs-Yukawa couplings between the quark
fields and the Higgs doublet.
It is therefore natural to consider the bottom-quark mass, $m_b$, in this report.
As discussed in Sec.~\ref{h2h_incl}, $m_b$ can be extracted from the inclusive semileptonic $B$ decay distributions, along
with~$|V_{cb}|$.
In the theory of inclusive decays, the charm-quark mass, $m_c$, is also needed to control an infrared sensitivity; see
Sec.~\ref{h2h_incl}.
Figure~\ref{fig:qm} compares results from lattice QCD with realistic sea content of $n_f=2+1+1$ or $2+1$ sea quarks with the
FLAG~2019~\cite{Aoki:2019cca} average for the $2+1+1$ sea.
\begin{figure}[b]
\centering
\includegraphics[width=0.48\textwidth]{figures/mb} \hfill
\includegraphics[width=0.48\textwidth]{figures/mc}
\caption{Comparison of results for the bottom-quark mass $\bar{m}_b=m_{b,\ensuremath{\overline{\rm MS}}}(m_{b,\ensuremath{\overline{\rm MS}}})$ (left) and the charm-quark mass
$\bar{m}_c=m_{c,\ensuremath{\overline{\rm MS}}}(m_{c,\ensuremath{\overline{\rm MS}}})$ (right).
Squares denote lattice-QCD calculations with $2+1+1$ flavors of sea quark~\cite{Lytle:2018evc,Bazavov:2018omf,Gambino:2017vkx,%
Bussone:2016iua,Colquhoun:2014ica,Chakraborty:2014aca,Alexandrou:2014sha,Carrasco:2014cwa};
triangles denote lattice-QCD calculations with $2+1$ flavors of sea quark~\cite{Petreczky:2019ozv,Nakayama:2016atf,%
Yang:2014sea,Lee:2013mla,McNeile:2010ji};
circles denote results extracted from $e^+e^-$ collisions near $Q\bar{Q}$ threshold~\cite{Mateu:2017hlz,Chetyrkin:2017lif,%
Ayala:2016sdn,Beneke:2016oox,Kiyo:2015ufa,Dehnadi:2015fra,Penin:2014zaa,Narison:2011xe,Bodenstein:2011fv,Bodenstein:2011ma,%
Chetyrkin:2009fv,Boughezal:2006px,Brambilla:2001qk}.
The vertical band shows the FLAG~2019 average for $2+1+1$ sea flavors.
Note that $2+1$-flavor calculations are in rough (good) agreement for bottom (charm).}
\label{fig:qm}
\end{figure}
The average for $\bar{m}_b$ is dominated by the very precise result from the Fermilab Lattice, MILC, and TUMQCD Collaborations,
while that for $\bar{m}_c$ is dominated by the corresponding Fermilab/MILC/TUMQCD result together with two separate results from the
HPQCD Collaboration.
The FLAG~2019~\cite{Aoki:2019cca} averages (for $2+1+1$ sea flavors) are
\begin{eqnarray}
\bar{m}_b = m_{b,\ensuremath{\overline{\rm MS}}}(m_{b,\ensuremath{\overline{\rm MS}}}) &=& 4.198(12)~\text{GeV} , \\
\bar{m}_c = m_{c,\ensuremath{\overline{\rm MS}}}(m_{c,\ensuremath{\overline{\rm MS}}}) &=& 1.280(13)~\text{GeV} ,
\end{eqnarray}
based on Refs.~\cite{Bazavov:2018omf,Gambino:2017vkx,Bussone:2016iua,Colquhoun:2014ica,Chakraborty:2014aca}
and~\cite{Lytle:2018evc,Bazavov:2018omf,Chakraborty:2014aca,Alexandrou:2014sha,Carrasco:2014cwa}, respectively.
Another recent review~\cite{Komijani:2020kst} finds averages with somewhat smaller uncertainties
\begin{eqnarray}
\bar{m}_b = m_{b,\ensuremath{\overline{\rm MS}}}(m_{b,\ensuremath{\overline{\rm MS}}}) &=& 4.188(10)~\text{GeV} , \\
\bar{m}_c = m_{c,\ensuremath{\overline{\rm MS}}}(m_{c,\ensuremath{\overline{\rm MS}}}) &=& 1.2735(35)~\text{GeV} ,
\end{eqnarray}
based on the same original sources.
In the case of $\bar{m}_c$, two results~\cite{Alexandrou:2014sha,Carrasco:2014cwa} agree poorly with the others, increasing
$\chi^2/\text{dof}$ of the average by a factor of around~5.
FLAG~2019~\cite{Aoki:2019cca} stretches the error bar by $\sqrt{\chi^2/\text{dof}}$, while Ref.~\cite{Komijani:2020kst} discards
them: the resulting error bar is smaller because of the compatibility of the inputs as well as the lack of stretching.
Other differences in averaging methodology are quantitatively unimportant.
In either case, the quoted averages are much more precise than those in the~PDG.
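For orientation, the scale-factor prescription mentioned above can be sketched in a few lines (our illustration; actual FLAG averages also account for correlations among the inputs, which are not modeled here):
\begin{verbatim}
import numpy as np

def stretched_average(x, sigma):
    # Weighted mean; the error is inflated by sqrt(chi2/dof) when
    # chi2/dof > 1, as in the stretching prescription described above.
    x, w = np.asarray(x), 1.0 / np.asarray(sigma) ** 2
    mean = np.sum(w * x) / np.sum(w)
    err = np.sum(w) ** -0.5
    chi2_dof = np.sum(w * (x - mean) ** 2) / (len(x) - 1)
    return mean, err * max(1.0, chi2_dof) ** 0.5, chi2_dof
\end{verbatim}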
A last remark is that the most precise results~\cite{Lytle:2018evc,Bazavov:2018omf,Chakraborty:2014aca} all use the very high
statistics MILC HISQ ensembles with staggered fermions for the sea quarks~\cite{Bazavov:2012xda,Bazavov:2017lyh}.
In the future, other groups~\cite{Baron:2010bv,Baron:2010th,Aoki:2010dy,Boyle:2017jwu} will have to collect similar statistics to
enable a complete cross check.
Three distinct methods are used in the results shown in Fig.~\ref{fig:qm}:
1)~converting the bare lattice mass to the \ensuremath{\overline{\rm MS}}\ scheme,
2)~fitting to a formula for the heavy-light hadron mass in the heavy-quark expansion~\cite{Kronfeld:2000ck,Kronfeld:2000gk}, and
3)~computing moments of quarkonium correlation functions~\cite{Bochkarev:1995ai,Allison:2008xk}.%
\footnote{Lattice methods with no results in Fig.~\ref{fig:qm} are not discussed here.} The first two require an intermediate
renormalization scheme that can be defined for any ultraviolet regulator: quark masses defined this way can be computed with lattice
gauge theory or dimensional regularization.
For example, HPQCD~13 ($\Upsilon$ decays)~\cite{Lee:2013mla} uses two-loop lattice perturbation theory to convert the bare NRQCD
mass to the pole mass~\cite{Hart:2004bd,Hart:2009nr}, and dimensional regularization to convert the pole mass into the \ensuremath{\overline{\rm MS}}\ mass.
Instead of the pole mass, one can use a regularization-independent momentum-subtracted mass~\cite{Martinelli:1994ty}.
Like the \ensuremath{\overline{\rm MS}}\ scheme these RI-MOM schemes are mass-independ\-ent renormalization schemes, but they depend on the gauge.
In lattice gauge theory, Landau gauge is easily obtained on each gauge-field configuration via a minimization
procedure~\cite{Davies:1987vs}.
The mass renormalization factor, $Z_m$, can be computed from the three point function for the scalar or pseudoscalar density,
because $Z_m^{-1}=Z_S=Z_P$ (up to technical details for Wilson fermions).
For example, the matrix element $\langle p'|P|p\rangle$, between gauge-fixed quark states, can be used to define $Z_P$ using the
same formulas for lattice gauge theory as for continuum gauge theory (with dimensional regularization)~\cite{Martinelli:1994ty}.
The schemes labeled RI-MOM and RI$'$-MOM use $p'=p$ and slightly different definitions of the quark-field normalization~$Z_2$;
for a review see Ref.~\cite{Aoki:2010yq}.
The momentum transfer $q\equiv p'-p$ vanishes here, i.e., it is ``exceptional'' in the sense of Weinberg's theorem~\cite{Weinberg:1959nj}.
On the other hand, the RI-sMOM scheme~\cite{Sturm:2009kb} chooses $p'$ and $p$ such that $p^2=q^2=p^{\prime2}\equiv\mu^2$.
Without the exceptional momentum, the extraction of $Z_P$ is more robust.
It would be interesting to see whether RI-sMOM on the ETM 2+1+1 ensembles yields $\bar{m}_b$ favoring the RI$'$-MOM results or the
RI-sMOM results on MILC's ensembles.
The HQE method starts with the HQE formula for a heavy-light hadron mass~\cite{Falk:1992wt,Bigi:1994ga},
\begin{equation}
M = m + \bar{\Lambda} + \frac{\mu_\pi^2}{2m} - d_J\frac{\mu_G^2(m)}{2m} + \cdots,
\label{eq:hqe}
\end{equation}
where $M$ is the hadron mass, which is computed in lattice QCD as a function of the quark mass, $m$, and $d_J$ depends on the spin
of the hadron.
The quantities can be identified with the energy of gluons and light quarks, $\bar{\Lambda}$, the Fermi motion of the heavy quark,
$\mu_\pi^2$, and the hyperfine splitting, $\mu_G^2$.
($\mu_G^2$ depends logarithmically on~$m$.) Although this idea is not new~\cite{Kronfeld:2000ck,Kronfeld:2000gk}, to be precise one
has to confront the definition of~$m$.
Although the pole mass is natural in the context of the HQE, it is not suitable in practice, because of its infrared sensitivity.
The $\ensuremath{\overline{\rm MS}}$ mass, on the other hand, breaks the power counting: $m_\text{pole}-m_{\ensuremath{\overline{\rm MS}}}\propto \alpha_sm_\text{pole}$.
Instead, one chooses mass definitions that, in some sense, lie in between these two choices.
Gambino \emph{et al.}~\cite{Gambino:2017vkx} choose the kinetic mass~\cite{Bigi:1996si}, while
Fermilab/MILC/TUMQCD~\cite{Bazavov:2018omf} choose the minimal renormalon subtracted (MRS) mass~\cite{Brambilla:2017hcq}.
After extracting $m_\text{kin}$ or $m_\text{MRS}$ from fitting Eq.~(\ref{eq:hqe}), the result can be converted to the \ensuremath{\overline{\rm MS}}\ scheme
with three- and four-loop perturbation theory, respectively.
In addition to the different matching, the error bar from Fermilab/MILC/TUMQCD is so small because it is based on the largest data
set of all calculations in Fig.~\ref{fig:qm}.
See Sec.~\ref{sec:HQE-LQCD} for further discussion and results for $\bar\Lambda$, $\mu_\pi^2$, $\mu_G^2$, and higher-dimension
corrections to the HQE.
One can avoid an intermediate scheme by computing a short-distance quantity in lattice QCD, taking the continuum limit, and
analyzing the result with \ensuremath{\overline{\rm MS}}\ perturbation theory.
For example, one can compute moments of quarkonium correlation functions~\cite{Bochkarev:1995ai,Allison:2008xk},
\begin{eqnarray}
G_\Gamma^{(n)} &=& \sum_t t^n G_\Gamma(t), \\
G_\Gamma(t) &=& c_\Gamma \sum_{\bm{x}} \langle \bar{Q}\Gamma Q(\bm{x},t)\,\bar{Q}\Gamma Q(\bm{0},0) \rangle,
\end{eqnarray}
for some Dirac matrix $\Gamma$.
In lattice gauge theory, the pseudoscalar density needs no renormalization if $\Gamma=\gamma^5$ and $c_{\gamma^5}=m_Q^2$.
The moments $G_\Gamma^{(n)}$ are physical observables with a good continuum limit, which is proportional to $m_Q$ to the appropriate
power, multiplied by a dimensionless function of $\alpha_s(m_Q)$.
Thus, these moments also yield determinations of the strong coupling as well as quark masses.
In Fig.~\ref{fig:qm}, results obtained in this way are labeled ``moments''.
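Numerically, the time moments are trivial to form once the correlator is in hand; a minimal sketch is the following (low even moments, $n\ge4$, are the ones typically analyzed, since the lowest moments are dominated by the ultraviolet):
\begin{verbatim}
import numpy as np

def time_moments(G, ns=(4, 6, 8, 10)):
    # G[t]: zero-momentum correlator, t = 0 .. T-1, from a lattice run
    # (here just a placeholder array).
    t = np.arange(len(G))
    return {n: np.sum(t ** n * G) for n in ns}
\end{verbatim}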
The same moments $G_\Gamma^{(n)}$ can be obtained from the cross section for $e^+e^-$ annihilation into ${Q\bar{Q}}$ hadrons via a
suitably subtracted dispersion relation.
In this case, $\Gamma=\gamma^\mu$ for the electromagnetic current, and $c_{\gamma^\mu}=1$ because the electromagnetic current is
conserved.
Thus, the same perturbative calculations (only changing $\Gamma$) can be used to extract the bottom- and charm-quark masses and
$\alpha_s$ from experimental measurements.
The dispersion relation, related sum rules, and the perturbative series for the moments are the basis of the result labeled
$e^+e^-\to{}b\bar{b}$ and $e^+e^-\to{}c\bar{c}$ in Fig.~\ref{fig:qm}.
The order $\alpha_s^p$, $p=1$, $2$, $3$, became available in 1993~\cite{Broadhurst:1993mw}, 1997~\cite{Chetyrkin:1997mb}, and
2006~\cite{Chetyrkin:2006xg,Boughezal:2006px}, respectively.
\subsection{Leptonic decays}
\label{sec:dc}
Instead of semileptonic decays, CKM matrix elements can also be determined from purely leptonic decays.
For example, a goal of Belle~II is to improve the determination of $V_{ub}$ from $B^+\to\tau^+\nu$,
as well as $V_{cd}$ from $D^+\to\ell^+\nu$ and $V_{cs}$ from $D_s^+\to\ell^+\nu$, and a goal of LHCb is to observe $B_c\to\tau\nu$.
The rates for leptonic decays suffer a helicity suppression, making tauonic and muonic decays preferred experimentally.
Leptonic decays are mediated by the axial-vector part of the electroweak current, as well as possible pseudoscalar currents, so
they complement semileptonic decays in this way.
The hadronic quantity describing the decay is known as the \emph{decay constant}, defined by
\begin{equation}
\langle0|\bar{b}\gamma^\mu\gamma^5u|B^+(p)\rangle = ip^\mu f_{B^+},
\label{eq:AfB}
\end{equation}
where $p^\mu$ is the four-momentum of the $B$ meson and $f_{B^+}$ is the decay constant.
For other mesons, the axial currents and notation change in obvious ways.
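In terms of the decay constant, the well-known tree-level SM rate (neglecting QED corrections) is
\begin{equation}
\Gamma(B^+\to\ell^+\nu_\ell) = \frac{G_F^2}{8\pi}\,|V_{ub}|^2\, f_{B^+}^2\, m_\ell^2\, M_{B^+} \left(1-\frac{m_\ell^2}{M_{B^+}^2}\right)^2 ,
\end{equation}
which makes the helicity suppression by $m_\ell^2$, noted above, explicit.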
From the partial conservation of the flavor-nonsinglet axial current, the pseudoscalar density can also be used to compute the
decay constant:
\begin{equation}
(m_b+m_u)\langle0|\bar{b}\gamma^5u|B^+(p)\rangle = M_{B^+}^2 f_{B^+},
\label{eq:mPfB}
\end{equation}
where $m_b$ and $m_u$ are bare quark masses.
Equations~(\ref{eq:AfB}) and (\ref{eq:mPfB}) are the basis of lattice-QCD calculations.
In general, the axial current used is not a Noether current, so it is not absolutely normalized.
Fermion formulations with good chiral symmetry (staggered, overlap, domain wall) provide an absolutely normalized pseudoscalar
density.
Until recently, however, lattice spacings have not been small enough to use these approaches for the $b$~quark.
Methods developed especially for heavy quarks have therefore been used, and they do not provide any absolutely normalized
$\bar{b}\Gamma u$ bilinears.
Figure~\ref{fig:dc} compares results from lattice QCD with realistic sea content of $n_f=2+1+1$ or $2+1$ sea quarks with the
FLAG~2019~\cite{Aoki:2019cca} average for the $2+1+1$ sea.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/dc}
\caption{Comparison of results for the $B_{(s)}$-meson (top) and the $D_{(s)}$-meson (bottom) decay constants.
Squares denote lattice-QCD calculations with $2+1+1$ flavors of sea quark~\cite{Bazavov:2017lyh,Hughes:2017spc,Bussone:2016iua,%
Carrasco:2014poa,Dowdall:2013tga};
triangles denote lattice-QCD calculations with $2+1$ flavors of sea quark~\cite{Boyle:2017jwu,Christ:2014uea,Yang:2014sea,%
Na:2012iu,Na:2012kp,McNeile:2011ng,Bazavov:2011aa,Davies:2010ip}.
The vertical bands show the FLAG~2019 average for $2+1+1$ sea flavors~\cite{Aoki:2019cca}.}
\label{fig:dc}
\end{figure}
Because the Fermilab/MILC results dominate the FLAG average, we simply quote them~\cite{Bazavov:2017lyh}:
\begin{eqnarray}
f_{B^+} &=& 189.4 (0.8)_\text{stat} (1.1)_\text{syst} (0.3)_{\ensuremath{f_{\pi, \rm PDG}}} [0.1]_\text{EM scheme}~\text{MeV} , \label{eq:fB+}\\
f_{B^0} &=& 190.5 (0.8)_\text{stat} (1.0)_\text{syst} (0.3)_{\ensuremath{f_{\pi, \rm PDG}}} [0.1]_\text{EM scheme}~\text{MeV} , \label{eq:fB0}\\
f_{B_s} &=& 230.7 (0.8)_\text{stat} (1.0)_\text{syst} (0.2)_{\ensuremath{f_{\pi, \rm PDG}}} [0.2]_\text{EM scheme}~\text{MeV} . \label{eq:fBs}\\
f_{D^0} &=& 211.6 (0.3)_\text{stat} (0.5)_\text{syst} (0.2)_{\ensuremath{f_{\pi, \rm PDG}}} [0.2]_\text{EM scheme}~\text{MeV} , \label{eq:fD0}\\
f_{D^+} &=& 212.7 (0.3)_\text{stat} (0.4)_\text{syst} (0.2)_{\ensuremath{f_{\pi, \rm PDG}}} [0.2]_\text{EM scheme}~\text{MeV} , \label{eq:fD+}\\
f_{D_s} &=& 249.9 (0.3)_\text{stat} (0.2)_\text{syst} (0.2)_{\ensuremath{f_{\pi, \rm PDG}}} [0.2]_\text{EM scheme}~\text{MeV} , \label{eq:fDs}
\end{eqnarray}
where the systematic uncertainties stem from different choices of fit ranges for the correlation functions and from checking
the continuum extrapolation by adding a coarser lattice; the third ``\ensuremath{f_{\pi, \rm PDG}}'' error comes from converting from lattice units to
MeV with the pion decay constant of the PDG~\cite{PDG2018}; the last uncertainty stems from ambiguities in estimating
electromagnetic effects in the context of a QCD calculation omitting QED.
The results are arguably precise enough for the foreseeable future.
The results in Eqs.~(\ref{eq:fB+})--(\ref{eq:fDs}) again use the very high statistics MILC HISQ ensembles with staggered fermions
for the sea quarks.
Here the lattice spacing is, for some ensembles, small enough to reach the $b$ quark, so the calculation uses the HISQ action for
all $b$ and light quarks alike.
Thus, an absolutely normalized pseudoscalar density is available, so the uncertainty is essentially statistical, as propagated
through a fit to the continuum limit with physical quark mass.
Again, other groups will have to collect similar statistics in the future to enable a complete cross check.
To go beyond the precision quoted here, analyses of leptonic decays will have to include QED radiative corrections to the measured
rates.
The issues and an elegant solution for light mesons (pion and kaon) can be found in
Refs.~\cite{Carrasco:2015xwa,Giusti:2017dwk,DiCarlo:2019thl}.
Radiative corrections for heavy-light mesons will be more difficult to incorporate, because of the hierarchy of soft scales
$\Lambda_\text{QCD}$, $\Lambda_\text{QCD}^2/m_Q$, $\Lambda_\text{QCD}^3/m_Q^2$, etc.
\section{Executive Summary}
\label{intro}
The magnitudes of two of the elements of the Cabibbo-Koba\-yashi-Maskawa (CKM) quark mixing
matrix~\cite{Cabibbo:1963yz,Kobayashi:1973fv}, $|V_{ub}|$ and $|V_{cb}|$, are extracted from semileptonic
$B$-meson decays.
The results of the $B$ factories, analysed in the light of the most recent theoretical calculations, remain puzzling, because -- for
both $|V_{ub}|$ and $|V_{cb}|$ -- the determinations from exclusive and inclusive decays are in tension by about 3$\sigma$.
Recent experimental and theoretical results reduce the tension, but the situation remains unclear.
Meanwhile, measurements in the semitauonic channels at Belle, Babar, and LHCb show discrepancies with the Standard Model (SM)
predictions, pointing to a possible violation of lepton-flavor universality.
LHCb and the upcoming experiment Belle~II have the potential to resolve these issues in the next few years.
Thirty-five participants met at the Mainz Institute for Theoretical Physics to develop a medium-term strategy of analyses and
calculations aimed at the resolution of these issues.
Lattice and continuum theorists discussed with experimentalists how to reshape the semileptonic analyses in view of the much larger
luminosity expected at Belle~II and how to best exploit the new possibilities at LHCb, searching for ways to systematically validate
the theoretical predictions, to confirm new physics indications in semitauonic decays, and to identify the kind of new physics
responsible for the deviations.
\subsubsection*{Format of the workshop}
The program took place during a period of five days, allowing for ample discussion time among the participants.
Each of the five workshop days was devoted to specific topics: the inclusive and exclusive determinations of $|V_{cb}|$ and
$|V_{ub}|$, semitauonic $B$ decays and how they can be affected by new physics, as well as related subjects such as purely leptonic
$B$ decays and heavy quark masses.
In the mornings, we had overview talks from the experimental and theoretical sides, reviewing the main aspects and summarizing the
state of the art.
In the late afternoon, we organized discussion sessions led by experts of the various topics, addressing questions that have been
brought up before or during the morning talks.
\subsubsection*{Exclusive heavy-to-heavy decays}
The $B\to D^{(*)}\ell \nu$ decays have received significant attention in the last few years.
New Belle results for the $q^2$ and angular distributions have allowed studies of the role
played by the parametrization of the form factors in the extraction of $|V_{cb}|$. It turns out that
the extrapolation to zero-recoil is very sensitive to the parametrization employed, a problem
that can be solved only by precise calculations of the form factors at non-zero recoil. Until
these are completed, the situation remains unclear, with repercussions on the calculation of
$R(D^*)$ as well, with diverging views on the theoretical uncertainty of present estimates
based on Heavy Quark Effective Theory (HQET) expressions.
Beside a critical reexamination of these recent developments, we discussed several incremental and qualitative improvements in
lattice QCD, also in baryonic decays.
Though unlikely to carry much weight in determining $|V_{cb}|$, the latter offer great opportunities to test lepton-flavor
universality violation (LFUV) and lattice QCD.
The discussions also addressed the fact that QCD errors are now almost as small as QED effects.
Further theoretical improvement therefore requires a proper study of QED radiation, especially the treatment of
soft photons and of photons that are neither soft nor hard, together with their sensitivity to the meson wave functions.
Concerning studies of LFUV, we discussed the role played by higher excited charmed states in establishing new physics and
the challenges that the present $R(D^{(*)})$ measurements represent for model building.
\subsubsection*{Exclusive heavy-to-light decays}
This determination of $|V_{ub}|$ relies on nonperturbative calculations of the form factor of $B\to \pi\ell\nu$, which is the most
precise channel.
We discussed the status of the light-cone sum rule (LCSR) calculations and several recent improvements in lattice QCD,
in particular the most recent results from the Fermilab Lattice \& MILC Collaborations and from the RBC \& UKQCD Collaborations,
as well as future prospects.
The Fermilab/MILC calculation alone leads to a remarkably small total error on $|V_{ub}|$, about $4\%$.
While at present the most precise extraction of $|V_{ub}|$ comes from $B\to \pi\ell\nu_\ell$, it is worth considering the channel
$B_s \to K\ell\nu$ as well, because here the lattice-QCD calculations are affected by somewhat smaller uncertainties.
$B_s \to K\ell\nu$ would be accessible at Belle~II in a run at the $\Upsilon(5S)$, where a precision of about 5--10\% could be achieved
with $1~\text{fb}^{-1}$.
On the other hand, LHCb has an ongoing analysis of the ratio $B(B_s \to K\ell\nu)/B(B_s\to D_s\ell\nu)$, which will provide a new
determination of $|V_{ub}/V_{cb}|$.
This approach follows the success that LHCb demonstrated for semileptonic baryon decays via the precise measurement of the ratio
$B(\Lambda_b\to p\mu\nu)/B(\Lambda_b\to \Lambda_c \mu\nu)$ in the high-$q^2$ region.
This measurement, combined with precise lattice-QCD calculations of the form factors, allowed the extraction of the ratio
$|V_{ub}/V_{cb}|$ with an uncertainty of $7\%$.
We discussed also other channels, in particular how to study $B\to\pi\pi \ell\nu$ including the resonant structures.
Careful studies of other heavy-to-light channels will also be crucial to improve the signal model for the inclusive $|V_{ub}|$
measurements.
\subsubsection*{Inclusive heavy-to-heavy decays}
The theoretical predictions in this case are based on an operator product expansion.
Theoretical uncertainties already dominate current determinations, and better control of all higher-order corrections
is needed to reduce them.
In this respect, it would be important to have the perturbative-QCD corrections to the complete coefficient of the Darwin operator and to
check the treatment of QED radiation in the experimental analyses.
A full $O(\alpha_s^3)$ calculation of the total width may be within reach with recently developed techniques.
From the experimental point of view, new and more accurate measurements will be most welcome, in particular to better understand
the correlations between different moments and moments with different cuts.
A better determination of the higher hadronic mass moments and a first measurement of the forward-backward asymmetry would benefit
the global fit, as would a better understanding of higher power corrections.
The importance of having global fits to the moments in different schemes and by different groups has also been stressed.
This calls for an update of the $1S$ scheme fit and could lead to a cross-check of the present theoretical uncertainties.
Lattice QCD already provides inputs to the fit with the calculation of the heavy quark masses, which have been reviewed.
New developments discussed at the workshop may soon be able to provide additional information that can be fed into the fits,
such as constraints on the heavy-quark quantities $\mu_\pi^2$ and $\mu_G^2$.
The two main approaches are \emph{i)}~computing inclusive rates directly with lattice QCD and \emph{ii)}~using the heavy quark
expansion for meson masses, precisely computed at different quark mass values.
The state of theoretical calculations for inclusive semitauonic decays has also been discussed, as they represent an important
cross-check of the LFUV signals.
\subsubsection*{Inclusive heavy-to-light decays}
This determination is based on various well-founded theoretical methods, most of which agree well.
The 2017 endpoint analysis by BaBar seems to challenge this consolidated picture, suggesting discrepancies between some of the
methods and a lower value of~$|V_{ub}|$.
For the future, the complete NNLO corrections in the full phase space should be implemented and the various methods should be
upgraded in order to make the best use of the Belle~II differential data based on much higher statistics.
These data will make it possible to test the various methods and to calibrate them, as they will contain information on the shape
functions.
The SIMBA and NNVub methods seem to have the potential to fully exploit the $B\to X_u \ell \nu$ (and possibly radiative) measurements
through combined fits to the shape function(s) and $|V_{ub}|$.
The separation of $B^\pm$ and $B^0$ in the experimental analyses will certainly help to constrain weak annihilation, but the real
added value of Belle~II could be precise measurements of kinematic distributions in $M_X$, $q^2$, $E_l$, etc.
A detailed measurement of the high $q^2$ tail might be very useful, also in view of attempts to check quark-hadron duality.
Experimentally, better hybrid (inclusive+exclusive) Monte Carlos are badly needed; $s$-$\bar s$ popping should be investigated to
develop a better understanding of kaon vetoes.
The $b\to c$ background will be measured better, which will benefit these analyses.
\subsubsection*{Leptonic decays}
The measurement of $B\to\tau\nu$ is not yet competitive with semileptonic decays for measuring $|V_{ub}|$,
because of a 20\% error on the rate. Belle~II will improve on this.
The corresponding lattice-QCD calculation is however very precise, with an error below~1\%, according to the 2019 report from
FLAG~\cite{Aoki:2019cca} and based mainly on a result from Fermilab/MILC that was presented at the workshop.
That said, the mode is useful today to model builders trying to understand new physics explanations of the
tension between inclusive and exclusive determinations of $|V_{ub}|$.
Belle~II will also access $B\to\mu\nu(\gamma)$ with the possibility to reach
an uncertainty on the branching fraction of about $5\%$ with $50~\text{ab}^{-1}$, allowing for a new determination
of $|V_{ub}|$ in the long term.
We discussed also the LHCb contribution to leptonic decays with the process $B\to\mu\mu\mu\nu_\mu$ where two of the muons
come from virtual $\gamma$ or light vector meson decays.
A study of this channel has been published in~\cite{Aaij:2018pKa};
the very stringent upper limit obtained there is inconsistent with existing branching-fraction predictions and calls for new, reliable theoretical calculations.
\section{Heavy-to-heavy exclusive decays}
\label{h2h_excl}
The aim of this section is to present an overview of $b\to c$ exclusive decays.
After an introduction to the parametrization of the relevant form factors between hadronic states we describe the status of current
lattice QCD calculations with particular focus on $B\to D^*$ and $\Lambda_b\to \Lambda_c$.
Next, we discuss experimental measurements of $B\to D^{(*)}$ semileptonic decays with special focus on the ratios $R(D^{(*)})$, and
several phenomenological aspects of these decays: the extraction of $V_{cb}$, theoretical predictions for
$R(D^{(*)})$, the role of $B\to D^{**}$ transitions and constraints on new physics.
We also briefly discuss the information that is required to reproduce results presented in experimental analyses and to incorporate
older measurements into approaches based on modern form factor parametrizations.
We conclude with the description of HAMMER, a tool designed to more easily calculate the change in signal acceptances,
efficiencies, and signal yields in the presence of new physics.
\subsection{Parametrization of the form factors}
\label{sec:param}
In this section, we introduce the form factors for the hadronic matrix elements that arise in semileptonic decays.
Several different notations appear in the literature, often using different conventions depending on whether the final-state meson
is heavy (e.g., $D$) or light (e.g., $\pi$).
A general decomposition relies, however, only on Lorentz covariance and other symmetry properties of the matrix elements.
As discussed below, it is advantageous to choose the Lorentz structure so that the form factors have definite parity and spin.
In this spirit, let us consider the matrix elements for a meson decay $B_{(l)}\to X^{(\ast)}\ell\nu$, where the quark content of
the $\bar{B}$ is $bl$ with $l$ a light quark ($u$, $d$, or $s$), and the quark content of the $X$ is $\bar{q}l$ where $q$ can be
either a light quark or the~$c$ quark.
The desired decomposition can be written as
\begin{align}
\left\langle X(p') |S |B_{(l)}(p) \right\rangle &= \frac{M^2-m^2}{m_b - m_q} f_0(q^2),
\label{eq:ff-scalar} \\
\left\langle X(p') |V^{\mu} |B_{(l)}(p) \right\rangle &= \left[(p+p')^\mu - \frac{M^2-m^2}{q^2}q^\mu\right]f_+(q^2) +
\frac{M^2-m^2}{q^2}q^\mu\,f_0(q^2),
\label{eq:ff-vector} \\
\left\langle X(p') |T^{\mu\nu}|B_{(l)}(p) \right\rangle &= 2\frac{p^\mu p^{\prime\nu} - p^\nu p^{\prime\mu}}{M+m}
f_T(q^2), \label{eq:ff-pseudo-tensor} \\
\left\langle X^\ast(p') |P |B_{(l)}(p) \right\rangle &= \frac{2m}{m_b + m_q}(\epsilon^\ast \cdot q) A_0(q^2),
\label{eq:ff-pseudo} \\
\left\langle X^\ast(p') |V^{\mu} |B_{(l)}(p) \right\rangle &= \frac{2i}{M+m}\varepsilon^{\mu\nu\alpha\beta}
\epsilon^\ast_\nu p_\alpha p'_\beta\,V(q^2),
\label{eq:ff-vector-vector} \\
\left\langle X^\ast(p') |A^{\mu} |B_{(l)}(p) \right\rangle &= 2m\frac{\epsilon^\ast\cdot q}{q^2}q^\mu\,A_0(q^2) +
(M+m) \left(\epsilon^{\ast\mu} - \frac{\epsilon^\ast\cdot q}{q^2}q^\mu\right) A_1(q^2) \nonumber \\
&- \frac{\epsilon^\ast\cdot q}{M+m}\left[(p+p')^\mu - \frac{M^2 - m^2}{q^2}q^\mu\right]A_2(q^2),
\label{eq:ff-axial} \\
\left\langle X^\ast(p') |T^{\mu\nu}|B_{(l)}(p) \right\rangle &= i\epsilon^{\mu\nu\sigma\rho}
\left\{\epsilon^\ast_\sigma\left[(p+p')_\rho T_1(q^2) - \vphantom{\frac{M^2-m^2}{q^2}} \right.\right. \nonumber \\
& \left.\left. \hspace{2em} q_\rho\frac{M^2-m^2}{q^2}\left(T_1(q^2) - T_2(q^2)\right)\right]\right.
\label{eq:ff-tensor} \\
+& \left.(\epsilon^\ast\cdot p)\frac{(p+p')_\sigma q_\rho}{q^2}\left[T_1(q^2) - T_2(q^2) -
\frac{q^2}{M^2 - m^2}T_3(q^2)\right]\right\}, \nonumber
\end{align}
where $q^\mu= (p - p')^\mu$ is the momentum transfer,
$S=\bar{b}q$ is the scalar current,
$P=\bar{b}\gamma^5q$ is the pseudoscalar current,
$V^\mu = \bar{b}\gamma^\mu q$ is the vector current,
$A^\mu = \bar{b}\gamma^\mu\gamma_5 q$ is the axial current,
$T^{\mu\nu} = \bar{b}\sigma^{\mu\nu} q$ is the tensor current,
$m_q$ is the mass of the quark~$q$,
$M$ is the mass of the parent meson ($B$ in this case),
$m$ (without subscript) is the mass of the daughter meson,
and $r=m/M$.
Contracting Eqs.~(\ref{eq:ff-vector}) and (\ref{eq:ff-axial}) with $q_\mu$ and using the appropriate Ward identities shows that the
scalar form factor, $f_0$, and pseudoscalar form factor, $A_0$, appear in the vector and axial vector transitions.
The $J^P$ quantum numbers of the form factors are given in Table~\ref{tab:qn}.
\begin{table} \centering
\caption{Quantum numbers of various meson form factors.}
\label{tab:qn}
\begin{tabular}{lccccc}
\hline\hline
& $0^+$ & $0^-$ & $1^-$ & $1^+$ & $2^+$ \\
\hline
$B_{(l)}\to X\ell\bar{\nu}$ & $f_0$ & -- & $f_+$ & -- & $f_T$ \\
$B_{(l)}\to X^\ast\ell\bar{\nu}$ & -- & $A_0$ & $V$ & $A_1, A_2$ & $T_1, T_2, T_3$ \\
\hline\hline
\end{tabular}
\end{table}
The tensor form factors in Eqs.~(\ref{eq:ff-pseudo-tensor}) and~(\ref{eq:ff-tensor}) appear in extensions of the Standard Model.
One can impose bounds on the shape of these form factors by using QCD dispersion relations for a generic decay
$H_b\to H_q\ell\bar{\nu}$.
Since the amplitude for production of $H_b H_q$ from a virtual $W$ boson is determined by the analytic continuation of the form
factors from the semileptonic region of momentum transfer $m_\ell^2 < q^2 < (M - m)^2$ to the pair production region
$q^2\geq (M + m)^2$, one can find constraints in the pair-production region, amenable to perturbative QCD calculations, and then
propagate the constraint to the semileptonic region by using analyticity.
The result of this process applied to the form factors is the model-independent Boyd-Grinstein-Lebed (BGL)
parametrization~\cite{Boyd:1995cf,Boyd:1997kz}, which expands a form factor $F(z)$ in the dimensionless variable $z$ as
\begin{align}
F(z) &= \frac{1}{B_F(z)\phi_F(z)}\sum_{j=0}^\infty a^F_j z^j,
\label{BGLseries} \\
z(q^2;t_0) &= \frac{\sqrt{t_+ - q^2} - \sqrt{t_+ - t_0}}{\sqrt{t_+ - q^2} + \sqrt{t_+ - t_0}},
\label{eq:z-def}
\end{align}
where $t_\pm = (M\pm m)^2$, $B_F(z)$ are known as the \emph{Blaschke factors}, which incorporate the below- or
near-threshold~\cite{Caprini:2017ins} poles in the $s$-channel process $\ell\nu\to \bar{B}X$, and $\phi_F(z)$ is called the
\emph{outer function}.
The poles, and hence the Blaschke factor, depend on the spin and the parity of the intermediate state, which is why it is useful to
use fixed $J^P$ for the form factors.
See Sec.~\ref{sec:z-remarks} for more details.%
\footnote{In particular, there are cases when one should \emph{not} use the naive choice $t_+=(M+m)^2$ in Eq.~(\ref{eq:z-def}).
The correct choice is the branch point of a cut in the complex-$q^2$ plane, which sometimes is at~$t_\text{cut}<(M+m)^2$.}
Of course, in practical applications the series (\ref{BGLseries}) is truncated at some power $z^{n_F}$.
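As a concrete illustration, the following Python sketch evaluates a truncated BGL series built from Eqs.~(\ref{BGLseries}) and~(\ref{eq:z-def}).
All numerical inputs (the coefficients $a_j$, the single sub-threshold pole mass, and the trivial outer function) are hypothetical placeholders;
a real analysis would use the physical pole spectrum and the perturbatively computed outer function.
\begin{verbatim}
import math

def z_of(q2, t_plus, t0):
    """Conformal variable z(q^2; t0), cf. Eq. (z-def)."""
    a = math.sqrt(t_plus - q2)
    b = math.sqrt(t_plus - t0)
    return (a - b) / (a + b)

def blaschke(q2, pole_masses, t_plus, t0):
    """Product of Blaschke factors, one per sub-threshold pole:
    B(z) = prod_P (z - z_P)/(1 - z*z_P), with z_P = z(m_P^2; t0)."""
    zq = z_of(q2, t_plus, t0)
    prod = 1.0
    for mP in pole_masses:
        zP = z_of(mP**2, t_plus, t0)
        prod *= (zq - zP) / (1.0 - zq * zP)
    return prod

def bgl(q2, coeffs, pole_masses, outer, t_plus, t0):
    """Truncated BGL series: F = sum_j a_j z^j / (B(z) phi(z))."""
    zq = z_of(q2, t_plus, t0)
    series = sum(a * zq**j for j, a in enumerate(coeffs))
    return series / (blaschke(q2, pole_masses, t_plus, t0) * outer(q2))

MB, MD = 5.280, 1.870                     # GeV, approximate masses
t_plus, t0 = (MB + MD)**2, (MB - MD)**2   # t0 = t_-: z = 0 at zero recoil
print(bgl(q2=0.0,
          coeffs=[0.02, -0.10, 0.30],     # hypothetical coefficients a_j
          pole_masses=[6.33],             # hypothetical sub-threshold pole
          outer=lambda q2: 1.0,           # trivial placeholder outer function
          t_plus=t_plus, t0=t0))
\end{verbatim}
The same three ingredients (conformal variable, Blaschke factor, outer function) appear in every variant of the $z$ expansion discussed below.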
By taking certain linear combinations of form factors with the same spin and parity one obtains the BGL notation for the helicity
amplitudes,
\begin{align}
f^\text{BGL}_+ &= f_+, \label{fpBGL} \\
f^\text{BGL}_0 &= (M^2 - m^2) f_0, \label{f0BGL} \\
g &= \frac{2}{M+m} V, \label{gBGL} \\
f &= (M+m)A_1, \label{fBGL} \\
\mathcal{F}_1 &= M(M+m)(w-r)A_1 - \frac{2Mm(w^2-1)}{1+r}A_2, \label{F1BGL} \\
\mathcal{F}_2 &= 2A_0, \label{F2BGL}
\end{align}
leaving aside the (BSM) tensor form factors.
Here the velocity transfer
\begin{equation}
w = v_M\cdot v_m = \frac{M^2 + m^2 - q^2}{2Mm},
\label{eq:wdef}
\end{equation}
with $v_M=p/M$ and $v_m=p'/m$, is often used in heavy-to-heavy decays.
For heavy-to-light decays it can be helpful to work with the energy of the daughter meson in the rest frame of the parent, i.e.,
\begin{equation}
E = p'\cdot v_M = \frac{M^2 + m^2 - q^2}{2M}.
\end{equation}
These form factors are subject to three kinematic constraints, namely
\begin{align}
(M^2 - m^2)f^\text{BGL}_+(q^2=0) &= f^\text{BGL}_0(q^2=0), \label{ffKin1}\\
(M-m)f(q^2=q_\text{max}^2) &= \mathcal{F}_1(q^2=q_\text{max}^2), \label{ffKin2}\\
\frac{2}{M^2-m^2} \mathcal{F}_1(q^2=0) &= \mathcal{F}_2(q^2=0), \label{ffKin3}
\end{align}
where $q^2_\text{max}=(M-m)^2$, corresponding to $w=1$ and $E=m$.
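A minimal numerical sketch (in Python, with approximate meson masses as assumed inputs) verifies these zero-recoil kinematic relations:
\begin{verbatim}
import math

def w_of_q2(q2, M, m):
    """Velocity transfer w = (M^2 + m^2 - q^2)/(2 M m), Eq. (wdef)."""
    return (M**2 + m**2 - q2) / (2 * M * m)

def E_of_q2(q2, M, m):
    """Daughter-meson energy in the parent rest frame."""
    return (M**2 + m**2 - q2) / (2 * M)

MB, MDst = 5.280, 2.010            # GeV, approximate masses
q2max = (MB - MDst)**2
assert abs(w_of_q2(q2max, MB, MDst) - 1.0) < 1e-12   # w = 1 at zero recoil
assert abs(E_of_q2(q2max, MB, MDst) - MDst) < 1e-12  # E = m at zero recoil
\end{verbatim}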
The variable $z$ can also be expressed via $w$,
\begin{equation}
z = \frac{\sqrt{w+1} - \sqrt{2N}}{\sqrt{w+1} + \sqrt{2N}},
\end{equation}
where $N=(t_+ - t_0)/(t_+ - t_-)$; the variable $z$ is real for $q^2 \leq (M + m)^2$ and becomes a pure phase beyond that limit.
The constant $t_0$ defines the point at which $z=0$.
Often $t_0 = t_-$, one end of the kinematic range, so $z$ ranges from 0 at maximum $q^2$ to
$z_\text{max}=(1-\sqrt{r})^2/(1+\sqrt{r})^2$ when $m_\ell\approx 0$.
Alternatively, the choice $t_0=(M+m)(\sqrt{M}-\sqrt{m})^2$ sets $z=0$ exactly in the middle of the kinematic range.
Even for $B\to\pi\ell\nu$, $z$ is always a small quantity, which ensures a fast convergence of the
power series defined in \eqref{BGLseries}.
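These statements are easy to check numerically; the sketch below (approximate masses, illustrative only) reproduces
$z_\text{max}\approx 0.065$ for $B\to D$ with $t_0=t_-$ and shows that the midpoint choice keeps $|z|\lesssim 0.28$ even for $B\to\pi$:
\begin{verbatim}
import math

def z_of(q2, M, m, t0):
    t_plus = (M + m)**2
    a = math.sqrt(t_plus - q2)
    b = math.sqrt(t_plus - t0)
    return (a - b) / (a + b)

MB, MD, Mpi = 5.280, 1.870, 0.140         # GeV, approximate masses

# t0 = t_-: z runs from 0 at zero recoil to (1-sqrt(r))^2/(1+sqrt(r))^2
print(z_of(0.0, MB, MD,  (MB - MD)**2))   # ~0.065 for B -> D
print(z_of(0.0, MB, Mpi, (MB - Mpi)**2))  # ~0.52  for B -> pi

# Midpoint choice t0 = (M+m)(sqrt(M)-sqrt(m))^2: symmetric range in z
t0_mid = (MB + Mpi) * (math.sqrt(MB) - math.sqrt(Mpi))**2
print(z_of(0.0, MB, Mpi, t0_mid))             # ~ +0.28
print(z_of((MB - Mpi)**2, MB, Mpi, t0_mid))   # ~ -0.28
\end{verbatim}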
Unitarity constraints from the QCD dispersion relations are translated into constraints for the coefficients of the BGL expansion.
In general,
\begin{equation}
\sum^\infty_{j=0} \left(a^F_j\right)^2 \leq 1,
\end{equation}
for each form factor $F$, but in the particular case of $\bar{B}\to D^\ast\ell\bar{\nu}$ the bound becomes
\begin{equation}
\sum^\infty_{j=0} \left[\left(a^f_j\right)^2 + \left(a^{\mathcal{F}_1}_j\right)^2\right] \leq 1,
\end{equation}
for the $f$ and $\mathcal{F}_1$ form factors, because they have the same quantum numbers.
These bounds are known as the \emph{weak} unitarity constraints.
A modification of the BGL parametrization by Bourrely, Lellouch and Caprini (BCL)~\cite{Bourrely:2008za} is often
chosen in analyses of heavy-to-light decays.
The BCL parametrization improves BGL by fixing two artifacts of the truncated BGL series.
In particular, it removes an unphysical singularity at the pair production threshold and corrects the large $q^2$
behavior (see~\cite{Lepage:1980fj,Akhoury:1993uw}) in the functional form.
These two modifications improve the convergence of the expansion.
However, the kinematic range is much more constrained in the heavy-to-heavy case, and lies farther from both the production
threshold and the large $q^2$ region.
Therefore, the presence of far singularities or an incorrect asymptotic behavior are not expected to spoil the $z$-expansion in
that case.
In the heavy-to-heavy case, one can sharpen the \emph{weak} unitarity constraints on the BGL coefficients using
heavy quark symmetry (HQS) which relates the different $B^{(*)}\to D^{(*)}\ell \bar\nu$ channels and their form factors: each form
factor is either proportional to the Isgur-Wise function $\xi(w)$ or zero.
Using heavy quark effective theory (HQET) one can improve the precision by introducing radiative and power (i.e.\ in inverse powers
of the heavy masses) corrections.
Then we can define any form factor in such a way that it admits the expansion in both $\alpha_s$ and the heavy quark masses
\begin{equation}
F(w) = \xi(w)\left(1+c_{\alpha_s}\frac{\alpha_s}{\pi} + c_b\frac{\Lambda_{\textrm{QCD}}}{m_b} +
c_c\frac{\Lambda_{\textrm{QCD}}}{m_c} + \cdots\right).\label{eq:ffexp}
\end{equation}
These expansions can be used to link the $z$~expansion coefficients of different form factors, leading to the so-called
\emph{strong} unitarity constraints \cite{Caprini:1997mu,Bigi:2017jbd}.
The power corrections depend on subleading Isgur-Wise functions that have been estimated with QCD sum
rules~\cite{Neubert:1992wq,Neubert:1992pn,Ligeti:1993hw}.
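For orientation, the generic sizes of the three correction terms in Eq.~(\ref{eq:ffexp}) can be estimated with illustrative inputs;
the numerical values below are assumptions chosen only to set the scales, not fitted quantities:
\begin{verbatim}
import math

# Illustrative inputs only: Lambda_QCD ~ 0.5 GeV, m_b ~ 4.2 GeV,
# m_c ~ 1.3 GeV, alpha_s ~ 0.26 at the relevant scale.
alpha_s, Lam, mb, mc = 0.26, 0.5, 4.2, 1.3
print(alpha_s / math.pi)   # ~0.08: radiative correction scale
print(Lam / mb)            # ~0.12: 1/m_b power correction scale
print(Lam / mc)            # ~0.38: 1/m_c power correction scale
\end{verbatim}
In particular, the $1/m_c$ terms are the largest, which is why the treatment of the subleading Isgur-Wise functions matters so much in practice.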
Previous analyses of $B\to D^\ast\ell\nu$ have used the Caprini-Lellouch-Neubert (CLN) parametrization~\cite{Caprini:1997mu}.
CLN employ a notation for the form factors that satisfies (\ref{eq:ffexp}),\footnote{See~\cite{Bigi:2017jbd} for a comprehensive table including other decays.}
\begin{align}
S^\text{CLN}_1 &= \frac{f^\text{BGL}_0}{M^2(1-r)\sqrt{r}(1+w)}, & \quad &
P^\text{CLN}_1 = \frac{\sqrt{r}}{1+r}\mathcal{F}_2, \\
V^\text{CLN}_1 &= \frac{2\sqrt{r}}{1+r}f^\text{BGL}_+, & \quad &
V^\text{CLN}_4 = M\sqrt{r}g, \\
A^\text{CLN}_1 &= \frac{f}{M\sqrt{r}(1+w)}, & \quad &
A^\text{CLN}_5 = \frac{\mathcal{F}_1}{M^2(1-r)\sqrt{r}(1+w)}, \\
R^\text{CLN}_1 &= \frac{V^\text{CLN}_4}{A^\text{CLN}_1}, & \quad &
R^\text{CLN}_2 = \frac{w - r}{w-1} - \frac{(1-r)}{w-1}\frac{A^\text{CLN}_5}{A^\text{CLN}_1},
\end{align}
where the letter naming the form factor ($S$, $P$, $V$ and~$A$) encodes its quantum numbers (scalar, pseudoscalar, vector and
axial vector), and $R^\text{CLN}_{1,2}$ are two convenient ratios of form factors.
Sometimes the ratio $R^\text{CLN}_0 = P^\text{CLN}_1/A^\text{CLN}_1$ is considered.
In the CLN parametrization the
\emph{strong} unitarity constraints obtained with HQET at NLO are used to remove some of the coefficients of the $z$ expansion.
Further, specific numerical coefficients are introduced in a polynomial in~$w$ for $R_{1,2}^\text{CLN}$.
The numerical values were determined using information available in 1997, which has since been partly superseded, but they were never updated.
They also come without error estimates (which were discussed in the original CLN paper~\cite{Caprini:1997mu}, although in an optimistic manner), because at the time the experimental statistical errors dominated,
which is no longer the case.
The consensus of the workshop is that CLN should no longer be used, certainly not unless the numerical coefficients are updated and
the ensuing theoretical uncertainties are accounted for.
It is better to use a general form of the $z$~expansion.
HQET naturally presents another basis for the form factors of the $\bar{B}\to D^{(\ast)}\ell\bar{\nu}$ processes.
Using velocities instead of momenta and otherwise mimicking the Lorentz structure of Eqs.~(\ref{eq:ff-vector}),
(\ref{eq:ff-vector-vector}), and~(\ref{eq:ff-axial}), the notation is $h_+$ and $h_-$ for $\bar{B}\to D\ell\bar{\nu}$,
and $h_V$ and $h_{A_{1,2,3}}$ for $\bar{B}\to D^{\ast}\ell\bar{\nu}$.
In the heavy quark limit, these form factors tend to
\begin{equation}
h_X(w) = \eta(\alpha_s)\xi(w) + O\big(\frac{\Lambda_{\rm QCD}}{m_{b,c}}\big), \label{hqetFF1}
\end{equation}
for $X=+$, $A_1$, $A_3$, $V$, and
\begin{equation}
h_Y(w) = \beta(\alpha_s)\xi(w) + O\big(\frac{\Lambda_{\rm QCD}}{m_{b,c}}\big), \label{hqetFF2}
\end{equation}
with $Y=-$,~$A_2$.
Here $\eta(\alpha_s) = 1 + O(\alpha_s)$, while $\beta(\alpha_s) = O(\alpha_s)$.
In this representation, the identities expressed in Eqs.~(\ref{ffKin1})--(\ref{ffKin3}) become evident.
Finally, for the case of a baryonic decay $\Lambda_b\to Y_{(q)}\ell\nu$, with $Y=p,\Lambda_c$, we define
\begin{align}
\left\langle Y(p') | S |\Lambda_b(p)\right\rangle &= \bar{u}_q(p') \frac{M-m}{m_b - m_q} f_0(q^2) u_b(p), \\
\left\langle Y(p') | P |\Lambda_b(p)\right\rangle &= \bar{u}_q(p')\gamma_5\frac{M+m}{m_b + m_q} g_0(q^2) u_b(p), \\
\left\langle Y(p') | V^{\mu} |\Lambda_b(p)\right\rangle &= \bar{u}_q(p') \left[(M-m)\frac{q^\mu}{q^2} f_0(q^2) + \right. \nonumber \\
& \left. \frac{M+m}{s_+}\left((p + p')^\mu -
\frac{q^\mu}{q^2}(M^2- m^2)\right) f_+(q^2) +\right.\nonumber \\
& \left.\left(\gamma^\mu - 2\frac{mp^\mu + Mp^{\prime\mu}}{s_+}\right)
f_\bot(q^2)\right] u_b(p), \\
\left\langle Y(p') | A^{\mu} |\Lambda_b(p)\right\rangle &=-\bar{u}_q(p')\gamma_5\left[(M+m)\frac{q^\mu}{q^2} g_0(q^2) + \right. \nonumber \\
& \left.\frac{M-m}{s_-}\left((p + p')^\mu -
\frac{q^\mu}{q^2}(M^2- m^2)\right) g_+(q^2) +\right.\nonumber \\
& \left.\left(\gamma^\mu + 2\frac{mp^\mu - Mp^{\prime\mu}}{s_-}\right) g_\bot(q^2)\right] u_b(p), \\
\left\langle Y(p') |q_\nu T^{\mu\nu} |\Lambda_b(p)\right\rangle &=-\bar{u}_q(p')\left[\left((p + p')^\mu - \frac{q^\mu}{q^2}(M^2-m^2)\right)\frac{q^2}{s_+} h_+(q^2)\right. \nonumber \\
& \left.+ (M + m) \left(\gamma^\mu - 2\frac{mp^\mu + Mp^{\prime\mu}}{s_+}\right) h_\bot(q^2)\right] u_b(p),
\end{align}
where $M$ is the mass of the $\Lambda_b$, $m$ is the mass of the daughter baryon and $s_\pm = (M\pm m)^2 - q^2$.
The $z$ expansions for the baryonic form factors employed in Ref.~\cite{Detmold:2015aaa}
use trivial outer functions and do not impose unitarity bounds on the coefficients of the expansion.
As a result, the coefficients are unconstrained and reach values as high as $\sim10$. See also Sec.~\ref{sec:z-remarks}.
\subsection{Heavy-to-heavy form factors from lattice QCD}
\label{sec:hth_lat}
The lattice QCD calculation of the form factors for the semileptonic
decay of a hadron uses
two- and three-point correlation functions, which are constructed
from valence quark propagators obtained by solving the Dirac equation
on a set of gluon field configurations. Averaging the correlation
functions over the gluon field configurations then yields a Monte Carlo estimate of the appropriate Feynman
path integral.
The two-point correlation functions give the amplitude for a hadron to be created
at the time origin and then destroyed at a time $T$. The three-point
correlation functions include the insertion of a current $J$ at
time $t$ on the active quark line, changing the active quark
from one flavor to another.
Usually calculations are performed with the initial hadron at
rest. Momentum is inserted at the current so that a range of
momentum transfer, $q$, from initial to final hadron can be mapped
out.
The three-point correlation functions (for multiple $q$ values) and
the two-point correlation functions (with multiple momenta in the
case of the final-state hadron) are fit
as functions of $t$ and $T$ to determine the matrix elements of
the currents between initial and final hadrons that yield the
required
form factors. An important point here is that the initial and final
hadrons that we focus on are the ground-state particles in their respective
channels. However, terms corresponding to excited
states must be included in the fits in order to make sure that systematic effects
from excited-state contamination are taken into account in the
fit parameters that yield the ground-state
to ground-state matrix element of $J$ and hence the form factors.
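Schematically, the fit models described above take the following form. The Python sketch below is a minimal illustration:
the amplitudes, energies, and current matrix elements are hypothetical placeholders, and real analyses additionally include
oscillating contributions for staggered quarks, priors, and fully correlated fits.
\begin{verbatim}
import math

def two_point(t, states):
    """C2(t) = sum_n A_n^2 exp(-E_n t): ground plus excited states."""
    return sum(A**2 * math.exp(-E * t) for A, E in states)

def three_point(t, T, src, snk, J):
    """C3(t, T) = sum_{m,n} A_m A_n <m|J|n> exp(-E_m (T-t)) exp(-E_n t),
    with the current J inserted at time t and the sink at time T."""
    c = 0.0
    for m, (Am, Em) in enumerate(snk):
        for n, (An, En) in enumerate(src):
            c += Am * An * J[m][n] * math.exp(-Em * (T - t)) * math.exp(-En * t)
    return c

# Toy numbers: (amplitude, energy) for ground and first excited state.
src = [(1.0, 0.50), (0.4, 0.90)]   # initial hadron
snk = [(0.8, 0.70), (0.3, 1.20)]   # final hadron
J   = [[1.0, 0.2], [0.2, 0.05]]    # hypothetical matrix elements <m|J|n>
print(two_point(5, src), three_point(t=8, T=16, src=src, snk=snk, J=J))
\end{verbatim}
Fitting such models to the measured correlators, with the excited-state terms included, isolates the ground-state matrix element of interest.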
Statistical uncertainties in the form factors obtained obviously
depend on the numbers of samples of gluon-field configurations
on which correlation functions are calculated. To improve statistical
accuracy further, calculations usually include
multiple positions of the time origin
for the correlation functions on each configuration.
The numerical cost of the calculation of quark propagators falls
as the quark mass increases and so heavy ($b$ and $c$) quark
propagators are typically numerically inexpensive.
The accompanying light quark propagators for heavy-light hadrons
are much more expensive, especially if $u/d$ quarks with physically
light masses are required. It is this cost that limits the
statistical
accuracy that can be obtained, especially since the statistical
uncertainty for a heavy-light hadron correlation function
(on a given number of gluon field configurations)
also grows as the separation in mass between the heavy and light
quarks increases.
A key issue for heavy-to-heavy ($b$ to $c$) form factor calculations
is how to handle heavy quarks on the lattice. Discretization of
the Dirac equation on a space-time lattice gives systematic discretization
effects that depend on powers of the quark mass in lattice units.
The size of these effects depends on the value of the lattice spacing
and the power with which the effects appear (i.e., the level of
improvement used in the lattice Lagrangian).
Since the $b$ quark is so heavy, its mass in lattice units will be larger
than 1 on all but the finest lattices ($a < $ 0.05~fm) currently in
use. Highly-improved discretizations of the Dirac equation are
needed to control the discretization effects. A good example of such
a lattice quark formalism is the highly improved
staggered quark (HISQ) action developed
by HPQCD~\cite{Follana:2006rc} for both light and heavy quarks with
discretization errors appearing at $O(\alpha_s(am)^2)$ and $O((am)^4)$.
An alternative approach is to make use of the fact
that $b$ quarks are nonrelativistic inside their bound states.
This means that a discretization of a nonrelativistic action
(NRQCD) can be used, expanding the action to some specified order in the
$b$ quark velocity. Discretization effects then depend on the
scales associated with the internal dynamics and
these scales are all much smaller
than the $b$ quark mass. Relativistic effects can be included
and discretization effects corrected at the cost of complicating the action
with additional operators.
A third possibility is to start from the
Wilson quark action and improved versions of it but to tune the
parameters (such as the quark mass) using a nonrelativistic
dispersion relation for the meson, which is known as the Fermilab method~\cite{ElKhadra:1996mp}.
This removes the leading source
of mass-dependent discretization effects, whilst retaining a
discretization that connects smoothly to the continuum limit.
Again, improved versions of this approach (such as the Oktay-Kronfeld action \cite{Oktay:2008ex})
include additional operators.
The $c$ quark has a mass larger than $\Lambda_{\mathrm{QCD}}$
but within lattice QCD it can be treated successfully as a light quark
because its mass in lattice units is less than 1 on lattices in
current use (with $a < 0.15$~fm). This means that, although
discretization effects are visible in lattice QCD calculations
with $c$ quarks, they are not large and can easily be extrapolated
away accurately for a continuum result. For example, discretization
effects are less than 10\% at $a=0.15$~fm in calculations of
the decay constant of the $D_s$ using the HISQ action~\cite{Davies:2010ip}.
Purely nonrelativistic approaches to the $c$ quark are therefore not
useful on the lattice.
There can be some advantage for $b$-to-$c$ form factor
calculations in using the same action for $b$ and $c$, however, as we
discuss below.
Because lattice and continuum QCD regularize the theory in a
different
way, the lattice current $J$ needs a finite renormalization factor to match its
continuum counterpart so that matrix elements of $J$, and form
factors
derived from them, can be used
in continuum phenomenology.
For NRQCD and Wilson/Fermilab quarks the current
$J$ must be normalized using lattice QCD perturbation theory.
Since this is technically rather challenging it has only been
done through $O(\alpha_s)$ and this
leaves a sizeable (possibly several percent) systematic error from
missing higher-order terms in the perturbation theory.
If Wilson/Fermilab quarks are used for both $b$ and $c$ quarks,
then arguments
can be made about the approach to the heavy-quark limit
that can reduce, but not eliminate, this uncertainty~\cite{Harada:2001fj}.
Relativistic treatments of the $b$ and $c$ quarks
have a big advantage here, because $J$ can generally
be normalized in a fully nonperturbative way
within the lattice QCD calculation and without additional
systematic errors.
The advantages of this approach were first demonstrated by the
HPQCD collaboration using the HISQ action to determine
the decay constant of the $B_s$ \cite{McNeile:2012qf}. The HISQ PCAC relation
normalizes the axial-vector current in this case.
Calculations for multiple
quark masses on lattices with multiple values of the lattice spacing
allow both the physical dependence of the decay constant on quark
mass and the dependence of the discretization effects to be mapped
out
so that the physical result at the $b$ quark mass can be determined.
This calculation has now been updated and extended to the $B$ meson
by the Fermilab Lattice and MILC collaborations \cite{Bazavov:2017lyh}, achieving
better than 1\% uncertainty. HPQCD is now carrying out a similar approach
to $b$-to-$c$ form factor calculations \cite{McLean:2019sds}, and the JLQCD
collaboration is also working in that direction
\cite{Kaneko:2019vkx} with M\"{o}bius domain-wall quarks.
An equivalent approach, using ratios of hadronic quantities at different
quark masses where normalization factors cancel, has been developed by the
European Twisted Mass collaboration using the twisted-mass action \cite{Blossier:2009hg,Bussone:2016iua}
for Wilson fermions.
\subsubsection{$B \rightarrow D^{(*)}$ form factors from lattice QCD}
Early lattice QCD calculations of $B\to D$ form factors were limited to the determination of
$\mathcal{G}^{B\to D} (w) = 2 \sqrt{r}\, f_+ (q^2)/(1+r)$ (with notation defined near \eqref{eq:wdef})
at the zero-recoil point $w=1$. Results include the $N_f=2+1$ calculation of Fermilab/MILC~\cite{Okamoto:2004xg,Qiu:2013ofa} and the
$N_f=2$ calculation of Atoui et al.~\cite{Atoui:2013zza}.
More recently Fermilab/MILC~\cite{Lattice:2015rga} and HPQCD~\cite{Na:2015kha,Monahan:2017uby} have presented $N_f=2+1$ calculations of
the $B\to D$ form factor at non-zero recoil based on partially overlapping subsets of the same MILC asqtad ($a^2$ tadpole improved) ensembles.
The Fermilab/MILC calculation~\cite{Lattice:2015rga} uses configurations with four different lattice spacings and with pion masses in
the range $[260,670]$~MeV. The bottom and charm quarks are implemented in the Fermilab approach. The form factors $f_{+,0}^{B\to D}(w)$
are extracted from double ratios of three point functions up to a matching factor which is calculated at 1-loop in lattice perturbation
theory.
The results are presented in terms of three synthetic data points which can be subsequently fitted using any form factor
parametrization.
The systematic uncertainty due to the joint continuum-chiral extrapolation is about 1.2\% and dominates the error budget.
The HPQCD calculations~\cite{Na:2015kha,Monahan:2017uby} rely on ensembles with two different lattice spacings and two or three light-quark
mass values, respectively.
The treatment of heavy quarks is different from that used in the Fermilab/MILC papers: the bottom quark is described in NRQCD and the
charm quark using HISQ.
The form factors are extracted from appropriate three-point functions and the results are presented in terms of the parameters of a
modified BCL $z$~expansion that incorporates dependence on lattice spacing and light-quark masses into the expansion coefficients.
In order to combine the Fermilab/MILC and HPQCD results~\cite{Aoki:2019cca}, it is necessary to generate a set of synthetic data which
is (almost exactly) equivalent to the HPQCD calculation.
The two sets of synthetic data can then be combined while taking into account the correlation due to the fact that Fermilab/MILC and HPQCD
share MILC asqtad configurations.
As mentioned above, dominant uncertainties are of systematic nature, implying that this correlation (whose estimate is rather uncertain)
is a subdominant effect.
A simultaneous fit of Fermilab/MILC and HPQCD synthetic data together with the available Belle and Babar data
yields a determination of $|V_{cb}|$ with an overall 2.5\%
uncertainty (dominated by the experimental error which contributes about 2\% to the total error).
Finally, both collaborations present values for both the $f_+$ and $f_0$ form factors, which allows for a lattice-only calculation of the
SM prediction for $R(D)$.
The uncertainty on the Fermilab/MILC and HPQCD combined determination of $R(D)$, without experimental input, is about 2.5\% and is
negligible compared to current experimental errors.
The advantage of an approach in which currents can be nonperturbatively normalized has
been demonstrated by HPQCD for $B_s \rightarrow D_s$ form factors in~\cite{McLean:2019qcx}. They use the HISQ action for all quarks, extending the method developed for decay constants. The range of heavy quark masses can be increased on successively finer lattices (keeping the value in lattice units below 1) until the full range from $c$ to $b$ is reached. The full $q^2$ range of the decay can also be covered by this method since the spatial momentum
of the final state meson (which should also be less than 1 in lattice units) grows in step with
the heavy meson/quark mass. Results from~\cite{McLean:2019qcx} improve on the
uncertainties obtained in~\cite{Monahan:2017uby} with NRQCD $b$ quarks and this
promising all-HISQ approach is now being extended to other processes. It is interesting to
observe that the $B_s\to D_s$ form factors are very close to the $B\to D$ form factors over
the entire kinematic range, see also \cite{Kobach:2019kfb,Bordone:2019guc}.
Calculations of $B\to D^*$ form factors at non-zero recoil are considerably more involved due to difficulties in describing the resonant
$D^*\to D \pi$ decay. Up to now, lattice QCD simulations have focused on the single
$B \rightarrow D^*$ form factor that contributes to the rate at zero
recoil, $A_1(q^2_{max})$. The quantity generally quoted is
$h_{A_1}(1)$ where
\begin{equation}
h_{A_1}(1) = \frac{M_B+M_{D^*}}{2\sqrt{M_BM_{D^*}}} A_1(q^2_\text{max}).
\end{equation}
The combination of the lattice QCD result and the experimental rate, extrapolated to zero recoil, yields a value for~$V_{cb}$.
The Fermilab Lattice/MILC Collaborations have achieved the highest precision for this result so far \cite{Bailey:2014tva}.
They use improved Wilson quarks within
the Fermilab approach for both $b$ and $c$ quarks and work on gluon field
configurations that include $u/d$ (with equal mass) and $s$ quarks in
the sea ($n_f=2+1$) using the asqtad action. By taking a ratio of three-point correlation
functions they are simultaneously able to improve their statistical
accuracy and reduce part of the systematic uncertainty from the normalization
of their current operator.
Their result is $h_{A_1}(1)=0.906(4)(12)$ where the uncertainties are
statistical and systematic respectively. Their systematic error is dominated
by discretization effects. They take the systematic uncertainty from
missing higher-order terms in the perturbative current matching~\cite{Monahan:2012dq} to
be $0.1\alpha_s^2$.
The HPQCD collaboration have calculated $h_{A_1}(1)$ on gluon field
configurations that include $n_f=2+1+1$ HISQ sea quarks using NRQCD
$b$ quarks and HISQ $c$ quarks \cite{Harrison:2017fmw}. Their result, $h_{A_1}(1) = 0.895(10)(24)$
has a larger uncertainty, dominated by the systematic uncertainty of
$0.5\alpha_s^2$ allowed for in the current matching.
They were also able to calculate the equivalent result for
$B_s \rightarrow D_s^*$, obtaining $h^s_{A_1}(1) = 0.879(12)(26)$ and
demonstrating that the dependence on light quark mass is small.
The $B_s \rightarrow D_s^*$ provides a better lattice QCD comparison
point than $B \rightarrow D^*$ because it has less sensitivity to
light quark masses (in particular the $D^*D\pi$ ``cusp'') and to the volume.
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth]{fermilab_nrqcd_data_v2.pdf}}
\caption{ Plot taken from Ref.~\cite{McLean:2019sds} showing
the comparison of lattice QCD results for
$h_{A_1}(1)$ (left side) and $h^s_{A_1}(1)$ (right side).
Raw results for $h_{A_1}(1)$ are
from~\cite{Harrison:2017fmw} and~\cite{Bailey:2014tva}
and are plotted as a function of the valence
(= sea) light quark mass, parametrized by $M_{\pi}^2$.
On the right are points for $h^s_{A_1}(1)$ from~\cite{Harrison:2017fmw}
plotted at the appropriate valence mass for the $s$ quark, but
obtained at physical sea light quark masses. The final result for
$h_{A_1}(1)$ from~\cite{Bailey:2014tva}, with its full
error bar,
is given by the inverted blue triangle. The inverted red triangles
give the final results for $h_{A_1}(1)$ and $h^s_{A_1}(1)$
from~\cite{Harrison:2017fmw}. The HPQCD results of \cite{McLean:2019sds} are given by the black stars.}
\label{fig:hA1-comparison}
\end{figure}
More recently the HPQCD collaboration have used the HISQ action
for all quarks, with a fully nonperturbative current normalization,
to determine $h^s_{A_1}(1)$ \cite{McLean:2019sds}. Their result,
$h^s_{A_1}(1) = 0.9020(96)(90)$ agrees well
with the earlier results and has smaller systematic uncertainties.
Figure~\ref{fig:hA1-comparison} compares the three results.
The importance of being able to compare lattice QCD and experiment away
from the zero-recoil point is now clear, and several lattice QCD calculations
are underway that attempt to cover the full $q^2$ range of the decay and
all four form factors.
This includes calculations for $B \rightarrow D^*$ from JLQCD~\cite{Kaneko:2019vkx} with M\"{o}bius domain-wall quarks, Fermilab/MILC~\cite{Vaquero:2019ary}
(see also talk at Lattice 2019) with improved Wilson/Fermilab quarks and LANL/SWME with an improved version of this formalism known as the Oktay-Kronfeld action~\cite{Bhattacharya:2020xyb}. Calculations for other $b\to c$ pseudoscalar-to-vector form factors, $B_s \rightarrow D^*_s$~\cite{McLean:2019jll} and $B_c \rightarrow (J/\psi,\eta_c)$ are also underway from
HPQCD~\cite{Lytle:2016ixw,Colquhoun:2016osw} using the all-HISQ approach. At the same time further $B \rightarrow D$ and $B_s \rightarrow D_s$ form factor calculations are in progress, including those using a variant of the Fermilab approach known as Relativistic Heavy Quarks on RBC/UKQCD configurations~\cite{Flynn:2019jbg}.
In future we should be able to compare results from multiple actions with experiment
for improved accuracy in determining~$|V_{cb}|$.
\subsubsection{$\Lambda_b \to \Lambda_c^{(*)}$ form factors from lattice QCD}
\label{sec:LbLcLattice}
The $\Lambda_b \to \Lambda_c$ form factors have been calculated with $2+1$ dynamical quark flavors; the vector
and axial vector form factors can be found in Ref.~\cite{Detmold:2015aaa}, while the tensor form factors (which
contribute to the decay rates in many new-physics scenarios) were added in Ref.~\cite{Datta:2017aue}. This calculation used two different lattice spacings of approximately 0.11~fm and 0.08~fm,
sea quark masses corresponding to pion masses in the range from 360 down to 300~MeV, and valence quark masses corresponding to pion masses in the range
from 360 down to 230~MeV. The lattice data for the form factors, which
cover the kinematic range from near $q^2_{\rm max}\approx 11\:{\rm GeV}^2$ down to $q^2\approx 7\:{\rm GeV}^2$,
were fitted with a modified version of the BCL $z$ expansion~\cite{Bourrely:2008za} discussed in Sec.~\ref{sec:param}, in which, simultaneously with the expansion in $z$, an expansion in powers of the lattice spacing and quark masses is performed.
No dispersive bounds were used in the $z$ expansion here (this is something that can perhaps be improved in the future, see also Sec.~\ref{sec:z-remarks}).
The form factors extrapolated to the continuum limit and physical pion mass yield the following Standard Model predictions:
\begin{equation}
\frac{1}{|{V_{cb}}|^2}\Gamma (\Lambda_b \to \Lambda_c\: \mu^- \bar{\nu}_\mu)
= (21.5 \:\pm\: 0.8_{\,\rm stat} \:\pm\: 1.1_{\,\rm syst})\:\:{\rm ps}^{-1}
\end{equation}
for the fully integrated decay rate, which has a total uncertainty of 6.3\% (corresponding to a 3.2\% theory uncertainty in a possible $|V_{cb}|$ determination from this decay rate),
\begin{equation}
\frac{1}{|{V_{cb}}|^2}\int_{7\:{\rm GeV}^2}^{q^2_{\rm max}}
\frac{\mathrm{d}\Gamma (\Lambda_b \to \Lambda_c\: \mu^- \bar{\nu}_\mu)}{\mathrm{d}q^2} \mathrm{d} q^2
= (8.37 \:\pm\: 0.16_{\,\rm stat} \:\pm\: 0.34_{\,\rm syst})\:\:{\rm ps}^{-1} \label{eq:LbLcPartialRate}
\end{equation}
for the partially integrated decay rate, which has a total uncertainty of 4.5\% (corresponding to 2.3\% for $|V_{cb}|$), and
\begin{equation}
R(\Lambda_c)=\frac{\Gamma (\Lambda_b \to \Lambda_c\: {\tau^- \bar{\nu}_\tau})}{\Gamma (\Lambda_b \to \Lambda_c\: {\mu^- \bar{\nu}_\mu})} \:=\: 0.3328 \:\pm \:0.0074_{\,\rm stat} \:\pm\: 0.0070_{\,\rm syst}
\end{equation}
for the lepton-flavor-universality ratio, which has a total uncertainty of 3.1\%. The systematic uncertainties of the vector and axial vector form factors are dominated
by finite-volume effects and the chiral extrapolation. Both of these can be reduced substantially in the future by adding a new lattice gauge field ensemble
with physical light-quark masses and a large volume, and dropping the ``partially quenched'' data sets that have $m_\pi^{(\mathrm{val})}<m_\pi^{(\mathrm{sea})}$.
Adding another ensemble at a third, finer lattice spacing will also be beneficial to better control the continuum extrapolation.
At this workshop, there was some discussion about the validity of the modified $z$ expansion; it has been argued that it would be safer to first
perform chiral/continuum extrapolations and then perform a secondary $z$ expansion fit. This is expected to make a difference mainly if nonanalytic
quark-mass dependence from chiral perturbation theory is included. However, the fits used in Ref.~\cite{Detmold:2015aaa} for the $\Lambda_b$ form factors were analytic in the lattice spacing
and light-quark mass. Note that the shape of the $\Lambda_b \to \Lambda_c\: \mu^- \bar{\nu}_\mu$ differential decay rate was later measured by LHCb, and found to
be in good agreement with the lattice QCD prediction all the way down to $q^2=0$~\cite{Aaij:2017svr}.
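Since the rate scales as $|V_{cb}|^2$, the relative theory uncertainty on $|V_{cb}|$ is half that on the rate; a one-line check using the numbers of Eq.~(\ref{eq:LbLcPartialRate}) reproduces the percentages quoted above:
\begin{verbatim}
import math

# Rate ~ |Vcb|^2, so the relative error on |Vcb| is half that on the rate.
rate, err = 8.37, math.hypot(0.16, 0.34)   # ps^-1, Eq. (LbLcPartialRate)
print(err / rate)          # ~0.045: 4.5% total uncertainty on the rate
print(0.5 * err / rate)    # ~0.022: the ~2.3% quoted for |Vcb|
\end{verbatim}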
Motivated by the prospect of an LHCb measurement of $R(\Lambda_c^*)$, work is now also underway to compute the $\Lambda_b \to \Lambda_c^{*}$ form factors in lattice QCD, for the $\Lambda_c^*(2595)$ and $\Lambda_c^*(2625)$, which have
$J^P=\frac12^-$ and $J^P=\frac32^-$, respectively. Preliminary results were shown at the workshop. For these form factors, the challenge is that, to project the $\Lambda_c^*$ interpolating field
exactly to negative parity and avoid contamination from the lower-mass positive parity states, one needs to perform the lattice calculation in the $\Lambda_c^*$ rest frame. With the $b$-quark action currently in use,
discretization errors growing with the $\Lambda_b$ momentum then limit the accessible kinematic range to a small region near $q^2_{\rm max}$. To predict $R(\Lambda_c^*)$,
it will be necessary to combine the lattice QCD results for the form factors in the high-$q^2$ region with heavy-quark effective theory and LHCb data for the shapes of
the $\Lambda_b \to \Lambda_c^*\, \mu^-\bar{\nu}_\mu$ differential decay rates~\cite{Boer:2018vpx}.
\subsection{Measurements of $B \to D^{(*)} \ell \nu$ and related processes \label{ssec::exp}}
\subsubsection{Measurements with light leptons}
The decays $B\to D^*\ell\nu$ and $B\to D\ell\nu$ have been measured at Belle and BaBar as well as at older experiments (CLEO, LEP). Unfortunately, most of these measurements assume the Caprini-Lellouch-Neubert parametrization of the form factors (see Sec.~\ref{sec:param}) and report results in terms of $|V_{cb}|$ times the only form factors relevant at the zero-recoil point $w=1$, namely $\mathcal{F}(1)\equiv h_{A_1}(1)$ for $B\to D^*\ell\nu$ and $\mathcal{G}(1)\equiv \big(2\sqrt{M_D M_B}/(M_D + M_B)\big)f_+(w{=}1)$ for $B\to D\ell\nu$,
and of the other CLN parameters, instead of a general form of the $z$~expansion or the raw spectra. The Heavy Flavor Averaging Group (HFLAV) has performed an average of these CLN measurements~\cite{Amhis:2019ckw} and reports
\begin{eqnarray}
\eta_\mathrm{EW}\mathcal{F}(1)|V_{cb}| & = & (35.27\pm 0.11(\rm stat)\pm 0.36(\rm syst))\times 10^{-3}~,
\label{eq:b2dstar}\\
\eta_\mathrm{EW}\mathcal{G}(1)|V_{cb}| & = & (42.00\pm 0.45(\rm stat)\pm 0.89(\rm syst))\times 10^{-3}~. \label{eq:b2d}
\end{eqnarray}
Notice that Eq.~(\ref{eq:b2dstar}) together with $h_{A_1}(1)=0.904(12)$ \cite{Aoki:2019cca}
leads to the low value $|V_{cb}|=38.76(69)\times 10^{-3}$.
Eq.~(\ref{eq:b2d}) together with $\mathcal{G}(1)=1.0541(83)$ \cite{Lattice:2015rga} leads to a consistent result $|V_{cb}|=39.58(99)\times 10^{-3}$.
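The arithmetic behind these numbers is simple to reproduce. The sketch below uses the values quoted above together with an assumed electroweak factor $\eta_\mathrm{EW}\approx 1.0066$ and naive Gaussian error propagation, which reproduces the central value and approximates the quoted uncertainty:
\begin{verbatim}
import math

prod  = 35.27e-3                           # eta_EW * F(1) * |Vcb|
dprod = math.hypot(0.11e-3, 0.36e-3)       # stat (+) syst in quadrature
hA1, dhA1 = 0.904, 0.012                   # F(1) = h_{A1}(1) from FLAG
eta_EW = 1.0066                            # assumed electroweak factor

Vcb  = prod / (hA1 * eta_EW)
dVcb = Vcb * math.hypot(dprod / prod, dhA1 / hA1)
print(f"|Vcb| = {1e3*Vcb:.2f} +/- {1e3*dVcb:.2f} x 10^-3")
# -> ~38.76 +/- 0.66, to be compared with the quoted 38.76(69)
\end{verbatim}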
In the case of $B\to D\ell\nu$ one can also use
the existing lattice calculations at non-zero recoil \cite{Lattice:2015rga,Na:2015kha} to guide the extrapolation
to zero recoil, together with the $w$ spectrum measured by Belle \cite{Glattauer:2015teq}. In the BGL parametrization, this leads to
a higher value, $|V_{cb}|=40.83(1.13)\times 10^{-3}$, a more reliable determination than Eq.~(\ref{eq:b2d}).
In the following we will have a closer look at the most recent measurements by the various experiments.
{\bf Belle} has recently updated the untagged measurement of the $B^0\to D^{*-}\ell^+\nu$ mode~\cite{Waheed:2018djm}. While the new analysis is based on the same 711~fb$^{-1}$ Belle data set, the re-analysis takes advantage of a major improvement of the track reconstruction software, which was implemented in 2011, leading to a substantially higher slow pion tracking efficiency and hence to much larger signal yields than in the previous publication~\cite{Dungel:2010uk}. Again $D^{*+}$~mesons are reconstructed in the cleanest mode, $D^{*+}\to D^0\pi^+$ followed by $D^0\to K^-\pi^+$, combined with a charged, light lepton (electron or muon) and yields are extracted in 10 bins for each of the 4 kinematic variables describing the $B^0\to D^{*-}\ell^+\nu$~decay. These yields are published along with their full error matrix. The updated publication also contains an analysis of these yields using both the CLN and the BGL form factors (where BGL has only 5 free parameters).
The CLN analysis results in $\eta_\mathrm{EW}\mathcal{F}(1)|V_{cb}|=(35.06\pm 0.15(\rm stat)\pm 0.56(\rm syst))\times 10^{-3}$, while the BGL fit gives $\eta_\mathrm{EW}\mathcal{F}(1)|V_{cb}|=(34.93\pm 0.23(\rm stat)\pm 0.59(\rm syst))\times 10^{-3}$. The two results are thus fully consistent. This contrasts with a tagged measurement of $B^0\to D^{*-}\ell^+\nu$ first shown by Belle in November 2016~\cite{Abdesselam:2017kjf}. Analyzing the raw data of this measurement in terms of the CLN and BGL form factors gives a difference of almost two standard deviations in $|V_{cb}|$~\cite{Bigi:2017njr,Grinstein:2017nlq}. However, this result has remained preliminary and will not be published. A new tagged analysis, using an improved version of the hadronic tag, is now underway and should clarify the experimental situation.
{\bf Babar}
has presented a full four-dimensional angular analysis of $\bar{B}^0\to D^{*+}\ell^-\bar{\nu}_\ell$ decays, using both CLN and BGL parametrizations \cite{Dey:2019bgc}.
This analysis is based on the full data set of 450~fb$^{-1}$, and exploits the hadronic $B$-tagging approach.
The full decay chain $e^+e^-\to \Upsilon(4S)\to B_{\rm tag} B_{\rm sig}(\to D^*\ell\nu_\ell)$ is considered in a kinematic fit that includes constraints on the beam properties, the secondary vertices, the masses of $B_{\rm tag}$, $B_{\rm sig}$, $D^*$ and the missing neutrino. After applying requirements on the probability of the $\chi^2$ of this constrained fit, which is the main discriminating variable, the remaining background is only about $2\%$ of the sample. The resolution on the kinematic variables is about a factor five better than the one possible with untagged measurements. The shape of the form factors is extracted using an unbinned maximum likelihood fit where the signal events are described by the four dimensional differential decay rate.
The extraction of $|V_{cb}|$ is performed indirectly by adding to the likelihood the constraint on the integrated rate, $\Gamma=\mathcal{B}/\tau_B$, where $\mathcal{B}$ is the $B\to D^*\ell\nu$ branching fraction and $\tau_B$ is the $B$-meson lifetime. The values of these external inputs are taken from HFLAV \cite{Amhis:2019ckw}.
The final result, using $h_{A_1}(1)$ from \cite{Bailey:2014tva},
is $|V_{cb}|=(38.36\pm 0.90)\times 10^{-3}$ with a 5-parameter BGL version and
$|V_{cb}|=(38.40\pm0.84)\times 10^{-3}$ in the CLN case, both compatible with the above
HFLAV average. Nevertheless, the individual form factors show significant deviations
from the world average CLN determination by HFLAV.
{\bf LHCb} has extracted $V_{cb}$ from semileptonic $B_s^0$ decays for the first
time~\cite{Aaij:2020hsi}. The measurement uses both $B_s^0\rightarrow
D_s^{-}\mu^+\nu_{\mu}$ and $B_s^0\rightarrow D_s^{*-}\mu^+\nu_{\mu}$ decays using
$3$~fb$^{-1}$ collected in 2011 and 2012.
The value of $|V_{cb}|$ is determined from the observed yields of $B_s^0$ decays
normalized to those of $B^0$ decays after correcting for the relative reconstruction
and selection efficiencies. The normalization channels are $B^0\to D^-\mu^+\nu_{\mu}$
and $B^0\to D^{*-}\mu^+\nu_{\mu}$ with the $D^-$ reconstructed with the same decay
mode as the $D_s^-$ ($D_{(s)}^-\to [K^+K^-]_{\phi}\pi^-$) to minimize the systematic
uncertainties.
The shapes of the form factors are extracted as well, exploiting the kinematic
variable $p_{\perp}(D_s)$ which is the component of the $D_s^-$ momentum
perpendicular to the $B_s^0$ flight direction. This variable is correlated with $q^2$. In this analysis both the CLN parametrization and a 5-parameter version of BGL have been
used. The results for $V_{cb}$ are
\begin{eqnarray}
|V_{cb}|_{CLN}&=&(41.4\pm0.6(\rm stat)\pm 0.9(\rm syst)\pm 1.2(\rm ext))\times 10^{-3}\nonumber\\ \nonumber
|V_{cb}|_{BGL}&=&(42.3\pm0.8(\rm stat)\pm 0.9(\rm syst)\pm 1.2(\rm ext))\times 10^{-3},
\end{eqnarray}
where the first uncertainty is statistical, the second systematic, and the third due to the limited knowledge of the external inputs, in particular the $B_s^0$ to $B^0$ production ratio $f_s/f_d$, which is known with an uncertainty of about $5\%$. The results are compatible with both the inclusive and exclusive determinations. Although not competitive with the results obtained at the $B$~factories, the novel approach used here can be extended to semileptonic $B^0$ decays.
\subsubsection{Past measurements of $R(D)$ and $R(D^*)$}
$R_{D}$ and $R_{D^*}$ are defined as the ratios of the semileptonic decay widths of $B_d$ and $B_u$ mesons to a $\tau$ lepton and its associated neutrino $\nu_\tau$ over the corresponding decay widths to a light lepton.
A summary of the currently available measurements of $R_{D}$ and $R_{D^*}$ is presented in Table~\ref{tab_H2H_RDexp}, showing the yields of $B$ signal and $B$ normalization decays and the stated uncertainties. The data were collected by the BaBar and Belle experiments at $e^+e^-$ colliders operating at the $\Upsilon(4S)$ resonance, which decays exclusively to pairs of $B^+B^-$ or $B^0\bar B^0$ mesons. The LHCb experiment operates at the high-energy $pp$ collider at CERN at center-of-mass energies of 7 and 8~TeV, where pairs of $b$-hadrons (mesons or baryons) are produced along with a large number of other charged and neutral particles. While the maximum production rate of $\Upsilon(4S)\to B\bar B$ events has been about 20~Hz, the rates observed at LHCb exceed 100~kHz.
\begin{table}[t]
\centering
\begin{tabular}{|lll|cc|c|}\hline
Experiment & tag & $\tau$ decay & N(D$\tau\nu_{\tau}$) & N$_{norm}$ & R(D) \\ \hline
Babar \cite{Lees:2012xj,Lees:2013uzd} & Had.& $\ell\nu_\ell\nu_\tau$ & $489 \pm 63$ & $2891 \pm 65$ & $0.440 \pm 0.058 \pm 0.042$ \\
Belle \cite{Huschle:2015rga} & Had.& $\ell\nu_\ell\nu_\tau$ & $320 \pm 55$ & $3147 \pm 72$ & $0.375 \pm 0.064 \pm 0.026$ \\
Belle \cite{Abdesselam:2019dgh} & SL & $\ell\nu_\ell\nu_\tau$ & $1778 \pm 204$& $22896 \pm 471$ & $0.307 \pm 0.037 \pm 0.016$ \\ \hline
\multicolumn{5}{|c|}{HFLAV} & $0.340 \pm 0.027 \pm 0.013$ \\
\multicolumn{5}{|c|}{Theory} & $0.299 \pm 0.003$ \\ \hline
Experiment & tag & $\tau$ decay & N(D$^* \tau\nu_{\tau}$) & N$_{norm}$ & R(D$^*$) \\ \hline
Babar \cite{Lees:2012xj,Lees:2013uzd} & Had.& $\ell\nu_\ell\nu_\tau$ & $888 \pm 63$ & $11953 \pm 122$ & $0.332 \pm 0.024 \pm 0.018$ \\
Belle \cite{Huschle:2015rga} & Had.& $\ell\nu_\ell\nu_\tau$ & $503 \pm 65$ & $3797 \pm 74$ & $0.293 \pm 0.038 \pm 0.015$ \\
Belle \cite{Hirose:2016wfn,Hirose:2017dxl} & Had.& $\pi\nu_\tau, \rho\nu_\tau$ & $298 \pm 29$ & $7213 \pm 96$ & $0.270 \pm 0.035 \pm 0.028$ \\
LHCb \cite{Aaij:2015yra} & - & $\mu\nu_\mu\nu_\tau$ & 16480 & 363000 & $0.336 \pm 0.027 \pm 0.030$ \\
LHCb \cite{Aaij:2017uff,Aaij:2017deq} & - & $\pi\pi\pi\nu_\tau$ & 1273 & 17660 & $0.280 \pm 0.018 \pm 0.029$ \\
Belle \cite{Abdesselam:2019dgh} & SL & $\ell\nu_\ell\nu_\tau$ & $651 \pm 46$& $16942 \pm 148$& $0.283 \pm 0.018 \pm 0.014$ \\ \hline
\multicolumn{5}{|c|}{HFLAV} & $0.295 \pm 0.011 \pm 0.008$ \\
\multicolumn{5}{|c|}{Theory} & $0.253 \pm 0.005$ \\ \hline
\end{tabular}
\caption{Summary of $R_D$ and $R_{D^*}$ measurements and theoretical predictions. The number of observed signal and normalization events is also reported. The normalization channel is B$\to$D$^{(*)}\ell\nu_{\ell}$ for all measurements but the LHCb one with three-prong $\tau$ decays, where the normalization channel is B$\to$ D$^*\pi\pi\pi$. The latter LHCb measurement has been updated using
the latest HFLAV average for ${\cal{B}}(B\to D^*\ell\nu_\ell)$. The quoted theory predictions are arithmetic averages of the values reported in Table \ref{tab:RD-RDstar} below; they are given for illustration only and do not imply endorsement by the authors of the calculations. }
\label{tab_H2H_RDexp}
\end{table}
Currently we have only two measurements~\cite{Lees:2012xj,Lees:2013uzd,Huschle:2015rga} of the ratios $R_{D}$ and $R_{D^*}$ based on two distinct samples of hadronic tagged $B\bar B$ events with signal $B\to D\tau\nu_\tau$ and $B\to D^*\tau\nu_\tau$ decays and purely leptonic tau decays, $\tau^- \to e^-\bar\nu_e\nu_\tau$ or $\tau^- \to \mu^-\bar{\nu}_\mu\nu_\tau$. In addition, there is a measurement from Belle~\cite{Hirose:2016wfn,Hirose:2017dxl} of $R_{D^*}$ with hadronic tags and a semileptonic one-prong $\tau$ decay ($\tau^-\to\pi^-\nu_\tau$ or $\tau^-\to\rho^-\nu_\tau$).
A Belle measurement~\cite{Abdesselam:2019dgh} of $R_{D}$ and $R_{D^*}$ with semi-leptonic tags and purely leptonic $\tau$ decays appeared recently, superseding a
previous measurement~\cite{Sato:2016svk} of $R_{D^*}$ obtained with the same technique.
At LHCb only decays of neutral $B$ mesons producing a charged $D^*$ meson and a muon of opposite charge are selected, with a single decay chain $D^{*+}\to D^0 (\to K^-\pi^+) \pi^+$.
LHCb published two measurements~\cite{Aaij:2015yra,Aaij:2017uff,Aaij:2017deq} of $R_{D^*}$, the first relying on purely leptonic $\tau$ decays and normalized to the $B^0 \to D^{*+}\mu^-\bar \nu_\mu$ decay rate, and the more recent one using 3-prong semileptonic $\tau$ decays, $\tau^-\to\pi^+\pi^-\pi^- \nu_\tau$ and normalization to the decay $B^0 \to D^{*+}\pi^+\pi^-\pi^-$.
This LHCb measurement extracts directly the ratio of branching fractions ${\cal K}(D^*)={\cal B}(B^0 \to D^{*+}\tau^-\bar \nu_\tau)/{\cal B}(B^0 \to D^{*+}\pi^+\pi^-\pi^-)$. The ratio ${\cal K}(D^*)$ is then converted to $R(D^*)$ by using the known branching fractions of $B^0 \to D^{*+}\pi^+\pi^-\pi^-$ and $B^0 \to D^{*+}\mu^-\bar \nu_\mu$.
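Explicitly,
\begin{equation}
R(D^*) = {\cal K}(D^*)\times\frac{{\cal B}(B^0 \to D^{*+}\pi^+\pi^-\pi^-)}{{\cal B}(B^0 \to D^{*+}\mu^-\bar \nu_\mu)}\,.
\end{equation}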
BaBar and Belle analyses rely on the large detector acceptance to detect and reconstruct all final state particles from the decays of the two B mesons, except for the neutrinos. They exploit the kinematics of the two-body $\Upsilon(4S)$ decay and known quantum numbers to suppress non-$B\bar B$ and combinatorial
backgrounds. They differentiate the signal decays involving two or three missing neutrinos from decays involving a low mass charged lepton, an electron or muon, plus an associated neutrino.
LHCb isolates the signal decays from very large backgrounds by exploiting the relatively long $B$ decay lengths, which allow for a separation of the charged particles originating from the $B$ and charm decay vertices from the many others produced at the $pp$ collision point. There are insufficient kinematic constraints, and therefore the total $B$ meson momentum is estimated from its transverse momentum, degrading the resolution of kinematic quantities like the missing mass and the momentum transfer squared $q^2$. Also, the production of $D^{*+} D_s^-$ pairs with the decay $D_s^-\to \tau^-\bar \nu_\tau$ leads to sizable background in the signal sample.
The summary in Table~\ref{tab_H2H_RDexp} indicates that the results are mutually compatible. For BaBar and Belle the systematic uncertainties are comparable for $R_{D^*}$,
while Belle systematic uncertainties are smaller for $R_{D}$.
However the differences in the signal yield and the background suppression lead to smaller statistical errors for BaBar. The Belle measurements based on semileptonic tagged samples result in a 50\% smaller signal yield than for the hadronic tag samples.
For the two LHCb measurements, the event yields exceed the BaBar yields by close to a factor of 20, but the relative statistical errors on $R_{D^*}$ are comparable to BaBar, and the systematic uncertainties are larger by a factor of 2.
\subsubsection{Lessons learned}
All currently available measurements are limited by the difficulty of separating the signal from large backgrounds from many sources, leading to sizable statistical and systematic uncertainties. The measurement of ratios of two $B$ decay rates with very similar, if not identical, final-state particles significantly reduces the systematic uncertainties due to detector effects and tagging efficiencies, and also those from uncertainties in the kinematics due to form factors and branching fractions. For all three experiments the largest systematic uncertainties are attributed to the limited size of the MC samples, the fraction and shapes of various backgrounds, especially from decays involving higher mass charm states, and uncertainties in the relative efficiency of signal and normalization, the efficiency of other backgrounds, as well as lepton mis-identification. Though the total number of $B\bar B$ events of the full Belle data set exceeds the one for BaBar by 65\%, the BaBar signal yield for $B \to D^{(*)} \tau\nu_\tau$ exceeds Belle by 67\% due to differences in event selection and fit procedures.
While the use by Belle of semileptonic B decays as tags for $B\bar B$ events benefits from the fewer decay modes with higher BFs, the presence of a neutrino in the tag decays results in the loss of stringent kinematic constraints. The resulting signal yields are lower by 50\% compared to hadronic tags, and the backgrounds are much larger. The use of the ECL, namely the sum of the energies of the excess photons in a tagged event, in the fit to extract the signal yield is somewhat problematic, since it includes not only the photons left over from incorrectly reconstructed $B \bar B$ events, but also photons emitted from the high intensity beams. As a result the signal contributions are difficult to separate from the very sizable backgrounds.
\subsubsection{ Outlook for $R(D)$ and $R(D^*)$}
Belle II and the upgraded LHCb are expected to collect large data samples with considerably improved detector performances. This should lead to much reduced detector related uncertainties, higher signal fractions, and opportunities to measure many related processes. The goal is to push the sensitivity of many measurements of critical variables and distributions beyond theory uncertainties and thereby increase the sensitivity to non-Standard Model processes.
Currently there are only two measurements of the ratio $R_{D}$, one each by BaBar and Belle, based on two distinct samples of hadronic tagged $B \bar B$ events for the signal $B\to D\tau\nu_\tau$ and $B\to D^* \tau\nu_\tau$ decays. The decay $B\to D\tau\nu_\tau$ is dominated by a P-wave, whereas in $B\to D^* \tau\nu_\tau$ S, P, and D waves contribute and the impact of new physics contributions is expected to be smaller. A contribution of a hypothetical charged Higgs would result in an S-wave for $B\to D\tau\nu_\tau$ and a P-wave for $B\to D^*\tau\nu_\tau$; thus measurements of the angular distributions and the polarization of the $\tau$ lepton or the $D$ and $D^*$ mesons will be important. Such measurements would of course also serve as tests of other hypotheses, for instance contributions from leptoquarks.
The studies for many decay modes, the detailed kinematics of the signal events, the four-momentum transfer $q^2$, the lepton momentum, the angles and momenta of $D$ and $D^*$ and the $\tau$ spin should be extended to perform tests for potential new physics contributions.
Belle II will benefit from major upgrades to all detector components, except for the barrel sections of the calorimeter and the muon detector. In addition, new data acquisition and analysis software is being developed to benefit from the very high data rates and improved detector performance.
Upgrades to the precision tracking and lepton identification, especially at lower momenta, are expected to significantly improve the mass resolution and purity of the signal samples. This should also improve the detector modeling of efficiencies for signal and backgrounds and fake rates that are the major contributions to the current systematic uncertainties. The much larger data rates should allow the choice of cleaner and more efficient $B \bar B$ tagging algorithms.
Major improvements to the MC simulation of signal and backgrounds will be needed. They require a much better understanding of all semileptonic $B$ decays contributing to signal and backgrounds, i.e., updated measurements of branching fractions and form factors and theoretical predictions, especially for backgrounds involving higher mass charm mesons, either resonances or states resulting from charm quark fragmentation. The fit to extract the signal yields could be improved by reducing the backgrounds, making use of fully 2D or 3D distributions of kinematic variables, and avoiding simplistic parametrizations.
The suppression of fake photons and $\pi^0$s needs to be scrutinized to avoid unnecessary signal loss and very large backgrounds for $D^{*0}$ decays. Shapes of distributions entering multi-variable methods to reduce the backgrounds should be scrutinized by comparisons with data or MC control samples, and any significant differences should be addressed. The use of ECL, the sum of the energies of all unassigned photons in an event, may be questionable, given the expected high rate of beam generated background.
The first study by Belle of the $\tau$ spin in $B\to D^*\tau\nu_\tau$ decays with $\tau^-\to\rho^-\nu_\tau$ or $\tau^-\to\pi^-\nu_\tau$ is very promising; it indicates that much larger and cleaner data samples will be needed. The systematic uncertainty of 11\% on the $R_{D^*}$ measurement is dominated by the hadronic $B$ decay composition (7\%) and the size of the MC sample \cite{Hirose:2017dxl}. The measured $\tau$ polarization of $P_\tau = -0.38\pm 0.51^{+0.21}_{-0.16}$ is entirely statistics dominated, and implies $P_\tau < 0.5$ at 90\% C.L.
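Schematically, for a one-prong mode the $\tau$ polarization is extracted from the distribution of the helicity angle $\theta_{\rm hel}$ of the daughter hadron,
\begin{equation}
\frac{1}{\Gamma}\frac{{\rm d}\Gamma}{{\rm d}\cos\theta_{\rm hel}} = \frac{1}{2}\left(1+\alpha\, P_\tau \cos\theta_{\rm hel}\right),
\end{equation}
where $\alpha$ is the sensitivity of the chosen hadronic $\tau$ decay mode; the limited analyzing power and signal yield explain the statistics-dominated uncertainty.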
Among the many other measurements Belle II is planning, ratios $R$ for both exclusive and inclusive semileptonic $B$ decays are of interest: in addition to $R_D$, $R_{D^{*}}$, and $R_{D^{**}}$ also $R_{X_c}$, as well as $R_\pi$ and $R_{X_u}$, which rely on unique capabilities of Belle II.
The LHCb detector is currently undergoing a major upgrade with the goal to switch to an all-software trigger and to be able to select and record data at rates of up to 100 kHz. Replacements of all tracking devices are planned, ranging from a radiation-hard pixel detector near the interaction region to scintillating fibers downstream. Improvements to electron and muon detection and a reduction in pion misidentification will be critical for the suppression of backgrounds, and should also allow rate comparisons for decays involving electrons or muons. LHCb relies on large data samples rather than MC simulation to assess signal efficiencies and, most importantly, the many sources of backgrounds and their suppression.
Several analyses are underway based on Run 1 and Run 2 data samples, and are benefiting from improved trigger capabilities. The first analysis based on 3-prong $\tau$ decays showed a clear separation of the $\tau$ decay vertex from both the $D$ and the proton interaction point, improving the signal purity to about 11\%, compared to 4.4\% for the purely leptonic 1-prong $\tau$ decay. This may therefore be the favored $\tau$ decay mode, and should also be tried for $B^+\to D^0\tau^+\nu_\tau$. Improved measurements of the branching fractions for normalization and the $\tau$ decays will be essential.
As a follow-up on the first LHCb measurement of $R_{D^*}$, a simultaneous fit to two disjoint $D^0 \mu^-$ and $D^{*+}\mu^-$ samples is in preparation, taking into account the large feed-down from $D^*$ decay present in the $D^0 \mu^-$ sample. As pointed out above, the decay $B^+\to D^0\tau^+\nu_\tau$ is more sensitive to new physics processes than $B^0\to D^{*-}\tau^+\nu_\tau$ and thus this analysis is expected to be very important to establish the excess in these decay modes and its interpretation. This analysis will benefit from the addition of dedicated triggers sensitive to $D^0 \mu^-$, $D^{*+}\mu^-$, $\Lambda_c^+ \mu$ and $D_s^{+}\mu$ final states.
LHCb is considering a series of other ratio measurements, among several $b \to c$ transitions ($B_s^0 \to D_s^- \tau^+\nu_\tau$, $B\to D^{**} \tau^+\nu_\tau$ and $\Lambda_b^0\to \Lambda_c^{(*)+} \tau^-\bar\nu_\tau$) and certain $b\to u$ transitions ($B^+\to\rho^0\tau^+\nu_\tau$, $B^+\to p\bar p \tau^+\nu_\tau$ and $\Lambda_b^0\to p\tau^-\bar\nu_\tau$), most of which will be challenging to observe and not trivial to normalize. The decay $\Lambda_b^0\to \Lambda_c^* \tau^-\bar\nu_\tau$ probes a different spin structure, and a precise measurement of $R_{\Lambda_c}$ would be of great interest for the interpretation of the excess of events in $R_{D}$.
The observation of the decay $B_c^-\to J/\psi (\to\mu^+\mu^-) \tau^- (\to \mu^-\bar\nu_\mu\nu_\tau) \bar\nu_\tau$ has recently been reported. It is a very rare process which is only observable at LHCb. The final state of 3 muons is a unique signature, though impacted by sizable backgrounds from hadron misidentification. The measured ratio $R_{J/\psi} = 0.71\pm 0.17 \pm 0.18$ has large uncertainties, dominated systematically by the signal simulation since the form factors are unknown.
\subsection{Extraction of $V_{cb}$ and predictions for $R_{D^{(*)}}$}
\label{sec:Vcb-RD-RDs}
The values of $V_{cb}$ extracted from inclusive and exclusive decays have been in tension for a long time~\cite{Amhis:2016xyh}.
In order to extract $V_{cb}$ from $B\rightarrow D^{(*)}l\nu$ data we need information on the form factors,
which is mostly provided by lattice QCD.
For the $B\rightarrow D$ form factors $f_{+,0}$ there are lattice results at $w\geq 1$~\cite{Lattice:2015rga,Na:2015kha,Aoki:2019cca}.
A fit to all the available experimental and lattice data of $B\rightarrow Dl\nu$ leads to \cite{Bigi:2016mdz}
\begin{align}
V_{cb} \cdot 10^3 &= 40.49(97)\,,
\end{align}
with $\chi^2/\mathrm{dof} = 19.0/22$. Similar results have been obtained in
\cite{Aoki:2019cca}.
For $B\rightarrow D^*$ at the moment there is only information on one of the four form factors at zero-recoil,
$A_1(w=1)$~\cite{Bailey:2014tva,Harrison:2017fmw},
however further developments look promising~\cite{Aviles-Casco:2017nge,Kaneko:2018mcr,Aviles-Casco:2019vin}.
At the other end of the $w$ or $q^2$ spectrum there are results available from
LCSR~\cite{Faller:2008tr,Gubernari:2018wyi}.
In view of the advanced experimental precision, a key question for the precise extraction of $V_{cb}$ and a
robust prediction of $R(D^{(*)})\equiv \mathcal{B}(B\rightarrow D^{(*)} \tau\nu)/\mathcal{B}(B\rightarrow D^{(*)}l\nu)$
is how large the theoretical uncertainties are.
For example, whenever relations such as (\ref{eq:ffexp}) are used, how large are HQET corrections beyond NLO, i.e.\ of
$O\left(\alpha_s^2,\Lambda^2_{\mathrm{QCD}}/m_{c,b}^2, \alpha_s \Lambda_{\mathrm{QCD}}/m_{c,b}\right)$ and how
accurate are the QCDSR results that are used at NLO?
A guideline for an answer to these questions can be provided by studying the size of NLO corrections in the HQET expansion and by a
comparison with corresponding available lattice results~\cite{Bigi:2017jbd}.
A definite answer, especially for the pseudoscalar form factor $P_1$, which is needed for the prediction of $R(D^*)$,
will be given only by future lattice results~\cite{Aviles-Casco:2017nge,Kaneko:2018mcr,Aviles-Casco:2019vin}.
In all experimental analyses prior to 2017, HQET relations have been employed in terms of a form of the CLN
parametrization~\cite{Caprini:1997mu} where theoretical uncertainties noted in Ref.~\cite{Caprini:1997mu} were set to zero by fixing
coefficients to definite numbers.
Moreover, the slope and curvature of $R_{1,2}(w)$ depend on the same underlying theoretical quantities as $R_{1,2}(1)$, which makes
the variation of the latter and fixing of the former inconsistent.
In future experimental analyses this has to be taken into account.
Recent preliminary Belle data~\cite{Abdesselam:2017kjf} allowed for a reappraisal of fits to $B\rightarrow D^*l\nu$ by several
groups~\cite{Bigi:2017njr,Bigi:2017jbd,Grinstein:2017nlq,Bernlochner:2017jka,Bernlochner:2017xyx,Jaiswal:2017rve,Harrison:2017fmw}.
For the first time, Ref.~\cite{Abdesselam:2017kjf} reported deconvoluted $w$ and angular distributions which are independent of the
parametrization.
This made it possible to test the influence of different parametrizations on the extracted value of $V_{cb}$.
Indeed, based on that data set the central values for $\vert V_{cb}\vert$ varied by up to 6\% between CLN and BGL
fits~\cite{Bigi:2017njr,Grinstein:2017nlq,Jaiswal:2017rve}. By floating some additional parameters of the less flexible CLN
parametrization, the agreement between BGL and CLN could be restored~\cite{Bigi:2017njr,Bernlochner:2017xyx}.
Furthermore, in the literature one could observe a correlation of smaller central values
for $V_{cb}$ with stronger HQET+QCDSR input~\cite{Abdesselam:2017kjf,Bigi:2017njr,Bigi:2017jbd,Grinstein:2017nlq,Bernlochner:2017jka,%
Bernlochner:2017xyx,Jaiswal:2017rve,Harrison:2017fmw}.
Recently, in addition to the tagged analysis of Ref.~\cite{Abdesselam:2017kjf}, a new untagged Belle analysis of $B\rightarrow D^*l\nu$
appeared~\cite{Abdesselam:2018nnh}. The new, more precise data brought the $|V_{cb}|$ central values of the CLN and BGL fits closer
together. However, in order to obtain a reliable error, it is necessary to employ the BGL parametrization with a sufficient number of
coefficients rather than the CLN parametrization. Including the new data, Ref.~\cite{Gambino:2019sif} obtains
\begin{align}
V_{cb} \cdot 10^3 &= 39.6\left(^{+1.1}_{-1.0}\right)\,, \label{eq:Vcb}
\end{align}
with a $\chi^2/\mathrm{dof} = 80.1/72$.
The inclusion of LCSRs or strong unitarity constraints, where input from HQET is used in a conservative way, basically does not change
the fit result~\cite{Gambino:2019sif}. The $V_{cb}$ value in Eq.~(\ref{eq:Vcb}) differs by $1.9\sigma$ from the inclusive result.
\begin{table*}[t]
\centering
\begin{tabular}{ccc}\hline\hline
Ref. & $R(D)$ & Exp. deviation \\\hline
\cite{Bigi:2016mdz} & $0.299(3)$ & $1.4\sigma$ \\
\cite{Bernlochner:2017jka} & $0.299(3)$ & $1.4\sigma$ \\
\cite{Jaiswal:2017rve} & $0.302(3)$ & $1.3\sigma$ \\
\cite{Bordone:2019vic} & $0.297(3)$ & $1.4\sigma$ \\\hline\hline
\end{tabular}
\qquad
\begin{tabular}{ccc}\hline\hline
Ref. & $R(D^*)$ & Exp. deviation \\\hline
\cite{Bernlochner:2017jka} & $0.257(3)$ & $2.7\sigma$ \\
\cite{Gambino:2019sif} & $0.254\left(^{7}_{6}\right)$ & $2.7\sigma$ \\
\cite{Jaiswal:2020wer} & $0.251\left(^{4}_{5}\right)$ & $3.1\sigma$ \\
\cite{Bordone:2019vic} & $0.250(3)$ & $3.2\sigma$ \\\hline\hline
\end{tabular}
\caption{Recent theory predictions for $R(D^{(*)})$.
The deviations are calculated from the HFLAV spring 2019 updates
$R(D)^{\mathrm{exp}} = 0.340(27)(13)$~\cite{Amhis:2019ckw,Lees:2012xj,Lees:2013uzd,Huschle:2015rga,Abdesselam:2019dgh} and
$R(D^*)^{\mathrm{exp}} = 0.295(11)(8)$
\cite{Amhis:2019ckw,Lees:2012xj,Lees:2013uzd,Huschle:2015rga,Sato:2016svk,Aaij:2015yra,Hirose:2016wfn,Hirose:2017dxl,Aaij:2017uff,Aaij:2017deq,Abdesselam:2019dgh},
respectively.
For older predictions see Refs.~\cite{Fajfer:2012vx,Celis:2012dk,Tanaka:2012nw}.
Table adapted and extended from Ref.~\cite{Schacht:2017vfd}.}
\label{tab:RD-RDstar}
\end{table*}
The shortcomings of the CLN parametrization have been addressed in several recent articles \cite{Bernlochner:2017jka,Jaiswal:2017rve,Jung:2018lfu,Bordone:2019vic,Bordone:2019guc}:
varying the coefficients of the HQE consistently allows for a simultaneous description of the available experimental and lattice data in $B\to D$, while the parametrization
dependence in the extraction of $V_{cb}$ from Ref.~\cite{Abdesselam:2017kjf} remains \cite{Bernlochner:2017jka}. Including additionally contributions at $O(1/m_c^2)$
and higher orders in the $z$ expansion, the extracted values for $V_{cb}$ using the BGL parametrization and the HQE become compatible \cite{Bordone:2019vic}.
For the above reasons, older HFLAV averages, which are based on the CLN parametrization, should not be employed in future analyses, with the
exception of the total branching ratios, whose parametrization dependence is expected to be negligible. The two most recent experimental analyses of $\bar{B}_{(s)} \rightarrow D_{(s)}^*l^-\bar{\nu}_l$
\cite{Dey:2019bgc,Aaij:2020hsi}
present results obtained in both CLN and a simplified version of the BGL parametrization. Neither analysis observed a sizeable parametrization dependence, but the two obtained very different values of $V_{cb}$ from each other. However, they did not provide data in a format that allows for
independent reanalyses.
For the lepton flavor nonuniversality observables $R(D^{(*)})$
we list a few recent theoretical predictions in Table~\ref{tab:RD-RDstar}.
Predictions for further lepton flavor non-universality observables of underlying $b\rightarrow cl\nu$ transitions can be found in Refs.~\cite{Bernlochner:2018bfn,Cohen:2019zev}.
Compared to predictions from before 2016, the predictions in Table~\ref{tab:RD-RDstar} make use of new lattice results and new experimental data.
The results are based on different methodologies and a different treatment of the uncertainties of HQET + QCDSR.
We have a very good consensus for $R(D)$ predictions because in this case the predictions are dominated by
the recent comprehensive lattice results from Refs.~\cite{Lattice:2015rga,Na:2015kha,Aoki:2016frl}.
QED corrections to $R(D)$ remain a topic which deserves further study~\cite{Becirevic:2009fy,deBoer:2018ipi}.
In the case of $R(D^*)$, as we do not yet have lattice information on the form factor $P_1$, we can use the exact endpoint
relation $P_1(w_{\mathrm{max}}) = A_5(w_{\mathrm{max}})$ and results from HQET and QCDSR.
Depending on the estimate of the corresponding theory uncertainty one obtains different theoretical errors for the prediction of $R(D^*)$.
As soon as we have lattice results for $P_1$~\cite{Aviles-Casco:2017nge}, the different fits will stabilize and
we expect a similar consensus as for $R(D)$.
Despite the most recent experimental results being closer to the SM predictions,
the $R(D^{(*)})$ anomaly persists and remains a tough challenge for model builders.
\subsection{Semileptonic $B\to D^{**}\ell\bar\nu$ decays \label{ssec::Ddoublestar}}
\def\Lambda_{\rm QCD}{\Lambda_{\rm QCD}}
\newcommand{D^{(*)}}{D^{(*)}}
\newcommand{D^{**}}{D^{**}}
\newcommand{D^{1/2^+}}{D^{1/2^+}}
\newcommand{D^{3/2^+}}{D^{3/2^+}}
\newcommand{D^*_0}{D^*_0}
\newcommand{D^*_1}{D^*_1}
\newcommand{D_1}{D_1}
\newcommand{D^*_2}{D^*_2}
Semileptonic $B$ decays to the four lightest excited charm mesons, $D^{**} =
\{D_0^*,\, D_1^*,$ $D_1,\, D_2^*\}$, are important both because they are
complementary signals of possible new physics contributions to $b\to
c\tau\bar\nu$, and because they are substantial backgrounds to the $R(D^{(*)})$
measurements (as well as to some $|V_{cb}|$ and $|V_{ub}|$ measurements).
Thus, the correct interpretation of future $B\to D^{(*)}\ell\bar\nu$
measurements requires consistent treatment of the $D^{**}$ modes.
The spectroscopy of the $D^{**}$ states is important, because in addition to the
impact on the kinematics, it also affects the expansion of the form
factors~\cite{Leibovich:1997tu,Leibovich:1997em} in HQET~\cite{Georgi:1990um,Eichten:1989zv}.
The isospin averaged masses and widths for the six lightest
charm mesons are shown in Table~\ref{tab:charm}. In the HQS~\cite{Isgur:1989vq,Isgur:1989ed} limit, the spin-parity of the light
degrees of freedom, $s_l^{\pi_l}$, is a conserved quantum number, yielding
doublets of heavy quark symmetry, as the spin $s_l$ is combined with the heavy
quark spin~\cite{Isgur:1991wq}. The ground state charm mesons containing light
degrees of freedom with spin-parity $s_l^{\pi_l} = \frac12^-$ are the
$\big\{D,\, D^*\big\}$. The four lightest excited $D^{**}$ states correspond in
the quark model to combining the heavy quark and light quark spins with $L=1$
orbital angular momentum. The $s_l^{\pi_l} = \frac12^+$ states are
$\big\{D_0^*,\, D_1^*\big\}$ while the $s_l^{\pi_l} = \frac32^+$ states are
$\big\{D_1,\, D_2^*\big\}$. The $s_l^{\pi_l} = \frac32^+$ states are narrow
because their $D^{(*)}\pi$ decays only occur in a $d$-wave or violate heavy
quark symmetry. In the case of $B_s$ decays, all four $D_s^{**}$ states are
narrow.
\begin{table}[b]
\tabcolsep 6pt
\centerline{\begin{tabular}{ccccc}
\hline\hline
Particle & $s_l^{\pi_l}$ & $J^P$ & $m$ (MeV) & $\Gamma$ (MeV)\\
\hline
$D^*_0$ & $\frac12^+$ & $0^+$ & $2349$ & $236$ \\
$D^*_1$ & $\frac12^+$ & $1^+$ & $2427$ & $384$ \\
\hline
$D_1$ & $\frac32^+$ & $1^+$ & $2421$ & $31$ \\
$D^*_2$ & $\frac32^+$ & $2^+$ & $2461$ & $47$ \\
\hline\hline
$D^*$ & $\frac12^-$ & $1^-$ & $2009$ & $\approx 0$ \\
$D$ & $\frac12^-$ & $0^-$ & $1866$ & $\approx 0$ \\
\hline\hline
\end{tabular}}
\caption{Isospin averaged masses and widths of the six lightest charm mesons,
rounded to 1\,MeV~\cite{PDG2018} (from Ref.~\cite{Bernlochner:2017jxt}).}
\label{tab:charm}
\end{table}
A simplifying assumption used in Refs.~\cite{Leibovich:1997tu,Leibovich:1997em}
to reduce the number of subleading Isgur-Wise functions was to neglect certain
$O(\Lambda_{\rm QCD}/m_{c,b})$ contributions involving the chromomagnetic operator in
the subleading HQET Lagrangian, motivated by the fact that the mass splittings
in both the $s_l^{\pi_l} = \frac12^+$ and $s_l^{\pi_l} = \frac32^+$ doublets
were measured to be much smaller than $m_{D^*} - m_D$. This is not supported by
the more recent data (see Table~\ref{tab:charm}), so
Ref.~\cite{Bernlochner:2016bci} extended the predictions of
Refs.~\cite{Leibovich:1997tu,Leibovich:1997em} accordingly, including deriving
the HQET expansions of the form factors which do not contribute in the $m_\ell =
0$ limit. The impact of arbitrary new physics operators was analyzed in
Ref.~\cite{Bernlochner:2017jxt}, including the $O(\Lambda_{\rm QCD}/m_{c,b})$ and
$O(\alpha_s)$ corrections in HQET. The corresponding results in the
heavy quark limit were obtained in Ref.~\cite{Biancofiore:2013ki}.
The large impact of the $O(\Lambda_{\rm QCD}/m_{c,b})$ contributions to the form
factors can be understood qualitatively by considering how heavy quark symmetry
constrains the structure of the expansions near zero recoil. It is useful to
think of a simultaneous expansion in powers of $(w-1)$ and $(\Lambda_{\rm QCD}/m_{c,b})$.
(The kinematic ranges are $0 < w-1 \lesssim 0.2$ for $\tau$ final states, and $0
< w-1 \lesssim 0.3$ for $e$ and $\mu$.) The decay rates to the spin-1 $D^{**}$
states, which are not helicity suppressed near $w=1$, are of the form
\begin{equation}
\frac{{\rm d}\Gamma_{D_1,\, D_1^*}}{{\rm d}w} \sim
\sqrt{w^2-1}\, \big[ \big( 0_{\rm (HQS)}
+ 0_{\rm (HQS)}\,\varepsilon + \varepsilon^2 + \ldots \big)
+ (w-1)\, \big(\varepsilon^0 + \varepsilon + \ldots \big) + \ldots \big] .
\end{equation}
Here $\varepsilon$ is a power-counting parameter of order $\Lambda_{\rm QCD}/m_{c,b}$, and
the zeros labeled (HQS) are consequences of heavy quark symmetry. The $\varepsilon^2$ term in
the first parenthesis is fully determined by the leading order Isgur-Wise
function and hadron mass splittings~\cite{Leibovich:1997tu,Leibovich:1997em,Bernlochner:2016bci,Bernlochner:2017jxt}.
The same also holds for those new
physics contributions to $B\to D_0^* \ell\bar\nu$, which are not helicity
suppressed. This explains why the $O(\Lambda_{\rm QCD}/m_{c,b})$ corrections to the
form factors are very important, and can make $O(1)$ differences in
physical predictions, without being a sign of a breakdown of the heavy quark
expansion. The sensitivity of the $D^{**}$ modes to new physics is
complementary to, and sometimes greater than, that of the $D$ and $D^*$
modes~\cite{Biancofiore:2013ki,Bernlochner:2017jxt}. Thus, using HQET, the
predictions for $B\to D^{**}\tau\bar\nu$ are systematically improvable by better
data on the $e$ and $\mu$ modes, just like they are for $B\to D^{(*)}
\tau\bar\nu$~\cite{Bernlochner:2017jka}, and are being implemented in
HAMMER~\cite{Ligeti:2016npd,Duell:2016maj,Bernlochner:2020tfi}.
\subsection{New physics in $B\to D^{(*)} \tau \nu$}
Independently of the recent discussion on form factor parametrizations and their influence on the extraction of $V_{cb}$ (covered in Sec.~\ref{sec:param}) it is clear from Table~\ref{tab:RD-RDstar} that the SM cannot accommodate the present experimental data on $R(D^{(*)})$. Even after the inclusion of the most recent Belle measurement~\cite{Abdesselam:2019wbt}, the significance of the anomaly remains $3.1\sigma$.
This leaves, apart from an underestimation of systematic uncertainties on the experimental side, NP as an exciting potential explanation. The required size of such a contribution comes as a surprise, however: defining $\hat R(X)\equiv R(X)/R(X)_{\rm SM}$, the new average corresponds to $\hat R(D)=1.14\pm0.10$ and $\hat R(D^*)=1.14\pm0.06$; for NP to accommodate these data, a contribution of $5-10\%$ relative to a SM tree-level amplitude is required for NP interfering with the SM, and $O(40\%)$ for NP without interference.
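To make the size of the required effect explicit, write the total amplitude as $A = A_{\rm SM}\,(1+r)$. For NP interfering with the SM,
\begin{equation}
\hat R = |1+r|^2 \simeq 1 + 2\,{\rm Re}\,r \quad\Rightarrow\quad {\rm Re}\,r \approx 0.07\,,
\end{equation}
while for a non-interfering contribution $\hat R = 1+|r|^2$ implies $|r|\approx\sqrt{0.14}\approx 0.4$.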
An effect of this size can be clearly identified with upcoming measurements by LHCb and Belle~II~\cite{Cerri:2018ypt,Kou:2018nap}. It would also immediately imply large effects in other observables.
The potential of $R(D^{(*)})$ as discovery modes does not diminish the importance of additional measurements with $b$-hadrons. Specifically, even with a potential discovery, model discrimination will require measurements beyond these ratios. These additional measurements fall in four categories:
\begin{itemize}
\item Additional $R(X)$ measurements such as $R(D^{**}), R(\Lambda_c), R(X_c), R(J/\psi)$ and $R(B_s^{(*)})$, are important crosschecks to establish $R(D^{(*)})$ as NP with independent systematics and provide independent NP sensitivity (especially $R(X_c)$ and $R(\Lambda_c)$), as discussed
in subsections~\ref{ssec::exp} and~\ref{ssec::Ddoublestar}.
Note, however, the existence of an approximate sum rule relating the NP contributions to $R(\Lambda_c)$, $R(D)$, and $R(D^*)$ \cite{Blanke:2018yud}.
\item Integrated angular and polarization asymmetries and polarization fractions are excellent model discriminators. In many models they are completely determined once the measurements of $R(D^{(*)})$ are taken into account. For instance, the recent measurement of the longitudinal polarization fraction of the $D^*$ in $B\to D^*\tau\nu$, $F_L(D^*)$, was able to rule out solutions that remained compatible with the whole set of the remaining $b\to c\tau\nu$ data~\cite{Aebischer:2018iyb,Iguro:2018vqb,Blanke:2018yud,Bardhan:2019ljo,Alok:2019uqc,Murgui:2019czp}.
The model-discriminating potential of both $R(D^{(*)})$ and selected angular quantities is visualized in Fig.~\ref{fig::FitsNPmodels}, where fit results for pairs of $B\to D^{(*)}\tau\nu$ observables within all phenomenologically viable single-mediator scenarios with left-handed neutrinos to the state-of-the-art data are shown.
\item Differential distributions in $q^2$ and the different angles are extremely powerful in distinguishing between NP models, as can be seen for instance from a recent analysis of data with light leptons in the final state~\cite{Jung:2018lfu}. They require, however, large amounts of data and the insufficient information on the decay kinematics can pose difficulties for the interpretation of the data, as discussed in subsection~\ref{ssec::interpretation}. However, already the rather rough available information on the differential rates $d\Gamma/dq^2(B\to D^{(*)}\tau\nu)$~\cite{Huschle:2015rga,Lees:2012xj} is excluding relevant parts of the parameter space~\cite{Sakaki:2014sea,Freytsis:2015qca,Celis:2016azn,Bhattacharya:2016zcw,Murgui:2019czp}.
\item An analysis of the flavor structure of the observed effect, \emph{e.g.} in $b\to c (e,\mu)\nu$, $b\to u\tau\nu$ and $t\to b\tau\nu$ transitions.
\end{itemize}
\begin{figure}[t]
\includegraphics[height=3.75cm]{figures/RDRDstarNPmodels.png}
\includegraphics[height=3.75cm]{figures/AFBDPtauDNPmodels.png}
\includegraphics[height=3.75cm]{figures/FLPtauDstarNPmodels.png}
\caption{\label{fig::FitsNPmodels} State-of-the-art fit results in single-mediator models for selected pairs of observables in $B\to D^{(*)}\tau\nu$ decays (following Ref.~\cite{Murgui:2019czp} for form factor and input treatment). All outer ellipses correspond to $95\%$ confidence level, inner (where present) to $68\%$.
We show the SM prediction in grey, the experimental measurement/average in yellow (where applicable) and scenarios I, II, III, IV and V in dark green, green, dark blue, dark red and red, respectively, see text. Contours outside the experimental ellipse imply that the measured central values cannot be accommodated within that scenario. The limit $BR(B_c\to\tau\nu)\leq 30\%$ has been applied throughout, but affects only the fits with scalar coefficients. Dark green contours are missing in the two graphs on the right, because the predictions of scenario I are identical to the SM ones.}
\end{figure}
In addition to the above observables, the leptonic decay $B_c\to \tau\nu$ plays a special role. Although it is not expected to be measured in the foreseeable future, it nevertheless provides a strong constraint on NP, since the relative influence of scalar NP is enhanced in this mode. A limit can then be obtained even from the total width of the $B_c$ meson~\cite{Li:2016vvp}. Theoretical estimates for the partial widths assumed to be unaffected by NP can be used to strengthen these bounds~\cite{Beneke:1996xe,Alonso:2016oyd,Celis:2016azn}, as can data from LEP~\cite{Akeroyd:2017mhr}. Both approaches rely on additional assumptions, however; see Refs.~\cite{Blanke:2018yud,Bardhan:2019ljo} for recent extensive discussions.
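In a common EFT convention for the Wilson coefficients (a schematic expression; normalizations differ between references), the chiral enhancement of scalar operators in the leptonic rate reads
\begin{equation}
{\cal B}(B_c\to\tau\bar\nu) \propto \tau_{B_c}\, f_{B_c}^2\, m_{B_c} m_\tau^2 \left(1-\frac{m_\tau^2}{m_{B_c}^2}\right)^2 \left|1 + C_{V_L} - C_{V_R} + \frac{m_{B_c}^2}{m_\tau\,(m_b+m_c)}\left(C_{S_R}-C_{S_L}\right)\right|^2,
\end{equation}
where the factor $m_{B_c}^2/[m_\tau(m_b+m_c)]\approx 4$ enhances the scalar contributions relative to their effect in the semileptonic modes.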
The constraints discussed so far are relevant in any scenario trying to address the existing anomalies. An interesting subclass of such models is that where the existence of a single mediator coupling to only the known SM degrees of freedom is assumed, classified in~\cite{Freytsis:2015qca}, creating only a subset of the possible operators at the $b$ scale.
Among those, only five scenarios remain that can reasonably well accommodate the data described above, see also
Refs.~\cite{Tanaka:2012nw,Blanke:2018yud,Freytsis:2015qca,Bhattacharya:2016zcw,Ivanov:2017mrj,Alok:2017qsi,Bifani:2018zmi,%
Blanke:2019qrx,Shi:2019gxi} for comparisons (additional constraints in specific scenarios are commented on below):
Scenario~I yields only a left-handed vector operator, created by either a heavy colorless vector particle~\cite{Greljo:2015mma,Boucenna:2016wpr,Boucenna:2016qad,Megias:2017ove} (phenomenologically highly disfavored) or a leptoquark, see Refs.~\cite{Li:2016vvp,Fajfer:2012jt,Deshpande:2012rr,Sakaki:2013bfa,Duraisamy:2014sna,Calibbi:2015kma,Fajfer:2015ycq,Barbieri:2015yvd,Alonso:2015sja,Bauer:2015knc,Das:2016vkr,Deshpand:2016cpw,Sahoo:2016pet,Dumont:2016xpj,Becirevic:2016yqi,Barbieri:2016las,DiLuzio:2017vat,Assad:2017iib,Chen:2017hir,Bordone:2017bld,Altmannshofer:2017poe,Calibbi:2017qbu,Becirevic:2018afm,Fornal:2018dqn,Blanke:2018sro,Crivellin:2017zlb} for this and other leptoquark variants.
Scenario II includes Scenario I, but also yields a right-handed scalar operator, realized for example by a vector leptoquark.
Scenario III involves both left- and right-handed scalar operators, generated for instance by a charged Higgs~\cite{Crivellin:2012ye,Celis:2012dk,Crivellin:2015hha,Celis:2016azn,Chen:2017eby,Iguro:2017ysu,Chen:2018hqy,Li:2018rax} (with a limited capability to accomodate $R(D^*)$ due to the $B_c$ constraint discussed above).
Scenarios IV and V involve the left-handed scalar and tensor operator which are generated proportionally to each other ($C_{S_L}=\pm 4C_T$ at the NP scale $\Lambda$), in the latter case with the addition of the left-handed vector operator, again realized in leptoquark models.
It is also possible to analyze the available data in more general contexts. For example, within SMEFT the right-handed vector current is expected to be universal~\cite{Cirigliano:2009wk,Alonso:2014csa,Cata:2015lta}, see~\cite{Murgui:2019czp} for a global analysis in this framework, while this does not hold when the electroweak symmetry breaking is realized non-linearly~\cite{Cata:2015lta}.
Allowing for additional light degrees of freedom beyond the SM opens the possibility of contributions with right-handed neutrinos, see Refs.~\cite{He:2012zp,Becirevic:2016yqi,He:2017bft,Greljo:2018ogz,Asadi:2018wea,Robinson:2018gza,Azatov:2018kzb,Heeck:2018ntp}.
Once specific models are considered, typically additional constraints apply. Important ones include high-$p_T$ searches, looking for
collider signatures of the mediators related to the
anomaly~\cite{Faroughy:2016osc,Feruglio:2018fxo,Greljo:2018tzh,Altmannshofer:2017yso}, RGE-induced flavor-non-universal effects in
$\tau$ decays~\cite{Feruglio:2017rjo}, lepton-flavor violating decays~\cite{Feruglio:2017rjo}, precision universality tests in quarkonia
decays~\cite{Aloni:2017eny}, charged-lepton magnetic moments~\cite{Feruglio:2018fxo} and electric dipole moments in models with
non-vanishing imaginary parts~\cite{Dekens:2018bci}.
\subsection{Interpretation of experimental results}
\label{ssec::interpretation}
The reconstructed kinematic distributions used in measurements are sensitive to both the modeling of required non-perturbative inputs
(e.g., form factors, light-cone meson wave functions), and to assumptions about the underlying fundamental theory (e.g., possible presence
of operators with chiral structures different from those found in the SM). Current measurements assume the SM operator structure, and
include the non-perturbative uncertainties as they are known at the time of publication. While this is a valid strategy for testing the
SM, if in future the presence of a non-SM contribution with a different chiral structure is established then past measurements will
require reinterpretation.
In order to present experimental results in such a way as to allow a-posteriori analyses maximum flexibility in the description of
non-perturbative inputs and BSM content, the following strategies might be considered. The techniques to allow for reinterpretation of
results overlap with those used to make differential measurements designed to be sensitive to the chiral structure and non-perturbative
quantities.
A first possibility is the publication of unfolded distributions (see, for instance, the $B\to D^* \ell\nu$ spectrum presented in
Ref.~\cite{Abdesselam:2017kjf}). This method makes it easy to fit the experimental results to arbitrary
parametrizations of the form factors~\cite{Bigi:2017njr,Grinstein:2017nlq,Bernlochner:2017jka}; its downside is that it requires
relatively high statistics and that the unfolded distributions do not contain the whole experimental information.
A second option, which has been employed in the untagged Belle analysis of Ref.~\cite{Waheed:2018djm}, is to provide \emph{folded}
distributions in which detector effects are not removed and no extrapolation is performed, together with experimental efficiencies and
the detector response matrix (which reproduces detector effects to a given accuracy). This allows the use of any parametrization of SM
and BSM effects in comparing with the experimental result. This approach, while requiring slightly more involved a posteriori fitting
strategies, avoids the statistical problems associated with unfolding and can be extended more easily to higher dimensions.
Finally, the most complete information is contained in the Likelihood function, which depends on a set of SM parameters (e.g., for
$B\to \pi\ell\nu$ these could be the coefficients of the $z$-expansion of the form factors and $V_{ub}$) and on the Wilson coefficients
of BSM operators.
This method has not yet been pursued in any $B$ decay measurement, in part because of difficulties related to the extremely large
amount of information that would need to be presented. Two differing approaches are to publish the full experimental Likelihood in the
full parameter space of BSM Wilson coefficients and SM non-perturbative coefficients, or to publish the tools for external readers to be
able to repeat the full experimental fit with the signal model varied. For representing the experimental Likelihood in a high-dimensional
space, possible approaches include the use of Markov chain sampling, or MVA surface modelling. These are the only strategies which would
allow the entirety of the experimental information to be available in a posteriori theoretical investigations. It is essential to this
approach for the experimental measurement to cover the full parameter space in a sufficiently general way, including alternative Likelihoods with
different parametrizations for nonperturbative effects.
\subsection{HAMMER}
Future new physics searches in $b \to c \, \tau \nu_\tau$ decays are a challenging endeavor: most experimental results make use of kinematic properties of the process to discriminate between the signals of interest and backgrounds. For instance, recent measurements from the $B$-factories BaBar and Belle used the lepton momentum spectrum, and measurements of LHCb use fits to the four-momentum transfer $q^2$. In new physics scenarios, these distributions change and alter the analysis acceptance, efficiencies, and extracted signal yields. In addition, large samples of simulated decay processes play an integral part in those measurements; in most, one of the leading systematic uncertainties is due to the limited availability of such samples. Thus producing large enough simulation samples for a wide range of new physics points, as needed to take into account the aforementioned changes in acceptance, is not a viable path. This is where the tool HAMMER~\cite{Bernlochner:2020tfi,Duell:2016maj} can help: it implements an event-level reweighting, assigning a weight based on the ratio of the new physics matrix element to the simulated one, which allows one to re-use the already generated events. In addition, it is capable of providing histograms for arbitrary new-physics parameter values (including also form factor variations), which can be used, for example, in template fits to kinematic observables. These event weights can completely account for acceptance changes and will enable Belle II and LHCb to directly extract limits on the Wilson coefficients present in $b \to c \, \tau \nu$ transitions.
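The following minimal Python sketch illustrates the reweighting idea behind such a tool; it does not use the actual HAMMER interface, and the toy amplitude model and all function names are hypothetical placeholders chosen for illustration only.
\begin{verbatim}
# Illustrative sketch of event-level new-physics reweighting (NOT the
# HAMMER API; the amplitudes below are arbitrary toy placeholders).
import numpy as np

rng = np.random.default_rng(42)

# Toy "simulated sample": q^2 values of b -> c tau nu candidate events.
q2 = rng.triangular(left=3.16, mode=8.0, right=11.6, size=100_000)

def amp_sm(q2):
    """Toy SM amplitude (stand-in for the true matrix element)."""
    return 1.0 + 0.05 * q2

def amp_np(q2):
    """Toy amplitude multiplying one new-physics Wilson coefficient."""
    return 0.3 * q2

def weights(q2, c_np):
    """Per-event weight |A_SM + c A_NP|^2 / |A_SM|^2; the same sample
    can thus be reweighted to any value of c_np without regenerating
    or re-simulating events."""
    a_sm, a_np = amp_sm(q2), amp_np(q2)
    return np.abs(a_sm + c_np * a_np) ** 2 / np.abs(a_sm) ** 2

# Predicted q^2 histograms for several benchmark points, all obtained
# from the single simulated sample above.
bins = np.linspace(3.16, 11.6, 21)
for c in (0.0, 0.05, -0.05):
    hist, _ = np.histogram(q2, bins=bins, weights=weights(q2, c))
    print(f"c_np = {c:+.2f}: weighted yield = {hist.sum():.0f}")
\end{verbatim}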
\section{Heavy-to-light exclusive}
\label{h2l_excl}
In this section we present an overview of $b\to u$ exclusive decays.
We start with a discussion of the lattice calculations of the $b$ hadron decay form factors to a light pseudoscalar, vector meson or baryon. We then review the light-cone sum rule calculation of the same form factors and the current experimental situation, as well as the
prospects at Belle II and LHCb. Finally, we briefly discuss a few related subjects, such as
the semitauonic heavy-to-light decays, the decay $B\to \gamma \ell \nu_\ell$, the non-resonant $B\to \pi\pi \ell\nu$ decays, and some subtleties of the $z$-expansion.
\subsection{Form factors for semileptonic $b$-hadron decays into light hadrons from lattice QCD}
\subsubsection{Form factor parametrizations}
\label{sec:HLparametrizations}
The matrix elements that describe the hadronic part of the semileptonic transitions $B\to X\ell\nu$ or $B \to X\ell\ell$ are
parametrized in terms of the form factors in Eqs.~(\ref{eq:ff-scalar})--(\ref{eq:ff-pseudo-tensor}),
where $X$ now denotes a pion or kaon.
The transitions $B\to X^*\ell\nu$ or $B\to X^*\ell\ell$ are parametrized in terms of the form factors in
Eqs.~(\ref{eq:ff-pseudo})--(\ref{eq:ff-tensor}), where $X^*$ now denotes a $\rho$, $K^*$, or $\phi$~meson.
As discussed in Sec.~\ref{sec:param}, modern theoretical calculations of the form factors
employ $z$-parametrizations to describe their shapes, which can be implemented in a
model-independent way, being based on analyticity and unitarity constraints. For the case at
hand, an often used choice for the $z$-parameter defined in Eq.~(\ref{eq:z-def}) is
$t_0 = (M+m)(\sqrt{M} - \sqrt{m})^2$, which results in a range $|z| < 0.3$, centered around
$z=0$.
In general, the small range of $z$ coupled with unitarity constraints on the
coefficients ensure that the polynomial expansions converge quickly.
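For example, for $B\to\pi$ this gives $t_0 = (M_B+M_\pi)(\sqrt{M_B}-\sqrt{M_\pi})^2 \approx 20.1~{\rm GeV}^2$, and the semileptonic region $0\le q^2 \le t_- = (M_B-M_\pi)^2 \approx 26.4~{\rm GeV}^2$ maps onto $-0.28 \lesssim z \lesssim 0.28$.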
As discussed already in Sec.~\ref{sec:param}, for $B$-meson decays to light hadrons with their
larger $q^2$ range, the BCL parametrization~\cite{Bourrely:2008za} is the standard choice, as
the resulting forms satisfy the expected asymptotic $q^2$ and near threshold scaling
behaviors~\cite{Lepage:1980fj,Akhoury:1993uw}:
\begin{eqnarray}
f_+(q^2) &=& \frac{1}{1-q^2/ M_{B^*(1^-)}^2} \sum\limits_{n=0}^{N_z-1} b_n^+(t_0)
\left(z^n -(-1)^{n-N_z}\frac{n}{N_z} z^{N_z}\right),
\label{eq:BCL_f+} \\
f_0(q^2) &=& \frac{1}{1-q^2/ M_{B^*(0^+)}^2} \sum\limits_{n= 0}^{N_z} b_n^0(t_0)\, z^n.
\label{eq:BCL_f0}
\end{eqnarray}
\subsubsection{Lattice QCD results for $B$-meson decay form factors to light pseudoscalars }
Lattice-QCD calculations of the form factors for semileptonic $B_{(s)}$-meson decays to light hadrons proceed along the same lines as discussed in Sec.~\ref{sec:hth_lat}. In particular, there are a number of different, well-developed strategies for dealing with the heavy $b$-quark in lattice QCD, see Ref.~\cite{Aoki:2019cca} for a review.
The same two- and three-point functions as for the heavy-to-heavy case are needed here, albeit with the appropriate valence quark propagators, to describe the heavy-to-light decay process. While this affects the statistical errors in the next step, the fits to the spectral representations of the correlation functions to obtain the desired matrix elements on each gauge ensemble and each recoil momentum, the procedure is essentially the same. The resulting ``lattice data'' are then used in combined chiral-continuum fits coupled with a systematic errors analysis to obtain the form factors in the continuum over the range of recoil energies that are included in the simulations.
Here, a well known challenge is that the recoil energies accessible in lattice-QCD calculations cover only a fraction of the entire kinematic region. A related challenge is that the validity of Chiral Perturbation Theory (used to extrapolate or interpolate to the physical pion mass) is limited to pion energies below $\approx 1$~GeV.
The final step is the $z$-expansion fit, from which the form factors are obtained over the entire kinematic range, albeit with larger errors in the region not directly covered in the lattice calculation.
Lattice-QCD calculations of the $B\to \pi$ vector current form factors $f_+$ and $f_0$ can be used to determine $|V_{ub}|$ from experimental measurements of the $B\to\pi\ell \nu$ decay rate.
There are currently two independent, published lattice-QCD computations that employ the modern methods outlined above, including the model-independent $z$-expansion \cite{Flynn:2015mha,Lattice:2015tia}. The RBC/UKQCD collaboration \cite{Flynn:2015mha} uses ensembles with $N_f=2+1$ flavors of Domain Wall fermions at two lattice spacings with sea-pion masses in the range $[300,400]$~MeV. The Fermilab/MILC collaboration \cite{Lattice:2015tia} uses ensembles with $N_f=2+1$ flavors of asqtad (improved staggered) fermions at four lattice spacings covering the range $a\approx 0.045-0.12$~fm and a range of sea-pion masses down to $177$~MeV. Earlier work \cite{Bailey:2008wp} used a subset of these ensembles.
The treatment of the $b$-quark is similar in the two works; Ref.~\cite{Flynn:2015mha} uses a variant of the Fermilab approach, called the relativistic heavy quark (RHQ) action, while Ref.~\cite{Lattice:2015tia} employs the original Fermilab formalism. Both groups also use the mostly nonperturbative renormalization method to compute the renormalization factors. The form factors obtained by the two lattice groups are in good agreement with each other, and can be combined in joint fits together with experimental data for an improved $|V_{ub}|$ determination \cite{Aoki:2019cca}.
Ongoing work by RBC/UKQCD is extending the calculation to include more ensembles \cite{Flynn:2019jbg}. Ongoing work by the Fermilab/MILC collaboration employs the HISQ $N_f=2+1+1$ ensembles with sea-pion masses at (or near) the physical point, and the Fermilab formalism for the $b$-quark~\cite{Gelzer:2019zwx}.
The HPQCD collaboration has published a calculation of the scalar form factor for the $B\to\pi$ transition at zero recoil, $f_0(q^2_\text{max})$, on a subset of the $N_f=2+1+1$ HISQ ensembles, treating the $b$-quark in NRQCD~\cite{Colquhoun:2015mfa}; this provides a nice test of the soft-pion theorem, but cannot be used in $|V_{ub}|$ determinations.
Ongoing work includes a calculation of the $B\to\pi$ form factors over a range of $q^2$ on a subset of the asqtad ensembles using NRQCD $b$-quarks and HISQ light-valence quarks \cite{Bouchard:2013zda}. The JLQCD collaboration has an ongoing project to calculate the $B\to\pi$ form factors on $N_f=2+1$ Domain Wall ensembles using also Domain Wall fermions for the heavy and light valence quarks \cite{Colquhoun:2019tyq}. They focus their calculation on small lattice spacings ($a\approx 0.044 - 0.080$~fm) and include a series of heavy-quark masses to extrapolate to the physical $b$-quark mass.
The vector current form factors $f_+$ and $f_0$ needed for rare $B\to\pi \ell\ell$ decay are the same as for $B\to \pi \ell \nu$ decay (up to small isospin corrections), but the tensor form factor $f_T$ is also needed to describe the rare process in the SM, while it can contribute to $B\to \pi \ell \nu$ decay only in BSM theories. So far, $f_T$ has been calculated only by the Fermilab/MILC collaboration~\cite{Bailey:2015nbd} using the same ensembles and methods as for the vector current form factors. However, most (if not all) of the ongoing projects described above, now include the complete set of form factors in their analyses, and new results for this form factor will therefore also be forthcoming.
The $B_s \to K \ell \nu$ process can be used for an alternate determination of $|V_{ub}|$, and there currently are three independent, published lattice-QCD computations of the vector-current form factors \cite{Bouchard:2014ypa,Flynn:2015mha,Bazavov:2019aom}. In Ref.~\cite{Bouchard:2014ypa} the HPQCD collaboration used NRQCD $b$-quarks and HISQ light-valence quarks to calculate the form factors on a subset of asqtad ensembles.
The RBC/UKQCD \cite{Flynn:2015mha} work is already described above, since they calculated the $B_s\to K$ and $B\to\pi$ transition form factors together. The Fermilab/MILC collaboration \cite{Bazavov:2019aom} used the same methods and set-up as for their $B\to\pi$ project \cite{Lattice:2015tia} but on a subset of asqtad ensembles. Both Fermilab/MILC \cite{Bazavov:2019aom} and, in a follow-up paper, HPQCD \cite{Monahan:2018lzv} also computed ratios of $B_s\to K$ and $B_s \to D_s$ observables, which can be used in combination with LHCb measurements to determine $|V_{ub}/V_{cb}|$.
\subsubsection{Challenges of vector mesons }
Lattice calculations of $B_{(s)}$ decay form factors with vector mesons ($\rho$, $K^*$, $\phi$) in the final state are substantially more
challenging, as these vector mesons are unstable resonances for sufficiently light quark masses. The asymptotic final state in the continuum
then contains (at least) two hadrons, and the relation with the finite-volume matrix elements computed on the lattice becomes nontrivial.
The formalism that allows a mapping of finite-volume to infinite-volume $1\to 2$ hadron matrix elements has been
developed~\cite{Lellouch:2000pv,Lin:2001ek,Christ:2005gi,Hansen:2012tf,Briceno:2014uqa,Briceno:2015csa,Agadjanov:2016fbd} and will be discussed in more detail below.
First numerical applications to a form factor with nonzero momentum transfer have been published for the electromagnetic process $\pi \gamma^* \to \pi \pi$, where the $\pi\pi$ final state in a $P$ wave couples
to the $\rho$ resonance~\cite{Briceno:2015dca,Briceno:2016kkp,Alexandrou:2018jbt}.
The lattice QCD calculations of $B_{(s)} \to V$ form factors published to date did not implement this $1 \to 2$ formalism. For the $B \to \rho$ form factors,
there is only an early study by the UKQCD collaboration~\cite{Bowler:2004zb}, performed in the quenched approximation and with heavy up and down quark masses
for which the $\rho$ is stable. For the $B \to K^*$, $B_s \to K^*$, $B_s \to \phi$ form factors, an unquenched lattice QCD calculation is available
\cite{Horgan:2013hoa}. This work used three different ensembles of lattice gauge field configurations with pion masses of approximately
310, 340, and 520~MeV. For the lower two pion masses, the $K^*$ is expected to be unstable, but the analysis was performed as if the $K^*$ were stable.
This entails using only a quark-antiquark interpolating field for the $K^*$, and assuming that the information extracted from exponential fits
to the two-point and three-point correlation functions corresponds to the ``$K^*$'' contribution. The systematic errors introduced by this treatment are difficult to quantify. For unstable $K^*$,
none of the actual discrete finite-volume energy levels directly corresponds to the resonance,
and the actual ground state may be far from the resonance location (for typical lattice volumes, this problem is more severe at nonzero momentum).
However, a quark-antiquark interpolating field couples more strongly to energy levels in the vicinity of the resonance, and ground-state saturation is typically not seen in the correlation
functions before the statistical noise becomes overwhelming. In these cases, exponential fits are still dominated by one or multiple energy levels in the vicinity of the resonance.
In the following, we will denote the vector meson resonance as $V$, and the two pseudoscalar mesons whose scattering shows the resonance as $P_1$ and $P_2$.
The finite-volume energy levels for a given total momentum and irreducible representation of the appropriate symmetry group
are determined by the L\"uscher quantization condition~\cite{Luscher:1990ux} and its generalizations, as reviewed in Ref.~\cite{Briceno:2017max}. In the absence of interactions, they would consist
of $P_1 P_2$ scattering states with energies equal to the sums of the $P_1$ and $P_2$ energies, where the $P_1$ and $P_2$ momenta take on
the discrete values allowed by the periodic boundary conditions. Through the $P_1 P_2$ interactions, these energy levels are shifted away from their noninteracting
values in a volume-dependent way. In the simplest case (considering only elastic scattering and neglecting the partial-wave mixing induced by the finite volume), each interacting finite-volume energy level
can be mapped to a corresponding value of the infinite-volume $P_1 P_2$ scattering phase shift, or, equivalently, scattering amplitude; more complicated cases with coupled
channels and partial-wave mixing can also be treated. The dependence of the scattering amplitude on the $P_1 P_2$ invariant-mass-squared, $s$, can be described by a Breit-Wigner-type function. By analytically continuing
the scattering amplitude to complex $s$, one finds poles on the second Riemann sheet at $s = (m_V \pm i \Gamma_V/2)^2$, where $\Gamma_V$ is the width of the resonance. This procedure
has been applied successfully to the $\rho$, $K^*$, and other resonances (see Ref.~\cite{Briceno:2017max} for a review).
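As a simple numerical illustration of the last step of this analytic continuation, the following Python sketch locates the pole of a constant-width relativistic Breit-Wigner amplitude; the $\rho$-like mass and width are representative input values chosen for illustration, not the result of any particular lattice analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

m_V, Gamma_V = 0.775, 0.149   # representative rho-like mass and width in GeV

def denominator(s):
    # constant-width relativistic Breit-Wigner denominator
    return m_V**2 - s - 1j * np.sqrt(s) * Gamma_V

def real_imag(x):
    d = denominator(complex(x[0], x[1]))
    return [d.real, d.imag]

# root of the denominator in the lower half of the complex s plane
s_pole = complex(*fsolve(real_imag, [m_V**2, -m_V * Gamma_V]))
print(s_pole)                     # approximately (m_V - i Gamma_V / 2)^2
print((m_V - 0.5j * Gamma_V)**2)  # differs only at O(Gamma_V^2)
\end{verbatim}
For a realistic energy-dependent $P$-wave width the same root finding applies, but the continuation to the second Riemann sheet must then be carried out explicitly.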
The $B_{(s)}\to V$ form factors correspond to the residues at the pole at $s = (m_V - i \Gamma_V/2)^2$ in the $B_{(s)}\to P_1 P_2$ form factors, where the $P_1 P_2$
final state is projected to the $\ell=1$ partial wave. These $B_{(s)}\to P_1 P_2$ form factors are functions of $q^2$ and $s$. In the single-channel case, the lattice computation involves the following
steps: (i) Determine the $P_1 P_2$ finite-volume energy spectrum, and the $B_{(s)}\to P_1 P_2$ finite-volume matrix elements both for the ground states and multiple
excited states. (ii) Obtain the infinite-volume $P_1 P_2$ scattering amplitude from the finite-volume
energy spectrum using the L\"uscher method, and fit a suitable parametrisation of the $s$-dependence to the data.
(iii) Map the finite-volume $B_{(s)}\to P_1 P_2$ matrix elements to infinite-volume $B_{(s)}\to P_1 P_2$ matrix elements using the Lellouch-L\"uscher factor, which depends
on the energy-derivative of the scattering phase shift and a known finite-volume function.
The finite-volume
formalism requires the center-of-mass energy $\sqrt{s}$ to be small enough so that no more than two particles can be produced by the scattering through the strong interaction (however, the \emph{total} momentum of the $P_1 P_2$ system
can in principle be arbitrarily large). For example, in the case of the $B \to \pi\pi$ form factors, the formalism requires $\sqrt{s} \lesssim 4\, m_\pi$, which becomes
more restrictive when performing the calculation at lighter quark masses. However, it is likely that the coupling to four pions has negligible effects even at somewhat higher values of $\sqrt{s}$, as needed
to map out the $\rho$ resonance region when using physical quark masses.
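Numerically, the restriction is already visible at physical masses, where the four-pion threshold lies well below the $\rho$ (a one-line check with PDG-like inputs):
\begin{verbatim}
m_pi, m_rho = 0.1396, 0.775   # GeV, PDG-like values
print(4 * m_pi)               # 0.558 GeV: the elastic bound sqrt(s) < 4 m_pi
print(m_rho)                  # lies below the rho resonance region
\end{verbatim}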
\subsubsection{$\Lambda_b \to p$ and $\Lambda_b \to \Lambda^{(*)}$ form factors from lattice QCD }
The $\Lambda_b \to p$ form factors relevant for the decay $\Lambda_b \to p \mu^-\bar{\nu}$ have been computed
in lattice QCD together with the $\Lambda_b \to \Lambda_c$ form factors~\cite{Detmold:2015aaa}; some aspects of
this work were already discussed in Sec.~\ref{sec:LbLcLattice}. The lattice data for $\Lambda_b \to p$ cover the kinematic
range from $q^2\approx 15\:{\rm GeV}^2$ to near $q^2_{\rm max}\approx 22\:{\rm GeV}^2$, and consequently
the predicted $\Lambda_b \to p \mu^-\bar{\nu}_\mu$ differential decay rate is most precise in this range. The integrated
decay rates in the Standard Model were found to be
\begin{equation}
\frac{1}{|{V_{ub}}|^2}\Gamma (\Lambda_b \to p\: \mu^- \bar{\nu}_\mu)
= (25.7 \pm 2.6_{\,\rm stat} \pm 4.6_{\,\rm syst})\:\:{\rm ps}^{-1}
\end{equation}
and
\begin{equation}
\frac{1}{|{V_{ub}}|^2}\int_{15\:{\rm GeV}^2}^{q^2_{\rm max}}
\frac{\mathrm{d}\Gamma (\Lambda_b \to p\: \mu^- \bar{\nu}_\mu)}{\mathrm{d}q^2} \mathrm{d} q^2
= (12.31 \pm 0.76_{\,\rm stat} \pm 0.77_{\,\rm syst})\:\:{\rm ps}^{-1}.
\end{equation}
The latter has a total uncertainty of 8.8\% (corresponding to a 4.4\% theory uncertainty in a $|V_{ub}|$ determination from this rate),
and the ratio to the partially integrated $\Lambda_b \to \Lambda_c \mu^-\bar{\nu}$ decay rate (\ref{eq:LbLcPartialRate}) has a total uncertainty
of 9.8\%, corresponding to a 4.9\% theory uncertainty in the determination of $|V_{ub}/V_{cb}|$ performed by LHCb~\cite{Aaij:2015bfa}, commensurate with
the experimental uncertainty. The $\Lambda_b\to p$ form factors from Ref.~\cite{Detmold:2015aaa} can also be used to predict the Standard-Model value of the
baryonic $b \to u\ell\bar{\nu}$ lepton-flavor-universality ratio,
\begin{equation}
\frac{\Gamma (\Lambda_b \to p\: \tau^- \bar{\nu}_\tau)}{\Gamma (\Lambda_b \to p\: \mu^- \bar{\nu}_\mu)} \:=\: 0.689 \:\pm \:0.058_{\,\rm stat} \:\pm\: 0.064_{\,\rm syst}.
\end{equation}
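The percentage uncertainties quoted above follow from simple error propagation: since the rate scales as $|V_{ub}|^2$, the relative uncertainty on $|V_{ub}|$ is half that on the rate. A quick numerical check:
\begin{verbatim}
import math

# partially integrated rate from above: 12.31 +- 0.76 (stat) +- 0.77 (syst) ps^-1
central, stat, syst = 12.31, 0.76, 0.77

rel_rate = math.hypot(stat, syst) / central
print(round(100 * rel_rate, 1))      # ~8.8% total uncertainty on the rate
print(round(100 * rel_rate / 2, 1))  # ~4.4% on |Vub|, since |Vub| ~ sqrt(rate)
\end{verbatim}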
By increasing statistics, removing the partially quenched data sets (cf.~Sec.~\ref{sec:LbLcLattice}), adding one ensemble with physical light-quark masses, and
another ensemble with a third, finer lattice spacing, it will likely be possible to reduce the uncertainties in both the $\Lambda_b \to p$ and $\Lambda_b \to \Lambda_c$ form factors by a factor of 2 in the near future.
The same methods have also been used to compute the $\Lambda_b \to \Lambda$~\cite{Detmold:2016pkz},
$\Lambda_c \to p$~\cite{Meinel:2017ggx}, and $\Lambda_c \to \Lambda$~\cite{Meinel:2016dqj} form factors with lattice QCD.
The latter calculation already includes an ensemble with the physical pion mass, and gave results for the
$\Lambda_c\to\Lambda e^+\nu_e$ and $\Lambda_c\to\Lambda \mu^+\nu_\mu$ branching fractions consistent with, and two times more
precise than, the measurements performed recently by the \mbox{BESIII} Collaboration~\cite{Ablikim:2015prg,Ablikim:2016vqd}.
This is a valuable test of the lattice methods used to determine the heavy-baryon decay form factors.
A lattice-QCD calculation is also in progress for the $\Lambda_b \to \Lambda^*(1520)$ form factors (in the narrow-width approximation)~\cite{Meinel:2016cxo}, which are relevant for the rare decay $\Lambda_b \to \Lambda^*(\to p\,K) \mu^+\mu^-$.
As with $\Lambda_b \to \Lambda_c^*$, discussed in Sec.~\ref{sec:LbLcLattice}, this initial calculation only reaches $q^2$ in the vicinity of $q^2_{\rm max}$.
\subsection{Light-cone sum rules calculations of heavy-to-light form
factors}
QCD sum rules on the light cone (LCSR) is a non-perturbative method for calculating hadronic quantities~\cite{Balitsky:1986st,Balitsky:1989ry,Chernyak:1990ag}. It has been applied to obtain the form factors for $B$ decays (see the definitions in
Section~\ref{sec:param}). The first LCSR calculations relevant for $V_{ub}$ were performed in 1997 when the next-to-leading order (NLO) twist-2 corrections to $f_+(q^2)$ were calculated~\cite{Khodjamirian:1997ub,Bagan:1997bp}. The leading order (LO) corrections up to twist-4 were calculated in Ref.~\cite{Belyaev:1994zk}.
Since the LO twist-3 contribution was found to be large, further improvements were made by calculating the smaller NLO corrections~\cite{Ball:2004ye}. More recent updates, in which the $\overline{\rm MS}$ mass is used in place of the pole mass for $m_b$, can be found in Refs.~\cite{Duplancic:2008zz,Khodjamirian:2011ub} for the $B\to\pi$ case and in Ref.~\cite{Duplancic:2008tk} for the $B_{s}\to K$ case.
Here we will discuss a selection of the more recent LCSR calculations.
For $B\to\pi$, a NNLO ($O(\alpha_s^2\beta_0)$) calculation of $f_+(0)$ was performed, with the result $f_+(0)= 0.262^{+0.020}_{-0.023}$, i.e.\ with uncertainties $\lesssim 9\%$~\cite{Bharucha:2012wy}. This calculation tested the argument that radiative corrections to $f_+f_B$ and $f_B$ should cancel when both are calculated in sum rules (the two-loop contribution to $f_B$ in QCDSR is sizeable). It was found that, despite a $\sim 9\%$ $O(\alpha_s^2\beta_0)$ change to $f_B$, the effect on $f_+(0)$ was only $\sim 2\%$.
More recently, unitarity bounds and extrapolation were used to perform a Bayesian analysis of the form factor $f_+(q^2)$ for $B\to\pi$~\cite{Imsong:2014oqa}.
Prior distributions were taken for the inputs, a likelihood function was constructed
based on fulfilling the sum rule for $m_B$ to $1\%$, and posterior distributions were obtained using Bayes' theorem. The posterior distributions of the inputs differed only for the effective threshold $s_0$, which was pushed to higher values, $s_0=41\pm4$~GeV$^2$ (mainly due to the choice of $m_b$).
Finally, the results were fitted to the BCL parametrisation, giving a central value of $f_+(0) = 0.31\pm 0.02$.
Obtaining $f_+(q^2)$ and the first two derivatives at 0 and 10 ${\rm GeV}^2$ has allowed the extrapolation to
higher $q^2$ using improved unitarity bounds.
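As an aside, the BCL parametrisation used here can be sketched in a few lines of Python; the convention below is the commonly used form with the $B^*$ pole factored out and a truncation constraint built in, and the coefficient values are purely illustrative placeholders rather than fit results.
\begin{verbatim}
import math

M_B, M_PI, M_BSTAR = 5.2797, 0.1396, 5.3247   # GeV

T_CUT = (M_B + M_PI) ** 2
T0 = T_CUT - math.sqrt(T_CUT * (T_CUT - (M_B - M_PI) ** 2))

def z(q2):
    a, b = math.sqrt(T_CUT - q2), math.sqrt(T_CUT - T0)
    return (a - b) / (a + b)

def f_plus(q2, b_coeffs):
    # B* pole times a z series with the BCL endpoint constraint built in
    N = len(b_coeffs)
    zz = z(q2)
    series = sum(bn * (zz ** n - (-1) ** (n - N) * (n / N) * zz ** N)
                 for n, bn in enumerate(b_coeffs))
    return series / (1.0 - q2 / M_BSTAR ** 2)

print(f_plus(0.0, [0.42, -0.50, -0.40]))   # illustrative coefficients only
\end{verbatim}
The pole factor in front implements the removal of the $B^*$ pole discussed in Sec.~\ref{sec:z-remarks}.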
$V_{ub}$ can also be obtained from the channels $B \to\rho/\omega$, for which updated LCSR results were made available in 2015~\cite{Straub:2015ica}. The improvements in these results include: the computation of the full twist-4 (and partial twist-5) two-particle DA contributions to the form factors, together with the determination of certain previously unknown twist-5 DAs in the asymptotic limit;
a discussion of the non-resonant background for vector-meson final states;
the determination and use of updated hadronic matrix elements, specifically the decay constants; and fits with the full error-correlation matrix for the $z$-expansion coefficients, as well as an interpolation to the most recent lattice computation.
The result for $|V_{ub}|$ from $B\to\rho\ell\nu$ has comparable errors to the $B\to\pi$ determination. In general the $B\to V$ results agree with previous exclusive determinations and global fits within errors.
Future prospects for exclusive $V_{ub}$ from LCSR include extending the subset of NNLO corrections calculated, both in $q^2$ and to include all NNLO twist-2 and twist-3 contributions.
It would also be beneficial to perform a Bayesian uncertainty analysis of all $B \to P$ and $D\to P$ LCSRs (along the lines of the aforementioned analysis for $B \to\pi$~\cite{Imsong:2014oqa}).
Finally the measurement of $B_s\to K\ell\nu$ at LHCb/Belle II will allow an important complementary determination of $V_{ub}$ using results from Ref.~\cite{Khodjamirian:2017fxg}.
\subsection{Measuring $|V_{ub}|$ exclusively and the prospects for Belle II}
The most precise exclusive determinations of $|V_{ub}|$ will ultimately come from the most theoretically clean $b \to u \ell^{-} \bar{\nu}_{\ell}$ modes: $\bar{B}^{0} \to \pi^{+} \ell^{-} \bar{\nu}_{\ell}$, $\bar{B}^{0}_{s} \to K^{+} \ell^{-} \bar{\nu}_{\ell}$ and $\Lambda^{0}_{b} \to p \ell^{-} \bar{\nu}_{\ell}$, which involve ground-state hadrons in the final state. The main challenge facing measurements of $|V_{ub}|$ from these modes is the large background from $b\to c \ell^{-} \bar{\nu}_{\ell}$ decays, which are a factor $|V_{cb}|^{2} / |V_{ub}|^{2} \approx 100$ more likely to occur. This background is difficult to separate from the signal given the need to partially reconstruct the missing signal neutrino.
Several measurements of exclusive $\bar{B}^{0} \to \pi^{+} \ell^{-} \bar{\nu}_{\ell}$ decays were made at the $B$ factories CLEO, BaBar and Belle. These measurements fall into two categories, tagged and untagged, which exploit the unique $e^{+}e^{-} \rightarrow \Upsilon(4S) \rightarrow B \bar{B}$ topology and the hermetic detector design of the $B$ factories. In tagged measurements~\cite{Sibidanov:2013rkk} the non-signal $B$ meson in the event is first reconstructed in a number of hadronic modes before selecting the signal pion and lepton. Exploiting the known energies and momenta of the interacting $e^{+}e^{-}$ beams allows the neutrino four-momentum $p_{\nu}$ to be reconstructed and the signal to be extracted using the missing mass squared, $M_{\rm miss}^{2} = p^{2}_{\nu}$. In untagged measurements~\cite{Ha:2010rf,delAmoSanchez:2010af} the signal pion and lepton are first selected with tight requirements to reduce the background from $b\to c \ell^{-} \bar{\nu}_{\ell}$ decays. The neutrino is then reconstructed by inclusively reconstructing the other $B$ in the event as the sum of the remaining tracks and photons. The beam-constrained mass, $M_{bc}$, and the beam-energy difference~\footnote{Here $M_{bc} = \sqrt{E^{*2}_{beam} - P^{*2}_{B}}$ and $\Delta E = E^{*}_{Beam} - E^{*}_{B}$, where $E^{*}_{beam}$ and $E^{*}_{B}$ are the beam and $B$-meson energies in the centre-of-mass frame.} are used as fit variables to simultaneously extract the signal. While tagged measurements give higher purity and better $q^{2}$ resolution, they suffer from a much lower efficiency resulting from the branching fractions and reconstruction efficiencies of the tag modes.
In both tagged and untagged measurements the exclusive $\bar{B}^{0} \to \pi^{+} \ell^{-} \bar{\nu}_{\ell}$ signal is fitted in bins of $q^{2}$ to determine the partial branching fraction in each bin. These measurements, together with LQCD and LCSR predictions, can be used as constraints to simultaneously fit the form factors and determine $|V_{ub}|$. HFLAV performed a fit for $|V_{ub}|$ and the $\bar{B}^{0} \to\pi^{+} \ell^{-} \bar{\nu}_{\ell}$ form factor, $f_{+}(q^{2})$, under a BCL parametrisation, utilising BaBar and Belle tagged and untagged datasets and state-of-the-art theory predictions~\cite{Amhis:2016xyh}. This resulted in the most precise determination of $|V_{ub}|$ to date, $|V_{ub}|=(3.67\pm 0.09_{\rm exp} \pm 0.12_{\rm theo})\times 10^{-3}$, which has a total uncertainty of~4\%.
Untagged and tagged measurements of $|V_{ub}|$ from $\bar{B}^{0} \to\pi^{+} \ell^{-} \bar{\nu}_{\ell}$ decays at Belle II will significantly improve the precision on $|V_{ub}|$. In order to project the reduction in uncertainty, both tagged and untagged analyses were performed on simulated Belle II Monte Carlo. The expected uncertainty on $|V_{ub}|$ was determined for a given luminosity by extracting the partial branching fractions from pseudo-datasets generated from Monte Carlo expectations and fitting these together with LQCD predictions. With $50$~ab$^{-1}$ and future expected improvements in LQCD predictions, the projected uncertainties on $|V_{ub}|$ from $\bar{B}^{0} \to \pi^{+} \ell^{-} \bar{\nu}_{\ell}$ decays are 1.7\% (tagged) and 1.3\% (untagged). The dominant systematic uncertainty for the tagged analysis is the calibration of the tagging efficiency, which is assumed irreducible at $1\%$ on $|V_{ub}|$. For the untagged analysis the dominant systematic uncertainty results from the uncertainty on the number of $B\bar{B}$ pairs, which is assumed irreducible at $0.5\%$. Several systematic uncertainties relating to the branching fractions and form factors of $b \to c \ell^{-} \bar{\nu}_{\ell}$ and $b \to u \ell^{-} \bar{\nu}_{\ell}$ decays are also considered irreducible in the untagged analysis, given its lower purity compared to the tagged analysis.
\subsection{ Measuring $|V_{ub}|/|V_{cb}|$ at LHCb}
\vspace{-1mm}
All $b$-hadron species are accessible at hadron colliders, opening up to LHCb a wide range of possible $|V_{ub}|$ measurements from exclusive
$b \to u$ transitions, while inclusive $|V_{ub}|$ measurements do not seem feasible at the moment.
In high-energy proton-proton collisions, $b\bar{b}$ quark pairs are produced mainly from gluon splitting and hadronize independently;
as a consequence, $b$-hadrons have a wide continuum momentum spectrum, and the reconstruction of semileptonic decays cannot profit from
the beam-energy constraints used at the $B$ factories.
However, thanks to the large boost acquired by the $b$-hadrons, the direction of the $b$-hadron momentum can be determined precisely
from the vector connecting the primary vertex of the proton-proton interaction to the $b$-hadron decay vertex.
By imposing the $b$-hadron mass constraint, the missing neutrino momentum can then be calculated with a two-fold ambiguity.
A small fraction of unphysical solutions arises from the imperfect reconstruction of the vertex positions.
The best way to choose between the two solutions depends on the specific decay mode under study;
the choice can be optimized by considering additional variables related to the decay kinematics with linear regression algorithms~\cite{Ciezarek:2016lqu}.
The precise determination of an absolute branching fraction requires precise knowledge of the total $b$-hadron production rate
and of the experimental detection efficiency, which includes reconstruction, trigger and final-state selection.
To minimize the experimental uncertainty it is preferable to determine ratios of branching fractions, normalizing the $b$-hadron decay mode
under study to a well-known decay mode with a topology as similar as possible.
Choosing a decay of the same $b$-hadron removes the dependence on the production fraction of that specific $b$-hadron.
The first determination of $|V_{ub}|$ at LHCb was done with baryons, measuring the branching fractions
for $\Lambda_b^0\to p \mu^- {\overline{\nu}}$ and $\Lambda_b^0 \to \Lambda_c^+ \mu^- {\overline{\nu}}$ decays~\cite{Aaij:2015bfa}.
What is directly determined is the ratio of the CKM matrix elements
\begin{displaymath}
\frac{|V_{ub}|^2} {|V_{cb}|^2} =\frac { \mathcal{B}(\Lambda_b^0 \to p \mu^- {\overline{\nu}}) } { \mathcal{B}(\Lambda_b^0 \to \Lambda_c^+ \mu^- {\overline{\nu}}) }\times R_{FF}
\end{displaymath}
where $R_{FF}$ is the ratio of the relevant form factors, calculated using LQCD.
The ratio represents a band in the $|V_{ub}|$ versus $|V_{cb}|$ plane and can be converted into a measurement
of $|V_{ub}|$ using existing measurements of $|V_{cb}|$.
Approximately 10\% of the $b$-hadrons produced at the LHC are $\Lambda_b^0$ baryons, and
a clean signal identification is possible by imposing stringent proton-identification requirements.
The large background from $b$-hadron decays with additional charged tracks among the decay products is strongly reduced
by employing isolation criteria based on multivariate machine-learning algorithms.
The signal yields are determined from a $\chi^2$ fit to the corrected-mass distributions of
$\Lambda_b^0 \to p \mu^- {\overline{\nu}}$ and $\Lambda_b^0 \to \Lambda_c^+ \mu^- {\overline{\nu}}$
candidates.
The corrected mass is defined as
$m_{\rm corr}=\sqrt{ m_{h\mu}^2+p_{\perp}^2} +p_{\perp}$,
where $p_{\perp}$ is the momentum of the hadron-$\mu$ pair transverse to the $\Lambda_b^0$ flight direction.
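A minimal sketch of this observable in Python (the four-vector layout $(E,p_x,p_y,p_z)$ and the function name are our own illustrative choices):
\begin{verbatim}
import numpy as np

def m_corr(p4_hmu, flight_dir):
    # corrected mass: sqrt(m_hmu^2 + p_perp^2) + p_perp, with p_perp the
    # h-mu momentum component transverse to the b-hadron flight direction
    E, p3 = p4_hmu[0], np.asarray(p4_hmu[1:], dtype=float)
    n = np.asarray(flight_dir, dtype=float)
    n = n / np.linalg.norm(n)
    p_perp = np.linalg.norm(p3 - np.dot(p3, n) * n)
    m2 = E ** 2 - np.dot(p3, p3)
    return np.sqrt(m2 + p_perp ** 2) + p_perp

# toy input: a 10 GeV h-mu system slightly off the reconstructed flight axis
print(m_corr([10.0, 0.3, 0.0, 9.0], [0.0, 0.0, 1.0]))   # ~4.66 GeV
\end{verbatim}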
The LQCD form factors used in the calculation of $|V_{ub}|$~\cite{Detmold:2015aaa} are most precise in the
kinematic region where $q^2$, the invariant mass squared of the leptonic system, is high.
When the branching fractions of the $b \to u$ ($b \to c$) decays are integrated in the region $q^2>15\,(7)~\text{GeV}^2$, the
theory uncertainty on ${|V_{ub}|}/ {|V_{cb}|}$ is 4.9\%.
This measurement, performed with Run 1 data, gives ${|V_{ub}|}/ {|V_{cb}|} =0.083\pm0.004$ (stat) $\pm0.004$ (syst),
consistent with previous exclusive measurements of the two CKM matrix elements.
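Converting the measured ratio into a value of $|V_{ub}|$ is then a one-line exercise; the exclusive $|V_{cb}|$ input below is an assumed value used purely for illustration.
\begin{verbatim}
ratio = 0.083        # |Vub|/|Vcb| from the Run 1 measurement above
Vcb = 39.9e-3        # assumed exclusive |Vcb| value, illustrative input only
print(ratio * Vcb)   # ~3.3e-3 for |Vub|
\end{verbatim}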
A new measurement of this type is currently under study at LHCb. It uses
$B_s^0 \to K^+\mu^- {\overline{\nu}}$ decays, whose branching fraction is predicted to be of the same order of magnitude as that of
$\Lambda_b^0 \to p \mu^- {\overline{\nu}}$.
The signal selection is challenging due to the large background from partially reconstructed decays of all $b$-hadron species,
but it can exploit the good efficiency and purity of the kaon and muon identification provided by the LHCb detector,
the separation of the $K\mu$ vertex from the primary vertex, and the isolation tools mentioned above.
The chosen normalization mode $B_s^0 \to D_s^+\mu^- {\overline{\nu}}$, $D_s^+ \to K^- K^+\pi^+$,
benefits from the small uncertainty in the $D_s^+$ branching fraction.
The clean identification of this decay mode, despite the large feed-down from $B^0_s$ decays to excited $D_s$ mesons
with unreconstructed neutral particles, has been demonstrated at LHCb with the
measurement of the $B_s^0$ lifetime~\cite{Aaij:2017vqj}.
Form factors for $B_s^0$-meson decays to $K$ and $D_s$ have been calculated in LQCD by several groups~\cite{Bouchard:2014ypa,Flynn:2015mha}.
The calculations are performed in the high-$q^2$ region and extrapolated to the full kinematic range with BGL or BCL $z$-expansions.
Different calculations agree at high $q^2$, but there is currently a disagreement in the value extrapolated to $q^2=0$.
For $B_s^0 \to K^+\mu^- {\overline{\nu}}$ in the low $q^2$ region (up to 12~GeV$^2$) form factors calculated with LCSR
are also available~\cite{Khodjamirian:2017fxg}.
The uncertainties on the experimental measurement of the $B_s^0 \to K^+\mu^- {\overline{\nu}}$ yield increase at high $q^2$
(low kaon momentum) due to the reduced efficiency and the larger background contamination.
It is foreseen to perform the measurement in a few $q^2$ bins, so that different form-factor calculations can be used.
Larger data samples accumulated during the LHCb Upgrade period will allow a differential measurement in finer $q^2$ bins.
Purely leptonic $B^-\to \mu^- {\bar{\nu}}$ decays are not accessible at LHCb. An alternative approach has been tested, searching for the
decay $B^-\to \mu^- {\bar{\nu}} \mu^+ \mu^-$, where a hard photon radiated from the initial state converts into two muons.
This decay has the experimental advantages of additional particles in the final state and of a larger branching fraction,
due to the removal of the helicity suppression. An upper limit on the branching fraction of
$1.6 \times 10^{-8}$ has been set with $4.7~\text{fb}^{-1}$ of integrated luminosity~\cite{Aaij:2018pKa}, making this a possible candidate for a $|V_{ub}|$ measurement
in the LHCb Upgrade period~\cite{LHCbUpgrade2}.
\subsection{Related issues}
\subsubsection{$R_\pi$}
The experimental signature of $B \to \pi \tau \nu_\tau$ is challenging: low in rate due to CKM suppression, this final state can only be isolated from backgrounds using multivariate analysis techniques. Due to the pseudoscalar nature of the pion in the final state, an increased sensitivity to certain new physics models involving scalar exchange particles is expected, and measurements of this branching fraction offer an orthogonal path to probe the anomalies observed in $R(D)$ and $R(D^*)$. The first limit on the branching fraction, using leptonic and one-prong $\tau$ decay modes, was set in Ref.~\cite{Hamer:2015jsa}:
\begin{equation}
\mathcal{B}(B^0\to \pi^- \, \tau^+ \, \nu_\tau) < 2.8 \times 10^{-4} \quad \text{at 95\% CL} \, ,
\end{equation}
using a frequentist method. This result can be converted into a value of $R_\pi = \Gamma(B^0 \to \pi^- \, \tau^+ \, \nu_\tau)/\Gamma(B^0 \to \pi^- \, \ell^+ \, \nu_\ell)$ with $\ell = e,\mu$ of
\begin{equation}\label{eq:rpiMeas}
R_\pi = 1.05 \pm 0.51 \, ,
\end{equation}
which can in turn be compared to the SM prediction of Refs.~\cite{Lattice:2015tia,Bernlochner:2015mya} of
\begin{equation}
R_\pi = 0.641 \pm 0.016 \, .
\end{equation}
Although the current precision is very limited, this result can already exclude parameter space of new physics models, e.g.\ charged Higgs bosons, cf.\ Ref.~\cite{Bernlochner:2015mya}. Albeit a challenging signature, the final state with a charged pion has excellent prospects to be discovered in the large future Belle II data set. A naive extrapolation of Eq.~\ref{eq:rpiMeas} assuming SM couplings results in evidence with 4~ab${}^{-1}$ and discovery with 11~ab${}^{-1}$ of integrated luminosity. The theoretical precision on $R_\pi$ will further increase with progress in lattice QCD and with combined light-lepton and lattice fits (the measured spectra can constrain the low-$q^2$ region, which the lattice has difficulties in predicting reliably).
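This naive extrapolation can be reproduced under two explicit assumptions, namely a purely statistics-limited scaling of the uncertainty in Eq.~\ref{eq:rpiMeas} and a Belle baseline dataset of about $0.711$~ab${}^{-1}$:
\begin{verbatim}
L0, sigma0 = 0.711, 0.51   # assumed baseline (ab^-1) and current R_pi error
R_sm = 0.641               # SM prediction from above

for n_sigma in (3, 5):     # evidence / discovery thresholds
    # solve R_sm / (sigma0 * sqrt(L0 / L)) = n_sigma for L
    L = L0 * (n_sigma * sigma0 / R_sm) ** 2
    print(n_sigma, round(L, 1))   # -> roughly 4 and 11 ab^-1
\end{verbatim}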
\subsubsection{ Experimental status and prospects of $B\to\ell\nu_\ell\gamma$}
The experimental study of $B\to\ell\nu_\ell\gamma$ with $\ell = e, \mu$ is challenging and requires the clean laboratory of an $e^+ \, e^-$ machine: in such a setting the known initial state and the full reconstruction of the second $B$-meson produced in the collision provide the necessary constraints to successfully identify this signature. In addition, to not be overwhelmed by background, only photons at high energies ($\approx 1$~GeV or larger) can be studied this way. The difficulties lie in the low reconstruction efficiency for the second $B$-meson, which has to be reconstructed in hadronic modes with low branching fractions, and in the still sizeable cross-feed from $B \to \pi^0 \, \ell \bar \nu_\ell$ and $B \to \eta \, \ell \bar \nu_\ell$ decays. These two semileptonic processes produce very similar final states, namely $\ell \bar\nu_\ell \gamma\gamma$, but can be reduced by looking for an unassigned second high-energy photon in the collision event under study. To separate $B\to\ell\nu_\ell\gamma$ from such decays successfully, a fit to
\begin{eqnarray}
m_\nu^2 \simeq m_{\rm miss}^2 = \left( p_{B_{\rm sig}} - p_\ell - p_\gamma \right)^2
\end{eqnarray}
can be carried out. Here $p_\ell$ and $p_\gamma$ denote the reconstructed four-vectors of the visible final states of $B\to\ell\nu_\ell\gamma$. The four-vector of the decaying signal $B$-meson, $p_{B_{\rm sig}}$, can be reconstructed using the information from the reconstructed tag-side $B$-meson. Correctly reconstructed signal decays peak at $m_\nu^2 \approx 0$~GeV${}^2$, whereas the dominant semileptonic decays are shifted to higher values due to the absence of the additional photon in the four-vector sum. The sensitivity can be further increased by explicitly reconstructing the semileptonic backgrounds and combining this information into a global analysis. This was the strategy pursued in Ref.~\cite{Gelb:2018end}, which constrained the $\pi^0$ semileptonic background this way. The current experimental limit, with a lower photon energy cut of $1$~GeV, is
\begin{equation}\label{eq:dBFlnug}
\Delta \mathcal{B}(B \to \ell \nu_\ell \gamma ) < 3.0 \times 10^{-6} \quad \text{at 95\% CL} \, .
\end{equation}
The above limit was determined using a flat Bayesian prior.
The discovery prospects for this decay at Belle II are excellent: the improved tracking capabilities, better calorimeter electronics, and the continuous development of modern tagging algorithms such as Ref.~\cite{Keck:2018lcd} will help improve the sensitivity. Extrapolating from the central value and uncertainty of the currently most precise limit of Eq.~\ref{eq:dBFlnug}, $\Delta \mathcal{B}(B \to \ell \nu_\ell \gamma ) = \left(1.4 \pm 1.1 \right) \times 10^{-6}$, evidence should be possible with 5~ab${}^{-1}$ and a discovery is possible with 50~ab${}^{-1}$~\cite{Gelb:49050}. In principle, after discovery, the value of $\left| V_{ub} \right|$ could be extracted from this decay as well, along with the first inverse moment of the light-cone distribution amplitude, $\lambda_B$. An extrapolation from the current sensitivity is shown in Figure~\ref{fig:lnugamma_lb_vub_belle2_prospects}, based on the numbers from Ref.~\cite{Gelb:49050}. The sensitivity to $\left| V_{ub} \right|$ will not be competitive with other methods (leptonic and semileptonic), but the achievable precision on $\lambda_B$ will help measurements and interpretations that rely on our understanding of the light-cone distribution amplitude properties.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/fig_belle2_lnugamma.pdf}
\end{center}
\caption{Projection of the extraction of $\lambda_B$ and $\left| V_{ub} \right|$ for the expected Belle II data sets. The ellipses correspond to the expected uncertainty. The figure is from Ref.~\cite{Gelb:49050}.}
\label{fig:lnugamma_lb_vub_belle2_prospects}
\end{figure}
\subsubsection{ Theoretical progress for $B\to \gamma \ell\nu_\ell$}
The photoleptonic decay $B\to \gamma \ell\nu_\ell$, described by two independent form factors, is the simplest probe of the $B$-meson light-cone distribution amplitudes (LCDAs),
which represent one of the most important inputs in the theory of semileptonic and
nonleptonic $B$-decays based on QCD factorization and LCSRs. The
calculation of the form factors in HQET and at large photon recoil is well developed at leading power
and can be found in Ref.~\cite{Beneke:2011nf}. The
$1/m_b$ and $1/E_\gamma$ power-suppressed effects, expressed in the form
of the soft-overlap part of the form factors, were quantified using
a technique~\cite{Braun:2012kp} based on dispersion relations and quark-hadron duality
(see also Ref.~\cite{Wang:2016qii}). The most advanced calculation of the $B\to \gamma \ell\nu_\ell$ form factors,
including power-suppressed terms, was performed recently~\cite{Beneke:2018wjp}, resulting in a prediction of the decay branching
fraction at $E_\gamma >1.0$~GeV as a function of the key unknown
theoretical quantity: the inverse moment $\lambda_B$ of the $B$-meson LCDA.
An alternative approach~\cite{Wang:2018wfj} calculates the power-suppressed corrections due to photon emission at long distances
in terms of the photon LCDAs in the LCSR framework.
The proof of concept for a lattice QCD calculation of radiative leptonic decays was recently given in Ref.~\cite{Kane:2019jtj};
see also Ref.~\cite{deDivitiis:2019uzm}.
\subsubsection{$B\to \pi\pi \ell\nu_\ell$ decay beyond $\rho$}
Calculations of $B\to \rho$ form factors, both in lattice QCD and from LCSRs, usually
adopt the narrow-$\rho$ approximation and by default ignore the influence of nonresonant
effects and of radially excited $\rho$ states in the mass interval around the $\rho$.
The role of these effects has to be assessed at a quantitative level.
In Refs.~\cite{Hambrock:2015aor,Cheng:2017sfk} a first
attempt was undertaken to calculate more general $B\to\pi\pi$ form factors from LCSRs, using
two-pion LCDAs at low dipion masses and at large recoil.
The currently limited knowledge of these nonperturbative inputs
calls for their further development and also for alternative methods.
In Ref.~\cite{Cheng:2017smj} a different version of LCSRs with $B$-meson LCDAs was derived,
which predicts the convolutions of the
$\bar{B}^0\to \pi^+\pi^0$ form factors in the $P$ wave with the timelike pion
form factor. In the narrow $\rho$-meson limit
these sum rules reproduce analytically the known LCSRs for the $B\to \rho$ form factors. Using data on the pion vector form factor from $\tau$ decays, the finite-width effects and the contribution of excited $\rho$ resonances
to the $B\to\pi\pi$ form factors were found to reach $\sim 20\%$
in the small dipion mass region, where they can be interpreted as a nonresonant ($P$-wave) background to the $B\to\rho$ transition. For a more general analysis
of $B\to \pi\pi\ell\nu_\ell$ decays see e.g.\ Refs.~\cite{Faller:2013dwa,Kang:2013jaa}.
\subsubsection{Remarks on the $z$ expansion}
\label{sec:z-remarks}
The use of the so-called $z$ expansion for form factors has become a standard practice for semileptonic decays, see
Refs.~\cite{Boyd:1997qw,Hill:2006ub} for a pedagogical discussion.
In the workshop several issues concerning it were discussed, in particular its application to baryon form factors.
Form factors which parametrize matrix elements of the form $\langle L|J|H\rangle$ have known analytic structure.
In particular, they are analytic in the complex $t=q^2$ plane outside a cut on the real axis.
The cut starts at some positive $t_\text{cut}$ equal to the invariant mass squared of the lightest state the current $J$ can produce. The domain of analyticity can be mapped onto the unit circle via $z=\left(\sqrt{t_\text{cut}-t}-\sqrt{t_\text{cut}-t_0}\right)/\left(\sqrt{t_\text{cut}-t}+\sqrt{t_\text{cut}-t_0}\right)$, where $t_0$ is a free parameter denoting the point that is mapped to $z=0$. The form factor can then be expanded as a Taylor series in $z$, which is a model-independent parametrization. For heavy-to-light form factors the maximum value of $z$ is related to the distance between $(m_H-m_L)^2$ and $t_\text{cut}$. As a result, increasing $t_\text{cut}$ decreases the maximum value of $z$, leading to a faster convergence of the series.
Naively one might assume that the lightest state is the two-particle state $\bar H L$. This would imply that $t_\text{cut}=(m_H+m_L)^2$, but this is not the case in general. For example, for the proton electric and magnetic form factors $(H=L=p)$ the cut starts at the two-pion threshold and not at the $p\bar p$ threshold. As another example, for one of the $B\to \pi$ form factors ($f_+$) the singularity starts at $m^2_{B^*}$. Since this is a simple pole, it can be easily ``removed'' by considering $(t-m^2_{B^*}) f_+$ as a Taylor series in $z$. For $(t-m^2_{B^*}) f_+$ the cut starts at $(m_B+m_\pi)^2$. If one uses a higher value of $t_\text{cut}$ than the physical one, one faces the danger of trying to expand the form factor in a region where it is not analytic. One of the immediate results of the workshop was the identification of such a problem in the literature. For baryon form factors, e.g.\ $\Lambda_b\to p$, analyses have used the wrong value of $t_\text{cut}=(m_{\Lambda_b}+m_p)^2$, see Ref.~\cite{Khodjamirian:2011jp} and arXiv.org version 2 of Ref.~\cite{Detmold:2015aaa}. In fact, $t_\text{cut}$ for the baryon form factors is the same as for the meson form factors of analogous decays.
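The practical impact of the choice of $t_\text{cut}$ can be made explicit with the conformal map defined above; the sketch below compares the maximal $|z|$ over the semileptonic region of $\Lambda_b\to p$ for the correct threshold $(m_B+m_\pi)^2$ and for the incorrect choice $(m_{\Lambda_b}+m_p)^2$. The larger threshold roughly halves the maximal $|z|$, which mimics faster convergence, but part of the actual cut then lies inside the mapped region.
\begin{verbatim}
import math

def zmap(t, t_cut, t0):
    a, b = math.sqrt(t_cut - t), math.sqrt(t_cut - t0)
    return (a - b) / (a + b)

m_B, m_pi = 5.2797, 0.1396   # GeV
m_Lb, m_p = 5.6196, 0.9383   # GeV
q2max = (m_Lb - m_p) ** 2    # semileptonic endpoint of Lambda_b -> p

for t_cut in [(m_B + m_pi) ** 2, (m_Lb + m_p) ** 2]:
    t0 = t_cut - math.sqrt(t_cut * (t_cut - q2max))  # standard optimized t0
    zmax = max(abs(zmap(0.0, t_cut, t0)), abs(zmap(q2max, t_cut, t0)))
    print(round(t_cut, 2), round(zmax, 3))   # 29.37 -> 0.170 ; 43.01 -> 0.089
\end{verbatim}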
Another issue discussed in the workshop is the use (or lack of use) of bounds on the coefficients of the $z$ expansion. Although the form factor is expressed as an infinite series, in practice the series is truncated after a few terms. One would like to ensure that the value of a physical parameter such as $|V_{ub}|$ is independent of the number of parameters used, by bounding the coefficients. For example, one can use a unitarity bound~\cite{Bourrely:1980gp} or a bound from the heavy quark expansion~\cite{Becher:2005bg}. It seems that currently there is no consistent use of bounds in the extraction of $|V_{ub}|$. As the analysis \cite{Hill:2010yb} shows, this can be a problem as the data improve and the number of necessary parameters increases. This can be especially problematic if one needs to use the $z$-expansion for extrapolation. The community needs to be aware of this issue and at least test that results do not change if bounds are applied to the coefficients.
The unitarity bounds for meson decays such as $B\to \pi$ rely on the fact that for $(t-m^2_{B^*}) f_+$ the cut starts at $(m_B+m_\pi)^2$. For baryon decays such as $\Lambda_b\to p$, unitarity can only constrain the region above $(m_{\Lambda_b}+m_p)^2$; the region between $(m_B+m_\pi)^2$ and $(m_{\Lambda_b}+m_p)^2$ is left unconstrained. Following the analysis of Ref.~\cite{Hill:2010yb}, one might worry that the contribution of the latter region is the dominant one. While considering meson and baryon contributions to the dispersive bounds together might overcome the problem~\cite{Cohen:2019zev}, further study is warranted.
\section{Quark masses and leptonic decays}
\label{dc_qm}
\input dc_qm
\section{Heavy-to-heavy inclusive}
\label{h2h_incl}
\subsection{Heavy Quark Expansion for $b\to c$}
\subsubsection{Review of the Current Status}
The heavy quark expansion (HQE) for the inclusive semileptonic $b\to c$ transitions
starts from a correlation function of the $b\to c$ currents
\begin{eqnarray}
&& d \Gamma \propto \sum_X (2 \pi)^4 \delta^4 (P_B - P_X -q)
\langle B(v) | \bar{b} \gamma_\mu (1-\gamma_5) c | X \rangle \, \langle X | \bar{c} \gamma_\nu (1-\gamma_5) b | B(v) \rangle
\nonumber \\
&& =
\int d^4 x \, e^{iq\cdot x} \, \langle B(v) | \bar{b}(x) \gamma_\mu (1-\gamma_5) c(x) \, \bar{c}(0) \gamma_\nu (1-\gamma_5) b(0) | B(v) \rangle
\nonumber \\
&&= 2 \mbox{ Im}
\int d^4 x \, e^{iq\cdot x} \, \langle B(v) |T \{ \bar{b}(x) \gamma_\mu (1-\gamma_5) c(x) \, \bar{c}(0) \gamma_\nu (1-\gamma_5) b(0) \} | B(v) \rangle
\\
&&= 2 \mbox{ Im}
\int d^4 x \, e^{-i (m_b v - q) \cdot x}
\langle B(v) |T \{ \bar{b}_v(x) \gamma_\mu (1-\gamma_5) c(x) \, \bar{c}(0) \gamma_\nu (1-\gamma_5) b_v(0) \} | B(v) \rangle \nonumber
\end{eqnarray}
with
$$
b(x) = e^{-im_b v\cdot x} b_v (x) \, .
$$
The time-ordered product in the last line can be expanded in an operator product expansion, which for large $m_b$ and $m_c$ yields
an expansion in terms of local operators whose hadronic matrix elements parametrize the nonperturbative input. Within this approach, the
differential rate can be expressed as a series in $1/m$
\begin{eqnarray}
d \Gamma &=& d \Gamma_0 + \left(\frac{\Lambda_{\rm QCD}}{m_b}\right)^2 d \Gamma_2
+ \left(\frac{\Lambda_{\rm QCD}}{m_b}\right)^3 d \Gamma_3 +
\left(\frac{\Lambda_{\rm QCD}}{m_b}\right)^4 d \Gamma_4
\nonumber \\ \nonumber
&& + d \Gamma_5 \left( a_0 \left(\frac{\Lambda_{\rm QCD}}{m_b}\right)^5
+ a_2 \left(\frac{\Lambda_{\rm QCD}}{m_b}\right)^3 \left(\frac{\Lambda_{\rm QCD}}{m_c}\right)^2 \right) \\
&& + ... + d \Gamma_7 \left(\frac{\Lambda_{\rm QCD}}{m_b}\right)^3 \left(\frac{\Lambda_{\rm QCD}}{m_c}\right)^4
\label{HQE1}
\end{eqnarray}
The coefficients $d\Gamma_i$ are given by
\begin{equation}
d \Gamma_i = \sum_k C_i^{(k)} \langle B(v) | O_i^{(k)} | B(v) \rangle
\end{equation}
where the $O_i^{(k)}$ are operators of mass dimension $i+3$, the sum over $k$ runs over all elements of the operator basis,
and the $C_i^{(k)}$ are coefficients that can be calculated in QCD perturbation theory as a series in $\alpha_s (m_b)$.
Note that starting at order $1/m_b^3$ the $b \to c$ HQE exhibits an infrared sensitivity to the charm-quark mass; for the total
rate, $d\Gamma_3$ contains a $\log(m_c^2)$ term, while $d\Gamma_5$ contains inverse powers of $m_c^2$, which are explicitly shown in
Eq.~(\ref{HQE1}).
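To get a feeling for the hierarchy of terms in Eq.~(\ref{HQE1}), one can insert representative values for $\Lambda_{\rm QCD}$ and the quark masses (our own illustrative choices):
\begin{verbatim}
L, mb, mc = 0.5, 4.6, 1.3   # GeV: representative Lambda_QCD, m_b, m_c

print((L / mb) ** 2)                   # ~1.2e-2 : leading 1/mb^2 correction
print((L / mb) ** 3)                   # ~1.3e-3 : 1/mb^3 term
print((L / mb) ** 3 * (L / mc) ** 2)   # ~1.9e-4 : the a_2 term above
print(mc ** 2, L * mb)                 # 1.69 vs 2.3 GeV^2: m_c^2 ~ Lambda m_b
\end{verbatim}
The last line also shows numerically why the power counting $m_c^2 \sim \Lambda_{\rm QCD} m_b$, used below, is appropriate.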
The leading term $d \Gamma_0$ is the partonic result which turns out to be independent of any unknown hadronic matrix element.
This term is fully known (triple differential rate) at tree level, at order $\alpha_s$~\cite{Trott:2004xc,Aquila:2005hq} and
order $\alpha_s^2$~\cite{Aquila:2005hq,Pak:2008qt,Biswas:2009rb,Melnikov:2008qs,Gambino:2011cq}.
Due to heavy quark symmetry, there is no term $d\Gamma_1$ and the leading power corrections appear at order $1/m^2$.
These are given in
terms of two non-perturbative matrix elements
\begin{align}
2 M_B \mu_\pi^2 &= - \langle B (v)|\bar{b}_v (iD)^2 b_v | B (v) \rangle
\label{MuPi} \\
2 M_B \mu_G^2 &= -i\langle B (v)|\bar{b}_v \sigma_{\mu \nu} (iD^\mu )( iD^\nu) b_v| B (v) \rangle
\label{MuG}
\end{align}
The coefficients of these two matrix elements are known to order
$\alpha_s$~\cite{Becher:2007tk,Alberti:2012dn,Alberti:2013kxa,Mannel:2014xza,Mannel:2015jka}.
At order $1/m_b^3$ there are again only two matrix elements, which are given by
\begin{align}
2 M_B \rho_D^3 &= - \langle B (v) |\bar{b}_v (iD_\mu) (ivD) (iD^\mu) b_v | B (v) \rangle \\
2 M_B \rho_{LS}^3 &= -i\langle B (v) |\bar{b}_v\sigma_{\mu \nu} (iD^\mu ) (ivD) ( iD^\nu) b_v| B (v) \rangle
\end{align}
For these matrix elements only the tree level coefficients are known.
Furthermore, if the matrix elements are defined as above\footnote{More commonly used definitions differ by $O(1/m_b)$ terms.},
the coefficient of $\rho_{LS}^3 $ vanishes for the total rate,
which is related to reparametrization invariance of the HQE~\cite{Mannel:2018mqv}.
The HQE predictions of the inclusive semileptonic rates depend on $m_b$ and $m_c$, and the size of the perturbative QCD corrections
depends on the choice of the quark-mass scheme. The quark masses are discussed in detail in Sec.~\ref{sec:qm}, to which we refer the
reader.
\subsubsection{Higher power corrections}
At order $1/m_b^4$ and higher, the number of independent nonperturbative parameters starts to proliferate. In addition, due to the
dependence on powers of $1/m_c$, the power counting needs to be redefined: since parametrically
$m_c^2 \sim \Lambda_{\rm QCD} m_b$, the term $d \Gamma_5\, a_2$ has to be counted as part of $d \Gamma_4$, see Eq.~(\ref{HQE1}).
Thus the full complexity of the dimension-8 operators already enters an analysis of the $1/m_b^4$ contribution.
We shall not list the independent matrix elements appearing at orders $1/m_b^4$ and $1/m_b^5$; rather, we refer the reader to the list given in Refs.~\cite{Mannel:2010wj,Heinonen:2014dxa}.
However, the proper counting of the number of independent operators has been settled only recently~\cite{Kobach:2017xkw},
using the method of Hilbert series.
It turns out that at tree level there are 9 dimension-7 operators~\cite{Mannel:2010wj}, while QCD
corrections increase this number to 11~\cite{Kobach:2017xkw}.
The reason is very simple. At order $1/m_b^4$ we have operators with four covariant derivatives, which can be written as
$ \langle \bm{E}^2 \rangle $ (chromoelectric field squared) and
$ \langle \bm{B}^2 \rangle $ (chromomagnetic field squared)
where $\bm{E}$ and $\bm{B}$ are both color-octets. Thus the combination appearing at tree level is
\begin{equation}
\bm{E}^2 = \bm{E}^a \cdot \bm{E}^b \,\, T^a T^b \quad \mbox{and likewise for} \, \bm{B}^2 \, .
\end{equation}
However, the symmetric product of $T^a$ and $T^b$ contains both a singlet and an octet component,
\begin{equation}
\frac{1}{2} \left( T^a T^b + T^b T^a \right) = \frac{1}{2N_c}\, \delta^{ab} + \frac{1}{2}\, d^{abc}\, T^c \, .
\end{equation}
The two terms on the right-hand side acquire different coefficients once QCD corrections are taken into account, and thus become
independent operators. Although this observation~\cite{Kobach:2017xkw} is correct, it has no impact unless QCD corrections are
considered at order $1/m_b^4$.
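This decomposition can be checked numerically; the following sketch constructs the fundamental SU(3) generators from the Gell-Mann matrices and verifies the identity above for all pairs $(a,b)$.
\begin{verbatim}
import numpy as np

# Gell-Mann matrices; fundamental SU(3) generators are T^a = lambda^a / 2
lam = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]] + [np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)]
T = [m / 2 for m in lam]

# totally symmetric constants: d^{abc} = 2 Tr({T^a, T^b} T^c)
d = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        for c in range(8):
            d[a, b, c] = 2 * np.trace((T[a] @ T[b] + T[b] @ T[a]) @ T[c]).real

# check (T^a T^b + T^b T^a)/2 = delta^{ab}/(2 Nc) * 1 + (1/2) d^{abc} T^c
ok = all(
    np.allclose(
        (T[a] @ T[b] + T[b] @ T[a]) / 2,
        (a == b) / 6 * np.eye(3) + sum(d[a, b, c] * T[c] for c in range(8)) / 2,
    )
    for a in range(8) for b in range(8)
)
print(ok)   # True
\end{verbatim}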
The same argument explains the different counting at order $1/m_b^5$ where we have 18 parameters at tree
level~\cite{Mannel:2010wj}, while the general case involves 25 matrix elements~\cite{Kobach:2017xkw}.
Clearly the number of independent parameters appearing at orders $1/m_b^{4,5}$ is too large to extract them from experiment,
even if the data become very precise in the future. One therefore has to rely on some additional theoretical input,
which necessarily introduces some model dependence.
A systematic approach has been proposed in Ref.~\cite{Mannel:2010wj} and refined in Ref.~\cite{Heinonen:2014dxa}:
it is based on the
``lowest-lying state saturation Ansatz'' (LLSA) and corresponds to a naive factorization of the matrix elements.
The LLSA allows us to write
all matrix elements appearing at orders $1/m_b^4$ and $1/m_b^5$ in terms of four parameters, which are $\mu_\pi^2$ and $\mu_G^2$
(see Eqs.~(\ref{MuPi}) and~(\ref{MuG})) and $\epsilon_{1/2}$ and $\epsilon_{3/2}$, where $\epsilon_j$ are the excitation energies of the lowest
orbitally excited spin-symmetry doublets, with $j$ the spin of the light degrees of freedom. Note that in this setup $\rho_D$ and
$\rho_{LS}$ can also be computed, which may serve as a check, since these parameters can also be extracted from experiment.
The LLSA has been used to study the impact of the $1/m_b^{4,5}$ terms on the extraction of $|V_{cb}|$ in
Ref.~\cite{Gambino:2016jkc}.
It turns out that, even if a generous margin is allowed for the uncertainties, the shift in the extracted $|V_{cb}|$ remains well
below~1\%; with the default choices of Ref.~\cite{Gambino:2016jkc} a shift of $-0.25\%$ is found.
Recently the impact of reparametrization invariance on the HQE has been re-investigated.
In Refs.~\cite{Mannel:2018mqv,Fael:2018vsp} it has been shown that the number of independent parameters at higher orders can be
reduced by reparametrization invariance for the total rate and the $q^2$ moments. While the number of HQE parameters up to order
$1/m_b^2$ is still two, there is only one parameter at order $1/m_b^3$, since the spin-orbit term can be absorbed into $\mu_G^2$.
At order $1/m_b^4$ there will be only four parameters, which opens up the possibility of constraining the higher-dimensional matrix
elements directly with experimental data, at least if Belle~II is able to measure several moments of the $q^2$ distribution.
\subsubsection{Heavy Quark Expansion for $B \to X_c \tau \bar{\nu}$}
The recent data on the exclusive decays $B \to D^{(*)} \tau \bar{\nu}$ indicate that the branching ratios of these channels
lie above the predictions of the SM. This issue is discussed in detail in Sec.~\ref{sec:Vcb-RD-RDs}, but we may also consider the
inclusive decay $B \to X_c \tau \bar{\nu}$ for which the HQE provides us with a precise prediction.
While a new measurement of $B \to X_c \tau \bar{\nu}$
has to wait until Belle II has collected a sufficient data sample, we may compare with a measurement
performed at LEP resulting in~\cite{PDG2018}
$$
{\rm Br}(b\mbox{-admix} \to X \tau \bar{\nu}) = (2.41 \pm 0.23)\%
$$
where $b\mbox{-admix}$ refers to the $b$-hadron admixture produced in $Z$ decays. Since to leading order the
inclusive semitauonic branching fractions of all $b$-hadrons are the same, we may take this as an estimate of ${\rm Br}(B \to X_c \tau \bar{\nu})$.
This has to be compared with the measured sum of $B \to D \tau \bar{\nu}$ and $B \to D^{*} \tau \bar{\nu}$
$$
{\rm Br}(B \to [D + D^*] \tau \bar{\nu}) = (2.68 \pm 0.16)\%,
$$
indicating that the two ground states tend to oversaturate the inclusive decay.
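The size of this oversaturation follows directly from combining the two numbers above:
\begin{verbatim}
import math

incl, d_incl = 2.41, 0.23   # LEP b-admixture semitauonic branching fraction (%)
excl, d_excl = 2.68, 0.16   # measured D + D* sum (%)

diff = excl - incl
err = math.hypot(d_incl, d_excl)
print(round(diff, 2), round(err, 2), round(diff / err, 1))  # 0.27 0.28 1.0
\end{verbatim}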
The decay $B \to X_c \tau \bar{\nu}$ has been studied in the HQE~\cite{Ligeti:2014kia} up to $1/m_b^2$ and $\alpha_s^2$ in the $1S$ scheme, resulting in
$$
{\rm Br}(B^- \to X_c \tau \bar{\nu}) = (2.42 \pm 0.05)\% .
$$
More recently, sizable effects of order $1/m_b^3$ have been found~\cite{Mannel:2017jfk}; this calculation, performed in the kinetic scheme but without the $O(\alpha_s^2)$ contributions, yields
$$
{\rm Br}(B^- \to X_c \tau \bar{\nu}) = (2.26 \pm 0.05)\% .
$$
The additional inclusion of $O(\alpha_s^2)$ effects in the kinetic scheme appears to lead to a very similar
value~\cite{Bhattacharya:2018kig}.
These HQE calculations are compatible with the LEP measurement.
\begin{figure}
\centering
\includegraphics[scale=0.6]{BR-incl-contour-plot.pdf}
\caption{Fit of the data to the parameters $\alpha$ and $\beta$. The green ellipse represents the fit result for the
exclusive channels, the green band represents the LEP measurement, and the red band the SM result obtained from the HQE.}
\label{fig:albe}
\end{figure}
However, the LEP measurement is not very precise and thus leaves room for new physics contributions. In the context
of $R(D^{(*)})$ many new physics scenarios have been discussed, and we will not repeat any of this here.
Instead we use a very simple ansatz to explore qualitatively the effect of new physics.
To this end, we add an additional interaction of the form
\begin{equation} \label{HNP}
\mathcal{H}_{\rm NP} = \frac{G_F V_{cb}}{\sqrt 2}
\left( \alpha \, O_{V+A} + \beta \, O_{S-P} \right)
\end{equation}
with
\begin{eqnarray}
O_{V+A} & = & \left( \bar c \gamma_\mu (1 + \gamma_5) b \right)
\left(\bar \tau \gamma^\mu (1 - \gamma_5) \nu \right),
\label{eq:operators} \\
O_{S-P} & = & \left( \bar c (1 - \gamma_5) b \right)
\left(\bar \tau (1 - \gamma_5) \nu \right)
\nonumber
\end{eqnarray}
We may fit the two parameters $\alpha$ and $\beta$ to the data on $B \to D^{(*)} \tau \bar{\nu}$ and find
$\alpha = -0.15 \pm 0.04$ and $\beta = 0.35 \pm 0.08$~\cite{Mannel:2017jfk}. This may be inserted back into the calculation of the
total rate for $B \to X_c \tau \bar{\nu}$ for which we find
\begin{equation}
{\rm Br}(B^- \to X_c \tau \bar{\nu}) = (3.15 \pm 0.19)\%
\end{equation}
indicating a significant shift of the inclusive rate. This result is presented graphically in Fig.~\ref{fig:albe} and indicates that, generically,
the exclusive and inclusive data are in tension, unless the new physics is such that it almost cancels in the inclusive rate.
\subsection{Inclusive processes in lattice QCD}
Until recently, the application of lattice QCD has been limited to the
calculation of form factors of exclusive processes such as
$B\to D^{(*)}\ell\nu$ or $B\to\pi\ell\nu$,
for which the initial and final states contain a single hadron.
A first proposal to evaluate the structure
functions relevant to the inclusive decays $B\to X_{u,c}\ell\nu$ in lattice QCD
was put forward in Ref.~\cite{Hashimoto:2017wqo}.
As mentioned above,
the differential decay rate for the inclusive decay
$B(p_B)\to X_c(p_X)\ell(p_\ell)\nu(p_\nu)$
may be written in terms of the structure functions of $W_{\mu\nu}(p_B,q)$,
which contains the sum over all possible final states:
\begin{equation}
W_{\mu\nu}(p_B,q) = \sum_X (2\pi)^3\delta^4(p_B-q-p_X)
\frac{1}{2M_B}
\langle B(p_B)|J_\mu^\dagger|X(p_X)\rangle
\langle X(p_X)|J_\nu|B(p_B)\rangle,\nonumber
\end{equation}
where $J_\mu$ stands for the $b\to c$ weak current and
$q^\mu=(p_\ell+p_\nu)^\mu$ is the momentum transfer.
The optical theorem relates this to the forward scattering matrix
element $T_{\mu\nu}(p_B,q)$,
\begin{equation}
T_{\mu\nu}(p_B,q) = i\int d^4\! x\, e^{-iqx}
\frac{1}{2M_B}
\langle B(p_B)|T\{J_\mu^\dagger (x) J_\nu(0)\}|B(p_B)\rangle,
\end{equation}
as $-(1/\pi)\mathrm{Im} T_{\mu\nu}=W_{\mu\nu}$, see for instance
\cite{Manohar:1993qn,Blok:1993va}.
One can calculate these forward matrix elements on the lattice as long as the
momenta $p_B$ and $q$ are in a region where no singularity develops.
This means that the lattice calculation is possible in an unphysical
kinematical region where no real decay is allowed.
This kinematical region corresponds to the situation where the energy $p_X^0$
given to the final charm system is too small to create
real states such as the $D$ and $D^*$ mesons or the $D\pi$ continuum
states.
The connection to the physical region can be established by means of
a Cauchy integral in the complex $p_X^0$ plane.
An alternative method is to reconstruct the spectral density
(of the states $X$ appearing in the sum)
directly from the lattice correlation function
\cite{Hansen:2017mnd}.
An exploratory lattice calculation has been performed at relatively
light $b$ quark masses
\cite{Hashimoto:2017wqo}.
The numerical results suggest that the matrix element is nearly
saturated by the ground-state $D^{(*)}$ meson contribution
in the zero-recoil limit.
Since the non-perturbative lattice calculation can be performed at
kinematical points away from the resonance region, it may also be used
to validate the heavy quark expansion (HQE) method.
So far, the HQE calculation is available in the unphysical region only at
tree level, $O(\alpha_s^0)$.
The one-loop and two-loop corrections have been calculated for the
differential decay rate; they still have to be transformed to the unphysical kinematical point
by applying the Cauchy integral. Such work is in progress.
As already mentioned, the lattice calculation can only be performed in the
unphysical kinematical region, and its comparison with the experimentally
observed $B$ decay distribution is not straightforward. One should first
perform an integral of the experimental data with an appropriate weight,
in order to reproduce the Cauchy integral in the complex $p_X^0$ plane; this
requires the experimental data as a function of the two kinematical
variables $q^2$ and $p_B\cdot q$. Even then, the whole
complex plane is not covered, and one needs to supplement the data with a perturbative QCD
calculation for the region $p_X^0>p_B^0$. The perturbative expansion
in this unphysical region should be well behaved, but the details need
to be investigated further.
More recently, a different approach that in principle allows one to calculate the total decay rate has been
proposed~\cite{Gambino:2020crt}.
In this method, the integral corresponding to the phase space of
$B\to X_c\ell\nu$
is performed directly, rather than the Cauchy integral; as a result,
information about the
unphysical kinematical region is no longer necessary. A first comparison between the HQE
and the lattice calculation, performed with a small $m_b\sim 2.7$~GeV, shows good agreement, despite large uncertainties.
This method may open an opportunity to
compute the inclusive decay rate fully non-perturbatively using
lattice QCD, and
can also be applied to calculate various moments of the $B\to X_c\ell\nu$ decays, as well as
the more challenging $B\to X_u\ell\nu$ decays.
\subsection{HQE matrix elements from lattice QCD}
\label{sec:HQE-LQCD}
The same hadronic parameters appearing in the OPE analysis of inclusive semileptonic $B$-meson decays appear also in the HQE of
the pseudoscalar (PS) and vector (V) heavy-light meson masses.
Therefore, one can try to determine them from a lattice calculation of the latter at different values of the heavy quark mass.
After the pioneering work of Ref.~\cite{Kronfeld:2000gk}, new unquenched results have been presented
recently~\cite{Bazavov:2018omf,Gambino:2017vkx}.
These papers are mentioned in Sec.~\ref{sec:qm} for their results on quark masses.
In Ref.~\cite{Gambino:2017vkx} a precise lattice computation of PS and V heavy-light meson masses has been performed for
heavy-quark masses ranging from the physical charm mass up to $\simeq 4$ times the physical $b$-quark mass, adopting the gauge
configurations generated by the European Twisted Mass Collaboration (ETMC) with $N_f = 2+1+1$ dynamical quarks at three values
of the lattice spacing ($a \simeq 0.062, 0.082, 0.089$~fm) with pion masses in the range $M_\pi \simeq 210$--450~MeV.
The heavy-quark mass is simulated directly on the lattice up to $\simeq 3$ times the physical charm mass.
The interpolation to the physical $b$-quark mass is obtained with the ETMC \emph{ratio
method}~\cite{Blossier:2009hg,Bussone:2016iua}, based on ratios of the spin-averaged meson masses computed at nearby heavy-quark
masses, and the kinetic scheme is adopted.
The extrapolation to the physical pion mass and to the continuum limit yields $m_b^{\rm kin}(1~\mbox{GeV}) = 4.61 (20)$~GeV,
corresponding to $\overline{m}_b(\overline{m}_b) = 4.26 (18)$~GeV in the $\overline{\rm MS}$ scheme, in agreement with other
$m_b$ determinations; see Sec.~\ref{sec:qm}.
The ratio method is applied above the physical $b$-quark mass to provide heavy-light meson masses towards the static point.
The lattice data are analyzed in terms of the HQE and the matrix elements of dimension-4 and dimension-5 operators are determined
with good precision, namely:
\begin{align}
\label{eq:dim4_final}
\overline{\Lambda} &= 0.552 ~ (26) ~\text{GeV} , \\
\label{eq:dim5_1_final}
\mu_\pi^2 &= 0.321 ~ (32)~\text{GeV}^2 , \\
\label{eq:dim5_2_final}
\mu_G^2(m_b) &= 0.253 ~ (25)~\text{GeV}^2 .
\end{align}
The size of two combinations of the matrix elements of dimension-6 operators is also determined:
\begin{align}
\label{eq:dim6_1_final}
\rho_D^3 - \rho_{\pi \pi}^3 - \rho_S^3 &= 0.153 ~ (34) ~\text{GeV}^3 ~ , \\
\label{eq:dim6_2_final}
\rho_{\pi G}^3 + \rho_A^3 - \rho_{LS}^3 &= -0.158 ~ (84) ~\text{GeV}^3 ~ ,
\end{align}
with the full covariance matrix provided in Ref.~\cite{Gambino:2017vkx}.
Although all the above results refer to the asymptotic limit, namely to infinitely heavy quarks, and differ from the matrix
elements extracted in the inclusive fits described above by higher power corrections, they are found to be mutually consistent.
In the future lattice results could be used as additional constraints in the semileptonic fits. Another interesting future
application concerns the heavy-quark sum rules for the form factor entering the semileptonic decay $B \to D^* \ell \nu$ at
zero-recoil; here the non-local correlators $\rho_{A, S, \pi \pi, \pi G}$ play an important role; see Ref.~\cite{Gambino:2012rd}.
The analysis by the Fermilab, MILC and TUMQCD Collaborations~\cite{Bazavov:2018omf}, based on~\cite{Brambilla:2017hcq}, employs
only PS mesons and the minimal renormalon subtracted (MRS) heavy quark mass. The results are obtained using MILC ensembles with
five values of lattice spacing ranging from approximately 0.12~fm to 0.03~fm, enabling good control over the continuum
extrapolation, and both physical and unphysical values of the two light and the strange sea-quark masses.
This leads to
\begin{equation}
\overline{\Lambda}_{\rm MRS}= 0.555~(31)~\text{GeV} ,
\end{equation}
while power corrections are controlled by the difference $\mu_\pi^2-\mu_G^2(m_H)$.
Assuming $\mu_G^2(m_b)=0.35(7)~\mbox{GeV}^2$ as a prior, the authors find $\mu_\pi^2=0.05(21)~\mbox{GeV}^2$.
Note that the definition of $\mu_\pi^2$ used here still has a renormalon ambiguity of order $\Lambda_{\rm QCD}^2$.
\subsection{Experimental status}
\subsubsection{Measurements of inclusive observables in $B\to X_c\ell\nu$}
Several experiments have measured the partial branching fraction of the
inclusive decay~$B\to X_c\ell\nu$ ($\ell=e,\mu$) as a function of the lower
threshold on the lepton momentum ($E_\mathrm{cut}$), or other inclusive
observables in this decay such as the moments of the lepton energy and of the
$X_c$~mass distribution. Available measurements are listed in
Table~\ref{tab:mom_exp}, where it should be noted that the most recent
experimental result is from the year 2010.
\begin{table}
\caption{List of available measurements of inclusive moments in
$B\to X_c\ell\nu$. We also specify the types of the lepton energy
$E_\ell$ and hadronic mass $M(X_c)$ spectrum moments which have been
determined in the respective publications. The zeroth order moment of the
lepton energy spectrum ($n=0$) refers to a measurement of the partial
branching fraction.} \label{tab:mom_exp}
\begin{center}
\begin{tabular}{lll}
\hline
Experiment &
Lepton spectrum moments $\langle E^n_\ell\rangle$ &
Hadron spectrum moments $\langle M^{2n}_X\rangle$\\
\hline \hline
BaBar &
$n=0,1,2,3$~\cite{Aubert:2009qda,Aubert:2004td} &
$n=1,2,3$~\cite{Aubert:2009qda}\\
Belle &
$n=0,1,2,3$~\cite{Urquijo:2006wd} &
$n=1,2$~\cite{Schwanda:2006nf}\\
CDF & &
$n=1,2$~\cite{Acosta:2005qh}\\
CLEO & &
$n=1,2$~\cite{Csorna:2004kp}\\
DELPHI &
$n=1,2,3$~\cite{Abdallah:2005cx} &
$n=1,2$~\cite{Abdallah:2005cx}\\
\hline
\end{tabular}
\end{center}
\end{table}
The Belle collaboration has measured spectra of the lepton energy~$E_\ell$ and
the hadronic mass $M(X_c)$ in $B\to X_c\ell\nu$ using 152~million
$\Upsilon(4S)\to B\bar B$ events~\cite{Urquijo:2006wd,Schwanda:2006nf}. These
analyses proceed as follows: first, the decay of one $B$~meson in the event is
fully reconstructed in a hadronic mode ($B_\mathrm{tag}$). Next, the
semileptonic decay of the second $B$~meson in the event ($B_\mathrm{sig}$) is
identified by searching for a charged lepton amongst the remaining particles
in the event. In Ref.~\cite{Urquijo:2006wd}, the electron momentum spectrum in the
$B$~meson rest frame is measured down to 0.4~GeV. In Ref.~\cite{Schwanda:2006nf},
all remaining particles in the event, excluding the charged lepton
(electron or muon), are combined to reconstruct the hadronic
$X$~system. The $M(X_c)$ spectrum is measured for different lepton energy
thresholds in the $B$~meson rest frame. The observed spectra are distorted by
resolution and acceptance effects and cannot be used directly to obtain the
moments. In the Belle analyses, acceptance and finite resolution
effects are corrected by unfolding the observed spectra using the
Singular Value Decomposition (SVD) algorithm~\cite{Hocker:1995kb}. Belle
measures the energy moments $\langle E^k_\ell\rangle$ for $k=0,1,2,3,4$ and
minimum lepton energies ranging from 0.4 to 2.0~GeV. Moments of the hadronic
mass~$\langle M^k_X\rangle$ are measured for $k=2,4$ and minimum lepton
energies from 0.7 to 1.9~GeV.
\begin{figure}
\centering
\includegraphics[width=0.45\columnwidth]{figures/belle_eel_bp.pdf}
\includegraphics[width=0.43\columnwidth]{figures/belle_mx.pdf}
\caption{Belle measurements of the electron energy (left) and hadronic mass
(right) spectra~\cite{Urquijo:2006wd,Schwanda:2006nf}.}
\end{figure}
BaBar has measured the lepton energy and hadronic mass moments in
$B\to X_c\ell\nu$~\cite{Aubert:2004td,Aubert:2009qda}. Furthermore, first
measurements of combined hadronic mass and energy moments of the form
$\langle n^k_X\rangle$ with $k=2,4,6$ are presented. They are
defined as $n^2_X=M^2_X-2\widetilde\Lambda E_X+\widetilde\Lambda^2$,
where $M_X$ and $E_X$ are the mass and the energy of the $X$~system and the
constant $\widetilde\Lambda$ is taken to be 0.65~GeV. The most recent analysis
is the one of hadronic mass $M(X_c)$ moments, which are determined using a
data sample of 232 million $\Upsilon(4S)\to B\bar B$
events~\cite{Aubert:2009qda}. The experimental method is similar to
the Belle analysis discussed previously, \emph{i.e.}, one $B$~meson is
fully reconstructed in a hadronic mode and a charged lepton with
momentum above 0.8~GeV in the $B$~meson frame identifies
the semileptonic decays of the second $B$. The remaining particles
in the event are combined to reconstruct the hadronic system $X$.
The resolution in $M(X_c)$ is improved by a kinematic fit to the whole event,
taking into account 4-momentum conservation and constraining the missing
mass to zero. To derive the true moments from the reconstructed ones, BaBar
applies a set of linear corrections. These corrections depend on
the charged particle multiplicity of the $X$~system, the normalized missing
mass, $E_\mathrm{miss}-p_\mathrm{miss}$, and the lepton momentum. In this way,
BaBar measures the moments of the hadronic mass spectrum up to
$\langle M^6_X\rangle$ for minimum lepton energies ranging from 0.8 to 1.9~GeV.
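For concreteness, the combined moments can be formed event by event from the reconstructed mass and energy of the $X$ system. The following minimal sketch (our illustration, with purely toy values) shows the arithmetic, using $\widetilde\Lambda=0.65$~GeV as in the BaBar definition:
\begin{verbatim}
import numpy as np

LT = 0.65                                   # Lambda-tilde in GeV

def nX2(mX, eX):
    return mX**2 - 2.0 * LT * eX + LT**2    # n_X^2 event by event

mX = np.array([1.87, 2.01, 2.31, 2.46])     # toy reconstructed M_X (GeV)
eX = np.array([2.10, 2.25, 2.55, 2.70])     # toy reconstructed E_X (GeV)

x = nX2(mX, eX)
print({2*j: np.mean(x**j) for j in (1, 2, 3)})   # <n_X^2>, <n_X^4>, <n_X^6>
\end{verbatim}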
\subsubsection{Determination of $|V_{cb}|$ from inclusive decays}
The Heavy Flavor Averaging Group (HFLAV) has used the measurements
discussed in the previous section to determine $|V_{cb}|$ from a fit of HQE predictions to
inclusive observables~\cite{Amhis:2016xyh}. Using expressions in the so-called
kinetic
scheme~\cite{Benson:2003kp,Gambino:2004qm,Gambino:2011cq,Alberti:2013kxa,Alberti:2014yda}
and a precise determination of the $c$-quark mass,
$m_c^{\overline{\rm MS}}(3~{\rm GeV})=0.986\pm 0.013$~GeV~\cite{Chetyrkin:2009fv},
as external input, HFLAV obtains
\begin{eqnarray}
|V_{cb}| & = & (42.19\pm 0.78)\times 10^{-3}~, \\
m_b^{\rm kin} & = & 4.554\pm 0.018~{\rm GeV}~, \\
\mu^2_\pi & = & 0.464\pm 0.076~{\rm GeV^2}~.
\end{eqnarray}
The $\chi^2$ of the fit is 15.6 for $43$ degrees of freedom. Using
expressions in the so-called $1S$ scheme~\cite{Bauer:2004ve,Bauer:2002sh} the same set of
measurements results in
\begin{eqnarray}
|V_{cb}| & = & (41.98\pm 0.45)\times 10^{-3}~, \\
m_b^{1S} & = & 4.691\pm 0.037~{\rm GeV}~, \\
\lambda_1 & = & -0.362\pm 0.067~{\rm GeV^2}~,
\end{eqnarray}
with a fit $\chi^2$ of 23.0 for $59$ degrees of freedom. This analysis
uses measurements of the photon energy moments in $B\to
X_s\gamma$~\cite{Aubert:2005cua,Aubert:2006gg,Limosani:2009qg,Chen:2001fja}
to constrain the $b$-quark mass and does not include higher order corrections of $O(\alpha_s^2)$ and $O(\alpha_s/m_b^2)$.
As mentioned above, the semileptonic moments have also been analysed including higher order power corrections
estimated using the LSSA~\cite{Gambino:2016jkc}.
In this case a kinetic scheme fit to the experimental data that additionally includes
a constraint $m_b^{\rm kin} = 4.550(42)$~GeV from the PDG (after scheme conversion) leads to a slightly more precise value,
\begin{eqnarray}
|V_{cb}| & = & (42.00\pm 0.64)\times 10^{-3}~.
\end{eqnarray}
\section{Heavy-to-light inclusive}
\label{h2l_incl}
\subsection{Introduction and theoretical background}
Inclusive semileptonic heavy to light decays can in principle be analyzed similarly to $B\to X_c \ell\nu$ by using a local OPE.
In practice, due to the large charm background, experimental cuts are generally imposed and reduce the ``inclusivity" of the
theoretical prediction.
In particular, the local OPE does not converge well when the invariant mass of the hadronic system is $M_X\lesssim M_D$.
In such a case the decay spectra are described using a
``non-local" OPE~\cite{Neubert:1993ch,Neubert:1993um,Bigi:1993ex}, where perturbative coefficients are convoluted with
non-perturbative ``Shape Functions" (SFs), the $B$ meson analogs of parton distribution functions.
In this SF region, the perturbative coefficients themselves can be factorized into ``hard" and ``jet" pieces, where the former has
a typical scale of $m_b$ and the latter has a typical scale of $\sqrt{m_b\Lambda_{\mbox{\scriptsize QCD}}}$.
In the infinite mass limit $m_b\to\infty$ there is a single non-perturbative SF.
Power corrections start at $1/m_b$ and include multiple ``subleading"
SFs~\cite{Lee:2004ja,Bosch:2004cb,Beneke:2004in,Bauer:2001mh,Leibovich:2002ys}.
One can classify the terms based on their suppression by $1/m_b$ and $\alpha_s$.
The perturbative components of the leading power term are known at $O(\alpha_s^2)$~\cite{Gambino:2006wk,Bonciani:2008wf,%
Asatrian:2008uk,Beneke:2008ei,Bell:2008ws,Brucherseifer:2013cu}.
The $1/m_b$ power corrections include terms convoluted with the leading power SF whose perturbative parts are known at
$O(\alpha_s)$~\cite{Paz:2009ut} and terms convoluted with subleading SFs whose perturbative parts are known at
$O(\alpha_s^0)$~\cite{Lee:2004ja,Bosch:2004cb,Beneke:2004in}.
At this order one can still use subleading functions of one light-cone variable. The inclusion of $O(\alpha_s)$ contributions
of subleading SFs requires functions of multiple light-cone momenta in analogy to higher twist effects in Deep Inelastic Scattering~\cite{Ellis:1982cd}.
Schematically, in the SF region we have the factorization formula
\begin{equation}\label{factorization}
d\Gamma\sim H\cdot J\otimes S+\frac{1}{m_b}\sum_{i}\,h\cdot J_0\otimes s_i
+\frac{1}{m_b}\sum_{k}\,h\cdot j_k\otimes S+\,O\left(\frac{1}{m_b^2}\right)\,,
\end{equation}
where $H$ is the leading power hard function, $J$ is the leading power jet function, both known at $O(\alpha_s^2)$,
$J_0$ is the $O(\alpha_s^0)$ part of $J$, $h=1+O(\alpha_s)$, $s_i$ are given in
Refs.~\cite{Lee:2004ja,Bosch:2004cb,Beneke:2004in}, and $j_k$ in Ref.~\cite{Paz:2009ut}.
The symbol $\otimes$ denotes an integral over the light-cone momentum.
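To make the meaning of $\otimes$ concrete, the toy sketch below (our illustration, not any of the cited implementations) evaluates $J\otimes S$ numerically, with a narrow Gaussian standing in for the perturbative jet function and a simple model for the leading SF:
\begin{verbatim}
import numpy as np

LAM = 0.5                                    # hadronic scale of S (GeV)

def S(w):                                    # toy leading SF, unit norm on w>0
    return np.where(w > 0, 2 * w / LAM**2 * np.exp(-w**2 / LAM**2), 0.0)

def J(p, width=0.05):                        # narrow Gaussian stand-in for J
    return np.exp(-p**2 / (2 * width**2)) / (np.sqrt(2 * np.pi) * width)

w = np.linspace(0.0, 3.0, 600)               # light-cone momentum grid

def JxS(p_plus):                             # (J x S)(p+) = int dw J(p+ - w) S(w)
    return np.sum(J(p_plus - w) * S(w)) * (w[1] - w[0])

print([round(JxS(p), 4) for p in (0.2, 0.5, 1.0)])
\end{verbatim}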
The moments of the leading and subleading SFs are related to the HQE parameters measured in the inclusive semileptonic decays to charm.
The relations are known for the leading SF up to at least the fifth moment~\cite{Gunawardana:2017zix}, although the current large
uncertainty of the higher HQE parameters~\cite{Mannel:2010wj,Gambino:2016jkc} might limit the use of the higher-moment relations.
The formalism in Ref.~\cite{Gunawardana:2017zix} allows one to construct such relations for the subleading SFs too, but at present
only the first three moments are known \cite{Bauer:2001mh,Bauer:2002yu}.
A detailed knowledge of the SFs is necessary only in a portion of the phase space where $p_+=E_X-p_X\sim \Lambda_{\rm QCD}$;
elsewhere only the first few moments of the SFs are relevant and one recovers the local OPE description.
The present $|V_{ub}|$ determination by HFLAV~\cite{Amhis:2016xyh} is based on various approaches which are all rooted in
(\ref{factorization}) and differ in the inclusion and treatment of perturbative and nonperturbative contributions,
see Ref.~\cite{Antonelli:2009ws} for a detailed discussion.
The approach known as BLNP (Bosch-Lange-Neubert-Paz)~\cite{Lange:2005yw} aimed at a precision extraction of $|V_{ub}|$ from $B\to X_u \ell\nu$ and
$B\to X_s\gamma$, based on the knowledge available in 2005. It used the first two terms in (\ref{factorization}), in particular the
$O(\alpha_s)$ expression for $H\cdot J\otimes S$ and the $O(\alpha^0_s)$ expression for the
$h\cdot J_0\otimes s_i$ terms. Kinematical corrections that scale as $\alpha_s/m_b$ and $\alpha_s/m^2_b$~\cite{DeFazio:1999ptt},
as well as $1/m_b^2$ corrections~\cite{Manohar:1993qn,Blok:1993va}, for which factorization formulas were not known, were also
included by convolution with the leading power shape function. Using Renormalisation Group methods, $H$ is evolved from the ``hard" to the ``jet" scale
to resum Sudakov double logarithms. As for the non-perturbative inputs, the leading order SF
was to be taken from $B\to X_s\gamma$, and the subleading SFs $s_i$ were to be modeled using $\sim700$ models. In practice, the current
treatment of $S$ by experiments is to use an exponential or Gaussian model constrained by the first two moments of $S$ obtained
from the global fit of HQE parameters in the kinetic scheme~\cite{Amhis:2016xyh}.
Since Ref.~\cite{Lange:2005yw} appeared, there have been many theoretical advances.
Two-loop calculations of $H$~\cite{Bonciani:2008wf,Asatrian:2008uk,Beneke:2008ei,Bell:2008ws} and $J$~\cite{Becher:2006qw} as well
as one-loop calculation of $j_k$~\cite{Paz:2009ut} became available.
The free quark differential decay rate was calculated at $O(\alpha_s^2\beta_0)$~\cite{Luke:1994du,Bauer:2001rc,Hoang:2005pj,Gambino:2006wk} and at complete
$O(\alpha_s^2)$~\cite{Brucherseifer:2013cu}.
Running effects from the ``hard" to the ``jet" scale at $O(\alpha_s^2)$ were studied~\cite{Greub:2009sv}.
It was found there that the factorization of the perturbative coefficient into jet and hard functions is not strictly necessary.
More recently, three loop calculations of $J$ \cite{Bruser:2018rad} and the \emph{partonic} $S$ \cite{Bruser:2019yjk} were performed. Implementing these within the BLNP framework would probably require also the calculation of $H$ at three-loops, which is not available yet.
There were also theoretical advances in the description of non-perturbative effects in $B\to X_s\gamma$~%
\cite{Lee:2006wn,Benzke:2010js,Benzke:2010tq}.
In particular, new subleading shape functions unique to $B\to X_s\gamma$ were identified~\cite{Benzke:2010js}, making it more
difficult to use data from radiative $B$ decays as input for the extraction of $|V_{ub}|$.
These new features are not yet implemented in the BLNP approach.
An alternative implementation of the same conceptual framework has been presented in Ref.~\cite{Ligeti:2008ac}, together with a
systematic procedure to account for the uncertainties in the modelling of the leading SF, to be discussed below.
The GGOU (Gambino-Giordano-Ossola-Uraltsev) approach~\cite{Gambino:2007rp} avoids the expansion in $1/m_b$ and the introduction of subleading SFs.
The perturbative coefficients are computed at fixed order to $O(\alpha_s^2\beta_0)$ in the kinetic scheme.
The effect of RGE evolution in the SF region and all subleading SFs are absorbed into three $q^2$-dependent SFs $F_i(k, q^2)$,
whose lowest moments are fixed by the present semileptonic fits. The uncertainty due to the functional form is estimated by comparing
$\sim100$ models.
The emergence of the SF can also be seen in perturbation theory: soft-gluon resummation together with an infrared prescription gives
rise to a $b$ quark SF.
In the DGE (Dressed-Gluon Exponentiation) approach~\cite{Andersen:2005mj,Gardi:2008bb} this is achieved by an internal resummation
of running coupling corrections in the Sudakov exponent, thus providing a perturbative model for the leading SF.
A somewhat similar line of action is followed in Ref.~\cite{Aglietti:2007ik} where the infrared prescription is provided by the
so-called analytic QCD coupling.
The so-called Weak Annihilation (WA) contributions are a source of theoretical uncertainty common to all approaches.
In the local OPE they emerge at $O(1/m_b^3)$ but are enhanced by a large Wilson coefficient~\cite{Bigi:1993bh} and may give rise to
a difference between $B^+$ and $B^0$ decays.
As they are expected to be much more important in charm decays, the latter constrain them most effectively at present.
In particular, the $D^0$, $D^+$ and $D_s$ total semileptonic rates and the electron spectra measured by the CLEO
Collaboration~\cite{Asner:2009pu} have been employed~\cite{Bigi:2009ym,Ligeti:2010vd,Gambino:2010jz}.
From the absence of clear indications for WA effects in semileptonic charm decays, one can conclude that the WA correction to the
total rate of $B\to X_u \ell\nu$ must be smaller than about 2\%~\cite{Gambino:2010jz}.
However, WA is localized in the high $q^2$ region and therefore the related uncertainty on $|V_{ub}|$ depends on the kinematical
cuts, and this is taken into account in the current HFLAV averages.
Because the high $q^2$ tail is particularly sensitive to higher power corrections (and not to the SFs), see for instance
Refs.~\cite{Bauer:2000xf,Bauer:2001rc,Gambino:2007rp}, one might eventually expect the cleanest determinations of $|V_{ub}|$ to come from the
low~$q^2$ region only.
An upper cut on $q^2$ might therefore be beneficial \cite{Lange:2005yw,Gambino:2007rp}.
A few recent experimental analyses \cite{Urquijo:2009tp,Lees:2011fv} have relaxed the kinematic cuts, making use of experimental
information to subtract the background.
As a result, most of the $B\to X_u \ell\nu$ phase space is taken into account and the sensitivity to the SFs is substantially
reduced, while a description based on the local OPE sets in.
In these cases the quoted theoretical uncertainties are smaller, but one should keep in mind that these analyses still depend
on the SF treatment and modelling for the determination of the reconstruction efficiencies, whose uncertainty contributes to the
final experimental systematic error.
As will be discussed later on, a realistic signal simulation requires the implementation of so-called hybrid models that transform
the inclusive predictions of the approaches mentioned above into individual final hadronic states.
The uncertainties related to such hybrid models remain a major issue for the inclusive determination of $|V_{ub}|$.
\subsection{Status of the experimental results}
The most difficult task of the inclusive measurements is the discrimination between the $B\to X_u\ell\nu$ signal and the much more abundant Cabibbo-favoured $B\to X_c\ell\nu$ decays.
The signal events are studied in restricted regions of the phase space to improve the signal-to-background ratio.
Compared to $B\to X_c\ell\nu$ events, the signal tends to have higher lepton momenta $p_\ell$, lower invariant mass of the $X_u$ state $M_X$,
higher $q^2$, and smaller values of the light-cone momentum $P_+=E_X-|\bm{p}_X|$, where $E_X$ and $\bm{p}_X$ are energy and momentum of
the hadronic system $X_u$ in the $B$ meson rest frame. As explained above, these restrictions introduce difficulties in the
calculation of the expected partial branching fraction, enhancing perturbative and nonperturbative QCD corrections which lead to large
theoretical uncertainties in the measurement of~$|V_{ub}|$.
The measurement of the partial branching fraction $\Delta\cal{B}$ can be obtained with \emph{tagged} or \emph{untagged} analyses.
\subsubsection{Tagged Analyses}
In tagged analyses, the $\Upsilon(4S)\to B{\overline B}$ events are identified by reconstructing one of the $B$ mesons, $B_{reco}$, via fully hadronic decays. The signal decay of the second $B$ meson ($B_{signal}$) is identified just by the presence of an electron or a muon.
The tracks and neutral objects not associated with the $B_{reco}$ can be uniquely assigned to the signal side, so that the inclusive $X_u$ state can be clearly reconstructed. The neutrino four-momentum $p_\nu$ can be estimated from the missing momentum $p_{miss}=p_{e^+e^-}-p_{B_{reco}}-p_{X_u}-p_\ell$, where $p_{e^+e^-}$ is the initial state four-momentum. From this, all the kinematic variables of the signal state can be easily computed.
Because the momentum of the signal $B$ meson is determined from that of the $B_{reco}$, the kinematics of the signal decay products can be computed directly in the $B$-meson rest frame, resulting in an improved resolution on the accessible observables. Moreover, the constrained kinematics allows for a better separation of the signal from the background.
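As an illustration of this kinematic reconstruction, the sketch below builds the missing four-momentum and from it $q^2$ and $M_X$; all four-vectors are hypothetical toy values:
\begin{verbatim}
import numpy as np

def minv2(p):                       # Minkowski square, p = (E, px, py, pz)
    return p[0]**2 - np.dot(p[1:], p[1:])

# toy four-vectors in GeV (hypothetical numbers; Upsilon(4S) at rest)
p_ee    = np.array([10.58,  0.0,  0.0,  0.0])
p_breco = np.array([ 5.29,  0.1, -0.2,  0.3])
p_lep   = np.array([ 1.50,  0.4,  0.9, -1.1])
p_X     = np.array([ 2.80, -0.5, -0.6,  0.7])

p_nu = p_ee - p_breco - p_X - p_lep         # missing four-momentum
q2   = minv2(p_lep + p_nu)                  # momentum transfer squared
mX   = np.sqrt(max(minv2(p_X), 0.0))        # hadronic invariant mass
print(q2, mX)
\end{verbatim}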
The downside of the tagged analysis is the low signal efficiency (about 0.3-0.5\%) which implies that
for kinematic variables like the lepton momentum $p_\ell$, the untagged analyses at the B-factories can give competitive or better
results.
Undetected and poorly reconstructed tracks or photons lead to irreducible background from the dominant $B\to X_c$ decays even in regions
of the phase space potentially free of such background, and this can affect the final resolution on the signal kinematics.
The hadronic $B$-tagging approach was used for the first time by BaBar to extract the $B\to X_u\ell\nu$ signal in the phase space region $M_X<1.55$~GeV, with the further requirement that $p_\ell>1$~GeV \cite{Aubert:2003zw}. Using the same sample, BaBar removed the constraint on $M_X$ and obtained the $B\to X_u\ell\nu$ partial branching ratio requiring only $p_\ell>1$~GeV, which covers about $90\%$ of the signal phase space \cite{Aubert:2006qi}. This challenging analysis was affected by large statistical uncertainties and limited by the knowledge of the $B\to X_c$ background components and of the signal composition available when it was published.
Exploiting the full datasets collected, both Belle~\cite{Urquijo:2009tp} and BaBar~\cite{Lees:2011fv} published measurements of the $B\to X_u\ell\nu$ partial branching fraction, performing a fit in $M_X$ and $q^2$ and requiring only $p_\ell > 1$~GeV. BaBar also determined the partial branching fractions in several other restricted regions of the phase space.
\subsubsection{Untagged Analyses}
The untagged measurements allow one to collect large samples but are affected by considerable backgrounds.
They have access only to a few kinematic variables, namely the lepton momentum $p_\ell$ and the $q^2$ spectrum:
\begin{itemize}
\item lepton spectrum: this can be studied inclusively without requirements on the rest of the event.
In this case the momentum spectrum can only be given in the $\Upsilon(4S)$ rest frame.
\item $q^2$ distribution: this requires the reconstruction of the neutrino 4-momentum, which exploits the high hermeticity of the
$B$~factories' detectors.
The neutrino 4-momentum is given by the event missing 4-momentum, $p_{miss}=p_{e^+e^-}-p_{vis}$, where $p_{e^+e^-}$ is the initial state
4-momentum, and $p_{vis}$ is the total visible 4-momentum determined by all the charged tracks from the collision point, identified pairs
of charged tracks from $K_s$, $\Lambda$ and $\gamma\to e^+e^-$, and energy deposits in the electromagnetic calorimeter.
\end{itemize}
The lepton momentum spectrum is affected by large backgrounds
from $B\to X_c\ell\nu_\ell$ via the $D\ell\nu$, $D^{*}\ell\nu$, $D^{**}\ell\nu$ (where $D^{**}$ denotes a mixture of excited charm states and nonresonant $D^{(*)}n\pi$ transitions) and $D_s K\ell\nu X$ channels, by secondary leptons from $D$-meson decays, and by $e^+e^-\to q\overline{q}$ continuum events, dominated by $c\overline{c}$, which are assessed from control data samples recorded below the $\Upsilon(4S)$ resonance.
Because of the large background, the signal is usually extracted only in regions of high lepton momentum, typically $p_\ell>1.9$--$2.1$~\mbox{GeV}. Earlier analyses of the lepton endpoint are from CLEO \cite{Bornheim:2002du}, Belle \cite{Limosani:2005pi} and BaBar \cite{Aubert:2005mg}.
Recently, BaBar published a study \cite{TheBABAR:2016lja} of the lepton spectrum using the full data set, and exploiting all the knowledge about the rate and the form factors of the various $B\to X_c\ell\nu$ exclusive decays which are the major source of backgrounds.
The signal is extracted from a fit to the electron momentum spectrum, which is described as the sum of the predicted signal (with a model-dependent shape) and various specific background yields with shapes fixed by MC. The fit covers lepton momenta in the $\Upsilon(4S)$ rest frame from 0.8 to 2.7~\mbox{GeV}, in 50~\mbox{MeV} bins, except that the data in the interval 2.1 to 2.7~\mbox{GeV} are combined in a single bin to avoid effects from differences in the shape of the theoretically predicted signal spectrum. In a given momentum interval, the excess of events above the sum of the fitted background contributions is taken as the number of signal events.
An important difference of this analysis with respect to the other ones is that several theoretical models are considered in the extraction of the partial branching fractions. All other measurements instead determine the partial branching fraction using a single model, and the partial rate is then converted into a measurement of $|V_{ub}|$ by taking the corresponding partial rate predicted by the theory calculations.
The extracted inclusive signal branching fractions and the values of $|V_{ub}|$ agree well for GGOU and
DGE, although they are about 13\% smaller than the average of the other measurements. This difference can be attributed to the shape of the predicted signal spectrum and/or the shapes of some of the large background contributions above 2 \mbox{GeV} where the signal fraction is largest. On the other hand, the value of $|V_{ub}|$ based on BLNP agrees well with other measurements.
A subset of all the measurements of the inclusive $|V_{ub}|$ is reported in Fig.~\ref{fig:vub_incl_results} for the various frameworks considered; see Ref.~\cite{Amhis:2019ckw} for more details.
\begin{figure}
\centerline{\includegraphics[scale=0.60]{figures/inclusive_vub_results.pdf} }
\caption{Measurements of inclusive $|V_{ub}|$ and their averages
based on BLNP, DGE and GGOU calculations. The HFLAV average of $|V_{ub}|$ results from $B\to\pi\ell\nu_\ell$ decay is also reported for comparison.}
\label{fig:vub_incl_results}
\end{figure}
\subsubsection{Lessons learned from the past }
The measurements based on tagged samples have considerably larger statistical uncertainties. The sample size allows for only a few bins in the 2D fit, but there are regions of the phase space (e.g. low $M_X$) where the background fractions are modest. The current sensitivity to the details of the shapes of the signal and background distributions is however limited.
For untagged measurements, only the high end of the lepton spectrum is sensitive to the signal, and it is also affected by the backgrounds near their kinematic endpoints.
Both approaches have their pros and cons, given the size of the currently available data.
The latest BaBar measurement of the lepton spectrum shows a strong dependence of the result on the signal model. The same effect, though less directly evident, was also observed in tagged measurements, through the sensitivity of the extracted signal yield to the shape function parameters in the analyses that cover a larger portion of the phase space.
Semileptonic $B\to X_u\ell\nu$ decays are simulated as a combination of resonant decays with $X_u=\pi,\eta,\eta',\rho,\omega$, and decays to nonresonant hadronic final states $X_u$.
The latter are simulated with a continuous invariant mass spectrum following the theory predictions by De Fazio and Neubert \cite{DeFazio:1999ptt}, which depend on the SF parameters and $m_b$.
The nonresonant and the resonant parts are combined such that the sum of their branching fractions is equal to the measured one for the inclusive $B\to X_u\ell\nu$.
The events generated with this model are reweighted to obtain predictions for different SF parameters and different branching fractions of the resonant states. This model is usually called the ``hybrid model". In Ref.~\cite{Urquijo:2009tp}, Belle corrects the hybrid model to match the moments of the $M_X$ and $q^2$ distributions predicted by the GGOU model. An illustration of the invariant mass $M_X$ shape used to describe $B\to X_u\ell\nu$ decays is reported in Fig.~\ref{fig:vub_incl_hybrid}.
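The bookkeeping behind such a hybrid model can be sketched as follows; all branching fractions here are placeholders rather than measured values, and real analyses apply the reweighting in bins of the kinematic variables:
\begin{verbatim}
# all branching fractions below are placeholders, not measured values
BF_INCL = 2.2e-3                              # target inclusive B -> X_u l nu
BF_RES  = {"pi": 1.5e-4, "eta": 4e-5, "etap": 2e-5,
           "rho": 3e-4, "omega": 1.2e-4}      # exclusive resonances

bf_nonres = BF_INCL - sum(BF_RES.values())    # continuum fills the remainder
assert bf_nonres > 0

def nonres_weight(bf_generated):
    # weight applied to generated nonresonant events so that the hybrid
    # sum reproduces BF_INCL after changing SF parameters or resonant rates
    return bf_nonres / bf_generated
\end{verbatim}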
\begin{figure}
\centerline{\includegraphics[scale=0.3]{figures/hybrid_picture.pdf} }
\caption{Model of the hadronic invariant mass $M_X$ for the signal $B\to X_u\ell\nu$ events, separately for $B^0$ (top) and $B^+$ (bottom). }
\label{fig:vub_incl_hybrid}
\end{figure}
Another effect not considered so far is the impact of the fragmentation of the generated $u$ quark into final state hadrons, which is performed using JETSET. The modeling of the final state multiplicity could affect both the signal efficiency and the signal templates used to separate signal from background.
The measurement of the partial branching fraction separately for neutral and charged $B$ mesons has been used
to constrain the WA contribution. Both the tagged approach, in various regions of the phase space \cite{Lees:2011fv}, and the untagged approach, in the high lepton momentum region \cite{Aubert:2007tw}, have been used, but they have yielded only weak upper limits, mainly because of the large statistical uncertainties.
A more stringent upper limit on WA has been obtained by CLEO, which used a model-dependent approach to study the high-$q^2$ region in $B\to X_u\ell\nu$ decays \cite{Rosner:2006zz}. Both these bounds are milder than those estimated from $D$ and $D_s$ semileptonic decays in Refs.~\cite{Ligeti:2010vd,Gambino:2010jz}, which were mentioned above.
In the tagged measurements the suppression of the $b\to c$ background is performed by vetoing events where a $K^+$
or a $K_s^0$ is detected in the hadronic $X$ system. This causes a loss in the signal contribution where an $s{\bar s}$ pair is produced (usually called $s {\bar s}$-popping). The fraction of these events is about $12\%$ of the non-resonant component and is
fixed by the fragmentation parameters of JETSET/PYTHIA. The uncertainty on this fraction is assumed to be about $30\%$, so for higher-statistics analyses that aim to cover larger regions of the phase space this could become an irreducible source of systematic uncertainty. This is another point that should be improved in future analyses at Belle~II.
\subsection{Fitting distributions: SIMBA and NNVub}
\label{subsect:simba}
As we discussed above, SF modelling is an important source of theoretical
uncertainty in the study of $B\to X_u \ell \nu$ and particularly in the extraction
of $|V_{ub}|$ from these decays. While the first few moments of the SFs must
satisfy OPE constraints, direct experimental information on the SFs is somewhat limited.
Indeed, the measured photon spectrum in $B\to X_s\gamma$ is sensitive to a different set of subleading SFs.
However, differential distributions in $B\to X_u \ell \nu$, such as the lepton energy and the invariant mass distributions, depend directly on all the SFs and can therefore be used to constrain them. Conversely, they can be used to validate SF models and approaches
in which the SFs are calculated, such as DGE. The high luminosity expected at Belle~II makes the measurement of such differential distributions possible.
The extraction of $|V_{ub}|$ performed by HFLAV in the BLNP and GGOU frameworks assumes a set of two-parameter functional forms, and it is unclear to what extent the chosen set is representative of the available functional space, and whether the estimated uncertainty really reflects the limited knowledge of the SFs.
This point was first emphasized in Ref.~\cite{Ligeti:2008ac}, where a different strategy was proposed, based on the expansion of the leading SF in a basis of orthogonal functions, whose coefficients are fitted to the $B\to X_s \gamma$ spectrum, and on the modeling of the subleading SFs. The SIMBA project
\cite{Bernlochner:2013gla} aims at performing a global fit to $B\to X_s \gamma$ and $B\to X_u \ell\nu$ spectra, to simultaneously determine $|V_{ub}|$, $m_b$,
the leading SF, as well as the Wilson coefficient of radiative $b$ decays. Additional external constraints, such as from $B\to X_c\ell\nu$, can also be employed.
Another strategy, called NNVub and explored in \cite{Gambino:2016fdy} for the GGOU approach, employs artificial neural networks as unbiased interpolants for the SFs, in a way similar to what the NNPDF Collaboration does in fitting parton distribution functions \cite{Ball:2008by}. This method allows for unbiased estimates of the SF functional form uncertainty, and for a straightforward implementation of new experimental data, including $B\to X_s \gamma$ and $B\to X_u \ell\nu$ spectra and other inputs on quark masses and OPE matrix elements.
Both SIMBA and NNVub appear well placed to analyse the Belle~II data in a model independent and efficient way.
\subsection{Prospect for the future: Belle~II outlook}
The measurements of fully differential spectra in the kinematic variables, e.g. $q^{2}$, $M^{2}_{X}$, $p^{\pm}_{X}$, $E_{l}$, and separate measurements for charged and neutral $B$-meson decays are required to allow for an improved extraction of $|V_{ub}|$ in the long term. Future measurements should therefore provide these unfolded spectra independently of theoretical assumptions.
Combining both $B\to X_{u} \ell \nu$ and $B\to X_{s}\gamma$ as well as constraints on the SF moments from $B\to X_{c} \ell \nu$ in a global fit can simultaneously provide the inclusive $|V_{ub}|$ and the leading SF functional form, with uncertainties that follow from those of the included experimental measurements. Fig.~\ref{fig:simba_belle2_vub} shows the projections for a global fit in the SIMBA framework with two projected single-differential spectra of $M_{X}$ and $E_{\ell}$ for $B\to X_{u} \ell \nu$ and an $E_{\gamma}$ spectrum for $B\to X_{s}\gamma$ from the 1~ab$^{-1}$ and 5~ab$^{-1}$ Belle~II data sets~\cite{Kou:2018nap}.
\begin{figure}
\centerline{\includegraphics[scale=0.25]{figures/simba_vub.png} }
\caption{Belle~II projection for a global fit in the SIMBA approach of $|V_{ub}|$ with 1~ab$^{-1}$ and 5~ab$^{-1}$. Theory uncertainties are not included in the fit and are expected to be of similar size.}
\label{fig:simba_belle2_vub}
\end{figure}
The new tagging algorithm developed for Belle~II performs better than the old neural network method used in the previous Belle publications, with about 3 times higher efficiency~\cite{Keck:2018lcd}. With a larger data set, the systematic uncertainties associated with the reconstruction efficiencies, fake leptons and the knowledge of the continuum background are expected to improve for this measurement. The projections for inclusive $|V_{ub}|$ are summarized in Table~\ref{tab:belle_vub_error}.
\begin{table}
\tabcolsep 4pt
\centering
\caption{Expected percentage uncertainties in inclusive $|V_{ub}|$ measurements with the Belle full data sample, 5~ab$^{-1}$ and 50~ab$^{-1}$ Belle~II data~\cite{Kou:2018nap}.
}
\label{tab:belle_vub_error}
\begin{tabular}{cccccc}
\hline\hline
Int. Luminosity & Statistical & \begin{tabular}[c]{@{}l@{}}Systematic\\ (reducible, irreducible)\end{tabular} & Total Exp. & Theory & Total \\
\hline
605 fb$^{-1}$ (old B tag) & 4.5 & (3.7, 1.6) & 6.0 & 2.5-4.5 & 6.5-7.5 \\
5 ab$^{-1}$ & 1.1 & (1.3, 1.6) & 2.3 & 2.5-4.5 & 3.4-5.1 \\
50 ab$^{-1}$ & 0.4 & (0.4, 1.6) & 1.7 & 2.5-4.5 & 3.0-4.8 \\
\hline\hline
\end{tabular}
\end{table}
\section{Outlook}
\label{summ}
We have summarized our main results in Sec.~1. In this final Section, we would like to
look at the prospects of our field over the next five years.
What can we expect for semileptonic $b$ decays at the two main experiments? What kind of progress can we reasonably
anticipate in lattice QCD and continuum calculations?
Belle II started data taking with a complete detector in March 2019 and recorded about 10~fb$^{-1}$ in its first year of operation. By introducing the crab waist scheme at the collision point, SuperKEKB achieved the world's highest instantaneous luminosity of $2.4\times 10^{34}$~cm$^{-2}$s$^{-1}$ in June 2020, with acceptable background conditions for Belle II to take data. In the spring 2020 run 64~fb$^{-1}$ of $\Upsilon(4S)$ data were recorded, bringing the total to 74~fb$^{-1}$. Data taking will resume in October 2020 with the goal of reaching a total integrated luminosity of more than 100~fb$^{-1}$ before the end-of-year break. Belle II plans to accumulate a data set equivalent to the Belle luminosity of about 1~ab$^{-1}$ by the end of 2021. In 2022 the experiment will enter a long shutdown to install the second pixel detector layer and replace the silicon photomultipliers in the barrel particle identification device. Data taking will resume in 2023, and by 2025 Belle II expects to have recorded a data sample exceeding 10~ab$^{-1}$.
Given these luminosity prospects, competitive Belle II results for semileptonic $B$~decays can be expected in the years to follow. In addition, a three times more efficient hadronic tag and better low-momentum tracking of the slow pion from the $D^*$~decay will particularly benefit semileptonic analyses. This will make it possible to take a fresh look at the CKM matrix element magnitudes $|V_{cb}|$ and $|V_{ub}|$ and to improve measurements which are still statistically limited, such as $R(D)$ and $R(D^*)$.
The LHCb experiment has shown great capabilities with the results on $R(D^*)$, on $|V_{ub}|/|V_{cb}|$ with $\Lambda_b$ decays, and on $|V_{cb}|$ with $B_s$ decays. These measurements are based on the data collected in 2011 and 2012 (Run 1), corresponding to 3~fb$^{-1}$ of integrated luminosity.
The data collected in 2015-2018 (Run 2) at a $pp$ collision energy of $\sqrt{s}=13$~TeV correspond to about 6~fb$^{-1}$ of integrated luminosity. Various analyses of the full dataset are ongoing.
Most of the measurements are limited by systematic uncertainties,
among which the largest ones are generally due to
external inputs from other experiments and to the limited available samples of Monte Carlo simulations. Nevertheless the large dataset available is going to be fully exploited.
The LHCb experiment is at present undergoing a major upgrade of the detector.
The construction and commissioning should end in 2021, when the LHC will resume activity.
The upgrade will make it possible to collect data at higher instantaneous luminosity: about five $pp$ collisions per bunch crossing are foreseen,
to be compared with about one to two $pp$ collisions in Run 1 and Run 2.
To handle the higher occupancy expected in the detector, besides the improvements in the various subdetectors, a full software trigger, replacing the hardware L0,
will be employed.
The software trigger will add flexibility to the data taking, making it possible to reduce the thresholds for muon and hadron trigger decisions,
thereby enlarging the physics capabilities.
The analyses of semileptonic decays with taus and electrons will benefit from the lower trigger thresholds in terms of signal
efficiencies.
With this upgraded detector, LHCb plans to integrate a luminosity of 23~fb$^{-1}$ by 2024, and
to collect a total sample of 50~fb$^{-1}$ by 2028--2029, after the LHC has switched to higher luminosity.
By now, lattice QCD is the tool of choice for the form factors describing semileptonic decays of $b$-hadrons.
At present, the most urgent need is the $q^2$ (or, equivalently, $w$) dependence of the form factors of $B\to D^*l\nu$, both to see how
the form-factor slopes affect the $|V_{cb}|$ determination and to solidify the SM prediction of~$R(D^*)$.
A few such calculations are underway.
Given the success of LHCb with $\Lambda_b$ semileptonic decays, updates of the baryon form factors are desirable, and we encourage other
lattice-QCD practitioners to turn their attention to these decays.
Another topic for future research is rigorous calculations with a $\rho$ or $\phi$ vector meson in the final state.
The leptonic decay constants are now at the subpercent level of uncertainty, and efforts to extend these methods to semileptonic
form factors are underway.
In general, near-term lattice-QCD calculations of this precision will be based on the MILC collaboration's HISQ ensembles, which, among
all lattice data sets, span the largest range of lattice spacing at physical light-quark masses and with high statistics.
We consider it important that other ensemble sets be extended to a similar range, to enable further (sub)percent-level calculations with
different systematics from the fermion discretization.
The inclusive determination of $|V_{cb}|$ will benefit from the calculation of new higher order effects, such as the $O(\alpha_s^3)$ contributions to the total width, and from a reassessment of QED effects. However, the next frontier is represented by the integration with lattice QCD calculations
to improve the determination of HQE matrix elements, and eventually by the calculation of the inclusive rates directly on the lattice.
For what concerns inclusive charmless decays, the general theoretical framework appears solid but needs to be updated in the light of
recent higher order calculations and should be extensively validated by experimental data which will become available at Belle~II.
In particular, the measurement of the lepton energy and hadronic invariant mass distributions will provide important information on the
Shape Functions, while the $q^2$ distribution will allow us to constrain and possibly avoid the effect of Weak Annihilation.
The wealth of data expected at Belle~II, a close cooperation between theorists and experimentalists, and hopefully new lattice data should help resolve various open issues, so that we might
eventually expect the uncertainty on inclusive $|V_{ub}|$ to become lower than 3\%.
\begin{acknowledgement}
This work was supported by the Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 39083149) which hosted the workshop. We acknowledge the friendly effectiveness of its staff and thank the scientific coordinator Tobias Hurth and director Matthias Neubert for their encouragement. We are grateful to G.~Caria, B.~Dey, A.~Greljo, B.~Grinstein, N.~Gubernari, G.~Herdoiza, Y.~Kwon, H.~Meyer, R.~Laha, W.~Lee, P.~Owen, S.~Stefkova, P.~Urquijo, who also participated in the workshop.
F.~Bernlochner and L.~Cao were supported by the DFG Emmy-Noether Grant No.\ BE~6075/1-1.
C.~Davies was supported by the UK Science and Technology Facilities Council.
A.~El-Khadra was supported by the U.S.\ Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0015655 and by the Fermilab Distinguished Scholars Program.
P.~Gambino and M.~Jung were supported by the Italian Ministry of Research (MIUR) under grant PRIN 20172LNEEZ.
S.~Hashimoto was supported by JSPS KAKENHI Grant Number JP26247043 and by the Post-K and Fugaku supercomputer project through the Joint Institute for Computational Fundamental Science (JICFuS).
The work of A.~Khodjamirian and T.~Mannel was supported by the DFG (German Research
Foundation) under grant 396021762 - TRR 257 "Particle Physics Phenomenology after the Higgs Discovery".
Z.~Ligeti was supported in part by the Office of High Energy Physics of the U.S.\ Department of Energy under contract DE-AC02-05CH11231. S.~Meinel was supported by the U.S.~Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0009913.
G.~Paz was supported by the U.S. Department of Energy grant DE-SC0007983 and by a Career Development Chair award from Wayne State University.
S.~Schacht was supported by a DFG For\-schungs\-stipen\-dium under contract No.\ SCHA 2125/1-1. A.~Vaquero was supported by the U.S. National Science Foundation under grants PHY14-14614 and PHY17-19626.
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No.~DE-AC02-07CH11359 with the U.~S.\ Department of
Energy, Office of Science, Office of High Energy Physics.
\end{acknowledgement}
\bibliographystyle{spphys}
\section{Introduction}
Essential aspects of the glass transition of supercooled liquids remain
elusive despite decades of study.
Many theories and scenarios have been proposed to explain
the dramatic slowdown of the dynamics and the
associated growing cooperative length scales near the glass transition
point~\cite{Debenedetti2001,Cavagna2009b,Biroli2009,Berthier2011d}.
They can explain the experimental results
equally well or equally poorly, but none of them has proved to be decisively better than the others.
Even a satisfactory mean-field picture of the glass transition
has not been established~\cite{Ikeda2010,Schmid2010b}.
Numerical simulation of simple model fluids is an ideal route to examine
the competing theories.
Considerable efforts have been made to gain insight from the
dynamical behaviors of simple model glassformers {\it in silico}, but compelling answers are
still lacking.
There are several reasons why the simulation studies have not been
successful in sorting out the numerous scenarios and theories.
First, the model systems are more or less similar;
the pair potentials of canonical glassformers studied in the past are exclusively
characterized by short-ranged strong repulsions.
Examples are the Lennard-Jones, its WCA counterpart, the soft-core, and the
hard-sphere potentials.
Since the strong repulsion dominates thermodynamic and dynamic properties of
dense fluids, it is hardly surprising that the results for these models
are
qualitatively similar~\cite{Andersen2005,Berthier2009d}.
Studies of a completely different class of potential systems may potentially
diversify our views and perspectives on the glass transition within
the limited accessible time windows of the simulations.
Secondly, the model systems are not clean enough.
Even the simplest class of model glassformers (with a few
exceptions~\cite{Sausset2010b,Charbonneau2010})
is inevitably bidisperse or polydisperse
in order to avert
nucleation to the crystalline phase~\cite{Andersen2005}.
This complicates quantitative assessment of the simulation results.
Finally, we still lack a realistic model glassformer which conforms
to the mean-field picture in finite dimensions.
The concept of the mean-field scenario of the structural glass transition is
basically borrowed from the mean-field theory developed in the spin
glass
community~\cite{Kirkpatrick1989,Cavagna2009,Biroli2009,Berthier2011d}.
The replica theory~\cite{Mezard1999,Parisi2010} and mode-coupling
theory (MCT)~\cite{Gotze2009} are believed to be the static and dynamic
versions of the mean-field theory of the glass transition, simply
because of their apparent resemblance to the spin-glass counterparts.
The mosaic pictures of the random first order transition theory
has been developed as the finite dimension version of this mean field
pictures~\cite{Kirkpatrick1989,Lubchenko2007,Biroli2009}.
Accumulated simulation data are not qualitatively inconsistent
with the predictions of the mean-field theories, but
the quantitative agreement between the simulation results and the theoretical predictions
is far from compelling.
The best way to verify the mean-field scenario would be to
take the mean-field limit by either going to higher dimensions or making
the system's interactions longer-ranged.
Recently, simulations for four dimensional systems have been
performed~\cite{Eaves2009,Charbonneau2010}.
Results therein hint that the dynamic heterogeneities are suppressed
compared with three dimensional systems
and agreement with MCT moderately improves~\cite{Charbonneau2010}.
However, considering current computational abilities,
it would be hard to simulate systems beyond four dimensions, whereas
the upper critical dimension of the glass transition is argued to be
eight~\cite{Biroli2007b,Biroli2006b}.
On the other hand, few studies have been done for realistic liquids with
long-ranged particle interactions~\cite{Zaccarelli2008b,Dotsenko2004,Mari2011}.
The Gaussian core model (GCM) is a candidate to dispel all of the
above-mentioned concerns and could be an ideal and clean benchmark against
which to test various glass theories.
The GCM consists of the point particles interacting with a Gaussian shaped
repulsive
potential~\cite{Stillinger1976,Stillinger1997,Lang2000,Louis2000b,Prestipino2005,Mladek2006,Mausbach2006,Zachary2008,Krekelberg2009c,Shall2010};
\begin{eqnarray}
v(r) = \epsilon \exp[-(r/\sigma)^2],
\end{eqnarray}
where $r$ is the interparticle separation,
$\epsilon$ and $\sigma$ are the parameters which characterize the energy
and length scales, respectively.
The GCM is one of the simplest models of the so-called ultrasoft potential
systems which are characterized by the bounded and long-tailed repulsive
potential~\cite{Likos2001}.
Recently, we have reported that the one-component GCM vitrifies at very
high densities~\cite{Ikeda2011}.
The GCM or the ultrasoft particles in general
have very distinct and exotic properties both thermodynamically and
dynamically~\cite{Stillinger1976,Stillinger1997,Lang2000,Louis2000b,Prestipino2005,Mladek2006,Mausbach2006,Zachary2008,Krekelberg2009c,Shall2010,Ikeda2011,Ikeda_I},
such as the re-entrant melting at high densities, negative thermal
expansion coefficient, and anomalous density dependence of the
diffusion coefficient.
There are several studies on the glass transition of the ultrasoft
particles~\cite{Foffi2003b,Zaccarelli2005c,Berthier2009c,Berthier2010i}
and it was found that they exhibit rich dynamical behaviors
different from conventional model glassformers~\cite{Foffi2003b,Zaccarelli2005c}.
One of the advantages of studying the glass transition of the ultrasoft particles is that,
due to the mild repulsion tail of the potential, the density as well as
the temperature can be used as a parameter to control the system.
Exploring the wide density--temperature parameter space
makes it easier to establish various scaling laws, to bridge
the gaps between temperature-driven ordinary glasses and density-driven colloidal glasses,
and to help unify the concepts of the finite-temperature glass
transition and the zero-temperature jamming transition~\cite{Berthier2009c,Berthier2010i}.
However, most studies in the past focused on the relatively low density
regime, where the generic nature of the glass transition is not
extremely different from that of the conventional model glassformers.
The systems at low densities, including the GCM, also had to be either
polydisperse or bidisperse in order to avoid crystallization.
The GCM at very high densities is very different~\cite{Ikeda2011}.
First of all, the system vitrifies without poly(bi)dispersity.
The nucleation rate systematically decreases as the density increases
and the system starts exhibiting typical slow dynamics observed in
supercooled fluids near the glass transition point.
Furthermore, the dynamics is quantitatively well-described by
MCT.
In particular, the MCT nonergodic transition point extracted from the
simulation matches the theoretical prediction with unprecedented accuracy.
Moreover, the violation of the Stokes-Einstein (SE) relation and the amplitude
of the non-Gaussian parameter, both of which are manifestations of
heterogeneous fluctuations of the dynamics, are suppressed.
We conjecture that these facts can be attributed to the long-ranged
nature of the interaction potential at the high densities where
particles overlap.
These results suggest that the high density GCM is not only one of the
cleanest model glassformers {\it in silico}, but also the closest to the
mean-field model.
In this paper, we present a thorough and complete numerical analysis of
the nucleation and glassy dynamics of the high-density, one-component GCM.
We not only present an exhaustive set of
numerical results but also provide new evidence which
bolsters the validity of MCT.
A detailed analysis of the thermodynamic and structural properties of the high
density GCM, such as the phase diagram and the static structure factors,
is given in Ref.~\cite{Ikeda_I}.
In the previous study~\cite{Ikeda2011}, we attributed the
weak violation of the SE relation and the smaller non-Gaussian parameter to
the suppression of the dynamic heterogeneities.
We provide stronger and more direct evidence that intermittent heterogeneous
motion is suppressed by monitoring the distribution of the particle
displacement as a function of time.
We also evaluate the correlation functions of single and
collective density fluctuations.
Surprisingly we find that dynamics of the collective density
decouple from the single particle density at large length scales, where
the former relaxes much faster than the latter.
This is in stark contrast with the ordinary model glassformers
for which the slow glassy dynamics set in over the whole length
scales for both collective and single particle densities alike.
We compare these simulation results with MCT predictions and
find that MCT beautifully captures the decoupling of dynamics at the large
length scales.
However, we also find a subtle but noticeable disagreement
of MCT from the simulation results at intermediate length scales, where
the nonergodic parameter (the plateau height of the
two step relaxation in the density correlators)
predicted by MCT shows a weak shoulder which tends to grow as the density increases.
This shoulder is reminiscent of those found for the $d$-dimensional hard sphere glasses
at large $d$ evaluated from MCT~\cite{Ikeda2010,Schmid2010b} and may be
a signal of breakdown of MCT at the mean field limit.
This paper is organized as follows.
In Sec.~II, we summarize the simulation method, theoretical background,
and the setting of the system.
The nucleation dynamics from fluid to crystalline phase is discussed in
Sec.~III.
In Sec.~IV, we present all simulation results on various static and
dynamical observables.
Detailed analysis and careful comparison of the simulation results with
the MCT predictions are made.
Suppression of the dynamic heterogeneities is also discussed.
Finally, Sec.~V concludes the paper with a summary.
\section{Preliminaries}
\subsection{Simulation Methods}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\columnwidth]{fig1_phase.eps}
\caption{
State points at which MD simulations were performed (crosses).
Squares with solid line and filled circles with dotted line
are the solid-fluid phase boundary obtained numerically
by us~\cite{Ikeda2011,Ikeda_I} and Prestipino {\it et
al.}~\cite{Prestipino2005}, respectively.
The melting and freezing lines are indistinguishable at this scale.
}
\vspace*{-0.3cm}
\label{phase}
\end{center}
\end{figure}
We investigate the dynamics of the one-component GCM using a molecular dynamics (MD) simulation in the $NVT$ ensemble
with a Nos\'{e} thermostat.
The system is enclosed in a cubic cell and periodic boundary conditions are imposed.
A time-reversible integrator, similar to the velocity-Verlet method, is
used with a potential cut-off at $r=5\sigma$~\cite{Frenkel2001}.
Hereafter, $\sigma$, $\epsilon/\kb$, and $\sigma(m/\epsilon)^{1/2}$ are
taken as the units of the length, temperature, and time, respectively.
The time step is fixed at 0.2, which is sufficiently small to conserve
the Nos\'{e} Hamiltonian during the long simulation runs.
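For illustration, a single integration step with the GCM pair force and the $r=5\sigma$ cut-off can be sketched as follows (reduced units with $\epsilon=\sigma=m=1$; the coupling to the Nos\'{e} thermostat is omitted from this sketch):
\begin{verbatim}
import numpy as np

RCUT = 5.0  # potential cut-off, in units of sigma

def gcm_force(rij):
    # pair force on particle i from j: f = -dv/dr * rhat = 2 exp(-r^2) rij
    r2 = np.dot(rij, rij)
    return 2.0 * np.exp(-r2) * rij if r2 < RCUT**2 else np.zeros(3)

def verlet_step(pos, vel, frc, dt, forces):
    # one velocity-Verlet step (m = 1); 'forces' returns the total force array
    vel_half = vel + 0.5 * dt * frc
    pos_new = pos + dt * vel_half
    frc_new = forces(pos_new)
    vel_new = vel_half + 0.5 * dt * frc_new
    return pos_new, vel_new, frc_new
\end{verbatim}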
We focus on four densities, $\rho = 0.5$, $1.0$, $1.5$, and $2.0$,
and perform the MD simulations at various temperatures in the vicinity
of the melting temperature $T_m$.
The state points at which we performed simulations are shown in
Fig.~\ref{phase} along with the solid-fluid phase boundary
line~\cite{Prestipino2005,Ikeda2011,Ikeda_I}.
As discussed in detail in the previous study~\cite{Ikeda_I},
the melting temperature, $T_m$, at the high density regime $\rho \gtrsim
1$ obeys an asymptotic scaling $\log T_m \propto -\rho^{2/3}$ which was
originally conjectured by Stillinger~\cite{Stillinger1976}.
For all densities which we study, the thermodynamically stable crystalline structure is bcc~\cite{Ikeda2011,Ikeda_I}.
We run the simulations for a total run time always 50 times longer than
the structural relaxation time.
For example, the simulation time was $t_{sim} = 10^7$ for the lowest temperature
at $\rho = 2.0$.
This was confirmed to be sufficiently long to neglect aging effects.
The first half of the simulation run was used for the equilibration and
we used the trajectories of the second half for the analysis of the
stationary dynamics.
For each state point, five independent runs are performed and the
results are obtained by averaging over those trajectories in order
to improve the statistics.
Configurations obtained from the high temperature simulation were used
as the initial configurations.
The system size is fixed at $N=3456$.
The simulations for $N=2000$ and $9826$ confirmed that finite-size effects are negligible.
\subsection{Mode Coupling Theory}
In this work, we compare our simulation results for the dynamics of the high
density GCM in the supercooled state with the predictions of MCT.
In the context of the glass transition, MCT is commonly expressed
as a set of self-consistent nonlinear equations for
correlation functions.
These correlation functions are the intermediate
scattering function (the correlation of the collective density),
$F(k,t) \equiv \ave{\delta \rho(\vec{k},0) \delta \rho(-\vec{k},t)}/N$,
where $\delta \rho(\vec{k},t)$ is the $k$-dependent density fluctuation,
and the self intermediate scattering function or the correlation
of the single particle density,
$F_s(k,t)\equiv \ave{\rho_s(\vec{k},0) \rho_s(-\vec{k},t)}$,
where $\rho_s(\vec{k},t)$ is the density of a single particle.
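In practice, $F_s(k,t)$ can be estimated from the particle trajectories as an isotropic average of $\cos[\vec{k}\cdot \Delta\vec{r}(t)]$; a minimal sketch (single time origin and random $\vec{k}$ directions; production analyses also average over time origins) reads:
\begin{verbatim}
import numpy as np

def self_isf(traj, k, n_dirs=24, seed=0):
    # traj: unwrapped positions, shape (n_times, n_particles, 3)
    rng = np.random.default_rng(seed)
    khat = rng.normal(size=(n_dirs, 3))
    khat /= np.linalg.norm(khat, axis=1, keepdims=True)  # random k directions
    dr = traj - traj[0]                                  # displacements from t=0
    phase = k * np.einsum('tpd,nd->tpn', dr, khat)
    return np.cos(phase).mean(axis=(1, 2))               # Re <exp(i k.dr)>
\end{verbatim}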
The time evolution of $F(k,t)$ is given by the generalized Langevin equation
\begin{eqnarray}
\begin{aligned}
\Omega^{-2}(k) \ddot{F}(k,t) + F(k,t) + \int^t_0\!\! ds \ M(k,t-s) \dot{F}(k,s) = 0,
\end{aligned}
\label{eq:mctF}
\end{eqnarray}
where $\Omega(k)= \sqrt{\kb T k^2/mS(k)}$ is
the frequency term.
$S(k)= F(k,t=0)$ is the static structure factor.
$M(k,t)$ is the memory kernel which, according to MCT, is approximated as
\begin{eqnarray}
\begin{aligned}
M(k,t) = \frac{\rho S(k)}{2k^2} \int\!\! \frac{d\vec{q}}{(2 \pi)^3}
V_{\vec{k}}^2(\vec{q},\vec{k}-\vec{q}) F(q,t) F(|\vec{k}-\vec{q}|,t).
\end{aligned}
\label{mem}
\end{eqnarray}
Here $V_{\vec{k}}(\vec{q},\vec{p}) \equiv \{ \vec{k}\cdot\vec{q}c(q) + \vec{k}
\cdot\vec{p}c(p)\}/k$ is the vertex, where
$c(k)=\{1-1/S(k)\}/\rho$ is the direct correlation function.
In Eq.~(\ref{mem}), we neglect the short time contribution to the
memory kernel, which does not affect the slow dynamics.
MCT predicts that $F(k,t)$ undergoes
the ergodic-nonergodic transition at a finite temperature, $T\mct$,
below which
$\lim_{t\rightarrow \infty}F(k,t)= F_{\infty}(k)$ remains finite.
$F_{\infty}(k)$ is referred to as the nonergodic parameter.
The nonergodic parameter and $T\mct$ can be evaluated by taking the
$t \to \infty$ limit of Eqs.~(\ref{eq:mctF}) and (\ref{mem}), which
yields
\begin{equation}
\begin{aligned}
\frac{F_{\infty}(k)/S(k)}{1 - F_{\infty}(k)/S(k)} = M_{\infty}(k),
\end{aligned}
\label{eq:nep}
\end{equation}
where $M_{\infty}(k)$ is the long time limit of the memory kernel.
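The structure of Eq.~(\ref{eq:nep}) is most transparent in the schematic one-correlator model with $M_{\infty}=\lambda F_{\infty}^2$; the fixed-point iteration below (our illustration, not the full wavevector-dependent solution) converges to the largest solution and exhibits the discontinuous bifurcation at $\lambda_c=4$:
\begin{verbatim}
def f_infty(lam, n_iter=100000, tol=1e-12):
    # iterate f -> M/(1+M) with M = lam*f^2, starting from f = 1;
    # converges to the largest solution of f/(1-f) = lam*f^2
    f = 1.0
    for _ in range(n_iter):
        m = lam * f * f
        f_new = m / (1.0 + m)
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

for lam in (3.9, 4.0, 4.1):      # transition at lam_c = 4: f jumps 0 -> 1/2
    print(lam, round(f_infty(lam), 4))
\end{verbatim}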
As the temperature approaches $T\mct$ from above,
MCT predicts that $F(k,t)$ exhibits
a two-step relaxation behavior characterized by a finite plateau and
the slow structural relaxation.
The height of the plateau is identical to $F_{\infty}(k)$ at $T=T\mct$.
The structural relaxation or the alpha relaxation time, $\tau_{\alpha}$,
increases and eventually diverges at $T\mct$.
MCT predicts that the increase of $\tau_{\alpha}$ is given by a power
law $\tau_{\alpha} \sim |T-T\mct|^{-\gamma}$, where $\gamma$ is a
system-dependent parameter which can be evaluated from the MCT equation.
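In practice, $T\mct$ and $\gamma$ are estimated by fitting this power law to simulation data; a simple sketch (with hypothetical $(T,\tau_\alpha)$ values, scanning candidate $T\mct$ and fitting $\log\tau_\alpha$ linearly in $\log(T-T\mct)$) reads:
\begin{verbatim}
import numpy as np

# hypothetical (T, tau_alpha) pairs standing in for simulation data
T   = np.array([3.40e-6, 3.20e-6, 3.10e-6, 3.00e-6, 2.95e-6])
tau = np.array([1.2e3, 4.5e3, 1.1e4, 4.0e4, 1.1e5])

best = (np.inf, None, None)
for Tc in np.linspace(2.5e-6, 2.94e-6, 400):     # scan candidate T_MCT
    coef, res, *_ = np.polyfit(np.log(T - Tc), np.log(tau), 1, full=True)
    if res.size and res[0] < best[0]:            # best straight line wins
        best = (res[0], Tc, -coef[0])            # slope = -gamma

_, Tc, gamma = best
print("T_MCT ~ %.3g, gamma ~ %.2f" % (Tc, gamma))
\end{verbatim}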
Likewise, the MCT equation for the self intermediate scattering function,
$F_s(k,t)$, is written in the same form as Eq.~(\ref{eq:mctF}), but with
the frequency term $\Omega_s(k)=
\sqrt{\kb T k^2/m}$ instead of $\Omega(k)$ and the self memory kernel
\begin{equation}
\begin{aligned}
M_s(k,t)
\!= \!\frac{\rho}{2k^2}\! \int\!\!\frac{d\vec{q}}{(2 \pi)^3}
\left\{\! \frac{\vec{k} \cdot \vec{q}}{k}c(q)\! \right\}^2 \!\!
F_s(q,t) F(|\vec{k}-\vec{q}|,t)
\end{aligned}
\label{mems}
\end{equation}
instead of $M(k,t)$ in Eq.~(\ref{mem}).
The MCT equation for $F_s(k,t)$ undergoes the nonergodic transition
exactly at the same temperature, $T\mct$, as for $F(k,t)$,
for most model systems studied in the past (see Ref.~\cite{Voigtmann2010}
for exceptions).
By taking the small $k$-limit of the MCT equation for $F_s(k,t)$, we can
also construct the self-consistent equation for the mean square
displacement $\ave{R^2(t)}$.
MCT predicts that the self-diffusion coefficient $D \equiv
\lim_{t\rightarrow \infty}\ave{R^2(t)}/6t$ follows the power law
$D \sim |T-T\mct|^{\gamma}$ and vanishes at $T\mct$.
Note that the power law exponent $\gamma$ is identical with that for $\tau_{\alpha}$.
In addition to the MCT nonergodic transition and power law of the transport
coefficients, MCT predicts many important dynamical properties
such as the dynamic scaling known as von Schweidler's law at the
plateau regime (the beta regime) and the time-temperature superposition at
the alpha relaxation regime~\cite{Binder2005}.
In order to solve the MCT equations, the static structure factor, $S(k)$, is required as an input.
We used $S(k)$ obtained directly from simulations.
For the numerical integration of Eqs.~(\ref{mem}) and (\ref{mems}),
we employed 400 equally spaced grid points with spacing $\Delta k = 0.16$.
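For concreteness, the following is a minimal numerical sketch (in Python) of the standard fixed-point iteration used to obtain the nonergodic parameter from Eq.~(\ref{eq:nep}); the bipolar-coordinate reduction of the wavevector integral is textbook material, but the function names, the simple rectangle-rule quadrature, and the starting guess are illustrative assumptions rather than a description of our actual implementation.
\begin{verbatim}
import numpy as np

def solve_nep(k, S, rho, tol=1e-10, max_iter=500):
    """Fixed-point iteration for f(k) = F_inf(k)/S(k), i.e. f/(1-f) = M_inf(k).

    k   : uniform wavevector grid (must not contain k = 0)
    S   : static structure factor on the same grid (simulation input)
    rho : number density
    """
    dk = k[1] - k[0]
    c = (1.0 - 1.0 / S) / rho            # direct correlation function c(k)
    q = k[:, None]                       # |q|
    p = k[None, :]                       # p = |k - q|
    f = np.ones_like(k)                  # start from the fully arrested state
    for _ in range(max_iter):
        F = f * S                        # F_inf(k) = f(k) S(k)
        M = np.empty_like(k)
        for i, ki in enumerate(k):
            # bipolar coordinates: the 3D q-integral reduces to
            # (1/(4 pi^2 k)) * int q dq int p dp over the triangle region
            tri = (p >= np.abs(ki - q)) & (p <= ki + q)
            V = ((ki**2 + q**2 - p**2) * c[:, None]
                 + (ki**2 + p**2 - q**2) * c[None, :]) / (2.0 * ki)
            w = np.where(tri, q * p * V**2 * F[:, None] * F[None, :], 0.0)
            M[i] = rho * S[i] / (2 * ki**2) * w.sum() * dk * dk / (4 * np.pi**2 * ki)
        f_new = M / (1.0 + M)            # invert f/(1-f) = M
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return f                             # nonzero only below the MCT transition
\end{verbatim}
Scanning the temperature through the input $S(k)$ and locating where the converged $f(k)$ first becomes nonzero yields $T\mct$; the exponent $\gamma$ then follows from the standard analysis of the critical memory kernel.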
\section{Crystallization}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\columnwidth]{fig2_pot.eps}
\caption{
The time dependence of the bond order parameter $q_6$ and potential energy $U$
of the representative trajectories measured from the time when the
system is prepared.
(a) $\rho=0.5$, $T=2.5 \times 10^{-3}$, (b) $\rho=1.5$, $T=2.6 \times
10^{-5}$, (c) $\rho=1.5$, $T=2.3 \times 10^{-5}$, and (d) $\rho=2.0$, $T=2.93 \times 10^{-6}$.
The short bold line in each figure indicates the time scale of $\tau_\alpha$.
}
\vspace*{-0.3cm}
\label{pot}
\end{center}
\end{figure}
Ordinary simple atomic fluids nucleate to form crystals quickly as
the temperature is lowered below the melting point.
In this section, we analyze the crystal nucleation dynamics of the high
density GCM and show that the nucleation rate systematically decreases
as the density increases.
In order to monitor the crystallization from the homogeneous fluid phase,
we use the potential energy $U$ and the bond order parameter $q_6$~\cite{Steinhardt1983}.
The bond order parameter is defined by
\begin{equation}
q_6 \equiv \frac{1}{N}\sum_{i=1}^{N} q_6(i),
\end{equation}
where $q_l(i)$ is the $l$-th bond order parameter of the $i$-th
particle defined by
\begin{eqnarray}
q_{l}(i) = \sqrt{\frac{4 \pi}{2l + 1} \sum_{m=-l}^l |q_{lm}(i)|^2}.
\label{eq:qli}
\end{eqnarray}
Here $q_{lm}(i)$ is the complex bond parameter of the $i$-th particle
given by
\begin{eqnarray}
q_{lm}(i) = \frac{1}{N_b(i)} \sum_{j=1}^{N_b(i)} Y_{lm}(\vec{R}_i-\vec{R}_j),
\label{eq:qlm}
\end{eqnarray}
where
$\vec{R}_i$ is the position of the $i$-th particle,
$N_b(i)$ is the number of nearest neighbor particles around the
$i$-th particle, and
$Y_{lm}(\vec{r})$ is the spherical harmonic function of the degree $l$
and the order $m$.
$q_6$ is close to zero in the fluid phase and $q_6\approx 0.5$ for a
perfect bcc crystal~\cite{Steinhardt1983}.
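As an illustration, a minimal sketch (in Python, using SciPy's spherical harmonics) of Eqs.~(\ref{eq:qli}) and (\ref{eq:qlm}) might read as follows; the precomputed neighbor lists (e.g., from a cutoff at the first minimum of $g(r)$) and the function names are assumptions of the sketch, not a statement about our implementation.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm      # sph_harm(m, l, azimuth, polar)

def qlm_and_ql(pos, neighbors, l=6):
    """Per-particle q_lm(i), Eq. (qlm), and q_l(i), Eq. (qli).

    pos       : (N, 3) array of particle positions
    neighbors : list of index arrays, one per particle
    """
    N = len(pos)
    qlm = np.zeros((N, 2 * l + 1), dtype=complex)
    ql = np.zeros(N)
    for i in range(N):
        b = pos[neighbors[i]] - pos[i]   # bond vectors; for even l the
        r = np.linalg.norm(b, axis=1)    # overall sign does not matter
        polar = np.arccos(np.clip(b[:, 2] / r, -1.0, 1.0))
        azim = np.arctan2(b[:, 1], b[:, 0])
        for col, m in enumerate(range(-l, l + 1)):
            qlm[i, col] = sph_harm(m, l, azim, polar).mean()
        ql[i] = np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(qlm[i])**2))
    return qlm, ql                       # the global q_6 is ql.mean()
\end{verbatim}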
In Fig.~\ref{pot}, we show $q_6$ and $U$ of the five representative trajectories
as functions of the time elapsed since the system was prepared.
At a relatively low density $\rho=0.5$ and
temperature just below the melting point $T=2.5\times 10^{-3}$ (Fig.~\ref{pot} (a)),
one observes that $q_6$ of all five trajectories abruptly increases from
zero to a finite value and, concomitantly, $U$ decreases.
These behaviors are the hallmark of crystal nucleation.
The figure shows that nucleation sets in only after a time several times
longer than the structural relaxation time $\tau_{\alpha}$,
which is indicated by the short bold lines in the figures
(the precise definition and the compiled data set of $\tau_{\alpha}$ are given in Sec.~\ref{sec:Glassy Dynamics}).
The degree of supersaturation defined by $\Delta = 1 - T/T_m$ at this state point is 0.43.
Next, we look at the higher density $\rho=1.5$.
Five runs of $q_6$ and $U$ at $T=2.6 \times 10^{-5}$ are shown in
Fig.~\ref{pot} (b).
Despite the deeper supersaturation ($\Delta = 0.55$) and much longer
simulation runs (over 40 $\tau_{\alpha}$) than in Fig.~\ref{pot} (a),
$q_6$ and $U$ show no sign of nucleation.
Decreasing the temperature further to $T=2.3 \times 10^{-5}$, where
$\Delta = 0.6$ (Fig.~\ref{pot} (c)), one eventually observes crystallization for three
out of the five trajectories.
Note that many structural relaxation times
(which themselves also increase with the degree of supersaturation) elapse before
the precipitous nucleation takes place.
At even higher density $\rho=2.0$, all five trajectories fail to
nucleate even at a very low temperature $T=2.93 \times 10^{-6}$ with
a similar degree of supersaturation, $\Delta = 0.6$,
over the whole simulation runs.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig3_qmap.eps}
\caption{
The $\bar{q}_4$-$\bar{q}_6$ correlation map for the configurations
obtained at the end of all the five simulation runs
at
$(\rho, T) = (1.5, 2.3 \times 10^{-5})$ (left panel)
and $(\rho, T) = (2.0, 2.93 \times 10^{-6})$ (right panel).
The four circles indicate the characteristic regions for the bcc,
hcp, and fcc crystals and for the fluid phase.
}
\vspace*{-0.3cm}
\label{qmap}
\end{center}
\end{figure}
In order to ensure that the nucleated samples
are unambiguously the bcc crystal and that samples which failed to nucleate
remain in the homogeneous fluid phase,
we evaluate the parameters
recently introduced by Lechner and Dellago~\cite{Lechner2008}.
They used the two averaged bond order parameters $\bar{q}_4(i)$ and $\bar{q}_6(i)$
and demonstrated that their correlation map
improves the ability to determine crystalline structures~\cite{Lechner2008,Kawasaki2010}.
The averaged bond order parameter is defined by replacing $q_{lm}(i)$ in
Eq.~(\ref{eq:qli}) with the averaged value $\bar{q}_{lm}(i)$ defined by
\begin{eqnarray}
\bar{q}_{lm}(i) = \frac{1}{\tilde{N}_b(i)} \sum_{k=0}^{\tilde{N}_b(i)} q_{lm}(k),
\end{eqnarray}
where $q_{lm}(k)$ is given by Eq.~(\ref{eq:qlm}) and the sum runs
over all $\tilde{N}_b(i)$ neighbors of the $i$-th particle
as well as the $i$-th particle itself (the $k=0$ term).
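The averaging step itself is a short addition to the previous sketch; reusing the per-particle $q_{lm}(i)$ array (again, the variable names are hypothetical):
\begin{verbatim}
def qbar_l(qlm, neighbors, l=6):
    """Lechner-Dellago averaged bond order parameter from per-particle q_lm."""
    N = len(qlm)
    qbar = np.zeros(N)
    for i in range(N):
        idx = np.append(neighbors[i], i)      # neighborhood including i itself
        qbar_lm = qlm[idx].mean(axis=0)
        qbar[i] = np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(qbar_lm)**2))
    return qbar
\end{verbatim}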
In Fig.~\ref{qmap}, we placed all $\bar{q}_4(i)$ and $\bar{q}_6(i)$
($i=1,2,\cdots, N$) in the correlation map for
the configurations obtained at the end of simulation
runs of the two state points $(\rho, T) = (1.5, 2.3 \times 10^{-5})$
and $(\rho, T) = (2.0, 2.93 \times 10^{-6})$.
The four circles represent the characteristic areas for
the bcc, hcp, fcc crystals, and fluid phase~\cite{Lechner2008}.
The results for $(\rho, T) = (1.5, 2.3 \times 10^{-5})$ show
that two trajectories remain in the fluid phase whereas the other three
formed the bcc crystal.
It is clear that no other structures are formed in the course of the
simulations.
Note that the results for the three trajectories which nucleated
slightly deviate from the bcc region, which we presume
is due to defects or imperfections of the obtained crystalline structures.
On the other hand, none of the five trajectories for
$(\rho, T) = (2.0, 2.93 \times 10^{-6})$ shows any hint of
crystal nucleation, and the configurations remain completely disordered.
Hereafter, we focus on the densities $\rho=1.5$ and 2.0
because the crystal nucleation is sufficiently slow that canonical
glassy dynamics are observed.
\section{Glassy Dynamics}\label{sec:Glassy Dynamics}
\subsection{Structural functions}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\columnwidth]{fig4_sk.eps}
\caption{
The radial distribution function $g(r)$ (left panels)
and the static structure factors $S(k)$ (right panels).
(a) and (b) are for $\rho = 1.5$ at $T = 7.0 \times
10^{-5}$ (dashed line) and $T = 2.4 \times 10^{-5}$ (solid line).
(c) and (d) are for $\rho = 2.0$
at $T = 7.0 \times 10^{-6}$ (dashed line) and $T = 2.93 \times 10^{-6}$ (solid line).
The insets of (b) and (d) are the closeup of $S(k)$ at small $k$'s in the semilog plot.
Dotted lines in (a) and (c) are the bare potential $v(r)$.
}
\vspace*{-0.3cm}
\label{sk}
\end{center}
\end{figure}
Before discussing the slow dynamics in the supercooled state,
we summarize the fluid structures of the high density GCM
to demonstrate the difference from those of conventional model glassformers.
In Fig.~\ref{sk}, we plot
the radial distribution functions $g(r)$ and static structure factors $S(k)$
of the GCM for $\rho=1.5$ and 2.0 near and below the melting temperatures.
Both $g(r)$ and $S(k)$ show the typical behavior of dense fluids,
characterized by prominent peaks at the distance and the wavevector
corresponding to the first coordination shell.
Their peak heights increase as the temperature decreases.
As density increases from $\rho=1.5$ to 2.0,
the peak position of $g(r)$ shifts from $r=0.94$ to 0.85
and for $S(k)$ from $k=7.8$ to 8.4.
The noticeable feature of the high density GCM
is that the tail of the potential $v(r)$ stretches beyond the first
coordination shell, as demonstrated in Figs.~\ref{sk} (a) and (c).
This considerable overlap of particles imparts the character of the
long-ranged interaction systems to the high density GCM.
The long-ranged nature also appears
as the anomalously small $S(k)$ at small wavevectors.
The insets of Figs.~\ref{sk} (b) and (d) show that $S(k\approx 0)$, or the
compressibility, is far smaller than in other model fluids at
comparable supersaturations,
implying that the density fluctuations at
large length scales are strongly suppressed.
This is a common feature of long-range interacting
systems.
A well-known example is the one component classical
plasma~\cite{Ichimaru1982}, where $S(k)$ vanishes at $k \to 0$.
More detailed analysis of the simulation results for the structural
functions and comparisons with the predictions of the liquid state theory have been
reported in Ref.~\cite{Ikeda_I}.
$S(k)$'s obtained here are used in the MCT analysis discussed below.
\subsection{Mean square displacement and self intermediate scattering function}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.7\columnwidth]{fig5_msdfskt.eps}
\caption{
$\ave{R^2(t)}$ ((a) and (c)) and
$F_s(k_{\max},t)$ ((b) and (d)).
The filled circles are simulation results for
$\rho = 1.5$ and from left to right, $T\times 10^5=7$, $4$, $3$, $2.6$
and $2.4$ (upper panel),
and for $\rho = 2.0$ and from left to right, $T\times 10^6=10$, $7$,
$5$, $4$, $3.4$, $3.2$, $3$ and $2.93$ (lower panel).
The dashed line in (c) is the mean square displacement of the KA model
at $T=0.475$~\cite{Kob1994} shifted to fit with the GCM's result
at the lowest temperature at long times (see text).
The dashed lines in (b) and (d) are the MCT solutions obtained using the
same reduced temperatures, $\varepsilon$, as those for the simulation data.
}
\vspace*{-0.3cm}
\label{msdfskt}
\end{center}
\end{figure*}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\columnwidth]{fig6_ttsp.eps}
\caption{
Same as Fig.~\ref{msdfskt} (d) but plotted against $t$ scaled by $\tau_{\alpha}$.
Filled circles are the simulation results for $\rho = 2.0$ and $T\times
10^6=5$, $4$, $3.4$, $3.2$, $3$, $2.93$ from right to left.
The solid line is a fit by a stretched exponential function.
}
\vspace*{-0.3cm}
\label{ttsp}
\end{center}
\end{figure}
In this subsection, we evaluate various dynamic quantities
and observe their slow dynamics, focusing on the trajectories
which did not crystallize even when deeply supercooled.
The mean square displacement
$\ave{R^2(t)} \equiv N^{-1}\sum_{i=1}^{N} \ave{|\vec{R}_i(t) - \vec{R}_i(0)|^2}$,
the self intermediate correlation function $F_s(k,t)$,
and the intermediate correlation function $F(k,t)$
are evaluated for the densities $\rho=1.5$ and 2.0.
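For reference, the following is a minimal sketch (in Python) of how these quantities are extracted from a stored trajectory; it assumes unwrapped coordinates and, for brevity, a single time origin instead of the usual average over many origins.
\begin{verbatim}
import numpy as np

def msd_and_fs(traj, k):
    """<R^2(t)> and F_s(k,t) from a trajectory of shape (n_frames, N, 3)."""
    disp = traj - traj[0]                            # R_i(t) - R_i(0)
    msd = np.mean(np.sum(disp**2, axis=2), axis=1)   # average over particles
    r = np.linalg.norm(disp, axis=2)
    # isotropic average of exp(i k.dr) is sin(kr)/(kr);
    # note np.sinc(x) = sin(pi x)/(pi x)
    fs = np.mean(np.sinc(k * r / np.pi), axis=1)
    return msd, fs
\end{verbatim}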
Fig.~\ref{msdfskt} shows
$\ave{R^2(t)}$ and $F_s(k,t)$ at several temperatures well
below the melting temperature.
These figures clearly display the canonical behaviors of the supercooled
liquids near the glass transition point.
For $\rho=1.5$, we could not observe the
glassy dynamics below $T=2.4\times 10^{-5}$ because crystallization intervened.
At $\rho=2.0$, none of the trajectories crystallized down to the lowest temperature we accessed.
In Figs.~\ref{msdfskt} (a) and (c),
one observes that, as the temperature is lowered, $\ave{R^2(t)}$ develops the long plateau
regimes followed by the usual diffusive behaviors $\ave{R^2(t)}\propto t$ at longer times.
The appearance of the plateau signals the formation of a cage of a
particle surrounded by its neighbors and is the hallmark of the supercooled
fluid near the glass transition point.
The value of $\sqrt{\ave{R^2(t)}}$ in the plateau region is
a measure of the size of the cages:
about $\sqrt{\ave{R^2(t)}}\approx 0.17$ for $\rho=1.5$ and 0.14
for $\rho=2.0$.
These values are slightly smaller than the values for conventional model
glassformers.
For example, $\sqrt{\ave{R^2(t)}}\approx 0.2$ for the Kob-Andersen Lennard-Jones
mixture (KA model)~\cite{Kob1994}.
In Figs.~\ref{msdfskt} (b) and (d), we plot $F_s(k=k_{\max},t)$ for several temperatures,
where $k_{\max}$ is the wavevector at which $S(k)$ shows its main peak.
$F_s(k_{\max},t)$ relaxes exponentially at high temperatures.
As the temperature decreases, a plateau with a finite height appears
and it stretches over longer times as the temperature decreases
further, while the plateau height remains almost constant.
This two-step relaxation behavior is another hallmark of the slow
dynamics near the glass transition point.
The terminal relaxation following the plateau is called the
structural or alpha relaxation.
We define the structural relaxation time $\tau_{\alpha}$ by $F_s(k_{\max},t=\tau_{\alpha}) = e^{-1}$.
In Fig.~\ref{ttsp}, we plot $F_s(k_{\max},t)$ against the time scaled by
$\tau_{\alpha}$.
The result shows that the relaxation curves collapse in the alpha relaxation regime.
This is the universal property of the glassy systems known as the
time-temperature superposition (TTS)~\cite{Binder2005}.
Furthermore, all curves for which TTS holds are well fitted by a stretched exponential function
$e^{-(t/\tau_{\alpha})^{\beta}}$ with the exponent $\beta \approx 0.8$.
This value is comparable with that for the KA model~\cite{Kob1994} and for the hard sphere mixture~\cite{Foffi2004}.
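The extraction of $\tau_{\alpha}$ and of the stretching exponent $\beta$ can be sketched as follows (Python; the cutoff used to isolate the alpha decay from the plateau is an illustrative choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def tau_alpha(t, fs):
    """tau_alpha from F_s(k_max, tau_alpha) = 1/e, interpolated in log t.
    Assumes fs actually decays below 1/e inside the time window."""
    i = np.argmax(fs < np.exp(-1.0))          # first point below 1/e
    lt1, lt2, f1, f2 = np.log(t[i-1]), np.log(t[i]), fs[i-1], fs[i]
    return np.exp(lt1 + (np.exp(-1.0) - f1) * (lt2 - lt1) / (f2 - f1))

def fit_kww(t, fs, plateau_cut=0.7):
    """Fit the final decay by A exp(-(t/tau)^beta); returns (A, tau, beta)."""
    kww = lambda t, A, tau, beta: A * np.exp(-(t / tau)**beta)
    sel = (fs < plateau_cut) & (fs > 1e-3)    # keep only the alpha decay
    p0 = (plateau_cut, tau_alpha(t, fs), 0.8)
    popt, _ = curve_fit(kww, t[sel], fs[sel], p0=p0)
    return popt
\end{verbatim}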
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig7_angell.eps}
\caption{
The temperature dependence of the structural relaxation time (filled circles)
and the inverse of the diffusion coefficient (empty squares) for (a) $\rho = 1.5$ and (b) $\rho = 2.0$.
Inset: $\tau_{\alpha}^{-1/\gamma}$ and $D^{1/\gamma}$ as a function of
inverse temperature, where $\gamma$ is fixed to 2.7.
}
\vspace*{-0.1cm}
\label{angell}
\end{center}
\end{figure}
In Fig.~\ref{angell}, the structural relaxation time $\tau_{\alpha}$ and
the self diffusion constant defined by $D \equiv \lim_{t \to \infty}
\ave{R^2(t)}/6t$ are plotted against the inverse temperature.
We plotted $D^{-1}$ and adjusted its ordinate so that the data collapses
with $\tau_{\alpha}$
at high temperatures.
For both densities, $\rho=1.5$ and $2.0$,
$\tau_{\alpha}$ and $D^{-1}$ drastically increase
as the temperature is lowered.
Both data sets almost collapse onto each other over the whole temperature range,
except for a slight deviation at the lowest temperature.
As we shall discuss later, this deviation is a direct reflection of a weak
violation of the Stokes-Einstein relation.
So far, all simulation data show no sign of peculiarity in
the slow dynamics of the high density GCM at the qualitative level.
They are all similar to conventional model glassformers.
In order to assess the properties of the high density GCM more
quantitatively, we compare the simulation results with the predictions of MCT.
For this purpose, we solve the MCT equations
Eqs.~(\ref{eq:mctF})--(\ref{mems}) by numerically integrating the
equations in a self-consistent manner.
As inputs, we used $S(k)$ obtained numerically in the previous subsection.
First, we compute the MCT transition temperature $T\mct$ by solving Eq.~(\ref{eq:nep}).
The results are $T\themct = 2.66 \times 10^{-5}$ and $3.17 \times 10^{-6}$ for $\rho=1.5$ and 2.0, respectively.
Here, we denote the transition temperature as $T\themct$ in order to emphasize that they
are obtained by solving the MCT equations.
The exponent $\gamma \approx 2.7$ is also obtained from the MCT solutions.
MCT predicts that both the self-diffusion coefficient and the structural
relaxation time follow the power law $D^{-1}, \tau_{\alpha} \propto
|T-T\mct|^{-\gamma}$ with the same parameters $\gamma$ and $T\mct$.
We fitted $D^{-1}$ and $\tau_\alpha$ obtained by simulation with this
MCT power law, using $T\mct$ as a fitting parameter.
We denote it as $T\simmct$.
By plotting $D^{1/\gamma}$ and $\tau_{\alpha}^{-1/\gamma}$ against $T^{-1}$,
we found that they both vanish at the same temperature and we identified
$T\simmct = 2.07 \times 10^{-5}$ and $2.68 \times 10^{-6}$ for $\rho =
1.5$ and 2.0, respectively (see the insets of Fig.~\ref{angell}).
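In practice this rectification amounts to a linear extrapolation; a minimal sketch (with $\gamma$ fixed to the MCT value, as in the insets) is:
\begin{verbatim}
import numpy as np

def fit_Tmct(T, tau, gamma=2.7):
    """MCT power law tau ~ |T - Tc|^(-gamma): tau**(-1/gamma) vanishes
    linearly in T, so the zero crossing of a linear fit locates T_mct."""
    y = tau**(-1.0 / gamma)
    a, b = np.polyfit(T, y, 1)       # y = a*T + b
    return -b / a
\end{verbatim}
Calling the same routine with $D^{1/\gamma}$ in place of $\tau_{\alpha}^{-1/\gamma}$ should return the same temperature whenever the two observables share the MCT parameters, which is what we find for the GCM.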
In Fig.~\ref{scale}, we replotted $\tau_{\alpha}$ in Fig.~\ref{angell}
using $\varepsilon \equiv 1 - T/T\simmct$ instead of $1/T$.
The results for the KA model~\cite{Kob1994} are also
plotted.
These data are scaled by a time unit, $t_0$, defined as the relaxation time
at short time scales via $F_s(k_{\max},t= t_0) = 0.95$.
This figure shows that the relaxation times for both the GCM and KA model
ride on the MCT power law for the range of temperatures which the
simulation can access.
The collapse of the data of the two systems onto a single power law reflects
the fact that the values of $\gamma$ for the two systems are close ($\gamma\approx 2.5$ for
the KA model~\cite{Flenner2005d}).
This figure also demonstrates that $\varepsilon$ is
a good parameter to measure the distance from the onset of the
glassy slow dynamics for different systems.
Hereafter, we refer to $\varepsilon$ as the reduced temperature.
In Fig.~\ref{msdfskt} (c), we plotted the simulation data of $\ave{R^2(t)}$ for the KA model at
$T=0.475$ by shifting the time unit in such a way that
the long time diffusive regime collapses with the data for the GCM
at $T=2.93 \times 10^{-6}$ and $\rho=2.0$ whose reduced temperature
is about the same.
Almost perfect collapse of the results for two distinct systems
for the whole time window, including the short time ballistic behavior
and the entry to the plateau regime, suggests that the slow
diffusive behavior of the high density GCM is qualitatively similar to
that of canonical glassformers at least above $T\simmct$, where
our MD simulation can access.
However, there are two noticeable differences between the high
density GCM and conventional model glassformers.
First, the MCT transition temperature obtained from fitting the
simulation data, $T\simmct$, is unprecedentedly close to the theoretical
prediction $T\themct$ for the GCM.
The agreement improves as the density increases.
The deviation of $T\simmct$ from $T\themct$ is only 32 \% for $\rho=1.5$ and
20 \% for $\rho=2.0$.
It is in stark contrast with the KA model for which
$T\simmct = 0.435$ and $T\themct = 0.92$ with the deviation of more
than 100\%~\cite{Kob2002,Flenner2005e}.
The KA model at $T\themct$ is still a high-temperature fluid and $F_s(k,t)$ decays
exponentially without a sign of two-step relaxation.
In contrast, the GCM at $T\themct$ already lies deep in the region where
the plateau of $F_s(k,t)$ is well developed (see Fig.~\ref{msdfskt} (d)).
Considerable deviation of $T\simmct$ from $T\themct$ for conventional model
glassformers is known as one of the serious drawbacks of MCT.
These deviations have been attributed to the effect of
the activated processes in the ragged energy landscapes, which smears out
the clear-cut dynamical
transition~\cite{sastry1998,Brumer2004b,Mayer2006b,Bhattacharyya2008}.
Second, the MCT parameters $T\mct$ and $\gamma$ obtained from fitting simulation data for
$\tau_{\alpha}$ match very well with those obtained from the data for $D^{-1}$.
This is also in contrast with the model glassformers
such as the KA model~\cite{Kob1994,Flenner2005e} and
poly(bi)disperse hard spheres~\cite{Foffi2004,Kumar2006},
for which $T\simmct$ (or the transition density $\rho\simmct$) and $\gamma$ obtained from fitting the
simulation data vary depending on the observables ($\tau_{\alpha}$ or $D^{-1}$)
and also on the components (large or small particles components of the
binary systems).
These variances are partly attributed to the presence of strong dynamic
heterogeneities which decouple the diffusion from the structural relaxation
time, as we shall discuss in the next subsection.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig8_scale.eps}
\caption{
$\tau_{\alpha}/t_0$ as a function of the reduced temperature
$\varepsilon$ for the GCM and KA model.
$t_0$ is the short-time relaxation time defined by $F_s(k_{\max},t_0)=0.95$.
}
\vspace*{-0.1cm}
\label{scale}
\end{center}
\end{figure}
The direct evidence that MCT works better for the GCM than any other
model glassformers is the remarkable agreement of the simulated
$F_s(k,t)$ with the MCT prediction.
In Fig.~\ref{msdfskt} (b) and (d), we plotted the solutions of MCT
for exactly the same reduced temperatures $\varepsilon$ as the
simulation data.
The only free parameter is the time unit, which is determined solely from the
short-time dynamics.
Long time behaviors of the MCT solution agree very well with the simulation
results.
MCT also correctly predicts the exponent of the stretched exponential
relaxation $\beta$.
The agreement is striking given that for other model glassformers, $\varepsilon$
(and sometimes the wavevectors as well) needs to be adjusted at each temperature
to obtain a reasonable fit~\cite{Kob2002,Voigtmann2004}
(an exception is the four-dimensional system~\cite{Charbonneau2010}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig9_fkt.eps}
\caption{
The intermediate scattering function at (a) $k=8.4$ and (b)
$k=6.2$.
For both panels, $\rho=2.0$ and the temperatures are, from left to right,
$T \times 10^6 =$ 10, 7, 5, 4, 3.4, 3.2, 3 and 2.93.
The inset in (b) shows a closeup of the weak and long tails of the main panel.
}
\vspace*{-0.1cm}
\label{fkt}
\end{center}
\end{figure}
\subsection{Intermediate scattering function}
Next, we look at the intermediate scattering function
$F(k,t)$.
For conventional model glassformers, it is known that behavior of $F(k,t)$
is qualitatively the same as that of $F_s(k,t)$, except for the wiggly
$k$-dependence of the nonergodic parameter for the former,
reflecting the wiggly profiles of the static structure factor (see the discussion below).
In contrast, for the high density GCM, $F(k,t)$ and $F_s(k,t)$ differ
from each other considerably.
Fig.~\ref{fkt} shows $F(k,t)$ at two wavevectors.
Fig.~\ref{fkt} (a) is the result at $k = k_{\max}(\approx 8.4)$ which is
the peak position of $S(k)$.
There, the relaxation behavior of $F(k,t)$ is very similar to that
of $F_s(k,t)$, suggesting the relaxations of both functions at
the interparticle length scales are dictated by the same relaxation mechanism.
Fig.~\ref{fkt} (b) is the result at
$k=6.2$, which corresponds to a slightly longer length scale
than the interparticle distance.
The relaxation of $F(k,t)$ is very fast and shows no sign of two-step
relaxation.
$F(k,t)$ has almost fully relaxed by $t\sim 10$, which is much shorter than the
onset time of the caging at which the plateau of $\ave{R^2(t)}$ appears
(see Fig.~\ref{msdfskt}).
The quick decay is followed by phonon-like oscillations and a very weak tail persisting
over the time scale of the structural relaxation time.
This tail vanishes at smaller $k$'s.
This behavior is in sharp contrast with the KA model, where the
relaxation time at small wavevectors is comparable with that at the
interparticle distance and the plateau heights remain finite down to
very small wavevectors~\cite{Gleim1998}.
These results indicate that, in the high density GCM,
the large scale density fluctuations are decoupled from the slow
structural relaxation processes at the shorter length scales.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig10_nep.eps}
\caption{
Nonergodic parameters for the collective part $F_{\infty}(k)/S(k)$
(upper panel) and self part $F_{s,\infty}(k)$ (lower panel) of the
intermediate scattering functions.
Filled circles are the simulation data
and solid lines are the MCT solutions.
The dotted line in the lower panel is a fit by a Gaussian function.
}
\vspace*{-0.1cm}
\label{nep}
\end{center}
\end{figure}
In order to see this qualitative difference of
$F(k,t)$ of the GCM more clearly,
we plot the $k$-dependence of the plateau heights, or the
nonergodic parameter, $F_{\infty}(k)$ and $F_{s,\infty}(k)$
together with the MCT predictions obtained from Eq.~(\ref{eq:nep}).
In Fig.~\ref{nep}, we show $F_{\infty}(k)/S(k)$ and $F_{s,\infty}(k)$ at $\rho=2.0$
(filled circles) and the MCT predictions at the same density (solid lines).
It demonstrates beyond doubt that MCT beautifully captures the vanishing
plateau and the decoupling between the self and collective dynamics at small
wavevectors.
One observes that $F_{\infty}(k)/S(k)$ above $k_{\max}$
remains comparable to $F_{s,\infty}(k)$, while keeping a wiggly
behavior characteristic of the
collective density fluctuations.
The absence of slow dynamics
at small $k$'s is a consequence of the anomalous structural properties inherent
in the high density GCM.
In the previous subsection, we discussed that the static structure
factor at the small wavevectors, or the compressibility,
is extremely small compared with those of ordinary model glassformers.
This makes the amplitude of the memory kernel at small $k$'s
negligibly small (see Eq.~(\ref{mem})).
Consequently the large scale fluctuations decouple from the
fluctuations at the length scales of the interparticle distance which
trigger the glassy slow dynamics.
We argue that this decoupling between short and long length
scales should be commonly observed in systems with small
compressibility, which is a universal feature of
dense, long-range interacting systems, including Coulomb
systems, as predicted in the framework of MCT~\cite{Shiroiwa2010}.
The nonergodic parameters in Fig.~\ref{nep} exhibit another subtle but
noticeable feature which may have relevance to fundamental problems of MCT as the mean field
description of the glass transition.
Although MCT reproduces the overall behaviors of the nonergodic parameters
for both $F_{\infty}(k)/S(k)$ and $F_{s,\infty}(k)$,
its prediction systematically overestimates the simulation results at
the intermediate wavevectors (in the range of, say, $5 \lesssim k \lesssim 20$).
As shown in Fig.~\ref{nep} (b), we find that
the simulation data for $F_{s,\infty}(k)$ are well fitted by a Gaussian
function, whereas the MCT nonergodic parameter has a small but
non-negligible shoulder which the Gaussian function cannot fit.
This shoulder is reminiscent of those observed in the MCT solution for
hard sphere glasses in large spatial dimensions~\cite{Schmid2010b, Ikeda2010}.
There, we have found that the deviation from the Gaussian function for
$F_{s,\infty}(k)$ increases as the dimension $d$ increases.
This observation has led us to conclude that MCT is not rigorously
a bona fide mean field theory~\cite{Ikeda2010}.
This glitch of MCT which we found in one of the mean field limits,
{\it i.e.}, the high $d$ limit, could also show up in another mean field limit, that is,
the long-ranged interaction limit, which is realized in the high density
limit of the ultrasoft potential systems such as the GCM.
This may explain the shoulder of the $F_{s,\infty}(k)$ in
Fig.~\ref{nep} (b).
Remember that the anomalously small $S(k)$ at small $k$'s is
also due to the long-ranged interaction.
Interestingly, this small $S(k)$
may explain the anomalous shoulder of the MCT solution.
By artificially enhancing the amplitude of $S(k)$ at small $k$'s
by a minute amount and plugging the modified $S(k)$ into the MCT equation, we find that
the nonergodic parameter $F_{\infty}(k)$ at small $k$'s jumps from zero to
finite values.
At the same time, the shoulder of $F_{s,\infty}(k)$ at the intermediate wavevectors
disappears and MCT's $F_{s,\infty}(k)$ gets closer to the simulation results.
This observation implies that the long range interaction affects the
static properties of the large length scales, which eventually amplifies
the putative non-Gaussian behaviors of the MCT solution.
A subtle interplay between the long and short length fluctuations
may be quite common for the glass and/or jamming transitions:
For example,
the hyper-uniformity (vanishing $S(k)$ at small $k$)
and diverging radial distribution function at the contact length $r =\sigma$
are known to be
the two facets of a universal character of the jamming transition~\cite{Torquato2003}.
\subsection{Violation of Stokes-Einstein relation}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig11_tauk.eps}
\caption{
(a) Reduced-temperature dependence of $D\tau_{\alpha}$ at $\rho=1.5$ (diamonds) and 2.0 (squares).
The results for the KA model are also plotted (triangles)~\cite{Kob1994}.
All results are normalized by those at a high temperature $(D\tau_{\alpha})_{{\mbox{\scriptsize ref}}}$.
(b) $Dk^2\tau(k)$ for $T\times 10^6=7.0$ (circles), 4.0 (diamonds)
and 2.93 (squares) at $\rho=2.0$. The arrow indicates $k_{\max}$, the first peak of $S(k)$.
}
\vspace*{-0.1cm}
\label{tauk}
\end{center}
\end{figure}
For many glassformers, the Stokes-Einstein (SE) relation $D \approx T/\eta$, where
$\eta$ is the shear viscosity, is violated near the glass transition
point and the violation is believed to be the manifestation of
spatially heterogeneous dynamics which grows as the temperature is
lowered~\cite{ediger2000}.
Indeed, MCT cannot capture the SE violation due to its mean field
character.
In this section, we show that the SE violation for the high density GCM is
suppressed.
In Fig.~\ref{tauk} (a), we plot $D\tau_{\alpha}$ for $\rho=1.5$ and 2.0
normalized by the values at a high temperature, as a function of $\varepsilon$.
Note that $D\tau_{\alpha}$ instead of $D\eta$ has been plotted, because
$\eta$ and $\tau_{\alpha}$ are roughly proportional to each other.
In the same figure, we have also plotted the results for the large and
small particles for the KA model~\cite{Kob1994}.
It is obvious that the variation of $D\tau_{\alpha}$
for the GCM is much weaker than that of the KA model.
Similar suppression of the SE violation was observed in the four-dimensional
hard sphere system~\cite{Charbonneau2010}.
$\tau_{\alpha}$ was defined by $F_s(k, \tau_{\alpha})= e^{-1}$ at $k = k_{\max}$.
In order to study the length scales which are relevant to the SE violation,
we generalize the structural relaxation time to the $k$-dependent form,
$\tau(k)$, defined by $F_s(k,\tau(k)) = e^{-1}$.
Note that $\tau_{\alpha} = \tau(k_{\max})$.
In the small wavevector limit,
the self intermediate scattering function behaves as
$F_s(k,t) =e^{-Dk^2t}$. Therefore, $\tau(k) \sim 1/Dk^2$ as
$k\rightarrow 0$.
In the opposite limit, the system
should behave as an ideal gas, so that
$ F_s(k,t) = {\displaystyle e^{-\kb T k^2t^2/2m}}. $
Thus, $\tau(k) \propto 1/k$ as $k \rightarrow \infty$~\cite{hansen1986}.
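The quantity analyzed below then follows directly; a minimal sketch, reusing the $e^{-1}$ crossing routine from the previous section (again with hypothetical names), is:
\begin{verbatim}
import numpy as np

def Dk2_tau(t, fs_of_k, k, D):
    """D k^2 tau(k) with tau(k) from F_s(k, tau(k)) = 1/e.
    fs_of_k has shape (len(k), len(t)); the combination tends to 1
    for k -> 0 and grows like k in the ideal gas limit."""
    tau_k = np.array([tau_alpha(t, fs) for fs in fs_of_k])
    return D * k**2 * tau_k
\end{verbatim}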
Fig.~\ref{tauk} (b) shows $Dk^2 \tau(k)$ as a function of $k$ for $\rho=2.0$
and several temperatures.
Similar analysis for the KA model has been done by Flenner {\it et al}.~\cite{Flenner2005e}.
At a high temperature $T=7.0\times10^{-6}$, where
the two-step relaxation of $F_s(k,t)$ has not yet set in (see
Fig.~\ref{msdfskt} (d)), $Dk^2\tau(k)$ is nearly constant and almost 1 at small wavevectors
up to $k_{\max}$.
It then decreases as $k$ increases further, followed by a turnover to a
mildly increasing function.
The decrease is a reflection of the vanishing of the cages at
length scales shorter than the interparticle distance.
The increase at larger $k$ is a crossover to the ideal gas limit where
$Dk^2 \tau(k) \propto k$.
The qualitative behavior remains unchanged at $T = 4.0\times10^{-6}$,
but the drop at $k \gtrsim k_{\max}$ is more pronounced,
reflecting the stronger cage effect at lower temperatures.
At the lowest temperature $T=2.93\times10^{-6}$ which corresponds to
about $\varepsilon \approx 0.075$, the drop at $k \gtrsim
k_{\max}$ is more dramatic.
Furthermore, a slight positive bump at $3
\lesssim k \lesssim k_{\max}$ is observed.
This deviation corresponds to the weak SE violation observed in Fig.~\ref{tauk} (a).
This behavior is noticeably different from that for the KA model
for which $Dk^2 \tau(k)$ significantly increases as $k$ increases before
dropping near $k_{\max}$~\cite{Flenner2005e}.
\subsection{Non Gaussian dynamics}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig12_alpha.eps}
\caption{
(a) The non-Gaussian parameter $\alpha(t)$ for $T\times 10^6 = 10$, 7, 5, 4, 3.4, 3.2, 3 and 2.93 at $\rho=2.0$.
(b) The temperature dependence of the maximum value of $\alpha(t)$ at $\rho=1.5$ (diamonds) and 2.0 (squares).
The results for the large (up triangles) and small (down triangles)
particles of the KA model are also plotted~\cite{Kob1994}.
}
\vspace*{-0.1cm}
\label{alpha}
\end{center}
\end{figure}
Another good measure to monitor the extent of the departure from
the mean field behavior is the non-Gaussianity of the dynamics.
At high temperatures, $F_s(k,t)$, or its real space expression,
$G_s(r,t)\equiv N^{-1}\sum_i \ave{\delta(|\vec{R}_i(t) - \vec{R}_i(0)|-r)}$,
also known as the self part of the van Hove function, becomes
almost a Gaussian function.
However, as the temperature is lowered into the supercooled regime, these
functions substantially deviate from the Gaussian form.
This deviation is also considered to be a manifestation of dynamic heterogeneities.
To quantify this, it is common to introduce the non-Gaussian parameter defined by
\begin{equation}
\alpha(t) \equiv \frac{3\ave{R^4(t)}}{5\ave{R^2(t)}^2} -1,
\end{equation}
where $\ave{R^4(t)} = N^{-1}\sum_i \ave{|\vec{R}_i(t) - \vec{R}_i(0)|^4}$.
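A minimal sketch of this estimator (single time origin, as before) is:
\begin{verbatim}
import numpy as np

def non_gaussian(traj):
    """alpha(t) = 3<R^4(t)>/(5<R^2(t)>^2) - 1 from a (n_frames, N, 3) trajectory."""
    dr2 = np.sum((traj - traj[0])**2, axis=2)   # squared displacements
    r2 = dr2.mean(axis=1)                       # <R^2(t)>
    r4 = (dr2**2).mean(axis=1)                  # <R^4(t)>
    return 3.0 * r4 / (5.0 * r2**2) - 1.0       # zero for a Gaussian process
\end{verbatim}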
In Fig.~\ref{alpha} (a), we plot $\alpha(t)$ for $\rho=2.0$ at several temperatures.
It shows typical behaviors of the supercooled liquids, characterized by
pronounced peaks at $t$ near or slightly before $\tau_{\alpha}$ whose heights increase
as the temperature decreases.
However, the heights of the peaks are considerably lower than those of other model
glassformers at comparable reduced temperatures $\varepsilon$.
Fig.~\ref{alpha} (b) shows the temperature dependence of the maximum
value of the non-Gaussian parameter $\alpha_{max}$ for both $\rho=1.5$ and
2.0.
The results for the KA model are also plotted~\cite{Kob1994}.
Similarly to the result for the SE violation, $\alpha_{max}$ of the
GCM is far smaller than that of the KA model.
Furthermore, one observes that $\alpha_{max}$ for $\rho=2.0$ is slightly smaller than that
for $\rho=1.5$.
These results suggest that the dynamic heterogeneities are suppressed for the GCM and
the suppression is stronger at higher densities.
This lends further support to the view that the high density GCM is
more ``mean-field-like'' than other glassformers.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig13_disp.eps}
\caption{
The probability distribution of the logarithm of the particle displacements.
(a) The results for $T = 7.0 \times 10^{-6}$ and $\rho=2.0$.
From left to right, $t$ = 44, 180, 512, 1024, 2896 and 5792.
(b) The results for $T = 2.93 \times 10^{-6}$ and $\rho=2.0$.
From left to right, $t$ = 500, 32000, 181000, 362000, 1448000, and 4096000.
}
\vspace*{-0.1cm}
\label{disp}
\end{center}
\end{figure}
More direct evidence that the dynamics of the high density GCM is closer
to a Gaussian process
and dynamic heterogeneities are weaker
can be obtained by monitoring the probability distribution of the
particle displacement $r$, denoted as $P(\log_{10} r;t)$.
$P(\log_{10} r;t)$ is related to the van Hove function $G_s(r,t)$
by~\cite{Cates2004,Reichman2005,Flenner2005e}
\begin{equation}
P(\log_{10} r;t)= (\ln 10) 4 \pi r^3 G_s(r,t).
\label{eq:Plog}
\end{equation}
If the dynamics is purely a Gaussian process, $G_s(r,t)$ also becomes a Gaussian function,
\begin{equation}
G_s(r,t) = \left( \frac{3}{2\pi\ave{R^2(t)}} \right)^{3/2} e^{-3r^2/2\ave{R^2(t)}}.
\label{eq:GsGauss}
\end{equation}
From Eqs.~(\ref{eq:Plog}) and (\ref{eq:GsGauss}), $P(\log_{10} r;t)$ becomes a function solely of
$r/\sqrt{\ave{R^2(t)}}$:
\begin{equation}
P(\log_{10} r;t) = (\ln 10) 4 \pi \left( \frac{3 r^2}{2\pi\ave{R^2(t)}} \right)^{3/2} e^{-3r^2/2\ave{R^2(t)}}.
\label{eq:GsGauss2}
\end{equation}
Thus, the shape of $P(\log_{10} r;t)$ for a Gaussian process
should be unchanged as $t$ is varied,
but only shifted if plotted as a function of $\log_{10} r$.
The peak height should be a constant value of $2.13$.
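The constant follows directly from Eq.~(\ref{eq:GsGauss2}): maximizing $r^3 e^{-3r^2/2\ave{R^2(t)}}$ gives $r^2=\ave{R^2(t)}$, so the height is independent of time. A one-line numerical check:
\begin{verbatim}
import numpy as np
# peak of Eq. (GsGauss2), attained at r^2 = <R^2(t)>:
peak = np.log(10) * 4 * np.pi * (3 / (2 * np.pi))**1.5 * np.exp(-1.5)
print(round(peak, 2))   # -> 2.13
\end{verbatim}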
In Fig.~\ref{disp},
we plotted the simulated $P(\log_{10} r;t)$ for $\rho=2.0$
at two temperatures:
$T= 7.0\times 10^{-6}$ ($\varepsilon \approx 1.2$)
and
$T= 2.93\times 10^{-6}$ ($\varepsilon \approx 0.075$).
The high temperature result in Fig.~\ref{disp} (a) shows that
$P(\log_{10} r;t)$ is almost given by Eq.~(\ref{eq:GsGauss2});
the shape of the function is almost Gaussian
and the peak height remains very close to $2.13$ over long times.
On the other hand,
the low temperature result in Fig.~\ref{disp} (b) shows that
the peak height of the function becomes lower and the width becomes slightly larger
at $t \sim \tau_{\alpha}$.
This non-Gaussian behavior in the beta-to-alpha relaxation regime
is a common property of $P(\log_{10} r;t)$ in a mildly supercooled state.
Note that, however, the extent of the non-Gaussianity shown in
Fig.~\ref{disp} (b) is much weaker than that of other glassformers such as the
KA model~\cite{Flenner2005e}.
$P(\log_{10} r;t)$ for typical model glassformers
is known to split into a bimodal shape at low temperatures,
corresponding to a separation of the constituent particles
into mobile and immobile ones.
This is one of the most salient features of the dynamic heterogeneities.
The peak of $P(\log_{10} r;t)$ in Fig.~\ref{disp} (b) does not show any
hint of splitting into a bimodal shape.
$P(\log_{10} r; t)$ of the KA model at $\varepsilon = 0.08$ ($T = 0.47$
in the LJ unit),
a reduced temperature comparable to that of Fig.~\ref{disp} (b),
is completely separated into two peaks, corresponding to the
distributions of mobile and immobile particles.
The decrease of the peak height of $P(\log_{10} r; t)$ in Fig.~\ref{disp} (b)
is comparable to that of the KA model at a much higher temperature,
$\varepsilon = 0.38$ ($T = 0.6$ in the LJ unit)~\cite{Flenner2005e}.
The above results strongly suggest that the dynamics of the high density GCM is
more Gaussian-like than that of conventional model glassformers and that the
dynamic heterogeneities are strongly suppressed.
\section{Summary and Outlook}
In this paper, we presented a detailed analysis of the dynamics of the
high density GCM.
The results are summarized below.
(i)
The crystal nucleation becomes slower as the density
increases.
Analysis of the two orientational bond order parameters, $\bar{q}_4$ and
$\bar{q}_6$, reveals that the crystal structure is bcc at all densities
beyond the reentrant point.
(ii) The systems which failed to crystallize show clear two-step and stretched exponential relaxation
in both the self and collective intermediate scattering functions, which are the hallmarks of a
supercooled fluid near the glass transition point.
All dynamical properties which we have analyzed are well
described by MCT.
First, the temperature dependence of the diffusion coefficient and the
structural relaxation time is well fitted by the MCT power law.
The parameter $T\simmct$ used to fit the simulation data is
unprecedentedly close to the theoretical prediction.
The time dependence of the self intermediate scattering function $F_s(k,t)$
is well fitted by MCT, using the reduced temperature $\varepsilon$ as a sole parameter.
Furthermore, the nonergodic parameters for both collective and self
intermediate scattering functions, $F_{\infty}(k)$ and $F_{s,\infty}(k)$,
are well described by MCT.
Here we find two noticeable differences from the typical glassformers.
First, the shape of $F_{\infty}(k)$ is qualitatively different from that of
$F_{s,\infty}(k)$ in the small-wavevector regime, where $F(k, t)$ decays very fast
and the nonergodic parameter vanishes, whereas $F_s(k,t)$ decays very
slowly and its nonergodic parameter remains finite down to $k=0$.
We conjecture that this decoupling of the collective density dynamics
from the single particle dynamics is universal for systems with
long-ranged interactions.
It indicates that the large-scale density fluctuations are decoupled from
the slow structural relaxation processes.
Similar decoupling has been predicted from the MCT analysis of the systems with the power law
interactions $v(r) \sim 1/r^{n}$ with small $n$~\cite{Shiroiwa2010}.
Second, the agreement between MCT and simulation for $F_{s,\infty}(k)$
is satisfactory but arguably worse than that for other model glassformers
such as the KA model~\cite{Gleim1998,Foffi2004}.
We found a weak shoulder at the intermediate wavevectors.
This shoulder is reminiscent of those found in the MCT analysis of the
hard sphere glasses at the high dimensions~\cite{Ikeda2010}.
We conjecture that the anomalous shoulders are a deficiency of MCT
which appears only in the mean-field limit.
(iii) Dynamic heterogeneities are suppressed in
the high density GCM.
The SE violation is very weak and the peak height of the non-Gaussian
parameter is much lower than the conventional model glassformers at the comparable
reduced temperatures.
The weak dynamic heterogeneities of
the high density GCM were most obvious from
the observation of the probability distribution of the particle
displacement $P(\log_{10} r; t)$.
We find no obvious change in the shape of
$P(\log_{10} r; t)$, which remains almost Gaussian, though the width
broadens slightly around the beta-to-alpha relaxation regime.
Even at the lowest reduced temperature, at which typical
model glassformers exhibit a very clear bimodal distribution
of mobile and immobile particles due to the growing dynamic
heterogeneities, the probability distribution of the GCM remains a
single-peaked function.
We conclude that the high density GCM is the ideal model system to
study the glass transition.
It is not only the cleanest glass model, in that it is a one-component
system, but also the closest to a {\it ``mean-field''} model, in that
dynamic heterogeneities are strongly suppressed
and the way MCT describes the simulation results
parallels the way it does for high dimensional systems.
The mean-field nature comes from the long-range nature of the
interaction potential, which is caused by the overlapping of the
particles at the high densities.
Both the excellent agreement with MCT and the small deviation from it (the
shoulder of $F_{s,\infty}(k)$) also lead us to reconsider the validity of
MCT as the mean field theory of the glass transition.
Mean-field models of the glass transition have been proposed and
analyzed by taking the long-range limit of the
interactions~\cite{Dotsenko2004}, but such a limit has never been realized in
a simulation box.
The other mean field limit, {\it i.e.}, the high dimension
limit, is another interesting challenge, but given the current CPU power,
going beyond $d=5$ would be unrealistic.
In this sense,
the high density GCM might be the first realistic fluid model
which may be able to bridge the gap between finite dimensional
systems and the mean-field limit.
It is tempting to consider the high density limit of the GCM, where
the small parameter $1/\rho$ may make the analytical
treatment, especially of the static/thermodynamic properties, tractable and
lead us to the {\it exact} mode-coupling theory (or the like).
\acknowledgments
This work is partially supported by Grant-in-Aid for JSPS Fellows (AI),
KAKENHI; \# 21540416 (KM), and Priority Areas ``Soft Matter Physics''
(KM).
\section{Convex codes in dimension 1 and 2}
Megan K. Franke and Samuel Muthiah \cite{franke2018every} studied convex codes with the aim of giving an explicit relation between convex and open convex codes. They made the following two conjectures, \ref{conMFSM1} and \ref{conMFSM2}, stated below:
\begin{conjecture}\cite[Conjecture 1]{franke2018every} \label{conMFSM1}
A code $ \mathcal{C} $ has minimal convex embedding dimension 1 if and only if it has minimal open convex embedding dimension 1.
\end{conjecture}
\begin{conjecture}\cite[Conjecture 2]{franke2018every}
Suppose $ \mathcal{C} $ is open convex and has a minimal open convex embedding dimension of 2. Then the minimal convex embedding dimension of $ \mathcal{C} $ is 2. \label{conMFSM2}
\end{conjecture}
We found that the code $ \mathcal{C}=\{12,23\} $ is a counterexample to Conjecture \ref{conMFSM1}. The main Theorem \ref{mainth2} of this section is a modification of Conjecture \ref{conMFSM1}, and it gives a relationship between open convex codes and closed convex codes in dimension 1. We then turn to Conjecture \ref{conMFSM2}. This conjecture seems to hold true; we do not yet have a proof, but we have a class of examples which satisfy it, which we produce at the end of this section (Remark \ref{remarksec3}).
\subsection{Relationship between open convex and closed convex codes}
In this section, we assume $ \forall x,y\in \mathbb{R},\ d(x,y) =\vert x -y \vert,$ the standard metric on $ \mathbb{R}. $ Let $ \mathcal{C} $ be a code on $ n $ neurons which has an open convex realization $ \mathcal{U} $ in dimension 1. Let $ \mathcal{U}=\{I_1,I_2,\dots,I_n\} $ be a collection of open intervals in $ \mathbb{R} $ such that $ \mathcal{C}(\mathcal{U})= \mathcal{C} $. For each $j,$ write $ I_j=(a_j,b_j) $ with $ a_j\not=b_j, $ where we call $ a_j $ the initial point and $ b_j $ the final point of $ I_j. $
Denote by $ \epsilon_{u}= \displaystyle\min_{1\leq i,j\leq n} d(b_i,a_j) $ the epsilon distance of the realization $ \mathcal{U}. $ For convenience, we use the ordered pair $ (\mathcal{U},\epsilon_{u}) $ whenever we have a realization $ \mathcal{U} $ together with its epsilon distance $ \epsilon_{u}. $
\begin{proposition} \label{openepi}
Let $ (\mathcal{U},\epsilon_{u}) $ (with $ \epsilon_{u} $ possibly zero) be any open convex realization of a code $ \mathcal{C} $ in dimension 1. Then there exists another open convex realization $ (\mathcal{U}',\epsilon_{u'}) $ of $ \mathcal{C} $ such that $ \epsilon_{u'}> 0.$
\end{proposition}
\begin{proof}
\begin{caseof}
\casea{$ \epsilon_u > 0 $}{In this case there is nothing to prove, as $ (\mathcal{U},\epsilon_u) $ itself works.}
\casea{$ \epsilon_u =0 $}{In this case, as $ \epsilon_u=0,$ there exist some $ i,j \in [n] $ with $ i\not= j $ such that $ d(b_i,a_j)=0. $ Enumerate these pairs as $ (i_k, j_k) $ for $ \ k\in [n-1] $ whenever $ d(b_{i_k},a_{j_k})=0. $ Fix a $ k $ and choose $ 2\delta= b_{i_k}- \max\{a_l,b_l \mid a_l,b_l\in [a_{i_k},b_{i_k})\} $. } Then let $ I'_{i_k} = (a_{i_k}',b_{i_k}'), $ where $ a'_{i_k}=a_{i_k} $ and $ b'_{i_k}=b_{i_k}- \delta. $ Apply the same procedure for all such $ k $'s to obtain a new set of open intervals $\mathcal{U}' =\{I'_1,I'_2,\dots,I'_n\}$, keeping the remaining intervals unchanged. It is clear that $ \epsilon_{u'} > 0 $ for $ \mathcal{U}'. $
We can see that the atoms which were singleton sets have become intervals of length $ \delta, $ and that no new atoms are added nor are any existing atoms deleted. Hence no new codeword appears in $ \mathcal{C}(\mathcal{U}'), $ so we have $ \mathcal{C}(\mathcal{U})=\mathcal{C}(\mathcal{U}')=\mathcal{C}. $ Therefore we have an open convex realization of $ \mathcal{C} $ with epsilon distance greater than zero.
\end{caseof}
\vspace{-0.7cm}
\end{proof}
Now we observe that Proposition \ref{openepi} guarantees an open convex realization with $ \epsilon > 0 $ whenever the code is open convex in dimension 1. Therefore the Proposition can be restated as follows:
\begin{remark}
Let $ \mathcal{C} $ be a code which is open convex with a realization in $ \mathbb{R} $ (i.e., dimension 1). Then we can always find an open convex realization $ (\mathcal{U},\epsilon) $ of $ \mathcal{C} $ such that $ \epsilon >0. $
\end{remark}
When a code $ \mathcal{C} $ is closed convex in dimension 1, the sets in its realization $ \mathcal{U} $ can also be singletons, as singletons are closed in $ \mathbb{R}. $ We will now show that if there are singleton sets in $ \mathcal{U}, $ we can find another realization $ \mathcal{U}' $ of $ \mathcal{C} $ in which all the sets are closed intervals of $ \mathbb{R}. $ Throughout, when we say closed intervals we exclude singletons.
\begin{lemma} \label{lemclo}
Let $ \mathcal{U}= \{I_i\}_{i=1}^n $ be any closed convex realization of $ \mathcal{C} $ in dimension 1, possibly with some of the $ I_j $'s being singletons. Then there exists a closed convex realization $\mathcal{U}'= \{I_i'\}_{i=1}^n $ of $ \mathcal{C} $ in which every $ I'_i $ is a closed interval.
\end{lemma}
\begin{proof}
For some $ j\in [n], $ let $ I_j =\{x\} $ be a singleton set in $ \mathcal{U}. $ We will give another realization $ \mathcal{U}' $ with all closed intervals. For now, assume that there is only one such set $ I_j\in \mathcal{U}; $ if there are more such sets, simply apply the following procedure to each of them separately.
\begin{caseof}
\casea{$ x $ does not lie on the boundary of any $ I_k\ (k\not = j,\ 1\leq k \leq n ) $ }{Let $ I_i=[a_i,b_i] $ for $ i\not=j, $ and choose $ 2\delta= \min\{ d(a_i,x), d(x,b_i)\mid\ i\not = j,\ 1\leq i \leq n \}. $ Then let $ I'_j = \left[x- {\delta}, x+ {\delta}\right]. $ Let $ \mathcal{U}' $ consist of the sets $ I'_i=I_i $ for $ i\not= j,$ together with $ I_j'. $ This new realization has no new codewords, as $ I'_j$ does not intersect any $ I'_i\ (i\not=j) $, and we have not deleted any old atoms. }
\casea{$ x $ lies on boundary of a few $ I_k$'s}{There can arise two sub-cases here:
\begin{subcaseof}
\subcase{Intersection of all such $ I_k $'s is a closed interval}{In this case, since the intersection is a closed interval, $ x $ is either the initial point or the final point of all such $ I_k $'s, and therefore these intervals are nested. We assume that $ x $ is the final point of all the $ I_k $'s, i.e., $ b_k=x $ for all such $ k $; the case in which $ x=a_k $ for all such $ k $ follows mutatis mutandis. Let $ 2\delta = b_{k}-\max\{a_l,b_l \mid a_l,b_l\in (-\infty,x)\} $, and define $ I'_j=[x-\delta, x]. $ Also, let $ I_i'=I_i $ for all $ i\not=j.$ With this new collection $ \mathcal{U}' =\{I_i'\}_{i=1}^ n $ one can check that we have neither added nor deleted any codewords.}
\subcase{Intersection of all such $ I_k $'s is the point $ x $}{In this case, let such $ I_k $'s be $ I_{k_1},I_{k_2},\dots,I_{k_r}\ (1\leq r\leq n). $ There may be a few $ I_k $'s in which $ x=a_{k_m} $ and a few in which $ x=b_{k_p}\ (m\not = p). $ Label them as $ I_{a_k} $'s and $ I_{b_k} $'s respectively. We will create a closed interval using these, so that it becomes easier to replace $ I_j $ by a closed interval. Let $ 2\delta = b_{k_p}-\max\{a_l,b_l \mid a_l,b_l\in (-\infty,x)\}. $ Consider all the $ I_{a_k} $'s and change them to $ I'_{a_k} = [a'_k,b_k]$, where $ a'_k= a_k-\delta. $ Then choose $ I'_j= [a'_k,x]$. With this new collection $ \mathcal{U}' =\{I_i'\}_{i=1}^ n $ one can check that we have neither added nor deleted any codewords.}
\end{subcaseof}}
\end{caseof}
\vspace{-0.7cm}
Hence the proof.
\vspace{-0.2cm}
\end{proof}
\begin{proposition} \label{closepi}
Given any closed convex realization $ (\mathcal{U},\epsilon_u) $ of a code $ \mathcal{C} $ (with $ \epsilon_{u} $ possibly zero), there exists another closed convex realization $ (\mathcal{U}',\epsilon_{u'}) $ of $ \mathcal{C} $ such that $ \epsilon_{u'}> 0.$
\end{proposition}
\begin{proof}
\begin{caseof}
\casea{Every $ I_j $ in $ \mathcal{U} $ is a closed interval}{The proof is similar to that of Proposition \ref{openepi}, except that we set $ I'_{j_k}= [a'_{j_k},b'_{j_k}], $ where $ a'_{j_k}=a_{j_k}- \delta $ and $ b'_{j_k}=b_{j_k}. $ }
\casea{Some $ I_j $ is a singleton set}{Use Lemma \ref{lemclo} to convert the singleton sets to closed intervals; this reduces to Case 1.}
\end{caseof}
\vspace{-0.7cm}
\end{proof}
Note that Proposition \ref{closepi} can be restated as follows:
\begin{remark}
Let $ \mathcal{C} $ be a code which is closed convex with a realization in $ \mathbb{R} $ (i.e., dimension 1). Then there exists a closed convex realization $ (\mathcal{U},\epsilon) $ such that $ \epsilon >0. $
\end{remark}
\noindent The following theorem is the main result which gives the relationship between open convex and closed convex codes of dimension 1.
\begin{theorem}
Suppose $ \mathcal{C} $ is a code on $ n $ neurons. Then it is open convex with minimum dimension 1 if and only if it is closed convex with minimum dimension 1. \label{mainth2}
\end{theorem}
\begin{proof}
The idea of this proof is to consider $ \epsilon = \displaystyle\min_{l,r\in [n]}\{d(b_l,a_r)\} $ and build new intervals with it. Propositions \ref{openepi} and \ref{closepi} ensure that there exist open and closed realizations with $ \epsilon >0. $
Consider $ \mathcal{C} $ to be open convex with $ (\mathcal{U},\epsilon) $ as its open realization such that $ \epsilon>0. $ Let $ J_i=[a'_i,b'_i], $ where $ a'_i=a_i+\epsilon/3 \text{ and } b'_i=b_i-\epsilon/3. $ Since $ \epsilon>0,$ the change in the intervals does not add any new codeword or delete the old ones. The realization $ \mathcal{U}'=\{J_i\}_{1\leq i\leq n} $ thus shows that $ \mathcal{C} $ is closed convex.
The proof for the converse is similar.
\end{proof}
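The shrinking argument in the proof is easy to check computationally. The following minimal sketch (in Python) computes the code of a one-dimensional realization by testing membership at all endpoints and at midpoints between consecutive endpoints; the toy realization at the bottom is a hypothetical example, not one taken from the text.
\begin{verbatim}
def code_of_intervals(intervals, closed):
    """Codewords C(U) of a 1D realization; intervals[i] = (a_i, b_i).

    closed=True treats every interval as [a, b]; closed=False as (a, b).
    Codewords are returned as frozensets of neuron labels (1-based).
    """
    pts = sorted({x for ab in intervals for x in ab})
    # every atom contains an endpoint or a midpoint of consecutive endpoints
    tests = pts + [(u + v) / 2 for u, v in zip(pts, pts[1:])]
    inside = (lambda x, a, b: a <= x <= b) if closed \
        else (lambda x, a, b: a < x < b)
    code = {frozenset(i + 1 for i, (a, b) in enumerate(intervals)
                      if inside(x, a, b)) for x in tests}
    return code - {frozenset()}

# toy check of the eps/3 shrink: U_open realizes C = {1, 12, 2}, eps = 1
U_open = [(0.0, 2.0), (1.0, 3.0)]
eps = 1.0
U_closed = [(a + eps / 3, b - eps / 3) for a, b in U_open]
assert code_of_intervals(U_open, False) == code_of_intervals(U_closed, True)
\end{verbatim}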
\subsection{Convex codes that are not realizable in $ \mathbb{R} $}
\begin{proposition}
Let $ \mathcal{C} $ be a code with $ \{i,j,k,\sigma\} \subseteq
\mathcal{C}, $ such that $i,j,k\in \sigma\subset[n] $ and $ i,j,k $ are all distinct elements of $ [n]. $ Then $ \mathcal{C} $ can never be a convex code in dimension $ 1. $ \label{lemcon}
\end{proposition}
\begin{proof}
We show that this code cannot be convex realized by any choice of convex sets in $ \mathbb{R}. $ Suppose we construct sets $ U_i,U_j,U_k $ as part of a convex realization $ \mathcal{U} $ of $ \mathcal{C}. $ We observe that $ U_l \cap \atom{l}\not=\emptyset
$ and $ U_l\cap\atom{\sigma}\not=\emptyset $ as $l\in \sigma, $ for $ l=i,j,k. $ Since atoms are disjoint, each $ U_l $ contains at least two points. Moreover, as the $ U_l $'s are convex sets in $ \mathbb{R}, $ they must be intervals.
Without loss of generality we may assume that $ U_i $ is open, $ U_j $ is half-open (neither closed nor open) and $ U_k $ is a closed set. We choose any $ a_i,b_i\in \mathbb{R} $ and fix $ U_i=(a_i,b_i). $ Since $ ijk \subseteq \sigma \in \mathcal{C} $ we have $ U_i\cap U_j \cap U_k\not=\emptyset. $ Therefore we choose $ a_j $ such that $ a_i < a_j < b_i.$ Also, as $ \atom{j}= U_j\backslash (U_i \cup U_k) \not= \emptyset, $ we must have $ b_j \in U_i^c. $ We choose $ b_j > b_i $ and construct $ U_j=(a_j,b_j]. $
The construction so far can be seen in Fig.~\ref{figconl}. It remains to construct $ U_k. $ Once again we observe that $ U_k $ must intersect $ U_i\cap U_j, $ and so we choose $ a_k $ such that $ a_j<a_k<b_i. $ As $ \atom{k}= U_k\backslash (U_i \cup U_j) \not= \emptyset,$ we must have $ b_k $ lying in $ (U_i\cup U_j)^c. $ So we choose $ b_k>b_j $ and construct $ U_k=[a_k,b_k]. $ But this gives us $ U_j \subset U_i \cup U_k, $ leaving $ \atom{j}=\emptyset, $ which is a contradiction. Note that we have constructed $ U_j $ and $ U_k $ to the right of $ U_i; $ the proof follows similarly (with minor changes) if the sets are constructed on the left side of $ U_i. $
\end{proof}
\begin{proposition}
Let $ \{i,j,k,\sigma_{ij},\sigma_{ik},\sigma_{jk}\} \subseteq
\mathcal{C}' $ be a code, such that $i,j\in \sigma_{ij}\subset[n] $ and $ k\notin \sigma_{ij}, $ and similarly for $ \sigma_{ik} $ and $ \sigma_{jk}. $ Then $ \mathcal{C}' $ can never be a convex code in dimension $ 1. $ \label{lemcon2}
\end{proposition}
The proof of Proposition \ref{lemcon2} is similar to the proof of Proposition \ref{lemcon}.
\begin{remark}\label{remarksec3}
Thus we have obtained two classes of examples
\begin{align*}
\mathcal{C} \supseteq& \{i,j,k,ijk\} \qquad (i,j,k\in \sigma\subset[n]) \\ \mathcal{C}' \supseteq & \{i,j,k,\sigma_{ij},\sigma_{ik},\sigma_{jk}\} \qquad (\text{as defined above})
\end{align*}
which surely cannot have a convex realization in dimension 1. So, if $ \mathcal{C} $ or $\mathcal{C}' $ has a minimum \textit{open} convex dimension of 2, then $ \mathcal{C} $ or $\mathcal{C}' $ is a supporting class of examples for Conjecture \ref{conMFSM2}. For example
\begin{enumerate}
\item $ \mathcal{C}_1=\{1,2,3,1234\} $ has both open convex and convex minimal dimension equal to 2.
\item $ \mathcal{C}_2=\{1,2,3,124,23,135\} $ has both open convex and convex minimal dimension equal to 2.
\end{enumerate}
\end{remark}
\begin{figure}[]
\begin{center}
\begin{tikzpicture}[scale=7]
\draw[<->, ultra thick] (0,0) -- (1.5,0);
\foreach \x/\xtext in {0.2/$ a_i $,0.4/,0.6/$ a_j $,0.8/,1/$ b_i $,1.2/$ b_j $,1.4/}
\draw[thick] (\x,0.5pt) -- (\x,-0.5pt) node[below] {\xtext};
\draw (0.2,1pt) node[above] {$U_i$};
\draw[{(-)}, ultra thick, green] (0.2,0) -- (1.0,0); \draw (0.6,1pt)node[above] {$U_j$};
\draw[{(-]}, ultra thick, red] (0.6,.0) -- (1.2,0.00);
\end{tikzpicture}
\end{center}
\caption{This figure shows the construction of $ U_i $ and $ U_j $ in the proof of Proposition \ref{lemcon}.} \label{figconl}
\end{figure}
\section{Doublet maximal codes}
A codeword $ \sigma $ is said to be
\textit{maximal} if, as a subset $ \sigma \subset [n], $ it is not contained in any other codeword of $ \mathcal{C}. $ Maximal codewords play an important role, and we will see that the atoms corresponding to them have special properties. The following lemma gives one such property.
\begin{lemma}
Let $ \tau\in \mathcal{C} $ be a maximal codeword, and let $ \mathcal{C} $ have a convex realization $ \mathcal{U}=\{U_1,U_2,\dots,U_n\} $ in $ \mathbb{R}^m, $ i.e., in dimension $ m. $ Then \label{mopen}
\begin{enumerate}
\item $ U_\tau \subseteq \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c $ \label{mopen1}
\item If all $ U_i $'s are open (or closed) in $ \mathbb{R}^m $ (i.e., if $ \mathcal{C} $ is open convex (or closed convex)) then $ \atom{\tau} $ is open (or closed) in $ \mathbb{R}^m $.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Suppose, if possible, that $ U_\tau \not\subseteq \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c. $ Then there exists $ x $ such that $ x\in U_\tau $ and $ x \not \in \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c. $ This implies that $ x \in \displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i.$ Therefore there exists a $ k\not \in \operatorname{supp}(\tau) $ such that $ x \in U_k. $ Define a codeword $ \beta $ such that $ \operatorname{supp}(\beta)= \{i\in [n]\ |\ i\not\in \operatorname{supp}(\tau) \text{ and } x \in U_i\}, $ which is clearly non-empty. Denote $ \alpha= \tau\cup \beta. $ Since $ x\in U_{i} $ for all $ i \in \operatorname{supp}(\beta), $ we have $ x\in U_\beta.$ This implies $x\in U_\tau\cap U_\beta = U_\alpha. $ Also, we have $ x\not\in \bigcup_{i\not \in \operatorname{supp}(\alpha)} U_i, $ as $ \operatorname{supp}(\alpha) $ contains all the $ i $'s such that $ x\in U_i. $ Therefore $ x\in U_\alpha\backslash \displaystyle \bigcup_{i\not \in \operatorname{supp}(\alpha)}U_i= \atom{\alpha}. $ Hence, as $ \atom{\alpha}\not=\emptyset, $ we have $ \tau \subsetneq \alpha \in \mathcal{C}(\mathcal{U})=\mathcal{C}, $ which contradicts the maximality of $ \tau. $ Hence the proof.
\item We know that $ \atom{\tau}= U_\tau \Big\backslash \displaystyle \bigcup_{i\not \in \operatorname{supp}(\tau)}U_i = U_\tau \cap \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c. $ Thus by part (\ref{mopen1}) we have $ \atom{\tau}= U_\tau. $ Since a finite intersection of open (or closed) sets is open (or closed), the proof follows.
\end{enumerate}
\end{proof}
\noindent Next, we work with codes called max-intersection complete. Joshua Cruz et al.\ \cite{cruz2019open} defined max-intersection complete codes as follows.
\begin{definition}[max-intersection complete]
The intersection completion of a code $ \mathcal{C} $ is the code that consists of all non-empty intersections of codewords in $ \mathcal{C}: $ $$\widehat{\mathcal{C}}= \left\{\sigma \big\vert \sigma = \bigcap_{v\in \mathcal{C}'} v \text{ for some non-empty subcode } \mathcal{C}' \subseteq \mathcal{C} \right\}.$$ Denote by $ M(\mathcal{C}) \subseteq \mathcal{C} $ the subcode consisting of all maximal codewords of $ \mathcal{C}. $ A code is said to be \textit{max-intersection complete} if $ \widehat{M(\mathcal{C})} \subseteq \mathcal{C}. $ Note that, if $ M(\mathcal{C})=\{\tau_1^{},\tau_2^{}\},$ then $ \mathcal{C} $ is max-intersection complete if and only if $ \tau_1^{}\cap \tau_2^{} \in \mathcal{C} $ whenever this intersection is non-empty.
\end{definition}
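\noindent This definition can be tested mechanically. Below is a short Python sketch (our own illustration, with codewords encoded as frozensets of neurons) that computes the maximal codewords and checks max-intersection completeness. We run it on the code of Figure \ref{figmip}, which is discussed next.
\begin{verbatim}
from itertools import combinations

def maximal_codewords(code):
    # M(C): codewords not strictly contained in another codeword
    return [s for s in code if not any(s < t for t in code)]

def is_max_intersection_complete(code):
    # every non-empty intersection of maximal codewords must lie in C
    M = maximal_codewords(code)
    for r in range(1, len(M) + 1):
        for subcode in combinations(M, r):
            inter = frozenset.intersection(*subcode)
            if inter and inter not in code:
                return False
    return True

C = {frozenset(s) for s in
     [{3}, {5}, {1,2}, {1,3}, {1,4}, {4,5},
      {1,2,3}, {1,2,4}, {1,4,5}]}
print(is_max_intersection_complete(C))  # False: 123 and 145 meet in {1}
\end{verbatim}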
Joshua Cruz et al.\ \cite{cruz2019open} showed that codes which are max-intersection complete are both open convex and closed convex. They also gave an upper bound for the minimal embedding dimension. We looked at the converse of their theorem in dimension 1, i.e., are all open convex codes of dimension 1 max-intersection complete? We found a code which is open convex in dimension 1 but is not max-intersection complete. The code is described and explained in Figure \ref{figmip}. We observed that having 3 maximal codewords broke the converse, and hence we propose the following theorem.
\begin{figure}[]
\begin{tikzpicture}[scale=5]
\draw[<->, ultra thick] (-0.3,0) -- (2.1,0);
\foreach \x/\xtext in {-0.2/0,0/1,0.2/2,0.4/3,0.6/4,0.8/5,1/6,1.2/7,1.4/8,1.6/9,1.8/10,2.0/11}
\draw[thick] (\x,0.5pt) -- (\x,-0.5pt) node[below] {\xtext};
\draw (0.3,1pt) node[above] {$U_1$};
\draw[{(-)}, ultra thick, green] (0.3,0) -- (1.0,0); \draw (0.4,1pt)node[above] {$U_2$};
\draw[(-), ultra thick, red] (0.4,0) -- (0.7,0);
\draw (0.15,1pt)node[above] {$U_3$};
\draw[(-), ultra thick, blue] (0.15,0) -- (0.5,0);
\draw (0.6,1pt)node[above] {$U_4$};
\draw[(-), ultra thick, yellow] (0.6,.0) -- (1.6,0);
\draw (0.8,1pt)node[above] {$U_5$};
\draw[(-), ultra thick, brown] (0.8,.0) -- (1.8,0);
\draw (0.9,-0.6)node[below] {$ \atom{145} $};
\draw[->, thick, black] (0.9,.0) -- (0.9,-0.6);
\draw (0.75,-0.45)node[below] {$ \atom{14} $};
\draw[->, thick, black] (0.75,.0) -- (0.75,-0.45);
\draw (0.65,-0.6)node[below] {$ \atom{124} $};
\draw[->, thick, black] (0.65,.0) -- (0.65,-0.6);
\draw (0.55,-0.45)node[below] {$ \atom{12} $};
\draw[->, thick, black] (0.55,.0) -- (0.55,-0.45);
\draw (0.45,-0.6)node[below] {$ \atom{123} $};
\draw[->, thick, black] (0.45,.0) -- (0.45,-0.6);
\draw (0.35,-0.45)node[below] {$ \atom{13} $};
\draw[->, thick, black] (0.35,.0) -- (0.35,-0.45);
\draw (0.25,-0.6)node[below] {$ \atom{3} $};
\draw[->, thick, black] (0.25,.0) -- (0.25,-0.6);
\draw (1.3,-0.45)node[below] {$ \atom{45} $};
\draw[->, thick, black] (1.3,.0) -- (1.3,-0.45);
\draw (1.7,-0.45)node[below] {$ \atom{5} $};
\draw[->, thick, black] (1.7,.0) -- (1.7,-0.45);
\end{tikzpicture}
\caption{This figure gives a code $ \mathcal{C}=\mathcal{C}(\mathcal{U})= \{3,5,12,13,14,45,123,124,145\}$ realized by $ \{U_1,U_2,U_3,U_4,U_5\}. $ The code $ \mathcal{C} $ is open convex in dimension 1, and $ \{123,145\} $ are maximal codewords whose intersection is $ 1, $ which does not belong to $ \mathcal{C} $. }
\label{figmip}
\end{figure}
\begin{theorem} \label{tcmip}
Let $ \mathcal{C} $ be a code with only two maximal codewords. Then $ \mathcal{C} $ is open convex if and only if $ \mathcal{C} $ is max-intersection complete.
\end{theorem}
\begin{proof}
Let $ \tau_1^{} $ and $ \tau_2^{} $ be the only maximal codewords of $ \mathcal{C}. $ If $ \sigma= \tau_1^{} \cap \tau_2^{}= \emptyset $ we get $ \widehat{M(\mathcal{C})}=\{\tau_1^{},\tau_2^{}\} \subseteq \mathcal{C}, $ and hence the code $ \mathcal{C} $ trivially satisfies the required condition. Therefore we assume $ \sigma\not=\emptyset. $ Let $ \mathcal{U}=\{U_i\}_{i\in[n]}^{} $ be a collection of open convex sets in $ \mathbb{R}^m $ such that $ \mathcal{C}(\mathcal{U})=\mathcal{C}. $ Now, we need to show $ \sigma \in \mathcal{C}. $ Suppose not; then as $ \sigma \not \in \mathcal{C}= \mathcal{C}(\mathcal{U}), $ we get $ \atom{\sigma}=\emptyset \implies \displaystyle\bigcap_{i\in \operatorname{supp}(\sigma)} U_i \Big \backslash \displaystyle\bigcup_{j\not\in \operatorname{supp}(\sigma) } U_j= \emptyset \implies \ U_\sigma^{}=\displaystyle\bigcap_{i\in \operatorname{supp}(\sigma)} U_i \subseteq \displaystyle\bigcup_{j\not\in \operatorname{supp}(\sigma) } U_j $ (note that $ U_\sigma \not = \emptyset $ as $ U_{\tau_{1}^{}},U_{\tau_{2}^{}} \subseteq U_\sigma $). By Lemma \ref{mopen} we know that $ \atom{\tau_i^{}}= U_{\tau_i^{}} $ for $ i=1,2. $ Moreover, we will show that $ U_{\tau_1^{}} $ and $ U_{\tau_2^{}} $ form a separation\footnote{A separation of $ X $ is a pair $ U,V $ of disjoint nonempty open subsets of $ X $ whose union is $ X $. The space $ X $ is not connected if there exists a separation. } of $ U_\sigma. $
As $ \tau_1^{},\tau_2^{} \in \mathcal{C}=\mathcal{C}(\mathcal{U}), $ we have $ U_{\tau_1^{}} \not = \emptyset \not = U_{\tau_2^{}}. $ Also, as atoms are disjoint regions, we have $ U_{\tau_1^{}} \cap U_{\tau_2^{}}=\emptyset. $ From Lemma \ref{mopen} we know that the atoms of maximal codewords are open in $ \mathbb{R}^m $. Also, $ U_{\tau_i^{}}= U_{\tau_i^{}} \cap U_\sigma $ is open in $ U_\sigma, $ for $ i=1,2. $ Now it is only left for us to prove that $ U_\sigma= U_{\tau_1^{}} \cup U_{\tau_2^{}}. $ We can observe that $ U_{\tau_1^{}} =\displaystyle\bigcap_{j\in \operatorname{supp}(\tau_1^{})} U_j= \displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j \quad \cap \displaystyle\bigcap_{\substack{{j\in \operatorname{supp}(\tau_1^{}) }\\ { j \not \in \operatorname{supp}(\sigma) } }}U_j:= U_{\sigma} \cap U_{\tau_1^{}\backslash \sigma}. $ Similarly we get $ U_{\tau_2^{}}= U_{\sigma} \cap U_{\tau_2^{}\backslash \sigma}.$ Consider $ U_{\tau_1^{}} \cup U_{\tau_2^{}} = \left\{U_{\sigma} \cap U_{\tau_1^{}\backslash \sigma}\right\} \cup \left\{U_{\sigma} \cap U_{\tau_2^{}\backslash \sigma}\right\}= U_\sigma \cap \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\}. $
\begin{claim}
$ U_\sigma \subseteq \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\}. $ Suppose not; then there exists an $ x \in U_\sigma $ such that $ x\not\in \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\},$ i.e., $ x\not \in U_{\tau_1^{}\backslash \sigma} \text{ and } x\not \in U_{\tau_2^{}\backslash \sigma}. $ Also, we have $ U_\sigma \subseteq \displaystyle\bigcup_{j\not\in \operatorname{supp}(\sigma) } U_j.$ This implies that there exists some $ k\not \in \operatorname{supp}(\sigma) $ such that $ x\in U_k, $ i.e., $ k \not \in \operatorname{supp}(\tau_{1}^{}) \cup \operatorname{supp}(\tau^{}_{2}) $ and $ x\in U_k. $ Let us define a codeword $ \beta $ such that $ \operatorname{supp}(\beta)= \{i\in [n]\ |\ i\not\in \operatorname{supp}(\sigma) \text{ and } x \in U_i\}. $ Clearly $ \beta \not = \emptyset. $ Denote $ \alpha= \sigma\cup \beta. $ Then we get $ x \in \atom{\alpha} $ by working on similar lines as in the proof of part 1 of Lemma \ref{mopen}. This implies $ \atom{\alpha}\not =\emptyset $ and $ \alpha \in \mathcal{C}(\mathcal{U})= \mathcal{C}. $ Since $ \beta \cap \tau_1^{} =\emptyset $ and $ \beta \cap \tau_2^{} =\emptyset, $ we have $ \alpha \not \subseteq \tau_1^{} $ and $ \alpha \not \subseteq \tau_2^{}. $ Therefore either $ \alpha $ is a maximal codeword in $ \mathcal{C} $ or there exists a maximal codeword containing $ \alpha $ which is different from $ \tau_1^{} $ and $ \tau_2^{}. $ This contradicts the hypothesis that $ \mathcal{C} $ has only two maximal codewords, $ \tau_1^{} $ and $ \tau_2^{} $. Hence the supposition is wrong, implying $ U_\sigma \subseteq \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\}. $
\end{claim}
Now, by the claim, the equation $ U_{\tau_1^{}} \cup U_{\tau_2^{}}= U_\sigma \cap \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\} $ becomes $ U_{\tau_1^{}} \cup U_{\tau_2^{}}= U_\sigma. $ This means that $ U_{\tau_1^{}} $ and $ U_{\tau_2^{}} $ form a separation of $ U_\sigma. $ But $ U_\sigma $ is an intersection of convex sets and is therefore convex, hence connected, so it cannot have a separation. Thus we must have $ \sigma\in \mathcal{C}(\mathcal{U})= \mathcal{C}. $
\end{proof}
\begin{remark}
The above theorem holds when open convex is replaced by closed convex, as the proof changes only in that the separation is obtained by closed sets instead of open ones.
\end{remark}
\begin{eg}
Consider the sets $ \mathcal{U}= \{U_1,U_2,U_3,U_4,U_5,U_6\} $ in $ \mathbb{R} $ as in Figure \ref{ex2}. Let $ \mathcal{C}= \mathcal{C}(\mathcal{U})=\{2,4,12,23,45,46\}. $ The code $ \mathcal{C} $ has 4 maximal codewords, and it is both max-intersection complete and open convex. The interesting fact is that one can break the code into $ \mathcal{C}=\mathcal{C}_1 \sqcup\ \mathcal{C}_2, $ where $ \mathcal{C}_1= \{2,12,23\} $ and $ \mathcal{C}_2=\{4,45,46\}. $ The codes $ \mathcal{C}_1 $ and $ \mathcal{C}_2 $ satisfy the hypothesis of Theorem \ref{tcmip}. This leads us to define a new class of codes called doublet maximal codes.
\end{eg}
\begin{figure}[H]
\begin{tikzpicture}[scale=5]
\draw[<->, ultra thick] (-0.3,0) -- (2.1,0);
\foreach \x/\xtext in {-0.2/0,0/1,0.2/2,0.4/3,0.6/4,0.8/5,1/6,1.2/7,1.4/8,1.6/9,1.8/10,2.0/11}
\draw[thick] (\x,0.5pt) -- (\x,-0.5pt) node[below] {\xtext};
\draw (0.5,-3.7pt) node[below] {$U_2$};
\draw[{(-)}, ultra thick, green] (0.3,-0.13) -- (0.8,-0.13); \draw (0.4,1pt)node[above] {$U_1$};
\draw[(-), ultra thick, red] (0.3,0) -- (0.5,0);
\draw (0.8,1pt)node[above] {$U_3$};
\draw[(-), ultra thick, blue] (0.5,0) -- (0.8,0);
\draw (0.96,-6pt)node[above] {$U_4$};
\draw[(-), ultra thick, yellow] (1,-0.13) -- (1.8,-0.13);
\draw (1,1pt)node[above] {$U_5$};
\draw[(-), ultra thick, brown] (1,.0) -- (1.3,0);
\draw (1.8,1pt)node[above] {$U_6$};
\draw[(-), ultra thick, blue] (1.3,.0) -- (1.8,0);
\draw (1.1,0.42)node[below] {$ \atom{45} $};
\draw[->, thick, black] (1.1,.0) -- (1.1,0.3);
\draw (0.65,0.42)node[below] {$ \atom{23} $};
\draw[->, thick, black] (0.65,.0) -- (0.65,0.3);
\draw (0.5,0.42)node[below] {$ \atom{2} $};
\draw[->, thick, black] (0.5,.0) -- (0.5,0.3);
\draw (0.35,0.42)node[below] {$ \atom{12} $};
\draw[->, thick, black] (0.35,.0) -- (0.35,0.3);
\draw (1.3,0.42)node[below] {$ \atom{4} $};
\draw[->, thick, black] (1.3,.0) -- (1.3,0.3);
\draw (1.62,0.42)node[below] {$ \atom{46} $};
\draw[->, thick, black] (1.62,.0) -- (1.62,0.3);
\end{tikzpicture}
\caption{This figure gives a code $ \mathcal{C}= \{2,4,12,23,45,46\}.$ }
\label{ex2}
\end{figure}
\begin{definition}[Doublet maximal codes]
A code $ \mathcal{C} $ is called a \textit{doublet maximal code} if $M(\mathcal{C})=\{\tau_i^{}\}_{i \in [p]}, $ the set of all maximal codewords of $ \mathcal{C}, $ has the property that for every $ i\in [p] $ there exists at most one $ j\not= i $ such that $ \tau_i\cap \tau_j\not=\emptyset. $
\end{definition}
\begin{eg}
\begin{enumerate}
\item Let $ \mathcal{C}_1 =\{2,4,12,23,45,46\}.$ This is a doublet maximal code with two pairs of maximal codewords, $ \{12,23\} $ and $ \{45,46\}. $
\item Let $ \mathcal{C}_2=\{2,4,12,23\}. $ This is a doublet maximal code with one pair, $ \{12,23\}, $ and one singleton, $ \{4\}, $ as maximal codewords.
\item Let $ \mathcal{C}_3 =\{3,5,12,13,14,45,123,124,145\}. $ This is a non-example: it has 3 maximal codewords with all pairwise intersections non-empty. Also, from Figure \ref{figmip} we can see that this code is not max-intersection complete.
\end{enumerate}
\end{eg}
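\noindent The doublet condition is likewise easy to test by machine. The following Python sketch (an illustration, using the same frozenset encoding as before) checks the three examples above.
\begin{verbatim}
def is_doublet_maximal(code):
    # every maximal codeword may meet at most one other maximal codeword
    M = [s for s in code if not any(s < t for t in code)]
    for s in M:
        if sum(1 for t in M if t != s and s & t) > 1:
            return False
    return True

as_sets = lambda words: {frozenset(int(ch) for ch in w) for w in words}
C1 = as_sets(["2", "4", "12", "23", "45", "46"])
C2 = as_sets(["2", "4", "12", "23"])
C3 = as_sets(["3", "5", "12", "13", "14", "45", "123", "124", "145"])
print(is_doublet_maximal(C1), is_doublet_maximal(C2),
      is_doublet_maximal(C3))  # True True False
\end{verbatim}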
\begin{theorem} \label{tdmc}
Let $ \mathcal{C} $ be a doublet maximal code. Then $ \mathcal{C} $ is open (or closed) convex if and only if $ \mathcal{C} $ is max-intersection complete.
\end{theorem}
Let $ M(\mathcal{C})_2 $ be the set of all pairs of maximal codewords whose intersection is non-empty. The proof of Theorem \ref{tdmc} is then obtained by applying Theorem \ref{tcmip} iteratively over $ M(\mathcal{C})_2. $
\section{Neural ring homomorphisms and max-intersection complete codes}
\subsection{Background and Preliminaries}
In this section we consider a codeword of a code $ \mathcal{C} $ on $ n $ neurons in its binary form. That is, if $ c\in \mathcal{C} $ then $ c=(c_1^{}c_2^{}\cdots c_n^{}), $ where $ c_i\in\{0,1\}. $ This is the same as viewing $ \mathcal{C}\subset\{0,1\}^n. $ Carina Curto and Nora Youngs \cite{curto2020neural} gave a description of neural ring homomorphisms as follows.
\begin{definition}
Let $ \mathcal{C} \subset \{0,1\}^n $ and $ \mathcal{D}\subset \{0,1\}^m $ be neural codes, and let $ \ring{\mathcal{C}}= \mathbb{F}_2[y_1,\dots,y_n]/I_{\mathcal{C}} $ and $ \ring{\mathcal{D}}= \mathbb{F}_2[x_1,\dots,x_m]/I_{\mathcal{D}} $ be the corresponding neural rings. A ring homomorphism $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}} $ is a neural ring homomorphism if $ \phi(x_j)\in\{y_i\vert i \in [n]\} \cup \{0,1\} $ for all $ j\in [m],$ where $ x_j=\displaystyle\sum_{\{d\in\mathcal{D}|d_j=1\}} \rho_d $. We say that a neural ring homomorphism $ \phi $ is a neural ring isomorphism if it is a ring isomorphism and its inverse is also a neural ring homomorphism.
\end{definition}
In the beginning of their paper \cite{curto2020neural}, Curto and Youngs discuss ring homomorphisms between two neural rings. They proved that there is a 1-1 correspondence between code maps $ q:\mathcal{C}\rightarrow \mathcal{D} $ and ring homomorphisms $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}}. $ The code map associated with the ring homomorphism $ \phi $ is usually denoted by $ q_\phi. $ The authors then classify all the neural ring homomorphisms using the following theorem:
\begin{theorem}\cite[Theorem 3.4]{curto2020neural} \label{thmnycc}
A map $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}} $ is a neural ring homomorphism if and only if $ q_\phi $ is a composition of the following elementary code maps:
\begin{enumerate}
\item Permutation
\item Adding a trivial neuron (or deleting a trivial neuron)
\item Duplication of a neuron (or deleting a neuron that is a duplicate of another)
\item Neuron projection (or deleting a not necessarily trivial neuron)
\item Inclusion (of one code into another)
\end{enumerate}
Moreover, $ \phi $ is a neural ring isomorphism if and only if $ q_\phi $ is a composition of maps $ (1)-(3). $
\end{theorem}
Lastly, Curto and Youngs \cite{curto2020neural} bridge the idea of codes being open convex and neural ring homomorphisms using the following theorem
\begin{theorem}\cite[Theorem 4.3]{curto2020neural} \label{thnrh}
Let $ \mathcal{C} $ be a code containing the all-zeros codeword and $ q:\mathcal{C} \rightarrow\mathcal{D} $ a surjective code map corresponding to a neural ring homomorphism. Then if $ \mathcal{C} $ is convex (open convex), $ \mathcal{D} $ is also convex (open convex) with $ d(\mathcal{D}) \leq d(\mathcal{C}).$
\end{theorem}
\begin{remark}
Curto and Youngs \cite{curto2020neural} proved the above theorem for open convex codes. If the code were closed convex, a similar theorem would hold with minor changes to the proof.
\end{remark}
\subsection{Main Theorem}
Now we will connect neural ring homomorphisms with the max-intersection complete property. For the remainder of the section we assume that $ \mathcal{C} $ is a code on $ n $ neurons and that $ \mathcal{D} $ is a code whose number of neurons is specified if and when required.
\begin{obs} \label{obssig}
Let $ q:\mathcal{C} \rightarrow \mathcal{D} $ be a code map corresponding to a given neural ring homomorphism $ \phi:\ring{\mathcal{D}}\to \ring{\mathcal{C}}. $ If $ \sigma_i^{} \subset \sigma_j^{} $ in $ \mathcal{C} $ then $ q(\sigma_i^{})\subset q(\sigma_j^{}) $ in $ \mathcal{D}. $
By Theorem \ref{thmnycc} we know that there are only 5 possibilities for a code map associated to a neural ring homomorphism. The observation is fairly computational and can be obtained by applying an arbitrary codeword (say $ \sigma =\sigma_1^{}\sigma_2^{}\cdots\sigma_n^{} $) to all 5 maps. Further, in this section it becomes easier to see a neural code in $ 0 $'s and $ 1 $'s, i.e., if $ \sigma=12 $ and $ n=3 $ we write $ \sigma=110. $ Basically, we express the support of $ \sigma. $ A sketch of these elementary maps is given below.
\end{obs}
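\noindent As a hedged illustration of this observation, the five elementary code maps can be written out on binary strings as follows (Python; the choices of which neuron is permuted, duplicated or deleted are our own). Each map visibly preserves containment of supports.
\begin{verbatim}
def permute(word, p):        # p: permutation of the indices 0..n-1
    return "".join(word[p[i]] for i in range(len(word)))

def add_trivial(word, bit):  # append an always-firing (1) or silent (0) neuron
    return word + bit

def duplicate(word, j):      # append a copy of neuron j
    return word + word[j]

def project(word):           # delete the last (not necessarily trivial) neuron
    return word[:-1]

# Inclusion simply sends a codeword to itself inside a larger code.
w = "110"
print(permute(w, [2, 0, 1]), add_trivial(w, "1"),
      duplicate(w, 0), project(w))   # 011 1101 1101 11
\end{verbatim}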
\begin{lemma}
Let $ q:\mathcal{C} \rightarrow \mathcal{D} $ be either a permutation, or adding/ deleting a trivial or duplicate neuron, then $\tau \in \mathcal{C} $ is a maximal element if and only if $ q(\tau) \in \mathcal{D} $ is a maximal element. \label{maxiso}
\end{lemma}
\begin{proof}
If $ q $ is either a permutation or adding/deleting a trivial or duplicate neuron, then the corresponding neural ring homomorphism is an isomorphism. This implies that $ q $ is a bijection \cite[Proposition 2.3]{curto2020neural}.
Let $ \tau\in \mathcal{C} $ be a maximal element. Suppose, if possible, that $ q(\tau) $ is not a maximal element in $ \mathcal{D} $. Then there exists $ q(\tau_1^{})\in \mathcal{D} $ such that $ q(\tau)\subsetneq q(\tau_1^{}). $ This implies $ \tau \subsetneq \tau_1^{} $ using Observation \ref{obssig}, in view of $ q $ being a bijection. This is a contradiction to the fact that $ \tau $ is a maximal element in $ \mathcal{C}. $
Conversely, if $ q(\tau) $ is a maximal element in $ \mathcal{D}, $ then one can show that $ \tau $ is a maximal element in $ \mathcal{C} $ using $ q^{-1} $ and the idea from the necessary part of the proof. This works because $ q^{-1} $ is again either a permutation or adding/deleting a trivial or duplicate neuron, and so fits the hypothesis of the necessary condition.
\end{proof}
\begin{lemma}
Let $ q: \mathcal{C} \rightarrow \mathcal{D} $ be a projection. If $ \sigma \in \mathcal{D} $ is a maximal element then there exists a maximal element $ \tau \in \mathcal{C} $ such that $ q(\tau)= \sigma. $ \label{maxpro}
\end{lemma}
\begin{proof}
Let $ \sigma= \sigma_1^{}\sigma_2^{}\cdots\sigma_{n-1}^{}. $ Since $ q $ is a projection map, we know that $ q $ is surjective. Therefore there exists $ \tau \in \mathcal{C} $ such that $ q(\tau)= \sigma. $ Moreover, we know the choices of $ \tau $ precisely: it can only be $ \sigma $ followed by $ 1 $ or $ 0. $ Label $ \tau_0^{}:= \sigma_1^{}\sigma_2^{}\cdots\sigma_{n-1}^{}0$ and $ \tau_1^{}:=\sigma_1^{}\sigma_2^{}\cdots\sigma_{n-1}^{}1. $ Note that $ \mathcal{C} $ can contain either $ \tau_0^{} $ or $ \tau_1^{}, $ or both, so we have 3 cases. It is clear that $ \tau_0^{} \subset \tau_1^{}; $ therefore the case in which both $ \tau_0^{} $ and $ \tau_1^{} $ exist is subsumed by the first case below.
\begin{caseof}
\casea{$\tau_1^{} \in \mathcal{C}$}{In this case we claim $ \tau_1^{} $ is a maximal element in $ \mathcal{C}. $ Suppose not; then there exists a $ \tau_2^{}\in \mathcal{C} $ such that $ \tau_1^{}\subsetneq \tau_2^{}$. By Observation \ref{obssig} we have $ q(\tau_1^{})\subset q(\tau_2^{}). $ But as $ \sigma=q(\tau_1^{}) $ is a maximal element in $ \mathcal{D}, $ we get $ q(\tau_1^{}) = q(\tau_2^{}). $ This implies $ \tau_2^{}=\tau_1^{} $ or $ \tau_2^{}=\tau_0^{}, $ which is a contradiction since $ \tau_1^{} \subsetneq \tau_2^{} $ and $ \tau_0^{}\subset \tau_1^{} $.}
\casea{$ \tau_1^{}\not\in \mathcal{C} $}{In this case we claim that $ \tau_0^{} $ is maximal element and the proof is similar to the previous case.}
\noindent Hence the proof.
\vspace{-0.7cm}
\end{caseof}
\end{proof}
\begin{remark}
\begin{enumerate}
\item The converse of Lemma \ref{maxpro} need not hold. For example, consider the code\linebreak $ \mathcal{C}=\{100,010,001,011,101,110\} $ and project the code to get $ \mathcal{D}= \{00,10,01,11\} . $ Clearly, $ 011 \in \mathcal{C} $ is a maximal codeword but $ q(011)=01 \subset 11. $ This implies that it is no longer maximal after projection.
\item Let $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ be two codewords and $ \tau_3^{}= \tau_1^{}\cap \tau_2^{}. $ For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{} $ then we observe that $ \tau_3 $ is given as: $ \tau_{3j}^{} = \begin{cases}
1\qquad &\text{ if } \tau_{1j}^{}=\tau_{2j}^{}=1 \\
0\qquad & \text{ otherwise}
\end{cases}. $
\end{enumerate}
\end{remark}
\begin{theorem}
Let $ q:\mathcal{C} \rightarrow\mathcal{D} $ be a surjective code map corresponding to a neural ring homomorphism. Then if $ \mathcal{C} $ is max-intersection complete, so is $ \mathcal{D}. $ \label{thmic}
\end{theorem}
\begin{proof}
Since $ q $ is surjective, by Theorem \ref{thmnycc} the code map will be a composition of permutations, adding/deleting a trivial or duplicate neuron, and projections. So, it is sufficient to treat each of them independently and prove the above statement.
\noindent Let $\sigma_1^{}, \sigma_2^{} \in \mathcal{D} $ be maximal elements; we need to show that $ \sigma_1^{} \cap \sigma_2^{}\in \mathcal{D}. $
\paragraph{Permutation:} As $ q $ is a bijection, there exists a unique $ \tau_i^{} \in \mathcal{C} $ such that $ \sigma_i^{}=q(\tau_i^{}), \text{ for } i=1,2 . $ By Lemma \ref{maxiso}, $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ are maximal elements. This implies by hypothesis $\tau_3^{}= \tau_1^{}\cap \tau_2^{} \in \mathcal{C}. $ Let $ p\in S_n$ be the permutation corresponding to $ q. $ For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{}. $ Then for $ i=1,2 $ we have $ \sigma_i^{}= \tau_{ip(1)}^{}\tau_{ip(2)}^{}\cdots\tau_{ip(n)}^{}. $ Then let $ q(\tau_1^{})\cap q(\tau_2^{})=\sigma_1^{} \cap \sigma_2^{}: = \gamma = \gamma_1\gamma_2\cdots\gamma_n; \text{ where } \gamma_j= \begin{cases}
1\qquad &\text{ if } \sigma_{1j}^{}=\sigma_{2j}^{}=1 \\
0\qquad & \text{ otherwise}
\end{cases} = \begin{cases}
1\qquad &\text{ if } \tau_{1p(j)}^{}=\tau_{2p(j)}^{}=1 \\
0\qquad & \text{ otherwise}
\end{cases} = \tau_{3p(j)}. $\\ This implies $\gamma= \tau_{3p(1)}^{}\tau_{3p(2)}^{}\cdots\tau_{3p(n)}^{}= q(\tau_3^{})\in \mathcal{D}. $
\paragraph{Adding a trivial or duplicate neuron:}As $ q $ is a bijection, there exists unique $ \tau_i^{} \in \mathcal{C} $ such that $ \sigma_i^{}=q(\tau_i^{}), \text{ for } i=1,2 . $ By Lemma \ref{maxiso}, $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ are maximal elements. This implies by hypothesis $\tau_3^{}= \tau_1^{}\cap \tau_2^{} \in \mathcal{C}. $ For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{}. $ Then for $ i=1,2 $ we have $ \sigma_i= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{}d$ where $ d=0,1 $ or $ d=\tau_{ij} $ depending upon the map $ q. $ It is clear that $ \sigma_1^{} \cap \sigma_2^{}= \tau_{31}^{}\tau_{32}^{}\cdots\tau_{3n}^{}d=q(\tau_3^{}) \in \mathcal{D}.$
\paragraph{Deleting a trivial or duplicate neuron:}
As $ q $ is a bijection, there exists unique $ \tau_i^{} \in \mathcal{C} $ such that $ \sigma_i^{}=q(\tau_i^{}), \text{ for } i=1,2 . $ By Lemma \ref{maxiso}, $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ are maximal elements. This implies by hypothesis $\tau_3^{}= \tau_1^{}\cap \tau_2^{} \in \mathcal{C}. $ For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in-1}^{}d $, where $ d=0,1 $ or $ d=\tau_{ij} $ depending upon the map $ q. $ Then for $ i=1,2 $ we have $ \sigma_i= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in-1}^{}$. It is clear that $ \sigma_1^{} \cap \sigma_2^{}= \tau_{31}^{}\tau_{32}^{}\cdots\tau_{3n-1}^{}=q(\tau_3^{}) \in \mathcal{D}.$
\paragraph{Projection:} We extend the idea from deleting a trivial or duplicate neuron, in view of Lemma \ref{maxpro}: if $ \sigma_1^{} $ and $ \sigma_2^{} $ are maximal codewords in $ \mathcal{D}, $ there exist maximal codewords $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ such that $ q(\tau_1^{})=\sigma_1^{} $ and $ q(\tau_2^{})=\sigma_2^{}. $ The rest follows as above.
Hence the proof.
\end{proof}
\begin{remark}
The converse of Theorem \ref{thmic} need not be true. For example, consider the codes $ \mathcal{C}=\{100,010,001\} $ and $ \mathcal{D}=\{00,10,01\}. $ Consider the projection map $ q:\mathcal{C} \rightarrow \mathcal{D}, 100\mapsto 10, 010 \mapsto 01 \text{ and } 001 \mapsto 00. $ This map $ q $ satisfies the hypothesis of the converse, but $ \mathcal{C} $ is not max-intersection complete.
\noindent This leads us to think that the converse will hold when the code map corresponds to a neural ring isomorphism.
\end{remark}
\begin{corollary}
Let $ q:\mathcal{C}\rightarrow \mathcal{D} $ be a code map corresponding to a neural ring isomorphism. Then $ \mathcal{C} $ is max-intersection complete if and only if $ \mathcal{D} $ is max-intersection complete.
\end{corollary}
\section{Introduction}
The brain communicates with us by firing neurons on and off in response to a stimulus space; we call the resulting firing patterns binary codes or neural codes. Figuring out how the brain works amounts, in part, to understanding neural codes. The neuroscientist John O'Keefe discovered and worked with a type of neuron called a place cell, and this was the motivation for the study of neural codes. An area in the stimulus space is said to be a receptive field if it is the area of the visual field that causes a response in the cell. Given a receptive field one can obtain the neural code that represents it. So the question that naturally occurs is: can we get a receptive field when given a binary code? If so, the region or the sets in the stimulus space which give the receptive field are called a realization of the code. The sets in the receptive field are referred to as receptive cells. Before we go further, we formally define a code and its realization as follows.
\begin{definition}[Binary code] \cite[Definition 1]{franke2018every}
A \textit{binary code (or neural code)} on $ n $ neurons is a collection $ \mathcal{C} $ of subsets of the set $ [n] =\{1,2,3,\dots,n\}.$ The elements of $ \mathcal{C} $ are called codewords.
\end{definition}
For a codeword $ \sigma=\sigma_1^{}\sigma_2^{}\dots\sigma_n^{} $ the set $ \operatorname{supp}(\sigma) :=\left\{i\in \left[n\right] \vert\ \sigma_i^{}=1\right\} $ is called the support of $ \sigma. $
\begin{definition}
Let $ \mathcal{U}=\{U_1,U_2,\dots,U_n\} $ be a collection of sets in some stimulus space $X \subseteq\mathbb{R}^k$ and $ \mathcal{C} $ be a neural code on $ n $ neurons. Define $ \mathcal{C}(\mathcal{U})=\left\{\sigma\subseteq [n] \ \Big\vert\ \displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j \backslash \displaystyle\bigcup_{i\not\in \operatorname{supp}(\sigma) } U_i \not= \emptyset \right\}. $ We say that $ \mathcal{U} $ is a realization of $ \mathcal{C} $ if $ \mathcal{C}=\mathcal{C}(\mathcal{U}). $
We call $ \atom{\sigma}=\displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j \backslash \displaystyle\bigcup_{i\not\in \operatorname{supp}(\sigma) } U_i $ the atom of the codeword $ \sigma $ with respect to $ \mathcal{U}, $ and we denote $ U_\sigma :=\displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j $ throughout this paper. Also, we fix $ U_\emptyset= X. $
\end{definition}
The realization $ \mathcal{U} $ of a code $ \mathcal{C} $ is given a name according to the topological properties of the sets in $ \mathcal{U}. $ For example, the realization is called open convex if all the sets are both convex and open in the stimulus space. Let all the sets of $ \mathcal{U} $ be in $ \mathbb{R}^k $ with some fixed topological property (open, closed, convex, etc). If there exists no other collection $ \mathcal{U}' $ in $ \mathbb{R}^l\ (l <k) $ with the same topological properties as $ \mathcal{U} $ such that $ \mathcal{C}(\mathcal{U}')=\mathcal{C}, $ then $ k $ is said to be the minimal dimension in which $ \mathcal{C} $ can be realized with respect to that topological property. The receptive cells that we find in nature are typically open convex sets. So, the natural question is to see which neural codes are open convex or closed convex. Megan K Franke and Samuel Muthiah \cite{franke2018every} proved, and also gave an algorithm to show, that every code is convex realizable. Joshua Cruz et al.\ \cite{cruz2019open} showed that codes with the max-intersection complete property\footnote{ A code $ \mathcal{C} $ is said to be max-intersection complete if $ \mathcal{C} $ contains all non-empty intersections of its maximal codewords} are both open convex and closed convex. They also gave an upper bound for the minimal embedding dimension.
In 2013 Carina Curto et al.\ \cite{curto2013neural} explored this topic in an algebraic sense. They defined a ring structure called the neural ring ($ \ring{\mathcal{C}} $) for a given code $ \mathcal{C}, $ as $ \mathbb{F}_2[x_1,x_2,\dots,x_n]/I_\mathcal{C} $ where $ I_\mathcal{C}=\{f\in\mathbb{F}_2[x_1,x_2,\dots,x_n] | f(c)=0 \text{ for all } c\in \mathcal{C}\}. $ For any codeword $ c$ the characteristic function $ p_c $ \footnote{ The characteristic function is given by $p_c(v)= \begin{cases}
1 & \text{ if } v=c \\ 0 & \text{ otherwise}
\end{cases} $} has $ \underset{c_i=1}{\Pi}x_i\underset{c_j=0}{\Pi}(1-x_j) $ as its polynomial form. They further defined the neural ideal $ J_\mathcal{C} =\langle \{p_c| c\not \in \mathcal{C}\}\rangle, $ which is closely related to the Stanley-Reisner ideal \cite{miller2004combinatorial}. Later they defined a canonical form for the neural ideal and gave an algorithm to find it. Ethan Petersen et al.\ \cite{petersen2018neural} worked on algorithms for canonical forms of $ J_\mathcal{C}: $ they gave a SageMath package which contains several algorithms related to the canonical form of neural ideals, along with an explicit algorithm which updates a given canonical form after adding another codeword to the code $ \mathcal{C}. $
Curto and Youngs \cite{curto2020neural} discuss ring homomorphisms between two neural rings. They proved that there is a 1-1 correspondence between code maps $ q:\mathcal{C}\rightarrow \mathcal{D} $ and ring homomorphisms $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}}. $ The map $ q $ associated with the ring homomorphism $ \phi $ is usually denoted by $ q_\phi$ and is called the associated code map. Also, they showed that $ \ring{\mathcal{C}} \cong \ring{\mathcal{D}} $ if and only if $ |\mathcal{C}|=|\mathcal{D}|. $ That means the neural ring loses information about the codewords present in the code and only retains the cardinality of the code. This led Curto and Youngs \cite{curto2020neural} to restrict the class of ring homomorphisms. The new class is called neural ring homomorphisms, and it depends on the codewords. The authors also gave a way to determine whether a given ring homomorphism $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}} $ is a neural ring homomorphism depending on how the associated code map behaves. Lastly, Curto and Youngs \cite{curto2020neural} bridge the idea of codes being open convex and neural ring homomorphisms.
As mentioned before, Carina Curto et al.\ \cite{curto2013neural} defined the neural ideal. This ideal was further explored by A Jeffs, M Omar and N Youngs \cite{jeffs2018homomorphisms}. They determined all ring homomorphisms $ \mathbb{F}_2[y_1,\dots,y_n] \rightarrow \mathbb{F}_2[x_1,\dots,x_m] $ that preserve neural ideals, and showed that only specific code maps satisfy this condition. Later they gave a description of how these neural-ideal-preserving maps realize the codes.
This paper is structured as follows. In section 2 our main result is Theorem \ref{mainth2}: we state and prove that the classes of open convex and closed convex codes coincide in dimension 1. For dimension 2 we work with a conjecture given by Megan K Franke and Samuel Muthiah \cite{franke2018every}, which states that a code that is open convex with minimal dimension 2 is convex with minimal dimension 2. We provide a few classes of examples in Proposition \ref{lemcon} and Remark \ref{remarksec3} which satisfy the conjecture. We introduce a new class of codes called doublet maximal codes in section 3. In Theorem \ref{tdmc} we see that, for doublet maximal codes, open convex and max-intersection complete are the same class of codes. In section 4 we relate two codes being max-intersection complete via a code map between them in Theorem \ref{thmic}. In the last section we take on the task of counting neural ring endomorphisms on a special class of codes which we call circulant codes; we count neural ring endomorphisms for many codes in this class.
\section{Counting Neural ring endomorphisms}
In this section we work on counting neural ring endomorphisms of a code $ \mathcal{C}. $ We restrict the code $ \mathcal{C} $ to be on $ n $ neurons with $ |\mathcal{C}|=n. $ Denote by $ \nrh{\mathcal{C}} $ the collection of all neural ring endomorphisms on $ \ring{\mathcal{C}}. $ Before we proceed, let us observe a relation between $\nrh{\mathcal{C}} $ and $ \nrh{\mathcal{C}'} $ when $ \mathcal{C}' $ is obtained from $ \mathcal{C}. $
\begin{obs} \label{obsnrh}
Let $ \mathcal{C}$ be a code on $ n $ neurons, and let $ \mathcal{C}' $ be the code obtained from $ \mathcal{C} $ after applying any of the elementary code maps (1) to (3) of Theorem \ref{thmnycc}. We observe that $ \nrh{\mathcal{C}} $ has a monoid structure with composition as the binary operation, and that there is a one-one correspondence between $ \nrh{\mathcal{C}} $ and $ \nrh{\mathcal{C}'}. $
Let $ q_\alpha: \mathcal{C}\rightarrow \mathcal{C}'$ be any such elementary code map. Then by Theorem \ref{thmnycc} we have that the corresponding neural ring homomorphism $ \alpha: \ring{\mathcal{C}'}\to \ring{\mathcal{C}} $ is a neural ring isomorphism. Define the correspondence by:
\begin{align*}
\Phi: \nrh{\mathcal{C}}& \rightarrow \nrh{\mathcal{C}'}\\
\phi& \mapsto \alpha^{-1}\circ \phi \circ \alpha
\end{align*}
This map $ \Phi $ is well defined, as a composition of neural ring homomorphisms is again a neural ring homomorphism. We can easily observe that $ \Phi $ is a bijection. Therefore we have $ \vert \nrh{\mathcal{C}} \vert = \vert \nrh{\mathcal{C}'} \vert.$ Moreover, $ \Phi $ is a monoid isomorphism since it preserves composition and identity.
\end{obs}
\subsection{Neural ring homomorphisms on special codes}
Let $ \mathcal{C} =\{c_1^{},c_2^{},\dots,c_n^{}\} $ be a code on $ n $ neurons, with $ c_i^{}=(c_{i1}^{}c_{i2}^{}\cdots c_{in}^{}) $ the binary representation of the codeword $ c_i^{}, $ where $ c_{ij}^{}\in\{0,1\}. $ Denote by $ \rh{\mathcal{C}} $ the collection of all ring homomorphisms from $ \ring{\mathcal{C}} $ into itself. In this section we first define 3 different categories of maps that are present in $ \rh{\mathcal{C}}, $ obtained using basic properties of ring homomorphisms. Firstly, we know that the ring $ \ring{\mathcal{C}} $ can be seen as an $ n $-dimensional vector space over $ \mathbb{Z}_2. $ Therefore $ \ring{\mathcal{C}} $ is isomorphic to $ \sum_{}^n \mathbb{Z}_2 $ ($ n $ direct sums of $ \mathbb Z_2$). Also, the characteristic functions $ \{\rho_{c_i}\}_{i=1}^n $ form a basis of $ \ring{\mathcal{C}}. $ We define the ring homomorphisms on these basis elements, and being ring homomorphisms they preserve the multiplicative structure. Further, in this section we drop $ c $ and write $ \rho_{c_i} $ as just $ \rho_i $. In 1974 Carlton J Maxson \cite{maxson1974endomorphism} explored the semigroup of endomorphisms of a ring. He proved that the semigroup of endomorphisms of $ \sum_{}^n \mathbb{Z}_2 $ corresponds to all the partial functions from $ [n] $ into itself, and that the endomorphisms which preserve unity correspond to all the functions from $ [n] $ into itself. The former's cardinality is $ (n + 1)^n $ and the latter's is $ n^n. $ Therefore $ |\rh{\mathcal{C}}|=n^n. $
Let us now describe an arbitrary map $ \phi \in \rh{\mathcal{C}}. $ Since $ \ring{\mathcal{C}} $ is a vector space, we first determine $ \phi $ on the basis elements $ \{\rho_i\}_{i=1}^n. $ Given a basis element $ \rho_i, $ let $ \phi $ map it to $ \sum_{j=1}^n a_{ij}^{} \rho_j^{}, $ where $ a_{ij}^{}\in\mathbb Z_2=\{0,1\}. $ Further, $ \sum_{j=1}^n a_{ij}^{} \rho_j^{} $ can be seen as the dot product of $ a_i $ and $ P, $ where $ a_i=(a_{i1}^{},a_{i2}^{},\dots,a_{in}^{}) \in \{0,1\}^n $ and $ P=(\rho_1,\rho_2,\dots,\rho_n). $ So, rewriting the map, we get $ \rho_i \mapsto a_i\cdot P. $ We say that $ \phi $ is determined by these vectors $ a_i\ (\phi \leftrightarrow \{a_i\}_{i\in[n]}^{}) . $ Since the map $ \phi $ is a ring homomorphism, it preserves the multiplication in $ \ring{\mathcal{C}}$. We will now derive conditions on the vectors $ a_i $ which make sure $ \phi $ preserves multiplication.
We are going to use the following facts, details of which are given in the paper ``Neural ring homomorphisms and maps between neural codes'' by Carina Curto and Nora Youngs \cite{curto2020neural}:
\begin{enumerate}
\item $ \rho_i \rho_j =\begin{cases}
0 \ \ &\text{ if } i \not =j \\
\rho_i \ \ &\text{ if } i=j.
\end{cases} $
\item $\sum_{i=1}^{n}\rho_i=1_{\ring{\C}}.$
\end{enumerate}
Using these two facts we make the following remarks. Before that, we write down a few notations.
\underline{\textbf{Notations:}}
\begin{enumerate}
\item $ P $ denotes the vector $ (\rho_1,\rho_2,\dots,\rho_n) $
\item $ \phi \leftrightarrow \{a_i\}_{i\in[n]}^{}: \ a_i=(a_{i1}^{},a_{i2}^{},\dots,a_{in}^{}) $ are the set of vectors that determine $ \phi. $
\item $ |a_i|: $ This notation refers to the number of ones that $ a_i $ contains.
\end{enumerate}
\begin{remark} \label{obsrh}
\begin{enumerate}
\item $ (a_i\cdot P)(a_j\cdot P) =\sum_{l=1}^n a_{il}^{} \rho_l^{} \sum_{k=1}^n a_{jk}^{} \rho_k^{}=\sum_{r=1}^n b_{ijr}^{} \rho_r^{},$
where $ b_{ijr}^{}=a_{ir} ^{}a_{jr}^{}.$
Let $ b_{ij}=(b_{ij1}^{},b_{ij2}^{},\dots,b_{ijn}^{}) .$
Then we get $ (a_i\cdot P)(a_j\cdot P)=b_{ij}\cdot P $
\item For some $ i\not=j\in [n] $ we have $b_{ij}\cdot P=(a_i\cdot P)(a_j\cdot P)=\phi(\rho_i)\phi(\rho_j)=\phi(\rho_i\rho_j)=\phi(0)=0 .$ Therefore $ \sum_{k=1}^{n} b_{ijk} \rho_k=0$. So, we get $ b_{ijk}=0 $ for all $ k. $
\item Suppose for some $ i,k\in[n] $ we have $ a_{ik}=1. $ Then as $ 0=b_{ijk}=a_{ik}a_{jk} $ for all $ j\not=i\in [n], $ we have $ a_{jk}=0 .$ This means that for a given coordinate $ k\in[n] $ there is at most one vector $ a_i $ such that $ a_{ik}=1. $ So the number of ones in all the $ a_i $'s together is at most $ n. $ Therefore we can \label{reml} say that $ \sum_{i=1}^n |a_i|\leq n. $
\item Since $ 1_{\ring{\C}}=\sum_{i=1}^{n} \rho_i, $ applying $ \phi $ on both sides we get $ 1_{\ring{\C}}=\sum_{i=1}^{n} (a_i\cdot P), $ as our ring homomorphisms preserve unity by definition. Further, we get $ 1_{\ring{\C}}=\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}^{}\rho_j^{} = \sum_{i=1}^n a_{i1}\rho_1+\sum_{i=1}^n a_{i2}\rho_2+\dots+\sum_{i=1}^n a_{in}\rho_n.$
Therefore, comparing the coefficients on both sides, we get that for all $ j\in [n],\ \sum_{i=1}^n a_{ij}=1. $ This means that for a given coordinate $ k\in[n] $ there is at least one vector $ a_i $ such that $ a_{ik}=1. $ So the number of ones in all the $ a_i $'s together is at least $ n. $ Therefore we can say that $ \sum_{i=1}^n |a_i|\geq n. $ This and observation (\ref{reml}) give us $ \sum_{i=1}^{n}|a_i|= n. $
\item If there is a vector $ a_i $ with $ |a_i|=r, $ then from the previous observation we can guarantee that there are at least $ r-1 $ indices $ j $ such that $ a_j $ is a zero vector. Furthermore, assume there exists an $ i\in [n] $ such that $ |a_i|=n,$ that is, $ a_i $ is the all-ones vector. Then for all $ j\not=i $ the vector $ a_j $ is a zero vector.
\end{enumerate}
\end{remark}
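\noindent For small $ n $ the above remarks can be verified by brute force. The following Python sketch (illustrative only) enumerates all tuples $ \{a_i\}_{i\in[n]} $ for $ n=3 $ and counts those satisfying the two conditions derived above; the count agrees with $ |\rh{\mathcal{C}}|=n^n=27. $
\begin{verbatim}
from itertools import product

def is_ring_hom(vectors):
    # rho_i |-> a_i . P is a unity-preserving ring endomorphism iff the
    # products vanish for i != j and the images sum to 1 (see remarks)
    n = len(vectors)
    for i in range(n):
        for j in range(n):
            if i != j and any(vectors[i][k] and vectors[j][k]
                              for k in range(n)):
                return False
    # with disjointness, an odd column sum means exactly one 1 per
    # coordinate, which is the unity condition
    return all(sum(v[k] for v in vectors) % 2 == 1 for k in range(n))

n = 3
count = sum(is_ring_hom(vs)
            for vs in product(product((0, 1), repeat=n), repeat=n))
print(count, n ** n)  # 27 27
\end{verbatim}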
We are finally ready to define three classes of maps in $ \rh{\mathcal{C}}. $
\begin{definition}
\begin{enumerate}
\item \textbf{Basis permutation maps (BPM)} We call an element $ \phi\in \rh{\mathcal{C}} $ a \textit{basis permutation map} if $ |a_i|=1 $ for all $ i $. It is easy to observe that there are $ n! $ many such maps. We denote by $ \bpm{\mathcal{C}} $ the collection of all basis permutation maps from $ \ring{\mathcal{C}} $ into itself.
\item \textbf{Unity maps (UM)} We call $ \phi\in \rh{\mathcal{C}} $ a \textit{unity map} if there exists an $ i $ such that $ |a_i|=n, $ i.e., $ a_i $ is the all-ones vector. From Remark \ref{obsrh} we know that all the other vectors determining $ \phi $ will be zero vectors. Therefore there are exactly $ n $ such maps. We denote by $ \um{\mathcal{C}} $ the collection of all unity maps from $ \ring{\mathcal{C}} $ to itself.
\item \textbf{Non BPM and Non UM maps} These are all the other maps in $ \rh{\mathcal{C}}, $ so their cardinality is $ n^n-n!-n. $ Let $ \psi $ be a map in this class. As $ \psi $ is not a BPM, there exists at least one $ i\in [n] $ such that $ |a_i|\geq 2. $ Therefore at least one other vector $ a_j $ must be a zero vector. So we can also refer to this class as non-unity maps with at least one $ a_i =0.$
\end{enumerate}
\end{definition}
\begin{eg}
Let $ \mathcal{C} $ be a code on $ 3 $ neurons with $ |\mathcal{C}|=3. $ We know that $ \{\rho_1,\rho_2,\rho_3\} $ generates $ \ring{\mathcal{C}}. $ We give 3 different ring endomorphisms on $ \ring{\mathcal{C}}. $
\begin{enumerate}
\item Let, $ a_1=(0,1,0),a_2=(0,0,1) $ and $ a_3=(1,0,0). $ Therefore the map $ \phi $ given by $ \{a_i\}_{i\in[3]} $ is a basis permutation map. Moreover we see $\phi $ maps basis as follows: $ \rho_1\mapsto\rho_2,\ \rho_2\mapsto\rho_3,\ \rho_3\mapsto \rho_1. $
\item Let, $ a_1=(0,0,0),a_2=(1,1,1) $ and $ a_3=(0,0,0). $ Therefore the map $ \phi $ given by $ \{a_i\}_{i\in[3]} $ is a unity map. Moreover we see $\phi $ maps basis as follows: $ \rho_1\mapsto 0,\ \rho_2\mapsto\rho_1+\rho_2+\rho_3= 1_{\ring{\C}},\ \rho_3\mapsto 0. $
\item Let, $ a_1=(1,0,1),a_2=(0,0,0) $ and $ a_3=(0,1,0). $ Therefore the map $ \phi $ given by $ \{a_i\}_{i\in[3]} $ is a Non BPM and Non UM map. Moreover we see $\phi $ maps basis as follows: $ \rho_1\mapsto \rho_1+\rho_3,\ \rho_2\mapsto 0,\ \rho_3\mapsto \rho_2. $
\end{enumerate}
\end{eg}
\begin{remark}
Let $ \phi\in\rh{\mathcal{C}} $ be a unity map. Then given any $ x_j=\sum_{c_{ij}=1} \rho_i, $ we get $ \phi(x_j)\in \{0,1\} $ for all $ j\in [n],$ because $ \phi(\rho_i)\in \{0,1\} $ for every $ i\in[n]. $ Therefore, irrespective of the code, all \textit{unity maps} are neural ring homomorphisms. So, given a code $ \mathcal{C} $ of cardinality $ n $ on $ n $ neurons, we have $ |\nrh{\mathcal{C}}| \geq n. $
\end{remark}
\subsection{Circulant codes}
Consider the codeword on $ n $ neurons given by $ c_1=(10\cdots0), $ i.e., $ 1 $ followed by $ n-1 $ zeros. Shift the $ 1 $ to the right to generate the other codewords; in other words, $ c_i $ is the codeword containing $ 1 $ in the $ i $th place and $ 0 $ everywhere else. It is easy to observe that there are exactly $ n $ such codewords. Denote by $ \mathcal{C} $ the code with $\mathcal{C}=\{c_i\}_{i=1}^n. $ If we write down a matrix whose rows are the entries of the codewords, then we get an order $ n $ \textit{circulant matrix}\footnote{A circulant matrix of order $ n $ is a matrix in which each row is shifted one element to the right with respect to the previous row. Note that one row is enough to determine the entire circulant matrix, as the rest can be obtained iteratively by shifting to the right.} with entries 0 and 1. We call such a matrix, for any code, the correspondent matrix of the code, and we name such a code a circulant code. Similarly one could have started with $ c_1=(1100\cdots0), $ i.e., two 1's followed by zeros, and we would still obtain a circulant matrix. We give a generalized definition of such codes as follows.
\begin{definition}[circulant codes]
A code $ \mathcal{C}=\{c_1,c_2,\dots,c_n\} $ on $ n $ neurons is called a \textit{circulant code} if the correspondent $ n\times n $ matrix of the code is circulant. We further specify a \textit{circulant code to be of support $ p\ (1\leq p < n) $} if $ c_p=(11\cdots10\cdots0) $ (i.e., $ p $ consecutive ones followed by zeros) and the other $ c_i $'s are simply the $ i^{\text{th}} $ rows of the correspondent matrix of the code, which is circulant.
\noindent Note that for all $ i\in[n] $ we get $ |\operatorname{supp}(c_i)|=p. $ Also, we do not consider $ p=n, $ as in that case $ \mathcal{C}=\{(11\cdots11)\} $ is a code of cardinality $ 1, $ while we are interested only in codes on $ n $ neurons with cardinality $ n. $
\end{definition}
\begin{eg}
The following are few examples of \textit{circulant codes}
\begin{enumerate}
\item $ \{100,010,001\} $ is a circulant code with support $ p=1 $ on $ n=3 $ neurons.
\item $ \{1001,1100,0110,0011\} $ is a circulant code with support $ p=2 $ on $ n=4 $ neurons.
\end{enumerate}
\end{eg}
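\noindent A short Python sketch (our own illustration) that generates circulant codes directly from this definition:
\begin{verbatim}
def circulant_code(n, p):
    # row p of the correspondent matrix is 1^p 0^(n-p); c_1 is that row
    # cyclically shifted left by p-1, and each row shifts one step right
    row_p = [1] * p + [0] * (n - p)
    row = row_p[p - 1:] + row_p[:p - 1]
    code = []
    for _ in range(n):
        code.append("".join(map(str, row)))
        row = [row[-1]] + row[:-1]
    return code

print(circulant_code(3, 1))  # ['100', '010', '001']
print(circulant_code(4, 2))  # ['1001', '1100', '0110', '0011']
\end{verbatim}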
\begin{remark}
We have fixed the order of the $ c_i $'s in a circulant code $ \mathcal{C}. $ For example, consider the code $ \mathcal{C}=\{101,110,011\} $ and $ \mathcal{C}'=\{110,011,101\} $ with elements reordered. Then $ \mathcal{C} $ is a circulant code on $ n=3 $ neurons with support $ p=2, $ whereas $ \mathcal{C}' $ is no longer a circulant code. But $ q_\alpha=(123):\mathcal{C}\rightarrow\mathcal{C}' $ is a permutation on neurons which gives a neural ring isomorphism between $ \ring{\mathcal{C}} $ and $ \ring{\mathcal{C}'}. $ From Observation \ref{obsnrh} we then get $ |\nrh{\mathcal{C}}|=|\nrh{\mathcal{C}'}|. $
\end{remark}
Our aim is to investigate $ \nrh{\mathcal{C}} $ and give its cardinality for \textit{circulant codes}. A map $ \phi\in \rh{\mathcal{C}} $ belongs to $ \nrh{\mathcal{C}} $ if for all $ i\in[n],\ \phi(x_i)\in \{x_i|i\in [n]\} \cup \{0,1\}.$\footnote{{Since these are maps from the ring to itself, }$ y_i=x_i. $ } So it is important for us to understand what the $ x_i $'s are for circulant codes. First we note that the number of terms in $ x_i $ comes from the number of 1's in the $ i^{\text{th}} $ column of the correspondent matrix of the code. For a circulant code the correspondent matrix is a circulant matrix, and in a circulant matrix every row sum and column sum is the same constant. Therefore, in a circulant code of support $ p $ the number of terms in $ x_i $ is the same for all $ i\in [n]; $ in fact each $ x_i $ is a sum of $ p $ terms. Moreover, we can observe that $ x_i=\sum_{k=0}^{p-1} \rho_{\mdsum{i+k}}^{}, $ where $$ \big(\big)_\oplus^{}: \{-n+1,\dots,0\} \cup[2n-1]\to [n] \text{ is given by } \left( i \right)_{\oplus}
^{} = \begin{cases}
i & \text{ if } 0<i \leq n\\
j & \text{ if } i>n \text{ and }i=n+j
\\ k & \text{ if } -n<i\leq 0 \text{ and } i=-n+k
\end{cases}. $$
\noindent Note that the argument $ 2n $ is never needed in the expression of any $ x_i, $ since $ i+k\leq n+p-1 < 2n $ as $ p<n. $ We denote $\rho_{\mdsum{i+j}} $ as the $ (j+1)^{\text{th}} $ term in the expression of $ x_i. $ So, naturally, $ \rho_{i}$ and $ \rho_{\mdsum{i+p-1}} $ are the first and last (or $ p^{\text{th}} $) terms in the expression of $ x_i, $ respectively. A small sketch computing these supports is given below.
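\noindent The following Python sketch (illustrative) verifies for $ n=4,\ p=2 $ that the support of each $ x_i $ consists of the $ p $ cyclically consecutive indices $ i,\ \mdsum{i+1},\dots,\mdsum{i+p-1}: $
\begin{verbatim}
def x_support(code, i):
    # 1-based indices k with rho_k in x_i, i.e. the 1's in column i
    return sorted(k + 1 for k, word in enumerate(code)
                  if word[i - 1] == "1")

code = ['1001', '1100', '0110', '0011']   # circulant, n = 4, p = 2
n, p = 4, 2
for i in range(1, n + 1):
    expected = sorted(((i - 1 + k) % n) + 1 for k in range(p))
    assert x_support(code, i) == expected
print("each x_i has the p cyclically consecutive terms, as claimed")
\end{verbatim}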
\begin{proposition}
If $ \mathcal{C} $ is a circulant code with support $ p=1 $ or $ n-1, $ then $ |\nrh{\mathcal{C}}|=n!+n. $ \label{propcirnrh}
\end{proposition}
\begin{proof}
\begin{caseof}
\casea{$ p=1 $}{When $p=1 $ we have $ x_i=\rho_i $ for all $ i. $ Given any $ \phi \in \bpm{\mathcal{C}}, $ we get $ \phi(x_i)=\phi(\rho_i)=\rho_j=x_j $ for some $ j\in [n]. $ This implies that all the basis permutation maps are in $ \nrh{\mathcal{C}}. $ Moreover, we already know that $ \um{\mathcal{C}}\subseteq \nrh{\mathcal{C}} $ for any code $ \mathcal{C}. $ So we have $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}}\subseteq \nrh{\mathcal{C}} .$ It is left to show that no non BPM and non UM map is in $ \nrh{\mathcal{C}}. $ First we observe that for any non BPM and non UM map $ \psi $ there exists an $ i\in [n] $ such that $ |a_i|=k $, where $ 2\leq k \leq n-1.$ Consider $ \psi(x_i)=\psi(\rho_i)=a_i\cdot P. $ This implies $ \psi(x_i) $ has $ k $ terms. But each $ x_i=\rho_i $ has one term. Thus $ \psi(x_i)\not \in \{x_i|i\in [n]\} \cup \{0,1\} $, and therefore $ \psi\not \in \nrh{\mathcal{C}}. $ Hence we have $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}} =\nrh{\mathcal{C}} $ and the result follows. }
\casea{$ p=n-1 $}{When $ p=n-1 $ we get $x_i=\sum_{k=0}^{n-2} \rho_{(i+ k)_{\oplus}^{} } ^{}. $ First we observe that if $ \phi\in \bpm{\mathcal{C}} $ then $ \phi(x_i) $ will also have exactly $ n-1 $ terms. This is because $ \phi, $ being a BPM, is a bijection when restricted to the basis elements. Next, choosing $ n-1 $ terms out of the given $ n $ terms gives us $ \binom{n}{n-1}=n $ choices, and all these $ n $ choices occur among the $ x_i $'s, as there are exactly $ n $ distinct ones. Therefore there exists a $ j\in [n] $ such that, after rearrangement of terms in $ \phi(x_i), $ we get $ \phi(x_i)=x_j. $ Hence we have $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}}\subseteq \nrh{\mathcal{C}}. $ It is once again left to show that a non BPM and non UM map $ \psi $ is not in $ \nrh{\mathcal{C}}. $ As we noticed in Case 1, there exists $ i $ such that $ |a_i|=k\ (2\leq k\leq n-1), $ where $ a_i $ is a vector that determines $ \psi. $ We now assume that there are $ r $ vectors $ \{a_{r_1},a_{r_2},\dots, a_{r_r}\} $ which take the other $ n-k $ ones, so that $ n-r-1 $ vectors $ \{a_{t_1},a_{t_2},\dots, a_{t_{n-r-1}}\} $ are zero. As we mentioned earlier, all term combinations of size $ n-1 $ are present among the $ x_i $'s. This implies there exists $ j\in [n] $ such that $ x_j= \rho_{r_1}+\rho_{r_2}+\dots+\rho_{r_r}+\rho_{t_1}+\rho_{t_2}+\dots+\rho_{t_{n-r-1}}$. From this we can see that $ \psi(x_j) $ has $ n-k $ terms in its summation, as the zero vectors contribute nothing and the $ a_{r_l} $'s carry the remaining $ n-k $ ones. As $ k\geq 2 $ we have $ n-k < n-1, $ and as $ k\leq n-1 $ we have $ n-k\geq 1. $ This implies $ \psi(x_j)\not\in \{x_i|i\in [n]\} \cup \{0,1\}$. Therefore $ \psi \not\in \nrh{\mathcal{C}}, $ and hence $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}} =\nrh{\mathcal{C}} $ and the result follows. }
\end{caseof}
\vspace{-0.7cm}
\end{proof}
\begin{remark}\label{rem42}
Consider the code $ \mathcal{C}=\{1001,1100,0110,0011\}.$ We observe that for this circulant code with $ p=2 $ there are some $ \phi \in \bpm{\mathcal{C}} $ with $ \phi \not \in \nrh{\mathcal{C}}. $ Indeed, for this code $ \mathcal{C} $ we know that $ |\bpm{\mathcal{C}}|=24,$ and we found only 8 of these maps present in $ \nrh{\mathcal{C}}. $ The other interesting fact is that there are some non BPM and non UM maps which, for this code, are present in $ \nrh{\mathcal{C}}. $ By brute force we computed that there are 24 such non BPM and non UM maps, which gives $ |\nrh{\mathcal{C}}|=36 > 4!+4. $ Also, the number of basis permutation maps in $ \nrh{\mathcal{C}} $ is $ 8=2\cdot4=2n $ for $ n=4. $ So we try to see whether this is true for all $ n. $ The brute-force computation is sketched below.
\end{remark}
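\noindent The following Python sketch shows one way to carry out such a brute-force count (our own implementation; it relies on the correspondence, noted above, between unity-preserving ring endomorphisms of $ \sum_{}^n \mathbb{Z}_2 $ and functions $ f:[n]\to[n] $). It reproduces $ |\nrh{\mathcal{C}}|=3!+3=9 $ for the support-1 circulant code on 3 neurons and $ 36 $ for the code of this remark.
\begin{verbatim}
from itertools import product

def count_nrh(code):
    # code: n codewords (0/1 strings) on n neurons; a ring endomorphism
    # corresponds to f: [n] -> [n] via rho_i |-> sum of rho_k over
    # f(k) = i, and it is a neural ring homomorphism iff every phi(x_j)
    # lies in {x_1, ..., x_n, 0, 1}.
    n = len(code)
    C = [tuple(int(b) for b in w) for w in code]
    cols = [frozenset(i for i in range(n) if C[i][j]) for j in range(n)]
    allowed = set(cols) | {frozenset(), frozenset(range(n))}
    count = 0
    for f in product(range(n), repeat=n):
        # the support of phi(x_j) is {k : c_{f(k), j} = 1}
        if all(frozenset(k for k in range(n) if C[f[k]][j]) in allowed
               for j in range(n)):
            count += 1
    return count

print(count_nrh(["100", "010", "001"]))             # 9  = 3! + 3
print(count_nrh(["1001", "1100", "0110", "0011"]))  # 36
\end{verbatim}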
\begin{lemma}
If $ \mathcal{C} $ is a circulant code with support $ p=2, $ then the total number of basis permutation maps present in $ \nrh{\mathcal{C}} $ is $ 2n. $ \label{prp2}
\end{lemma}
\begin{proof}
Let $ \phi\in \bpm{\mathcal{C}}. $ It is enough to see the restriction of $ \phi $ to the basis elements in order to determine the entire map. For this reason we count where $ \phi $ can map each $ \rho_i. $ Starting with $ \rho_1^{}, $ it is clear that $ \rho_1^{} $ has $ n $ choices. Assume $ \rho_1^{} $ has been mapped to $ \rho_j^{}. $ We have $ x_1^{}=\rho_1^{}+\rho_2^{} $ and $ \phi(x_1^{})\in \{x_i^{}|i\in [n]\} $
(for a map in $ \bpm{\mathcal{C}},\ \phi(x_i^{}) $ and $ x_i^{} $ have the same number of terms, so $ \phi(x_i^{})\not \in\{0,1\}). $
Therefore $ \phi(x_1^{})=\phi(\rho_1^{}+\rho_2^{})=\rho_j^{}+\phi(\rho_2^{}). $ So, for $ \phi(x_1^{})\in\{x_i^{}|i\in[n]\}, $ we must have $ \phi(\rho_2^{}) =\rho_{\mdsum{j+1}}^{} $ or $ \rho_{\mdsum{j-1}}^{}. $
Therefore $ \rho_2^{} $ has 2 choices once $ \rho_1^{} $ is fixed. On fixing $ \rho_2^{} \mapsto \rho_{\mdsum{j-1}}^{}, $ we similarly get $ \rho_3^{}\mapsto \rho_j^{} $ or $ \rho_{\mdsum{j-2}}^{}. $
But as $ \phi(\rho_1^{})= \rho_j^{}, $ we cannot have $ \phi(\rho_3^{})=\rho_j^{}. $ Therefore $ \rho_3^{} $ has exactly one choice when $ \rho_1^{}$ and $ \rho_2^{} $ are fixed, and $ \rho_3^{} $ still has 1 choice when $ \rho_2^{}\mapsto \rho_{\mdsum{j+1}}^{}. $ Continuing, each subsequent $ \rho_i^{} $ has exactly one choice, so in total we have $ n\cdot 2=2n $ choices. Hence the result.
\end{proof}
\begin{remark}
In Proposition \ref{propcirnrh} and Lemma \ref{prp2} we counted the number of basis permutation maps that are neural ring homomorphisms for circulant codes with support $ p=1,2$ and $n-1, $ obtaining the counts $ n!,\ 2n $ and $n!, $ respectively. We further calculated the count for $ p=3 $ and again obtained $ 2n $ basis permutation maps in $ \nrh{\mathcal{C}} $. We strongly believe the pattern remains the same as $ p $ increases, so we pursued the following theorem and obtained its proof.
\end{remark}
\begin{theorem} \label{thnrhbpm}
Let $ \mathcal{C} $ be a circulant code with support $ p\ (1\leq p < n). $ The total number of basis permutation maps present in $ \nrh{\mathcal{C}} $ is given by $ \begin{cases}
n! &\text{ if } p=1 \text{ or } p=n-1 \\
2n &\text{ if } 1<p<n-1
\end{cases}. $
\end{theorem}
\vspace{-0.7cm}
\begin{proof}
\begin{caseof}
\casea{${ p=1\text{ or } n-1.}$}{In this case we get the result using the proof of Proposition \ref{propcirnrh}. } \vspace{-0.2cm}
\casea{$ {p=2} $ }{This is Lemma \ref{prp2}.} \vspace{-0.2cm}
\casea{$ 2<p<n-1 $}{As $ p < n-1 $ and $ x_i=\sum_{k=0}^{p-1}\rho_{\mdsum{i+k}}^{}, $ this gives us the following equations} \begin{align*}
x_1^{}&=\rho_1^{}+\rho_2^{}+\dots+\rho_p^{},\\ x_2^{}&=\rho_2^{}+\rho_3^{}+\dots+\rho_{p+1}^{},\\ x_3^{}&=\rho_3^{}+\rho_4^{}+\dots+\rho_{p+2}^{}
\end{align*} Let $ \phi\in \bpm{\mathcal{C}} $ be a neural ring homomorphism. As seen in the proof of Lemma \ref{prp2}, it is enough to see the restriction of $ \phi $ to the basis elements. Starting with $ \rho_1^{}, $ it is clear that $ \rho_1^{} $ can be mapped to any of the $ n $ $ \rho_i $'s. Assume $ \rho_1^{} $ has been mapped to $ \rho_j^{} $ for some $ j\in[n]. $
\end{caseof}
\begin{claim} $ \phi(\rho_2^{})= \rho_{\mdsum{{j+1}}}^{}$ or $ \phi(\rho_2^{})=\rho_{\mdsum{{j+(n-1)}}}^{}.$
Suppose not. Let $ \phi(\rho_2^{})= \rho_{\mdsum{{j+k}}},$ where $ k\in[n]\backslash \{1,n-1\}.$
We get $ \phi(x_1^{})=\phi(\rho_1^{}+\rho_2^{}+\dots+\rho_p^{})=\rho_j^{}+\rho_{\mdsum{{j+k}}}^{}+\phi(\rho_3^{})+\dots+\phi(\rho_p^{}). $ As $ \phi $ is a neural ring homomorphism, $ \phi(x_1^{})\in\{x_i^{}|i\in[n]\}, $ so there exists $ l\in[n] $ such that $ \phi(x_1)=x_l^{} $. Therefore for all $ i\in [n]\backslash[2] $ we get $ \phi(\rho_i^{})=\rho_{r_i} $ such that $ \rho_{r_i} $ is present in the expression of $ x_l^{} $ and $ r_i\not=j $ or $ \mdsum{j+k}. $ Let, if possible, $ \rho_j $ be the first term in the expression of $ x_l, $ or in other words let $ x_l=x_j. $ Consider $ \phi(x_2^{})= \phi(x_{2}^{}+\rho_1^{}-\rho_1^{}) =\phi(\rho_1^{}+\rho_2^{}+ \dots+\rho_p+\rho_{p+1}^{}-\rho_{1}^{}) =x_j-\rho_j+\phi(\rho_{p+1}^{}) = \rho_{\mdsum{{j+1}}}^{}+\dots+ \rho_{\mdsum{j+k}}+\dots+ \rho_{\mdsum{j+(p-1)}}^{} + \phi(\rho_{p+1}^{}). $ As $ \phi(x_2^{}) \in\{x_i^{}|i\in[n]\} $ it must be a sum of some $ p $ consecutive $ \rho_i $'s. This forces $ \phi(\rho_{p+1})= \rho_j $ or $\phi(\rho_{p+1})= \rho_{\mdsum{j+p}}. $ But the former is not possible as $ \phi(\rho_1^{})=\rho_j. $ Therefore we get $ \rho_{p+1}\mapsto \rho_{\mdsum{j+p}}. $ Next, we look at $ \phi(x_3^{}). $ Now $ \phi(x_3^{})=\rho_{\mdsum{j+1}}^{}+\dots+\rho_{\mdsum{j+k-1}}^{}+\rho_{\mdsum{j+k+1}}^{}+\dots+\rho_{\mdsum{j+p}}^{}+\phi(\rho_{p+2}). $ For $ \phi(x_3^{}) $ to be some $ x_m $ we would require $ \phi(\rho_{p+2})=\rho_{\mdsum{j+k}}^{}, $ as $\rho_{\mdsum{j+k}}^{} $ is the missing term. But then we would end up getting $ \phi(\rho_{p+2})=\phi(\rho_2), $ which is a contradiction. Therefore $ x_l^{}\not=x_j. $ We would get a similar contradiction even if $ \rho_j $ were the last term in the expression of $ x_l^{}. $
Now suppose that $ \rho_j $ is an in-between term in the expression of $ x_l^{}, $ i.e. let $ x_l^{}=\rho_l^{}+\dots+\rho_{j}^{}+ \rho_{\mdsum{j+1}}^{}+\dots+\rho_{\mdsum{j+k}}^{}+\dots+\rho_{\mdsum{l+p-1}}^{}. $ Now we get $ \phi(x_2^{})=\rho_l+\dots+ \rho_{\mdsum{j+1}}^{}+\dots+\rho_{\mdsum{j+k}}+\dots+\rho_{\mdsum{l+p-1}}^{}+\phi(\rho_{p+1}^{}). $ This implies that for $ \phi(x_2^{})\in\{x_i^{}|i\in[n]\}, $ we need $ \phi(\rho_{p+1}^{})=\rho_j^{}. $ But this would give us $ \phi(\rho_1^{})=\phi(\rho_{p+1}^{}), $ which is a contradiction. Hence the claim.
\end{claim}
Therefore $ \phi $ maps $ \rho_2$ to either $ \rho_{\mdsum{j+1}}^{} $ or $ \rho_{\mdsum{j+(n-1)}}^{} $. In other words, $ \rho_2^{} $ is mapped to a basis element that is adjacent to $ \phi(\rho_1^{}).$ Similarly we see that $ \rho_3^{} $ has 2 possibilities, i.e. it can be mapped to the basis elements that are adjacent to $\phi( \rho_2^{}) $. Fix $ \rho_2^{}\mapsto \rho_{\mdsum{j+1}^{}}; $ then we get $ \rho_3^{}\mapsto\rho_{\mdsum{j+2}^{}} $ or $ \rho_3^{}\mapsto\rho_{j}^{}. $ But the latter is not possible as $ \phi(\rho_1^{})=\rho_j^{}. $ Even if $ \rho_2^{}\mapsto \rho_{\mdsum{j+(n-1)}}^{}, $ we get that $ \rho_3^{} $ can only be mapped to $ \rho_{\mdsum{j+(n-2)}}^{}, $ for the same reason. Therefore we see that $ \rho_3^{} $ has only one choice, whenever $ \phi(\rho_1^{}) $ and $ \phi(\rho_2^{}) $ are already fixed. Further, each $ \rho_i,\ i\in [n]\backslash[3], $ has just 1 choice. So the total number of choices for $ \phi $ to be a neural ring homomorphism is $ n\times2\times 1\times\dots\times1=2n. $ Hence the result.
\end{proof}
We know that $ |\nrh{\mathcal{C}}| =n!+n $ for circulant codes with support $ p=1 $ and $ p=n-1 $ by Proposition \ref{propcirnrh}. Now by Theorem \ref{thnrhbpm} we get that $ |\nrh{\mathcal{C}}| \geq 3n $ for all circulant codes with support $p$ on $n>2 $ neurons. Further, we want to know how the non basis permutation and non unity maps behave on a few circulant codes with support $ 1<p<n-1. $ Before that we introduce some notation. Let $ y_{i}= \rho_{i1}+\rho_{i2}+\dots+\rho_{ik}$ be some summation of a combination of $ k $ $ \rho_i $'s. We will use $ \norm{y_i} $ as the notation to indicate the number of distinct $ \rho_i $'s in the expression of $ y_i, $ so that $ \norm{y_i}=k. $ Similarly, we have $\norm{x_i}=p $ for a circulant code of support $ p, $ as we know that $ x_i=\sum_{k=0}^{p-1} \rho_{\mdsum{i+k}}^{}. $ We already know by definition that $ \phi\in \rh{\mathcal{C}} $ is in $ \nrh{\mathcal{C}} $ if for all $ i\in [n] $ we have $ \phi(x_i)\in\{x_j|j\in [n]\} \cup\{0,1\}. $ With the notation $ \norm{\cdot} $ we can say that a necessary condition for $ \phi\in
\rh{\mathcal{C}}$ to be in $\nrh{\mathcal{C}} $ is: for all $ i\in[n] $ we must have $ \norm{\phi(x_i)}\in\{0,n,\norm{x_j}\}$ for some $ j\in[n]. $ And if $ \mathcal{C} $ is a circulant code with support $ p, $ then $ \norm{x_j}=p $ for all $ j\in[n], $ so the necessary condition becomes $ \norm{\phi(x_i)}\in\{0,n,p\} $ for all $ i\in [n]. $ Note that for all $ i\in [n] $ we have $ \phi(\rho_i)=a_i\cdot P= \displaystyle\sum_{a_{ij}=1}\rho_j^{}. $ This gives us that $ \norm{\phi(\rho_i^{})}=|a_i|. $ Also $ \norm{\phi(x_i)}=\Big\Vert\displaystyle\sum_{k=0}^{p-1} \phi\left( \rho_{\mdsum{i+k}}^{}\right)\Big\Vert = \sum_{k=0}^{p-1} \Big|a_{\mdsum{i+k}}^{}\Big|.$
\begin{theorem}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ p=2. $ If $ n $ is odd then $ |\nrh{\mathcal{C}}|=3n. $ \label{thnp2}
\end{theorem}
\vspace{-0.5cm}
\begin{proof}
\noindent Clearly $ n\geq 3, $ as the support satisfies $ p=2<n. $ \vspace{-0.2cm}
\begin{caseof}
\casea{$ n=3 $}{As $ p=2=n-1 $ in this case, by Proposition $ \ref{propcirnrh} $ we already know that $ |\nrh{\mathcal{C}}|=3!+3=3n. $ Hence the proof.} \vspace{-0.3cm}
\casea{$ n>3 $}{From the Lemma $ \ref{prp2} $ we get that the total basis permutation maps that are neural ring homomorphisms are $ 2n. $ We already know that there are $ n $ unity maps and all are in $ \nrh{\mathcal{C}}. $ Therefore we have $ |\nrh{\mathcal{C}}| \geq 3n . $ We are only left to show that there are no more neural ring homomorphisms.
}
\end{caseof}\vspace{-0.2cm}
Let $ \phi $ be a non BPM and non UM map. Suppose, if possible, $ \phi $ is a neural ring homomorphism, with $ \{a_i\}_{i\in [n]} $ as the vectors that represent $ \phi. $ As $ \phi $ is a non BPM and non UM map, we already know that there exists $ m\in[n] $ such that $ |a_m|\geq 2. $
\begin{claim} $ \mathbf{\textbf{For all } i, \textbf{ we have } |a_i|\leq 2. } $ Suppose not. Then there exists $ j $ such that $ |a_j|=k>2. $ Also, as $ \phi $ is a non unity map, we have $ k<n. $ We know that $ x_j=\rho_j+\rho_{\mdsum{{j+1}^{}}}^{}. $ By the necessary condition for $ \phi\in \nrh{\mathcal{C}}, $ we have that $ \norm{\phi(x_j)}=0,2 $ or $ n. $ But $ \norm{\phi(\rho_j)}=|a_j|=k>2.$ So the only possibility is that $ \norm{\phi(x_j)}=n. $ This gives us that $ |a_{\mdsum{{j+1}}}^{}|=\norm{\phi(\rho_{\mdsum{{j+1}}^{}})}=n-k. $ Also, as $ |a_j|+\Big|a_{\mdsum{{j+1}}}^{}\Big|=n, $ we get that $ |a_i|=0 $ for all $ i\not=j $ and $ i\not=\mdsum{{j+1}}. $ Consider $ \phi \left( x_{\mdsum{{j-1}^{}}}^{}\right) = \phi(\rho_{\mdsum{j-1}})+\phi(\rho_j)= \phi\left( {\rho_j}\right) .$ Therefore $ \Big\Vert{\phi\left( x_{\mdsum{{j-1}^{}}}\right) }\Big\Vert=\norm{\phi(\rho_j)}=k\not=0,2 $ or $ n, $ as $ 2<k<n. $ This is a contradiction to the necessary condition of $ \phi\in\nrh{\mathcal{C}}. $ Hence the claim.
\end{claim}
With the claim we get that $ |a_m|=2. $ Suppose there exists some $ j\in[n] $ such that $ |a_j|=1. $ Then $|a_j|= \norm{\phi(\rho_j)}=1 $ gives us $ \norm{\phi(x_j)} \not=0.$ Also, for all $i\in [n]$ we have $ |a_i|\leq 2, $ so $ \norm{\phi(x_j^{})}=|a_j|+|a_{\mdsum{j+1}^{}}|=1+|a_{\mdsum{j+1}^{}}|\leq 1+2=3. $ Therefore we have $ \norm{\phi(x_j)} \not=n, $ since $ n>3. $ Thus the necessary condition gives us that $ \norm{\phi(x_j)}=2,$ and we have $ \norm{\phi(\rho_{\mdsum{{j+1}}}^{})}=1. $ Iteratively we get for all $ i\in[n] $ that $ \norm{\phi(\rho_i)}=1=|a_i|. $ This is a contradiction to the fact that $ |a_m|=2. $ Therefore we have that for all $i\in[n], |a_i|=0$ or $ 2. $
Remark \ref{obsrh} gives us that $ \sum_{i=1}^{n}|a_i|=n. $ But as the left hand side is an even number, this forces $ n $ to be even. This is a contradiction to the hypothesis that $ n $ is odd. Therefore $ \phi \not \in \nrh{\mathcal{C}}. $ Hence the proof.
\end{proof}
In view of Theorem \ref{thnp2} we further want to see the count of $ \nrh{\mathcal{C}} $ when $ n $ is even. In Remark \ref{rem42} we have seen that for a circulant code $ \mathcal{C} $ with $ n=4 $ and $p=2$ we get $ |\nrh{\mathcal{C}}|=36. $ We now look at $ n\geq 6 $ in the following theorem.
\begin{theorem}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ p=2. $ If $ n=2k $ and $ k\geq 3, $ then $ |\nrh{\mathcal{C}}|=2^2\left(\dfrac{n}{2}\right)!+3n. $\label{thnpk2}
\end{theorem}
\begin{proof}
Let us first count the total number of non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $ Let $ \phi $ be a non BPM and non UM map with $ \{a_i\}_{i\in[n]} $ as its representing vectors. As observed in the proof of Theorem \ref{thnp2}, for all $ i\in[n] $ we have $ |a_i|=0 $ or $ 2. $ Suppose $ |a_i|=2=|a_{\mdsum{{i+1}}}|; $ then $ \norm{\phi(x_i)}=4. $ This contradicts the necessary condition of a neural ring homomorphism as $ n>4. $ This implies no two consecutive $ a_i $'s have the same non-zero count, i.e. $ |a_i|\not=|a_{\mdsum{i+1}}| $ for any $ i\in[n]. $ Thus if $ |a_1|=2 $ then for all $ m\in[k] $ we get $ |a_{2m-1}|=2 $ and $ |a_{2m}|=0. $ Similarly if $ |a_2|=2 $ then for all $m\in [k] $ we get $ |a_{2m}|= 2$ and $ |a_{2m-1}|=0. $ Therefore when $ \phi \in \nrh{\mathcal{C}} $ it has broadly two types of choices for the vectors that can represent it. Let us fix one type of choice and count how many neural ring homomorphisms it corresponds to. By the choice of all $ |a_i| $ we see that for all $i\in [n],\ \norm{\phi(x_i)}=2. $ This implies for all $ i\in [n] $ there exists $ j\in[n] $ such that $ \phi(x_i^{})=x_j^{}. $
Assume $ |a_1|=2. $ Consider $\phi(x_1^{})=\phi(\rho_1^{}+\rho_2^{})=(a_1\cdot P)+(a_2\cdot P)=(a_1\cdot P)=\phi(\rho_1^{}). $ Let $ \phi(x_1^{})=x_i^{} $ (say) for some $ i\in [n]. $ Then $ \phi(\rho_{1}^{})=x_i^{} $ and clearly $ \rho_1^{} $ has $ n $ choices. Similarly whenever $ |a_l|=2 $ we get that $ \rho_l^{}\mapsto x_j=\rho_j^{}+\rho_{\mdsum{{j+1}}}^{}. $ In general, we can say $ \phi $ maps every basis element to $ 0 $ or a consecutive\footnote{We consider $ \rho_n^{}+\rho_1^{} $ as a consecutive sum} sum of basis elements. As in this case $ |a_{2m-1}|=2 $ and $ |a_{2m}|=0 $ for all $ m\in [k], $ we have $ \phi(\rho_{2m}^{}) =0$ for all $ m\in[k], $ and we need only figure out $ \phi(\rho_{2m-1}^{}).$ We already fixed the case $ m=1 $. Next, we look at $ m=2, $ i.e. we need to find where $ \rho_3^{} $ is mapped by $ \phi $. Let, if possible, $ \rho_3^{} \mapsto x_{\mdsum{{i+r}}}^{} $ where $0<r<n $ and $ r$ is odd. Firstly, we note that $ r\not=n-1 $ and $r\not=1, $ as $ x_{\mdsum{{i-1}}}^{} =\rho_{\mdsum{{i-1}}}^{}+\rho_i$ and $ x_{\mdsum{{i+1}}}^{}=\rho_{\mdsum{{i+1}}}^{}+\rho_{\mdsum{{i+2}}}^{}. $ So now, as $ r\geq 3, $ we observe that the number of $ \rho_j^{} $'s that are in between $ \rho_{\mdsum{i+1}} $ and $ \rho_{\mdsum{{i+r}}} $ is $ r-2. $
Note that once $ \phi(\rho_{2m-1}) $ is chosen for all $ m\in[k-1]\backslash[2], $ there will still be one $ \rho_l $ in between $ \rho_{\mdsum{i+1}} $ and $ \rho_{\mdsum{{i+r}}}, $ as $ r-2 $ is odd. In other words this process will exhaust all the sums of consecutive basis elements. Now we have to map $ \rho_{n-1}, $ as $ |a_{n-1}|=2. $ But there is no sum of consecutive basis elements left, meaning there is no choice for $ \phi(\rho_{n-1}^{}). $ Therefore $ \rho_3^{} $ cannot map to $ x_{\mdsum{{i+r}}}^{} $ when $ r $ is odd. Thus $ \phi:\rho_3^{}\mapsto x_{\mdsum{i+r}} $ for some even $ r\geq 2. $ This clearly gives $ \frac{n}{2}-1 =k-1$ choices for $\rho_3^{} $ to be mapped by $ \phi. $ Similarly we observe that $ \rho_5^{} $ will have $ k-2 $ choices. At the end we see that $ \rho_{n-1} $ has only 1 pair to choose from, hence just $ 1 $ choice. Thus in total we get $ n(k-1)! $ as the number of possible $ \phi $ that can be neural ring homomorphisms when $ |a_1|=2 $.
Similarly, we get $ n(k-1)! $ as the number of possible $ \phi $ that can be neural ring homomorphisms when $ |a_2|=2. $ Therefore the total number of non BPM and non UM maps that are in $\nrh{\mathcal{C}} $ is $ 2n(k-1)!=2^2\left(\frac{n}{2}\right)!. $ By Lemma $ \ref{prp2} $ we already know the count of BPMs that are in $ \nrh{\mathcal{C}} $ to be $ 2n. $ Finally, adding the $ n $ unity maps we get the result.
\end{proof}
Combining the results of Theorems \ref{thnp2} and \ref{thnpk2}, we can write $$ |\nrh{\mathcal{C}}|=\begin{cases}
3n \qquad &\text{ if } n \text{ is odd and } n>1.\\
3n+2^2\left(\dfrac{n}{2}\right)! \qquad &\text{ if } n \text{ is even and } n>4
\end{cases}$$ where $ \mathcal{C} $ is a circulant code with support $ p=2. $
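\noindent As a sanity check of the above case formula, the same enumeration as before can be packaged as a function of $ n $ and $ p $; this is again only a brute-force verification sketch, and the helper name \texttt{nrh\_count} is our own.
\begin{verbatim}
from itertools import product

def nrh_count(n, p):
    # Brute-force |NRH(C)| for the circulant code with support p
    # on n neurons, enumerating all n^n maps on codewords.
    code = [tuple(1 if (j - i) % n < p else 0 for j in range(n))
            for i in range(n)]
    allowed = {tuple(c[j] for c in code) for j in range(n)}
    allowed |= {(0,) * n, (1,) * n}
    count = 0
    for f in product(range(n), repeat=n):
        if all(tuple(code[f[k]][j] for k in range(n)) in allowed
               for j in range(n)):
            count += 1
    return count

for n in (5, 6, 7):           # n = 7 enumerates 7^7 maps, so it is slow
    print(n, nrh_count(n, 2))
# expected: 15 (= 3n, n odd), 42 (= 3n + 4*(n/2)!, n even), 21 (= 3n)
\end{verbatim}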
Theorems \ref{thnp2} and \ref{thnpk2} gave us a hint that $ \operatorname{GCD}(p,n) $ could play a vital role in deciding the count of $ \nrh{\mathcal{C}}. $ With brute force we found that for a circulant code with support $ p=3 $ on $ n=3k+1 $ and $ n=3k+2 $ neurons, the number of non BPM and non UM maps that are in $ \nrh{\mathcal{C}} $ is zero. This led us to think that the number of non BPM and non UM maps in $ \nrh{\mathcal{C}} $ is zero whenever $ \operatorname{GCD}(p,n)=1. $ We will first prove a lemma that will be required in the proof.
\begin{lemma}
Let $ \mathcal{C} $ be a circulant code with support $ p>1$ on $ n $ neurons with $ \operatorname{GCD}(p,n)=1$ and $ \phi\in\rh{\mathcal{C}}$ be a non BPM non UM map. If $ \norm{\phi(x_i)}\in\{0,p,n\} $ for all $ i\in[n] $ then $ \norm{\phi(x_i)}=p $ for all $ i\in[n]. $ \label{lemncnrh}
\end{lemma}
\begin{proof}
Let $ \{a_i\}_{i\in[n]} $ be the vectors that represent $ \phi. $ We will first show that $ \norm{\phi(x_i)} =n$ is not possible for any $ i\in[n]. $ Then we will further show $ \phi(x_i)\not=0 $ for all $ i\in [n]. $
Suppose there exists $ j\in[n] $ such that $ \norm{\phi(x_j)}=n. $ Without loss of generality let us assume $ j=1. $ As $ x_1=\rho_1+\dots+\rho_p $ we get $ n=\norm{\phi(x_1)}=|a_1|+|a_2|+\dots+|a_p|. $ This implies for all $ k\in[n]\backslash[p] $ we have $ |a_k|=0. $ Let $ l\in[p] $ be the smallest index such that $ |a_l^{}|\not=0. $ Hence we get $ n=\norm{\phi(x_1)}=|a_l|+\dots+|a_p|. $ \\Consider
\begin{align*}
\norm{\phi(x_1^{})}-\norm{\phi(x_l^{})}=&\Big( |a_1|+\dots+|a_{l}| +\dots+ |a_{p}|\Big) -\left( |a_{l}|+\dots+|a_p|+|a_{\mdsum{p+1}}|+\dots+|a_{\mdsum{l+p-1}}|\right) \\
=& |a_1|+\dots+|a_{l-1}|-(|a_{\mdsum{p+1}}|+\dots+|a_{\mdsum{l+p-1}}|)\\
=& |a_1|+\dots+|a_{l-1}| \qquad (\text{Since } |a_k|=0 \text{ for all } k\in[n]\backslash[p] ) \\
=& 0 \qquad \qquad (\text{Since } l \text{ is the smallest integer such that } |a_l|\not=0 ).
\end{align*} So, we get $ \norm{\phi(x_l^{})}=n. $ Next, we see that $ n= \norm{\phi(x_1)}=|a_l|+\dots+|a_p|.$ This gives us that $ |a_{l+1}|+\dots+|a_p|<n, $ as $ |a_l|\not=0. $ Moreover we get $ 0<|a_{l+1}|+\dots+|a_p|<n; $ if not, we get $ |a_l|=n, $ which is not possible as $ \phi $ is not a UM map. \\Consider
\begin{align*}
\norm{\phi(x_{l+1})}=&|a_{l+1}|+\dots+|a_p|+ \dots+|a_{\mdsum{l+p}}|\\
=&|a_{l+1}|+\dots+|a_p| \qquad (\text{Since } |a_k|=0 \text{ for all } k\in[n]\backslash[p] )\\
\implies& 0< \norm{\phi(x_{l+1})}<n \quad (\text{Since } 0<|a_{l+1}|+\dots+|a_p|<n)
\end{align*}
And by hypothesis $ \norm{\phi(x_{l+1})}\in\{0,p,n\}. $ Hence we get $ \norm{\phi(x_{\mdsum{l+1}^{}})}=p $ and $ n-p=\norm{\phi(x_l^{})}-\norm{\phi(x_{\mdsum{l+1}^{}})}=|a_l|-|a_{ \mdsum{l+p}}|. $ Now, if $ \mdsum{l+p}\in [n]\backslash[p], $ then $|a_{ \mdsum{l+p}}|=0 $ and we get $ |a_l|=n-p. $ Or if $ \mdsum{l+p}\in [p], $ we observe that $ \mdsum{l+p}=l+p-n<l $ as $ p<n. $ Thus if $ |a_{ \mdsum{l+p}}|\not=0, $ it contradicts the minimality of $ l. $ So we end up getting $ |a_l|=n-p $ in either case. Let $ m\in[p]\backslash[l] $ be the smallest index such that $ |a_m|\not=0. $ Note that as $ |a_l|=n-p $ and $ \sum_{i=1}^{p}|a_i|=n, $ we will have $ 0<|a_m|\leq p. $ Suppose $ |a_m|=k <p; $ then $ \norm{\phi(x_{\mdsum{m+1}}^{})}=|a_{m+1}|+\dots+|a_p|+\dots+|a_{\mdsum{m+p}}|=n- \sum_{i=1}^{m}|a_i|=n-(n-p+k)=p-k\not\in\{0,p,n\}. $ Therefore it must be that $ |a_m|=p. $ This also results in $ |a_i|=0 $ for all $ i\in[n]\backslash\{l,m\}. $
Consider $ x_{\mdsum{m+n-p}}^{}=\rho_{\mdsum{m+n-p}} +\dots+\rho_1+\dots+\rho_l+\dots+\rho_{\mdsum{m+n-1}}, $
so we get $ \norm{\phi(x_{\mdsum{m+n-p}}^{})}=|a_{\mdsum{m+n-p}}|+\dots+|a_l|+\dots+|a_{\mdsum{m+n-1}}|=|a_l|=n-p. $ And for $ \norm{\phi(x_{\mdsum{m+n-p}}^{})}\in\{0,p,n\} $ we must have $ n=p $ or $2p, $ or $ p=0. $ But as $ \operatorname{GCD}(p,n)=1 $ and $ p>1, $ none of these is possible. Therefore we get a contradiction. Hence $ \norm{\phi(x_i)}\not=n $ for any $ i\in[n]. $
Suppose, if possible, there exists $ j\in[n] $ such that $ \norm{\phi(x_j)}=0. $ We also know there exists $ k\in[n-1] $ such that $ \norm{\phi(x_{\mdsum{j+k}})}\not=0. $ Thus $ \norm{\phi(x_{\mdsum{j+k}})}=p, $ as it cannot be $ n $ by the previous paragraph. Choose the smallest $ k $ such that $ \norm{\phi(x_{\mdsum{j+k}})}=p, $ i.e. $ \norm{\phi(x_{\mdsum{j+m}})}=0 $ for all $ m<k. $ Also, as $ x_{\mdsum{j+k-1}} = \sum_{m=0}^{p-1} \rho_{\mdsum{j+k-1+m}}, $ we have $0=\norm{\phi(x_{\mdsum{j+k-1}})}=\sum_{m=0}^{p-1}|a_{\mdsum{j+k-1+m}}|.$ Therefore we get $ |a_{\mdsum{j+k-1+m}}|=0 $ for all $ m\in \{0\}\cup[p-1]. $ Consider $ x_{\mdsum{j+k}}=\rho_{\mdsum{j+k}}+\dots+\rho_{\mdsum{j+k+p-1}}= x_{\mdsum{j+k-1}}-\rho_{\mdsum{j+k-1}}+\rho_{\mdsum{j+k+p-1}}. $ So we get $ p=\norm{\phi({x_{\mdsum{j+k}}})}=\norm{\phi(x_{\mdsum{j+k-1}})}- |a_{\mdsum{j+k-1}}| + |a_{\mdsum{j+k+p-1}}|=|a_{\mdsum{j+k+p-1}}|$. Next, we choose the smallest $ l>0 $ such that $ \norm{\phi(x_{\mdsum{j+k+l}})}=p, $ and repeating the process as above we get $ |a_{\mdsum{j+k+l+p-1}}|=p, $ while the other $ |a_i| $'s corresponding to the $ \rho_i $'s that are in the expression of $ x_{\mdsum{j+k+l}} $ are 0. Therefore for all $ i\in[n] $ we get $ |a_i|\in\{0,p\}. $ As $ \sum_{i=1}^{n}|a_i|=n $ and $ \sum_{i=1}^{n}|a_i|=dp $ for some $ d, $ this implies $ p|n $ and $ \operatorname{GCD}(p,n)=p\not =1. $ This is a contradiction to our hypothesis that $ \operatorname{GCD}(p,n)=1. $ Hence the result.
\end{proof}
\noindent In other words, Lemma \ref{lemncnrh} says that if the map $ \phi $ satisfies the necessary condition to be a neural ring homomorphism, then $ \norm{\phi(x_i)}=p $ for all $ i\in[n]. $
\begin{obs}\label{obsgcd}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with $ p>1 $ and $ \operatorname{GCD}(n,p)=1. $ Also, let $ n=pd+r, $ where $ 0<r<p. $ Suppose $ \phi\in\rh{\mathcal{C}}$ is a non BPM non UM map satisfying the necessary condition to be in $ \nrh{\mathcal{C}}, $ with $ \{a_i\}_{i\in[n]} $ as the vectors that represent it. Relabel the indices in the set $ \{a_i\}_{i\in[n]} $ and write them as $ \{\beta_{11},\dots,\beta_{1p},\beta_{21},\dots,\beta_{dp}, \beta_{(d+1) 1},\dots, \beta_{(d+1) r}\}. $ As $ \phi $ satisfies the necessary condition to be in $ \nrh{\mathcal{C}}, $ by Lemma \ref{lemncnrh} for $ i\in[n] $ we have $ \norm{\phi(x_i)}=p. $ So we have $ p=\norm{\phi(x_1)}=|\beta_{11}|+ |\beta_{12}|+\dots+ |\beta_{1p}|. $ Similarly we get $ \sum_{j=1}^{p}|\beta_{ij}|=p $ for all $ i\in[d]. $ We also have $ 0=\norm{\phi(x_1)}-\norm{\phi(x_2)}=|\beta_{11}|-|\beta_{21}|. $ Therefore we get $ |\beta_{11}|=|\beta_{21}|. $ Now consider $ 0=\norm{\phi(x_1)}-\norm{\phi(x_3)}=|\beta_{11}|+|\beta_{12}|-|\beta_{21}|-|\beta_{22}| $. And we get $ |\beta_{12}|=|\beta_{22}|. $ Further, considering $ 0=\norm{\phi(x_1)}-\norm{\phi(x_{j+1})} $ we get $ |\beta_{1j}|=|\beta_{2j}|$ for all $ j\in[p]. $ Extending the result for all $i\in[d] $ we see that $ |\beta_{ij}|=|\beta_{1j}| $ for all $ j\in[p]. $ This is also true when $ i=d+1, $ i.e. $ |\beta_{(d+1)j}|=|\beta_{1j}| $ for all $ j\in[r]. $ Next, note that $n=\sum_{i=1}^{n}|a_i|=\sum_{i=1}^{d}\sum_{j=1}^{p}|\beta_{ij}|+\sum_{j=1}^{r}|\beta_{(d+1)j}|=pd+\sum_{j=1}^{r}|\beta_{(d+1)j}|.$ This implies $ \sum_{j=1}^{r}|\beta_{(d+1)j}|=n-pd=r. $ Thus we have $ \sum_{j=1}^{r}|\beta_{ij}|=r $ for all $ i\in[d+1]. $ Consider $ 0=\norm{\phi(x_n)}-\norm{\phi(x_1)}=|\beta_{(d+1)r}|-|\beta_{1p}|. $ So we get $ |\beta_{(d+1)r}|=|\beta_{1p}|. $ Similarly, when we consider $ 0= \norm{\phi(x_{n-j})}-\norm{\phi(x_1)}, $ we get $ |\beta_{(d+1)(r-j)}|=|\beta_{1(p-j)}| $ for $ j\in\{0\}\cup[r-1]. $
\end{obs}
\begin{proposition}\label{propgcd1}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ 2<p<n-1 $. If $ \operatorname{GCD}(p,n)=1 $ and $ n=pd+1 $ then $ |\nrh{\mathcal{C}}|=3n.$
\end{proposition}
\begin{proof}
Let $ \phi\in\rh{\mathcal{C}} $ be a non BPM and non UM map with $ \{a_{i}\}_{i\in [n]} $ as its representing vectors. Also, label the vectors $ \{a_i\}_{i\in[n]} $ as in Observation \ref{obsgcd} and rewrite them as $ \{\beta_{ij}\}_{i\in[d],j\in[p]} \cup\{\beta_{(d+1)j}\}_{j\in[r]}. $ Suppose, if possible, $ \phi\in\nrh{\mathcal{C}}; $ then by Lemma \ref{lemncnrh} for $ i\in[n] $ we get $ \norm{\phi(x_i)}=p. $ By Observation \ref{obsgcd} we get the following remarks: \begin{enumerate}
\item $ \sum_{j=1}^{r}|\beta_{(d+1)j}|=r, $ and as $ r=1 $ we have $ |\beta_{(d+1)1}|=1. $
\item For all $ i\in[d] $ we have $ |\beta_{i1}|=|\beta_{(d+1)1}|=1. $
\item Also, for all $ i\in[d] $ we have $ |\beta_{ip}|=|\beta_{(d+1)1}|=1. $
\end{enumerate}
Consider $ p=\norm{\phi(x_n)}=|\beta_{(d+1)1}|+|\beta_{11}|+\dots+|\beta_{1(p-1)}| =1+1+\sum_{j=2}^{p-1}|\beta_{1j}|.$ This implies $ \sum_{j=2}^{p-1}|\beta_{1j}|=p-2. $ Also $ p=\norm{\phi(x_{n-1})}=|\beta_{dp}|+|\beta_{(d+1)1}|+|\beta_{11}|+\sum_{j=2}^{p-1}|\beta_{1j}|-|\beta_{1(p-1)}|=1+1+1+p-2-|\beta_{1(p-1)}|. $ This implies $ |\beta_{1(p-1)}|=1. $ And by Observation \ref{obsgcd} we get $ |\beta_{i{(p-1)}}|=1 $ for all $ i\in[d]. $ Note that at this juncture we have already proved the result if $ p=3, $ as we get that $ |a_i|=1$ for all $ i\in[n],$ which is a contradiction to the fact that $ \phi $ is a non BPM and non UM map. If $ p>3, $ we see from $ \norm{\phi(x_{n-1})} $ that $ \sum_{j=2}^{p-2}|\beta_{1j}|=p-3, $ and next we get $ |\beta_{1(p-2)}|=1. $ Iteratively we get $|a_i|=1 $ for all $ i\in[n], $ which as discussed above cannot happen. Therefore $ \phi\not\in\nrh{\mathcal{C}}. $ This implies that none of the non BPM and non UM maps are in $ \nrh{\mathcal{C}}. $ By Theorem \ref{thnrhbpm} we already know the count of BPMs that are in $ \nrh{\mathcal{C}} $ to be $ 2n, $ and finally adding the $ n $ unity maps we get the result.
\end{proof}
\begin{proposition}\label{propgcd2}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ 2<p<n-1 $. If $ \operatorname{GCD}(p,n)=1 $ and $ n=pd+2 $ then $ |\nrh{\mathcal{C}}|=3n.$
\end{proposition}
\begin{proof}
Let $ \phi\in\rh{\mathcal{C}} $ be a non BPM and non UM map with $ \{a_{i}\}_{i\in [n]} $ as its representing vectors. Also, label the vectors $ \{a_i\}_{i\in[n]} $ as in Observation \ref{obsgcd} and rewrite them as $ \{\beta_{ij}\}_{i\in[d],j\in[p]} \cup\{\beta_{(d+1)j}\}_{j\in[r]}. $ Suppose, if possible, $ \phi\in\nrh{\mathcal{C}}. $ Then by Lemma \ref{lemncnrh} for $ i\in[n] $ we get $ \norm{\phi(x_i)}=p. $ By Observation \ref{obsgcd} we get
\begin{enumerate}
\item $ \sum_{j=1}^{2}|\beta_{(d+1)j}|=2. $ Therefore we can have $ |\beta_{(d+1)1}|=|\beta_{(d+1)2}|=1$ or $ |\beta_{(d+1)1}|=2,|\beta_{(d+1)2}|=0 $ or $ |\beta_{(d+1)1}|=0,|\beta_{(d+1)2}|=2.$
\item For all $ i\in[d] $ we have $ |\beta_{i1}|=|\beta_{(d+1)1}| $ and $|\beta_{i2}|=|\beta_{(d+1)2}|. $
\item Also, for all $ i\in[d] $ we have $ |\beta_{ip}|=|\beta_{(d+1)2}|$ and $|\beta_{i(p-1)}|=|\beta_{(d+1)1}|.$
\end{enumerate}
Consider $ 0=\norm{\phi(x_{n-2})}-\norm{\phi(x_{n-1})}=|\beta_{dp}|+|\beta_{(d+1)1}|+|\beta_{(d+1)2}|+\sum_{j=1}^{p-3}|\beta_{1j}|-\Big(|\beta_{(d+1)1}|+|\beta_{(d+1)2}|+\sum_{j=1}^{p-3}|\beta_{1j}|+|\beta_{1(p-2)}|\Big)=|\beta_{dp}|-|\beta_{1(p-2)}|. $ This implies $ |\beta_{1(p-2)}|=|\beta_{dp}|=|\beta_{(d+1)2}|. $ Similarly we get $ |\beta_{1(p-3)}|=|\beta_{d(p-1)}|=|\beta_{(d+1)1}|. $
\begin{caseof}
\casea{$ |\beta_{(d+1)1}|=|\beta_{(d+1)2}|=1 $}{In this case we get $ |\beta_{1(p-2)}|=|\beta_{(d+1)2}|=1 $ and $ |\beta_{1(p-3)}|=|\beta_{(d+1)1}|=1.$
On extending we get $ |\beta_{1j}|=1 $ for all $ j\in[p]. $ Therefore by Observation \ref{obsgcd}, for all $ i\in[d] $ and $ j\in[p] $ we get $ |\beta_{ij}|=1. $ And as $ |\beta_{(d+1)1}|=|\beta_{(d+1)2}|=1 $ we get $ |a_i|=1 $ for all $ i\in[n]. $ This implies $ \phi $ is a BPM, which is a contradiction, as we have chosen $ \phi $ to be a non BPM and non UM map. Hence this case cannot occur.}
\casea{$ |\beta_{(d+1)1}|=2,|\beta_{(d+1)2}|=0 $ or $ |\beta_{(d+1)1}|=0,|\beta_{(d+1)2}|=2.$}{We will work with $ |\beta_{(d+1)1}|=2,|\beta_{(d+1)2}|=0 $ and the other case will be very similar to this. In this case we get $ |\beta_{1(p-2)}|=|\beta_{(d+1)2}|= 0$ and $ |\beta_{1(p-3)}|=|\beta_{(d+1)1}|=2.$
On extending we get $ |\beta_{1j}|\in\{0,2\} $ for all $ j\in[p]. $ And $ p=\norm{\phi(x_1)}=\sum_{j=1}^{p} |\beta_{1j}| =2k$ for some $ k. $ This implies $ 2|p $ and in turn $ 2|\operatorname{GCD}(p,n). $ Therefore we get $ \operatorname{GCD}(p,n)\geq 2 $ which is a contradiction. Hence this case cannot occur. }
\end{caseof}
\noindent Thus we get $ \phi\notin \nrh{\mathcal{C}}. $ By Theorem \ref{thnrhbpm} we already know the count of BPM that are in $ \nrh{\mathcal{C}} $ to be $ 2n. $ And finally adding the $ n $ unity maps we get the result.
\end{proof}
Combining the results of Propositions \ref{propgcd1} and \ref{propgcd2} for a circulant code $ \mathcal{C} $ with support $ 2<p<n-1 $ we get that $ |\nrh{\mathcal{C}}|=3n $ for $ n=pd+r $ where $ r\in\{1,2\} $ and $ \operatorname{GCD}(p,n)=1. $
Our next aim is to generalize the above Propositions \ref{propgcd1} and \ref{propgcd2} for any $ r$ such that $ n=pd+r$ and $ 0<r<p $. At this moment we strongly believe the following conjecture.
\begin{conjecture}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ 2<p<n-1 $. If $ \operatorname{GCD}(p,n)=1 $ and $ n=pd+r $ with $ 2<r<p, $ then $ |\nrh{\mathcal{C}}|=3n.$
\end{conjecture}
\noindent We also note that for a circulant code on $ n $ neurons with support $ p=3, $ if $ \operatorname{GCD}(n,3)=1 $ then $ n=3k+1 $ or $ n=3l+2 $ for some $ k,l. $ So, if $ n>4, $ Propositions \ref{propgcd1} and \ref{propgcd2} give us that $ |\nrh{\mathcal{C}}| =3n.$ And if $ n=4, $ as $ p=3=n-1, $ we get $ |\nrh{\mathcal{C}}|=n!+n=28 $ by Proposition \ref{propcirnrh}. Note that when $ p=3 $ we are now only left with the case $ n=3d. $ By brute force we counted that $ |\nrh{\mathcal{C}}|=270, $ where $ \mathcal{C} $ is a circulant code on $ n=6 $ neurons with support $ p=3. $ In the next theorem we work with $ n=3d, $ where $ d>2. $
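\noindent The brute-force values quoted above were reproduced with the same enumeration as in the earlier sketch; for completeness, a standalone version specialized to small cases reads as follows (the helper \texttt{nrh\_count} is again our own).
\begin{verbatim}
from itertools import product

def nrh_count(n, p):
    code = [tuple(1 if (j - i) % n < p else 0 for j in range(n))
            for i in range(n)]
    allowed = {tuple(c[j] for c in code) for j in range(n)}
    allowed |= {(0,) * n, (1,) * n}
    return sum(all(tuple(code[f[k]][j] for k in range(n)) in allowed
                   for j in range(n))
               for f in product(range(n), repeat=n))

print(nrh_count(5, 3))  # expected 15 = 3n, since 5 = 3*1 + 2, GCD(3,5) = 1
print(nrh_count(6, 3))  # expected 270, the special case n = 6 quoted above
\end{verbatim}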
\begin{theorem}
Let $ \mathcal{C} $ be a circulant code on $ n$ neurons with support $ p=3. $ If $ n=3d, $ where $ d>2 $ then $ |\nrh{\mathcal{C}}|=3n+3^2\left( \dfrac{n}{3} \right)!+12n. $ \label{thnp3k}
\end{theorem}
\begin{proof}
Let us first count the total number of non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $ Let $ \phi $ be a non BPM and non UM map with $ \{a_i\}_{i\in[n]} $ as its representing vectors. As observed in the proof of Theorem \ref{thnpk2}, we have 3 sub-cases, which correspond to $ |a_1|=3 $, $ |a_2|=3 $ and $ |a_3|=3. $ Also, as there is another partition of $ 3 $ which is not all ones (namely $ 3=2+1 $), we get more cases, which correspond to $ |a_1|=2,|a_2|=1,|a_3|=0 $ and their possible permutations. Thus in total we have these two broader classes of cases. Let us fix one type of choice and count how many neural ring homomorphisms it corresponds to. By the choice of all $ |a_i| $ we see that for all $i\in [n],\ \norm{\phi(x_i)}=3. $ This implies for all $ i\in [n] $ there exists $ j\in[n] $ such that $ \phi(x_i^{})=x_j^{}. $
\begin{caseof}
\casea{$ (|a_1|,|a_2|,|a_3|)=(3,0,0) $ or $ (|a_1|,|a_2|,|a_3|)= (0,3,0) $ or $ (|a_1|,|a_2|, |a_3|)=(0,0,3) $}{\textbf{Sub-case a:} $ (|a_1|,|a_2|,|a_3|)=(3,0,0). $ \\ This case is similar to Case 1 in the proof of Theorem \ref{thnpk2}. Firstly, it is clear that $ \phi(\rho_1) $ has $ n $ choices and $ \phi(\rho_2)=\phi(\rho_3)=0. $ For $ \phi(\rho_4) $ we have to choose from all the triplets that are left, which gives $ \left( \frac{n}{3}-1\right) $ choices. Completing the process, we get the total number of maps contributed to $ \nrh{\mathcal{C}} $ by this sub-case as $ n\times\left( \frac{n}{3}-1\right)\times\left( \frac{n}{3}-2\right)\times\dots\times1=3\left(\frac{n}{3}\right)! $}. \\
Sub-cases b and c are almost the same as sub-case a. Hence Case 1 gives us $ 3^2\left(\frac{n}{3}\right)! $ non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $
\casea{ For some $ i,j\in[3],\ i\not =j, $ let $ |a_i|=2 $ and $ |a_j |=1$}{Then by permuting $ i,j\in[3] $ we get 6 sub-cases.\\\textbf{Sub-case a: } $ (|a_1|,|a_2|,|a_3|)=(2,1,0) $. } \\ In this sub-case, firstly we get that $ \phi(\rho_1) $ can be any consecutive sum of two basis elements, and so it has $ n $ choices. Let $ \phi(\rho_1)=\rho_l+\rho_{\mdsum{l+1}}. $ Next, as $ \phi(x_1)\in\{x_k\}, $ it is ensured that $ \phi(\rho_2) $ can either be $ \rho_{\mdsum{l+n-1}} $ or $ \rho_{\mdsum{l+2}}. $ We already know that $ \phi(\rho_3)=0. $ Further, we observe that this process fixes a unique choice for the remaining $ \phi(\rho_k) $ for $ k\in[n]\backslash[3]. $ Hence this sub-case gives us $ 2n $ non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $
\noindent The remaining 5 sub-cases under Case 2 are similar to sub-case a. Hence in total Case 2 gives us $ 12n$ non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $
\end{caseof}
\noindent As described in the previous proofs, we get $ 2n $ BPM maps and $ n $ unity maps that are in $ \nrh{\mathcal{C}}, $ i.e. $ 3n $ further maps. Hence the result.
\end{proof}
\noindent Looking at the pattern from Theorems \ref{thnpk2} and \ref{thnp3k} we conjecture the following.
\begin{conjecture}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ p. $ If $ p>2 $ is prime and $ p|n $ then $ |\nrh{\mathcal{C}}|=3n+p^2\left( \dfrac{n}{p}\right)!+p(p+1)n. $
\end{conjecture}
\begin{figure}[H]
\begin{tikzpicture}[scale=6]
\draw[black] (1,0.8) rectangle (1.35,0.95);
\draw (1,0.87)node[right] {$ |\nrh{\mathcal{C}}|$};
\draw (0,0pt)node[above,align=left] {$ \mathcal{C}:$ a circulant\\ code on $ n $ neurons};
\draw[black] (-0.25,-0.02) rectangle (0.24,0.18);
\draw[black] (0.56,0.82) rectangle (0.69,0.91);
\draw (0.63,0.9)node[below] {$ p$};
\draw[->, thick, black] (0.24,0.09) -- (0.4,0.09);
\draw[-, thick, black] (0.4,0.7) -- (0.4,-0.4);
\draw[->, thick, black] (0.4,0.7) -- (0.6,0.7);
\draw (0.6,0.7)node[right] {$ 1$};
\draw[->, thick, black] (0.4,0.5) -- (0.6,0.5);
\draw (0.6,0.5)node[right] {$ 2$};
\draw[->, thick, black] (0.4,0.15) -- (0.6,0.15);
\draw (0.6,0.15)node[right] {$ 3$};
\draw (0.61,0.03)node[right] {$ \vdots$};
\draw (0.61,-0.25)node[right] {$ \vdots$};
\draw[->, thick, black] (0.4,-0.15) -- (0.6,-0.15);
\draw(0.6,-0.15)node[right]{$ p $};
\draw[->, thick, black] (0.4,-0.4) -- (0.6,-0.4);
\draw (0.6,-0.4)node[right] {$ n-1$};
\draw[->, thick, black] (0.66,0.7) -- (0.98,0.7);
\draw (1.03,0.7)node[right] {$ n!+n$};
\draw[->, thick, black] (0.66,0.5) -- (0.98,0.5);
\draw (0.98,0.5)node[right] {$ \begin{cases}
3n &n=2k+1,\ k>1.\\
3n+2^2\left(\frac{n}{2}\right)! & n=2k,\ k>2. \\36 & n=4.
\end{cases}$};
\draw[->, thick, black] (0.66,0.15) -- (0.98,0.15);
\draw (0.98,0.15)node[right] {$\begin{cases}
3n & n=3k+2,\ k>0. \\ 3n& n=3k+1,\ k>1.\\
15n+3^2\left( \frac{n}{3} \right)! & n=3k ,\ k>2.\\
270 & n=6.
\end{cases} $};
\draw[->, thick, black] (0.66,-0.15) -- (0.98,-0.15);
\draw (0.98,-0.15)node[right] {$ \begin{cases}
3n & \operatorname{GCD}(p,n)=1.\\
3n+p^2\left( \frac{n}{p}\right)!+(p^2+p)n\quad
& p|n.\end{cases}$};
\draw (.65,-0.12)node[right] {\textbf{\small Conjecture}};
\draw[->, thick, black] (0.76,-0.4) -- (0.98,-0.4);
\draw (1.03,-0.4)node[right] {$ n!+n$};
\end{tikzpicture}
\caption{The above figure represents the count of neural ring endomorphisms for a circulant code on $ n $ neurons with support $ p $.}
\end{figure}
\noindent We are still working on the remaining cases for $ p>3. $
\section{Convex codes in dimension 1 and 2}
Megan K. Franke and Samuel Muthiah \cite{franke2018every} worked on convex codes and wanted to give an explicit relation between convex and open convex codes. They gave the Conjectures \ref{conMFSM1} and \ref{conMFSM2} stated below:
\begin{conjecture}\cite[Conjecture 1]{franke2018every} \label{conMFSM1}
A code $ \mathcal{C} $ has minimal convex embedding dimension 1 if and only if it has minimal open convex embedding dimension 1.
\end{conjecture}
\begin{conjecture}\cite[Conjecture 2]{franke2018every}
Suppose $ \mathcal{C} $ is open convex and has a minimal open convex embedding dimension of 2. Then the minimal convex embedding dimension of $ \mathcal{C} $ is 2. \label{conMFSM2}
\end{conjecture}
We found that the code $ \mathcal{C}=\{12,23\} $ is a counterexample to Conjecture \ref{conMFSM1}. The main Theorem \ref{mainth2} of this section is a modification of Conjecture \ref{conMFSM1}, and the theorem gives a relationship between open convex and closed convex codes in dimension 1. Later we worked on Conjecture \ref{conMFSM2}. This conjecture seems to hold true. We do not yet have a proof of it, but we have a class of examples which satisfy the conjecture. At the end of this section (Remark \ref{remarksec3}) we produce them.
\subsection{Relationship between open convex and closed convex codes}
In this section, we assume $ \forall x,y\in \mathbb{R},\ d(x,y) =\vert x -y \vert,$ the standard metric on $ \mathbb{R}. $ Let $ \mathcal{C} $ be a code on $ n $ neurons which has an open convex realization $ \mathcal{U} $ in dimension 1. Let $ \mathcal{U}=\{I_1,I_2,\dots,I_n\} $ be a set of open intervals in $ \mathbb{R} $ such that $ \mathcal{C}(\mathcal{U})= \mathcal{C} $. For each $j,$ let us assume $ \ I_j=(a_j,b_j), $ where we call $ a_j $ the initial point and $ b_j $ the final point of $ I_j, $ and $ a_j\not=b_j $.
Denote $ \epsilon_{u}= \displaystyle\min_{1\leq i,j\leq n} d(b_i,a_j), $ as the epsilon distance of the realization $ \mathcal{U}. $ We further use the ordered pair $ (\mathcal{U},\epsilon_{u}) $ whenever we have a realization $ \mathcal{U} $ with its epsilon distance $ \epsilon_{u} $ for the sake of convenience.
\begin{proposition} \label{openepi}
Let $ (\mathcal{U},\epsilon_{u}) $ (with $ \epsilon_{u} $ possibly zero) be any open convex realization of a code $ \mathcal{C} $ in dimension 1. Then there exists another open convex realization $ (\mathcal{U}',\epsilon_{u'}) $ of $ \mathcal{C} $ such that $ \epsilon_{u'}> 0.$
\end{proposition}
\begin{proof}
\begin{caseof}
\casea{$ \epsilon_u > 0 $}{In this case there is nothing to prove, as $ (\mathcal{U},\epsilon_u) $ itself works.}
\casea{$ \epsilon_u =0 $}{In this case, as $ \epsilon_u=0,$ there exist some $ i,j \in [n] $ with $ i\not= j $ such that $ d(b_i,a_j)=0. $ Let these $ i,j $'s be enumerated as $ i_k, j_k$ for $ \ k\in [n-1] $ whenever $ d(b_{i_k},a_{j_k})=0. $ Fix a $ k $ and choose $ 2\cdot\delta= {b_{i_k}- \displaystyle\max_{a_l,b_l\in [a_{i_k},b_{i_k})}\{a_l,b_l\}} $. } Then let $ I'_{i_k} = (a_{i_k}',b_{i_k}'), $ where $ a'_{i_k}=a_{i_k} $ and $ b'_{i_k}=b_{i_k}- \delta. $ Do the same procedure for all such $ k $'s and obtain a new set of open intervals $\mathcal{U}' =\{I'_1,I'_2,\dots,I'_n\}$ with the remaining intervals kept unchanged. It is clear that $ \epsilon_{u'} > 0 $ for $ \mathcal{U}'. $
We can see that the atoms which were singleton sets have become intervals of length $ \delta. $ Moreover, no new atoms are added, nor have the existing atoms been deleted. Hence we have not added any new codeword to $ \mathcal{C}(\mathcal{U}'). $ So we have $ \mathcal{C}(\mathcal{U})=\mathcal{C}(\mathcal{U}')=\mathcal{C}. $ Therefore we have an open convex realization of $ \mathcal{C} $ whose corresponding epsilon distance is greater than zero.
\end{caseof}
\vspace{-0.7cm}
\end{proof}
Now we observe that Proposition \ref{openepi} guarantees an open convex realization with $ \epsilon > 0 $ whenever the code is open convex in dimension 1. Therefore the proposition can be restated as follows:
\begin{remark}
Let $ \mathcal{C} $ be a code which is open convex with a realization in $ \mathbb{R} $ (i.e. dimension 1). Then we can always find an open convex realization $ (\mathcal{U},\epsilon) $ of $ \mathcal{C} $ such that $ \epsilon >0. $
\end{remark}
When a code $ \mathcal{C} $ is closed convex in dimension 1, the sets in its realization $ \mathcal{U} $ can also be singletons, as these are closed in $ \mathbb{R}. $ We will now show that if there are singleton sets in $ \mathcal{U}, $ then we can also find another realization $ \mathcal{U}' $ of $ \mathcal{C} $ in which all the sets are closed intervals of $ \mathbb{R}. $ For us, when we say closed intervals we do not include singletons.
\begin{lemma} \label{lemclo}
Let $ \mathcal{U}= \{I_i\}_{i=1}^n $ be any closed convex realization of $ \mathcal{C} $ in dimension 1, possibly with some $ I_j $'s as singletons for $ j\in [n]. $ Then there exists a closed convex realization $\mathcal{U}'= \{I_i'\}_{i=1}^n $ of $ \mathcal{C} $ in which every $ I'_i $ is a closed interval.
\end{lemma}
\begin{proof}
For some $ j\in [n], $ let $ I_j =\{x\} $ be a singleton set in $ \mathcal{U}. $ We will try to give another realization $ \mathcal{U}' $ with all closed intervals. We will currently assume that there is only one such set $ I_j\in \mathcal{U}. $ If there are more such sets, just repeat the following procedure for all those sets separately.
\begin{caseof}
\casea{$ x $ does not lie on boundary of any $ I_k\ (k\not = j,\ 1\leq k \leq n ) $ }{Let $ I_i=[a_i,b_i] $ for $ i\not=j, $ and choose $ 2\cdot\delta= \min\{ d(a_i,x), d(x,b_i)\ \vert\ i\not = j,\ 1\leq i \leq n \}. $ Then let $ I'_j = \left[x- {\delta}, x+ {\delta}\right]. $ Let $ \mathcal{U}' $ have the sets $ I'_i=I_i $ for $ i\not= j,$ together with $ I_j'. $ This new realization has no new codewords, as $ I'_j$ does not intersect any $ I'_i\ (i\not=j); $ also, we have not deleted any old atoms. }
\casea{$ x $ lies on boundary of a few $ I_k$'s}{There can arise two sub-cases here:
\begin{subcaseof}
\subcase{Intersection of all such $ I_k $'s is a closed interval}{In this case, since the intersection is a closed interval, $ x $ is either the initial point or the final point of all such $ I_k $'s, and therefore these intervals are nested. We assume that $ x $ is the final point of all the $ I_k $'s, i.e. $ b_k=x $ for all such $ k $. The other case, in which $ x=a_k $ for all such $ k, $ will be mutatis mutandis. Let $ 2\cdot\delta ={b_{k}-\displaystyle \max_{a_l,b_l\in (-\infty,x)}\{a_l,b_l\}} $, and define $ I'_j=[x-\delta, x]. $ Also, let $ I_i'=I_i $ for all $ i\not=j.$ With this new collection $ \mathcal{U}' =\{I_i'\}_{i=1}^ n $ one can check that we have not added or deleted any codewords.}
\subcase{Intersection of all such $ I_k $'s is the point $ x $}{In this case, let such $ I_k $'s be $ I_{k_1},I_{k_2},\dots,I_{k_r} (1\leq r\leq n). $ There may be a few $ I_k $'s such that $ x=a_{k_m} $ and a few in which $ x=b_{k_p}\ (m\not = p). $ Label them as $ I_{a_k} $'s and $ I_{b_k} $'s respectively. We will try to create a closed interval using these, so that it becomes easier to replace $ I_j $ by a closed interval. Let $ 2\cdot\delta ={b_{k_p}-\displaystyle \max_{a_l,b_l\in (-\infty,x)}\{a_l,b_l\}}. $ Consider all the $ I_{a_k} $'s and change them to $ I'_{a_k} = [a'_k,b_k]$, where $ a'_k= a_k-\delta. $ Then choose $ I'_j= [a'_k,x]$. With this new collection $ \mathcal{U}' =\{I_i'\}_{i=1}^ n $ one can check that we have not added or deleted any codewords.}
\end{subcaseof}}
\end{caseof}
\vspace{-0.7cm}
Hence the proof.
\vspace{-0.2cm}
\end{proof}
\begin{proposition} \label{closepi}
Given any closed convex realization $ (\mathcal{U},\epsilon_u) $ of a code $ \mathcal{C} $ (with $ \epsilon_{u} $ possibly zero), there exists another closed convex realization $ (\mathcal{U}',\epsilon_{u'}) $ of $ \mathcal{C} $ such that $ \epsilon_{u'}> 0.$
\end{proposition}
\begin{proof}
\begin{caseof}
\casea{Every $ I_j $ in $ \mathcal{U} $ is a closed interval}{ The proof goes similarly to that of Proposition \ref{openepi}, except we change $ I'_{j_k}= [a'_{j_k},b'_{j_k}], $ where $ a'_{j_k}=a_{j_k}- \delta $ and $ b'_{j_k}=b_{j_k}. $ }
\casea{Some $ I_j $ is a singleton set}{Use Lemma \ref{lemclo} to convert the singleton sets into closed intervals; then this becomes Case 1.}
\end{caseof}
\vspace{-0.7cm}
\end{proof}
Note that Proposition \ref{closepi} can be restated as follows:
\begin{remark}
Let $ \mathcal{C} $ be a code which is closed convex with a realization in $ \mathbb{R} $ (i.e. dimension 1). Then there exists a closed convex realization $ (\mathcal{U},\epsilon) $ such that $ \epsilon >0. $
\end{remark}
\noindent The following theorem is the main result which gives the relationship between open convex and closed convex codes of dimension 1.
\begin{theorem}
Suppose $ \mathcal{C} $ is a code on $ n $ neurons. Then it is open convex with minimum dimension 1 if and only if it is closed convex with minimum dimension 1. \label{mainth2}
\end{theorem}
\begin{proof}
The idea of this proof is to consider $ \epsilon = \displaystyle\min_{l,r\in [n]}\{d(b_l,a_r)\} $ and create intervals with it. Propositions \ref{openepi} \& \ref{closepi} have ensured that there exist closed and open realizations with $ \epsilon >0. $
Consider $ \mathcal{C} $ to be open convex with $ (\mathcal{U},\epsilon) $ as its open realization such that $ \epsilon>0. $ Let $ J_i=[a'_i,b'_i], $ where $ a'_i=a_i+\epsilon/3 \text{ and } b'_i=b_i-\epsilon/3. $ Since $ \epsilon>0,$ the change in the intervals does not add any new codeword or delete the old ones. This realization $ \mathcal{U}'=\{J_i\}_{1\leq i\leq n} $ makes $ \mathcal{C} $ closed convex.
The proof of the converse is similar.
\end{proof}
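\noindent The shrinking construction in the proof is easy to test computationally. The following Python sketch (our own illustration; the two-interval realization is chosen only as an example) computes the code of an interval realization by sampling every endpoint together with a midpoint of each gap between consecutive endpoints, and confirms that the closed intervals $ J_i=[a_i+\epsilon/3,b_i-\epsilon/3] $ realize the same code as the open ones.
\begin{verbatim}
from fractions import Fraction as F

def code_of(intervals, closed):
    # Membership only changes at endpoints, so sampling all endpoints
    # and one midpoint per gap between consecutive endpoints recovers
    # every atom of the realization.
    pts = sorted({e for I in intervals for e in I})
    samples = pts + [(a + b) / 2 for a, b in zip(pts, pts[1:])]
    inside = ((lambda x, a, b: a <= x <= b) if closed
              else (lambda x, a, b: a < x < b))
    code = set()
    for x in samples:
        cw = frozenset(i + 1 for i, (a, b) in enumerate(intervals)
                       if inside(x, a, b))
        if cw:
            code.add(cw)
    return code

# An open realization with positive epsilon: U1 = (0,2), U2 = (1,3).
U = [(F(0), F(2)), (F(1), F(3))]
eps = min(abs(b - a) for _, b in U for a, _ in U)  # epsilon distance, = 1
J = [(a + eps / 3, b - eps / 3) for a, b in U]     # shrink as in the proof

print(code_of(U, closed=False) == code_of(J, closed=True))  # True
\end{verbatim}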
\subsection{Convex codes that are not realizable in $ \mathbb{R} $}
\begin{proposition}
Let $ \mathcal{C} $ be a code with $ \{i,j,k,\sigma\} \subseteq \mathcal{C}, $ such that $i,j,k\in \sigma\subset[n] $ and $ i,j,k $ are all distinct elements of $ [n]. $ Then $ \mathcal{C} $ can never be a convex code in dimension $ 1. $ \label{lemcon}
\end{proposition}
\begin{proof}
We show that this code cannot be realized by convex sets in $ \mathbb{R}. $ We construct sets $ U_i,U_j,U_k $ as a part of a convex realization $ \mathcal{U} $ of $ \mathcal{C}. $ We observe that $ U_l \cap \atom{l}\not=\emptyset
$ and $ U_l\cap\atom{\sigma}\not=\emptyset, $ as $l\in \sigma, $ for $ l=i,j,k. $ Since atoms are disjoint, this gives us that $ U_l $ contains at least two points. However, as the $ U_l $'s are convex sets in $ \mathbb{R}, $ they must be intervals.
Without loss of generality we may assume that $ U_i $ is open, $ U_j $ is half-open (neither closed nor open) and $ U_k $ is a closed set. We choose any $ a_i,b_i\in \mathbb{R} $ and fix $ U_i=(a_i,b_i). $ Since $ ijk \subseteq \sigma \in \mathcal{C} $ we have $ U_i\cap U_j \cap U_k\not=\emptyset. $ Therefore we choose $ a_j $ such that $ a_i < a_j < b_i.$ Also, as $ \atom{j}= U_j\backslash (U_i \cup U_k) \not= \emptyset, $ we must have $ b_j \in U_i^c. $ We choose $ b_j > b_i $ and construct $ U_j=(a_j,b_j]. $
The construction so far can be seen in Figure \ref{figconl}. It is left to construct $ U_k. $ Once again we realize that $ U_k $ must intersect $ U_i\cap U_j, $ and so we choose $ a_k $ such that $ a_j<a_k<b_i. $ And as $ \atom{k}= U_k\backslash (U_i \cup U_j) \not= \emptyset,$ we should have $ b_k $ lying in $ (U_i\cup U_j)^c. $ So we choose $ b_k>b_j $ and construct $ U_k=[a_k,b_k]. $ But this gives us that $ U_j \subset U_i \cup U_k, $ leaving $ \atom{j}=\emptyset, $ which is a contradiction. Note that we have constructed $ U_j $ and $ U_k $ to the right of $ U_i, $ but the proof follows similarly (with minor changes) even if we construct the sets on the left side of $ U_i. $
\end{proof}
\begin{proposition}
Let $ \mathcal{C}' $ be a code with $ \{i,j,k,\sigma_{ij},\sigma_{ik},\sigma_{jk}\} \subseteq \mathcal{C}', $ such that $i,j\in \sigma_{ij}\subset[n] $ and $ k\notin \sigma_{ij}. $ Similarly obtain $ \sigma_{ik},\sigma_{jk}. $ Then $ \mathcal{C}' $ can never be a convex code in dimension $ 1. $ \label{lemcon2}
\end{proposition}
The proof of Proposition \ref{lemcon2} is similar to the proof of Proposition \ref{lemcon}.
\begin{remark}\label{remarksec3}
Thus we have got two classes of examples
\begin{align*}
\mathcal{C} \supseteq& \{i,j,k,ijk\} \qquad (i,j,k\in \sigma\subset[n]) \\ \mathcal{C}' \supseteq & \{i,j,k,\sigma_{ij},\sigma_{ik},\sigma_{jk}\} \qquad (\text{as defined above})
\end{align*}
which surely cannot have a convex realization in dimension 1. So, if $ \mathcal{C} $ or $\mathcal{C}' $ has a minimal \textit{open} convex dimension of 2, then $ \mathcal{C} $ or $\mathcal{C}' $ serves as a supporting class of examples for Conjecture \ref{conMFSM2}. For example
\begin{enumerate}
\item $ \mathcal{C}_1=\{1,2,3,1234\} $ has both open convex and convex minimal dimensions as 2.
\item $ \mathcal{C}_2=\{1,2,3,124,23,135\} $ has both open convex and convex minimal dimensions as 2.
\end{enumerate}
\end{remark}
\begin{figure}[]
\begin{center}
\begin{tikzpicture}[scale=7]
\draw[<->, ultra thick] (0,0) -- (1.5,0);
\foreach \x/\xtext in {0.2/$ a_i $,0.4/,0.6/$ a_j $,0.8/,1/$ b_i $,1.2/$ b_j $,1.4/}
\draw[thick] (\x,0.5pt) -- (\x,-0.5pt) node[below] {\xtext};
\draw (0.2,1pt) node[above] {$U_i$};
\draw[{(-)}, ultra thick, green] (0.2,0) -- (1.0,0); \draw (0.6,1pt)node[above] {$U_j$};
\draw[{(-]}, ultra thick, red] (0.6,.0) -- (1.2,0.00);
\end{tikzpicture}
\end{center}
\caption{This figure gives the construction of $ U_i, U_j $ in the proof of Proposition \ref{lemcon}.} \label{figconl}
\end{figure}
\section{Doublet maximal codes}
A codeword $ \sigma $ is said to be
\textit{maximal} if, as a subset $ \sigma \subset [n], $ it is not contained in any other codeword of $ \mathcal{C}. $ Maximal codewords play an important role, and we will see that the atoms corresponding to them have special properties. The following lemma gives us one such property.
\begin{lemma}
Let $ \tau\in \mathcal{C} $ be a maximal codeword, and let $ \mathcal{C} $ have a convex realization $ \mathcal{U}=\{U_1,U_2,\dots,U_n\} $ in $ \mathbb{R}^m, $ i.e. in dimension $ m. $ Then \label{mopen}
\begin{enumerate}
\item $ U_\tau \subseteq \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c $ \label{mopen1}
\item If all the $ U_i $'s are open (or closed) in $ \mathbb{R}^m $ (i.e. if $ \mathcal{C} $ is open convex (or closed convex)), then $ \atom{\tau} $ is open (or closed) in $ \mathbb{R}^m $.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Suppose, if possible, that $ U_\tau \not\subseteq \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c. $ Then there exists $ x $ such that $ x\in U_\tau $ and $ x \not \in \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c. $ This implies that $ x \in \displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i.$ Therefore there exists a $ k\not \in \operatorname{supp}(\tau) $ such that $ x \in U_k. $ Define a codeword $ \beta $ such that $ \operatorname{supp}(\beta)= \{i\in [n]\ |\ i\not\in \operatorname{supp}(\tau) \text{ and } x \in U_i\}, $ clearly making $ \beta \not =\emptyset. $ Denote $ \alpha= \tau\cup \beta $. Since $ x\in U_{i} $ for all $ i \in \operatorname{supp}(\beta), $ we have that $ x\in U_\beta.$ This implies $x\in U_\tau\cap U_\beta = U_\alpha. $ Also, we have $ x\not\in \bigcup_{i\not \in \operatorname{supp}(\alpha)} U_i, $ as we can see that $ \operatorname{supp}(\alpha) $ contains all the $ i $'s such that $ x\in U_i. $ Therefore we have $ x\in U_\alpha\backslash \displaystyle \bigcup_{i\not \in \operatorname{supp}(\alpha)}U_i= \atom{\alpha}. $ Hence, as $ \atom{\alpha}\not=\emptyset $, we have $ \tau \subset \alpha \in \mathcal{C}(\mathcal{U})=\mathcal{C}, $ which contradicts the maximality of $ \tau. $ Hence the proof.
\item We know that $ \atom{\tau}= U_\tau \Big\backslash \displaystyle \bigcup_{i\not \in \operatorname{supp}(\tau)}U_i = U_\tau \cap \left(\displaystyle\bigcup_{i\not\in \operatorname{supp}(\tau)} U_i \right)^c. $ Thus by part (\ref{mopen1}) we have $ \atom{\tau}= U_\tau. $ And since a finite intersection of open (or closed) sets is open (or closed), we have the proof.
\end{enumerate}
\end{proof}
\noindent Next, we work with codes that are max-intersection complete. Joshua Cruz et al. \cite{cruz2019open} defined max-intersection complete codes as follows.
\begin{definition}[max-intersection complete]
The intersection completion of a code $ \mathcal{C} $ is the code that consists of all non-empty intersections of codewords in $ \mathcal{C}: $ $$\widehat{\mathcal{C}}= \left\{\sigma \big\vert \sigma = \bigcap_{v\in \mathcal{C}'} v \text{ for some non-empty subcode } \mathcal{C}' \subset \mathcal{C} \right\}.$$ Denote by $ M(\mathcal{C}) \subset \mathcal{C} $ the sub-code consisting of all maximal codewords of $ \mathcal{C}. $ A code is said to be \textit{max-intersection complete} if $ \widehat{M(\mathcal{C})} \subseteq \mathcal{C}. $ Note that if $ M(\mathcal{C})=\{\tau_1^{},\tau_2^{}\},$ then $ \mathcal{C} $ will be max-intersection complete if and only if $ \tau_1^{}\cap \tau_2^{} \in \mathcal{C} $ or $ \tau_1^{}\cap \tau_2^{}=\emptyset. $
\end{definition}
Joshua Cruz et al. \cite{cruz2019open} showed that codes which are max-intersection complete are both open convex and closed convex. Also, they gave an upper bound for the minimal embedding dimension. We tried to look at the converse of their theorem in dimension 1, i.e. are all open convex codes of dimension 1 max-intersection complete? We found a code which is open convex in dimension 1 but not max-intersection complete. The code is described and explained in Figure \ref{figmip}. We observed that having 3 maximal codewords broke the converse, and hence we propose the following theorem.
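\noindent The failure of max-intersection completeness for the code of Figure \ref{figmip} can also be checked directly. The Python sketch below (the helper \texttt{is\_max\_intersection\_complete} is our own) implements the definition verbatim: it collects the maximal codewords and tests every non-empty intersection of a subset of them for membership in the code.
\begin{verbatim}
from itertools import combinations

def is_max_intersection_complete(code):
    C = [frozenset(c) for c in code]
    maximal = [c for c in C if not any(c < d for d in C)]
    for k in range(1, len(maximal) + 1):
        for sub in combinations(maximal, k):
            inter = frozenset.intersection(*sub)
            if inter and inter not in C:  # non-empty intersection missing
                return False
    return True

# Code of the figure: maximal codewords are 123, 124 and 145, and the
# intersection of 123 and 145 is the singleton 1, not a codeword.
figmip = [{3}, {5}, {1, 2}, {1, 3}, {1, 4}, {4, 5},
          {1, 2, 3}, {1, 2, 4}, {1, 4, 5}]
print(is_max_intersection_complete(figmip))  # False
\end{verbatim}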
\begin{figure}[]
\begin{tikzpicture}[scale=5]
\draw[<->, ultra thick] (-0.3,0) -- (2.1,0);
\foreach \x/\xtext in {-0.2/0,0/1,0.2/2,0.4/3,0.6/4,0.8/5,1/6,1.2/7,1.4/8,1.6/9,1.8/10,2.0/11}
\draw[thick] (\x,0.5pt) -- (\x,-0.5pt) node[below] {\xtext};
\draw (0.3,1pt) node[above] {$U_1$};
\draw[{(-)}, ultra thick, green] (0.3,0) -- (1.0,0); \draw (0.4,1pt)node[above] {$U_2$};
\draw[(-), ultra thick, red] (0.4,0) -- (0.7,0);
\draw (0.15,1pt)node[above] {$U_3$};
\draw[(-), ultra thick, blue] (0.15,0) -- (0.5,0);
\draw (0.6,1pt)node[above] {$U_4$};
\draw[(-), ultra thick, yellow] (0.6,.0) -- (1.6,0);
\draw (0.8,1pt)node[above] {$U_5$};
\draw[(-), ultra thick, brown] (0.8,.0) -- (1.8,0);
\draw (0.9,-0.6)node[below] {$ \atom{145} $};
\draw[->, thick, black] (0.9,.0) -- (0.9,-0.6);
\draw (0.75,-0.45)node[below] {$ \atom{14} $};
\draw[->, thick, black] (0.75,.0) -- (0.75,-0.45);
\draw (0.65,-0.6)node[below] {$ \atom{124} $};
\draw[->, thick, black] (0.65,.0) -- (0.65,-0.6);
\draw (0.55,-0.45)node[below] {$ \atom{12} $};
\draw[->, thick, black] (0.55,.0) -- (0.55,-0.45);
\draw (0.45,-0.6)node[below] {$ \atom{123} $};
\draw[->, thick, black] (0.45,.0) -- (0.45,-0.6);
\draw (0.35,-0.45)node[below] {$ \atom{13} $};
\draw[->, thick, black] (0.35,.0) -- (0.35,-0.45);
\draw (0.25,-0.6)node[below] {$ \atom{3} $};
\draw[->, thick, black] (0.25,.0) -- (0.25,-0.6);
\draw (1.3,-0.45)node[below] {$ \atom{45} $};
\draw[->, thick, black] (1.3,.0) -- (1.3,-0.45);
\draw (1.7,-0.45)node[below] {$ \atom{5} $};
\draw[->, thick, black] (1.7,.0) -- (1.7,-0.45);
\end{tikzpicture}
\caption{This figure gives a code $ \mathcal{C}=\mathcal{C}(\mathcal{U})= \{3,5,12,13,14,45,123,124,145\}$ realized by $ \{U_1,U_2,U_3,U_4,U_5\}. $ The code $ \mathcal{C} $ is open convex in dimension 1, and $ 123,145 $ are maximal codewords whose intersection is $ 1, $ which does not belong to $ \mathcal{C} $. }
\label{figmip}
\end{figure}
\begin{theorem} \label{tcmip}
Let $ \mathcal{C} $ be a code with only two maximal codewords. Then $ \mathcal{C} $ is open convex if and only if $ \mathcal{C} $ is max-intersection complete.
\end{theorem}
\begin{proof}
Let $ \tau_1^{} $ and $ \tau_2^{} $ be the only maximal codewords of $ \mathcal{C}. $ If $ \sigma= \tau_1^{} \cap \tau_2^{}= \emptyset, $ we get $ \widehat{M(\mathcal{C})}=\{\tau_1^{},\tau_2^{}\} \subseteq \mathcal{C} $ and hence the code $ \mathcal{C} $ vacuously satisfies the conditions we require. Therefore we assume $ \sigma\not=\emptyset. $ Let $ \mathcal{U}=\{U_i\}_{i\in[n]}^{} $ be a collection of open convex sets in $ \mathbb{R}^m $ such that $ \mathcal{C}(\mathcal{U})=\mathcal{C}. $ Now we need to show $ \sigma \in \mathcal{C}. $ Suppose not; then as $ \sigma \not \in \mathcal{C}= \mathcal{C}(\mathcal{U}), $ we get $ \atom{\sigma}=\emptyset \implies \displaystyle\bigcap_{i\in \operatorname{supp}(\sigma)} U_i \Big \backslash \displaystyle\bigcup_{j\not\in \operatorname{supp}(\sigma) } U_j= \emptyset \\ \implies \ U_\sigma^{}=\displaystyle\bigcap_{i\in \operatorname{supp}(\sigma)} U_i \subseteq \displaystyle\bigcup_{j\not\in \operatorname{supp}(\sigma) } U_j $ (since $ U_\sigma \not = \emptyset, $ as $ U_{\tau_{1}^{}},U_{\tau_{2}^{}} \subseteq U_\sigma $). By Lemma \ref{mopen} we know that $ \atom{\tau_i^{}}= U_{\tau_i^{}} $ for $ i=1,2. $ Moreover, we will show that $ U_{\tau_1^{}} $ and $ U_{\tau_2^{}} $ form a separation\footnote{A separation of $ X $ is a pair $ U,V $ of disjoint nonempty open subsets of $ X $ whose union is $ X $. The space $ X $ is not connected if there exists a separation. } of $ U_\sigma. $
As $ \tau_1^{},\tau_2^{} \in \mathcal{C}=\mathcal{C}(\mathcal{U}), $ we have $ U_{\tau_1^{}} \not = \emptyset \not = U_{\tau_2^{}}. $ Also, as atoms are disjoint regions, we have $ U_{\tau_1^{}} \cap U_{\tau_2^{}}=\emptyset. $ From Lemma \ref{mopen} we know that the atoms of maximal codewords are open in $ \mathbb{R}^m $. Also, $ U_{\tau_i^{}}= U_{\tau_i^{}} \cap U_\sigma $ is open in $ U_\sigma $ for $ i=1,2. $ Now it is only left for us to prove that $ U_\sigma= U_{\tau_1^{}} \cup U_{\tau_2^{}}. $ We can observe that $ U_{\tau_1^{}} =\displaystyle\bigcap_{j\in \operatorname{supp}(\tau_1^{})} U_j= \displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j \quad \cap \displaystyle\bigcap_{\substack{{j\in \operatorname{supp}(\tau_1^{}) }\\ { j \not \in \operatorname{supp}(\sigma) } }}U_j:= U_{\sigma} \cap U_{\tau_1^{}\backslash \sigma}. $ Similarly we get $ U_{\tau_2^{}}= U_{\sigma} \cap U_{\tau_2^{}\backslash \sigma}.$ Consider $ U_{\tau_1^{}} \cup U_{\tau_2^{}} = \left\{U_{\sigma} \cap U_{\tau_1^{}\backslash \sigma}\right\} \cup \left\{U_{\sigma} \cap U_{\tau_2^{}\backslash \sigma}\right\}= U_\sigma \cap \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\}. $
\begin{claim}
$ U_\sigma \subseteq \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\}. $ Suppose not; then there exists an $ x \in U_\sigma $ such that $ x\not\in \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\},$ i.e. $ x\not \in U_{\tau_1^{}\backslash \sigma} \text{ and } x\not \in U_{\tau_2^{}\backslash \sigma}. $ Also, we have $ U_\sigma \subseteq \displaystyle\bigcup_{j\not\in \operatorname{supp}{\sigma} } U_j.$ This implies that there exists some $ k\not \in \operatorname{supp}(\sigma) $ such that $ x\in U_k, $ i.e. $ k \not \in \operatorname{supp}(\tau_{1}^{}) \cup \operatorname{supp}(\tau^{}_{2}) $ and $ x\in U_k. $ Let us define a codeword $ \beta $ such that $ \operatorname{supp}(\beta)= \{i\in [n]\ |\ i\not\in \operatorname{supp}(\sigma) \text{ and } x \in U_i\}. $ Clearly we have $ \beta \not = \emptyset. $ Denote $ \alpha= \sigma\cup \beta. $ Then we get $ x \in \atom{\alpha}, $ by working on similar lines as in the proof of part 1 of Lemma \ref{mopen}. This implies $ \atom{\alpha}\not =\emptyset $ and $ \alpha \in \mathcal{C}(\mathcal{U})= \mathcal{C}. $ Since $ \beta \cap \tau_1^{} =\emptyset $ and $ \beta \cap \tau_2^{} =\emptyset, $ we have $ \alpha \not \subset \tau_1 $ and $ \alpha \not \subset \tau_2. $ Therefore we get that either $ \alpha $ is a maximal codeword in $ \mathcal{C} $ or there exists a maximal codeword containing $ \alpha $ which is different from $ \tau_1^{} $ and $ \tau_2^{}. $ This is a contradiction to the hypothesis that $ \mathcal{C} $ has only two maximal codewords, $ \tau_1^{} $ and $ \tau_2^{} $. Hence the supposition is wrong, implying $ U_\sigma \subseteq \left\{U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right\}. $
\end{claim}
Now, using the Claim, the equation $ U_{\tau_1^{}} \cup U_{\tau_2^{}}= U_\sigma \cap \left(U_{\tau_1^{}\backslash \sigma} \cup U_{\tau_2^{}\backslash \sigma}\right) $ becomes $ U_{\tau_1^{}} \cup U_{\tau_2^{}}= U_\sigma. $ This means that $ U_{\tau_1^{}} $ and $ U_{\tau_2^{}} $ form a separation of $ U_\sigma. $ But $ U_\sigma $ is an intersection of convex sets and hence convex; in particular it is connected, so it cannot have a separation. Thus we must have $ \sigma\in \mathcal{C}(\mathcal{U})= \mathcal{C}. $
\end{proof}
\begin{remark}
The above theorem also holds when ``open convex'' is replaced by ``closed convex'': the proof changes only in that the separation is obtained from closed sets instead of open ones.
\end{remark}
\begin{eg}
Consider the sets $ \mathcal{U}= \{U_1,U_2,U_3,U_4,U_5,U_6\} $ in $ \mathbb{R} $ as in Figure \ref{ex2}. Let $ \mathcal{C}= \mathcal{C}(\mathcal{U})=\{2,4,12,23,45,46\}. $ The code $ \mathcal{C} $ has 4 maximal codewords and it is both max-intersection complete and open convex. The interesting fact is that one can break the code into $ \mathcal{C}=\mathcal{C}_1 \sqcup\ \mathcal{C}_2, $ where $ \mathcal{C}_1= \{2,12,23\} $ and $ \mathcal{C}_2=\{4,45,46\}, $ and the codes $ \mathcal{C}_1 $ and $ \mathcal{C}_2 $ each satisfy the hypothesis of Theorem \ref{tcmip}. This leads us to define a new class of codes called doublet maximal codes.
\end{eg}
\begin{figure}[H]
\begin{tikzpicture}[scale=5]
\draw[<->, ultra thick] (-0.3,0) -- (2.1,0);
\foreach \x/\xtext in {-0.2/0,0/1,0.2/2,0.4/3,0.6/4,0.8/5,1/6,1.2/7,1.4/8,1.6/9,1.8/10,2.0/11}
\draw[thick] (\x,0.5pt) -- (\x,-0.5pt) node[below] {\xtext};
\draw (0.5,-3.7pt) node[below] {$U_2$};
\draw[{(-)}, ultra thick, green] (0.3,-0.13) -- (0.8,-0.13); \draw (0.4,1pt)node[above] {$U_1$};
\draw[(-), ultra thick, red] (0.3,0) -- (0.5,0);
\draw (0.8,1pt)node[above] {$U_3$};
\draw[(-), ultra thick, blue] (0.5,0) -- (0.8,0);
\draw (0.96,-6pt)node[above] {$U_4$};
\draw[(-), ultra thick, yellow] (1,-0.13) -- (1.8,-0.13);
\draw (1,1pt)node[above] {$U_5$};
\draw[(-), ultra thick, brown] (1,.0) -- (1.3,0);
\draw (1.8,1pt)node[above] {$U_6$};
\draw[(-), ultra thick, blue] (1.3,.0) -- (1.8,0);
\draw (1.1,0.42)node[below] {$ \atom{45} $};
\draw[->, thick, black] (1.1,.0) -- (1.1,0.3);
\draw (0.65,0.42)node[below] {$ \atom{23} $};
\draw[->, thick, black] (0.65,.0) -- (0.65,0.3);
\draw (0.5,0.42)node[below] {$ \atom{2} $};
\draw[->, thick, black] (0.5,.0) -- (0.5,0.3);
\draw (0.35,0.42)node[below] {$ \atom{12} $};
\draw[->, thick, black] (0.35,.0) -- (0.35,0.3);
\draw (1.3,0.42)node[below] {$ \atom{4} $};
\draw[->, thick, black] (1.3,.0) -- (1.3,0.3);
\draw (1.62,0.42)node[below] {$ \atom{46} $};
\draw[->, thick, black] (1.62,.0) -- (1.62,0.3);
\end{tikzpicture}
\caption{This figure gives a code $ \mathcal{C}= \{2,4,12,23,45,46\}.$ }
\label{ex2}
\end{figure}
\begin{definition}[Doublet maximal codes]
A code $ \mathcal{C} $ is called a \textit{doublet maximal code} if the set $M(\mathcal{C})=\{\tau_i^{}\}_{i \in [p]} $ of all maximal codewords of $ \mathcal{C} $ has the property that for every $ i\in [p] $ there exists at most one $ j\not= i $ such that $ \tau_i^{}\cap \tau_j^{}\not=\emptyset. $
\end{definition}
\begin{eg}
\begin{enumerate}
\item Let $ \mathcal{C}_1 =\{2,4,12,23,45,46\}.$ This is a doublet maximal code with two pairs of maximal codewords, $ \{12,23\} $ and $ \{45,46\}. $
\item Let $ \mathcal{C}_2=\{2,4,12,23\}. $ This is a doublet maximal code with one pair, $ \{12,23\}, $ and one singleton, $ \{4\}, $ as maximal codewords.
\item Let $ \mathcal{C}_3 =\{3,5,12,13,14,45,123,124,145\}. $ This is a non-example: the code has 3 maximal codewords with all pairwise intersections non-empty. Also, from Figure \ref{figmip} we can see that this code is not max-intersection complete.
\end{enumerate}
\end{eg}
\begin{theorem} \label{tdmc}
Let $ \mathcal{C} $ be a Doublet maximal code then $ \mathcal{C} $ is open (or closed) convex if and only if $ \mathcal{C} $ is max-intersection complete.
\end{theorem}
Let $ M(\mathcal{C})_2 $ be the set of all pairs of maximal codewords whose intersection is non-empty. The proof of Theorem \ref{tdmc} is then obtained by applying Theorem \ref{tcmip} iteratively on $ M(\mathcal{C})_2; $ a computational check of both properties is sketched below.
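To make the hypotheses concrete, the following Python sketch (ours, purely illustrative; the helper names are our own) tests whether a code, given as a set of supports, is doublet maximal and max-intersection complete. Running it on the examples above reproduces the classifications of $ \mathcal{C}_1 $ and $ \mathcal{C}_3. $
\begin{verbatim}
# A minimal sketch: codewords are frozensets of neuron indices.
from itertools import combinations

def maximal_codewords(code):
    # codewords not strictly contained in any other codeword
    return [c for c in code if not any(c < d for d in code)]

def is_doublet_maximal(code):
    # every maximal codeword meets at most one other maximal codeword
    M = maximal_codewords(code)
    return all(sum(1 for s in M if s != t and s & t) <= 1 for t in M)

def is_max_intersection_complete(code):
    # all non-empty intersections of maximal codewords lie in the code
    M = maximal_codewords(code)
    for r in range(2, len(M) + 1):
        for combo in combinations(M, r):
            inter = frozenset.intersection(*combo)
            if inter and inter not in code:
                return False
    return True

C1 = {frozenset(s) for s in [{2}, {4}, {1,2}, {2,3}, {4,5}, {4,6}]}
C3 = {frozenset(s) for s in [{3}, {5}, {1,2}, {1,3}, {1,4}, {4,5},
                             {1,2,3}, {1,2,4}, {1,4,5}]}
print(is_doublet_maximal(C1), is_max_intersection_complete(C1))  # True True
print(is_doublet_maximal(C3), is_max_intersection_complete(C3))  # False False
\end{verbatim}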
\section{Neural ring homomorphisms and max-intersection complete codes}
\subsection{Background and Preliminaries}
In this section we consider a codeword of a code $ \mathcal{C} $ on $ n $ neurons in its binary form. That is, if $ c\in \mathcal{C} $ then $ c=(c_1^{}c_2^{}\cdots c_n^{}), $ where $ c_i^{}\in\{0,1\}. $ This is the same as viewing $ \mathcal{C}\subset\{0,1\}^n. $ Carina Curto and Nora Youngs \cite{curto2020neural} gave the following description of neural ring homomorphisms:
\begin{definition}
Let $ \mathcal{C} \subset \{0,1\}^n $ and $ \mathcal{D}\subset \{0,1\}^m $ be neural codes, and let $ \ring{\mathcal{C}}= \mathbb{F}_2[y_1,\dots,y_n]/I_{\mathcal{C}} $ and $ \ring{\mathcal{D}}= \mathbb{F}_2[x_1,\dots,x_m]/I_{\mathcal{D}} $ be the corresponding neural rings. A ring homomorphism $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}} $ is a neural ring homomorphism if $ \phi(x_j)\in\{y_i\ \vert\ i \in [n]\} \cup \{0,1\} $ for all $ j\in [m],$ where $ x_j=\displaystyle\sum_{\{d\in\mathcal{D}|d_j=1\}} \rho_d $. We say that a neural ring homomorphism $ \phi $ is a neural ring isomorphism if it is a ring isomorphism and its inverse is also a neural ring homomorphism.
\end{definition}
In the beginning of the paper \cite{curto2020neural}, Curto and Youngs discuss ring homomorphisms between two neural rings. They proved that there is a 1-1 correspondence between code maps $ q:\mathcal{C}\rightarrow \mathcal{D} $ and ring homomorphisms $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}}. $ The code map associated with the ring homomorphism $ \phi $ is usually denoted by $ q_\phi. $ The authors then classify all the neural ring homomorphisms using the following theorem:
\begin{theorem}\cite[Theorem 3.4]{curto2020neural} \label{thmnycc}
A map $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}} $ is a neural ring homomorphism if and only if $ q_\phi $ is a composition of the following elementary code maps:
\begin{enumerate}
\item Permutation
\item Adding a trivial neuron (or deleting a trivial neuron)
\item Duplication of a neuron (or deleting a neuron that is a duplicate of another)
\item Neuron projection (or deleting a not necessarily trivial neuron)
\item Inclusion (of one code into another)
\end{enumerate}
Moreover, $ \phi $ is a neural ring isomorphism if and only if $ q_\phi $ is a composition of maps $ (1)-(3). $
\end{theorem}
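The elementary code maps above are easy to experiment with when codewords are written as 0/1 tuples. The following Python sketch is our own illustration; the function names are ours and not from \cite{curto2020neural}.
\begin{verbatim}
# Codes are sets of 0/1 tuples of a fixed length n.
def permute(code, p):
    # permutation of neurons: bit i of the image is bit p[i] of the source
    return {tuple(c[p[i]] for i in range(len(p))) for c in code}

def add_trivial_neuron(code, bit=0):
    # append a neuron that is constantly `bit` on every codeword
    return {c + (bit,) for c in code}

def duplicate_neuron(code, i):
    # append a copy of neuron i to every codeword
    return {c + (c[i],) for c in code}

def project(code):
    # neuron projection: delete the last (not necessarily trivial) neuron
    return {c[:-1] for c in code}

C = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1)}
print(project(C))  # the set {(1, 0), (0, 1), (0, 0)}
\end{verbatim}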
Lastly, Curto and Youngs \cite{curto2020neural} bridge the idea of codes being open convex and neural ring homomorphisms using the following theorem:
\begin{theorem}\cite[Theorem 4.3]{curto2020neural} \label{thnrh}
Let $ \mathcal{C} $ be a code containing the all-zeros codeword and $ q:\mathcal{C} \rightarrow\mathcal{D} $ a surjective code map corresponding to a neural ring homomorphism. Then if $ \mathcal{C} $ is convex (open convex), $ \mathcal{D} $ is also convex (open convex) with $ d(\mathcal{D}) \leq d(\mathcal{C}).$
\end{theorem}
\begin{remark}
Curto and Youngs \cite{curto2020neural} proved the above theorem for open convex codes. If the code were closed convex, a similar theorem would hold with minor changes to the proof.
\end{remark}
\subsection{Main Theorem}
Now we will try to connect neural ring homomorphisms and the max-intersection complete property. For the remainder of the section we assume that $ \mathcal{C} $ is a code on $ n $ neurons and $ \mathcal{D} $ is a code whose number of neurons is specified if and when required.
\begin{obs} \label{obssig}
Let $ q:\mathcal{C} \rightarrow \mathcal{D} $ be a code map corresponding to a given neural ring homomorphism $ \phi:\ring{\mathcal{D}}\to \ring{\mathcal{C}}. $ If $ \sigma_i^{} \subset \sigma_j^{} $ in $ \mathcal{C} $ then $ q(\sigma_i^{})\subset q(\sigma_j^{}) $ in $ \mathcal{D}. $
By Theorem \ref{thmnycc} we know that there are only 5 possibilities for a code map associated to a neural ring homomorphism. The observation is fairly computational and can be obtained by applying each of the 5 maps to an arbitrary codeword (say $ \sigma =\sigma_1^{}\sigma_2^{}\cdots\sigma_n^{} $). From here on it is easier to view a neural code in terms of $ 0 $'s and $ 1 $'s: if $ \sigma=12 $ and $ n=3 $ we write $ \sigma=110. $ In other words, we record the support of $ \sigma. $
\end{obs}
\begin{lemma}
Let $ q:\mathcal{C} \rightarrow \mathcal{D} $ be either a permutation, or the addition/deletion of a trivial or duplicate neuron. Then $\tau \in \mathcal{C} $ is a maximal element if and only if $ q(\tau) \in \mathcal{D} $ is a maximal element. \label{maxiso}
\end{lemma}
\begin{proof}
If $ q $ is either a permutation, or the addition/deletion of a trivial or duplicate neuron, then the corresponding neural ring homomorphism is an isomorphism. This implies that $ q $ is a bijection \cite[Proposition 2.3]{curto2020neural}.
Let $ \tau\in \mathcal{C} $ be a maximal element. Suppose, if possible, that $ q(\tau) $ is not a maximal element in $ \mathcal{D}. $ Then there exists $ q(\tau_1^{})\in \mathcal{D} $ such that $ q(\tau)\subsetneq q(\tau_1^{}). $ This implies $ \tau \subsetneq \tau_1^{}, $ using Observation \ref{obssig} in view of $ q $ being a bijection. This contradicts the fact that $ \tau $ is a maximal element in $ \mathcal{C}. $
Conversely, if $ q(\tau) $ is a maximal element in $ \mathcal{D}, $ then one can show that $ \tau $ is a maximal element in $ \mathcal{C} $ using $ q^{-1} $ and the idea from the necessary part of the proof. This works because $ q^{-1} $ is again either a permutation or the addition/deletion of a trivial or duplicate neuron, and so fits the hypothesis of the necessary part.
\end{proof}
\begin{lemma}
Let $ q: \mathcal{C} \rightarrow \mathcal{D} $ be a projection. If $ \sigma \in \mathcal{D} $ is a maximal element then there exists a maximal element $ \tau \in \mathcal{C} $ such that $ q(\tau)= \sigma. $ \label{maxpro}
\end{lemma}
\begin{proof}
Let $ \sigma= \sigma_1^{}\sigma_2^{}\cdots\sigma_{n-1}^{}. $ Since $ q $ is a projection map, it is surjective, so there exists $ \tau \in \mathcal{C} $ such that $ q(\tau)= \sigma. $ Moreover, we know the choices for $ \tau $ precisely: it is $ \sigma $ followed by either $ 1 $ or $ 0. $ Label $ \tau_0^{}:= \sigma_1^{}\sigma_2^{}\cdots\sigma_{n-1}^{}0$ and $ \tau_1^{}:=\sigma_1^{}\sigma_2^{}\cdots\sigma_{n-1}^{}1. $ The code $ \mathcal{C} $ can contain $ \tau_0^{}, $ $ \tau_1^{}, $ or both, giving 3 cases. Since $ \tau_0^{} \subset \tau_1^{}, $ the case in which both $ \tau_0^{} $ and $ \tau_1^{} $ exist is redundant (it is covered by the case $ \tau_1^{}\in \mathcal{C} $).
\begin{caseof}
\casea{$\tau_1^{} \in \mathcal{C}$}{In this case we claim that $ \tau_1^{} $ is a maximal element in $ \mathcal{C}. $ Suppose not; then there exists a $ \tau_2^{}\in \mathcal{C} $ such that $ \tau_1^{}\subsetneq \tau_2^{}$. By Observation \ref{obssig} we have $ q(\tau_1^{})\subset q(\tau_2^{}). $ But as $ \sigma=q(\tau_1^{}) $ is a maximal element in $ \mathcal{D}, $ we get $ q(\tau_1^{}) = q(\tau_2^{}). $ This implies $ \tau_2^{}=\tau_1^{} $ or $ \tau_2^{}=\tau_0^{}, $ a contradiction since $ \tau_1^{} \subsetneq \tau_2^{} $ and $ \tau_0^{}\subset \tau_1^{} $.}
\casea{$ \tau_1^{}\not\in \mathcal{C} $}{In this case we claim that $ \tau_0^{} $ is maximal element and the proof is similar to the previous case.}
\noindent Hence the proof.
\vspace{-0.7cm}
\end{caseof}
\end{proof}
\begin{remark}
\begin{enumerate}
\item The converse of Lemma \ref{maxpro} need not hold. For example, consider the code\linebreak $ \mathcal{C}=\{100,010,001,011,101,110\} $ and project the code to get $ \mathcal{D}= \{00,10,01,11\}. $ Clearly, $ 011 \in \mathcal{C} $ is a maximal codeword, but $ q(011)=01 \subset 11, $ so it is no longer maximal after projection.
\item Let $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ be two codewords and $ \tau_3^{}= \tau_1^{}\cap \tau_2^{}. $ For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{}; $ then $ \tau_3^{} $ is given coordinatewise by: $ \tau_{3j}^{} = \begin{cases}
1\qquad &\text{ if } \tau_{1j}^{}=\tau_{2j}^{}=1 \\
0\qquad & \text{ otherwise}
\end{cases}. $
\end{enumerate}
\end{remark}
\begin{theorem}
Let $ q:\mathcal{C} \rightarrow\mathcal{D} $ be a surjective code map corresponding to a neural ring homomorphism. Then if $ \mathcal{C} $ is max-intersection complete, so is $ \mathcal{D}. $ \label{thmic}
\end{theorem}
\begin{proof}
By Theorem \ref{thmnycc}, the surjective code map is a composition of permutations, additions/deletions of trivial or duplicate neurons, and projections. So it is sufficient to treat each of these maps independently and prove the statement for each.
\noindent Let $\sigma_1^{}, \sigma_2^{} \in \mathcal{D} $ be maximal elements; we need to show that $ \sigma_1^{} \cap \sigma_2^{}\in \mathcal{D}. $
\paragraph{Permutation:} As $ q $ is a bijection, there exists a unique $ \tau_i^{} \in \mathcal{C} $ such that $ \sigma_i^{}=q(\tau_i^{}) \text{ for } i=1,2. $ By Lemma \ref{maxiso}, $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ are maximal elements, so by hypothesis $\tau_3^{}= \tau_1^{}\cap \tau_2^{} \in \mathcal{C}. $ Let $ p\in S_n$ be the permutation. For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{}. $ Then for $ i=1,2 $ we have $ \sigma_i^{}= \tau_{ip(1)}^{}\tau_{ip(2)}^{}\cdots\tau_{ip(n)}^{}. $ Now let $ q(\tau_1^{})\cap q(\tau_2^{})=\sigma_1^{} \cap \sigma_2^{} := \gamma = \gamma_1^{}\gamma_2^{}\cdots\gamma_n^{}, \text{ where } \gamma_j^{}= \begin{cases}
1\qquad &\text{ if } \sigma_{1j}^{}=\sigma_{2j}^{}=1 \\
0\qquad & \text{ otherwise}
\end{cases} = \begin{cases}
1\qquad &\text{ if } \tau_{1p(j)}^{}=\tau_{2p(j)}^{}=1 \\
0\qquad & \text{ otherwise}
\end{cases} = \tau_{3p(j)}^{}. $\\ This implies $\gamma= \tau_{3p(1)}^{}\tau_{3p(2)}^{}\cdots\tau_{3p(n)}^{}= q(\tau_3^{})\in \mathcal{D}. $
\paragraph{Adding a trivial or duplicate neuron:} As $ q $ is a bijection, there exists a unique $ \tau_i^{} \in \mathcal{C} $ such that $ \sigma_i^{}=q(\tau_i^{}) \text{ for } i=1,2. $ By Lemma \ref{maxiso}, $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ are maximal elements, so by hypothesis $\tau_3^{}= \tau_1^{}\cap \tau_2^{} \in \mathcal{C}. $ For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{}. $ Then for $ i=1,2 $ we have $ \sigma_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{in}^{}d,$ where $ d=0,1 $ or $ d=\tau_{ij}^{} $ depending upon the map $ q. $ It is clear that $ \sigma_1^{} \cap \sigma_2^{}= \tau_{31}^{}\tau_{32}^{}\cdots\tau_{3n}^{}d=q(\tau_3^{}) \in \mathcal{D}.$
\paragraph{Deleting a trivial or duplicate neuron:}
As $ q $ is a bijection, there exists a unique $ \tau_i^{} \in \mathcal{C} $ such that $ \sigma_i^{}=q(\tau_i^{}) \text{ for } i=1,2. $ By Lemma \ref{maxiso}, $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ are maximal elements, so by hypothesis $\tau_3^{}= \tau_1^{}\cap \tau_2^{} \in \mathcal{C}. $ For $ i\in [3] $ let $ \tau_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{i(n-1)}^{}d, $ where $ d=0,1 $ or $ d=\tau_{ij}^{} $ depending upon the map $ q. $ Then for $ i=1,2 $ we have $ \sigma_i^{}= \tau_{i1}^{}\tau_{i2}^{}\cdots\tau_{i(n-1)}^{}.$ It is clear that $ \sigma_1^{} \cap \sigma_2^{}= \tau_{31}^{}\tau_{32}^{}\cdots\tau_{3(n-1)}^{}=q(\tau_3^{}) \in \mathcal{D}.$
\paragraph{Projection:} We extend the idea from deleting a trivial or duplicate neuron, in view of Lemma $ \ref{maxpro}. $ That is, if $ \sigma_1^{} $ and $ \sigma_2^{} $ are maximal codewords in $ \mathcal{D}, $ there exist maximal codewords $ \tau_1^{},\tau_2^{} \in \mathcal{C} $ such that $ q(\tau_1^{})=\sigma_1^{} $ and $ q(\tau_2^{})=\sigma_2^{}. $ The rest follows.
Hence the proof.
\end{proof}
\begin{remark}
The converse of Theorem \ref{thmic} need not be true. For example, consider the codes $ \mathcal{C}=\{100,010,001\} $ and $ \mathcal{D}=\{00,10,01\}, $ and the projection map $ q:\mathcal{C} \rightarrow \mathcal{D}, 100\mapsto 10, 010 \mapsto 01 \text{ and } 001 \mapsto 00. $ This map $ q $ satisfies the hypothesis of the converse, but $ \mathcal{C} $ is not max-intersection complete.
\noindent This leads us to think that the converse will hold when the code map corresponds to a neural ring isomorphism.
\end{remark}
\begin{corollary}
Let $ q:\mathcal{C}\rightarrow \mathcal{D} $ be a code map corresponding to a neural ring isomorphism. Then $ \mathcal{C} $ is max-intersection complete if and only if $ \mathcal{D} $ is max-intersection complete.
\end{corollary}
\section{Introduction}
The brain communicates with us by firing neurons on and off in response to the stimulus space; the resulting firing patterns are called binary codes or neural codes. Understanding how the brain works amounts, in part, to understanding neural codes. The neuroscientist John O'Keefe discovered and worked with a type of neuron called a place cell. This was the motivation for the study of neural codes. An area in the stimulus space is called a receptive field if it is the region of the visual field that causes a response in the cell. Given receptive fields, one can write down the neural code that represents them. So the question that naturally occurs is: given a binary code, can we recover a receptive field? If so, the collection of sets in the stimulus space which gives the receptive field is called a realization of the code. The sets in the receptive field are referred to as receptive cells. Before we go further, we formally define a code and its realization as follows.
\begin{definition}[Binary code] \cite[Definition 1]{franke2018every}
A \textit{binary code (or neural code)} on $ n $ neurons is a collection $ \mathcal{C} $ of subsets of the set $ [n] =\{1,2,3,\dots,n\}.$ The elements of $ \mathcal{C} $ are called codewords.
\end{definition}
For a codeword $ \sigma=\sigma_1^{}\sigma_2^{}\dots\sigma_n^{} $ the set $ \operatorname{supp}(\sigma) :=\left\{i\in \left[n\right] \ \vert\ \sigma_i^{}=1\right\} $ is called the support of $ \sigma. $
\begin{definition}
Let $ \mathcal{U}=\{U_1,U_2,\dots,U_n\} $ be a collection of sets in some stimulus space $X \subseteq\mathbb{R}^k$ and $ \mathcal{C} $ be a neural code on $ n $ neurons. Define $ \mathcal{C}(\mathcal{U})=\left\{\sigma\subseteq [n] \ \Big\vert\ \displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j \backslash \displaystyle\bigcup_{i\not\in \operatorname{supp}(\sigma) } U_i \not= \emptyset \right\}. $ We say that $ \mathcal{U} $ is a realization of $ \mathcal{C} $ if $ \mathcal{C}=\mathcal{C}(\mathcal{U}). $
We call $ \atom{\sigma}=\displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j \backslash \displaystyle\bigcup_{i\not\in \operatorname{supp}(\sigma) } U_i $ the atom of the codeword $ \sigma $ with respect to $ \mathcal{U}. $ Further, we denote $ U_\sigma :=\displaystyle\bigcap_{j\in \operatorname{supp}(\sigma)} U_j $ throughout this paper. Also, we fix $ U_\emptyset= X. $
\end{definition}
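For realizations by open intervals in $ \mathbb{R}, $ the code $ \mathcal{C}(\mathcal{U}) $ can be computed mechanically: the membership pattern of the $ U_i $'s is constant strictly between consecutive endpoints, so it suffices to sample all endpoints together with the midpoints between them. The Python sketch below is ours; the interval endpoints are our own reading of Figure \ref{ex2}, and the output reproduces the code $ \{2,4,12,23,45,46\}. $
\begin{verbatim}
# A sketch computing C(U) for a family of open intervals in R.
def code_of_intervals(U):
    # U: dict {neuron: (a, b)} of open intervals; returns a set of supports
    endpoints = sorted({e for a, b in U.values() for e in (a, b)})
    samples = list(endpoints)
    samples += [(x + y) / 2 for x, y in zip(endpoints, endpoints[1:])]
    code = set()
    for x in samples:
        supp = frozenset(i for i, (a, b) in U.items() if a < x < b)
        if supp:
            code.add(supp)
    return code

# Endpoints below are our reading of Figure ex2:
U = {1: (2.5, 3.5), 2: (2.5, 5), 3: (3.5, 5),
     4: (6, 10), 5: (6, 7.5), 6: (7.5, 10)}
print(sorted(map(sorted, code_of_intervals(U))))
# [[1, 2], [2], [2, 3], [4], [4, 5], [4, 6]]
\end{verbatim}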
The realization $ \mathcal{U} $ of a code $ \mathcal{C} $ is named according to the topological properties of the sets in $ \mathcal{U}. $ For example, the realization is called open convex if all the sets are both convex and open in the stimulus space. Let all the sets of $ \mathcal{U} $ be in $ \mathbb{R}^k $ with some fixed topological property (open, closed, convex, etc.). If there exists no other collection $ \mathcal{U}' $ in $ \mathbb{R}^l\ (l <k) $ with the same topological properties as $ \mathcal{U} $ such that $ \mathcal{C}(\mathcal{U}')=\mathcal{C}, $ then $ k $ is said to be the minimal dimension in which $ \mathcal{C} $ can be realized with respect to this topological property. The receptive cells found in nature are almost always open convex sets. So, the natural question is to determine which neural codes are open convex or closed convex. Megan K. Franke and Samuel Muthiah \cite{franke2018every} proved that every code is convex realizable and also gave an algorithm for constructing a realization. Joshua Cruz et al. \cite{cruz2019open} showed that codes with the max-intersection complete property\footnote{A code $ \mathcal{C} $ is said to be max-intersection complete if $ \mathcal{C} $ contains all intersections of its maximal codewords.} are both open convex and closed convex. They also gave an upper bound for the minimal embedding dimension.
In 2013 Carina Curto et al. \cite{curto2013neural} explored this topic in an algebraic sense. For a given code $ \mathcal{C} $ they defined a ring structure called the neural ring $ \ring{\mathcal{C}}=\mathbb{F}_2[x_1,x_2,\dots,x_n]/I_\mathcal{C}, $ where $ I_\mathcal{C}=\{f\in\mathbb{F}_2[x_1,x_2,\dots,x_n] \ |\ f(c)=0 \text{ for all } c\in \mathcal{C}\}. $ For any codeword $ c, $ the characteristic function $ p_c $\footnote{The characteristic function is given by $p_c(v)= \begin{cases}
1 & \text{ if } v=c \\ 0 & \text{ otherwise}
\end{cases}. $} has $ \underset{c_i=1}{\Pi}x_i\underset{c_j=0}{\Pi}(1-x_j) $ as its polynomial form. They further defined the neural ideal $ J_\mathcal{C} =\langle \{p_c\ |\ c\not \in \mathcal{C}\}\rangle, $ which is closely related to the Stanley--Reisner ideal \cite{miller2004combinatorial}. They then defined a canonical form for the neural ideal and gave an algorithm to find it. Ethan Petersen et al. \cite{petersen2018neural} worked on algorithms for canonical forms of $ J_\mathcal{C}: $ they provided a SageMath package containing several algorithms related to the canonical form of neural ideals, including an explicit algorithm which updates a given canonical form after adding another codeword to the code $ \mathcal{C}. $
Curto and Youngs \cite{curto2020neural} discuss ring homomorphisms between two neural rings. They proved that there is a 1-1 correspondence between code maps $ q:\mathcal{C}\rightarrow \mathcal{D} $ and ring homomorphisms $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}}. $ The map $ q $ associated with the ring homomorphism $ \phi $ is usually denoted by $ q_\phi $ and is called the associated code map. They also showed that $ \ring{\mathcal{C}} \cong \ring{\mathcal{D}} $ if and only if $ |\mathcal{C}|=|\mathcal{D}|. $ This means that the neural ring loses information about the codewords present in the code and only retains the cardinality of the code. This led Curto and Youngs \cite{curto2020neural} to restrict the class of ring homomorphisms. The new class, which does depend on the codewords, is called neural ring homomorphisms. The authors then gave a criterion to determine whether a given ring homomorphism $ \phi:\ring{\mathcal{D}}\rightarrow \ring{\mathcal{C}} $ is a neural ring homomorphism, depending on how the associated code map behaves. Lastly, Curto and Youngs \cite{curto2020neural} bridge the idea of codes being open convex and neural ring homomorphisms.
As mentioned before, Carina Curto et al. \cite{curto2013neural} defined the neural ideal. This ideal was further explored by A. Jeffs, M. Omar and N. Youngs \cite{jeffs2018homomorphisms}. They determined all ring homomorphisms $ \mathbb{F}_2[y_1,\dots,y_n] \rightarrow \mathbb{F}_2[x_1,\dots,x_m] $ that preserve neural ideals, and showed that only specific code maps satisfy this condition. They also described how these neural-ideal-preserving maps realize the codes.
This paper is structured as follows. In section 2 our main result is Theorem \ref{mainth2}: we state and prove that the classes of open convex and closed convex codes coincide in dimension 1. For dimension 2 we work with the conjecture of Megan K. Franke and Samuel Muthiah \cite{franke2018every}, which states that a code that is open convex with minimal dimension 2 is convex with minimal dimension 2. We provide a few classes of examples in Proposition \ref{lemcon} and Remark \ref{remarksec3} which satisfy the conjecture. In section 3 we introduce a new class of codes called doublet maximal codes, and in Theorem \ref{tdmc} we show that for a doublet maximal code, being open convex and being max-intersection complete coincide. In section 4 we relate, via a code map, the max-intersection completeness of two codes in Theorem \ref{thmic}. In the last section we take up the task of counting neural ring endomorphisms for a special class of codes which we call circulant codes; we count the neural ring endomorphisms for many codes in this class.
\section{Counting neural ring endomorphisms}
In this section we count neural ring endomorphisms of a code $ \mathcal{C}. $ We restrict to codes $ \mathcal{C} $ on $ n $ neurons with $ |\mathcal{C}|=n. $ Denote by $ \nrh{\mathcal{C}} $ the collection of all neural ring endomorphisms of $ \ring{\mathcal{C}}. $ Before we proceed, let us observe a relation between $\nrh{\mathcal{C}} $ and $ \nrh{\mathcal{C}'} $ when $ \mathcal{C}' $ is obtained from $ \mathcal{C}. $
\begin{obs} \label{obsnrh}
Let $ \mathcal{C}$ be a code on $ n $ neurons, and let $ \mathcal{C}' $ be the code obtained from $ \mathcal{C} $ after applying any of the elementary code maps (1) to (3) of Theorem \ref{thmnycc}. We observe that $ \nrh{\mathcal{C}} $ has a monoid structure with composition as the binary operation, and that there is a one-one correspondence between $ \nrh{\mathcal{C}} $ and $ \nrh{\mathcal{C}'}. $
Let $ q_\alpha: \mathcal{C}\rightarrow \mathcal{C}'$ be any such elementary code map. Then by Theorem \ref{thmnycc} we have that the corresponding neural ring homomorphism $ \alpha: \ring{\mathcal{C}'}\to \ring{\mathcal{C}} $ is a neural ring isomorphism. Define the correspondence by:
\begin{align*}
\Phi: \nrh{\mathcal{C}}& \rightarrow \nrh{\mathcal{C}'}\\
\phi& \mapsto \alpha^{-1}\circ \phi \circ \alpha
\end{align*}
This map $ \Phi $ is well defined, as the composition of neural ring homomorphisms is a neural ring homomorphism. We can easily observe that $ \Phi $ is a bijection. Therefore we have $ \vert \nrh{\mathcal{C}} \vert = \vert \nrh{\mathcal{C}'} \vert.$ Moreover, $ \Phi $ is a monoid isomorphism, since it preserves composition and identity.
\end{obs}
\subsection{Neural ring homomorphisms on special codes}
Let $ \mathcal{C} =\{c_1^{},c_2^{},\dots,c_n^{}\} $ be a code on $ n $ neurons, with $ c_i^{}=(c_{i1}^{}c_{i2}^{}\cdots c_{in}^{}) $ the binary representation of the codeword $ c_i^{}, $ where $ c_{ij}^{}\in\{0,1\}. $ Denote by $ \rh{\mathcal{C}} $ the collection of all ring homomorphisms from $ \ring{\mathcal{C}} $ into itself. In this section we first define three different categories of maps in $ \rh{\mathcal{C}}, $ obtained using basic properties of ring homomorphisms. Firstly, the ring $ \ring{\mathcal{C}} $ can be seen as an $ n $-dimensional vector space over $ \mathbb{Z}_2. $ Therefore $ \ring{\mathcal{C}} $ is isomorphic to $ \sum_{}^n \mathbb{Z}_2 $ ($ n $ direct sums of $ \mathbb Z_2$). Also, the characteristic functions $ \{\rho_{c_i}\}_{i=1}^n $ form a basis of $ \ring{\mathcal{C}}. $ We define the ring homomorphisms on these basis elements; being ring homomorphisms, they preserve the multiplicative structure. Further, in this section we suppress $ c $ and write $ \rho_{c_i} $ simply as $ \rho_i $. In 1974 Carlton J. Maxson \cite{maxson1974endomorphism} explored the semigroup of endomorphisms of a ring. He proved that the semigroup of endomorphisms of $ \sum_{}^n \mathbb{Z}_2 $ corresponds to the partial functions from $ [n] $ into itself, and that the endomorphisms which preserve unity correspond to the functions from $ [n] $ into itself. The former has cardinality $ (n + 1)^n $ and the latter $ n^n, $ giving $ |\rh{\mathcal{C}}|=n^n. $
Let us now describe an arbitrary map $ \phi \in \rh{\mathcal{C}}. $ Since $ \ring{\mathcal{C}} $ is a vector space, we first determine $ \phi $ on the basis elements $ \{\rho_i\}_{i=1}^n. $ Let $ \phi $ map a basis element $ \rho_i $ to $ \sum_{j=1}^n a_{ij}^{} \rho_j^{}, $ where $ a_{ij}^{}\in\mathbb Z_2=\{0,1\}. $ The sum $ \sum_{j=1}^n a_{ij}^{} \rho_j^{} $ can be seen as the dot product of $ a_i $ and $ P, $ where $ a_i=(a_{i1}^{},a_{i2}^{},\dots,a_{in}^{}) \in \{0,1\}^n $ and $ P=(\rho_1,\rho_2,\dots,\rho_n). $ Rewriting the map, we get $ \rho_i \mapsto a_i\cdot P. $ We say that $ \phi $ is determined by these vectors $ a_i\ (\phi \leftrightarrow \{a_i\}_{i\in[n]}^{}). $ Since the map $ \phi $ is a ring homomorphism, it preserves the multiplication in $ \ring{\mathcal{C}}. $ We now derive conditions on the vectors $ a_i $ which ensure that $ \phi $ preserves multiplication.
We will use the following facts, details of which can be found in the paper ``Neural ring homomorphisms and maps between neural codes'' by Carina Curto and Nora Youngs \cite{curto2020neural}:
\begin{enumerate}
\item $ \rho_i \rho_j =\begin{cases}
0 \ \ &\text{ if } i \not =j \\
\rho_i \ \ &\text{ if } i=j.
\end{cases} $
\item $\sum_{i=1}^{n}\rho_i=1_{\ring{\C}}.$
\end{enumerate}
Using these two facts we make the following remarks. Before that, we fix a few notations.
\underline{\textbf{Notations:}}
\begin{enumerate}
\item $ P $ denotes the vector $ (\rho_1,\rho_2,\dots,\rho_n) $
\item $ \phi \leftrightarrow \{a_i\}_{i\in[n]}^{}: \ a_i=(a_{i1}^{},a_{i2}^{},\dots,a_{in}^{}) $ are the set of vectors that determine $ \phi. $
\item $ |a_i|: $ This notation refers to the number of ones that $ a_i $ contains
\end{enumerate}
\begin{remark} \label{obsrh}
\begin{enumerate}
\item $ (a_i\cdot P)(a_j\cdot P) =\sum_{l=1}^n a_{il}^{} \rho_l^{} \sum_{k=1}^n a_{jk}^{} \rho_k^{}=\sum_{r=1}^n b_{ijr}^{} \rho_r^{},$
where $ b_{ijr}^{}=a_{ir} ^{}a_{jr}^{}.$
Let $ b_{ij}=(b_{ij1}^{},b_{ij2}^{},\dots,b_{ijn}^{}) .$
Then we get $ (a_i\cdot P)(a_j\cdot P)=b_{ij}\cdot P $
\item For some $ i\not=j\in [n] $ we have $b_{ij}\cdot P=(a_i\cdot P)(a_j\cdot P)=\phi(\rho_i)\phi(\rho_j)=\phi(\rho_i\rho_j)=\phi(0)=0 .$ Therefore $ \sum_{k=1}^{n} b_{ijk} \rho_k=0$. So, we get $ b_{ijk}=0 $ for all $ k. $
\item Suppose for some $ i,k\in[n] $ we have $ a_{ik}=1. $ Then, as $ 0=b_{ijk}=a_{ik}a_{jk} $ for all $ j\not=i\in [n], $ we have $ a_{jk}=0. $ This means that for a given coordinate $ k\in[n] $ there is at most one vector $ a_i $ such that $ a_{ik}=1. $ So the number of ones in all the $ a_i $'s together is at most $ n. $ Therefore we can \label{reml} say that $ \sum_{i=1}^n |a_i|\leq n. $
\item Since $ 1_{\ring{\C}}=\sum_{i=1}^{n} \rho_i $ and our ring homomorphisms preserve unity by definition, applying $ \phi $ on both sides gives $ 1_{\ring{\C}}=\sum_{i=1}^{n} (a_i\cdot P). $ Expanding, $ 1_{\ring{\C}}=\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}^{}\rho_j^{} = \left(\sum_{i=1}^n a_{i1}\right)\rho_1+\left(\sum_{i=1}^n a_{i2}\right)\rho_2+\dots+\left(\sum_{i=1}^n a_{in}\right)\rho_n.$
Therefore, comparing coefficients on both sides, we get $ \sum_{i=1}^n a_{ij}=1 $ in $ \mathbb{Z}_2 $ for all $ j\in [n]. $ This means that for a given coordinate $ k\in[n] $ there is at least one vector $ a_i $ such that $ a_{ik}=1. $ So the number of ones in all the $ a_i $'s together is at least $ n, $ i.e. $ \sum_{i=1}^n |a_i|\geq n. $ This and observation \ref{reml} give us $ \sum_{i=1}^{n}|a_i|= n. $
\item If there is a vector $ a_i $ with $ |a_i|=r, $ then from the previous observations there are at least $ r-1 $ indices $ j $ such that $ a_j $ is a zero vector. In particular, if there exists an $ i\in [n] $ with $ |a_i|=n, $ that is, $ a_i $ is the all-ones vector, then $ a_j $ is a zero vector for all $ j\not=i. $
\end{enumerate}
\end{remark}
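The observations above say precisely that every coordinate $ k $ carries exactly one $ 1 $ across the rows $ a_1,\dots,a_n, $ i.e. the supports of the $ a_i $'s partition $ [n] $ and $ \phi $ corresponds to a function $ f:[n]\to[n] $ with $ a_i $ supported on $ f^{-1}(i). $ The following brute-force Python check (ours) confirms for $ n=3 $ that exactly $ n^n=27 $ matrices satisfy the orthogonality and unity conditions, matching the count cited from \cite{maxson1974endomorphism}.
\begin{verbatim}
# Brute force over all 0/1 matrices (rows a_1..a_n) for n = 3.
from itertools import product

n = 3
count = 0
for rows in product(product((0, 1), repeat=n), repeat=n):
    # orthogonality: a_i and a_j share no 1 in any coordinate (i != j)
    orth = all(not (rows[i][k] and rows[j][k])
               for i in range(n) for j in range(i + 1, n)
               for k in range(n))
    # unity preserved: the rows sum to the all-ones vector over Z_2
    unit = all(sum(rows[i][k] for i in range(n)) % 2 == 1
               for k in range(n))
    if orth and unit:
        count += 1
print(count, n ** n)  # 27 27
\end{verbatim}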
We are finally ready to define three classes of maps in $ \rh{\mathcal{C}}. $
\begin{definition}
\begin{enumerate}
\item \textbf{Basis permutation maps (BPM):} We call an element $ \phi\in \rh{\mathcal{C}} $ a \textit{basis permutation map} if $ |a_i|=1 $ for all $ i. $ It is easy to see that there are $ n! $ such maps. We denote by $ \bpm{\mathcal{C}} $ the collection of all basis permutation maps from $ \ring{\mathcal{C}} $ into itself.
\item \textbf{Unity maps (UM):} We call $ \phi\in \rh{\mathcal{C}} $ a \textit{unity map} if there exists an $ i $ such that $ a_i $ is the all-ones vector, i.e. $ |a_i|=n. $ From Remark \ref{obsrh} we know that all the other vectors determining $ \phi $ are then zero vectors. Therefore there are exactly $ n $ such maps. We denote by $ \um{\mathcal{C}} $ the collection of all unity maps from $ \ring{\mathcal{C}} $ to itself.
\item \textbf{Non BPM and non UM maps:} These are all the other maps in $ \rh{\mathcal{C}}, $ so there are $ n^n-n!-n $ of them. Let $ \psi $ be a map in this class. Since $ \psi $ is not a BPM, there exists at least one $ i\in [n] $ such that $ |a_i|\geq 2, $ and therefore at least one other vector $ a_j $ must be a zero vector. So we can also describe this class as the non-unity maps with at least one $ a_i =0. $
\end{enumerate}
\end{definition}
\begin{eg}
Let $ \mathcal{C} $ be a code on $ 3 $ neurons with $ |\mathcal{C}|=3. $ We know that $ \{\rho_1,\rho_2,\rho_3\} $ generates $ \ring{\mathcal{C}}. $ We give 3 different ring endomorphisms on $ \ring{\mathcal{C}}. $
\begin{enumerate}
\item Let, $ a_1=(0,1,0),a_2=(0,0,1) $ and $ a_3=(1,0,0). $ Therefore the map $ \phi $ given by $ \{a_i\}_{i\in[3]} $ is a basis permutation map. Moreover we see $\phi $ maps basis as follows: $ \rho_1\mapsto\rho_2,\ \rho_2\mapsto\rho_3,\ \rho_3\mapsto \rho_1. $
\item Let, $ a_1=(0,0,0),a_2=(1,1,1) $ and $ a_3=(0,0,0). $ Therefore the map $ \phi $ given by $ \{a_i\}_{i\in[3]} $ is a unity map. Moreover we see $\phi $ maps basis as follows: $ \rho_1\mapsto 0,\ \rho_2\mapsto\rho_1+\rho_2+\rho_3= 1_{\ring{\C}},\ \rho_3\mapsto 0. $
\item Let, $ a_1=(1,0,1),a_2=(0,0,0) $ and $ a_3=(0,1,0). $ Therefore the map $ \phi $ given by $ \{a_i\}_{i\in[3]} $ is a Non BPM and Non UM map. Moreover we see $\phi $ maps basis as follows: $ \rho_1\mapsto \rho_1+\rho_3,\ \rho_2\mapsto 0,\ \rho_3\mapsto \rho_2. $
\end{enumerate}
\end{eg}
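The class of a map can be read off directly from the weights $ |a_i|. $ A tiny Python helper (ours), applied to the three examples above:
\begin{verbatim}
def classify(a):
    # a: list of 0/1 row vectors a_1..a_n representing phi
    n = len(a)
    weights = [sum(row) for row in a]
    if all(w == 1 for w in weights):
        return "BPM"
    if any(w == n for w in weights):
        return "UM"
    return "non BPM and non UM"

print(classify([(0,1,0), (0,0,1), (1,0,0)]))  # BPM
print(classify([(0,0,0), (1,1,1), (0,0,0)]))  # UM
print(classify([(1,0,1), (0,0,0), (0,1,0)]))  # non BPM and non UM
\end{verbatim}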
\begin{remark}
Let $ \phi\in\rh{\mathcal{C}} $ be a unity map. Then for any $ x_j=\sum_{c_{ij}=1} \rho_i $ we get $ \phi(x_j)\in \{0,1\} $ for all $ j\in [n], $ because $ \phi(\rho_i^{})\in \{0,1\} $ for every $ i. $ Therefore, irrespective of the code, all \textit{unity maps} are neural ring homomorphisms. So, given a code $ \mathcal{C} $ of cardinality $ n $ on $ n $ neurons, we have $ |\nrh{\mathcal{C}}| \geq n. $
\end{remark}
\subsection{Circulant codes}
Consider the codeword on $ n $ neurons given by $ c_1=(10\cdots0), $ i.e. $ 1 $ followed by $ n-1 $ zeros. Shifting the $ 1 $ to the right generates the other codewords; in other words, $ c_i $ is the codeword containing $ 1 $ in the $ i $th place and $ 0 $ everywhere else. Clearly there are exactly $ n $ such codewords. Denote by $ \mathcal{C} $ the code $\mathcal{C}=\{c_i\}_{i=1}^n. $ If we write down a matrix whose rows are the entries of the codewords, we get an order-$ n $ \textit{circulant matrix}\footnote{A circulant matrix of order $ n $ is a matrix in which each row is shifted one element to the right with respect to the previous row. Note that one row is enough to determine the entire circulant matrix, as the rest can be obtained iteratively by shifting to the right.} with entries 0 and 1. For any code, we call such a matrix the correspondent matrix of the code; accordingly, we call this code a circulant code. Similarly one could have started with $ c_1=(1100\cdots0), $ i.e. two 1's followed by zeros, and would still obtain a circulant matrix. We give a generalized definition of such codes as follows.
\begin{definition}[circulant codes]
A code $ \mathcal{C}=\{c_1,c_2,\dots,c_n\} $ on $ n $ neurons is called a \textit{circulant code} if the correspondent $ n\times n $ matrix of the code is circulant. We further say the \textit{circulant code has support $ p\ (1\leq p < n) $} if $ c_p=(11\cdots10\cdots0) $ (i.e. $ p $ consecutive ones followed by zeros) and the other $ c_i $'s are simply the $ i^{\text{th}} $ rows of the correspondent (circulant) matrix of the code.
\noindent Note that $ |\operatorname{supp}(c_i)|=p $ for all $ i\in[n]. $ Also, we do not consider $ p=n, $ since in that case $ \mathcal{C}=\{(11\cdots11)\} $ is a code with cardinality $ 1, $ whereas we are interested only in codes on $ n $ neurons with cardinality $ n. $
\end{definition}
\begin{eg}
The following are a few examples of \textit{circulant codes}:
\begin{enumerate}
\item $ \{100,010,001\} $ is a circulant code with support $ p=1 $ on neurons $ n=3. $
\item $ \{1001,1100,0110,0011\} $ is a circulant code with support $ p=2 $ on neurons $ n=4. $
\end{enumerate}
\end{eg}
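Generating circulant codes is straightforward: the Python sketch below (ours) builds the correspondent circulant matrix row by row, with row $ r $ (0-indexed) representing $ c_{r+1}, $ and reproduces both examples above.
\begin{verbatim}
def circulant_code(n, p):
    # circulant code of support p on n neurons; c_p = 1...10...0
    code = []
    for r in range(n):
        row = [0] * n
        for k in range(p):
            row[(r - p + 1 + k) % n] = 1
        code.append(tuple(row))
    return code

print(circulant_code(3, 1))  # [(1,0,0), (0,1,0), (0,0,1)]
print(circulant_code(4, 2))  # [(1,0,0,1), (1,1,0,0), (0,1,1,0), (0,0,1,1)]
\end{verbatim}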
\begin{remark}
We have fixed the order of the $ c_i $'s in a circulant code $ \mathcal{C}. $ For example, consider the code $ \mathcal{C}=\{101,110,011\} $ and $ \mathcal{C}'=\{110,011,101\} $ with elements reordered. Then $ \mathcal{C} $ is a circulant code on $ n=3 $ neurons with support $ p=2, $ whereas $ \mathcal{C}' $ is no longer a circulant code. But $ q_\alpha=(123):\mathcal{C}\rightarrow\mathcal{C}' $ is a permutation on neurons which gives a neural ring isomorphism between $ \ring{\mathcal{C}} $ and $ \ring{\mathcal{C}'}. $ By Observation \ref{obsnrh} we then get $ |\nrh{\mathcal{C}}|=|\nrh{\mathcal{C}'}|. $
\end{remark}
Our aim is to investigate $ \nrh{\mathcal{C}} $ and give its cardinality for \textit{circulant codes}. A map $ \phi\in \rh{\mathcal{C}} $ belongs to $ \nrh{\mathcal{C}} $ if $ \phi(x_i)\in \{x_i\ |\ i\in [n]\} \cup \{0,1\}$ for all $ i\in[n]. $\footnote{Since these are maps from the ring to itself, $ y_i=x_i. $} So it is important to understand what the $ x_i $'s are for circulant codes. First note that the number of terms in $ x_i $ is the number of 1's in the $ i^{\text{th}} $ column of the correspondent matrix of the code. For a circulant code the correspondent matrix is circulant, and in a circulant matrix all row sums and column sums equal the same constant. Therefore, in a circulant code of support $ p $ the number of terms in $ x_i $ is the same for all $ i\in [n]: $ each $ x_i $ is a sum of $ p $ terms. Moreover, $ x_i=\sum_{k=0}^{p-1} \rho_{\mdsum{i+k}}^{}, $ where $$ \big(\big)_\oplus^{}: \{-n+1,\dots,0\} \cup[2n-1]\to [n] \text{ is given by } \left( i \right)_{\oplus}
^{} = \begin{cases}
i & \text{ if } 0<i \leq n\\
j & \text{ if } i>n \text{ and }i=n+j
\\ k & \text{ if } -n<i\leq 0 \text{ and } i=-n+k
\end{cases}. $$
\noindent Note that the argument $ 2n $ never occurs in the expression of any $ x_i, $ since $ i+k\leq n+p-1<2n $ as $ p<n. $ We refer to $\rho_{\mdsum{i+j}} $ as the $ (j+1)^{\text{th}} $ term in the expression of $ x_i. $ Naturally, $ \rho_{i}^{}$ and $ \rho_{\mdsum{i+p-1}} $ are the first and last (or $ p^{\text{th}} $) terms in the expression of $ x_i, $ respectively.
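The wrap-around convention $ \big(\big)_\oplus^{} $ and the supports of the $ x_i $'s are easy to encode; in the Python sketch below (ours), the function names are our own.
\begin{verbatim}
def wrap(i, n):
    # the map (.)_oplus on 1-indexed neuron labels
    return ((i - 1) % n) + 1

def x_support(i, n, p):
    # indices of the rho's appearing in x_i for a circulant code
    return [wrap(i + k, n) for k in range(p)]

# For n = 5, p = 3: x_4 = rho_4 + rho_5 + rho_1
print(x_support(4, 5, 3))  # [4, 5, 1]
\end{verbatim}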
\begin{proposition}
If $ \mathcal{C} $ is a circulant code with support $ p=1 $ or $ n-1, $ then $ |\nrh{\mathcal{C}}|=n!+n. $ \label{propcirnrh}
\end{proposition}
\begin{proof}
\begin{caseof}
\casea{$ p=1 $}{When $p=1 $ we have $ x_i=\rho_i $ for all $ i. $ Given any $ \phi \in \bpm{\mathcal{C}} $ we get $ \phi(x_i)=\phi(\rho_i)=\rho_j=x_j $ for some $ j\in [n]. $ This implies that all the basis permutation maps are in $ \nrh{\mathcal{C}}. $ Moreover, we already know that $ \um{\mathcal{C}}\subseteq \nrh{\mathcal{C}} $ for any code $ \mathcal{C}. $ So we have $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}}\subseteq \nrh{\mathcal{C}}. $ It remains to show that no non BPM and non UM map is in $ \nrh{\mathcal{C}}. $ Given any non BPM and non UM map $ \psi, $ there exists an $ i\in [n] $ such that $ |a_i|=k $ with $ 2\leq k \leq n-1.$ Consider $ \psi(x_i)=\psi(\rho_i)=a_i\cdot P. $ This implies $ \psi(x_i) $ has $ k $ terms, whereas each $ x_i=\rho_i $ has one. Thus $ \psi(x_i)\not \in \{x_i\ |\ i\in [n]\} \cup \{0,1\}, $ and therefore $ \psi\not \in \nrh{\mathcal{C}}. $ Hence we have $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}} =\nrh{\mathcal{C}} $ and the result follows. }
\casea{$ p=n-1 $}{When $ p=n-1 $ we get $x_i=\sum_{k=0}^{n-2} \rho_{(i+ k)_{\oplus}^{} } ^{}. $ First observe that if $ \phi\in \bpm{\mathcal{C}} $ then $ \phi(x_i) $ also has exactly $ n-1 $ terms, because $ \phi, $ being a BPM, restricts to a bijection on the basis elements. Next, choosing $ n-1 $ terms out of the given $ n $ gives $ \binom{n}{n-1}=n $ choices, and all these $ n $ choices occur among the $ x_i $'s, as there are exactly $ n $ distinct $ x_i $'s. Therefore there exists a $ j\in [n] $ such that, after rearranging the terms of $ \phi(x_i), $ we get $ \phi(x_i)=x_j. $ Hence we have $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}}\subseteq \nrh{\mathcal{C}}. $ It is once again left to show that a non BPM and non UM map $ \psi $ is not in $ \nrh{\mathcal{C}}. $ As in Case 1, there exists $ i $ such that $ |a_i|=k\ (2 \leq k\leq n-1), $ where $ a_i $ is a vector that determines $ \psi. $ Assume there are $ r $ vectors $ \{a_{r_1},a_{r_2},\dots, a_{r_r}\} $ which carry the remaining $ n-k $ ones, so that $ n-r-1 $ vectors $ \{a_{t_1},a_{t_2},\dots, a_{t_{n-r-1}}\} $ are zero. As all term combinations occur among the $ x_i $'s, there exists $ j\in [n] $ such that $ x_j= \rho_{r_1}+\rho_{r_2}+\dots+\rho_{r_r}+\rho_{t_1}+\rho_{t_2}+\dots+\rho_{t_{n-r-1}}.$ From this we see that $ \psi(x_j) $ has $ n-k $ terms in its summation, since the supports of the $ a_{r_m} $'s are disjoint and together contain $ n-k $ ones. As $ 2\leq k\leq n-1 $ we have $ 1\leq n-k \leq n-2, $ so $ \psi(x_j)\not\in \{x_i\ |\ i\in [n]\} \cup \{0,1\}. $ Therefore $ \psi \not\in \nrh{\mathcal{C}}. $ Hence we have $ \bpm{\mathcal{C}} \cup \um{\mathcal{C}} =\nrh{\mathcal{C}} $ and the result follows. }
\end{caseof}
\vspace{-0.7cm}
\end{proof}
\begin{remark}\label{rem42}
Consider the code $ \mathcal{C}=\{1001,1100,0110,0011\}.$ We observe that for this circulant code with $ p=2 $ there are some $ \phi \in \bpm{\mathcal{C}} $ with $ \phi \not \in \nrh{\mathcal{C}}: $ we know that $ |\bpm{\mathcal{C}}|=24, $ but only 8 of these maps are in $ \nrh{\mathcal{C}}. $ The other interesting fact is that some non BPM and non UM maps are, for this code, present in $ \nrh{\mathcal{C}}. $ By brute force we computed that there are 24 such non BPM and non UM maps, giving $ |\nrh{\mathcal{C}}|=36 > 4!+4 $ (this computation is reproduced in the sketch after this remark). Also, the number of BPMs in $ \nrh{\mathcal{C}} $ is $ 8=2\cdot4=2n $ for $ n=4. $ So we investigate whether this holds for all $ n. $
\end{remark}
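The brute-force computation mentioned in the remark can be reproduced with a short Python script (ours). Using the correspondence $ \phi \leftrightarrow f:[n]\to[n] $ with $ a_i $ supported on $ f^{-1}(i), $ the support of $ \phi(x_j) $ is $ f^{-1}(I_j), $ where $ I_j $ indexes the codewords whose $ j $-th coordinate is $ 1; $ the map is a neural ring homomorphism exactly when each such support is empty, all of $ [n], $ or equal to some $ I_l. $
\begin{verbatim}
from itertools import product

def circulant_code(n, p):  # as in the earlier sketch
    return [tuple(1 if (j - r + p - 1) % n < p else 0
                  for j in range(n)) for r in range(n)]

def nrh_count(code):
    n = len(code)
    # I[j]: 0-based indices i of codewords c_i with j-th coordinate 1
    I = [frozenset(i for i in range(n) if code[i][j]) for j in range(n)]
    allowed = set(I) | {frozenset(), frozenset(range(n))}
    count = 0
    for f in product(range(n), repeat=n):
        # supp(phi(x_j)) = f^{-1}(I_j), since the supports of the a_i
        # partition [n]
        if all(frozenset(k for k in range(n) if f[k] in Ij) in allowed
               for Ij in I):
            count += 1
    return count

print(nrh_count(circulant_code(4, 2)))  # 36, as computed in the remark
print(nrh_count(circulant_code(4, 1)))  # 28 = 4! + 4
\end{verbatim}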
\begin{lemma}
If $ \mathcal{C} $ is a circulant code with support $ p=2 $ then the total number of basis permutation maps present in $ \nrh{\mathcal{C}} $ is $ 2n. $ \label{prp2}
\end{lemma}
\begin{proof}
Let $ \phi\in \bpm{\mathcal{C}}. $ It is enough to consider the restriction of $ \phi $ to the basis elements, as this determines the entire map. For this reason we count the possibilities for where $ \phi $ can map each $ \rho_i. $ Starting with $ \rho_1^{}, $ it is clear that $ \rho_1^{} $ has $ n $ choices. Assume $ \rho_1^{} $ has been mapped to $ \rho_j^{}. $ We have $ x_1^{}=\rho_1^{}+\rho_2^{} $ and $ \phi(x_1^{})\in \{x_i^{}\ |\ i\in [n]\} $
(for a map in $ \bpm{\mathcal{C}}, $ $ \phi(x_i^{}) $ and $ x_i^{} $ have the same number of terms, so $ \phi(x_i^{})\not \in\{0,1\}$).
Therefore $ \phi(x_1^{})=\phi(\rho_1^{}+\rho_2^{})=\rho_j^{}+\phi(\rho_2^{}), $ and for $ \phi(x_1^{})\in\{x_i^{}\ |\ i\in[n]\} $ we must have $ \phi(\rho_2^{}) =\rho_{\mdsum{j+1}}^{} $ or $ \rho_{\mdsum{j-1}}^{}. $
Therefore $ \rho_2^{} $ has 2 choices once $ \rho_1^{} $ is fixed. On fixing $ \rho_2^{} \mapsto \rho_{\mdsum{j-1}}^{} $ we similarly get $ \rho_3^{}\mapsto \rho_j^{} $ or $ \rho_{\mdsum{j-2}}^{}. $
But as $ \phi(\rho_1^{})= \rho_j^{} $ we cannot have $ \phi(\rho_3^{})=\rho_j^{}. $ Therefore $ \rho_3^{} $ has exactly one choice once $ \rho_1^{}$ and $ \rho_2^{} $ are fixed; $ \rho_3^{} $ likewise has one choice when $ \rho_2^{}\mapsto \rho_{\mdsum{j+1}}^{}. $ Continuing this way, in total we have $ 2n $ choices. Hence the result.
\end{proof}
\begin{remark}
In Proposition \ref{propcirnrh} and Lemma \ref{prp2} we counted the number of basis permutation maps that are neural ring homomorphisms for circulant codes with support $ p=1,2$ and $n-1, $ obtaining $ n!, 2n $ and $n!, $ respectively. We further calculated the count for $ p=3 $ and again obtained $ 2n. $ We strongly believed the pattern remains the same as $ p $ increases, which led us to the following theorem.
\end{remark}
\begin{theorem} \label{thnrhbpm}
Let $ \mathcal{C} $ be a circulant code with support $ p\ (1\leq p < n). $ The total number of basis permutation maps present in $ \nrh{\mathcal{C}} $ is given by $ \begin{cases}
n! &\text{ if } p=1 \text{ or } p=n-1 \\
2n &\text{ if } 1<p<n-1
\end{cases}. $
\end{theorem}
\vspace{-0.7cm}
\begin{proof}
\begin{caseof}
\casea{${ p=1\text{ or } n-1.}$}{In this case we get the result using the proof of Proposition \ref{propcirnrh}. } \vspace{-0.2cm}
\casea{$ {p=2} $ }{This is Lemma \ref{prp2}.} \vspace{-0.2cm}
\casea{$ 2<p<n-1 $}{As $ p < n-1 $ and $ x_i=\sum_{k=0}^{p-1}\rho_{\mdsum{i+k}}^{}, $ this gives us the following equations} \begin{align*}
x_1^{}&=\rho_1^{}+\rho_2^{}+\dots+\rho_p^{},\\ x_2^{}&=\rho_2^{}+\rho_3^{}+\dots+\rho_{p+1}^{},\\ x_3^{}&=\rho_3^{}+\rho_4^{}+\dots+\rho_{p+2}^{}
\end{align*} Let $ \phi\in \bpm{\mathcal{C}} $ be a neural ring homomorphism. As seen in the proof of Lemma \ref{prp2}, it is enough to consider the restriction of $ \phi $ to the basis elements. Starting with $ \rho_1^{}, $ it is clear that $ \rho_1^{} $ can be mapped to any of the $ n $ $ \rho_i $'s. Assume $ \rho_1^{} $ has been mapped to $ \rho_j^{} $ for some $ j\in[n]. $
\end{caseof}
\begin{claim} $ \phi(\rho_2^{})= \rho_{\mdsum{{j+1}}}^{}$ or $ \phi(\rho_2^{})=\rho_{\mdsum{{j+(n-1)}}}^{}.$
Suppose not, and let $ \phi(\rho_2^{})= \rho_{\mdsum{{j+k}}}^{},$ where $ k\in[n]\backslash \{1,n-1\}.$
We get $ \phi(x_1^{})=\phi(\rho_1^{}+\rho_2^{}+\dots+\rho_p^{})=\rho_j^{}+\rho_{\mdsum{{j+k}}}^{}+\phi(\rho_3^{})+\dots+\phi(\rho_p^{}). $ As $ \phi $ is a neural ring homomorphism, $ \phi(x_1^{})\in\{x_i^{}\ |\ i\in[n]\}, $ so there exists $ l\in[n] $ such that $ \phi(x_1^{})=x_l^{}. $ Therefore for all $ i\in [n]\backslash[2] $ we get $ \phi(\rho_i^{})=\rho_{r_i}^{} $ such that $ \rho_{r_i}^{} $ is present in the expression of $ x_l^{} $ and $ r_i\not=j,\ \mdsum{j+k}. $ Let, if possible, $ \rho_j^{} $ be the first term in the expression of $ x_l^{}, $ in other words let $ x_l^{}=x_j^{}. $ Consider $ \phi(x_2^{})= \phi(x_{2}^{}+\rho_1^{}-\rho_1^{}) =\phi(\rho_1^{}+\rho_2^{}+ \dots+\rho_p^{}+\rho_{p+1}^{}-\rho_{1}^{}) =x_j^{}-\rho_j^{}+\phi(\rho_{p+1}^{}) = \rho_{\mdsum{{j+1}}}^{}+\dots+ \rho_{\mdsum{j+k}}^{}+\dots+ \rho_{\mdsum{j+(p-1)}}^{} + \phi(\rho_{p+1}^{}). $ As $ \phi(x_2^{}) \in\{x_i^{}\ |\ i\in[n]\}, $ it must be a sum of $ p $ consecutive $ \rho_i $'s. This forces $ \phi(\rho_{p+1}^{})= \rho_j^{} $ or $\phi(\rho_{p+1}^{})= \rho_{\mdsum{j+p}}^{}. $ The former is not possible as $ \phi(\rho_1^{})=\rho_j^{}. $ Therefore $ \rho_{p+1}^{}\mapsto \rho_{\mdsum{j+p}}^{}. $ Next, we look at $ \phi(x_3^{})=\rho_{\mdsum{j+1}}^{}+\dots+\rho_{\mdsum{j+k-1}}^{}+\rho_{\mdsum{j+k+1}}^{}+\dots+\rho_{\mdsum{j+p}}^{}+\phi(\rho_{p+2}^{}). $ For $ \phi(x_3^{}) $ to be some $ x_m^{} $ we would require $ \phi(\rho_{p+2}^{})=\rho_{\mdsum{j+k}}^{}, $ as $\rho_{\mdsum{j+k}}^{} $ is the missing term. But then we would end up with $ \phi(\rho_{p+2}^{})=\phi(\rho_2^{}), $ which is a contradiction. Therefore $ x_l^{}\not=x_j^{}. $ We get a similar contradiction if $ \rho_j^{} $ is the last term in the expression of $ x_l^{}. $
Now suppose that $ \rho_j^{} $ is an intermediate term in the expression of $ x_l^{}, $ i.e. let $ x_l^{}=\rho_l^{}+\dots+\rho_{j}^{}+ \rho_{\mdsum{j+1}}^{}+\dots+\rho_{\mdsum{j+k}}^{}+\dots+\rho_{\mdsum{l+p-1}}^{}. $ Then $ \phi(x_2^{})=\rho_l^{}+\dots+ \rho_{\mdsum{j+1}}^{}+\dots+\rho_{\mdsum{j+k}}^{}+\dots+\rho_{\mdsum{l+p-1}}^{}+\phi(\rho_{p+1}^{}), $ so for $ \phi(x_2^{})\in\{x_i^{}\ |\ i\in[n]\} $ we need $ \phi(\rho_{p+1}^{})=\rho_j^{}. $ But this would give $ \phi(\rho_1^{})=\phi(\rho_{p+1}^{}), $ which is a contradiction. Hence the claim.
\end{claim}
Therefore $ \phi $ maps $ \rho_2^{}$ to either $ \rho_{\mdsum{j+1}}^{} $ or $ \rho_{\mdsum{j+(n-1)}}^{}; $ in other words, $ \rho_2^{} $ is mapped to a basis element adjacent to $ \phi(\rho_1^{}).$ Similarly we see that $ \rho_3^{} $ has 2 possibilities: it can be mapped to the basis elements adjacent to $\phi( \rho_2^{}). $ Fix $ \rho_2^{}\mapsto \rho_{\mdsum{j+1}^{}}; $ then we get $ \rho_3^{}\mapsto\rho_{\mdsum{j+2}^{}} $ or $ \rho_3^{}\mapsto\rho_{j}^{}. $ The latter is not possible as $ \phi(\rho_1^{})=\rho_j^{}. $ Even if $ \rho_2^{}\mapsto \rho_{\mdsum{j+(n-1)}}^{}, $ we get that $ \rho_3^{} $ can only be mapped to $ \rho_{\mdsum{j+(n-2)}}^{} $ for the same reason. Therefore $ \rho_3^{} $ has only one choice once $ \phi(\rho_1^{}) $ and $ \phi(\rho_2^{}) $ are fixed. Likewise each $ \rho_i^{},\ i\in [n]\backslash[3], $ has just 1 choice. So the total number of choices for $ \phi $ to be a neural ring homomorphism is $ n\times2\times 1\times\dots\times1=2n. $ Hence the result.
\end{proof}
We know that $ |\nrh{\mathcal{C}}| =n!+n $ for circulant codes with support $ p=1 $ and $ p=n-1 $ by Proposition \ref{propcirnrh}. By Theorem \ref{thnrhbpm} we now get $ |\nrh{\mathcal{C}}| \geq 3n $ for all circulant codes with support $p$ on $n>2 $ neurons. Next we want to understand how the non basis permutation and non unity maps behave on circulant codes with support $ 1<p<n-1. $ Before that we introduce some notation. Let $ y_{i}^{}= \rho_{i1}^{}+\rho_{i2}^{}+\dots+\rho_{ik}^{}$ be a sum of some combination of $ k $ $ \rho_i $'s. We use $ \norm{y_i} $ to denote the number of distinct $ \rho_i $'s in the expression of $ y_i^{}; $ thus $ \norm{y_i}=k. $ Similarly, $\norm{x_i}=p $ for a circulant code of support $ p, $ since $ x_i=\sum_{k=0}^{p-1} \rho_{\mdsum{i+k}}^{}. $ By definition, $ \phi\in \rh{\mathcal{C}} $ is in $ \nrh{\mathcal{C}} $ if $ \phi(x_i)\in\{x_j\ |\ j\in [n]\} \cup\{0,1\} $ for all $ i\in [n]. $ With the notation $ \norm{\cdot} $ we can say that a necessary condition for $ \phi\in \rh{\mathcal{C}}$ to be in $\nrh{\mathcal{C}} $ is: for all $ i\in[n] $ we must have $ \norm{\phi(x_i)}\in\{0,n,\norm{x_j}\}$ for some $ j\in[n]. $ If $ \mathcal{C} $ is a circulant code with support $ p, $ then $ \norm{x_j}=p $ for all $ j\in[n], $ and the necessary condition becomes $ \norm{\phi(x_i)}\in\{0,n,p\} $ for all $ i\in [n]. $ Note that for all $ i\in [n] $ we have $ \phi(\rho_i^{})=a_i\cdot P= \displaystyle\sum_{a_{ij}=1}\rho_j^{}, $ which gives $ \norm{\phi(\rho_i^{})}=|a_i|. $ Also, $ \norm{\phi(x_i)}=\displaystyle\sum_{k=0}^{p-1} \norm{\phi\left( \rho_{\mdsum{i+k}}^{}\right)} = \sum_{k=0}^{p-1} \Big|a_{\mdsum{i+k}}^{}\Big|,$ since the supports of the $ a_i $'s are disjoint.
\begin{theorem}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ p=2. $ If $ n $ is odd then $ |\nrh{\mathcal{C}}|=3n. $ \label{thnp2}
\end{theorem}
\vspace{-0.5cm}
\begin{proof}
\noindent Clearly $ n $ cannot be $ 1 $ or $ 2, $ as $ p=2<n. $ \vspace{-0.2cm}
\begin{caseof}
\casea{$ n=3 $}{As $ p=2=n-1 $ in this case, by Proposition $ \ref{propcirnrh} $ we already know that $ |\nrh{\mathcal{C}}|=3!+3=3n. $ Hence the proof.} \vspace{-0.3cm}
\casea{$ n>3 $}{From the Lemma $ \ref{prp2} $ we get that the total basis permutation maps that are neural ring homomorphisms are $ 2n. $ We already know that there are $ n $ unity maps and all are in $ \nrh{\mathcal{C}}. $ Therefore we have $ |\nrh{\mathcal{C}}| \geq 3n . $ We are only left to show that there are no more neural ring homomorphisms.
}
\end{caseof}\vspace{-0.2cm}
Let $ \phi $ be a non BPM and non UM map. Suppose, if possible, that $ \phi $ is a neural ring homomorphism, with $ \{a_i\}_{i\in [n]} $ the vectors that represent $ \phi. $ As $ \phi $ is a non BPM and non UM map, we know that there exists $ m\in[n] $ such that $ |a_m|\geq 2. $
\begin{claim} $ \textbf{For all } i, \textbf{ we have } |a_i|\leq 2. $ Suppose not. Then there exists $ j $ such that $ |a_j|=k>2. $ Also, as $ \phi $ is a non unity map, we have $ k<n. $ We know that $ x_j^{}=\rho_j^{}+\rho_{\mdsum{{j+1}^{}}}^{}. $ By the necessary condition for $ \phi\in \nrh{\mathcal{C}}, $ we have $ \norm{\phi(x_j)}=0,2 $ or $ n. $ But $ \norm{\phi(\rho_j)}=|a_j|=k>2,$ so the only possibility is $ \norm{\phi(x_j)}=n. $ This gives $ \Big|a_{\mdsum{{j+1}}}^{}\Big|=\norm{\phi(\rho_{\mdsum{{j+1}}^{}})}=n-k. $ Also, as $ |a_j|+\Big|a_{\mdsum{{j+1}}}^{}\Big|=n, $ we get $ |a_i|=0 $ for all $ i\not=j,\ \mdsum{j+1}. $ Consider $ \phi \left( x_{\mdsum{{j-1}^{}}}^{}\right) = \phi(\rho_{\mdsum{j-1}}^{})+\phi(\rho_j^{})= \phi\left( {\rho_j^{}}\right). $ Therefore $ \Big\Vert{\phi\left( x_{\mdsum{{j-1}^{}}}^{}\right) }\Big\Vert=\norm{\phi(\rho_j)}=k\not=0,2 $ or $ n, $ as $ 2<k<n. $ This contradicts the necessary condition for $ \phi\in\nrh{\mathcal{C}}. $ Hence the claim.
\end{claim}
With the claim we get $ |a_m|=2. $ Suppose there exists some $ j\in[n] $ such that $ |a_j|=1. $ As $ \norm{\phi(\rho_j)}=|a_j|=1, $ we get $ \norm{\phi(x_j)} \not=0.$ Also, since $ |a_i|\leq 2 $ for all $i\in [n], $ we have $ \norm{\phi(x_j^{})}=|a_j|+\Big|a_{\mdsum{j+1}^{}}\Big|=1+\Big|a_{\mdsum{j+1}^{}}\Big|\leq 1+2=3, $ and therefore $ \norm{\phi(x_j)} \not=n, $ since $ n>3. $ Thus the necessary condition gives $ \norm{\phi(x_j)}=2, $ and hence $ \norm{\phi(\rho_{\mdsum{{j+1}}}^{})}=1. $ Iteratively we get $ \norm{\phi(\rho_i)}=1=|a_i| $ for all $ i\in[n], $ contradicting the fact that $ |a_m|=2. $ Therefore $ |a_i|=0$ or $ 2 $ for all $i\in[n]. $
Remark \ref{obsrh} gives us that $ \sum_{i=1}^{n}|a_i|=n. $ But the left-hand side is an even number, which forces $ n $ to be even, contradicting the hypothesis that $ n $ is odd. Therefore $ \phi \not \in \nrh{\mathcal{C}}. $ Hence the proof.
\end{proof}
In view of Theorem \ref{thnp2} we further want the count of $ \nrh{\mathcal{C}} $ when $ n $ is even. In Remark \ref{rem42} we saw that for a circulant code $ \mathcal{C} $ with $ n=4 $ and $p=2$ we get $ |\nrh{\mathcal{C}}|=36. $ We now consider $ n\geq 6 $ in the following theorem.
\begin{theorem}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ p=2. $ If $ n=2k $ and $ k\geq 3, $ then $ |\nrh{\mathcal{C}}|=2^2\left(\dfrac{n}{2}\right)!+3n. $\label{thnpk2}
\end{theorem}
\begin{proof}
Let us first count the total number of non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $ Let $ \phi $ be a non BPM and non UM map with $ \{a_i\}_{i\in[n]} $ its representing vectors. As observed in the proof of Theorem \ref{thnp2}, for all $ i\in[n] $ we have $ |a_i|=0 $ or $ 2. $ If $ |a_i|=2=\Big|a_{\mdsum{{i+1}}}^{}\Big|, $ then $ \norm{\phi(x_i)}=4, $ which contradicts the necessary condition for a neural ring homomorphism as $ n>4. $ This implies that no two consecutive $ a_i $'s are both non-zero. Thus if $ |a_1|=2 $ then $ |a_{2m-1}|=2 $ and $ |a_{2m}|=0 $ for all $ m\in[k], $ and similarly if $ |a_2|=2 $ then $ |a_{2m}|= 2$ and $ |a_{2m-1}|=0 $ for all $m\in [k]. $ Therefore when $ \phi \in \nrh{\mathcal{C}} $ there are broadly two types of choices for the vectors that represent it. Let us fix one type and count how many neural ring homomorphisms it corresponds to. By the choice of the $ |a_i| $'s we see that $ \norm{\phi(x_i)}=2 $ for all $i\in [n]. $ This implies that for all $ i\in [n] $ there exists $ j\in[n] $ such that $ \phi(x_i^{})=x_j^{}. $
Assume $ |a_1|=2. $ Consider $\phi(x_1^{})=\phi(\rho_1^{}+\rho_2^{})=(a_1\cdot P)+(a_2\cdot P)=(a_1\cdot P)=\phi(\rho_1^{}). $ Let $ \phi(x_1^{})=x_i^{} $ (say) for some $ i\in [n]. $ Then $ \phi(\rho_{1}^{})=x_i^{} $ and clearly $ \rho_i $ has $ n $ choices. Similarly whenever $ |a_l|=2 $ we get that $ \rho_l^{}\mapsto x_j=\rho_j^{}+\rho_{\mdsum{{j+1}}}^{}. $ In general, we can say $ \phi $ maps every basis element to $ 0 $ or a consecutive\footnote{We consider $ \rho_n^{}+\rho_1^{} $ as a consecutive sum} sum of basis elements. As in this case $ |a_{2m-1}|=2 $ and $ |a_{2m}|=0 $ for all $ m\in [k]. $ So, we have $ \phi(\rho_{2m}^{}) =0$ for all $ m\in[k]. $ And we need to only figure out $ \phi(\rho_{2m-1}^{}).$ We already fixed when $ m=1 $. Next, we look at $ m=3, $ i.e we need to find where $ \rho_3^{} $ is mapped by $ \phi $. Let, if possible $ \rho_3{} \mapsto x_{\mdsum{{i+r}}}^{} $ where $0<r<n $ and $ r$ is odd. Firstly, we note that $ r\not=n-1,$ and $r\not=1, $ as $ x_{\mdsum{{i-1}}}^{} =\rho_{\mdsum{{i-1}}}^{}+\rho_i$ and $ x_{\mdsum{{i+1}}}^{}=\rho_{\mdsum{{i+1}}}^{}+\rho_{\mdsum{{i+2}}}^{}. $ So now as $ r\geq 3 $ we observe that the number of $ \rho_j^{} $'s that are in between $ \rho_{\mdsum{i+1}} $ and $ \rho_{\mdsum{{i+r}}} $ is $ r-2. $
Note that once $ \phi(\rho_{2m-1}) $ is chosen for all $ m\in[k-1]\backslash[2], $ there will still be one $ \rho_l $ left between $ \rho_{\mdsum{i+1}} $ and $ \rho_{\mdsum{{i+r}}}, $ as $ r-2 $ is odd. In other words, this process will exhaust all the sums of consecutive basis elements. Now we have to map $ \rho_{n-1}, $ as $ |a_{n-1}|=2. $ But there is no sum of consecutive basis elements left, meaning there is no choice for $ \phi(\rho_{n-1}^{}). $ Therefore $ \rho_3^{} $ cannot map to $ x_{\mdsum{{i+r}}}^{} $ when $ r $ is odd. Thus $ \phi:\rho_3^{}\mapsto x_{\mdsum{i+r}} $ for some even $ r\geq 2. $ This clearly gives $ \frac{n}{2}-1 =k-1$ choices for the image of $\rho_3^{} $ under $ \phi. $ Similarly we observe that $ \rho_5^{} $ will have $ k-2 $ choices. At the end we see that $ \rho_{n-1} $ has only one pair to choose from, hence just $ 1 $ choice. Thus in total we get $ n(k-1)! $ as the number of possible $ \phi $ that can be neural ring homomorphisms when $ |a_1|=2 $.
Similarly, we get $ n(k-1)! $ as the number of possible $ \phi $ that can be neural ring homomorphisms when $ |a_2|=2. $ Therefore the total number of non BPM and non UM maps that are in $\nrh{\mathcal{C}} $ is $ 2n(k-1)!. $ By Lemma $ \ref{prp2} $ we already know the count of BPMs that are in $ \nrh{\mathcal{C}} $ to be $ 2n. $ Finally, adding the $ n $ unity maps, we get the result.
\end{proof}
Combining the results of Theorem \ref{thnp2} and \ref{thnpk2} together, we can write as $$ |\nrh{\mathcal{C}}|=\begin{cases}
3n \qquad &\text{ if } n \text{ is odd and } n>1.\\
3n+2^2\left(\dfrac{n}{2}\right)! \qquad &\text{ if } n \text{ is even and } n>4
\end{cases}$$ where $ \mathcal{C} $ is a circulant code with support $ p=2. $
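As a quick sanity check of these formulas (a worked instance, not an additional result): for $ n=5 $ we get $ |\nrh{\mathcal{C}}|=3\cdot 5=15, $ while for $ n=6 $ we get $ 3\cdot 6+2^2\cdot 3!=18+24=42 $ and for $ n=8 $ we get $ 3\cdot 8+2^2\cdot 4!=24+96=120. $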
Theorems \ref{thnp2} and \ref{thnpk2} gave us a hint that $ \operatorname{GCD}(p,n) $ could play a vital role in deciding the count of $ \nrh{\mathcal{C}}. $ By brute force we found that for a circulant code with support $ p=3 $ on $ n=3k+1 $ or $ n=3k+2 $ neurons, the number of non BPM and non UM maps in $ \nrh{\mathcal{C}} $ is zero. This led us to think that there are no non BPM and non UM maps in $ \nrh{\mathcal{C}} $ whenever $ \operatorname{GCD}(p,n)=1. $ We first prove a lemma that will be required in the proof.
\begin{lemma}
Let $ \mathcal{C} $ be a circulant code with support $ p>1$ on $ n $ neurons with $ \operatorname{GCD}(p,n)=1$ and $ \phi\in\rh{\mathcal{C}}$ be a non BPM non UM map. If $ \norm{\phi(x_i)}\in\{0,p,n\} $ for all $ i\in[n] $ then $ \norm{\phi(x_i)}=p $ for all $ i\in[n]. $ \label{lemncnrh}
\end{lemma}
\begin{proof}
Let $ \{a_i\}_{i\in[n]} $ be the vectors that represent $ \phi. $ We will first show that $ \norm{\phi(x_i)} =n$ is not possible for any $ i\in[n]. $ Then we will further show $ \phi(x_i)\not=0 $ for all $ i\in [n]. $
Suppose there exists $ j\in[n] $ such that $ \norm{\phi(x_j)}=n. $ Without loss of generality let us assume $ j=1. $ As $ x_1=\rho_1+\dots+\rho_p $ we get $ n=\norm{\phi(x_1)}=|a_1|+|a_2|+\dots+|a_p|. $ This implies for all $ k\in[n]\backslash[p] $ we have $ |a_k|=0. $ Let $ l\in[p] $ be the smallest index such that $ |a_l^{}|\not=0. $ Hence we get $ n=\norm{\phi(x_1)}=|a_l|+\dots+|a_p|. $ \\Consider
\begin{align*}
\norm{\phi(x_1^{})}-\norm{\phi(x_l^{})}=&\Big( |a_1|+\dots+|a_{l}| +\dots+ |a_{p}|\Big) -\left( |a_{l}|+\dots+|a_p|+|a_{\mdsum{p+1}}|+\dots+|a_{\mdsum{l+p-1}}|\right) \\
=& |a_1|+\dots+|a_{l-1}|-(|a_{\mdsum{p+1}}|+\dots+|a_{\mdsum{l+p-1}}|)\\
=& |a_1|+\dots+|a_{l-1}| \qquad (\text{Since } |a_k|=0 \text{ for all } k\in[n]\backslash[p] ) \\
=& 0 \qquad \qquad (\text{Since } l \text{ is the smallest integer such that } |a_l|\not=0 ).
\end{align*} So, we get $ \norm{\phi(x_l^{})}=n. $ Next, we see that $ n= \norm{\phi(x_1)}=|a_l|+\dots+|a_p|.$ This gives us that $ |a_{l+1}|+\dots+|a_p|<n $ as $ |a_l|\not=0. $ Moreover we get $ 0<|a_{l+1}|+\dots+|a_p|<n, $ if not we get $ |a_l|=n $ and that is not possible as $ \phi $ is not a UM map. \\Consider
\begin{align*}
\norm{\phi(x_{l+1})}=&|a_{l+1}|+\dots+|a_p|+ \dots+|a_{\mdsum{l+p}}|\\
=&|a_{l+1}|+\dots+|a_p| \qquad (\text{Since } |a_k|=0 \text{ for all } k\in[n]\backslash[p] )\\
\implies& 0< \norm{\phi(x_{l+1})}<n \quad (\text{Since } 0<|a_{l+1}|+\dots+|a_p|<n)
\end{align*}
And by hypothesis $ \norm{\phi(x_{l+1})}\in\{0,p,n\}. $ Hence we get $ \norm{\phi(x_{\mdsum{l+1}^{}})}=p $ and $ n-p=\norm{\phi(x_l^{})}-\norm{\phi(x_{\mdsum{l+1}^{}})}=|a_l|-|a_{\mdsum{l+p}}|. $ Now, if $ \mdsum{l+p}\in [n]\backslash[p], $ then $ |a_{\mdsum{l+p}}|=0 $ and we get $ |a_l|=n-p. $ Or if $ \mdsum{l+p}\in [p], $ we observe that $ \mdsum{l+p}=l+p-n<l $ as $ p<n; $ thus if $ |a_{\mdsum{l+p}}|\not=0, $ it contradicts the minimality of $ l. $ So in either case we end up getting $ |a_l|=n-p. $ Let $ m\in[p]\backslash[l] $ be the smallest index such that $ |a_m|\not=0. $ Note that as $ |a_l|=n-p $ and $ \sum_{i=1}^{p}|a_i|=n $ we will have $ 0<|a_m|\leq p. $ Suppose $ |a_m|=k <p; $ then $ \norm{\phi(x_{\mdsum{m+1}})}=|a_{m+1}|+\dots+|a_p|+\dots+|a_{\mdsum{m+p}}|=n- \sum_{i=1}^{m}|a_i|=n-(n-p+k)=p-k\not\in\{0,p,n\}. $ Therefore $ |a_m|=p. $ This also results in $ |a_i|=0 $ for all $ i\in[n]\backslash\{l,m\}. $
Consider $ x_{\mdsum{m+n-p}}^{}=\rho_{\mdsum{m+n-p}} +\dots+\rho_1+\dots+\rho_l+\dots+\rho_{\mdsum{m+n-1}} $
so we get $ \norm{\phi(x_{\mdsum{m+n-p}}^{})}=|a_{\mdsum{m+n-p}}|+\dots+|a_l|+\dots+|a_{\mdsum{m+n-1}}|=|a_l|=n-p. $ And for $ \norm{\phi(x_{\mdsum{m+n-p}})}\in\{0,p,n\} $ we must have $ n=p, $ $ n=2p, $ or $ p=0 $. But as $ \operatorname{GCD}(p,n)=1 $ and $ p>1, $ none of these is possible. Therefore we get a contradiction to the hypothesis. Hence $ \norm{\phi(x_i)}\not=n $ for any $ i\in[n]. $
Suppose, if possible, that there exists $ j\in[n] $ such that $ \norm{\phi(x_j)}=0. $ We also know there exists $ k\in[n-1] $ such that $ \norm{\phi(x_{\mdsum{j+k}})}\not=0. $ Thus $ \norm{\phi(x_{\mdsum{j+k}})}=p, $ as it cannot be $ n $ by the previous paragraph. Choose the smallest $ k $ such that $ \norm{\phi(x_{\mdsum{j+k}})}=p, $ i.e., $ \norm{\phi(x_{\mdsum{j+m}})}=0 $ for all $ m<k. $ Also, as $ x_{\mdsum{j+k-1}} = \sum_{m=0}^{p-1} \rho_{\mdsum{j+k-1+m}} $ we have $0=\norm{\phi(x_{\mdsum{j+k-1}})}=\sum_{m=0}^{p-1}|a_{\mdsum{j+k-1+m}}|.$ Therefore we get $ |a_{\mdsum{j+k-1+m}}|=0 $ for all $ m\in \{0\}\cup[p-1]. $ Consider $ x_{\mdsum{j+k}}=\rho_{\mdsum{j+k}}+\dots+\rho_{\mdsum{j+k+p-1}}= x_{\mdsum{j+k-1}}-\rho_{\mdsum{j+k-1}}+\rho_{\mdsum{j+k+p-1}}. $ So, we get $ p=\norm{\phi({x_{\mdsum{j+k}}})}=\norm{\phi(x_{\mdsum{j+k-1}})}- |a_{\mdsum{j+k-1}}| + |a_{\mdsum{j+k+p-1}}|=|a_{\mdsum{j+k+p-1}}|$. Next, we choose the smallest $ l>0 $ such that $ \norm{\phi(x_{\mdsum{j+k+l}})}=p $ and, repeating the process as above, we get $ |a_{\mdsum{j+k+l+p-1}}|=p $ while the other $ |a_i| $'s corresponding to the $ \rho_i $'s appearing in the expression of $ x_{\mdsum{j+k+l}} $ are 0. Therefore for all $ i\in[n] $ we get $ |a_i|\in\{0,p\}. $ As $ \sum_{i=1}^{n}|a_i|=n $ and each $ |a_i|\in\{0,p\}, $ we get $ n=dp $ for some integer $ d. $ This implies $ p|n $ and $ \operatorname{GCD}(p,n)=p\not =1, $ a contradiction to our hypothesis that $ \operatorname{GCD}(p,n)=1. $ Hence the result.
\end{proof}
\noindent In other words Lemma \ref{lemncnrh} says that if the map $ \phi $ satisfies the necessary condition to be a neural ring homomorphism then $ \norm{\phi(x_i)}=p $ for all $ i\in[n]. $
\begin{obs}\label{obsgcd}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with $ p>1 $ and $ \operatorname{GCD}(n,p)=1. $ Also, let $ n=pd+r, $ where $ 0<r<p. $ Suppose $ \phi\in\rh{\mathcal{C}}$ is a non BPM non UM map satisfying the necessary condition to be in $ \nrh{\mathcal{C}} $ with $ \{a_i\}_{i\in[n]} $ as the vectors that represent it. Relabel the indices in the set $ \{a_i\}_{i\in[n]} $ and write them as $ \{\beta_{11},\dots,\beta_{1p},\beta_{21},\dots,\beta_{dp}, \beta_{(d+1) 1},\dots, \beta_{(d+1) r}\}. $ As $ \phi $ satisfies the necessary condition to be in $ \nrh{\mathcal{C}}, $ by Lemma \ref{lemncnrh} for $ i\in[n] $ we have $ \norm{\phi(x_i)}=p. $ So, we have $ p=\norm{\phi(x_1)}=|\beta_{11}|+ |\beta_{12}|+\dots+ |\beta_{1p}|. $ Similarly we get $ \sum_{j=1}^{p}|\beta_{ij}|=p $ for all $ i\in[d]. $ We also have $ 0=\norm{\phi(x_1)}-\norm{\phi(x_2)}=|\beta_{11}|-|\beta_{21}|. $ Therefore we get $ |\beta_{11}|=|\beta_{21}|. $ Now consider $ 0=\norm{\phi(x_1)}-\norm{\phi(x_3)}=|\beta_{11}|+|\beta_{12}|-|\beta_{21}|-|\beta_{22}|, $ which gives $ |\beta_{12}|=|\beta_{22}|. $ Further, considering $ 0=\norm{\phi(x_1)}-\norm{\phi(x_{j+1})} $ we get $ |\beta_{1j}|=|\beta_{2j}|$ for all $ j\in[p]. $ Extending the result for all $i\in[d] $ we see that $ |\beta_{ij}|=|\beta_{1j}| $ for all $ j\in[p]. $ This is also true when $ i=d+1, $ i.e., $ |\beta_{(d+1)j}|=|\beta_{1j}| $ for all $ j\in[r]. $ Next, note that $n=\sum_{i=1}^{n}|a_i|=\sum_{i=1}^{d}\sum_{j=1}^{p}|\beta_{ij}|+\sum_{j=1}^{r}|\beta_{(d+1)j}|=pd+\sum_{j=1}^{r}|\beta_{(d+1)j}|.$ This implies $ \sum_{j=1}^{r}|\beta_{(d+1)j}|=n-pd=r. $ Thus we have $ \sum_{j=1}^{r}|\beta_{ij}|=r $ for all $ i\in[d+1]. $ Consider $ 0=\norm{\phi(x_n)}-\norm{\phi(x_1)}=|\beta_{(d+1)r}|-|\beta_{1p}|. $ So, we get $ |\beta_{(d+1)r}|=|\beta_{1p}|. $ Similarly, considering $ 0= \norm{\phi(x_{n-j})}-\norm{\phi(x_1)}$ we get $ |\beta_{(d+1)(r-j)}|=|\beta_{1(p-j)}| $ for $ j\in\{0\}\cup[r-1]. $
\end{obs}
\begin{proposition}\label{propgcd1}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ 2<p<n-1 $. If $ \operatorname{GCD}(p,n)=1 $ and $ n=pd+1 $ then $ |\nrh{\mathcal{C}}|=3n.$
\end{proposition}
\begin{proof}
Let $ \phi\in\rh{\mathcal{C}} $ be a non BPM and non UM map with $ \{a_{i}\}_{i\in [n]} $ as its representing vectors. Also, label the vectors $ \{a_i\}_{i\in[n]} $ as in Observation \ref{obsgcd} and rewrite them as $ \{\beta_{ij}\}_{i\in[d],j\in[p]} \cup\{\beta_{(d+1)j}\}_{j\in[r]}. $ Suppose, if possible, that $ \phi\in\nrh{\mathcal{C}}; $ then by Lemma \ref{lemncnrh} for $ i\in[n] $ we get $ \norm{\phi(x_i)}=p. $ By Observation \ref{obsgcd} we get the following: \begin{enumerate}
\item $ \sum_{j=1}^{r}|\beta_{(d+1)j}|=r, $ and as $ r=1 $ we have $ |\beta_{(d+1)1}|=1. $
\item For all $ i\in[d] $ we have $ |\beta_{i1}|=|\beta_{(d+1)1}|=1. $
\item Also, for all $ i\in[d] $ we have $ |\beta_{ip}|=|\beta_{(d+1)1}|=1. $
\end{enumerate}
Consider $ p=\norm{\phi(x_n)}=|\beta_{(d+1)1}|+|\beta_{11}|+\dots+|\beta_{1(p-1)}| =1+1+\sum_{j=2}^{p-1}|\beta_{1j}|.$ This implies $ \sum_{j=2}^{p-1}|\beta_{1j}|=p-2. $ Also $ p=\norm{\phi(x_{n-1})}=|\beta_{dp}|+|\beta_{(d+1)1}|+|\beta_{11}|+\sum_{j=2}^{p-1}|\beta_{1j}|-|\beta_{1(p-1)}|=1+1+1+p-2-|\beta_{1(p-1)}|. $ This implies $ |\beta_{1(p-1)}|=1, $ and by Observation \ref{obsgcd} we get $ |\beta_{i{(p-1)}}|=1 $ for all $ i\in[d]. $ Note that at this juncture we have already proved the result for $ p=3: $ we get $ |a_i|=1$ for all $ i\in[n],$ which contradicts the fact that $ \phi $ is a non BPM and non UM map. If $ p>3, $ we see from $ \norm{\phi(x_{n-1})} $ that $ \sum_{j=2}^{p-2}|\beta_{1j}|=p-3, $ and next we get $ |\beta_{1(p-2)}|=1. $ Iterating, we get $|a_i|=1 $ for all $ i\in[n], $ which as discussed above cannot happen. Therefore $ \phi\not\in\nrh{\mathcal{C}}. $ This implies that none of the non BPM and non UM maps are in $ \nrh{\mathcal{C}}. $ By Theorem \ref{thnrhbpm} we already know the count of BPMs that are in $ \nrh{\mathcal{C}} $ to be $ 2n. $ Finally, adding the $ n $ unity maps, we get the result.
\end{proof}
\begin{proposition}\label{propgcd2}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ 2<p<n-1 $. If $ \operatorname{GCD}(p,n)=1 $ and $ n=pd+2 $ then $ |\nrh{\mathcal{C}}|=3n.$
\end{proposition}
\begin{proof}
Let $ \phi\in\rh{\mathcal{C}} $ be a non BPM and non UM map with $ \{a_{i}\}_{i\in [n]} $ as its representing vectors. Also, label the vectors $ \{a_i\}_{i\in[n]} $ as in Observation \ref{obsgcd} and rewrite them as $ \{\beta_{ij}\}_{i\in[d],j\in[p]} \cup\{\beta_{(d+1)j}\}_{j\in[r]}. $ Suppose, if possible, that $ \phi\in\nrh{\mathcal{C}}. $ Then by Lemma \ref{lemncnrh} for $ i\in[n] $ we get $ \norm{\phi(x_i)}=p. $ By Observation \ref{obsgcd} we get
\begin{enumerate}
\item $ \sum_{j=1}^{2}|\beta_{(d+1)j}|=2. $ Therefore we can have $ |\beta_{(d+1)1}|=|\beta_{(d+1)2}|=1$ or $ |\beta_{(d+1)1}|=2,|\beta_{(d+1)2}|=0 $ or $ |\beta_{(d+1)1}|=0,|\beta_{(d+1)2}|=2.$
\item For all $ i\in[d] $ we have $ |\beta_{i1}|=|\beta_{(d+1)1}| $ and $|\beta_{i2}|=|\beta_{(d+1)2}|. $
\item Also, for all $ i\in[d] $ we have $ |\beta_{ip}|=|\beta_{(d+1)2}|$ and $|\beta_{i(p-1)}|=|\beta_{(d+1)1}|.$
\end{enumerate}
Consider $ 0=\norm{\phi(x_{n-3})}-\norm{\phi(x_{n-2})}=|\beta_{dp}|+|\beta_{(d+1)1}|+|\beta_{(d+1)2}|+\sum_{j=1}^{p-3}|\beta_{1j}|-\Big(|\beta_{(d+1)1}|+|\beta_{(d+1)2}|+\sum_{j=1}^{p-3}|\beta_{1j}|+|\beta_{1(p-2)}|\Big)=0-|\beta_{1(p-2)}|. $ This implies $ |\beta_{1(p-2)}|=|\beta_{dp}|=|\beta_{(d+1)2}|. $ Similarly we get $ |\beta_{1(p-3)}|=|\beta_{d(p-1)}|=|\beta_{(d+1)1}|. $
\begin{caseof}
\casea{$ |\beta_{(d+1)1}|=|\beta_{(d+1)2}|=1 $}{In this case we get $ |\beta_{1(p-2)}|=|\beta_{(d+1)2}|=1 $ and $ |\beta_{1(p-3)}|=|\beta_{(d+1)1}|=1.$
On extending we get $ |\beta_{1j}|=1 $ for all $ j\in[p]. $ Therefore by Observation \ref{obsgcd}, for all $ i\in[d] $ and $ j\in[p] $ we get $ |\beta_{ij}|=1. $ And as $ |\beta_{(d+1)1}|=|\beta_{(d+1)2}|=1 $ we get $ |a_i|=1 $ for all $ i\in[n]. $ This implies $ \phi $ is a BPM, which is a contradiction as we have chosen $ \phi $ to be a non BPM and non UM map. Hence this case cannot occur.}
\casea{$ |\beta_{(d+1)1}|=2,|\beta_{(d+1)2}|=0 $ or $ |\beta_{(d+1)1}|=0,|\beta_{(d+1)2}|=2.$}{We will work with $ |\beta_{(d+1)1}|=2,|\beta_{(d+1)2}|=0 $ and the other case will be very similar to this. In this case we get $ |\beta_{1(p-2)}|=|\beta_{(d+1)2}|= 0$ and $ |\beta_{1(p-3)}|=|\beta_{(d+1)1}|=2.$
On extending we get $ |\beta_{1j}|\in\{0,2\} $ for all $ j\in[p]. $ And $ p=\norm{\phi(x_1)}=\sum_{j=1}^{p} |\beta_{1j}| =2k$ for some $ k. $ This implies $ 2|p $ and, since $ n=pd+2 $ is then even, $ 2|\operatorname{GCD}(p,n). $ Therefore we get $ \operatorname{GCD}(p,n)\geq 2, $ which is a contradiction. Hence this case cannot occur. }
\end{caseof}
\noindent Thus we get $ \phi\notin \nrh{\mathcal{C}}. $ By Theorem \ref{thnrhbpm} we already know the count of BPMs that are in $ \nrh{\mathcal{C}} $ to be $ 2n. $ Finally, adding the $ n $ unity maps, we get the result.
\end{proof}
Combining the results of Propositions \ref{propgcd1} and \ref{propgcd2} for a circulant code $ \mathcal{C} $ with support $ 2<p<n-1 $ we get that $ |\nrh{\mathcal{C}}|=3n $ for $ n=pd+r $ where $ r\in\{1,2\} $ and $ \operatorname{GCD}(p,n)=1. $
Our next aim is to generalize the above Propositions \ref{propgcd1} and \ref{propgcd2} to any $ r$ such that $ n=pd+r$ and $ 0<r<p $. At this moment we strongly believe the following conjecture.
\begin{conjecture}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ 2<p<n-1 $. If $ \operatorname{GCD}(p,n)=1 $ and $ n=pd+r $ with $ 2<r<p, $ then $ |\nrh{\mathcal{C}}|=3n.$
\end{conjecture}
\noindent We also note that if the circulant code is on $ n $ neurons with support $ p=3 $ and $ \operatorname{GCD}(n,3)=1, $ then $ n=3k+1 $ or $ n=3l+2 $ for some $ k,l. $ So, if $ n>4, $ Propositions \ref{propgcd1} and \ref{propgcd2} give us that $ |\nrh{\mathcal{C}}| =3n.$ And if $ n=4, $ as $ p=3=n-1, $ we get $ |\nrh{\mathcal{C}}|=n!+n=28 $ by Proposition \ref{propcirnrh}. Note that when $ p=3 $ we are now only left with the case $ n=3d. $ By brute force we counted that $ |\nrh{\mathcal{C}}|=270 $ where $ \mathcal{C} $ is a circulant code on $ n=6 $ neurons with support $ p=3. $ In the next theorem we work with $ n=3d $ where $ d>2. $
\begin{theorem}
Let $ \mathcal{C} $ be a circulant code on $ n$ neurons with support $ p=3. $ If $ n=3d, $ where $ d>2 $ then $ |\nrh{\mathcal{C}}|=3n+3^2\left( \dfrac{n}{3} \right)!+12n. $ \label{thnp3k}
\end{theorem}
\begin{proof}
Let us first count the total number of non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $ Let $ \phi $ be a non BPM and non UM map with $ \{a_i\}_{i\in[n]} $ as its representing vectors. As observed in the proof of Theorem \ref{thnpk2}, we have 3 sub-cases, corresponding to $ |a_1|=3 $, $ |a_2|=3 $ and $ |a_3|=3. $ Also, as there is another partition of $ 3 $ which is not all ones (namely $ 3=2+1 $), we get further cases corresponding to $ |a_1|=2,|a_2|=1,|a_3|=0 $ and its possible permutations. Thus in total we have these two broader classes of cases. Let us fix one type of choice and count how many neural ring homomorphisms it corresponds to. By the choice of all $ |a_i| $ we see that for all $i\in [n],\ \norm{\phi(x_i)}=3. $ This implies for all $ i\in [n] $ there exists $ j\in[n] $ such that $ \phi(x_i^{})=x_j^{}. $
\begin{caseof}
\casea{$ (|a_1|,|a_2|,|a_3|)=(3,0,0) $ or $ (|a_1|,|a_2|,|a_3|)= (0,3,0) $ or $ (|a_1|,|a_2|, |a_3|)=(0,0,3) $}{\textbf{Sub-case a:} $ (|a_1|,|a_2|,|a_3|)=(3,0,0). $ \\ This case is similar to the counting in the proof of Theorem \ref{thnpk2}. Firstly, it is clear that $ \phi(\rho_1) $ has $ n $ choices and $ \phi(\rho_2)=\phi(\rho_3)=0. $ And for $ \phi(\rho_4) $ we have to choose from all the triplets that are left, which gives $ \left( \frac{n}{3}-1\right) $ choices. Completing the process, we get the total number of such maps in $ \nrh{\mathcal{C}} $ as $ n\times\left( \frac{n}{3}-1\right)\times\left( \frac{n}{3}-2\right)\times\dots\times1=3\left(\frac{n}{3}\right)! $}. \\
{Sub-case b} and {sub-case c} are almost the same as {sub-case a}. Hence Case 1 gives us $ 3^2\left(\frac{n}{3}\right)! $ non BPM non UM maps that are in $ \nrh{\mathcal{C}}. $
\casea{ For some $ i,j\in[3], i\not =j, $ let $ |a_i|=2 $ and $ |a_j |=1$}{Then by permuting $ i,j\in[3] $ we get 6 sub-cases.\\\textbf{Sub-case a: } $ (|a_1|,|a_2|,|a_3|)=(2,1,0) $. } \\ In this sub-case we first get that $ \phi(\rho_1) $ can take any sum of two consecutive basis elements and so it has $ n $ choices. Let $ \phi(\rho_1)=\rho_l+\rho_{\mdsum{l+1}}. $ Next, as $ \phi(x_1)\in\{x_k\}, $ it ensures that $ \phi(\rho_2) $ can either be $ \rho_{\mdsum{l+n-1}} $ or $ \rho_{\mdsum{l+2}}. $ We already know that $ \phi(\rho_3)=0. $ Further, we observe that this process fixes a unique choice for the remaining $ \phi(\rho_k) $ for $ k\in[n]\backslash[3]. $ Hence this sub-case gives us $ 2n $ non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $
\noindent The remaining 5 sub-cases under Case 2 are similar to sub-case a. Hence we get $ 12n$ non BPM and non UM maps that are in $ \nrh{\mathcal{C}}. $
\end{caseof}
\noindent As described in the previous proofs, the $ 2n $ BPMs and the $ n $ unity maps contribute the remaining $ 3n $ maps in $ \nrh{\mathcal{C}}. $ Hence the result.
\end{proof}
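\noindent For example, for the smallest case covered by Theorem \ref{thnp3k}, namely $ n=9 $ (i.e., $ d=3 $), the formula gives $ |\nrh{\mathcal{C}}|=3\cdot 9+3^2\cdot 3!+12\cdot 9=27+54+108=189. $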
\noindent Looking at the pattern from Theorems \ref{thnpk2} and \ref{thnp3k} we conjecture the following.
\begin{conjecture}
Let $ \mathcal{C} $ be a circulant code on $ n $ neurons with support $ p. $ If $ p>2 $ is prime and $ p|n $ then $ |\nrh{\mathcal{C}}|=3n+p^2\left( \dfrac{n}{p}\right)!+p(p+1)n. $
\end{conjecture}
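\noindent For instance (a worked prediction of the conjecture, not a proved count): for $ p=5 $ and $ n=10 $ the conjectured formula gives $ |\nrh{\mathcal{C}}|=3\cdot 10+5^2\cdot 2!+5\cdot 6\cdot 10=30+50+300=380. $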
\begin{figure}[H]
\begin{tikzpicture}[scale=6]
\draw[black] (1,0.8) rectangle (1.35,0.95);
\draw (1,0.87)node[right] {$ |\nrh{\mathcal{C}}|$};
\draw (0,0pt)node[above,align=left] {$ \mathcal{C}:$ a circulant\\ code on $ n $ neurons};
\draw[black] (-0.25,-0.02) rectangle (0.24,0.18);
\draw[black] (0.56,0.82) rectangle (0.69,0.91);
\draw (0.63,0.9)node[below] {$ p$};
\draw[->, thick, black] (0.24,0.09) -- (0.4,0.09);
\draw[-, thick, black] (0.4,0.7) -- (0.4,-0.4);
\draw[->, thick, black] (0.4,0.7) -- (0.6,0.7);
\draw (0.6,0.7)node[right] {$ 1$};
\draw[->, thick, black] (0.4,0.5) -- (0.6,0.5);
\draw (0.6,0.5)node[right] {$ 2$};
\draw[->, thick, black] (0.4,0.15) -- (0.6,0.15);
\draw (0.6,0.15)node[right] {$ 3$};
\draw (0.61,0.03)node[right] {$ \vdots$};
\draw (0.61,-0.25)node[right] {$ \vdots$};
\draw[->, thick, black] (0.4,-0.15) -- (0.6,-0.15);
\draw(0.6,-0.15)node[right]{$ p $};
\draw[->, thick, black] (0.4,-0.4) -- (0.6,-0.4);
\draw (0.6,-0.4)node[right] {$ n-1$};
\draw[->, thick, black] (0.66,0.7) -- (0.98,0.7);
\draw (1.03,0.7)node[right] {$ n!+n$};
\draw[->, thick, black] (0.66,0.5) -- (0.98,0.5);
\draw (0.98,0.5)node[right] {$ \begin{cases}
3n &n=2k+1,\ k>1.\\
3n+2^2\left(\frac{n}{2}\right)! & n=2k,\ k>2. \\36 & n=4.
\end{cases}$};
\draw[->, thick, black] (0.66,0.15) -- (0.98,0.15);
\draw (0.98,0.15)node[right] {$\begin{cases}
3n & n=3k+2,\ k>0. \\ 3n& n=3k+1,\ k>1.\\
15n+3^2\left( \frac{n}{3} \right)! & n=3k ,\ k>2.\\
270 & n=6.
\end{cases} $};
\draw[->, thick, black] (0.66,-0.15) -- (0.98,-0.15);
\draw (0.98,-0.15)node[right] {$ \begin{cases}
3n & \operatorname{GCD}(p,n)=1.\\
3n+p^2\left( \frac{n}{p}\right)!+(p^2+p)n\quad
& p|n.\end{cases}$};
\draw (.65,-0.12)node[right] {\textbf{\small Conjecture}};
\draw[->, thick, black] (0.76,-0.4) -- (0.98,-0.4);
\draw (1.03,-0.4)node[right] {$ n!+n$};
\end{tikzpicture}
\caption{The above figure represents the count of neural ring endomorphisms for a circulant code on $ n $ neurons with support $ p $.}
\end{figure}
\noindent We are still working on the remaining cases for $ p>3. $
\label{sec1}
In the last few decades there has been considerable interest in the
consequences of frustration on the critical behavior of statistical systems.
The simplest example is the antiferromagnetic Ising model on a triangular
lattice, whose Hamiltonian is
\begin{equation}
{\cal H} = J \sum_{\langle ij \rangle} \sigma_i \sigma_j,
\end{equation}
where $J$ is positive, $\sigma_i = \pm 1$, and the sum is extended
over all lattice nearest neighbors $\langle ij \rangle$.
\begin{figure}[tb]
\includegraphics[width=17pc]{figura1.eps}\hspace{2pc}%
\begin{minipage}[b]{17pc}
\caption{\label{frustration}
A frustrated triangle. If $\sigma_A = 1$, the local energy associated
with links AB and AC is
minimized by taking $\sigma_B = \sigma_C = -1$. The local energy
on link BC corresponds to a maximum: link BC is {\it frustrated}.
}
\vspace{10mm}
\end{minipage}
\end{figure}
For $T\to 0$ the system tends to be antiferromagnetically ordered, i.e.
spins on nearest-neighbor sites prefer to be oppositely aligned.
However, this is not possible everywhere. For instance, see
Fig.~\ref{frustration}, on any lattice triangle one link must
be {\em frustrated},
i.e.~spins on the corresponding sites must be parallel so that the local
energy assumes its {\it maximum} value.
The presence of frustration has an important consequence. At variance with
the ferromagnetic Ising model the antiferromagnetic one is
disordered at any temperature: the large entropy forbids an ordering transition
\cite{Wannier-50,Houtappel-50}.\footnote{It is interesting to note that
this is not true for the spin-$S$ antiferromagnetic Ising model if $S$ is
large enough; see \cite{NMH-93,LHL-95,ZH-97} and references therein.}
The Ising model can be generalized by considering the $N$-vector
model on a triangular lattice. In this case one considers
unit $N$-component vectors $\vec{s}_i$ and the Hamiltonian
\begin{equation}
{\cal H} = J \sum_{\langle ij\rangle} \vec{s}_i\cdot \vec{s}_j.
\label{HFFXYtr}
\end{equation}
Also this model is frustrated: There is no configuration in
which all neighboring spins are antiparallel. However, at variance with the
Ising case, here the entropy vanishes at zero temperature. Indeed,
once rotational invariance has been broken by fixing the direction of
one spin, there is a finite
number of configurations that are global minima of the Hamiltonian
\cite{Villain-77}. For $N=2$, the only case we will consider,
if $\vec{s}_i = (\cos \theta_i,\sin\theta_i)$, one must have
$|\theta_i - \theta_j| = 2 \pi/3$ or $4\pi/3$ when $i$ and $j$ are
nearest-neighbor sites. It is easy to verify that the degeneracy of the
ground state is ${\mathbb Z}_2\otimes O(2)$, where $O(2)$ is
the invariance rotation group. The group ${\mathbb Z}_2$ is due to
the possibility of two (chirally) different configurations.
As shown in Fig.~\ref{chiral-tr}, the ground state is uniquely
determined once one breaks rotational invariance (by setting, for instance,
$\theta_A = 0$) and chooses the chirality of triangle ABC (by setting
$\theta_B = 120^{\rm o}$ or $240^{\rm o}$). An observable that distinguishes
between the two ground states is the {\it chirality}. Given a
lattice triangle, see Fig.~\ref{chiral-tr}, we can consider
\cite{Villain-77}
\begin{equation}
C_n \equiv {2\over 3 \sqrt{3}} [
\sin(\theta_A - \theta_B) +
\sin(\theta_B - \theta_C) +
\sin(\theta_C - \theta_A) ],
\end{equation}
which assumes the values $\pm 1$ on any lattice triangle in a ground-state
configuration. A good order parameter is obtained as follows. We assign
$s_n = \pm 1$ to each lattice triangle so that $s_n = - s_m$ if triangles
$n$ and $m$ share a lattice link. The order parameter, the {\it chiral
magnetization}, is simply
\begin{equation}
M_C \equiv \sum_n s_n C_n,
\end{equation}
where the sum is extended over all lattice triangles.
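As a concrete check of the normalization (an illustrative sketch of ours, not part of any original analysis), the following Python fragment evaluates $C_n$ on the two chiral ground states of Fig.~\ref{chiral-tr} and returns $\pm 1$:
\begin{verbatim}
import math

def chirality(tA, tB, tC):
    # C_n = (2 / (3*sqrt(3))) * [sin(tA-tB) + sin(tB-tC) + sin(tC-tA)]
    s = math.sin(tA - tB) + math.sin(tB - tC) + math.sin(tC - tA)
    return 2.0 / (3.0 * math.sqrt(3.0)) * s

deg = math.pi / 180.0
# left ground state: theta_A = 0, theta_B = 240, theta_C = 120
print(chirality(0.0, 240.0 * deg, 120.0 * deg))   # +1.0
# right ground state: theta_A = 0, theta_B = 120, theta_C = 240
print(chirality(0.0, 120.0 * deg, 240.0 * deg))   # -1.0
\end{verbatim}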
\begin{figure}[tb]
\centerline{\psfig{width=15truecm,angle=0,file=figura2.eps}}
\caption{
Two inequivalent ground states related by a chiral transformation.
They are obtained as follows: one first fixes $\theta_A = 0^o$,
breaking rotational invariance. Then, there are two possible choices:
on the left we choose $\theta_B = 240^o$, $\theta_C = 120^o$;
on the right we make the opposite choice. All other lattice spins are
univocally defined.
}
\label{chiral-tr}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=17pc]{figura3.eps}\hspace{2pc}%
\begin{minipage}[b]{17pc}
\caption{\label{fig:FFXY}
The couplings $j_{ij}$ in Hamiltonian~(\protect\ref{FFXY}):
$j_{ij} = 1$ on thin lines, $j_{ij} = -\alpha$ on thick lines.
}
\vspace{10mm}
\end{minipage}
\end{figure}
It is possible to define a frustrated model also on the square lattice.
The relevant model is not the antiferromagnetic Ising or $N$-vector model,
since no frustration occurs on the square lattice or, in general, on any
bipartite lattice.
To obtain a frustrated model we consider a Hamiltonian of the form
\cite{Villain-77}
\begin{equation}
{\cal H}_{\rm FFXY} = - J
\sum_{\langle ij\rangle} j_{ij} \, \vec{s}_i \cdot \vec{s}_j ,
\label{FFXY}
\end{equation}
where the two-component spins $\vec{s}_i$ satisfy $\vec{s}_i\cdot \vec{s}_i=1$,
$j_{ij}=1$ along all horizontal lines, while along vertical lines
ferromagnetic $j_{ij}=1$ and antiferromagnetic $j_{ij}=-\alpha$
($\alpha > 0$) couplings alternate, see Fig.~\ref{fig:FFXY}.
This model is frustrated for any positive $\alpha$. Maximal frustration is
obtained by taking $\alpha = 1$; for this reason this particular
model, the only one we shall consider in the following,
is called fully frustrated XY (FFXY) model.
The square-lattice FFXY model
admits two chirally different ground states, see Fig.~\ref{chiral-sq},
and thus it has the same ground-state degeneracy of the
antiferromagnetic model on the triangular lattice. These two models
are particular examples of a general class of systems
that all have a ${\mathbb Z}_2\otimes O(2)$ ground-state degeneracy.
We will collectively call them FFXY systems.
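To make the structure of the couplings in (\ref{FFXY}) explicit, the short sketch below (ours; it assumes, as a pure convention, that the antiferromagnetic columns sit at even $x$, any other alternating choice being equivalent up to a redefinition of the spins) builds the pattern of Fig.~\ref{fig:FFXY} for $\alpha=1$ and verifies that every elementary plaquette carries an odd number of negative bonds, i.e.~that it is frustrated:
\begin{verbatim}
import numpy as np

L = 4                                        # linear size, L even, periodic b.c.
jx = np.ones((L, L))                         # bond (x,y)-(x+1,y): always +1
jy = np.array([[-1.0 if x % 2 == 0 else 1.0  # bond (x,y)-(x,y+1): alternating
                for y in range(L)]           # columns of vertical bonds
               for x in range(L)])

for x in range(L):
    for y in range(L):
        # product of the four couplings around the plaquette at (x,y);
        # a negative product means an odd number of antiferromagnetic bonds
        prod = (jx[x, y] * jy[(x + 1) % L, y] *
                jx[x, (y + 1) % L] * jy[x, y])
        assert prod < 0.0
print("every plaquette is frustrated")
\end{verbatim}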
\begin{figure}[tb]
\centerline{\psfig{width=15truecm,angle=0,file=figura4.eps}}
\caption{
Ground states of the square-lattice FFXY model: in this case
nearest-neighbor spins must satisfy
$\theta_i - \theta_j = 45^{\rm o}$ or $315^{\rm o}$
if they are connected by a ferromagnetic link,
and $\theta_i - \theta_j = 135^{\rm o}$ or $225^{\rm o}$
if they are connected by an antiferromagnetic link.
Once we fix $\theta_P = 0^{\rm o}$, there are two (chirally)
inequivalent possibilities:
(left) $\theta_A = 45^{\rm o}$ or (right) $\theta_B = 45^{\rm o}$.
All other spins are fixed. Note that the ground-state configurations
are invariant under translations of two lattice spacings.
}
\label{chiral-sq}
\end{figure}
Even though the symmetry of the FFXY systems (\ref{HFFXYtr}) and (\ref{FFXY})
is the same as that of the ferromagnetic XY model, we expect the
critical behavior to be different. Indeed, the universality class
is not only determined by the symmetry of the order parameter but also by the
symmetry breaking pattern that is different in the two cases.
In order to determine the critical behavior we can perform a direct numerical
study of the model. There is however another possibility, which is the basis
of the field-theoretical approach to critical phenomena. In this case
one first identifies the critical modes of the microscopic Hamiltonian and
then writes down an effective coarse-grained (continuum)
Hamiltonian for them. The model
one obtains is no longer frustrated; still, it is expected to have
the same critical behavior as the original one.
In order to derive the effective theory, let us consider the
antiferromagnetic model on a triangular lattice. As is evident from
Fig.~\ref{chiral-tr}, in the ground state spins rotate by $120^{\rm o}$
when moving in the $x$ direction from one site to its neighbor.
Thus, critical modes are associated with
fluctuations close to the complex Fourier component $\vec{s}(Q)$ with
$Q = [2\pi/(3 a),0]$, where $a$ is the lattice spacing. These fluctuations
are parametrized by a complex vector
$\vec{\Phi} = \vec{\phi}_1 + i \vec{\phi}_2$. Note that the appearance
of two real two-component fields is at variance with the ferromagnetic case.
Indeed, in that case, the relevant modes are associated with
the zero-momentum component $\vec{s}(q=0)$.
As a consequence of the reality condition $\vec{s}(q) = \vec{s}^*(-q)$,
fluctuations are real and are parametrized by a single real
two-component field.
A standard calculation \cite{CD-85,YD-85,LGK-91}
gives the effective Hamiltonian for the fields
$\vec{\phi}_a$:
\begin{equation}
{\cal H}_{\rm LGW} = \int d^d x
\Bigl\{ {1\over2}
\sum_{a=1,2} \Bigl[ (\partial_\mu \phi_{a})^2 + r \phi_{a}^2 \Bigr]
+ {1\over 4!}u_0 \Bigl( \sum_{a=1,2} \phi_a^2\Bigr)^2
+ {1\over 4} v_0 \phi_1^2 \phi_2^2 \Bigr\}.
\label{HLGW}
\end{equation}
A similar argument applies to the square-lattice FFXY model and gives
the same effective Hamiltonian (\ref{HLGW}).
Hamiltonian (\ref{HLGW}) has a larger symmetry than the
original one.
Indeed, the symmetry group is
$[O(2) \oplus O(2)]\otimes {\mathbb Z}_2$: the $O(2)$ groups are
related to independent rotations of the two fields, while the
${\mathbb Z}_2$ group corresponds to the field-interchange symmetry.
Nonetheless---and this is the only property that matters---Hamiltonian
(\ref{HLGW}) for $v_0 > 0$ and FFXY models have the same symmetry-breaking
pattern. Indeed, for $v_0 > 0$, the ground state corresponds to
$\phi_1^2 = 0$ and $\phi_2^2\not=0$ or the opposite and thus it has the same
ground-state degeneracy: the ground state breaks one of the two $O(2)$
groups and the ${\mathbb Z}_2$ field-interchange symmetry.
A lattice discretization of (\ref{HLGW}) is \cite{HPV-05let}
\begin{eqnarray}
{\cal H}_{\phi} = -
J \sum_{\langle ij\rangle,a} \vec{\phi}_{a,i}\cdot \vec{\phi}_{a,j}
+ \sum_{a,i} \left[ \phi_{a,i}^2 + U (\phi_{a,i}^2-1)^2 \right] +
2 (U+D) \sum_i \phi_{1,i}^2 \phi_{2,i}^2,
\label{HLGWlat}
\end{eqnarray}
where $J > 0$ (the model is ferromagnetic),
$a=1,2$, $\vec{\phi}_{a,i}$ is a real two-component variable, the first sum
goes over all nearest-neighbor pairs, and $\phi^2_a\equiv \vec{\phi}_a \cdot
\vec{\phi}_a$. The correct symmetry-breaking pattern is
obtained for $D > 0$, which corresponds to $v_0 > 0$ in (\ref{HLGW}).
Moreover, stability requires $U > 0$. For $U\to \infty$,
${\cal H}_\phi$ becomes simpler and we obtain
\begin{equation}
{\cal H} =
- J \sum_{\langle ij\rangle,a} \vec{\phi}_{a,i}\cdot \vec{\phi}_{a,j}
+ 2 D \sum_i \phi_{1,i}^2 \phi_{2,i}^2,
\end{equation}
where the fields satisfy the constraint $\phi^2_{1,i} + \phi^2_{2,i} = 1$.
This is the 4-vector model with a spin-4 perturbation that breaks the
$O(4)$ symmetry to $[O(2)\oplus O(2)]\otimes {\mathbb Z}_2$.
If we additionally take $D\to \infty$ we must have
$\phi_{1,i}^2 \phi_{2,i}^2 = 0$. In this case, we can parametrize
\begin{equation}
\vec{\phi}_{1,i} = \case{1}{2} (1 + \sigma_i) \vec{s}_i, \qquad\qquad
\vec{\phi}_{2,i} = \case{1}{2} (1 - \sigma_i) \vec{s}_i, \qquad\qquad
\label{mapping}
\end{equation}
where $\sigma_i$ is an Ising spin and $\vec{s}_i$ is a unit two-component
vector. The Hamiltonian reduces to
\begin{equation}
{\cal H} =
- \frac{J}{2} \sum_{\langle ij\rangle}
(1+\sigma_i \sigma_j) \, \vec{s}_i \cdot \vec{s}_j .
\label{IsXY}
\end{equation}
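Indeed, this is a one-line check: inserting the parametrization (\ref{mapping}) into ${\cal H}$ gives
\[
\sum_{a=1,2} \vec{\phi}_{a,i}\cdot \vec{\phi}_{a,j} =
\case{1}{4}\left[ (1+\sigma_i)(1+\sigma_j) + (1-\sigma_i)(1-\sigma_j)\right]
\vec{s}_i \cdot \vec{s}_j =
\case{1}{2} (1+\sigma_i \sigma_j) \, \vec{s}_i \cdot \vec{s}_j ,
\]
while the quartic term vanishes identically, since
$\phi_{1,i}^2 \phi_{2,i}^2 \propto (1+\sigma_i)^2 (1-\sigma_i)^2 = (1-\sigma_i^2)^2 = 0$.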
Hamiltonian (\ref{IsXY}) has the same invariance as (\ref{HLGWlat})
although, in terms of the new fields, the $O(2)\oplus O(2)$ symmetry is
nonlinearly realized:
\begin{eqnarray}
{\vec{s}_i}\! ' &=& [\case{1}{2} (1 + \sigma_i) R_1 +
\case{1}{2} (1 - \sigma_i) R_2 ] \vec{s}_i
\\
\sigma_i' &=& \sigma_i,
\end{eqnarray}
where $R_1$ and $R_2$ are $O(2)$ rotation matrices. It is possible to add
terms of the form $\sigma_i\sigma_j$ without breaking the symmetry
of the Hamiltonian. We can thus consider the more general Hamiltonian
\cite{GKLN-91}
\begin{equation}
{\cal H}_{\rm IsXY} =
- \sum_{\langle ij\rangle}
\left[ \frac{J}{2} (1+\sigma_i \sigma_j) \, \vec{s}_i \cdot \vec{s}_j
+ C \sigma_i \sigma_j \right] .
\label{HIsXY}
\end{equation}
We will call this model the Ising-XY (IsXY) model. For $J > 0$ and
$C + {J}/{2} > 0$ it has the same symmetry-breaking pattern
as FFXY systems. Thus, it represents another example of this class of models.
We should mention that there are many other systems that share
the same symmetry-breaking pattern: for an extensive list of references
see \cite{HPV-05lungo}.
\section{Results} \label{sec2}
Two-dimensional
FFXY systems (but note that these systems have also been studied,
both theoretically and experimentally, in three
dimensions \cite{Kawamura-98,PV-review,francesi-review,CPV-04})
have been extensively studied in the last thirty years,
after the appearance of the seminal papers by Villain \cite{Villain-77}.
For an extensive list of references, see \cite{HPV-05lungo}. In spite of that,
their critical behavior is still the object of debate today.
Two scenarios have been proposed for the critical behavior of models
(\ref{HFFXYtr}) and (\ref{FFXY}).
A first possibility is that these models have two continuous transitions.
As temperature decreases, there is first a transition associated with
the chiral degrees of freedom: at the transition there is no magnetic
ordering but only chiral order. As temperature further decreases, there is
an intermediate phase in which spins are disordered while chiral variables
are magnetized. Then, a second transition occurs, followed by a low-temperature
(LT) phase in which spin-spin correlations decay algebraically.
In this scenario chiral and spin modes do not interact at the transitions
and thus, if the transitions are continuous, one expects a
chiral Ising transition (the order parameter is a scalar) and
a spin Kosterlitz-Thouless (KT) transition. This scenario has not
been confirmed numerically so far. The computed exponents at the chiral
transition do not agree with the Ising ones. For instance, one
finds $\nu\approx 0.8$ instead of the Ising value $\nu = 1$.
Moreover, it is not clear how much one can believe in the presence
of two transitions that are very close to each other: numerically
$(T_{\rm spin} - T_{\rm chiral}) \simeq 10^{-2} J$.
The inconsistencies of the two-transition scenario apparently favor
the presence of a single critical point. In this scenario, the observed
difference between the critical temperatures is interpreted as a
correction-to-scaling effect. Since chiral and spin modes become critical
at the same temperature, it is possible that the transition belongs to
a new universality class with a new set of critical exponents.
In this scenario, the result $\nu\approx 0.8$ would be fully
acceptable. Of course, it is also possible to interpret the
results for $\nu$ in terms of crossover effects. Since the two transitions
are very close, scaling corrections may be large so that the asymptotic
behavior can be observed only on very large lattices \cite{Olsson-95}.
We have reconsidered the issue \cite{HPV-05let,HPV-05lungo},
performing extensive simulations of the
FFXY model (\ref{FFXY}), of the $\phi^4$ model (\ref{HLGWlat}) with
$U=1$ and of the IsXY model (\ref{HIsXY}). In all cases
we have considered the square lattice. We have used a mixture
of Metropolis and overrelaxation updates as well as cluster updates in the
LT phase, essentially following \cite{GPP-98} (see \cite{HPV-05lungo} for a
detailed discussion). We have studied the
critical behavior on lattices of size $L\times L$, in some cases
up to $L\sim 10^3$: for the FFXY model,
the $\phi^4$ model with $D = 1/2$, and the IsXY model with $C = 0$, the
largest lattice we have used at the chiral transition corresponds to $L=1000$,
$L=1200$, $L=360$ respectively. In the LT phase the
Monte Carlo algorithm is much more efficient and we have been able
to simulate even larger sizes:
for the $\phi^4$ model with $D=1/2$ we performed simulations for $L=2048$.
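For orientation only, a minimal (and deliberately naive) local Metropolis sweep for the IsXY model (\ref{HIsXY}) could be sketched as follows; this is our own illustration, not the production code, which combined Metropolis, overrelaxation, and cluster updates:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, J, C, beta = 16, 1.0, 0.0, 1.0               # C = 0 is one of the studied cases
theta = rng.uniform(0.0, 2.0 * np.pi, (L, L))   # XY angles theta_i
sigma = rng.choice([-1, 1], size=(L, L))        # Ising spins sigma_i

def site_energy(th, sg, x, y):
    # sum of -[(J/2)(1 + s s') cos(th - th') + C s s'] over the 4 bonds at (x,y)
    e = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        xn, yn = (x + dx) % L, (y + dy) % L
        ss = sg[x, y] * sg[xn, yn]
        e -= 0.5 * J * (1.0 + ss) * np.cos(th[x, y] - th[xn, yn]) + C * ss
    return e

def metropolis_sweep(th, sg):
    for _ in range(L * L):
        x, y = rng.integers(L), rng.integers(L)
        e_old, th_old, sg_old = site_energy(th, sg, x, y), th[x, y], sg[x, y]
        th[x, y] = rng.uniform(0.0, 2.0 * np.pi)   # symmetric proposal: new angle
        if rng.random() < 0.5:                     # ... plus a spin flip half the time
            sg[x, y] = -sg[x, y]
        dE = site_energy(th, sg, x, y) - e_old
        if rng.random() >= np.exp(-beta * dE):     # Metropolis accept/reject
            th[x, y], sg[x, y] = th_old, sg_old    # rejected: restore old values

for _ in range(100):                               # a few thermalization sweeps
    metropolis_sweep(theta, sigma)
\end{verbatim}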
The analysis of the Monte Carlo
results for the square-lattice FFXY model definitely shows that this model
undergoes two transitions: the chiral one belongs to the Ising universality
class, while the spin one is compatible with a KT behavior.
In the LT phase the critical
behavior of the spin modes is controlled by the same line of Gaussian
fixed points as in the standard XY model. The discrepancies from Ising behavior
at the chiral transition that have been observed in previous
studies are simply crossover effects. They are due to the presence of
a large, albeit finite spin correlation length $\xi_s^{(c)}$ at the
chiral transition. In finite-size scaling studies the
asymptotic behavior can only be observed if $L\gg \xi_s^{(c)}$.
Since $\xi_s^{(c)}$ is quite large, $\xi_s^{(c)} = 118(1)$, the
asymptotic behavior can only be observed in simulations
with $L\simeq 500$-1000, i.e. for values of $L$ that are
much larger than those that could be used in simulations until
a few years ago. In the other models we have studied,
the determination of the asymptotic behavior may be even more difficult.
For instance, in the $\phi^4$ model with $D=1/2$, $\xi_s^{(c)} \simeq 380$.
Thus, even with simulations with $L=1200$, we have not been able to
observe the Ising behavior but only the beginning of the crossover
towards the asymptotic behavior.
In the IsXY and in the $\phi^4$ model we have observed two transitions
in a large parameter region. However, for $D$ or $-C$ large,
we have also observed a unique first-order transition which separates
the LT phase with chiral order and spin quasi-long-range order from the
disordered phase, see Fig.~\ref{phasediag}. Thus, our results confirm
the two-transition scenario for generic FFXY systems
in the sense that we have found no
evidence of a unique {\it continuous} transition where chiral and
spin modes become both critical. A single transition occurs
only if it is of first order.
\begin{figure}[tb]
\begin{minipage}{17pc}
\includegraphics[width=17pc]{phased.eps}
\end{minipage}\hspace{2pc}%
\begin{minipage}{17pc}
\includegraphics[width=17pc]{phasedisxy.eps}
\end{minipage}
\caption{\label{phasediag}
Sketch of the phase diagram of the $\phi^4$ model (\protect\ref{HLGWlat})
for $U=1$ and $D > 0$ (left) and of the IsXY model
(\protect\ref{HIsXY}) (right).
The continuous, dashed, and thick continuous
lines represent Ising, KT, and first-order transition lines.
The distance between the ferromagnetic Ising and KT lines is amplified;
otherwise, the two transitions cannot be distinguished on the scale of the
figure. The phase diagram within the circled region is unknown.
In the IsXY case there is also an antiferromagnetic Ising transition (af)
starting at $C = - C_{\rm Is} = - {1\over2}\ln(1 + \sqrt{2})$, $J=0$.
}
\end{figure}
Beside these results that confirm the two-transition scenario, we have also
observed an unexpected universal crossover behavior.
We find that renormalization-group invariant quantities
(e.g., critical exponents, Binder parameters, $\ldots$)
computed in the different models scale
at the chiral and spin transitions respectively as
\begin{equation}
{\cal R} = f_{\cal R}^{(c)} (L/l), \qquad\qquad
{\cal R} = f_{\cal R}^{(s)} (L/l),
\label{scaling}
\end{equation}
where $l$ is a model-dependent scaling factor that is identical at the two
transitions. At the chiral transition---the only case in which
we have done a systematic investigation by varying $C$ and $D$---corrections
appear to increase as $D$ or $-C$ increases
(in practice, significant deviations are observed for $D \gtrsim 4$
and $C \lesssim -2$).
Of course, it is of interest to have
a renormalization-group explanation of this apparent universality.
Eq.~(\ref{scaling}) can be explained by the presence of a multicritical
point \cite{Amit-book,LS-84,KNF-76}
(or a line of multicritical points of the same type) where
chiral and spin modes become both critical. Indeed, close to
a multicritical point we expect that any RG-invariant quantity behaves as
\begin{equation}
{\cal R} = \hat{f}_{\cal R} (L/\xi_s, L/\xi_{\rm ch}),
\label{MCR-general}
\end{equation}
where $\xi_s$ and $\xi_{\rm ch}$ are the infinite-volume
correlation lengths for spin and chiral variables.
Our analysis at the Ising and KT transitions corresponds to fixing
$L/\xi_{\rm ch} = 0$ and $L/\xi_{s} = 0$, i.e. provides the scaling function
along two particular lines.
If the interpretation in terms of a multicritical point
is correct, the functions $f_{\cal R}(x)$ provide information on
the behavior at the multicritical point. Indeed, while Ising or KT behavior is
observed for $x\to\infty$, in the opposite limit $x\to 0$ we
obtain the value of the RG-invariant quantity $\cal R$
at the multicritical point.
The nature of the
multicritical point is unclear. One possibility
is the $O(4)$ multicritical point that is present in the
$\phi^4$ theory for $D = 0$. Another possibility is the
multicritical point that appears in frustrated XY systems with
modulated couplings (for instance in model (\ref{FFXY}) for $\alpha \not=1$)
\cite{BDGL-86,EHKT-89,GKS-98} or in generalizations
of the IsXY model, in which an additional spin-spin coupling is
added, breaking the $O(2)\oplus O(2)$ symmetry.
Finally, we wish to compare with field-theory (FT) approaches.
Perturbative analyses \cite{CP-01}
of model (\ref{HLGW}) indicate the existence of
a new universality class associated with the symmetry-breaking pattern
$[O(2)\oplus O(2)]\otimes {\mathbb Z}_2\to O(2)$. Even though we have found
no evidence for it, our results do not necessarily contradict those
of Ref.~\cite{CP-01}. It is possible that the models we have considered
are outside the attraction domain of the FT fixed point. If this is the case,
field theory provides another candidate for the multicritical point.
The models we consider could be outside, but close to the attraction domain
of the fixed point---this is not implausible since $\xi_s^{(c)}$ is
large---so that the crossover behavior is controlled by the FT fixed point.
We wish also to make a remark on the validity of (\ref{HLGW}).
In the derivation of Hamiltonian (\ref{HLGW})
by using the standard Hubbard-Stratonovitch transformation,
terms with more than four fields are neglected \cite{CD-85,YD-85}.
In particular, terms of the form $ (\vec{\phi}_1 \cdot \vec{\phi}_2)^n$
appear at sixth order $(n=3)$ (resp. eighth order, $n=4$)
in the case of the triangular-lattice (resp. square-lattice) FFXY model
\cite{YD-85}. These terms have only ${\mathbb Z}_2 \oplus O(2)$ symmetry
and thus, under renormalization-group transformations, are bound to
generate a term of the form $(\vec{\phi}_1 \cdot \vec{\phi}_2)^2$, or
even a quadratic term
of the form $(\vec{\phi}_1 \cdot \vec{\phi}_2)$. We obtain therefore the
multicritical Hamiltonian
\begin{equation}
{\cal H}_{\rm LGW,2} =
{\cal H}_{\rm LGW} + \int d^d x
\left[
{1\over2} r_2 (\vec{\phi}_1 \cdot \vec{\phi}_2)
+ {1\over 4} z_0 (\vec{\phi}_1 \cdot \vec{\phi}_2)^2
+ {1\over 4} z_1 (\vec{\phi}_1 \cdot \vec{\phi}_2) (\phi_1^2 + \phi_2^2)
\right],
\label{HLGW2}
\end{equation}
as it has been postulated for modulated systems.
If this interpretation is correct, the FT fixed point may only be relevant
at the multicritical point, provided it is stable under the quartic
perturbations that break the
$O(2)\oplus O(2)$ symmetry. This holds in three dimensions \cite{PV-04},
but nothing is known in the two-dimensional case. It should be remarked
that these considerations are only relevant for the FFXY model.
The IsXY and $\phi^4$ theories {\em are} $O(2)\oplus O(2)$ invariant and thus
the correct FT Hamiltonian is clearly (\ref{HLGW}) and not (\ref{HLGW2}).
\section{Chiral transition} \label{sec3}
In order to determine the nature of the chiral transition,
we have studied the behavior of several quantities
at fixed $R_c\equiv \xi_c/L$ where $\xi_c$ is the chiral correlation length
(see \cite{HPV-05lungo} for a precise definition in the different models).
We use the method proposed in \cite{Hasenbusch-99} and
further discussed in \cite{CHPRV-01}.
We fix $R_c$ equal to $R_{\rm Is}$, where $R_{\rm
Is}=0.9050488292(4)$ is the universal value of $\xi/L$ at the critical
point in the 2-$d$ Ising universality class \cite{SS-00}.
We stress that this choice does not bias our analysis in favor of the Ising
nature of the chiral transition. For any chosen value
(as long as it is positive) and whatever the
universality class of the chiral transition is (it may also coincide
with the spin transition), we are studying the
model for $L$-dependent temperatures $T_{\rm eff}(L)$ such that
$T_{\rm eff}(L) \to T_{\rm ch}$ for $L\to \infty$.
Note, however, that quantities like the Binder parameter depend
on the chosen value for $R_c$. Indeed, in the finite-size scaling limit,
$R_c = f_R[L^{1/\nu} (T - T_{\rm ch})]$. Therefore, fixing $R_c$ is
equivalent to fixing $X \equiv L^{1/\nu} (T_{\rm eff}(L) - T_{\rm ch})$.
Since the Binder parameter satisfies an analogous relation
$B_c = f_B[L^{1/\nu} (T - T_{\rm ch})]$, at fixed $R_c$
$B_c$ converges to $f_B(X)$. By fixing $R_c$ to the critical Ising value,
we will be able to perform an additional consistency check.
If the chiral transition belongs to the Ising universality
class, then $X = 0$ (apart from scaling corrections) and we should find that
any RG-invariant quantity
converges to its critical-point value in the Ising model.
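Schematically, working at fixed $R_c$ means tuning, for each $L$, the temperature until the measured $\xi_c/L$ hits the target value. A bisection sketch (ours; the function \texttt{estimate\_Rc} is a hypothetical placeholder standing for a Monte Carlo estimate of $\xi_c/L$ at a given temperature, and we assume it decreases monotonically with $T$):
\begin{verbatim}
def temperature_at_fixed_Rc(L, estimate_Rc, R_target=0.9050488292,
                            T_lo=0.1, T_hi=2.0, tol=1.0e-4):
    # Bisect in T until xi_c / L matches R_target at size L.
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if estimate_Rc(T_mid, L) > R_target:
            T_lo = T_mid    # too cold: xi_c/L still above the target
        else:
            T_hi = T_mid    # too hot: xi_c/L below the target
    return 0.5 * (T_lo + T_hi)   # this is T_eff(L)
\end{verbatim}
In a real analysis, statistical errors make a plain bisection noisy, and interpolating between nearby runs is more practical.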
\begin{figure}[tb]
\centerline{\psfig{width=9truecm,angle=0,file=xis.eps}}
\caption{
Spin correlation length $\xi_s$ at fixed $R_c=R_{\rm Is}$
(chiral transition).
For $L\to \infty$, we have $\xi_s = 118(1)$ in the FFXY model,
$\xi_s = 52.7(4)$ in the IsXY with $C=0$, and
$\xi_s \approx 380$ in the $\phi^4$ model with $U=1$ and $D=1/2$.
}
\label{xis-chiral}
\end{figure}
We first verify that $\xi_s$ converges to a constant as $L\to\infty$.
In Fig.~\ref{xis-chiral} we show the numerical results. The correlation length
is clearly finite in the FFXY model and in the IsXY model with $C=0$.
In the $\phi^4$ model with $D=1/2$ we do not yet observe that
$\xi_s$ is finite, although it is already clear that $\xi_s$ does not
increase linearly with $L$, as would be the case if
the spin correlation length were infinite for $L=\infty$.
Then, we verified that the transition, if continuous, belongs to the Ising
universality class. The best evidence is provided by the Binder chiral
parameter. If the transition belongs to the Ising universality
class we should find \cite{SS-00,CHPV-02,HPV-05lungo}
\begin{equation}
B_c = B_{\rm Is} + b L^{-7/4},
\end{equation}
where $B_{\rm Is} = 1.167923(5)$ \cite{SS-00}. The results reported in
Fig.~\ref{bc-chiral} are fully consistent with Ising behavior for
$L\gg \xi_s^{(c)}$. Not only do we observe the Ising asymptotic value,
but also the rate of convergence is well verified. Also the
critical exponents $\nu$ and $\eta$ converge to the Ising values
for $L\gg \xi_s^{(c)}$ (results for $\nu$ will be shown below).
\begin{figure}[tb]
\centerline{\psfig{width=9truecm,angle=0,file=bic.eps}}
\vspace{1mm}
\caption{Chiral Binder parameter $B_c$ at fixed
$R_c=R_{\rm Is}$ (chiral transition). Plot of
$\Delta B_c\equiv B_c - B_{\rm Is}$ at fixed $R_c=R_{\rm Is}$ vs $L^{-7/4}$,
for the FFXY model, the $\phi^4$ model at $D=1/2$, and the IsXY model
at $C=0$. $B_{\rm Is} = 1.167923(5)$ is the value of the
Binder parameter at the critical point in the Ising model \protect\cite{SS-00}.
}
\label{bc-chiral}
\end{figure}
\section{Low-temperature phase and spin transition} \label{sec4}
Since the spin correlation length is finite at the chiral transition there
must be a paramagnetic phase with chiral order. Such a phase ends at a
second transition which is followed by the LT phase in which
chiral order and spin quasi-long-range order coexist.
We first study the nature of the LT phase and
verify the breaking of the ${\mathbb Z}_2$ invariance.
Direct evidence is provided by the chiral Binder parameter $B_c$.
If chiral modes are magnetized, $B_c\to 1 + O(L^{-2})$ for
$L\to \infty$. This behavior is very well verified in all models.
In the $\phi^4$ model, the ${\mathbb Z}_2$ group corresponds to the
field-interchange symmetry. Thus, the ${\mathbb Z}_2$ symmetry breaking
implies that only one of the two fields $\phi_1$ and $\phi_2$
is critical in the LT phase. Our numerical results fully confirm this
expectation. One can distinguish the fields according to the
value of $Q_a = \sum_i \phi^2_a$. The field with the largest value of $Q$
is critical (for instance, the corresponding susceptibility
and correlation length diverge as $L\to \infty$),
while the other one is not (the susceptibility and the correlation length
have a finite limit as $L\to \infty$).
\begin{figure}[tb]
\centerline{\psfig{width=9truecm,angle=0,file=xiloeta.eps}}
\vspace{2mm}
\caption{
Estimates of $R_s\equiv \xi_s/L$ vs $\eta$ in the LT phase.
The continuous line is the prediction obtained by assuming that in the
LT phase criticality is controlled by the same line of Gaussian fixed points
as in the XY model.}
\label{xil-vs-eta}
\end{figure}
Once we have checked that chiral modes are magnetized (this is of course
obvious because of the presence of the chiral transition), we
study the behavior of the spin variables and
verify that the large-$L$ behavior is controlled by the same line of
Gaussian fixed points as in the
standard XY model. In the XY model one can derive universal relations
among renormalization-group invariant quantities that are valid
in the whole LT phase, up to the KT transition \cite{Hasenbusch-05}.
Indeed, below the KT transition the spin-wave approximation is asymptotically
exact as $L\to\infty$ and allows an analytic determination of any quantity
in terms of the spin-wave parameter. Such a parameter is not universal
and can be eliminated by expressing a renormalization-group invariant
quantity in terms of another. For instance, one can express
the helicity modulus on a square lattice $L\times L$ with periodic
boundary conditions in terms of the exponent $\eta$ computed from
the size dependence of the magnetic susceptibility ($\chi\sim L^{2-\eta}$):
\begin{equation}
\Upsilon = {1\over 2\pi \eta} -
{\sum_{n=-\infty}^\infty n^2 \exp(-\pi n^2/\eta) \over
\eta^2 \sum_{n=-\infty}^\infty \exp(-\pi n^2/\eta) },
\end{equation}
where $0< \eta \le 1/4$.
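The sums converge extremely rapidly, so the formula is trivial to evaluate numerically; a small sketch (ours):
\begin{verbatim}
import math

def helicity_spinwave(eta, nmax=20):
    # Upsilon(eta) = 1/(2 pi eta) - S2 / (eta^2 * S0), with
    # Sk = sum over integer n of n^k * exp(-pi n^2 / eta)
    s0 = sum(math.exp(-math.pi * n * n / eta)
             for n in range(-nmax, nmax + 1))
    s2 = sum(n * n * math.exp(-math.pi * n * n / eta)
             for n in range(-nmax, nmax + 1))
    return 1.0 / (2.0 * math.pi * eta) - s2 / (eta * eta * s0)

print(helicity_spinwave(0.25))   # ~ 0.636508, the value at eta = 1/4
\end{verbatim}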
Analogously, one can express $\xi_s/L$ in terms of $\eta$.
In Fig.~\ref{xil-vs-eta} we compare the spin-wave prediction with
numerical results. The agreement is quite good, confirming that
FFXY systems and the standard XY model have the same
LT phase, as far as the spin degrees of freedom are concerned.
The above-reported results for the LT phase make it plausible that
the spin transition belongs to the KT universality class. Another
check is provided by our numerical results
for $\xi_s/L$ and $\Upsilon$. In the XY model, at the KT transition
\cite{Hasenbusch-05}
$\xi_s/L \approx 0.750691 + 0.212430/\ln(L/C_1)$ and
$\Upsilon \approx 0.636508 + 0.318899/\ln(L/C_2)$.
In all models we have studied
these two quantities assume the XY values approximately at the
same temperature---which we identify with the spin
critical temperature---thereby confirming the KT nature of the
transition.
\section{Crossover behavior} \label{sec5}
As we already mentioned, our data show the scaling behaviors (\ref{scaling}).
In this section we shall give a few details.
\begin{figure}[tb]
\begin{minipage}{17pc}
\includegraphics[width=16pc]{xillog.eps}
\end{minipage}\hspace{2pc}%
\begin{minipage}{17pc}
\includegraphics[width=16pc]{xilisxylog.eps}
\end{minipage}
\caption{
Ratio $R_s\equiv \xi_s/L$ ($\xi_s$ is the spin correlation
length) at the chiral transition vs $L_r \equiv L/l$
for the FFXY model and $\phi^4$ (left) and IsXY (right) models.
We set $l=\xi_s^{(c)}$ for the FFXY model; the values of $l$ for the
other models are obtained by requiring that all data fall
on a single curve. Note the logarithmic scale on both axes.
For $L_r\to \infty$, $R_s$ converges to 0.
}
\label{xisul-crossover}
\end{figure}
In order to verify the scaling behavior (\ref{scaling}) at the
chiral transition\footnote{
Note that our results have been obtained at fixed $R_c$ and not at the
chiral critical point. However, since we are dealing with an Ising
transition and $R_c$ has been fixed to the critical-point Ising
value, this is irrelevant in the scaling limit. Had we fixed $R_c$
to a different value, we would have obtained quantitatively different
scaling curves, corresponding to
the limit $\xi_{\rm ch},L\to \infty$ at fixed $L/\xi_{\rm ch}\not=0$
in Eq.~(\ref{MCR-general}) ($\xi_{\rm ch}$ is the infinite-volume
chiral correlation length).
}
we have first considered the
data for $R_s\equiv \xi_s/L$ ($\xi_s$ is the spin correlation length)
at the chiral transition and we have investigated whether
they fall on a single curve by using a rescaled
variable $L_r \equiv L/l$, where $l$ is a rescaling factor that
depends on the model. The results are reported in Fig.~\ref{xisul-crossover}.
The data fall on a single curve with remarkable precision, i.e.
$\xi_s/L = f_s(L/l)$ where $f_s(x)$ is model independent. Note that
the rescaling factors change significantly from one model to another:
for instance, $l/l_{\rm FFXY} = 0.75$ (resp. 7.0) in the $\phi^4$ model
with $D = 4$ (resp. $D = 1/5$) and
$l/l_{\rm FFXY} = 0.031$ (resp. 12) in the IsXY model
with $C = 0.3$ (resp. $C = -2$). It is easy to realize that $l$ should
be proportional to $\xi_s^{(c)}$, the infinite-volume spin correlation length
at the chiral transition. Indeed, since $\xi_s \to \xi_s^{(c)}$ as
$L\to \infty$, we have $f_s(x) \sim a/x$ for $x\to \infty$, where
$a$ is model independent ($f_s(x)$ is model independent). Moreover,
$\xi_s^{(c)} = l/a$. Therefore, if we fix $l = \xi^{(c)}_s$ in
one model, then the same holds in all different models. In the figures
we have chosen $l_{\rm FFXY} = 118 \approx \xi_s^{(c)}$ and thus the plots
we present are indeed in terms of $L/\xi_s^{(c)}$, even in those cases
in which we have not been able to determine directly the spin
correlation length for $L\to \infty$.
\begin{figure}[tb]
\begin{minipage}{17pc}
\includegraphics[width=16pc]{bi1.eps}
\end{minipage}\hspace{2pc}%
\begin{minipage}{17pc}
\includegraphics[width=16pc]{bi2.eps}
\end{minipage}
\begin{center}
\begin{minipage}{17pc}
\includegraphics[width=16pc]{bi3.eps}
\end{minipage}
\end{center}
\caption{
Spin Binder parameters $B_s$ and $B_{s\phi}$
at the chiral transition vs $L_r\equiv L/l$. Results
for the FFXY, $\phi^4$, and IsXY models.
For $L_r\to \infty$, $B_s$ and $B_{s\phi}$ converge to 2 and 3/2
respectively.
The rescalings $l$ are the same as in Fig.~\protect\ref{xisul-crossover}.
}
\label{bs-crossover}
\end{figure}
\begin{figure}[tb]
\begin{minipage}{17pc}
\includegraphics[width=16pc]{yupsilon.eps}
\end{minipage}\hspace{2pc}%
\begin{minipage}{17pc}
\includegraphics[width=16pc]{bicr.eps}
\end{minipage}
\caption{
Helicity modulus $\Upsilon$ (left) and
chiral Binder parameter $B_c$ (right) at the chiral transition
vs $L_r \equiv L/l$. We report
$\Delta B_c = B_c - B_{\rm Is}$, where
$B_{\rm Is}$ is the value of the Binder parameter at the
critical point in the Ising model.
The rescalings $l$ are the same as in Fig.~\protect\ref{xisul-crossover}.
}
\label{Upsilon-crossover}
\end{figure}
In order to verify the universality of the scaling Ansatz (\ref{scaling}),
we have considered the spin Binder parameters $B_{s\phi}$ and $B_s$
(we use two different inequivalent definitions, see
\cite{HPV-05lungo}). In Fig.~\ref{bs-crossover}
we plot $B_s$ and $B_{s\phi}$ versus $L/l$.
We use the rescaling factors that have been
determined in the analysis of $R_s$. The agreement is quite good.
Deviations appear as $D$ or $-C$ increases. In particular, the data
for $D = 4$ ($\phi^4$ model) and for $C = -3$ (IsXY model) are outside the
curve. They would fall on the same curve as the others, only if the
rescaling factor is changed by a factor of 2 and 5 respectively in the two
cases. In Fig.~\ref{Upsilon-crossover} we plot the results for the
helicity modulus: again all data fall on a single curve quite precisely.
It is interesting to note that, for $0.02\lesssim L_r \lesssim 0.5$,
$R_s$ and $\Upsilon$ show an approximate power-law behavior: they
behave as $L_r^{-\epsilon}$ with $\epsilon\approx 0.1$ ($R_s$) and
$\epsilon\approx 0.33$ ($\Upsilon$). If this behavior holds also
for smaller values of $L_r$, we have
$\Upsilon, R_s \to \infty$ for $L_r \to 0$.
\begin{figure}[tb]
\centerline{\epsfig{width=12truecm,angle=0,file=allnu.eps}}
\caption{Effective exponent
$1/\nu_{\rm eff}$ computed by using $R_c\equiv \xi_c/L$, the
Binder chiral parameter $B_c$, and $R_s\equiv \xi_s/L$
($\xi_c$ and $\xi_s$ are respectively the chiral and spin
correlation lengths) vs
$L_r\equiv L/l$ at the chiral transition.
For $L_r\to\infty$, $1/\nu_{\rm eff}$ converges to
$1/\nu_{\rm Is} = 1$ for $R_c$ and $B_c$, and $-1$ for $R_s$.
The rescalings $l$ are the same as in Fig.~\protect\ref{xisul-crossover}.
}
\label{nueff}
\end{figure}
Next we consider the chiral variables. They also show the universal behavior
(\ref{scaling}).
In Fig.~\ref{Upsilon-crossover} we report the Binder chiral parameter,
with the same scaling factors $l$ as before. The data fall again on a single
curve although the curve does not change significantly as $L/l$ varies,
at variance with the spin variables.
Finally, we show the effective exponent $\nu_{\rm eff}$ that can be
obtained from the derivatives of $R_c$, $B_c$, and $R_s$.
The exponent obtained from the chiral variables should converge to the
Ising value $\nu = 1$ as $L\to \infty$, and indeed it does.
The approach is however
nonmonotonic, $\nu_{\rm eff}$ being first smaller than 1, then larger.
It is interesting to note that for $L_r \lesssim 1$,
i.e.~$L\lesssim \xi_s^{(c)}$, $\nu_{\rm eff}$ is approximately constant
and equal to 0.8. This behavior explains previous results. Indeed, if
one performs simulations only for values of $L$ such that
$L_r \lesssim 1$ (this was the case in previous simulations of the
FFXY model, since
$\xi_s^{(c)} \approx 10^2$), one would estimate $\nu = 0.8$.
Note also that, for $L\lesssim \xi_s^{(c)}$, the effective exponent
$\nu_{\rm eff}$ obtained from the spin variable $R_s$ is approximately
constant and close to the value obtained by using chiral
variables. In this range of values of $L$, the chiral and spin variables
both appear to be critical.
\begin{figure}[tb]
\begin{minipage}{17pc}
\includegraphics[width=16pc]{xisprs2.eps}
\end{minipage}\hspace{2pc}%
\begin{minipage}{17pc}
\includegraphics[width=16pc]{ysprs2.eps}
\end{minipage}
\caption{
Ratio $R_s\equiv \xi_s/L$ (left) and helicity modulus $\Upsilon$ (right)
at the spin transition vs $L_r \equiv L/l$.
The rescalings $l$ are the same as in Fig.~\protect\ref{xisul-crossover}.
}
\label{spin-crossover}
\end{figure}
The same analysis can be repeated at the spin transition. In
Fig.~\ref{spin-crossover} we report $\xi_s/L$ and $\Upsilon$
versus $L/l$ where the rescaling factors $l$ are those determined at the chiral
transition. The agreement is again quite good. The existence of scaling
at the two transitions with the same rescaling factors is another
piece of evidence in favor of the multicritical origin of
the universality we observe.
The crossover curves we have computed can give us some hints on the nature
of the multicritical point, if it really exists. Indeed, the
behavior at the multicritical point is simply obtained by considering
the limit $L_r \to 0$.
First, let us notice that the data at the chiral transition
apparently exclude the possibility of a decoupled multicritical point
in which spin and chiral modes have XY and Ising behavior.
Indeed, the helicity modulus is
very much different from the KT value,
$\Upsilon_{KT} = 0.63650817819\ldots$
\cite{Hasenbusch-05} and $B_c$ is apparently smaller than
the Ising value for $L_r \to 0$.
O(4) behavior is possible, since, as we already discussed,
our data are compatible with
$\Upsilon, R_s \to \infty$ for $L_r \to 0$. Also the data for the
Binder parameters $B_s$ and $B_{s\phi}$ are compatible with the O(4)
value, $B_s = B_{s\phi} = 1$. For $L_r\to 0$, the crossover
curves at the spin transition should converge to the same values as
those at the chiral critical point. This is not evident from the
results plotted in Fig.~\ref{spin-crossover}.
This can be easily explained.
At the spin transition the natural scale is the
infinite-volume chiral correlation length
at the transition $\xi_c^{(s)}$. We expect XY behavior for
$L/\xi_c^{(s)} \gg 1$ and multicritical behavior in the opposite case.
Numerically, we find $\xi_s^{(c)}/\xi_c^{(s)} \approx 15$, so that
$L/\xi_c^{(s)} = 1$ corresponds to $L_r = L/\xi_s^{(c)} \approx 0.07$.
As can be seen from the figure, none of our data satisfies the
condition $L_r \ll 0.07$, so that we are unable to observe
multicritical behavior at the spin transition.
\section*{References}
\section*{Introduction}
\begingroup
\renewcommand{\theequation}{\arabic{equation}}
Consider isometric immersions of
$\tilde{\Sigma}^n(c)$ into $\tilde{\Sigma}^{n+1}(c)$,
where $\tilde{\Sigma}^m(c)$ denotes the simply connected
$m$-dimensional space form of constant sectional curvature $c$.
Such immersions are only cylinders \cite{HN}
in the Euclidean case $(c=0)$.
In the spherical case $(c>0)$,
such immersions are only totally geodesic embeddings \cite{OS}.
On the other hand, in the hyperbolic case $(c<0)$,
it is well-known that there are nontrivial examples of
such isometric immersions \cite{Nomizu,Ferus,AbeHaas}
(see Figure~\ref{fig:Nomizu} for the case of $n=2$).
\begin{figure}[htb]
\begin{tabular}{{c@{\hspace{10mm}}c@{\hspace{10mm}}c@{\hspace{10mm}}c}}
\resizebox{2.5cm}{!}{\includegraphics{TG.eps}} &
\resizebox{2.5cm}{!}{\includegraphics{Cylinder.eps}} &
\resizebox{2.5cm}{!}{\includegraphics{InfiniteCone.eps}}&
\resizebox{2.5cm}{!}{\includegraphics{RectifyHelix.eps}} \\
{\footnotesize (A) totally geodesic} &
{\footnotesize (B) Example \ref{ex:1}} &
{\footnotesize (C) Example \ref{ex:2}} &
{\footnotesize (D) Example \ref{ex:3}}
\end{tabular}
\caption{Examples constructed by Nomizu \cite{Nomizu} (see Section 3).}
\label{fig:Nomizu}
\end{figure}
\noindent
We denote by $\H^n=\tilde{\Sigma}^n(-1)$ the $n$-dimensional hyperbolic space,
that is, the complete simply connected and connected Riemannian manifold
of constant curvature $-1$.
Nomizu \cite{Nomizu} and Ferus \cite{Ferus} showed that,
for a given $C^{\infty}$ totally geodesic foliation of codimension $1$ in $\H^n$,
there is a family of isometric immersions of $\H^n$ into $\H^{n+1}$
without umbilic points such that, for each immersion, the foliation defined by
its asymptotic distribution coincides with the given foliation.
Furthermore, Abe, Mori and Takahashi \cite{AbeMoriTakahashi}
parametrized the space of isometric immersions of $\H^n$ into $\H^{n+1}$
by a family of properly chosen countably many $\boldsymbol{R}^n$-valued functions.
In this paper, we shall give another parametrization in the case of $n=2$:
we represent isometric immersions of $\H^2$ into $\H^3$
by curves in the space $L\H^3$ of oriented geodesics in $\H^3$.
Moreover, we characterize certain asymptotic behavior of such immersions
in terms of their mean curvature.
More precisely, an isometric immersion of $\H^2$ into $\H^3$
is a complete {\it extrinsically flat\/} surface in $\H^3$,
that is, a complete surface whose extrinsic curvature vanishes.
It is known that a complete extrinsically flat surface is {\it ruled\/},
i.e., a locus of a $1$-parameter family of geodesics in $\H^3$
\cite{Portnoy} (see Proposition \ref{prop:Portnoy}).
Hence, we shall deal with extrinsically flat ruled surfaces:\
{\it developable\/} surfaces in $\H^3$.
On the other hand, it is well-known that the space of oriented geodesics $L\H^3$
has two significant geometric structures:
the natural complex structure $J$ \cite{Hitchin,GG}
and the para-complex structure $P$ \cite{Kaneyuki,Kanai,Kimura}.
Recently, Salvai \cite{Salvai} determined the family of metrics
$\{\mathcal{G}_{\theta}\}_{\theta \in S^1}$
each of which is invariant under the action of the identity component
of the isometry group of $\H^3$.
Each metric $\mathcal{G}_{\theta}$ is of neutral signature,
K\"ahler with respect to $J$ and para-K\"ahler with respect to $P$.
In this paper,
we especially focus on two neutral metrics
$\mathcal{G}^{\mathfrak{r}}=\mathcal{G}_0$
and $\mathcal{G}^{\mathfrak{i}}=\mathcal{G}_{\pi/2}$
in $\{\mathcal{G}_{\theta}\}_{\theta \in S^1}$.
In Section~\ref{sec:geomstr},
we shall investigate the relationships among
$J$, $P$, $\{\mathcal{G}_{\theta}\}_{\theta \in S^1}$
and the canonical symplectic form on $L\H^3$,
and give a characterization of $\mathcal{G}^{\mathfrak{i}}$
and $\mathcal{G}^{\mathfrak{r}}$ (Proposition \ref{prop:symplectic}).
In Section~\ref{sec:null},
we introduce a representation formula for developable surfaces in $\H^3$
in terms of {\it null-causal curves\/} (Proposition \ref{prop:representation}):
\begin{introtheorem}\label{thm:null}
A curve in $L\H^3$ which is null with respect to $\mathcal{G}^{\mathfrak{i}}$
and causal with respect to $\mathcal{G}^{\mathfrak{r}}$
generates a developable surface in $\H^3$.
Conversely, any developable surface generated by complete
geodesics in $\H^3$ is given in this manner.
\end{introtheorem}
\noindent
Here, a regular curve in a pseudo-Riemannian manifold
is called {\it null\/} (resp.\ {\it causal\/}) if every tangent vector
gives a null (resp.\ timelike or null) direction.
In Section~\ref{sec:exponential},
we shall investigate curves in $L\H^3$ which are null with respect to
both $\mathcal{G}^{\mathfrak{r}}$ and $\mathcal{G}^{\mathfrak{i}}$.
Such curves generate cones whose vertices are on the ideal boundary,
which we call {\it ideal cones\/} (Proposition \ref{prop:vertex}).
On the other hand,
on each asymptotic curve $\gamma$ on a complete developable surface,
the mean curvature is proportional to $e^{\pm t}$ or $1/\cosh t$,
where $t$ denotes the arc length parameter of $\gamma$ (Lemma \ref{lem:Massey}).
Based on this fact, a complete developable surface is
said to be {\it of exponential type\/},
if the mean curvature is proportional to $e^{\pm t}$ on each asymptotic curve
in the non umbilic point set (see Definition \ref{def:exptype}).
Then we have the following
\begin{introtheorem}\label{thm:exponential}
A real-analytic developable surface of exponential type is an ideal cone.
\end{introtheorem}
\noindent
The assumption of ``real-analyticity'' cannot be removed (see Example \ref{ex:NRA}).
As mentioned before,
complete flat surfaces in the Euclidean 3-space $\boldsymbol{R}^3$ are only cylinders.
However, if we admit {\it singularities},
there are a lot of interesting examples.
Murata and Umehara \cite{MurataUmehara} investigated
the global geometric properties of
a class of flat surfaces with singularities in $\boldsymbol{R}^3$, so-called {\it flat fronts\/}.
On the other hand, there is another generalization of
ruled (resp.\ developable) surfaces in $\boldsymbol{R}^3$:
{\it horocyclic\/} (resp.\ {\it horospherical flat horocyclic\/}) surfaces in $\H^3$
(for more details, see \cite{IzumiyaSajiTakahashi,TakizawaTsukada}).
\begin{acknowledgements}
Thanks are due to Kotaro Yamada, the author's advisor,
for many helpful comments and discussions.
The author also would like to thank Masaaki Umehara
for his intensive lecture on surfaces with singularities
at Kumamoto University in June 2009,
which made the author interested in the present subject.
He is also deeply grateful to Jun-ichi Inoguchi for valuable discussions
and constant encouragement.
Finally, the author expresses gratitude to
Masahiko Kanai, Soji Kaneyuki and Yu Kawakami for their helpful comments.
\end{acknowledgements}
\endgroup
\section{Preliminaries}
\label{sec:prelim}
\subsection{Hyperbolic $3$-space}
\label{sec:H3}
\hspace{2mm}
We denote by $\L^4$ the Lorentz-Minkowski $4$-space with the Lorentz metric
\[
\inner{ {}^t (x_{0}, x_{1}, x_{2}, x_{3}) }{ {}^t (y_{0}, y_{1}, y_{2}, y_{3}) }
= -x_{0}y_{0}+x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3},
\]
where ${}^t$ denotes the transposition.
Then the hyperbolic $3$-space is given by
\begin{equation}\label{eq:LM-model}
\H^{3}=\left\{\left. \vect{x}={}^t (x_{0}, x_{1}, x_{2}, x_{3})\in \L^{4} \,\right|\,
\inner{\vect{x}}{\vect{x}}=-1, ~x_{0}>0 \right\}
\end{equation}
with the induced metric from $\L^{4}$,
which is a complete simply connected and connected Riemannian $3$-manifold
with constant sectional curvature $-1$.
We identify $\L^4$ with the set of $2\times 2$ Hermitian matrices
${\rm Herm}(2)=\{X^{\ast}=X\}~(X^{\ast}:= {}^t\bar{X})$ by
\[
\L^4 \ni {}^t(x_0,x_1,x_2,x_3) \longleftrightarrow \left(
\begin{array}{cc}
x_0+x_3 & x_1+i x_2 \\
x_1-i x_2 & x_0-x_3
\end{array}
\right) \in \operatorname{Herm}(2)
\]
with the metric
\[
\inner{X}{Y}=-\frac{1}{2}\operatorname{trace}(X\tilde{Y}),\qquad \inner{X}{X}=-\det X,
\]
where $\tilde{Y}$ is the cofactor matrix of $Y$.
Under this identification, the hyperbolic $3$-space $\H^3$ is represented as
\begin{equation}\label{eq:Herm-model}
\H^3=\left\{\left. p \in \operatorname{Herm}(2) \,\right|\, \det p=1,\, \operatorname{trace} p>0 \right\}.
\end{equation}
We call this realization of $\H^3$ the {\it Hermitian model}.
We fix the basis $\{ \sigma_0, \sigma_1, \sigma_2, \sigma_3 \}$ of $\operatorname{Herm}(2)$ as
\begin{equation}
\label{eq:basis}
\sigma_0 = {\rm id}, \quad
\sigma_1= \left(\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right),
\quad
\sigma_2= \left(\begin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right),
\quad
\sigma_3= \left(\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right).
\end{equation}
In the Hermitian model, the cross product at $T_p\H^3$ is given by
\begin{equation}\label{eq:cross}
X \times Y= \frac{i}{2}(Xp^{-1}Y-Yp^{-1}X),
\end{equation}
for $X,Y\in T_p\H^3$ (cf.\ \cite[(3-1)]{KRSUY}).
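Note that, with this sign convention, a direct computation at $p=\sigma_0$ gives
\[
\sigma_1\times\sigma_2
=\frac{i}{2}\left(\sigma_1\sigma_2-\sigma_2\sigma_1\right)
=\frac{i}{2}\left( i\sigma_3-(-i\sigma_3)\right)
=-\sigma_3,
\]
where $\sigma_1$, $\sigma_2$, $\sigma_3$ are the basis elements in
\eqref{eq:basis}, regarded as an orthonormal triple in $T_{\sigma_0}\H^3$.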
The special linear group $\operatorname{SL}(2,\boldsymbol{C})$ acts isometrically and transitively on $\H^3$ by
\begin{equation}\label{eq:isom}
\H^3 \ni p \longmapsto apa^{\ast} \in \H^3,
\end{equation}
where $a \in \operatorname{SL}(2,\boldsymbol{C})$.
The isotropy subgroup of $\operatorname{SL}(2,\boldsymbol{C})$ at $\sigma_0$ is
the special unitary group $\operatorname{SU}(2)$. Therefore we can identify
\begin{eqnarray*}
\H^3
= \operatorname{SL}(2,\boldsymbol{C})/\operatorname{SU}(2)
= \left\{\left. aa^{\ast} \,\right|\, a \in \operatorname{SL}(2,\boldsymbol{C}) \right\}
\end{eqnarray*}
in the usual way.
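For example, taking the one-parameter subgroup
\[
a(t)=\left(
\begin{array}{cc}
e^{t/2} & 0\\
0 & e^{-t/2}
\end{array}
\right) \in \operatorname{SL}(2,\boldsymbol{C}),
\qquad
a(t)a(t)^{\ast}=\left(
\begin{array}{cc}
e^{t} & 0\\
0 & e^{-t}
\end{array}
\right)
=\sigma_0\cosh t+\sigma_3\sinh t,
\]
we see that the orbit of $\sigma_0$ under this subgroup is the unit speed
geodesic through $\sigma_0$ with initial velocity $\sigma_3$,
which will appear repeatedly below.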
Moreover, the identity component of the isometry group ${\rm Isom}_0(\H^3)$
is isomorphic to $\operatorname{PSL}(2,\boldsymbol{C}):= \operatorname{SL}(2,\boldsymbol{C})/\{\pm \text{id} \}$.
\subsection{The unit tangent bundle}
\label{sec:UH3}
\hspace{2mm}
We denote by $U\H^3$ the unit tangent bundle of $\H^3$,
which can be identified with
\[
U\H^3=\left\{ (p,v) \in \operatorname{Herm}(2) \times \operatorname{Herm}(2)
\,\left|\, \begin{array}{cc} \det p=-\det v=1,\\ \operatorname{trace} p>0,
~ \inner{p}{v}=0 \end{array} \right. \right\}.
\]
The projection
\begin{equation}\label{eq:pi}
\pi : U\H^3 \ni (p,v) \longmapsto p \in \H^3
\end{equation}
gives a sphere bundle.
The tangent space at $(p,v) \in U\H^3$ can be written by
\begin{equation}\label{eq:tangUH3}
T_{(p,v)}U\H^3=\left\{ (X,V) \in \operatorname{Herm}(2)\times\operatorname{Herm}(2) \,\left|\,
\begin{array}{cc} \inner{p}{X}=\inner{v}{V}=0,\\
\inner{p}{V}=-\inner{X}{v} \end{array} \right.\right\}.
\end{equation}
The {\it canonical contact form} $\Theta$ on $U\H^3$ is given by
\begin{equation}\label{eq:cano-cont}
\Theta_{(p,v)}(X,V)=\inner{X}{v}=-\inner{p}{V},\qquad (X,V)\in T_{(p,v)}U\H^3.
\end{equation}
The isometric action of $\operatorname{SL}(2,\boldsymbol{C})$ on $\H^3$ as in \eqref{eq:isom} induces
a transitive action on $U\H^3$ as
\[
U\H^3 \ni (p,v) \longmapsto (apa^{\ast},ava^{\ast}) \in U\H^3,
\]
where $a \in \operatorname{SL}(2,\boldsymbol{C})$.
The isotropy subgroup of $\operatorname{SL}(2,\boldsymbol{C})$ at $(\sigma_0,\sigma_3) \in U\H^3$ is
\[
\left\{\left. \left( \begin{array}{cc} e^{i \theta} & 0\\ 0 & e^{-i \theta} \end{array} \right)
\,\right|\, \theta \in \boldsymbol{R}/2\pi\boldsymbol{Z} \right\}
\]
which is isomorphic to the unitary group $\operatorname{U}(1)$,
where $\sigma_0$ and $\sigma_3$ are as in \eqref{eq:basis}.
Hence we have
\begin{equation}\label{eq:unitan}
U\H^3
=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{U}(1)
= \left\{\left. (aa^{\ast},a\sigma_3 a^{\ast}) \,\right|\, a \in {\rm SL}(2,\boldsymbol{C}) \right\}.
\end{equation}
\subsection{The space of oriented geodesics}
\label{sec:LH3}
\hspace{2mm}
The space $L\H^3$ of oriented geodesics in $\H^3$ is
defined as the set of equivalence classes of unit speed geodesics in $\H^3$.
Here, two unit speed geodesics $\gamma_1(t),\,\gamma_2(t)$
in $\H^3$ are said to be {\it equivalent\/}
if there exists $t_0 \in \boldsymbol{R}$ such that $\gamma_1(t+t_0)=\gamma_2(t)$.
We denote by $[\gamma]$ the equivalence class represented by $\gamma(t)$.
The set $L\H^3$ has a structure of a smooth $4$-manifold.
Moreover, if we denote by $\operatorname{SO}^+(1,1)$ the restricted Lorentz group, the projection
\begin{equation}\label{eq:pihat}
\hat{\pi}: U\H^3 \ni (p,v) \longmapsto [\gamma_{p,v}] \in L\H^3
\end{equation}
defines an $\operatorname{SO}^+(1,1)$-bundle,
where $\gamma_{p,v}$ is the geodesic starting at $p \in \H^3$
with the initial velocity $v\in T_p\H^3$.
\subsubsection{The natural complex structure and a holomorphic coordinate system}
\hspace{2mm}
Hitchin \cite{Hitchin} constructed the natural complex structure $J$ on
$L\H^3$ ({\it minitwistor construction}).
Here, we introduce a local holomorphic coordinate system
$(\mu_1,\mu_2)$ of the complex surface $(L\H^3,J)$ \cite{GG}.
We denote by $\partial \H^3$ the ideal boundary of $\H^3$,
that is, the set of asymptotic classes of oriented geodesics.
For a geodesic $\gamma=\gamma(t)$, set $\gamma_+$, $\gamma_- \in \partial \H^3$ as
\begin{equation}\label{eq:endpt}
\gamma_+:=\lim_{t\rightarrow\infty}\gamma(t),\qquad
\gamma_-:=\lim_{t\rightarrow-\infty}\gamma(t).
\end{equation}
Evidently, $\gamma_+$ and $\gamma_-$
are independent of choice of a representative of $[\gamma]$,
and $(\gamma_+,\gamma_-) \in (\partial \H^3\times\partial \H^3) \setminus \Delta$ holds,
where $\Delta$ is the diagonal set of $\partial \H^3\times\partial \H^3$.
Conversely, for any distinct points $a,~b \in \partial \H^3$,
there exists a unique equivalence class $[\gamma] \in L\H^3$
such that $\gamma_+=a$, $\gamma_-=b$.
Thus, we can identify
$L\H^3 = (\partial \H^3\times\partial \H^3) \setminus \Delta$ as a set.
Now we recall the {\it upper-half space model} of $\H^3$:
\begin{equation}\label{eq:upper}
\boldsymbol{R}^3_{+}=\left( \left\{\left. (w,r)\in \boldsymbol{C} \times \boldsymbol{R} \,\right|\, r>0 \right\},\,
\frac{ dw d\bar{w}+dr^2}{r^2} \right).
\end{equation}
A map
\begin{equation}\label{eq:isometry}
\Psi : \H^3 \ni \left(
\begin{array}{cc}
x_0+x_3 & x_1+ix_2\\
x_1-ix_2 & x_0-x_3
\end{array}\right)
\longmapsto \left( \frac{x_1+ix_2}{x_0-x_3}, \frac{1}{x_0-x_3}\right) \in\boldsymbol{R}^3_+
\end{equation}
gives an isometry.
The geodesics of $\boldsymbol{R}^3_+$ are divided into two types:
straight lines parallel to the $r$-axis and semicircles perpendicular to the $w$-plane.
Identifying $\partial \H^3$ with the Riemann sphere $\hat{\boldsymbol{C}}:=\boldsymbol{C}\cup\{\infty\}$,
we may consider $\gamma_{+}$ and $\gamma_{-}$ as points in $\hat{\boldsymbol{C}}$.
Then we set an open subset $\mathcal{U}$ of $L\H^3$ as
\begin{equation}\label{eq:nbd}
\mathcal{U}:=\left\{\left. [\gamma] \in L\H^3 \,\right|\,
\gamma_+ \neq 0,\, \gamma_- \neq \infty \right\},
\end{equation}
and complex numbers $\mu_1$, $\mu_2$ as
\begin{equation}\label{eq:holcoord}
\mu_1:=-\gamma_-,\qquad
\mu_2:=\frac{1}{\bar{\gamma}_+}
\end{equation}
for $[\gamma] \in \mathcal{U}$ (see Figure \ref{fig:upper}).
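For example, the geodesic $\gamma(t)=\sigma_0\cosh t+\sigma_3\sinh t$ satisfies
$\Psi(\gamma(t))=(0,e^{t})$, so that $\gamma_+=\infty$ and $\gamma_-=0$
in $\hat{\boldsymbol{C}}$, and hence $[\gamma] \in \mathcal{U}$ has the
coordinates $(\mu_1,\mu_2)=(0,0)$.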
Georgiou and Guilfoyle \cite{GG} proved that
$(\mathcal{U}; (\mu_1,\mu_2))$ defines a local holomorphic coordinate system of $L\H^3$
compatible to the complex structure $J$,
and the map $[\gamma] \longmapsto (\mu_1,\mu_2)$ extends to a biholomorphic map
\[
(L\H^3,J) \stackrel{\sim}{\longrightarrow} (\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta},
\]
where $\hat{\Delta}=\{ (\mu_1, \mu_2) \in \boldsymbol{C}^2 \,|\,
1+\mu_1\bar{\mu}_2=0\} \cup \{ (0,\infty),\, (\infty,0) \}$ is
the so-called reflected diagonal.
\begin{figure}[htb]
\begin{tabular}{cc}
\resizebox{11cm}{!}{\includegraphics{holcords.eps}}
\end{tabular}
\caption{The holomorphic coordinate system $(\mu_1,\mu_2).$}
\label{fig:upper}
\end{figure}
\begin{remark}[As a complex line bundle]
Over the complex projective line $\P^1$, the map
\[
\Pi : L\H^3 \ni [\gamma] \longmapsto \gamma_- \in \P^1
\]
gives a complex line bundle.
The fiber over $\gamma_-$ is $\P^1\setminus\{\gamma_-\}$, which is identified with $\boldsymbol{C}$.
It is easy to see that $\Pi$ is a trivial bundle $\mathcal{O}_{\P^1}(0)$.
On the other hand, the space $L\R^3$ of oriented geodesics in the Euclidean $3$-space
is biholomorphic to the holomorphic tangent bundle $T\P^1$ of $\P^1$ \cite{GK}.
That is $L\R^3 \cong \mathcal{O}_{\P^1}(2)$.
This implies that $L\H^3$ is not isomorphic to $L\R^3$ as a line bundle over $\P^1$.
\end{remark}
\subsubsection{The invariant metrics, K\"ahler and para-K\"ahler structures}
\hspace{2mm}
The isometric action of $\operatorname{SL}(2,\boldsymbol{C})$ on $\H^3$ as in \eqref{eq:isom} induces
an action on $\partial\H^3=\hat{\boldsymbol{C}}$ as
\[
\hat{\boldsymbol{C}} \ni z \longmapsto \frac{a_{11} z+a_{12} }{ a_{21} z+a_{22} } \in \hat{\boldsymbol{C}},
\]
where $a= (a_{ij}) \in \operatorname{SL}(2,\boldsymbol{C})$.
This action induces a holomorphic and transitive action of
${\rm Isom}_0(\H^3)=\operatorname{PSL}(2,\boldsymbol{C})$ on
$L\H^3=(\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta}$ as
\begin{equation}\label{eq:hol-act}
(\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta} \ni (\mu_1,\mu_2) \longmapsto
\left( \frac{- a_{11} \mu_1+a_{12} }{ \hphantom{-}a_{21} \mu_1-a_{22} },
\frac{\bar{a}_{22} \mu_2+\bar{a}_{21} }{\bar{a}_{12} \mu_2+\bar{a}_{11} } \right)
\in (\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta},
\end{equation}
for $a= (a_{ij}) \in \operatorname{PSL}(2,\boldsymbol{C})$.
If we set a $\boldsymbol{C}$-valued symmetric $2$-tensor on $L\H^3$ as
\begin{equation}\label{eq:Killing-coord}
\mathcal{G}:= \frac{4\,d\mu_1d\bar{\mu}_2}{ (1+ \mu_1\bar{\mu}_2)^2 },
\end{equation}
then it holds that
\begin{equation}\label{eq:invariant-metric}
\mathcal{G}_{\theta}
:= \Re \left( e^{-i\theta} \mathcal{G} \right)
= (\cos \theta)\, \mathcal{G}^{\mathfrak{r}} + (\sin \theta)\, \mathcal{G}^{\mathfrak{i}}
\end{equation}
defines a pseudo-Riemannian metric on $L\H^3$
of neutral signature for each $\theta \in \boldsymbol{R}/ 2 \pi \boldsymbol{Z}$,
which is invariant under the action given in \eqref{eq:hol-act},
where $\mathcal{G}^{\mathfrak{r}}$ and $\mathcal{G}^{\mathfrak{i}}$ are
the neutral metrics given by the real and imaginary part of $\mathcal{G}$, respectively,
\begin{equation}\label{eq:metrics}
\mathcal{G}^{\mathfrak{r}}
:=\dfrac{1}{2} \left\{
\dfrac{ 4\,d\mu_1 d\bar{\mu}_2}{(1+\mu_1 \bar{\mu}_2)^2}+
\dfrac{ 4\,d\mu_2 d\bar{\mu}_1}{(1+\mu_2 \bar{\mu}_1)^2} \right\},\quad
\mathcal{G}^{\mathfrak{i}}
:=\dfrac{1}{2i} \left\{
\dfrac{ 4\,d\mu_1 d\bar{\mu}_2}{ (1+ \mu_1\bar{\mu}_2)^2 }-
\dfrac{ 4\,d\mu_2 d\bar{\mu}_1}{ (1+ \mu_2\bar{\mu}_1)^2} \right\}.
\end{equation}
Conversely, Salvai \cite{Salvai} proved that
any pseudo-Riemannian metric on $L\H^3$
invariant under the action as in \eqref{eq:hol-act}
is a constant multiple of $\mathcal{G}_{\theta}$ for some $\theta \in \boldsymbol{R}/ 2 \pi \boldsymbol{Z}$.
Thus we call $\mathcal{G}_{\theta}$ ($\theta \in \boldsymbol{R}/ 2 \pi \boldsymbol{Z}$) {\it invariant metrics\/}.
Any invariant metric $\mathcal{G}_{\theta}$ is K\"ahler
with respect to the natural complex structure
\begin{equation}\label{eq:cpx}
J \left( \frac{\partial}{\partial \mu_1} \right) = i\frac{\partial}{\partial \mu_1}, \qquad
J \left( \frac{\partial}{\partial \mu_2} \right) = i\frac{\partial}{\partial \mu_2}.
\end{equation}
On the other hand, the involutive $(1,1)$-tensor $P$ on $L\H^3$ given by
\begin{equation}\label{eq:P-cpx}
P \left( \frac{\partial}{\partial \mu_1} \right) = -\frac{\partial}{\partial \mu_1}, \qquad
P \left( \frac{\partial}{\partial \mu_2} \right) = \frac{\partial}{\partial \mu_2}
\end{equation}
is a {\it para-K\"ahler structure} on $L\H^3$ for any $\mathcal{G}_{\theta}$.
That is, for $[\gamma]$ in $L\H^3$, we have
\[
\dim_{\boldsymbol{R}}\{ X \in T_{[\gamma]}L\H^3 \,|\, P(X)=\pm X \}=2,\quad
\mathcal{G}_{\theta}(P \cdot ,P \cdot )=-\mathcal{G}_{\theta}(\cdot, \cdot),\quad
\nabla^{L} P=0,
\]
where $\nabla^{L}$ is the common
Levi-Civita connection of $(L\H^3, \mathcal{G}_{\theta})$ for all $\theta$.
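Indeed, the compatibilities of $\mathcal{G}_{\theta}$ with $J$ and $P$ can be
read off directly from \eqref{eq:Killing-coord}: since
$d\mu_1\circ J=i\,d\mu_1$ and $d\bar{\mu}_2\circ J=-i\,d\bar{\mu}_2$, we have
$\mathcal{G}(J\cdot,J\cdot)=i(-i)\,\mathcal{G}=\mathcal{G}$, while
$d\mu_1\circ P=-d\mu_1$ and $d\bar{\mu}_2\circ P=d\bar{\mu}_2$ give
$\mathcal{G}(P\cdot,P\cdot)=-\mathcal{G}$;
taking the real part of $e^{-i\theta}\mathcal{G}$, every $\mathcal{G}_{\theta}$
is Hermitian with respect to $J$ and anti-invariant under $P$.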
\section{The Invariant Metrics and the Canonical Symplectic Form}
\label{sec:geomstr}
In this section, we shall characterize two neutral metrics
$\mathcal{G}^{\mathfrak{r}}$ and $\mathcal{G}^{\mathfrak{i}}$ given
in \eqref{eq:metrics}:
both the para-K\"ahler form of $(L\H^3, \mathcal{G}^{\mathfrak{r}},P)$ and
the K\"ahler form of $(L\H^3, \mathcal{G}^{\mathfrak{i}},J)$ coincide with
twice the canonical symplectic form on $L\H^3$ up to sign
(Proposition \ref{prop:symplectic}).
Moreover, identifying $L\H^3=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{GL}(1,\boldsymbol{C})$,
we prove that $\mathcal{G}$ in \eqref{eq:Killing-coord} coincides with
the $\boldsymbol{C}$-valued symmetric $2$-tensor induced from the Killing form
of the Lie algebra $\mathfrak{sl}(2,\boldsymbol{C})$
of $\operatorname{SL}(2,\boldsymbol{C})$ up to a real constant multiple (Proposition \ref{prop:BG}).
\subsubsection*{The canonical symplectic form}
\hspace{2mm}
Let $\omega$ be the {\it canonical symplectic form} on $L\H^3$,
that is, $\omega$ is the symplectic form on $L\H^3$ satisfying
\begin{equation}\label{eq:cano-form}
\hat{\pi}^* \omega = d \Theta,
\end{equation}
where $\Theta$ is the canonical contact form given in \eqref{eq:cano-cont}
on the unit tangent bundle $U\H^3$,
and $\hat{\pi} : U\H^3 \rightarrow L\H^3$ is the projection as in \eqref{eq:pihat}.
We denote by $\omega_{J}$ the K\"ahler form of $(L\H^3, \mathcal{G}^{\mathfrak{i}}, J)$,
and by $\omega_{P}$ the para-K\"ahler form of $(L\H^3, \mathcal{G}^{\mathfrak{r}}, P)$,
that is,
\begin{equation}\label{eq:K-pK}
\omega_{J}=\mathcal{G}^{\mathfrak{i}}(\cdot, J\cdot), \qquad
\omega_{P}=\mathcal{G}^{\mathfrak{r}}(\cdot, P\cdot).
\end{equation}
Then we have the following
\begin{proposition}\label{prop:symplectic}
\[
\omega_{J}=-\omega_{P}=2\omega.
\]
\end{proposition}
To prove this, we introduce metrics on $U\H^3$ and $L\H^3$ induced
from the Killing form of $\mathfrak{sl}(2,\boldsymbol{C})$
considering $U\H^3$ and $L\H^3$ as homogeneous spaces of $\operatorname{SL}(2,\boldsymbol{C})$.
\subsubsection*{The Killing form of $\mathfrak{sl}(2,\boldsymbol{C})$}
\hspace{2mm}
Let $B$ be the half of the Killing form of the Lie algebra
$\mathfrak{sl}(2,\boldsymbol{C})$ of $\operatorname{SL}(2,\boldsymbol{C})$, i.e.,
\begin{equation}\label{eq:Killing}
B(X,Y)=2\operatorname{trace} (XY), \qquad X,Y\in \mathfrak{sl}(2,\boldsymbol{C}).
\end{equation}
Then we set $B^{\mathfrak{r}}$ and $B^{\mathfrak{i}}$
to be the real and imaginary part of $B$, respectively:
\begin{equation}\label{eq:reim-Killing}
B^{\mathfrak{r}}:=\Re B,\qquad B^{\mathfrak{i}}:=\Im B.
\end{equation}
\begin{remark}
The special linear group $\operatorname{SL}(2,\boldsymbol{C})$ is the double cover
of the restricted Lorentz group $\operatorname{SO}^+(1,3)$.
The Killing form of the real Lie algebra of $\mathfrak{so}(1,3)$ of $\operatorname{SO}^+(1,3)$
coincides with a constant multiple of $B^{\mathfrak{r}}$.
\end{remark}
\subsubsection*{The unit tangent bundle}
\hspace{2mm}
The tangent space of the unit tangent bundle $U\H^3=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{U}(1)$
as in \eqref{eq:unitan}
at $(\sigma_0,\sigma_3) \in U\H^3$ is identified with the orthogonal complement of
the Lie algebra $\mathfrak{u}(1)$ of $\operatorname{U}(1)$ with respect to $B^{\mathfrak{r}}$, that is,
\begin{eqnarray*}
T_{(\sigma_0,\sigma_3)} U\H^3
=\mathfrak{u}(1)^{\perp}
=\left\{\left. i \varepsilon \sigma_3+h_{\xi}+ v_{\eta} \,\right|\,
\varepsilon\in \boldsymbol{R},\,\xi,\eta \in \boldsymbol{C} \right\},
\end{eqnarray*}
where $\sigma_0$, $\sigma_3$ are as in \eqref{eq:basis},
and $h_{\xi}$, $v_{\eta}$ are defined by
\begin{equation}\label{eq:horver}
h_{\xi}=
\left(\begin{array}{cccc}
0 & \xi \\
\bar{\xi} & 0
\end{array}\right),\qquad
v_{\eta}=
\left(\begin{array}{cccc}
0 & -\eta \\
\bar{\eta} & 0
\end{array}\right).
\end{equation}
These notations are used since $h_{\xi}$, $v_{\eta}$ are
horizontal and vertical tangent vectors of
the sphere bundle $\pi : U\H^3\rightarrow \H^3$ given in \eqref{eq:pi}, respectively.
The restriction of $B^{\mathfrak{r}}$ in \eqref{eq:reim-Killing}
to $T_{(\sigma_0,\sigma_3)} U\H^3$ can be written by
\begin{equation}\label{eq:metric-on-UH3}
B^{\mathfrak{r}}(X,X)=4(\varepsilon^2 + |\xi|^2-|\eta|^2),
\end{equation}
for $X= i \varepsilon \sigma_3 + h_{\xi} + v_{\eta} \in T_{(\sigma_0,\sigma_3)} U\H^3$.
Thus $B^{\mathfrak{r}}$ defines a pseudo-Riemannian metric
$B_{U}$ on $U\H^3$ of signature $(+,+,+,-,-)$.
Moreover, the projection
\begin{equation}\label{eq:adjust}
\pi : (U\H^3, {B}_{U}) \longrightarrow (\H^3,\inner{~}{~})
\end{equation}
defined as in \eqref{eq:pi} is a pseudo-Riemannian submersion.
\subsubsection*{The space of oriented geodesics}
\hspace{2mm}
Consider the smooth and transitive action of $\operatorname{SL}(2,\boldsymbol{C})$ given as
\[
L\H^3 \ni [\gamma] \longmapsto [a\gamma a^*] \in L\H^3,
\]
for $a \in \operatorname{SL}(2,\boldsymbol{C})$, where $[a\gamma a^*]$ is
the equivalence class of the geodesic $a\gamma(t)a^{\ast}$
for some representative $\gamma$ of $[\gamma]$.
Note that this action coincides with the action given in \eqref{eq:hol-act}.
If we denote by $\gamma_{\sigma_0,\sigma_3}$ the geodesic in $\H^3$ starting
at $\sigma_0$ with initial velocity $\sigma_3$,
then the isotropy subgroup of $\operatorname{SL}(2,\boldsymbol{C})$ at
$[\gamma_0]:=[\gamma_{\sigma_0,\sigma_3}] \in L\H^3$
is given by
\[
\left\{\left. \left(
\begin{array}{cccc}
\l & 0 \\
0 & \l^{-1}
\end{array}
\right)
\,\right|\, \l \in \boldsymbol{C} \setminus \{0\} \right\},
\]
which is identified with the general linear group $\operatorname{GL}(1,\boldsymbol{C})$.
Hence we have
\begin{equation}\label{eq:LH}
L\H^3
=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{GL}(1,\boldsymbol{C})
=\left\{\left. [a \gamma_0 a^{\ast}] \,\right|\, a \in {\rm SL}(2,\boldsymbol{C}) \right\}.
\end{equation}
Then the tangent space of $L\H^3$ at $[\gamma_0]$ is identified with
the orthogonal complement of the Lie algebra $\mathfrak{gl}(1,\boldsymbol{C})$
of $\operatorname{GL}(1,\boldsymbol{C})$ with respect to $B^{\mathfrak{r}}$, that is,
\[
T_{[\gamma_0]}L\H^3
= \mathfrak{gl}(1,\boldsymbol{C})^{\perp}
= \left\{\left. h_{\xi}+v_{\eta} \,\right|\, \xi,\eta \in \boldsymbol{C} \right\},
\]
where $h_{\xi}$ and $v_{\eta}$ are horizontal and vertical vectors
of $T_{(\sigma_0,\sigma_3)}U\H^3$ defined in \eqref{eq:horver}.
The restrictions to $T_{[\gamma_0]} L\H^3$ of $B^{\mathfrak{r}}$ and $B^{\mathfrak{i}}$
defined in \eqref{eq:reim-Killing} can be written by
\[
B^{\mathfrak{r}}\left( X,X\right)= 4(|\xi|^2-|\eta|^2),\qquad
B^{\mathfrak{i}}\left( X,X\right) = 8 \Im(\xi \bar{\eta}),
\]
for $X = h_{\xi}+v_{\eta} \in T_{[\gamma_0]} L\H^3$, respectively.
Thus $B^{\mathfrak{r}}$ and $B^{\mathfrak{i}}$ define pseudo-Riemannian metrics
$B^{\mathfrak{r}}_{L}$ and $B^{\mathfrak{i}}_{L}$ on $L\H^3$
of neutral signature, respectively.
Of course, the projection
\begin{equation}\label{eq:hadjust}
\hat{\pi} : (U\H^3, B_{U}) \longrightarrow (L\H^3, B^{\mathfrak{r}}_{L})
\end{equation}
defined in \eqref{eq:pihat} is a pseudo-Riemannian submersion.
Let $B_{L} := B^{\mathfrak{r}}_L + i B^{\mathfrak{i}}_L$
be the $\boldsymbol{C}$-valued $2$-tensor on $L\H^3 =\operatorname{SL}(2,\boldsymbol{C})/\operatorname{GL}(1,\boldsymbol{C})$
induced from $B$ in \eqref{eq:Killing}. Then we have the following
\begin{proposition}\label{prop:BG}
For the $\boldsymbol{C}$-valued symmetric $2$-tensor $\mathcal{G}$ on $L\H^3$
defined in \eqref{eq:Killing-coord}, it follows that
\[
\mathcal{G}=-B_{L}.
\]
\end{proposition}
\begin{proof}
It is enough to check the equality at
$[\gamma_0]=[\gamma_{\sigma_0,\sigma_3}] \in L\H^3$ only.
For a sufficiently small neighborhood $\mathcal{R}$ of the origin $o \in \boldsymbol{R}^4$,
consider a map $\psi : \mathcal{R} \rightarrow \operatorname{SL}(2,\boldsymbol{C})$ given by
\begin{equation}\label{eq:parametrization}
\psi(u_1,u_2,v_1,v_2)=
\left(\begin{array}{cc}
1 & u_1-iv_2 + i u_2 - v_1\\
u_1-i v_2 -iu_2+v_1 & 1+(u_1 - i v_2)^2 + (u_2+i v_1)^2
\end{array}\right).
\end{equation}
This map $\psi$ may be considered as a parametrization of $L\H^3=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{GL}(1,\boldsymbol{C})$
around $\psi(o)=[\gamma_0]$. For $\xi,\, \eta\in \boldsymbol{C}$, set
\begin{equation}\label{eq:X}
\Vect{\vect{x}}_{\xi,\eta} :=
(\Re\xi) \left.\frac{\partial}{\partial u_1}\right|_{o} +
(\Im \xi) \left.\frac{\partial}{\partial u_2}\right|_{o} +
(\Re\eta) \left.\frac{\partial}{\partial v_1}\right|_{o} +
(\Im \eta) \left.\frac{\partial}{\partial v_2}\right|_{o} \in T_{o}\mathcal{R},
\end{equation}
and $X:=\psi_{*}(\Vect{\vect{x}}_{\xi,\eta}) \in T_{[\gamma_0]} L\H^3$.
Then we have $X=h_{\xi}+v_{\eta}$, and
\begin{equation}\label{eq:norm}
B^{\mathfrak{r}}_{L}\left( X, X\right)
={B^{\mathfrak{r}}} \left( X, X\right)
= 4(|\xi|^2-|\eta|^2),\qquad
B^{\mathfrak{i}}_{L}\left( X, X\right)
={B^{\mathfrak{i}}} \left( X, X\right)
= 8 \Im(\xi \bar{\eta})
\end{equation}
at $[\gamma_0] \in L\H^3$, where $h_{\xi}$, $v_{\eta}$ are given in \eqref{eq:horver}.
On the other hand, set
$\hat{\psi}:=\pi_1\circ \psi : \mathcal{R} \rightarrow L\H^3$,
where $\pi_1 : \operatorname{SL}(2,\boldsymbol{C}) \ni a \mapsto [a \gamma_0 a^*] \in L\H^3$.
The coordinates $(\mu_1,\mu_2)$ (see \eqref{eq:holcoord}) of
$\hat{\psi}(u_1,u_2,v_1,v_2)$ can be calculated as
\[
\mu_1(u_1,u_2,v_1,v_2)
= -\frac{(u_1+iu_2)-(v_1+iv_2)}{1+(u_1 - i v_2)^2 + (u_2+i v_1)^2},\quad
\mu_2(u_1,u_2,v_1,v_2)
= (u_1+iu_2)+(v_1+iv_2).
\]
Then $\hat{X}:=\hat{\psi}_{*}(\Vect{\vect{x}}_{\xi,\eta}) \in T_{[\gamma_0]} L\H^3$ is given by
\begin{eqnarray*}
\hat{X}
=(-\xi+\eta)\frac{\partial}{\partial \mu_1}+(\xi+\eta)\frac{\partial}{\partial \mu_2}+
(-\bar{\xi}+\bar{\eta})\frac{\partial}{\partial \bar{\mu}_1}+(\bar{\xi}
+\bar{\eta})\frac{\partial}{\partial \bar{\mu}_2}.
\end{eqnarray*}
By \eqref{eq:norm}, we have
\[
\mathcal{G}^{\mathfrak{r}}( \hat{X},\hat{X} )
= -4(|\xi|^2-|\eta|^2)
=- B^{\mathfrak{r}}_{L}\left( X,X\right),\qquad
\mathcal{G}^{\mathfrak{i}}(\hat{X},\hat{X})
= -8\Im(\xi\bar{\eta})
=- B^{\mathfrak{i}}_{L}\left( X,X\right)
\]
at $[\gamma_0] \in L\H^3$,
where $\mathcal{G}^{\mathfrak{r}}$ and $\mathcal{G}^{\mathfrak{i}}$
are as in \eqref{eq:metrics}.
\end{proof}
\begin{proof}[{\bf Proof of Proposition $\ref{prop:symplectic}$}]
\hspace{2mm}
By a similar calculation as in the proof of Proposition \ref{prop:BG},
the complex structure $J$ in \eqref{eq:cpx} and
the para-complex structure $P$ in \eqref{eq:P-cpx} satisfy
\[
J(h_{\xi}+v_{\eta})=h_{i\xi}+v_{i\eta},\qquad P(h_{\xi}+v_{\eta})=h_{\eta}+v_{\xi},
\]
for a tangent vector $h_{\xi}+v_{\eta}\in T_{[\gamma_0]}L\H^3$.
Thus by Proposition \ref{prop:BG}, the K\"ahler form $\omega_J$
and the para-K\"ahler form $\omega_P$
defined in \eqref{eq:K-pK} can be calculated as
\begin{equation}\label{eq:K-pK-forms}
\omega_{P}(X,Y) = -\omega_{J}(X,Y)
=-2\Re(\xi \bar{\delta} - \eta\bar{\beta}),
\end{equation}
where $X=h_{\xi}+v_{\eta}$, $Y=h_{\beta}+v_{\delta} \in T_{[\gamma_0]} L\H^3$.
To calculate the canonical symplectic form $\omega$ in \eqref{eq:cano-form}, set
$\tilde{\psi}:=\pi_2\circ \psi : \mathcal{R} \rightarrow U\H^3$,
where $\psi$ is the map in \eqref{eq:parametrization} and
$\pi_2:\operatorname{SL}(2,\boldsymbol{C}) \ni a \mapsto (a a^*, a\sigma_3 a^*) \in U\H^3$.
Then the horizontal lifts of $X=h_{\xi}+v_{\eta}$,
$Y=h_{\beta}+v_{\delta} \in T_{[\gamma_0]} L\H^3$ are given by
$\tilde{X} :=\tilde{\psi}_{*}(\Vect{\vect{x}}_{\xi,\eta})=( h_{\xi}, h_{\eta} )$,
$\tilde{Y}:=\tilde{\psi}_{*}(\Vect{\vect{x}}_{\beta,\delta})=( h_{\beta}, h_{\delta})
\in T_{(\sigma_0,\sigma_3)}U\H^3$,
where $h_{\xi}$, $h_{\beta}$, $\cdots$ are as in \eqref{eq:horver}
and $\Vect{\vect{x}}_{\xi,\eta}$, $\Vect{\vect{x}}_{\beta,\delta}$ are given in \eqref{eq:X}.
By \eqref{eq:K-pK-forms}, we have
\begin{eqnarray*}\label{eq:calc-cano}
2\omega_{[\gamma_0]}(\tilde{X},\tilde{Y})
&=& 2d\Theta_{(\sigma_0,\sigma_3)}(\tilde{X},\tilde{Y})
= \inner{h_{\xi}}{h_{\delta} }-\inner{h_{\beta}}{h_{\eta}}\\
&=& 2\Re(\xi \bar{\delta} - \eta\bar{\beta})
= -\omega_{P}(X,Y) = \omega_{J}(X,Y)
\end{eqnarray*}
at $[\gamma_0] \in L\H^3$, where $\Theta$ denotes
the canonical contact form in \eqref{eq:cano-cont}.
\end{proof}
\begin{remark}
\label{rem:matome}
The metric $\mathcal{G}^{\mathfrak{i}}=\Im \mathcal{G}$ in \eqref{eq:metrics}
is twice the K\"ahler metric defined in \cite[Definition 12]{GG}.
In fact, we defined $\mathcal{G}$ as in \eqref{eq:Killing-coord}
so that the double fibration
\vspace{3mm}
\unitlength 0.1in
\begin{picture}( 38.3000, 4.2000)(-0.0000,-31.1000)
\put(27.0000,-32.7000){\makebox(0,0)[lb]{$(L\H^3=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{GL}(1,\boldsymbol{C}),\,
B^{\mathfrak{r}}_L=-\mathcal{G}^{\mathfrak{r}})$}}%
\put(15.3000,-28.5000){\makebox(0,0)[lb]{$(U\H^3=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{U}(1),\, B_U)$}}%
\put(1.3000,-32.7000){\makebox(0,0)[lb]{$(\H^3=\operatorname{SL}(2,\boldsymbol{C})/\operatorname{SU}(2),\,
\langle~,~\rangle)$}}%
\put(11.9000,-29.0000){\makebox(0,0)[lb]{$\pi$}}%
\put(36.0000,-29.0000){\makebox(0,0)[lb]{$\hat{\pi}$}}%
{\color[named]{Black}{%
\special{pn 8}%
\special{pa 1470 2860}%
\special{pa 1090 3070}%
\special{fp}%
\special{sh 1}%
\special{pa 1090 3070}%
\special{pa 1158 3056}%
\special{pa 1138 3044}%
\special{pa 1140 3020}%
\special{pa 1090 3070}%
\special{fp}%
}}%
{\color[named]{Black}{%
\special{pn 8}%
\special{pa 3380 2850}%
\special{pa 3760 3060}%
\special{fp}%
\special{sh 1}%
\special{pa 3760 3060}%
\special{pa 3712 3010}%
\special{pa 3714 3034}%
\special{pa 3692 3046}%
\special{pa 3760 3060}%
\special{fp}%
}}%
\end{picture}
\vspace{6mm}
\noindent
is compatible, that is, both $\pi$ in \eqref{eq:adjust}
and $\hat{\pi}$ in \eqref{eq:hadjust} are pseudo-Riemannian submersions.
\end{remark}
\begin{remark}[A relationship to the Fubini-Study metric]
\label{rem:signature}
Consider a holomorphic curve
$F : \P^1=\hat{\boldsymbol{C}} \rightarrow L\H^3$ given by
$F|_{\boldsymbol{C}} : \boldsymbol{C} \ni \mu \longmapsto (\mu,\mu) \in L\H^3$.
The image of $F$ in $L\H^3$ can be considered as
\[
L_{o}\H^3=\left\{\left. [\gamma] \in L\H^3 \,\right|\,
\gamma ~\text{passes through the origin}~ o=(0,0,0) \in {\bmath{B}}^3\right\},
\]
where ${\bmath{B}}^3$ denotes the {\it Poincar\'e ball model\/} of $\H^3$:
\[
{\bmath{B}}^3=\left(\left\{\left. (x,y,z)\in \boldsymbol{R}^3 \,\right|\, x^2+y^2+z^2<1 \right\},\,
4\frac{dx^2+dy^2+dz^2}{(1-x^2-y^2-z^2)^2} \right).
\]
\begin{figure}[htb]
\begin{tabular}{cc}
\resizebox{5cm}{!}{\includegraphics{line.eps}}
\end{tabular}
\caption{An oriented geodesic through the origin.}
\label{fig:Line}
\end{figure}
\noindent
We call $F$ or $L_{o}\H^3$ the {\it standard embedding\/} of $\P^1$.
Moreover, if we equip on $\P^1$ the Fubini-Study metric
$g_{FS}$ of constant curvature $1$,
then the standard embedding
\[
F : (\P^1,g_{FS}) \longrightarrow (L\H^3,\mathcal{G}^{\mathfrak{r}})
\]
is an isometric embedding.
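Indeed, along $F|_{\boldsymbol{C}}$ we have $\mu_1=\mu_2=\mu$, so the pullback of
$\mathcal{G}$ in \eqref{eq:Killing-coord} is
$F^{\ast}\mathcal{G}=4\,d\mu d\bar{\mu}/(1+|\mu|^2)^2$, which is real and is
precisely the Fubini-Study metric of constant curvature $1$ in the
stereographic coordinate $\mu$; in particular,
$F^{\ast}\mathcal{G}^{\mathfrak{r}}=g_{FS}$ and
$F^{\ast}\mathcal{G}^{\mathfrak{i}}=0$.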
In fact, we defined $\mathcal{G}$ as the opposite sign of $B_{L}$
(Proposition \ref{prop:BG}) because of this fact.
\end{remark}
\section{A Representation Formula for Developable Surfaces}
\label{sec:null}
In this section, we shall prove Theorem~\ref{thm:null} in the introduction.
First, we review fundamental facts on isometric immersions of $\H^2$
into $\H^3$ as surfaces in $\H^3$,
and prove that isometric immersions of $\H^2$ into $\H^3$ are
developable (Proposition \ref{prop:Portnoy}).
Then we shall prove Theorem~\ref{thm:null} (Proposition~\ref{prop:representation}).
\subsection{Isometric immersions and developable surfaces}
\hspace{2mm}
In this paper, a {\it surface} in $\H^3$ is considered as an immersion $f$
of a differentiable $2$-manifold $\Sigma$ into $\H^3$ (cf.\ \eqref{eq:Herm-model}):
\[
f : \Sigma \longrightarrow \H^3 \subset \L^4 = \operatorname{Herm}(2).
\]
We denote by $g=f^{\ast}\inner{~}{~}$ {\it the first fundamental form} of $f$.
For the unit normal vector field $\vect{\nu}$ of $f$, we denote by $A$ and $I\!I$
the {\it shape operator} and the {\it second fundamental form} of $f$, respectively, that is,
$A = -(f_{\ast})^{-1}\circ \vect{\nu}_{\ast}$,
$I\!I(V,W)= - \inner{ \vect{\nu}_{\ast}(V) }{ f_{\ast}(W) }$,
where $V$ and $W$ are vector fields on $\Sigma$.
Let $k_1$, $k_2$ be the {\it principal curvatures} of $f$, then
the {\it extrinsic curvature} $K_{\rm ext}$ and the {\it mean curvature} $H$
can be written as
\[
K_{\rm ext}=k_1k_2,\qquad H=\frac{k_1+k_2}{2},
\]
respectively.
If we denote by $K$ and $\nabla$ the Gaussian curvature and
the Levi-Civita connection of the Riemannian 2-manifold $(\Sigma, g)$, respectively,
then we have
\begin{equation}\label{eq:Gauss}
K=-1+K_{\rm ext},
\end{equation}
\begin{equation}\label{eq:Codazzi}
\nabla_{V}A(W)=\nabla_{W}A(V),
\end{equation}
for vector fields $V,\, W$ on $\Sigma$.
We call \eqref{eq:Gauss} the {\it Gauss equation},
and \eqref{eq:Codazzi} the {\it Codazzi equation}.
A surface in $\H^3$ is said to be {\it extrinsically flat}
if its extrinsic curvature is identically zero.
By the Gauss equation, we have that an isometric immersion of $\H^2$ into $\H^3$
is a complete extrinsically flat surface.
On the other hand, any unit speed geodesic in $\H^3$ can be expressed as
\[
\gamma_{p,v}(t)=p\cosh t+v\sinh t ,\qquad (p,v) \in U\H^3.
\]
\begin{definition}[Ruled surfaces and developable surfaces]
A {\it ruled surface} in $\H^3$ is a locus of $1$-parameter family of geodesics in $\H^3$.
For a ruled surface $f: \Sigma \rightarrow \H^3$,
there exists a local coordinate system $\varphi=(s,t)$ of $\Sigma$ such that
\[
(f\circ\varphi^{-1})(s,t) = c(s) \cosh t + v(s) \sinh t ,
\]
where $c$ is a curve in $\H^3$ and
$v$ is a unit normal vector field along $c$.
A ruled surface is said to be {\it developable} if it is extrinsically flat.
\end{definition}
Then we have the following
\begin{proposition}[{\cite[Theorem 4]{Portnoy}}]\label{prop:Portnoy}
A complete extrinsically flat surface in $\H^3$ is developable.
\end{proposition}
To show this, we first prove an analogue of
{\it Massey's lemma} \cite[Lemma 2]{Massey} (cf.\ Remark \ref{rem:Massey}).
For a surface $f: \Sigma \rightarrow \H^3$,
a curve in $\Sigma$ is said to be {\it asymptotic}
if, at each of its points, the tangent line of the curve coincides with
the kernel of the second fundamental form of $f$.
\begin{lemma}[Hyperbolic Massey's lemma]
\label{lem:Massey}
For an extrinsically flat surface $f : \Sigma \rightarrow \H^3$,
let $\mathcal{W}$ be the set of umbilic points of $f$
and $\gamma$ an asymptotic curve in the non umbilic point set
$\mathcal{W}^c= \Sigma \setminus \mathcal{W}$.
Then the mean curvature $H$ of $f$ satisfies
\[
\frac{\partial^2}{\partial t^2} \left( \frac{1}{H} \right) = \frac{1}{H},
\]
on $\gamma$, where $t$ denotes the arc length parameter of $\gamma$.
\end{lemma}
\begin{proof}
Take a non umbilic point $p \in \mathcal{W}^c$
and a curvature line coordinate system $(s,v)$ around $p$
whose $v$-curves are asymptotic.
Then the first and second fundamental forms $g$ and $I\!I$ are expressed as
$g = g_{11} ds^2 + g_{22} dv^2 ,~ I\!I =h_{11}ds^2 ~(h_{11} \neq 0)$,
and hence the Codazzi equation~\eqref{eq:Codazzi} is equivalent to
\begin{equation}\label{0}
\frac{\partial h_{11}}{\partial v}=\frac{h_{11}}{2g_{11}} \frac{\partial g_{11}}{\partial v} ,
\end{equation}
\begin{equation}\label{1}
0=\frac{h_{11}}{2g_{11}} \frac{\partial g_{22}}{\partial s}.
\end{equation}
By \eqref{1}, $g_{22}$ depends only on $v$.
Reparametrizing with $dt=\sqrt{g_{22}(v)}\, dv$, we obtain
$g = g_{11} ds^2 + dt^2,~ I\!I = h_{11} ds^2 ~(h_{11} \neq 0)$.
In this coordinate system,
each $t$-curve is an asymptotic curve parametrized by arc length
and the Gaussian curvature $K$ of $f$ is written as
\[
K=-\frac{1}{\sqrt{ g_{11} } }\frac{\partial^2 \sqrt{g_{11}}}{\partial t^2}.
\]
Since $f$ is extrinsically flat, the Gauss equation \eqref{eq:Gauss} yields
\begin{equation}
\frac{\partial^2 \sqrt{g_{11}}}{\partial t^2}=\sqrt{g_{11}}. \label{2}
\end{equation}
On the other hand, by \eqref{0}, we have
\[
\frac{\partial}{\partial t}\log\frac{h_{11}}{\sqrt{g_{11}}}
=\frac{1}{h_{11}}\frac{\partial h_{11}}{\partial t}-\frac{1}{2
g_{11}}\frac{\partial g_{11}}{\partial t}=0,
\]
and hence there exists a function ${a}={a}(s)$ such that
\[
h_{11}(s,t) = a(s) \sqrt{g_{11}(s,t)} \qquad (a(s) \neq 0).
\]
Then the mean curvature $H$ of $f$ can be written as
$H = a(s)/(2\sqrt{g_{11}})$. Together with \eqref{2}, we have
\begin{equation*}
\frac{\partial^2}{\partial t^2} \left( \frac{1}{H} \right)
=\frac{\partial^2}{\partial t^2}\frac{2\sqrt{g_{11}}}{a(s)}
=\frac{2}{a(s)}\frac{\partial^2}{\partial t^2}\sqrt{g_{11}}
=\frac{2}{a(s)}\sqrt{g_{11}}
=\frac{1}{H}.
\end{equation*}
\end{proof}
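Since the general solution of this linear ordinary differential equation is
$1/H=a\cosh t+b\sinh t$ with constants $a,\, b$, on a complete asymptotic
geodesic the mean curvature is, after a suitable translation of $t$,
proportional to $e^{\pm t}$ (when $|a|=|b|$) or to $1/\cosh t$ (when
$|a|>|b|$); the remaining case $|a|<|b|$ cannot occur there, since $1/H$
would then vanish at some finite $t$ and $H$ would diverge.
This is the behavior of the mean curvature referred to in the introduction.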
\begin{remark}
\label{rem:Massey}
Although Massey's original lemma \cite[Lemma 2]{Massey} is for flat surfaces in $\boldsymbol{R}^3$,
we can generalize it for extrinsically flat surfaces in $\S^3$ in the same way.
On the other hand, Murata and Umehara generalized Massey's lemma
for a class of flat surfaces with singularities ({\it flat fronts\/})
in $\boldsymbol{R}^3$ \cite[Lemma 1.15]{MurataUmehara}.
\end{remark}
\noindent
{\bf Proof of Proposition $\ref{prop:Portnoy}$}
\hspace{2mm}
Most of this proof is a modification of the proof of
the Hartman-Nirenberg theorem given by Massey \cite{Massey}.
However, part of Massey's original proof is not valid in the hyperbolic case;
thus the final part of this proof is written out carefully (see the Claim below).
Let $f : \Sigma \rightarrow \H^3$ be a complete extrinsically flat surface
and $\mathcal{W}$ the set of umbilic points of $f$.
Since the restriction of $f$ to $\mathcal{W}$ is a totally geodesic embedding,
$f|_{\mathcal{W}}$ is ruled.
By the proof of Lemma~\ref{lem:Massey},
for any non umbilic point in $\mathcal{W}^c=\Sigma\setminus\mathcal{W}$,
there exists a local coordinate neighborhood $\left(U ; (s,t) \right)$
around the point such that
\[
g = g_{11} ds^2 + dt^2 ,\qquad I\!I = h_{11} ds^2 \quad ( h_{11} \neq 0).
\]
Then it can be shown that the geodesic curvature of each $t$-curve vanishes everywhere.
This means that any asymptotic curve in $\mathcal{W}^c$ is a part of
geodesic in $\H^3$. For a fixed point $q \in \mathcal{W}^c$,
let $G(q)$ be the unique asymptotic curve in $\mathcal{W}^c$ passing through $q$.
By Lemma~\ref{lem:Massey}, it follows that the mean curvature $H$ is given by
\begin{equation}\label{eq:meancurva}
H=\frac{1}{a \cosh t + b \sinh t}
\end{equation}
on $G(q)$, where $a,\, b$ are constants and $t$ denotes the distance induced from the
first fundamental form of $f$ measured from $q$.
If $G(q)$ intersected the boundary $\partial \mathcal{W}$,
the mean curvature $H$ would vanish at $Q \in \partial \mathcal{W} \cap G(q)$,
since both principal curvatures vanish on $\mathcal{W}$;
but \eqref{eq:meancurva} never vanishes at finite $t$, a contradiction.
Thus no asymptotic curve in $\mathcal{W}^c$ intersects
the boundary of $\mathcal{W}^c$,
and hence $f|_{\mathcal{W}^c}$ is ruled.
It is sufficient to show the following
\vspace{2mm}
\noindent
{\bf Claim.}~{\it
$\partial \mathcal{W}$ is a disjoint union of geodesics in $\H^3$.
}
\begin{proof}
For a point $p \in \partial \mathcal{W}$,
there exists a sequence $\{p_n\}_{n \in \boldsymbol{N}}$ in $\mathcal{W}^c$ such that
$ \lim_{n\rightarrow \infty} p_n = p .$
Let $G(p_n)$ be the unique asymptotic curve through $p_n \in \mathcal{W}^c$.
Since $G(p_n)$ is a geodesic in $\H^3$, we can express it as
$G(p_n)(t)=p_n \cosh t + v_n \sinh t$,
with a unit tangent vector $v_n \in T_{p_n}\H^3$.
We shall prove that $\{v_n\}_{n \in \boldsymbol{N}}$ has a limit $v$,
after taking a subsequence if necessary.
Set $p_n=(p_{0_n},\vect{p}_n)$,
$v_n=(v_{0_n},\vect{v}_n) \in \L^4= \boldsymbol{R} \times \boldsymbol{R}^3 $.
Then we have
\[
-p_{0_n}^2+|\vect{p}_n|_{E}^2=-1 ,\qquad
-v_{0_n}^2+|\vect{v}_n|_{E}^2=1,\qquad
-p_{0_n}v_{0_n}+\inner{\vect{p}_n}{\vect{v}_n}_{E}=0,
\]
for all $n \in \boldsymbol{N}$, where $\langle \cdot,\cdot \rangle_{E}$ is
the Euclidean inner product of $\boldsymbol{R}^3$ and
$|\cdot|_{E}$ is the associated Euclidean norm.
By the Cauchy-Schwarz inequality,
\begin{equation*}
|v_{0_n}|
= \frac{1}{p_{0_n}}|\inner{\vect{p}_n}{\vect{v}_n}_{E}|
\leq \frac{1}{p_{0_n}}|\vect{p}_n|_{E}|\vect{v}_n|_{E}
= \sqrt{ \frac{p_{0_n}^2-1}{p_{0_n}^2} }\sqrt{v_{0_n}^2+1},
\end{equation*}
and we have
\begin{equation}
\label{ineq:principle}
\frac{|v_{0_n}|}{\sqrt{v_{0_n}^2+1}} \leq \sqrt{ 1-\frac{1}{p_{0_n}^2} } \leq 1,
\end{equation}
for $n \in \boldsymbol{N}$. If $|v_{0_n}| \rightarrow \infty$,
\[
\frac{|v_{0_n}|}{\sqrt{v_{0_n}^2+1}} \longrightarrow 1
\]
holds and we have
$p_{0_n} \rightarrow \infty$ by \eqref{ineq:principle}.
But this contradicts $\lim_{n\rightarrow \infty} p_n = p$.
Thus there exists $R>0$ such that $\{v_n\}_{n \in \boldsymbol{N}} \subset B(R)$,
where $B(R)=\{ {}^t(x_0,x_1,x_2,x_3)\in \L^4 \,|\, x_0^2+x_1^2+x_2^2+x_3^2 \leq R \}$.
If we set $\S^3_1:=\{ \vect{x}\in \L^4 \,|\, \inner{\vect{x}}{\vect{x}}=1 \}$,
we also have $\{v_n\}_{n \in \boldsymbol{N}} \subset \S^3_1 \cap B(R)$.
Since $\S^3_1 \cap B(R)$ is compact,
there exists a subsequence $\{ v_{n_k} \} \subset \{ v_n \}$
such that $\lim_{k\rightarrow \infty} v_{n_k} = v$ exists.
Therefore we can define
$G(p) = \lim_{n \rightarrow \infty} G(p_n) \subset \mathcal{W}^c \cup \partial \mathcal{W}$
as $\gamma_{p,v}$. If $G(p) \cap \mathcal{W}^c$ were nonempty,
we could take $q \in G(p) \cap \mathcal{W}^c$.
Then $G(q)=G(p)$, and hence $G(q)$ would pass through $p\in \partial \mathcal{W}$,
a contradiction.
Thus $G(p) \subset \partial \mathcal{W}$.
\end{proof}
As a corollary, we have the following
\begin{corollary}\label{fact:dev}
An isometric immersion of $\H^2$ into $\H^3$ is a complete
developable surface in $\H^3$.
\end{corollary}
\subsection{Proof of Theorem \ref{thm:null}}
\hspace{2mm}
Since a ruled surface in $\H^3$ is a locus of 1-parameter family of geodesics,
it gives a curve in the space of oriented geodesics $L\H^3$.
Conversely, a curve in $L\H^3$ generates a ruled surface
(it may have singularities) in $\H^3$.
Here, we shall investigate the curves given by developable surfaces in $\H^3$.
Let $(\mu_1,\mu_2)$ be a point in $L\H^3$ as in \eqref{eq:holcoord}.
Then it corresponds to an equivalence class $[\gamma]$,
where $\gamma(t)$ is expressed as
\begin{equation}\label{eq:geodesic}
\gamma(t)=\frac{1}{\left| 1+\mu_1\bar{\mu}_2 \right| }
\left(
\begin{array}{cc}
e^{t}+e^{-t}|\mu_1|^2&
e^t \mu_2 -e^{-t} \mu_1 \\
e^t \bar{\mu}_2 -e^{-t} \bar{\mu}_1 &
e^{t}|\mu_2|^2 + e^{-t}
\end{array}
\right) \in \operatorname{Herm}(2).
\end{equation}
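One checks directly that \eqref{eq:geodesic} takes values in $\H^3$: the
determinant of the matrix factor equals
$1+2\Re(\mu_1\bar{\mu}_2)+|\mu_1|^2|\mu_2|^2=|1+\mu_1\bar{\mu}_2|^2$, so that
$\det\gamma(t)=1$, while $\operatorname{trace}\gamma(t)>0$ is clear.
Moreover, applying the isometry $\Psi$ in \eqref{eq:isometry} and letting
$t\to\pm\infty$, one recovers $\gamma_-=-\mu_1$ and $\gamma_+=1/\bar{\mu}_2$,
in accordance with \eqref{eq:holcoord}.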
Recall that a regular curve in a pseudo-Riemannian manifold
is called {\it null\/} (resp.\ {\it causal\/}) if every tangent vector gives a
null (resp.\ timelike or null) direction.
Recall that the neutral metrics $\mathcal{G}^{\mathfrak{r}}$
and $\mathcal{G}^{\mathfrak{i}}$ are defined in \eqref{eq:metrics}.
Theorem~\ref{thm:null} is a direct conclusion of the following
\begin{proposition}\label{prop:representation}
For a regular curve $\alpha(s)=(\mu_1(s),\mu_2(s)) :
\boldsymbol{R} \supset I \rightarrow \mathcal{U} \subset L\H^3$
which is null with respect to $\mathcal{G}^{\mathfrak{i}}$ and
causal with respect to $\mathcal{G}^{\mathfrak{r}}$,
a map $f : I \times \boldsymbol{R} \rightarrow \H^3$ defined by
\begin{equation}\label{eq:representation}
f(s,t)=
\frac{1}{\left| 1+\mu_1(s)\bar{\mu}_2(s) \right| }
\left(
\begin{array}{cc}
e^{t}+e^{-t}|\mu_1(s)|^2&
e^t \mu_2(s) -e^{-t} \mu_1(s)\\
e^t \bar{\mu}_2(s) -e^{-t} \bar{\mu}_1(s) &
e^{t}|\mu_2(s)|^2 + e^{-t}
\end{array}
\right)
\end{equation}
is a developable surface.
Conversely, any developable surface generated by
complete geodesics in $\H^3$ can be written locally in this manner.
\end{proposition}
\begin{proof}
By \eqref{eq:geodesic}, the locus of $\alpha$ is parametrized by
$f$ as in \eqref{eq:representation}.
First we shall prove that if $\alpha$ is null with respect to $\mathcal{G}^{\mathfrak{i}}$
and causal with respect to $\mathcal{G}^{\mathfrak{r}}$, then $f$ is an immersion.
Set
\begin{equation}\label{eq:regularity}
\Lambda(s,t)
:=| f_s \times f_t |^2
= \frac{e^{2t} |\mu'_2|^2 + e^{-2t} |\mu'_1|^2 }{\left| 1+\mu_1\bar{\mu}_2 \right|^2}
-\frac{1}{2}\mathcal{G}^{\mathfrak{r}}(\alpha',\alpha'),
\end{equation}
where ${~}'=d/ds$, $f_s=\partial f/\partial s$, $f_t=\partial f/\partial t$
and $\times$ denotes the cross product of $\H^3$ as in \eqref{eq:cross}.
Thus $\Lambda(s,t)$ is positive
if $\mathcal{G}^{\mathfrak{r}}(\alpha',\alpha')$ is negative.
Consider the case $\mathcal{G}^{\mathfrak{r}}(\alpha',\alpha')=0$ at $s \in I$.
Since $\alpha$ is null with respect to $\mathcal{G}^{\mathfrak{i}}$ as well,
we have $\mu'_1\bar{\mu}'_2=0$,
so that either $\mu'_1=0$ or $\mu'_2=0$.
Without loss of generality, we may assume $\mu'_1=0$.
Then the regularity of $\alpha$ means $\mu'_2 \neq 0$, and then
$\Lambda(s,t)=e^{2t} |\mu'_2|^2/\left| 1+\mu_1\bar{\mu}_2 \right|^2$
is positive. Thus $f$ is an immersion.
Next we shall show that $f$ is extrinsically flat.
The unit normal vector field $\vect{\nu}$ of $f$ is given by
\begin{equation}\label{eq:unitnormal}
\vect{\nu}(s,t)
=\frac{ f_s \times f_t }{| f_s \times f_t |}
=\frac{i}{|1+\mu_1\bar{\mu}_2|^3 \sqrt{\Lambda(s,t)} }
\left(
\begin{array}{cc}
a(s,t) & z(s,t)\\ -\bar{z}(s,t) & b(s,t)
\end{array}
\right),
\end{equation}
where
\[
a(s,t) = 2i\Im \{ e^{t} (1+\mu_1\bar{\mu}_2) \bar{\mu}_1 \mu'_2
- e^{-t} (1+\mu_2\bar{\mu}_1) \bar{\mu}_1 \mu'_1\},
\]
\[
b(s,t) =-2i\Im \{ e^{t} (1+\mu_1\bar{\mu}_2) \bar{\mu}_2 \mu'_2
- e^{-t} (1+\mu_2\bar{\mu}_1) \bar{\mu}_2 \mu'_1\},
\]
\[
z(s,t)
= -e^{t}\{ (1+\mu_1\bar{\mu}_2) \mu'_2
+ (1+\mu_2\bar{\mu}_1) \mu_1\mu_2\bar{\mu}'_2 \}
+e^{-t}\{ (1+\mu_2\bar{\mu}_1) \mu'_1
+ (1+\mu_1\bar{\mu}_2) \mu_1\mu_2\bar{\mu}'_1 \}.
\]
Since
\[
K_{\rm ext}
=\frac{\inner{f_s}{\vect{\nu}_s}\inner{f_t}{\vect{\nu}_t}
-\inner{f_s}{\vect{\nu}_t}\inner{f_t}{\vect{\nu}_s}}{
\inner{f_s}{f_s}\inner{f_t}{f_t}-\inner{f_s}{f_t}^2}
\qquad \text{and} \qquad
\mathcal{G}^{\mathfrak{i}}(\alpha', \alpha')
= \Im\frac{4\mu'_1\bar{\mu}'_2}{(1+\mu_1\bar{\mu}_2)^2},
\]
we have
\begin{equation}\label{eq:Kext}
K_{\rm ext}
=
\frac{i}{ \sqrt{\Lambda(s,t)}^3 }
\left\{\frac{\mu'_1\bar{\mu}'_2}{
(1+\mu_1\bar{\mu}_2)^2}-\frac{\mu'_2\bar{\mu}'_1}{(1+\mu_2\bar{\mu}_1)^2} \right\}
=
\frac{-1}{2 \sqrt{\Lambda(s,t)}^3 }
\mathcal{G}^{\mathfrak{i}}(\alpha',\alpha').
\end{equation}
Therefore $\mathcal{G}^{\mathfrak{i}}(\alpha',\alpha') =0$
if and only if $K_{\rm ext} =0$.
Conversely, for a ruled surface $\hat{f} : \Sigma \rightarrow \H^3$,
there exists a $1$-parameter family $\alpha=\alpha(s)$ of geodesics
such that its locus coincides with the given surface $\hat{f}$.
Using a suitable isometry,
we may assume that the image of $\alpha$ is included
in $\mathcal{U}$ in \eqref{eq:nbd}, that is,
\[
\alpha : \boldsymbol{R} \supset I \ni s \longmapsto (\mu_1(s),\mu_2(s)) \in \mathcal{U} \subset L\H^3.
\]
Thus $\hat{f}$ is given by $f$ as in \eqref{eq:representation} locally.
We shall prove that, if the ruled surface $\hat{f}$ is developable,
$\alpha$ is a regular curve
which is null with respect to $\mathcal{G}^{\mathfrak{i}}$ and
causal with respect to $\mathcal{G}^{\mathfrak{r}}$.
If there exists a point such that $\alpha'=0$, $\hat{f}$
is not an immersion because of \eqref{eq:regularity}.
Thus $\alpha$ is a regular curve.
Moreover, $\alpha$ is null with respect to $\mathcal{G}^{\mathfrak{i}}$
by \eqref{eq:Kext}.
Finally, we shall prove that $\alpha$ is causal with respect to $\mathcal{G}^{\mathfrak{r}}$.
If $\mathcal{G}^{\mathfrak{r}}(\alpha',\alpha')>0$ at some point, then
\[
\mathcal{G}^{\mathfrak{r}}(\alpha',\alpha')
=\Re\frac{4\mu'_1\bar{\mu}'_2}{(1+\mu_1\bar{\mu}_2)^2}
=\frac{4|\mu'_1||\mu'_2|}{|1+\mu_1\bar{\mu}_2|^2},
\]
holds there, since $\mathcal{G}^{\mathfrak{i}}(\alpha',\alpha')=0$.
Then we have
\begin{eqnarray*}
\Lambda(s,t)=
\frac{4|\mu'_1||\mu'_2|}{\left| 1+\mu_1\bar{\mu}_2 \right|^2}
\sinh^2 \left( t+\frac{1}{2}\log\frac{|\mu'_2|}{|\mu'_1|} \right),
\end{eqnarray*}
and hence $\hat{f}$ has a singular point at $t=(\log|\mu'_1|-\log|\mu'_2|)/2$,
a contradiction.
\end{proof}
\subsection{Examples}
\hspace{2mm}
Nomizu \cite{Nomizu} constructed fundamental examples of
complete developable surfaces in $\H^3$
(cf. Figure~\ref{fig:Nomizu} in the introduction).
\begin{example}[Hyperbolic $2$-cylinders, {\cite[Example 1]{Nomizu}}]
\label{ex:1}
Let ${\bmath{D}}$ be the unit disc in $\boldsymbol{C}$.
For a regular curve $\zeta(s) : \boldsymbol{R} \rightarrow {\bmath{D}}$, set
\[
\alpha_1(s)=(-\zeta(s),\zeta(s)).
\]
Then $\alpha_1$ determines a regular curve in
$L\H^3=(\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta}$,
which is null with respect to $\mathcal{G}^{\mathfrak{i}}$
and causal with respect to $\mathcal{G}^{\mathfrak{r}}$.
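Indeed, writing $\mu_1=-\zeta$ and $\mu_2=\zeta$, a direct computation, which we spell out as a quick check, gives
\[
\mathcal{G}(\alpha_1',\alpha_1')
=\frac{4\mu_1'\bar{\mu}_2'}{(1+\mu_1\bar{\mu}_2)^2}
=\frac{-4|\zeta'|^2}{\left(1-|\zeta|^2\right)^2},
\]
which is real and negative. Hence $\mathcal{G}^{\mathfrak{i}}(\alpha_1',\alpha_1')=0$ and $\mathcal{G}^{\mathfrak{r}}(\alpha_1',\alpha_1')<0$.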
Thus by Theorem \ref{thm:null}, the locus of $\alpha_1$ is a developable surface,
called {\it hyperbolic $2$-cylinder\/}.
Figure~\ref{fig:Nomizu} (B) shows an example of $\zeta(s)=e^{is}/3$.
\end{example}
\begin{example}[Ideal cones, {\cite[Example 2]{Nomizu}}]
\label{ex:2}
For a regular curve $\mu(s) : \boldsymbol{R} \rightarrow \boldsymbol{C}$,
set
\[
\alpha_2(s)=(\mu(s),0).
\]
Then $\alpha_2$ determines a regular curve in
$L\H^3=(\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta}$,
which is null with respect to both $\mathcal{G}^{\mathfrak{i}}$ and
$\mathcal{G}^{\mathfrak{r}}$.
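This is immediate: $\mu_2\equiv 0$ gives $\bar{\mu}_2'\equiv 0$, so that
$\mathcal{G}(\alpha_2',\alpha_2')=4\mu_1'\bar{\mu}_2'/(1+\mu_1\bar{\mu}_2)^2$
vanishes identically.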
Thus by Theorem \ref{thm:null}, the locus of $\alpha_2$ is a developable surface.
Figure~\ref{fig:Nomizu} (C) shows an example of $\mu(s)=e^{is}/2$.
We will see this example more precisely in Section \ref{sec:exponential}.
\end{example}
\begin{example}[Rectifying developables of helices, {\cite[Example 3]{Nomizu}}]
\label{ex:3}
For constants $\kappa,\, \tau \in \boldsymbol{R} \setminus \{0\}$, set
$a_{\pm}:=\sqrt{(\kappa \pm 1)^2+\tau^2}$,
$A_{\pm}:=\sqrt{\pm(1-\kappa^2-\tau^2)+a_{+}a_{-} }$
and $\alpha_3 : \boldsymbol{R} \rightarrow \boldsymbol{C}^2$ as
\begin{multline*}
\alpha_3(s)= \left(
\kappa \frac{4\sqrt{2}\sqrt{\kappa^2+\tau^2}i+4\tau A_{-}}{
(\sqrt{2}\sqrt{\kappa^2+\tau^2}i+4\tau A_{+})( a_{+} + a_{-} )^2+4\kappa A_{-}}
\exp \left( \frac{ A_{+}+iA_{-} }{ \sqrt{2} }s \right) ,\right.\\
\left.
\frac{1}{\kappa}
\frac{(\sqrt{2}\sqrt{\kappa^2+\tau^2}-\tau A_{+})( a_{+} + a_{-} )^2-4\kappa A_{-}}{
4\sqrt{2}\sqrt{\kappa^2+\tau^2}i+4\tau A_{-}- ( a_{+} + a_{-} )^2 A_{+}}
\exp \left( \frac{ -A_{+}+iA_{-} }{ \sqrt{2} }s \right)
\right).
\end{multline*}
Then $\alpha_3$ determines a regular curve in
$L\H^3=(\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta}$,
which is null with respect to $\mathcal{G}^{\mathfrak{i}}$
and causal with respect to $\mathcal{G}^{\mathfrak{r}}$.
Thus by Theorem \ref{thm:null}, the locus of $\alpha_3$ is a developable surface.
In fact, this is a rectifying developable \cite{Nomizu}
of the helix of constant curvature $\kappa$ and torsion $\tau$ in $\H^3$.
Figure~\ref{fig:Nomizu} (D) shows an example of $\kappa=\tau=1$.
\end{example}
\section{Ideal Cones and Behavior of the Mean Curvature}
\label{sec:exponential}
In this section, we shall prove Theorem~\ref{thm:exponential} in the introduction.
First, we define ``ideal cones'', determine the corresponding curves in $L\H^3$
and investigate the behavior of their mean curvature.
Next, we introduce the notion of developable surfaces {\it of exponential type\/}
in $\H^3$. Finally, we prove Theorem~\ref{thm:exponential}.
\subsection{Null curves and ideal cones}
\label{subsec:3_idealcones}
\hspace{2mm}
\begin{definition}[Ideal cones]
\label{def:idealcones}
We call a complete developable surface in $\H^3$ an {\it ideal cone\/}
if it is the locus of a $1$-parameter family of geodesics
whose ends on one side all approach a common point in the ideal boundary.
The shared point is called the {\it vertex\/}.
\end{definition}
\begin{proposition}
\label{prop:vertex}
An ideal cone gives a curve in $L\H^3$ which is null with respect to both
$\mathcal{G}^{\mathfrak{i}}$ and $\mathcal{G}^{\mathfrak{r}}$.
Conversely, if the locus of a curve in $L\H^3$ which is null with respect to
both $\mathcal{G}^{\mathfrak{i}}$ and $\mathcal{G}^{\mathfrak{r}}$
is complete, then the locus is an ideal cone.
\end{proposition}
\begin{proof}
Without loss of generality, we may assume the vertex of the ideal cone
is $\infty \in \partial \H^3$.
Then the curve
$\alpha(s)=( \mu_1(s), \mu_2(s) ) \in (\hat{\boldsymbol{C}}\times\hat{\boldsymbol{C}})\setminus\hat{\Delta} = L\H^3$
given by the ideal cone satisfies $\mu_2(s)=0$. Hence
$\mathcal{G}^{\mathfrak{r}}(\alpha',\alpha')
=\mathcal{G}^{\mathfrak{i}}(\alpha',\alpha')=0$ holds.
Conversely, a curve $\alpha(s)=( \mu_1(s), \mu_2(s) )$ in $L\H^3$
is null with respect to $\mathcal{G}^{\mathfrak{i}}$
if and only if
$\mathcal{G}(\alpha', \alpha')$ is always real.
Moreover if $\alpha$ is null with respect to $\mathcal{G}^{\mathfrak{r}}$, we have
\begin{equation}\label{eq:EQcondition}
\mathcal{G}(\alpha', \alpha')
=\frac{4\mu'_1(s)\bar{\mu}'_2(s)}{(1+\mu_1(s)\bar{\mu}_2(s))^2}
= 0,
\end{equation}
for all $s$. By the regularity of $\alpha$, the sets $\{\mu'_1 \neq 0\}$ and
$\{\mu'_2 \neq 0\}$ are open, cover the (connected) parameter interval, and are
disjoint by \eqref{eq:EQcondition}; hence either $\mu'_1(s)$ vanishes identically
or so does $\mu'_2(s)$. In the latter case, say, $\mu_2$ is constant, so all the
geodesics share one ideal endpoint.
This means the locus of $\alpha$ is a ruled surface whose rulings are all
asymptotic to a single point in the ideal boundary, that is, an ideal cone by completeness.
\end{proof}
\begin{remark}
By Proposition \ref{prop:vertex},
it follows that a complete {\it ruled\/} surface
which is the locus of a $1$-parameter family of geodesics
whose ends on one side all approach a common point in the ideal boundary
is necessarily developable, that is, an ideal cone.
If the vertex is $\infty \in \partial \H^3$, the shape of ideal cone is
a cylinder over a plane curve in the upper half space $\boldsymbol{R}^3_+$
(cf.\ Figure \ref{fig:ballupper}).
\end{remark}
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c@{\hspace{30mm}}c}
\resizebox{3cm}{!}{\includegraphics{Exp.eps}} &
\resizebox{3cm}{!}{\includegraphics{Exp-up.eps}}\\
{\footnotesize (a) in the Poincar\'e ball model} &
{\footnotesize (b) in the upper half space model}
\end{tabular}
\end{center}
\caption{An ideal cone whose vertex is at $\infty$.}
\label{fig:ballupper}
\end{figure}
Now we shall investigate the behavior of the mean curvature of ideal cones.
\begin{proposition}
\label{prop:idealcone}
For an ideal cone $f$,
let $\gamma$ be an asymptotic curve of the non umbilic point set of $f$
such that $\gamma_+$ is the vertex of $f$,
and let $t$ be the arc length parameter of $\gamma$.
Then the mean curvature $H$ of $f$ is proportional to $e^t$ on $\gamma$.
\end{proposition}
\begin{proof}
Without loss of generality,
we may assume the vertex of $f$ is $\infty \in \partial \H^3$.
Then the curve $\alpha$ in $L\H^3$ corresponding to $f$ is given by
$\alpha(s)=(\mu(s),0)$ on $\mathcal{U} \subset L\H^3$.
By the representation formula \eqref{eq:representation},
$f$ can be written as
\begin{equation}\label{eq:repre-IC}
f(s,t)=
\left(
\begin{array}{cc}
e^{t}+e^{-t}|\mu(s)|^2&
-e^{-t} \mu(s) \\
-e^{-t} \bar{\mu}(s) &
e^{-t}
\end{array}
\right).
\end{equation}
Then the induced metric $g=f^*\inner{~}{~}$ is
\begin{equation}\label{eq:ind-met}
g=e^{-2t} |\mu'|^2ds^2+dt^2.
\end{equation}
Now we shall see that $\mu(s)$ can be considered as a
Euclidean plane curve, as follows.
By the isometry $\Psi : \H^3 \rightarrow \boldsymbol{R}^3_+$ as in \eqref{eq:isometry},
$f$ is transferred to $(\Psi \circ f)(s,t) =( \mu(s), e^t ) \in \boldsymbol{R}^3_+$, that is,
the cylinder over the plane curve $\mu(s)\in \boldsymbol{C}$.
Set $\Omega := \{(w,1) \,|\, w \in \boldsymbol{C}\} \subset \boldsymbol{R}^3_+$,
a complete flat surface in $\boldsymbol{R}^3_+$, the so-called {\it horosphere\/}
through $(0,1)$ and $\infty$.
Thus $\Omega$ can be considered as the Euclidean plane.
Then the intersection of the image of $f$ with $\Omega$ is parametrized by
$ (\Psi \circ f)(s,0) = ( \mu(s), 1 )$.
Thus we can consider $\mu$ as a curve in the Euclidean plane $\Omega$.
If we take the arc length parameter $s$ of the curve $\mu$ in $\Omega$,
the induced metric $g$ in \eqref{eq:ind-met} is written as $g=e^{-2t} ds^2+dt^2$.
Since the unit normal vector field $\vect{\nu}$ of $f$ can be expressed by
\[
\vect{\nu}(s,t)=\left(
\begin{array}{cc}
2\Im(\bar{\mu}\mu' ) & i\mu'\\ -i\bar{\mu}' & 0
\end{array}\right),
\]
the second fundamental form $I\!I$ of $f$ is written as
$I\!I = e^{-t} \Im(\mu' \bar{\mu}'')ds^2 = - e^{-t} \kappa_{E}(s) ds^2,$
where $\kappa_{E}$ is the curvature of $\mu$ in the Euclidean plane $\Omega$.
Therefore the mean curvature $H$ of $f$ is given by
$H(s,t) = I\!I_{11}/(2g_{11}) = - e^{t} \kappa_{E}(s) / 2.$
\end{proof}
\subsection{Developable surfaces of exponential type}
\label{subsec:3_exptype}
\hspace{2mm}
Here we shall investigate the behavior of the mean curvature
of {\it complete\/} developable surfaces.
For a complete developable surface $f : \Sigma \rightarrow \H^3$,
let $p \in \Sigma$ be a non umbilic point.
Then there exists a unique asymptotic curve $\gamma$
through $p$ which is a geodesic in $\H^3$.
By hyperbolic Massey's lemma (Lemma~\ref{lem:Massey}), it holds that
\[
\frac{1}{H}=P \cosh t + Q \sinh t
\]
on $\gamma$ (see \eqref{eq:meancurva}),
where $P$ and $Q$ are constants and $t$ is the arc length parameter of $\gamma$.
Without loss of generality, we may assume $P$ is positive. Then
\begin{equation*}
\frac{1}{H}=
\begin{cases}
\sqrt{P^2 -Q^2}\cosh\left(t +\dfrac{1}{2}\log \dfrac{P+Q}{P-Q}\right)
& \qquad(\text{if}~ P>|Q|), \\
P e^{\pm t}
& \qquad(\text{if}~ P=|Q|),\\
\sqrt{Q^2 -P^2}\sinh \left(t +\dfrac{1}{2}\log \dfrac{Q+P}{Q-P}\right)
& \qquad(\text{if}~ P<|Q|).
\end{cases}
\end{equation*}
Completeness of $f$ implies that $t$ varies from $-\infty$ to $\infty$.
But in the third case,
the mean curvature diverges at some $t \in \boldsymbol{R}$, a contradiction.
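(Explicitly, in the third case $1/H$ vanishes at
$t=-\frac{1}{2}\log \frac{Q+P}{Q-P}$, where $H$ must blow up.)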
Hence only the first and the second cases can happen,
that is, the mean curvature $H$ of a complete developable surface
is proportional to an exponential function or to a hyperbolic secant function
on each asymptotic curve with respect to the arc length parameter.
\begin{definition}[Developable surfaces of exponential type]
\label{def:exptype}
A complete developable surface is said to be {\it of exponential type\/}
if it is not totally umbilic and the mean curvature is proportional to $e^{\pm t}$
on each asymptotic curve in the set of non umbilic points,
where $t$ is the arc length parameter of the asymptotic curve.
\end{definition}
Proposition \ref{prop:idealcone} says that
non totally umbilic ideal cones are developable surfaces of exponential type.
\subsection{Proof of Theorem \ref{thm:exponential}}
\label{subsec:3_sufficient}
\hspace{2mm}
\begin{definition}[Asymptotics of geodesics]
\label{def:asymp-geod}
Two unit speed geodesics $\gamma_1$, $\gamma_2$ in $\H^3$ are said to be
{\it asymptotic\/} if
$ \left\{ d\left( \gamma_1(t),\gamma_2(t) \right) ~|~ t > 0 \right\} $
is bounded from above, where $d$ denotes the hyperbolic distance.
\end{definition}
For $(p,v)$, $(q,w) \in U\H^3$, it is known that the geodesics
\[
\gamma_{p,v}(t)=p\cosh t+v\sinh t ,\qquad \gamma_{q,w}(t)=q\cosh t+w\sinh t
\]
are asymptotic if and only if
$ \inner{p+v}{q+w}=0$ holds.
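For the reader's convenience we recall why: writing
\[
\gamma_{p,v}(t)=\frac{e^{t}}{2}(p+v)+\frac{e^{-t}}{2}(p-v),
\]
one sees that the future ideal endpoint of $\gamma_{p,v}$ is the ray spanned by
the null vector $p+v$. Two such geodesics stay at bounded distance for $t>0$
exactly when their future endpoints coincide, that is, when $p+v$ and $q+w$ are
proportional; for null vectors in the same component of the light cone this is
equivalent to $\inner{p+v}{q+w}=0$.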
Theorem~\ref{thm:exponential} in the introduction is proved directly by the following
\begin{proposition}\label{prop:exp-IC}
A developable surface of exponential type
whose umbilic point set has no interior is an ideal cone.
That is, the asymptotic curves of such a surface are all asymptotic to one another.
\end{proposition}
Let $f : \Sigma \rightarrow \H^3$ be a developable surface of exponential type
whose umbilic point set has no interior.
We may assume $\Sigma$ is simply connected,
taking the universal cover $\H^2$, if necessary.
Here, we consider $\H^2$ as the hyperboloid in
the Lorentz-Minkowski $3$-space $\L^3$.
The proof is divided into three steps (Claims $1$--$3$).
\vspace{2mm}
\noindent
{\bf Claim 1.}~{\it
There exists a global coordinate system
$\varphi=(s,t) : \Sigma=\H^2 \rightarrow \boldsymbol{R}^2$ such that
\begin{equation}\label{eq:expimms}
(f \circ \varphi^{-1})(s,t)= c(s)\cosh t + v(s)\sinh t
\end{equation}
holds, the induced metric $g$ and the second fundamental form $I\!I$ of $f$
are given by
\[
g = g_{11}(s,t) ds^2 + dt^2,\qquad I\!I= e^{t} \delta(s) g_{11}(s,t) ds^2,
\]
respectively, where $\delta$ is a smooth function of $s$.
}
\begin{proof}
Since the umbilic point set of $f$ has no interior,
the proof of Proposition \ref{prop:Portnoy}
implies that each connected component of the umbilic point set is a geodesic in $\H^3$.
Thus by the proof of Lemma \ref{lem:Massey},
we can find a coordinate neighborhood $(U ; (s,t)) \subset \H^2$
such that $U$ is open dense in $\H^2$ and $g = g_{11}(s,t) ds^2 + dt^2$ hold on $U$.
By taking $t \mapsto t + {\rm constant}$, if necessary,
the coordinate systems $(s,t)$ can be joined smoothly
across the umbilic point set.
\end{proof}
\vspace{2mm}
\noindent
{\bf Claim 2.}~{\it
The vector field $v(s)$ in \eqref{eq:expimms} is expressed as
\begin{equation}\label{eq:direction}
v(s) = \frac{ \vect{n}(s)-\delta(s)\vect{b}(s) }{\sqrt{1+\{\delta(s)\}^2}},
\end{equation}
where $\vect{n}$ and $\vect{b}$ denote
the principal normal and binormal vector fields of the curve ${c}$ in $\H^3$, respectively.
Furthermore, the curvature $\kappa$ and the torsion $\tau$ of ${c}$ satisfy
\begin{equation}\label{eq:curvature-torsion}
\kappa(s)=\sqrt{1+\{\delta(s)\}^2} ,\qquad \tau(s)=\frac{\delta'(s)}{1+\{\delta(s)\}^2}.
\end{equation}
}
\begin{proof}
We may assume the curve $c$ in $\H^3$ is parametrized by the arc length $s$.
Let $\beta$ be the curve in $\H^2$
which is the inverse image of the curve ${c}$ by $f$.
By changing the orientation of $\beta$, if necessary,
we may assume the unit normal vector $N$ of $\beta$ in $\H^2$ satisfies
\begin{equation}\label{eq:conormal}
f_{*}(N)=v.
\end{equation}
Then the map $Y:\boldsymbol{R}^2 \rightarrow \H^2 \subset \L^3$ defined by
\[
Y(s,t)=\beta(s)\cosh t + N(s)\sinh t
\]
gives a parametrization of $\H^2$.
Let $\vect{\nu}$ be the unit normal vector field of $f$.
Then the shape operator $A$ of $f$ satisfies
$A(Y_s) = \delta(s)e^{t} Y_s$, $A(Y_t) = \vect{0}$.
Let $\kappa_{\beta}$ be the geodesic curvature of $\beta$
and $\nabla$ the Levi-Civita connection of $\H^2$.
By the Frenet formula for the curve $\beta$ in $\H^2$,
\begin{equation}\label{eq:Frenet}
\nabla_s{N}=N'(s)=-\kappa_{\beta}(s)\beta'(s)
\end{equation}
holds, where we consider $N$ is the $\L^3$-valued function and $N'=dN/ds$, etc.
Thus we have
$Y_s := \partial Y/\partial s = (\cosh t- \kappa_{\beta}(s)\sinh t )\beta'(s)$,
and hence
\[
\nabla_t Y_s
= \frac{\sinh t -\kappa_{\beta}(s)\cosh t}{\cosh t -\kappa_{\beta}(s)\sinh t} Y_s
\]
holds. Since the shape operator $A$ of $f$ satisfies
the Codazzi equation \eqref{eq:Codazzi}, it follows that
\begin{eqnarray*}
\vect{0}=(\nabla_tA)(Y_s)-(\nabla_sA)(Y_t)
=\nabla_t(\delta(s)e^{t} Y_s)
=\left( 1 + \frac{\sinh t -\kappa_{\beta}(s)\cosh t}{
\cosh t -\kappa_{\beta}(s)\sinh t} \right) \delta(s)e^{t} Y_s,
\end{eqnarray*}
where $Y_t=\partial Y/\partial t$. Substituting $t=0$ into this, we have
$(1-\kappa_{\beta}(s))\delta(s)=0$. Since the umbilic point set of $f$ has no
interior, $\delta(s) \neq 0$ holds on a dense set of $s$, and hence, by continuity,
\begin{equation}\label{eq:horocycle}
\kappa_{\beta}(s) = 1
\end{equation}
for all $s$ in $\boldsymbol{R}$, that is, $\beta$ is congruent to a horocycle.
Next, we shall calculate the principal normal vector field $\vect{n}$,
the binormal vector field $\vect{b}$,
curvature $\kappa$ and torsion $\tau$ of the curve $c$ in $\H^3$.
Let $D$ be the Levi-Civita connection of $\H^3$.
By \eqref{eq:Frenet} and \eqref{eq:horocycle},
$\nabla_s \beta'(s)=N(s)$ holds. Moreover, by \eqref{eq:conormal}, it holds that
\begin{eqnarray*}
D_s{c}'(s)
&=&f_{\ast}(\nabla_s \beta'(s))+I\!I(\beta'(s),\beta'(s)) \vect{\nu}(s,0)\\
&=&f_{\ast}(N(s))+\delta(s)\vect{\nu}(s,0)
={v}(s)+\delta(s)\vect{\nu}(s,0),
\end{eqnarray*}
and hence we have
\[
\kappa(s)
=\left| D_sc'(s) \right|
=\sqrt{1+\{\delta(s)\}^2},
\qquad
\vect{n}(s)
=\frac{D_sc'(s)}{\kappa(s)}
=\frac{{v}(s)+\delta(s)\vect{\nu}(s,0)}{\sqrt{1+\{\delta(s)\}^2}}.
\]
If we denote by $\vect{e}(s)=c' (s)$ the unit tangent vector field of ${c}$,
$\vect{b}(s)$ is obtained as
\begin{equation*}
\vect{b}(s)
=\vect{e}(s) \times \vect{n}(s)
=\frac{\vect{\nu}(s,0)-\delta(s){v}(s)}{\sqrt{1+\{\delta(s)\}^2}},
\end{equation*}
where $\times$ is the cross product in $\H^3$ (cf.\ \eqref{eq:cross}). Since
\[
\left\{
\begin{array}{ll}
D_s\vect{\nu}(s,0)
=-f_{\ast}(A(Y_s)(s,0))
=-f_{\ast}(\delta(s)Y_s(s,0))
=-\delta(s)\vect{e}(s)\\
D_s{v}(s)
=f_{\ast}(\nabla_sN)+ \inner{A(N)}{\beta'} \vect{\nu}(s,0)
=f_{\ast}(-\beta'(s))=-\vect{e}(s),
\end{array}
\right.
\]
we have
\begin{eqnarray*}
D_s\vect{b}(s)=\vect{b}'(s)
=-\frac{\delta'(s)}{1+\{\delta(s)\}^2}\frac{{v}(s)+\delta(s)\vect{\nu}(s,0)}{%
\sqrt{1+\{\delta(s)\}^2}}
=-\frac{\delta'(s)}{1+\{\delta(s)\}^2}\vect{n}(s).
\end{eqnarray*}
Thus the torsion $\tau$ of ${c}$ is given as in \eqref{eq:curvature-torsion}.
Since the unit vector field $v(s)$ is included in the normal plane of ${c}$ and satisfies
\[
\langle {v}(s),\vect{n}(s) \rangle =\frac{1}{\sqrt{1+\{\delta(s)\}^2}} ,\qquad
\langle {v}(s),\vect{b}(s) \rangle = -\frac{\delta(s)}{\sqrt{1+\{\delta(s)\}^2}},
\]
we have that $v(s)$ is the form given in \eqref{eq:direction}.
\end{proof}
\vspace{2mm}
\noindent
{\bf Claim 3.}~{\it
Any two asymptotic curves are asymptotic to each other in the sense of
Definition $\ref{def:asymp-geod}$.
}
\begin{proof}
Under the notations in Claims 1 and 2, we have
\[
(f \circ \varphi^{-1})(s,t)
= {c}(s)\cosh t + \frac{ \vect{n}(s)-\delta(s)\vect{b}(s) }{\kappa(s)} \sinh t .
\]
For $s \in \boldsymbol{R}$, set $\gamma_{s}(t):=(f \circ \varphi^{-1})(s,t)$.
By the criterion recalled above, it is sufficient to prove that, for fixed $s_0 \in \boldsymbol{R}$, the function
\[
\rho : \boldsymbol{R} \ni s \longmapsto
\inner{{c}(s)+\frac{ \vect{n}(s)-\delta(s)\vect{b}(s) }{\kappa(s)}}{{c}(s_0)
+\frac{ \vect{n}(s_0)-\delta(s_0)\vect{b}(s_0) }{\kappa(s_0)}} \in \boldsymbol{R},
\]
is identically zero. Using the Frenet--Serret formula
\[
\vect{e}'(s)=c(s)+\kappa(s)\vect{n}(s),\qquad
\vect{n}'(s)=-\kappa(s)\vect{e}(s)+\tau(s)\vect{b}(s),\qquad
\vect{b}'(s)=-\tau(s)\vect{n}(s)
\]
for the curve ${c}$ in $\H^3$, we have
\begin{multline}\label{eq:phi}
\frac{d}{ds}\left(c(s)+\frac{ \vect{n}(s)-\delta(s)\vect{b}(s) }{\kappa(s)}\right) =
\frac{\kappa(s)\tau(s)\delta(s)-\kappa'(s)}{\kappa^2(s)}\vect{n}(s)\\
+\frac{\kappa(s)\tau(s)-\kappa(s)\delta'(s)+\kappa'(s)\delta(s)}{\kappa^2(s)}\vect{b}(s).
\end{multline}
On the other hand, we have
\[
\kappa(s)\tau(s)\delta(s)-\kappa'(s)
=\kappa(s)\tau(s)-\kappa(s)\delta'(s)+\kappa'(s)\delta(s)
=0,
\]
by \eqref{eq:curvature-torsion} in Claim $2$.
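Indeed, spelled out for completeness, \eqref{eq:curvature-torsion} gives
\[
\kappa'(s)=\frac{\delta(s)\delta'(s)}{\sqrt{1+\{\delta(s)\}^2}},\qquad
\kappa(s)\tau(s)=\frac{\delta'(s)}{\sqrt{1+\{\delta(s)\}^2}},
\]
so that $\kappa\tau\delta-\kappa'=0$ and
$\kappa\tau-\kappa\delta'+\kappa'\delta
=\delta'\sqrt{1+\delta^2}-\delta'\sqrt{1+\delta^2}=0$.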
Substituting this into \eqref{eq:phi},
we have $\rho'(s) = 0$ for all $s$.
Since also $\rho(s_0)=\inner{c(s_0)+v(s_0)}{c(s_0)+v(s_0)}=0$, the vector
$c(s_0)+v(s_0)$ being null, we obtain $\rho(s) = 0$ for all $s$.
\end{proof}
\subsection{A non-real-analytic example}
\label{subsec:ex-exp}
\hspace{2mm}
\begin{example}
\label{ex:NRA}
The assumption of analyticity in Theorem \ref{thm:exponential} cannot be
removed, since a non-real-analytic developable surface of exponential type
may be asymptotic to more than one point in the ideal boundary.
Figure~\ref{fig:expn2} shows an example asymptotic to two distinct points
in the ideal boundary.
\begin{figure}[htb]
\begin{tabular}{cc}
\resizebox{4cm}{!}{\includegraphics{Exp-2.eps}}
\end{tabular}
\caption{
A non-real-analytic developable surface of
exponential type asymptotic to $0$ and $\infty$.}
\label{fig:expn2}
\end{figure}
\end{example}
\noindent
The corresponding curve $\alpha(s)$ in $L\H^3$ is given by
$\alpha(s)=(x_1(s)+iy_1(s),x_2(s)+iy_2(s))$, where
\[
x_1(s)=
\begin{cases}
0
& (s \leq -1) \\
(\sqrt{2}-1)(s+1)/(1+e^{\frac{1}{s}+\frac{1}{s+1}})
&(-1<s<0)\\
(\sqrt{2}-1)(s+1)
& (0\leq s),
\end{cases} \qquad
y_1(s)=
\begin{cases}
0
& (s \leq \sqrt{2}) \\
2 e^{\frac{\sqrt{2}+1}{\sqrt{2}-s}}
& (\sqrt{2}<s),
\end{cases}
\]
\[
x_2(s)=
\begin{cases}
(\sqrt{2}-1)(1-s)
& (s \leq 0) \\
(\sqrt{2}-1)(1-s)/(1+e^{\frac{1}{1-s}-\frac{1}{s}})
&(0<s<1)\\
0
& (1\leq s),
\end{cases} \qquad
y_2(s)=
\begin{cases}
2 e^{\frac{\sqrt{2}+1}{\sqrt{2}-s}}
& (s \leq -\sqrt{2}) \\
0
& (-\sqrt{2}<s).
\end{cases}
\]
|
1,108,101,566,660 | arxiv | \section{INTRODUCTION}
With the advent of deep learning, increasing computational power, and massive datasets, we have been able to achieve better accuracy in many domains. Even though deep learning is applied widely in the food industry, object detection on traditional Indian cuisine remains uncharted territory.
A food photo, especially of an Indian platter (also called a \textit{thali}), comprises several kinds of food dishes (see Fig.~\ref{Indian_Dish}). These food dishes can either be served on the same plate (non-distinct boundaries) or in different plates and bowls (distinct boundaries). In such cases, single-label image classification models fail.
\begin{figure}[!hbt]
\centering
\includegraphics[width=2.5in]{fig1.png}
\caption{Some examples of Indian platter dishes}
\label{Indian_Dish}
\end{figure}
Multi-label object detection models such as F-RCNN, SSD and YOLO can detect multiple items in an image and localize them using bounding boxes. Localization is useful in developing real-world applications, wherein the user can find the position of each dish on a given plate.
In this paper, we provide two labelled data sets: \textbf{\emph{IndianFood10}}: a dataset of 10 traditional Indian food items with more than 12,000 images, and \textbf{\emph{IndianFood20}}: an extension of IndianFood10 with 10 more popular traditional Indian food classes. Further, we demonstrate the use of transfer learning with YOLOv4 on the \emph{IndianFood10} dataset. Our key contributions include providing two comprehensive data sets and achieving a state-of-the-art mAP score of 91.8\% in object detection in the Indian food domain. Please note that our work with the 20 class data set is preliminary and we have not reported its results.
\section{RELATED WORK}
Several researchers have addressed the challenges in the field of food image recognition. Kawano and Yanai~\cite{kawano2014food} used a deep convolutional neural network to recognize food items from the UECFood-100 image dataset~\cite{matsuda2012recognition} and achieved a top-1 accuracy of 72.26\% and a top-5 accuracy of 92\%. In \cite{liu2016deepfood}, the authors used a convolutional neural network on the Food-101 and UEC-256 datasets, achieving 77.4\% and 93.7\% as the top-1 and top-5 accuracy, respectively, on the former dataset. For the latter dataset, they reported 63.8\% and 87.2\% as the top-1 and top-5 accuracy, respectively. None of the above mentioned data sets had any Indian food item in them.\\
In the field of computer vision, object detection and localization are additional tasks that aim to detect the food items in multi-food images. Previous studies used SIFT \cite{lowe1999object}, HOG \cite{wang2009evaluation} and SURF \cite{bay2006surf} on the ImageNet and COCO datasets to detect objects but could not achieve high accuracy. In recent years, deep learning models have shown great improvement in object detection. Ashutosh et al. \cite{singla2016food} used a deep neural network based GoogLeNet classifier to segregate food and non-food images and classified each food item. Amongst the initial works in food object detection, Matsuda et al.~\cite{matsuda2012recognition} used various traditional computer vision methods for object detection on food images. In \cite{7900117} the authors used a very interesting technique for food item detection: they created a food activation map (a heat map of probabilities), made bounding boxes using it, and then used these boxes to classify the items in the boxes. Hoff et al.~\cite{hoff2018snap} have made a food object detection app which can be used to track a user's daily food intake.\\
\begin{figure*}[!hbt]
\centering
\includegraphics[width=5.5in]{yolov4_image.jpg}
\caption{YOLOv4 Architecture \cite{yolo_diagram}}
\label{yolo_arch}
\end{figure*}
Although some work has been done on image recognition in the Indian food context \cite{rajayogi2019indian,yadav2021food}, very little work has been done on localising or detecting multiple Indian food items in an image. In \cite{ramesh2020real}, the authors performed object detection using a Single Shot Detector (SSD) and InceptionV2 on an Indian food dataset of 60 classes using 4200 images (only 70 images per class). However, even after using image augmentation, their number of images per class is relatively low, which tends to cause overfitting; moreover, many of the food classes in their dataset were not traditional Indian dishes (e.g. pizza, pasta, noodles etc.) and some were very trivial classes (e.g. tomato, cucumber, water etc.). In BTBU-60 \cite{cai2019btbufood} the authors provide a dataset of 60 daily-use food items in Chinese cuisine for object detection; some classes were mango, papaya, potato and tomato, along with some Chinese items. Even though our objectives are similar (object detection in food items), their dataset domain is different from ours, as our focus is on fully cooked Indian cuisine and we do not have any raw ingredients or vegetables/fruits as part of our dataset.
As mentioned earlier, an Indian food platter (\emph{thali}, see Fig. 1) consists of several food items in a single plate and hence single image classification models are not useful for them. Therefore, two large datasets of 10 and 20 traditional Indian food items are proposed in this research along with the application of transfer learning with YOLOv4 for detecting multiple food items in a single image, which is suitable for Indian \emph{thali}.\\
\section{BACKGROUND}
\subsection{Object Detection}
Object detection \cite{zou2019object} is a critical problem in the domain of computer vision, where the model's task is to locate and identify all objects in an image. Object detection is a combination of object localization and classification for multiple objects in an image.\\
Earlier, due to the lack of effective image representation techniques, handcrafted traditional methods, namely Viola--Jones detectors, the HOG detector and the Deformable Part-based Model (DPM), were used. With the advancement in technology, deep learning based methods, namely two-stage and one-stage detectors, are used to detect objects within an image.
\subsection{YOLOv4}
YOLO (You Only Look Once) is a fast one-stage object detector model. YOLO's architecture (Fig.~\ref{yolo_arch}) is similar to FCNN (Fully Convolutional Neural Network). Rather than just the local perspective, it considers the entire image and includes all the contextual information. It even uses features from the entire image to predict each bounding box.\\
\begin{figure*}[!hbt]
\centering
\includegraphics[width=18cm]{pipeline.png}
\caption{Flow chart of the steps and the model pipeline}
\label{fig:pipeline}
\end{figure*}
YOLOv4 is the current state-of-the-art object detection model \cite{bochkovskiy2020yolov4} and was even used to detect fashion apparel in \cite{lee2021two}. Starting with the backbone, or the feature formation part, the authors use the CSPDarknet53 model, based on DenseNet. It then contains an SPP block. Moving on, YOLOv4's neck, or feature aggregation component, is PANet. Finally, the head, or the detection step, remains unchanged from YOLOv3. It has 3 levels of detection along with an anchor based detection system.
\subsection{Metrics}
The metrics used for object detection are \textbf{Average Precision} (the average of precision under different recalls) and \textbf{Mean Average Precision} (mAP: the mean of the average precision over the different classes in the dataset). To find the object localization accuracy, the \textbf{IoU} (Intersection Over Union) metric is calculated between the ground truth and the predicted bounding box. If the IoU is above the set threshold, then we infer that the object is successfully detected. Generally, the IoU threshold is set as 0.5.
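For concreteness, the IoU of two axis-aligned boxes in $(x_{\min}, y_{\min}, x_{\max}, y_{\max})$ pixel format can be computed as in the following minimal Python sketch (shown only for illustration; it is not the evaluation code behind our reported scores):
\begin{verbatim}
def iou(box_a, box_b):
    # Boxes are (x_min, y_min, x_max, y_max) in pixels.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # The intersection is empty when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
\end{verbatim}
A prediction is counted as a true positive when its IoU with a ground-truth box of the same class exceeds the threshold.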
\begin{figure*}[!hbt]
\centering
\includegraphics[width=5.5in,height=7.5cm]{chapati.png}
\caption{(i) Different orientations of chapati (ii) Model prediction}
\label{fig:Chapati Image}
\end{figure*}
\section{MATERIALS AND METHODS}
\subsection{Data Preparation}
From a list of more than 100 Indian food items, we analyzed the number of images/posts on Instagram for each food item by using a hashtag and then selected a few of the most popular Indian food items (such as \#alooparatha, \#plainrice, \#biryani etc.). We chose Instagram since it has more than 1 billion monthly users who share more than 100 million posts every day~\cite{nobles2020automated}. We used Python's \textit{Selenium} library to scrape Instagram post URLs for every hashtag and then downloaded the images using another Python library, \textit{Requests}; similar work was done in \cite{nobles2020automated}, where the authors used \#HIV to download images from Instagram.
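A minimal sketch of this two-step pipeline is shown below. The hashtag URL pattern is real, but the waiting logic is simplified and the file names are illustrative placeholders; Instagram's page structure changes frequently, so the exact selectors we used are omitted:
\begin{verbatim}
import os, time, requests
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.instagram.com/explore/tags/alooparatha/")
time.sleep(5)  # allow posts to load; in practice, scroll repeatedly
img_urls = [e.get_attribute("src")
            for e in driver.find_elements("tag name", "img")]
driver.quit()

os.makedirs("images", exist_ok=True)
for k, url in enumerate(img_urls):
    resp = requests.get(url, timeout=30)
    if resp.status_code == 200:
        with open(os.path.join("images", f"img_{k}.jpg"), "wb") as fh:
            fh.write(resp.content)
\end{verbatim}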
\subsection{Dataset}
Our \textit{IndianFood10} dataset consists of a total of 11,547 images with annotation text files for every image (see Table~\ref{tab:perf_stat} for all the food classes in \textit{IndianFood10}). Out of the total images, 842 images (approximately 7\% of the whole dataset) are multi-dish images (i.e., images containing more than one unique food class). For single-dish images, we annotated the single dish of interest. For platter (multi-dish) images, we annotated more than one dish of interest. The average dishes-per-image ratio for platter images is 2.33.
By virtue of being such a diverse cuisine, each Indian food dish can be paired with multiple other food dishes, yielding no clear boundaries between two dishes. Even the same dish may have large visual variance (high intra-class variation) because of different cooking methods and presentation, which brings certain challenges to the recognition \cite{Wang2019}. For example, the class \textit{chapati} (a type of Indian bread) can co-occur with various classes like \textit{palak paneer}, \textit{plain rice} etc. in different orientations (fully opened, half-folded and quarter-folded; see Fig.~\ref{fig:Chapati Image} (i)). Hence, a large number of images of each class at different scales, lighting, rotations, sides and on different backgrounds are required \cite{bochkovskiy2020yolov4}, and for this reason we have such a large dataset.
\subsection{Annotation}
We annotated the images using an open-source software tool, makesense.ai~\cite{make-sense}. Here, we uploaded the raw input images of our dataset and annotated each food item of interest present in every image by manually creating the bounding boxes and labelling each box with the food class. For every image in the dataset, a text file was generated in YOLOv4 format, containing the coordinates of the bounding boxes created for each food item in the image together with the food class number.
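Concretely, each line of such a file follows the Darknet/YOLO convention \texttt{<class> <x\_center> <y\_center> <width> <height>}, with all values normalized by the image width and height. The conversion from a pixel-space box can be sketched as follows (the class index and box values in the usage comment are made up for illustration):
\begin{verbatim}
def to_yolo_line(cls, x_min, y_min, x_max, y_max, img_w, img_h):
    # Normalize the box center and size by the image dimensions.
    xc = (x_min + x_max) / 2.0 / img_w
    yc = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# e.g. to_yolo_line(2, 120, 80, 360, 300, 640, 480)
# ->  "2 0.375000 0.395833 0.375000 0.458333"
\end{verbatim}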
\subsection{Approach}
We trained Alexey Bochkovskiy's YOLOv4 \cite{bochkovskiy2020yolov4} object detection model on 80\% of the entire \textit{IndianFood10} dataset (single-dish images and platter images). The dataset is available on \href{https://drive.google.com/drive/u/0/folders/16kRxAgfQfVBD2ebLdHTqylZdWSB2rygQ}{Google Drive} and has also been uploaded to IEEE DataPort \cite{IndianFoodDataset}. The model was trained on Google Colab, which provided Tesla K80 and Tesla T4 GPUs. At the end of training, the metrics were computed by testing against the validation set (20\% of the \textit{IndianFood10} dataset).
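For reference, a Darknet training run of this kind is launched with a command of roughly the following shape (the file names are illustrative: \texttt{obj.data} points to the class names and the train/validation image lists, \texttt{yolov4.conv.137} holds the pre-trained convolutional weights used for transfer learning, and the \texttt{-map} flag periodically evaluates mAP on the validation set):
\begin{verbatim}
./darknet detector train data/obj.data cfg/yolov4-custom.cfg \
    yolov4.conv.137 -dont_show -map
\end{verbatim}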
\section{RESULTS AND DISCUSSION}
The most common evaluation metrics for any object detection model are Precision-Recall curves and Average Precision. We use the standardised code provided by Padilla et al.~\cite{padillaCITE2020} to compute our model scores. With an IoU threshold of 0.5, we achieve a 10-class mean average precision (mAP) score of 91.76\% and an F1 score of 0.90 (see Table~\ref{tab:perf_it}).
Table~\ref{tab:perf_stat} lists the individual class average precision scores, and Fig.~\ref{confusion_mat} shows the corresponding confusion matrix. An extra class \textit{None} was introduced to account for images where the model failed to predict any class. The true class of a single-dish image can never be \textit{None}, and hence the last row in the confusion matrix has been greyed out. Our model performs very well: it is able to detect food classes correctly despite many of the food items having high intra-class variation (see Fig.~\ref{fig:Chapati Image} (ii)) and no clear boundaries between them (see Fig.~\ref{fig:fig_results}). Fig.~\ref{PR_curve} shows the PR-curves for all the 10 classes.
\begin{figure}[hbt!]
\centering
\includegraphics[width=3.25in]{single_dish_confusion_matrix.png}
\caption{Confusion Matrix for 10 classes}
\label{confusion_mat}
\end{figure}
\begin{figure*}[!hbt]
\centering
\includegraphics[width=15cm]{result.png}
\caption{Predictions from trained YOLOv4 model}
\label{fig:fig_results}
\end{figure*}
\begin{figure*}[!hbt]
\centering
\includegraphics[width=14cm]{fig3.png}
\caption{PR Curves for 10 classes}
\label{PR_curve}
\end{figure*}
\begin{table}[!hbt]
\centering
\caption{Average Precision for Each Class}
\label{tab:perf_stat}
\begin{tabular}{|l|c|} \hline
\textbf{Class in \textit{IndianFood10}} & \textbf{Average Precision (AP) in \%} \\ \hline
Aloo Paratha & 78.3 \\
Biryani & 93.0 \\
Chapati & 79.4 \\
Chicken Tikka & 85.1 \\
Khichdi & 91.0 \\
Omelette & 91.9 \\
Palak Paneer & 94.3 \\
Plain rice & 89.7 \\
Poha & 91.5 \\
Rasgulla & 94.9 \\ \hline
\end{tabular}
\end{table}
\begin{table}[!hbt]
\centering
\caption{Mean Average Precision at each iteration}
\label{tab:perf_it}
\begin{tabular}{|c|c|c|} \hline
\textbf{Iterations} & \textbf{Mean Average Precision (in \%)} & \textbf{F1-Score} \\ \hline
7000 & 90.49 & 0.89\\
8000 & 91.57 & 0.90\\
9000 & 90.75 & 0.89\\
\textbf{10000} & \textbf{91.76} & \textbf{0.90}\\
11000 & 90.99 & 0.90\\
12000 & 90.80 & 0.90\\
13000 & 91.03 & 0.90\\
14000 & 90.41 & 0.90\\
15000 & 90.26 & 0.90\\
16000 & 90.28 & 0.90\\
17000 & 90.83 & 0.91\\
18000 & 89.89 & 0.90\\
19000 & 90.16 & 0.91\\
20000 & 90.83 & 0.91\\
\hline
\end{tabular}
\end{table}
\begin{table}[!hbt]
\centering
\caption{Summary of mAP Scores}
\label{tab:comp_Score}
\begin{tabular}{|l|c|} \hline
\textbf{Model} & \textbf{mAP Score} \\ \hline
BTBU-Food-60 \cite{cai2019btbufood} & 67.7\% \\
SSD\_InceptionV2 \cite{ramesh2020real} & 76.9\% \\
\textbf{YOLOv4 on \textit{IndianFood10}} & \textbf{91.8\%} \\ \hline
\end{tabular}
\end{table}
\section{CONCLUSION}
Our literature survey revealed that there is a lack of work on object detection in the context of Indian cuisine. We have been able to curate a large dataset (\textit{IndianFood10}) with more than 11,000 annotated images for 10 popular Indian dishes as classes. We achieved a mAP score of 91.8\% for object detection in Indian cuisine using the YOLOv4 architecture; Table~\ref{tab:comp_Score} summarises the mAP scores of previous research works in the field of object detection in food items. This work has implications for calorie estimation from food images and is thus expected to have a larger impact on public health.
\section{FUTURE WORK}
With the rise of computer vision, object detection continues to be an important problem, especially in the traditional Indian food context, because currently there is no public dataset available. To scale up our proposed work and dataset, we have also created a dataset with 20 Indian dishes as classes, \textit{IndianFood20} (an extension of \textit{IndianFood10}), which contains 17,817 images (see Table~\ref{tab:food20}). Our work on \textit{IndianFood20} is preliminary, but we would like to share the datasets \textit{IndianFood10} and \textit{IndianFood20} with the research community so that work in the area of object detection in the context of Indian cuisine can be accelerated. Future research opportunities in the Indian food context include:
\begin{itemize}
\item Deploying a mobile application for detecting food items and providing their recipes, ingredients and nutrition facts \cite{Sun2019FoodTrackerAR}
\item Estimation of total calories present in a meal by considering the volume of each food item present in it
\end{itemize}
\begin{table}[!hbt]
\centering
\caption{Food classes in \textit{IndianFood20}}
\label{tab:food20}
\begin{tabular}{|l|l|} \hline
\multicolumn{2}{|c|}{\textbf{List of Food Items}} \\ \hline
Indian Bread & Dosa \\
Rasgulla & Rajma \\
Biryani & Poori \\
Uttapam & Chole \\
Paneer & Dal \\
Poha & Sambhar \\
Khichdi & Papad \\
Omelette & Gulab Jamun \\
Plain Rice & Idli \\
Dal Makhni & Vada \\ \hline
\end{tabular}
\end{table}
\section*{ACKNOWLEDGEMENT}
G.B. thanks Indraprastha Institute of Information Technology (IIIT Delhi) for the computational support. G.B. thanks Technology Innovation Hub (TiH) Anubhuti for the research grant. D.P, P.P, G.T, V.A, S.D are summer interns and M.G. is a research scholar in Dr. Bagler's lab at IIIT Delhi and thankful to IIIT Delhi for the support. M.G. thanks IIIT Delhi for the fellowship.
{\small
\bibliographystyle{IEEEtran}
|
1,108,101,566,661 | arxiv | \section{Introduction}
One of the most intriguing systems in string theory
is that of $r$ overlapping NS5 branes
\cite{Strominger:1996ac}
in the decoupling limit.
The world-volume theory of an isolated NS5 brane in type IIA
is given by an $N=(0,2)$ supersymmetric theory in six dimensions, and
it therefore involves self-dual antisymmetric tensor fields
\cite{Romans:1986er, Riccioni:1997np}.
For several NS5-branes the dominant low energy degrees of freedom
are tensionless strings that arise in M-theory from M2-branes
suspended between M5-branes when the M5-branes' world-volumes
coincide. These theories appear also in type IIB compactified
on $K3$ \cite{Witten:1995zh,Witten:1996em}, in which formulation
it becomes clear that the appearing tensionless strings
are not fundamental strings and that they have $ADE$
gauge symmetry. The heterotic description
involves a small $E_8$ instanton
\cite{Ganor:1996mu} at the core of which gauge symmetry is enhanced.
Hence, it is natural to look for a way to describe these systems
in terms of a local quantum field theory that involves a
non-Abelian self-dual two-form in six dimensions.
However, such a theory does not exist \cite{Bekaert:1999dp} (cf.~also \cite{Witten:1996hc}),
and one needs an indirect or nonperturbative description.
Wilson surfaces and loop equations
in these systems have been studied in
\cite{Ganor:1997nf}. There is also a M(atrix) theory construction
\cite{Seiberg:1997zk}.
Another interesting system involves the $N=(1,1)$ `new' gauge theories of Witten
in six dimensions \cite{Witten:1998kz}.
In type IIB they appear on a ${\mathbb C}^2/{\mathbb Z}_r$
orbifold when nonperturbative closed string states appear
from open strings that start and end on points in ${\mathbb C}^2$
that are identified by the ${\mathbb Z}_r$
action. At low energies the dynamics should, however, reduce to
the six-dimensional infrared-free $N=(1,1)$ Yang--Mills theory.
The bulk origin of the anti-symmetric tensor field is the
Neveu--Schwarz two-form $B$, the gauge field of fundamental
string charge. There are many interesting phenomena connected
to it, such as the appearance of noncommutative Yang--Mills
theories in a constant background condensate
\cite{Douglas:1998fm,Schomerus:1999ug,Seiberg:1999vs}. If the curvature $[H]$
of the $B$ field is a torsion class in integral cohomology,
D-brane charges can be classified \cite{Minasian:1997mm}
in a twisted version of K-theory \cite{Witten:1998cd}, and
the Chan--Paton gauge fields appear as connections on a module of
a noncommutative algebra \cite{Kapustin:1999di}.
If the curvature is not a torsion class
then classification in terms of K-theory fails.
For general
curvature $H$, and thus bound states that involve nontrivial
NS five-brane charge, the classification problem is still open.
The proper mathematical framework for treating these
three-form fluxes seems to be that of gerbes \cite{Giraud}.
In all generality, gerbes are sheaves of groupoids
\cite{Brylinski,Finlay}, but they can be understood
more concretely as collections of local principal bundles
and their isomorphisms. As such they also include the modules
of noncommutative spaces \cite{Connes} that appear in noncommutative
Yang--Mills. Abelian gerbes allow for a geometric
interpretation in terms of local line bundles
\cite{Chatterjee,Hitchin}, and of hypercohomology
\cite{Brylinski,Alvarez:1985es,Gawedzki:1987ak}.
The role of hypercohomology
is to provide a differential geometric framework for
studying gerbes. Physically this corresponds to
finding the correct local degrees of freedom for field theory.
However, this has been done until now only for Abelian gerbes.
Non-Abelian gerbes do exist; in fact, the concept was
originally introduced to formulate non-Abelian
cohomology. However, the description uses holonomies and isomorphisms,
which physically correspond to Wilson loop, or surface, observables.
Recently it was shown \cite{Kalkkinen:1999uz} that the local line
bundles of Hitchin \cite{Hitchin}
indeed appear in effective type IIA solutions in massive
supergravity, when NS5 branes and D6-branes are involved.
The same considerations also gave reason to suspect
that gerbes should enter whenever NS charges are involved, including
the world-sheet theories on D-branes.
However, these theories involve non-Abelian bundles on the world-volumes, and
the Abelian gerbes are clearly not suited for describing them.
In this article our aim is to
find a straightforward non-Abelian
generalization of the Abelian hypercohomology underlying a general
gerbe.
In order to do this
we shall consider the quantum theory of
strings on a branched space-time or, more concretely,
multivalued $B$-fields.
These models are adequate for describing strings both on an
orbifold ${\mathbb C}^2/{\mathbb Z}_r$ in type IIB description, and on
$r$ coinciding NS5 branes, when each one of them
is carrying an independent $B$-field.
This serves as a rough bosonic model for strings in both six dimensional
$N=(0,2)$ and $N=(1,1)$ systems as we are not imposing
self-duality constraints on the two-form. In fact, what follows does not
depend on the dimensionality of the brane, either.
These arguments are
sufficient for establishing
what fields and which symmetries to expect in a differential
geometry description of a gerbe.
A simple non-Abelian
generalization of the cocycle conditions and symmetry transformations of
an Abelian gerbe then yields a strongly constrained system which
fits well in the physical picture; it involves a collection of essentially Abelian
RR fields, a Chan--Paton gauge field, and an NS two-form, which will turn out to be
non-Abelian, though in a somewhat
restricted sense. The result is sufficiently Abelian to allow
us to use some field theory intuition in these systems, but displays
the underlying non-Abelian structure clearly enough, so that we see
why and when we would have to move from a local
field theory description to a nonperturbative one, and to Wilson surfaces.
The plan of the paper is as follows: In the next section we shall consider
string sigma-models and non-Abelian currents on a branched space-time.
These results motivate in Section 3 a non-Abelian generalization of
Abelian hypercohomology. In Section 4 we show
how the mathematical framework fits together with what we know about
superstring solitons, and their effective low energy theories.
\section{World-sheet actions and currents}
We shall start by considering the bosonic string sigma-model
in a framework that naturally accommodates what we know about
open string dynamics in the presence of several D-branes.
Consider D-branes $Q_i$, where $i=1,\ldots,r$. Each of
these branes carries a Chan--Paton vector potential $A_{M,i}$, where
$M,N=1,\dots,D$ are space-time indices.
In the sigma-model this condensate field is integrated over
the components of the boundary of the world-sheet $\Sigma$ on
different D-branes, namely $\partial \Sigma_i$.
It is natural to think of the boundary as a vector in $H^1(M) \otimes
{{\bf h}}$ where ${{\bf h}}$ is a vector space of dimension $r$ with basis
$\{{\bf e}^i \}$.
Similarly, $A$ should be thought of as an element of $\Omega^1(M,{{\bf h}}^*)$.
One particular realization of this is to take ${{\bf h}}$ to be
the Cartan subalgebra of some Lie algebra ${\bf g}$ of rank $r$.
Let us in particular consider the configuration where all
of the D-branes $Q_i$ lie on top of each other, and the boundaries
$\partial\Sigma_i$ coincide with $\partial\hat\Sigma$. Then we can write
\begin{eqnarray}
\int_{\partial\Sigma} A = \sum_i \int_{\partial\Sigma_i} A_{M,i}~
\partial_{\varphi}X^{M}_i ~{\mathrm d} \tau = \int_{\partial\hat\Sigma} ~(A_{M},~
\partial_{\varphi}X^{M}) ~{\mathrm d} \tau~,
\end{eqnarray}
where $\varphi$ denotes $\tau$ for parallel and $\sigma$
for transverse coordinates to the D-brane,
and $(~,~)$ is the Killing form of the Lie algebra.
In this way the index of the boundary component formally became
an index of the coordinate matrix $X^M$, which at this stage is diagonal.
Let us next turn to the B-field, and the full world-sheet.
Consider, for simplicity, a world-sheet that is composed of disjoint
{cylinders} $\Sigma_{ij}$ that connect
the boundaries
$\partial\Sigma_i$ and $\partial\Sigma_j$. As we already attached the vectors
${\bf e}^i$ and ${\bf e}^j$ to them
it is natural to attach their difference ${\bf e}^i - {\bf e}^j = \underline\alpha$
to the interpolating world-sheet $\Sigma_{\underline\alpha}$,
hence $\partial \Sigma_{\underline\alpha} =
\partial\Sigma_i - \partial\Sigma_j = {\bf e}^i - {\bf e}^j = \underline\alpha$.
In particular, in the case that the Lie algebra ${\bf g}$ is just $A_r$,
the vector $\underline\alpha$ is one of its roots.
In string theory there is only one bulk $B$-field. However,
it is natural to associate different pull-backs of this field
$B_{MN,\underline\alpha}$ for each component of the world-sheet
$\Sigma_{\underline\alpha}$.
For disjoint cylinders we can write without any loss of generality
\begin{eqnarray}
B_{MN,\underline\alpha} = (B_{MN}, ~\underline\alpha^{\vee}) ~,
\label{Broots}
\end{eqnarray}
where $\underline\alpha^{\vee}$ is the coroot\footnote{We follow the
conventions of \cite{Fuchs}.}.
The pertinent world-sheet integrals can now be written in the form
\begin{eqnarray}
\int_{\Sigma} B = \sum_{\underline\alpha}
\int_{\Sigma_{\underline\alpha}} (B, ~\underline\alpha^{\vee})~.
\end{eqnarray}
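To make the pairing concrete in the simplest case -- a sketch, in the
normalization where $({\bf e}^i, {\bf e}^j) = \delta^{ij}$ and hence
$\underline\alpha^{\vee} = \underline\alpha$ for the simply laced algebras
considered here -- take two branes and the single cylinder between them,
with $\underline\alpha = {\bf e}^1 - {\bf e}^2$ and
$B_{MN,i} = (B_{MN}, {\bf e}^i)$. Then
\begin{eqnarray}
B_{MN,\underline\alpha} = (B_{MN},~{\bf e}^1 - {\bf e}^2) = B_{MN,1} - B_{MN,2}~,
\end{eqnarray}
consistent with the orientations $\partial\Sigma_{\underline\alpha} =
\partial\Sigma_1 - \partial\Sigma_2$ attached above.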
\subsection{Non-Abelian currents}
\label{current}
Until now the Lie algebra structure has only been used for
keeping track of disconnected components of the world-sheet.
Consider now a configuration where the cylinders fuse
into one geometrical object. Let us take this limit
in such a way that the components of the
fields $B_{MN}$ and $A_M$ stay independent on each component cylinder;
this means that these fields effectively live on different branches of
space-time\footnote{This approach can be naturally formulated in Connes' noncommutative
geometry \cite{Connes}. It has been studied in open string context for instance in
\cite{Kalkkinen:1997ci}.}. For the
vector fields this is the standard limit of coinciding D-branes.
As to what concerns the $B_{MN}$ field, this kind of a situation could arise
for instance at an orbifold such as ${\mathbb C}^2/{\mathbb Z}_r$ where
the different components come
from different fundamental domains of the ${\mathbb Z}_r$ action,
though we do not have an explicit CFT construction for this.
Suppose further that we have two cylinders $\Sigma_{\underline\alpha}$
and $\Sigma_{\underline\beta}$ that overlap on one of their boundaries.
In the limit we are taking both of the cylinders are forced to occupy
the same part of space-time, and as they are connected, one is simply folded
on top of the other. However, as far as the $B$ field is concerned
we could have equally well
started with the combined cylinder $\Sigma_{\underline\alpha + \underline\beta}$.
This means that even in the limit where one allows the $B$ field to be independent
on different world-sheets we have to impose a consistency condition
\begin{eqnarray}
B_{\underline\alpha} + B_{\underline\beta} = B_{\underline\alpha + \underline\beta}~.
\end{eqnarray}
The ansatz (\ref{Broots}) solves this condition, as anticipated.
The result of this analysis is hence that the string world-sheets
on different branches of space-time and the $r$ different boundaries of the
cylinders can both be
associated to the same Cartan subalgebra of a Lie algebra ${\bf g} = A_r$,
and the connecting cylinders
to the roots of the same algebra.
This argument generalizes to world-sheets with an arbitrary number of boundary components.
For instance, given the boundary
$\partial\Sigma = {\bf e}^1 + {\bf e}^2 - {\bf e}^3 = {\bf v}$, the correct
$B$-field is proportional to $(B,{\bf v})$. These world-sheets belong
naturally to some representation of ${\bf g}$, with Dynkin labels ${\bf v}$.
Note also that we are not restricted to the unitary series,
but modding by a suitable symmetry we get all of the
simply laced Lie algebras in the $ADE$ series.
In order to learn how to describe these models in effective field theory we
need a theory that can accommodate non-Abelian $B$ fields.
This question will occupy us for the rest of the paper.
Define\footnote{$H^i$ are Cartan generators,
$\underline\alpha$ the positive roots, and $E^{\underline\alpha}$ their
generators \cite{Fuchs}.}
$A_M = A_{M,i}~H^i$ and
$B_{MN} = B_{MN,i}~H^i$.
It is also useful to introduce the non-Abelian line element
\begin{eqnarray}
{\mathrm d} X^M &=& {\mathrm d} \tau \partial_{\varphi} X^M_i H^i + {\mathrm d} z
\partial X^M_{\underline\alpha} E^{\underline\alpha} \label{line-ele}
+ {\mathrm d} \bar{z} \bar{\partial} X^M_{-\underline\alpha} E^{-\underline\alpha}~.
\end{eqnarray}
Then the full non-Abelian world-sheet action can be succinctly
summarized in
\begin{eqnarray}
S &=& {\mathrm{tr}} \int_{\hat\Sigma} (G_{MN} + B_{MN})~ {\mathrm d} X^M {\mathrm d} X^N + {\mathrm{tr}}
\int_{\partial\hat\Sigma} A_M ~{\mathrm d} X^M \label{ncgmodel}~.
\end{eqnarray}
We stress that the Lie algebra indices $i$ for a boundary component and
$\underline\alpha$ for a connecting world-sheet
arose geometrically when one evaluated
coordinate functions on different components of the world-sheet.
From open string interactions between $r$
coinciding D-branes we know \cite{Polchinski:1995mt,Witten:1996im}
that the gauge fields $A_M$
can be extended to the full
Lie algebra ${\bf g}$, and that the scalars that appear
as transversal coordinates take values in the same space.
In (\ref{ncgmodel}) this means that we should allow $A_M$ and hence $\partial_{\varphi} X^M$
to take arbitrary values in the full Lie algebra, and interpret
\begin{eqnarray}
\int_{\partial\hat\Sigma} \delta(x - x(\tau)) ~{\mathrm d} X^M
\end{eqnarray}
as the non-Abelian current\footnote{Note that $X^M$
denotes a sigma-model coordinate in all of the space-time
directions; the physical transverse coordinates of the brane
are included in $A_M$.} carried by a particle moving along $\partial\hat\Sigma$.
Extending this procedure to the bulk fields (on the world-sheet)
$B_{MN}$ and $X^M(z,\bar z)$ is tempting. This would formally
mean that we introduce new degrees of freedom to the theory,
namely the non-diagonal components of the $B$-field. These components
couple to coordinate functions $X^M_{\underline\alpha}$ on {\em different}
world-sheets and would seem to correspond to strings propagating from one
world-sheet -- or sheet of space-time -- to another.
\subsection{Effective actions and symmetries}
The action (\ref{ncgmodel}) describes the
coupling of a macroscopic string $\hat\Sigma$
to the string condensates. From
the effective field theory point of view it hence appears
as a non-Abelian current. In order to address the dynamics of
the full background fields $A$ and $B$ one produces the generating functional
of their interactions from the path integral evaluated in
the presence of this current.
In the absence of the $B$ field the functional is just
the Wilson line \cite{Tseytlin:1988ww}
\begin{eqnarray}
{\mathrm e}^{-F[A_c, B_c]} &=& \Big\langle ~{\mathrm{tr}} ~{\rm Pexp} - i
\int_{\partial\hat\Sigma} A_c \Big\rangle~. \label{wilson}
\end{eqnarray}
It can be evaluated assuming that one is allowed
to neglect derivative terms and
commutators of the field strength, and the result is Tseytlin's
generalization of the DBI action \cite{Tseytlin:1997cs}.
This argument also tells us how to study {non-Abelian}
$B$-fields in string theory. We like to simply insert the
current (\ref{ncgmodel}), and ask what the resulting
generating function tells us about the dynamics of the field.
As the microscopic description for the non-Abelian $B$ field is
lacking, we have to rely on indirect arguments, such as those that
make use of the Wilson line above, or general underlying
structures associated to the field, gerbes.
\label{gaugesymm}
In order to find out what gauge symmetries we have,
let us in particular first assume $B={\mathrm d} C$, where $C$ is diagonal,
but make no restrictions on $A$.
Then we can eliminate the $B$ field, and confirm that
the path integral (\ref{wilson}) is invariant under the transformations
$A+C \longrightarrow k^{-1}(A + C + {\mathrm d})k$.
The coordinate system in which $B$ is diagonal in
isospin indices tells us what the geometrical
direction in the isospin space should be -- much like the coordinate
system in which the gauge field of a D-particle
is diagonal defines what we mean by asymptotic space-time.
However, when $A$ is generally non-Abelian, there should
not be a particularly preferred choice of this diagonal
direction because we can always change the
basis in (\ref{ncgmodel}). Hence, we may have an
independent freedom to change $B$ by an isospin rotation.
In all, we have found three,
as it seems, independent symmetries of the theory
\begin{itemize}
\item[{\bf (G1)}]
The generalized NS symmetry for a local one-form $\eta \in \Omega^1(Q,{\bf h})$
where ${\bf h} \subset {\bf g}$ is the Cartan subalgebra
\begin{eqnarray}
A & \longrightarrow & A - \eta \\
B & \longrightarrow & B + {\mathrm d} \eta ~.
\end{eqnarray}
This symmetry relies heavily on the fact that $\eta$ is assumed diagonal
with respect to the basis (\ref{line-ele}).
\item[{\bf (G2)}]
The ordinary non-Abelian gauge symmetry $k:Q \longrightarrow G$
\begin{eqnarray}
A & \longrightarrow & k^{-1} (A + {\mathrm d}) k \\
B & \longrightarrow & B ~.
\end{eqnarray}
\item[{\bf (G3)}]
Provisionally, we include also the
choice of the physical direction in isospin space
$h:Q \longrightarrow G$
\begin{eqnarray}
A & \longrightarrow & A \\
B & \longrightarrow & h^{-1} B h~.
\end{eqnarray}
Note, however, that the non-Abelian DBI action is not invariant under this
symmetry unless $k=h$.
\end{itemize}
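As a quick consistency check -- a sketch restricted to the Abelian limit in
which $A$, too, is valued in the Cartan subalgebra, so that all wedge
products of $A$ and $\eta$ vanish -- the combination $B + F(A)$ is invariant
under {(G1)}:
\begin{eqnarray}
B + {\mathrm d}\eta + F(A - \eta) = B + {\mathrm d}\eta + F(A) - {\mathrm d}\eta
= B + F(A)~.
\end{eqnarray}
It is this combination that resurfaces below as a gauge invariant field
strength.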
Next we shall try to combine these local symmetries and fields
in a global framework. For this we shall, however, have to
find out how to describe a non-Abelian gerbe in
terms of differential geometry.
\section{Hypercohomology}
Let us consider a space-time manifold $X$ and a fixed\footnote{Neither the patches in the cover
nor their intersections need to be contractible. {\v{C}ech}-cohomology
does not depend on the cover, if the cover is fine enough. Here we shall not
dwell on the dependence of the construction on the choice of cover. See, however, end of Section
\ref{rajotus}.}
open cover $\{ {\cal U}_{\alpha}\}$.
The isomorphism class of
an Abelian one-gerbe with connective structure and curving
is given by a two-cocycle in the hypercohomology of the complex $C^{\infty}
\longrightarrow \Omega^1 \longrightarrow \Omega^2$
\cite{Alvarez:1985es,Gawedzki:1987ak}.
A gerbe can hence be thought of as a two-cocycle in the
hypercohomology ${\cal H}^2$ of
{\v{C}ech}-cocycles with de~Rham forms on the coefficient sheaves
\cite{Brylinski}.
A representative of this class is then a closed
two-cochain\footnote{The index in square brackets denotes the de Rham form-degree, and
Greek alphabet is used to label the intersections of local patches
where the object is defined. Thus, for instance, $A^{[1]}_{\alpha\beta}$
is a one-form defined on every twofold intersection of local charts,
namely ${\cal U}_{\alpha} \cap {\cal U}_{\beta} = {\cal U}_{\alpha\beta}$.}
\begin{eqnarray}
\underline{w} &=& \Big[ ~ g^{[0]}_{\alpha\beta\gamma},
~ A^{[1]}_{\alpha\beta}, ~B^{[2]}_\alpha ~ \Big ] \label{gerbe}~,
\end{eqnarray}
and any two representatives of the class are connected by a shift
with an exact term, which will just turn out to be a
gauge transformation. We shall give the cocycle conditions and the gauge
transformation rules below.
The {\v{C}ech}-coboundary operator $\delta$ acts by adding an index to
$h_{\alpha\beta}$, such that for instance
\begin{eqnarray}
\delta h_{\alpha\beta\gamma} = h_{\beta\gamma} ~h^{-1}_{\alpha\gamma}
~h_{\alpha\beta}~.
\end{eqnarray}
The {\v{C}ech}-indices always remain antisymmetric, and $\delta^2 = 0$.
The zero-forms are multiplicative and the higher de Rham forms additive.
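For illustration, the nilpotency $\delta^2 = 0$ can be checked directly on an
Abelian zero-cochain $h_{\alpha}$, for which the analogous convention gives
$(\delta h)_{\alpha\beta} = h_{\beta}\, h^{-1}_{\alpha}$ and hence
\begin{eqnarray}
(\delta^2 h)_{\alpha\beta\gamma} = h_{\gamma} h^{-1}_{\beta}
~\big(h_{\gamma} h^{-1}_{\alpha}\big)^{-1}
~h_{\beta} h^{-1}_{\alpha} = 1~,
\end{eqnarray}
where the cancellation uses the Abelian nature of the coefficients; for the
additive higher forms the corresponding alternating sum vanishes term by term.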
The coboundary operator of the complex introduced above
is, when acting on an $i$-form,
\begin{eqnarray}
{\mathcal D} = {\mathrm d}_{{\rm de Rham}} + (-1)^i \delta_{\mbox{\scriptsize {\v{C}ech}}} ~.
\end{eqnarray}
The statement that $\underline{w}$ be closed under this coboundary operator
${\mathcal D} \underline{w} = 0$ gives the cocycle conditions
\begin{eqnarray}
g_{\beta\gamma\delta} ~g^{-1}_{\alpha\gamma\delta}
~g_{\alpha\beta\delta} ~g^{-1}_{\alpha\beta\gamma} = 1
& \mbox{ on } & {\cal U}_{\alpha\beta\gamma\delta} \label{co1} ~,\\
g^{-1}_{\alpha\beta\gamma} ~{\mathrm d} g_{\alpha\beta\gamma} -
A_{\beta\gamma} + A_{\alpha\gamma} - A_{\alpha\beta} = 0
& \mbox{ on } &
{\cal U}_{\alpha\beta\gamma} \label{co2} ~, \\
{\mathrm d} A_{\alpha\beta}+ B_{\beta} - B_{\alpha}
= 0 & \mbox{ on } &
{\cal U}_{\alpha\beta}\label{co3} ~.
\end{eqnarray}
Gauge symmetry arises from shifting the gerbe $\underline{w}$ by
an exact term
\begin{eqnarray}
\underline{w}' &=& \underline{w} + {\mathcal D} \underline{\lambda}~,
\end{eqnarray}
where $\underline{\lambda}$ is a cochain in the lower complex
$C^{\infty} \longrightarrow \Omega^1$
\begin{eqnarray}
\Big[ ~ h^{[0]}_{\alpha\beta}, ~ \eta^{[1]}_{\alpha}~ \Big]~.
\end{eqnarray}
This makes $g_{\alpha\beta\gamma}$ into an ordinary
{\v{C}ech}-cocycle, $A_{\alpha\beta}$ into a connection of a
line bundle defined on ${\cal U}_{\alpha\beta}$, and
$H = {\mathrm d} B_{\alpha}$ into a globally
defined three-form. Concretely,
\begin{eqnarray}
g_{\alpha\beta\gamma}' &=& g_{\alpha\beta\gamma} h_{\beta\gamma} h^{-1}_{\alpha\gamma}
h_{\alpha\beta} \label{ga1} \\
A_{\alpha\beta}' &=& A_{\alpha\beta} + h^{-1}_{\alpha\beta} {\mathrm d} h_{\alpha\beta}
\label{ga2} - \eta_{\beta} + \eta_{\alpha} \\
B_{\alpha}' &=& B_{\alpha} + {\mathrm d} \eta_{\alpha} \label{ga3}~.
\end{eqnarray}
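In particular, one verifies directly that $H = {\mathrm d} B_{\alpha}$ is both
gauge invariant and globally defined:
\begin{eqnarray}
{\mathrm d} B'_{\alpha} = {\mathrm d} B_{\alpha} + {\mathrm d}^2 \eta_{\alpha}
= {\mathrm d} B_{\alpha}~, \qquad
{\mathrm d} B_{\beta} - {\mathrm d} B_{\alpha} = - {\mathrm d}^2 A_{\alpha\beta} = 0~,
\end{eqnarray}
where the second equality follows from the cocycle condition (\ref{co3}) on
${\cal U}_{\alpha\beta}$.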
Similarly, $\underline{\lambda}$ becomes a one-cocycle, if one imposes the condition
${\mathcal D} \underline{\lambda}=0$. It is easy to see that $\underline{\lambda}$
is then a principal
bundle with connection $\eta$ and transition functions $h$.
\subsection{Non-Abelian gerbes}
\label{fullgen}
A straightforward attempt to simply add a Lie algebra
(or group) index to $\underline{w}$
fails as it seems to be rather difficult
to define a non-Abelian generalization of the
cocycle condition (\ref{co1}). Indeed, there is
no non-Abelian {\v{C}ech}-theory, and hence
a non-Abelian hypercohomology formulation is absent as well.
Gerbes were originally invented for studying
non-Abelian cohomology \cite{Giraud}, and they have appeared e.g.~in \cite{Finlay}
as Dixmier--Douady sheaves of groupoids \cite{Brylinski,Finlay}.
Abelian gerbes carry a natural differential geometry,
but this structure has not been extended to the non-Abelian case
\cite{Brylinski,Hitchin}. For physics the differential complexes
are rather essential as they correspond to physical fields.
The formulation in terms of
grupoids corresponds rather to a QFT formulation in terms
of Wilson line and other holonomy operators
\cite{Freed:1999vc}.
The idea of the present construction comes from D-particle physics.
There a number of classical particles moving in the target space are
described by a diagonal matrix. In physical processes this coordinate
matrix has to be asymptotically diagonal -- in some basis -- but the
dynamics of the theory allow processes where also off-diagonal elements
of the coordinate matrices are excited. Then the notion of local space-time
vanishes and we are in the realm of noncommutative geometry, or
stringy geometry.
In particular, one could consider a process where the in-coming and out-going
particles live in different Cartan subalgebras of the pertinent group;
then the
definition of space-time seems to have changed under the process.
In what follows we shall study a non-Abelian generalization of the Abelian
hypercohomology relevant for gerbes, which is in many respects still
essentially Abelian. One circumvents the problem of defining non-Abelian
{\v{C}ech}-cohomology
in formulae (\ref{co1}) and (\ref{ga1}) by assuming that the system is actually
Abelian on threefold intersections. One can still
allow non-Abelian behaviour outside
these patches, which are by assumption isolated on the manifold.
More concretely we choose for each threefold
intersection ${\cal U}_{\alpha\beta\gamma}$ a
fixed torus $T_{\alpha\beta\gamma}$ inside a Lie group $G$, and
assume that the {\v{C}ech}{} two-cocycles $g_{\alpha\beta\gamma}$
as well as the restrictions of the gauge transformations $h_{\alpha\beta}$
on any ${\cal U}_{\alpha\beta\gamma}$
take values in the same fixed torus.
Outside triple intersections we need not constrain these
transformations. However, we shall have to impose further
assumptions on other fields.
The obvious generalization of the one-form is now to make it
a connection on a principal
$G$-bundle on ${\cal U}_{\alpha\beta}$.
The cocycle condition (\ref{co2}) can then be
accepted as is; for the fixed torus part the condition is
then just a collection of Abelian equations, for the rest of
the Lie algebra it reduces to a cocycle condition that does not involve
$g_{\alpha\beta\gamma}$. This restriction on the form of the one-form
$A_{\alpha\beta}$ only constrains
the field on the triple intersections.
Under the non-Abelian gauge transformations we have to impose
further
\begin{eqnarray}
\delta({\mathrm {Ad}}(h_{\alpha\beta}) A_{\alpha\beta}) = 0~. \label{A-3}
\end{eqnarray}
This condition can be solved by assuming that on
three-fold intersections the one-forms take values in the Lie algebra of the fixed torus
${\bf t}_{\alpha\beta\gamma} = {\mathrm {Lie}}(T_{\alpha\beta\gamma})$.
Finally, the two-form field should be made Lie algebra valued as well.
We are now ready to write down an ansatz for
the non-Abelian generalization of hypercohomology.
Our strategy will be to try to find a non-Abelian generalization
for the two-cochain $\underline{w}$, together with the corresponding
new cocycle conditions and transformation rules. In addition to this
two-cocycle it will turn out that it is actually necessary to also include
a fixed one-cochain $\underline{v}$ together with
its transformation rules. This
one-cochain need not be closed.
In the following, we consider then a fixed two-cocycle and
a fixed one-cochain
\begin{eqnarray}
\underline{w} &=& \Big[ ~ g^{[0]}_{\alpha\beta\gamma},
~ A^{[1]}_{\alpha\beta}, ~B^{[2]}_\alpha ~ \Big ] \\
\underline{v} &=& \Big[ ~ \phi^{[0]}_{\alpha\beta},
~ \chi^{[1]}_{\alpha} ~ \Big ]~,
\end{eqnarray}
and the action of the two cochains
\begin{eqnarray}
\underline{\lambda} &=& \Big[ ~ h^{[0]}_{\alpha\beta},
~ \eta^{[1]}_{\alpha} ~ \Big ] \\
\underline{\kappa} &=& \Big[ ~ k^{[0]}_{\alpha} ~ \Big ]
\end{eqnarray}
on them. Here all of the one and two-forms take values in the Lie algebra
${\bf g}$, and the functions in the corresponding Lie group.
As it will turn out, $\underline{v}$ describes a local
principal bundle on each coordinate patch, and isomorphisms
between bundles on different though intersecting patches.
If the bundle were global, the cocycle conditions
\begin{eqnarray}
\phi_{\beta\gamma} \phi_{\gamma\alpha} \phi_{\alpha\beta} &=& 1\\
\phi^{-1}_{\alpha\beta} (\chi_\alpha + {\mathrm d}) \phi_{\alpha\beta} - \chi_\beta &=& 0
\end{eqnarray}
would be satisfied everywhere. In the Abelian case the cocycle conditions can be succinctly
stated by saying that $\underline{v}$ is closed, ${\mathcal D}\underline{v}=0$.
The local bundles $\underline{v}$ could fail to
combine into a global bundle if the class $g_{\alpha\beta\gamma} = \phi_{\beta\gamma}
\phi_{\gamma\alpha} \phi_{\alpha\beta}$ were not trivial.
With this identification $\underline{w}$ actually measures to what extent
the structure $\underline{v}$ is local; it is hence
an obstruction for making $\underline{v}$ a global bundle.
In this nested structure $\underline{\kappa}$ acts as gauge transformations
on $\underline{\lambda}$ and $\underline{v}$, and both $\underline{\kappa}$ and
$\underline{\lambda}$ act on $\underline{w}$.
In particular, the action of $\underline{\lambda}$ on $\underline{w}$ is
\begin{eqnarray}
g_{\alpha\beta\gamma}' &=& g_{\alpha\beta\gamma} h_{\beta\gamma} h^{-1}_{\alpha\gamma}
h_{\alpha\beta} ~, \\
A_{\alpha\beta}' &=& h^{-1}_{\alpha\beta} (A_{\alpha\beta} - \eta_{\beta} + \eta_{\alpha}
+ {\mathrm d}) h_{\alpha\beta} ~, \\
B_{\alpha}' &=& B_{\alpha} + F(\eta_{\alpha})~, \label{B-eta}
\end{eqnarray}
where $F(x) = {\mathrm d} x + x \wedge x$.
Having $\underline{v}$ at our disposal we could have
defined $B_{\alpha}' = B_{\alpha} + D(\chi_{\alpha})\eta$
instead of (\ref{B-eta}), but this would lead to
wrong transformation properties in (\ref{Hdef}) later on.
The gauge transformations
$\underline{\kappa}$ act according to
\begin{eqnarray}
h_{\alpha\beta}' &=& k^{-1}_\alpha h_{\alpha\beta} k_\beta \\
\eta'_\alpha &=& k^{-1}_\alpha (\eta_\alpha + {\mathrm d}) k_\alpha~. \label{ggaa1}
\end{eqnarray}
on the local principal bundles. The action on $\underline{w}$ is both through
\begin{eqnarray}
B_{\alpha}' &=& k_\alpha^{-1} B_{\alpha} k_\alpha~, \label{B-k}
\end{eqnarray}
and through the action induced through $\underline{\lambda}$ in (\ref{ggaa1}).
The highest object obeys the cocycle condition
${\mathcal D} \underline{w} = 0$, namely,
\begin{eqnarray}
g_{\beta\gamma\delta} ~g^{-1}_{\alpha\gamma\delta}
~g_{\alpha\beta\delta} ~g^{-1}_{\alpha\beta\gamma} = 1
& \mbox{ on } & {\cal U}_{\alpha\beta\gamma\delta} ~,\\
g^{-1}_{\alpha\beta\gamma} ~{\mathrm d} g_{\alpha\beta\gamma} -
A_{\beta\gamma} + A_{\alpha\gamma} - A_{\alpha\beta} = 0
& \mbox{ on } &
{\cal U}_{\alpha\beta\gamma} ~,\\
F(A_{\alpha\beta}) + B_{\beta} - B_{\alpha}
= 0 & \mbox{ on } &
{\cal U}_{\alpha\beta} \label{B-A} ~.
\end{eqnarray}
We call the collection of fields $\underline{w}$ a non-Abelian one-gerbe
if it satisfies these consistency conditions.
Wherever the ``zero-gerbes'' $\underline{v}$ and $\underline{\lambda}$
obey the cocycle conditions ${\mathcal D} \underline{v} =0$ or ${\mathcal D} \underline{\lambda} =0$
they are actually locally defined principal $G$-bundles.
The former should not be assumed globally closed ${\mathcal D}
\underline{v} \neq 0$,
as otherwise it would indeed extend to a global principal bundle, and the
obstruction to this $\underline{w}$, in which we are actually interested, should vanish.
Also assuming $\underline{\lambda}$ closed
would imply that it acts, at least in the Abelian case, trivially on
$\underline{w}$. Also $\underline{\kappa}$ has a geometrical interpretation:
it is just the set of gauge transformations of $\underline{\lambda}$.
We should take care that the cocycle conditions ${\mathcal D} \underline{w} =0$
are invariant under the action of $\underline{\kappa}$ and
$\underline{\lambda}$. The first two conditions are still trivial,
thanks to the assumption that all relevant fields collapse to tori
on triple intersections, cf.~(\ref{A-3}).
The last cocycle condition, however, gives a
restriction on
$\underline{\lambda}$ and $\underline{\kappa}$. In all generality
\begin{eqnarray}
0 &=& \Big({\mathrm {Ad}}(h') - 1 \Big) F(A) - {\mathrm {Ad}}(h') \Big( D(A) \delta\eta'
- \delta\eta' \wedge \delta\eta' \Big) \nonumber \\
& & + \delta F(\eta) + \delta\Big({\mathrm {Ad}}(k) - 1 \Big) (B + F(\eta)) ~, \label{condi}
\end{eqnarray}
where the prime denotes the action of $\underline{\kappa}$ on
$\underline{\lambda}$. In the next section, we shall simplify
this condition. For this, however, we shall have to
equip our construction with some more structure.
\subsection{Consistency conditions}
There is a way to restrict fields in order to
make contact with the original hypercohomology. The idea is to
restrict the covariant derivatives on the various principal
bundles so that they commute
with the {\v{C}ech}-coboundary operator\footnote{
The formula (\ref{A-3}) is actually already an example of this:
it just states that $\delta {\mathrm {Ad}}(h) = {\mathrm {Ad}}(g) \delta$. But the RHS is trivial
because the fields are on a torus.}.
In what follows we
derive the relevant commutative
diagrams, adding some geometrical assumptions. Note, however,
that the rules of the previous section were not derived, but arose
as a natural extension of the Abelian structure. The analysis below
hence serves as a justification for these definitions.
The cocycle condition (\ref{B-A})
implies that $\delta B$ is a covariantly constant
section of the bundle where $A$ is the connection. This means
that under $\underline{\lambda}$ for $\eta=0$ it transforms according to
$\delta B'_{\alpha\beta} = {\mathrm {Ad}}(k^{-1}_{\alpha}h_{\alpha\beta}k_{\beta})\delta B_{\alpha\beta}$.
On the other hand, we had already fixed $B$'s transformation properties in (\ref{B-k}).
Hence ${\mathrm {Ad}}(h')\delta = \delta {\mathrm {Ad}}(k)$, which means that the diagram
\begin{equation}
\begin{CD}
\Omega^{[2]}_{\alpha} @>{\mathrm {Ad}}(k)>> \Omega^{[2]}_{\alpha} \\
@VV{\delta}V @VV{\delta}V \\
\Omega^{[2]}_{\alpha\beta} @>{\mathrm {Ad}}(h')>> \Omega^{[2]}_{\alpha\beta}
\end{CD} \label{cd0}
\end{equation}
should commute. This assumption relates the gauge transformations
on twofold intersections so that $k_\alpha=k_\beta=h_{\alpha\beta}$.
In the Abelian case we found the globally defined three-form
$H={\mathrm d} B_{\alpha}$ useful for distinguishing different gerbes.
In the present situation we can build a covariant
three-form under $\underline{\kappa}$ from $B_\alpha$ on ${\cal U}_\alpha$
by setting
\begin{eqnarray}
H_{\alpha} = D(\chi_\alpha) B_{\alpha}~. \label{Hdef}
\end{eqnarray}
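Assuming, as will be spelled out in (S1) below, that $\underline{\kappa}$
acts on $\chi_\alpha$ as an ordinary gauge transformation,
$\chi'_\alpha = k^{-1}_\alpha (\chi_\alpha + {\mathrm d}) k_\alpha$, a standard
computation combined with (\ref{B-k}) gives
\begin{eqnarray}
H'_{\alpha} = D(\chi'_\alpha)\big(k^{-1}_\alpha B_{\alpha} k_\alpha\big)
= k^{-1}_\alpha \big( D(\chi_\alpha) B_{\alpha} \big) k_\alpha
= k^{-1}_\alpha H_{\alpha} k_\alpha~,
\end{eqnarray}
so that $H_\alpha$ is indeed covariant under $\underline{\kappa}$.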
The identity
$ D(A) \delta B = \delta D(\chi) B$
would imply on twofold intersections that $H_{\alpha}$
extends to a section
of the local bundle associated to $D(\chi_{\alpha})$. If this local bundle extends into
a global one, $H$ extends to a section of it.
This compatibility constraint is natural in the sense that it is just
the covariantization of the observation that the exterior derivative
and the {\v{C}ech}-coboundary operators commute in the diagram
\begin{equation}
\begin{CD}
\Omega^{[2]}_{\alpha} @>D(\chi)>> \Omega^{[3]}_{\alpha} \\
@VV{\delta}V @VV{\delta}V \\
\Omega^{[2]}_{\alpha\beta} @>D(A)>> \Omega^{[3]}_{\alpha\beta}
\end{CD} \label{cd1}
\end{equation}
However, it is a restriction on $A_{\alpha\beta}$ and $\chi_\alpha$.
The commutativity of the above diagram translates into the condition
\begin{eqnarray}
[A,\delta B] &=& \delta [\chi, B]~.
\end{eqnarray}
We shall give later an explicit example.
If instead of acting on $B$, we consider the action on the one-forms $\eta$, the result
would be that the one-forms commute with each other $[\eta,\eta]=0$ and with $A$, namely
$[\eta, A]=0$.
We shall be led to this result presently, though through another route.
The same argument puts the one-forms $\chi$ on the same torus on double intersections.
Setting $h=k=1$ in (\ref{condi}) yields
\begin{eqnarray}
F(A-\delta \eta) = F(A) - \delta F(\eta)~. \label{cond1}
\end{eqnarray}
We shall impose this formula as a restriction on $A$ and $\delta\eta$.
This leads to $\delta F(\eta) = {\mathrm d}\, \delta\eta$
corresponding to the commutative diagram
\begin{equation}
\begin{CD}
\Omega^{[1]}_{\alpha} @>F(\eta)>> \Omega^{[2]}_{\alpha} \\
@VV{\delta}V @VV{\delta}V \\
\Omega^{[1]}_{\alpha\beta} @>{\mathrm d} >> \Omega^{[2]}_{\alpha\beta}
\end{CD} \label{cd2}
\end{equation}
One can verify that as was expected on general grounds,
$[A,\delta \eta] = [\eta, \delta\eta] = 0$.
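A sketch of this verification: writing $[x,y] \equiv x \wedge y + y \wedge x$
for Lie algebra valued one-forms, a short expansion gives
\begin{eqnarray}
F(A-\delta\eta) - F(A) + \delta F(\eta) &=&
[\delta\eta,\, \eta_{\beta}] - [A,\, \delta\eta]~,
\end{eqnarray}
so that the restriction (\ref{cond1}) holds once $[A,\delta\eta]$ and
$[\eta,\delta\eta]$ vanish.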
Now (\ref{condi}) is identically satisfied.
We shall have to ensure that the condition
(\ref{cd1}) is consistent with
the transformation rules. This is not automatic, but we have
to make yet a fourth restriction
\begin{eqnarray}
\delta D(\chi) F(\eta) =0~, \label{cond2}
\end{eqnarray}
or, equivalently, $\delta [\chi,F(\eta)]=0$.
In summary, we have had to assume the commutativity of diagrams
(\ref{cd0}) and (\ref{cd1}), and that conditions (\ref{cond1}) and
(\ref{cond2}) hold. All of these conditions are geometrical, and fit nicely
together with Abelian hypercohomology.
\subsubsection*{A solution}
In order to see how these assumptions affect
the differential forms it is useful to find
concrete examples
that satisfy them.
The geometrical picture that arises from
these considerations restricts the various
fields in the following way:
\begin{itemize}
\item[{\bf (C1)}] The connections
$\chi_\alpha \in \Omega^{[1]}({\cal U}_\alpha,{\bf g})$
define locally a subspace $ \ker{\mathrm {ad}}(\chi_\alpha) \subset
\Omega^{*}({\cal U}_\alpha,{\bf g})$.
We can now choose the forms $B_\alpha$ so that their
restrictions on ${\cal U}_{\alpha\beta}$ belong there.
Outside the double intersection there is no restriction.
\item[{\bf (C2)}]
Having hence fixed $\delta B$ on each ${\cal U}_{\alpha\beta}$
we have actually also fixed $F(A) = -\delta B$. Because $[A,{\mathrm d} A] =0$
the connection $A$ and $\delta B$ should get their values in the same
Cartan subalgebra.
\item[{\bf (C3)}]
On triple intersections ${\cal U}_{\alpha\beta\gamma}$
this Cartan subalgebra should be a part of the algebra ${\bf t}_{\alpha\beta\gamma}$
of a fixed torus $T_{\alpha\beta\gamma}$.
\item[{\bf (C4)}]
The {\v{C}ech}{} 2-cocycle $g$ is built out of the transition functions $\phi$
of the local bundle $\underline{v}$ according to
\begin{eqnarray}
g_{\alpha\beta\gamma} =
\phi_{\alpha\beta} \phi_{\beta\gamma} \phi_{\gamma\alpha}
~\label{nong}~.
\end{eqnarray}
On ${\cal U}_{\alpha\beta\gamma}$
$g$ is constrained to lie in the fixed
torus $T_{\alpha\beta\gamma}$. The torus could
vary as one moves over the triple intersection.
\end{itemize}
Our construction is hence essentially Abelian
on triple and double intersections. The non-Abelianity
of the construction lies outside the double intersections,
and in the way in which these various locally
Abelian constructions are related to each other. This means that the
restrictions appear rather as boundary conditions. The crucially non-Abelian
objects are the transition functions $\phi$, the one-forms $\chi$ and the two-forms $B$.
To see how this works, consider, for instance, how
$\ker{\mathrm {ad}}(\chi_{\alpha})$
viewed as a collection of sections of the local principal bundle
$\underline{v}$ transforms
on a three-fold intersection ${\cal U}_{\alpha\beta\gamma}$ as we
transport it around:
The transition functions of the local bundle $\underline{v}$
combine under this tour into $g$ as defined in (\ref{nong}).
This holonomy does not need to be trivial
as we did not assume that the local bundles combine into a
global one.
Thus far we have not imposed much structure on $\underline{v}$. Let us now
suppose further that
$g_{\alpha\beta\gamma}$ happens to be an element of $T_{\alpha\beta\gamma}$
in order to satisfy condition C4 above.
In addition, assume that on fourfold intersections the $g$
are compatible in the sense that $\delta g = 0$. Despite
the notation, (\ref{nong}) does not imply that $g$ would be
exact (or even closed)
as a collection of Abelian {\v{C}ech}{} cocycles, because
the transition functions $\phi$ are not in general Abelian.
It turns out \cite{Finlay} that
the definition
(\ref{nong}) of $g$
produces the right three-index object even if the transition functions
$\phi$ are just general isomorphisms between principal bundles on different
charts. Then the cocycle
condition has to be modified, however, and there
do not seem to exist at
present representations
of the underlying sheaves of groupoids in terms of
differential geometry on them, or hypercohomology.
For concreteness,
let us consider a toy model on a triple intersection ${\cal U}_{123}$,
with the group $G=SU(2)$.
Suppose $\chi_1 = x (\sigma^1 - \sigma^2)$,
$\chi_2 = x (\sigma^1 + \sigma^2)$, and
$\chi_3 = -x (\sigma^1 + \sigma^2)$.
Then the transition functions can be taken to be constant
$\phi_{12} = i \sigma^1$, $\phi_{23} = i \sigma^3$,
and $\phi_{31} = i \sigma^2$. The resulting
holonomy is $g_{ijk} = - \varepsilon_{ijk}$.
The differences in the $B$ field are
$\delta B_{12} = (B_2-B_1)\sigma^1 +(B_2+B_1) \sigma^2$,
$\delta B_{23} = - (B_3 + B_2) (\sigma^1 + \sigma^2)$, and
$\delta B_{31} = (B_1 + B_3) \sigma^1 + (-B_1 + B_3) \sigma^2$.
In order to find the connections $A$
we have to assume $B_1 =0$. Then we can choose
one-forms $A_{ij} = \varepsilon_{ijk} A_k (\sigma^1 + \sigma^2)$.
This also fixes the embedding of the
torus $T_{ijk} \subset SU(2)$.
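As a check of the quoted holonomy, using $\sigma^1 \sigma^3 = -i\sigma^2$ and
$(\sigma^2)^2 = {\bf 1}$ one finds
\begin{eqnarray}
g_{123} = \phi_{12}\,\phi_{23}\,\phi_{31} = (i\sigma^1)(i\sigma^3)(i\sigma^2)
= i^3\,(-i\sigma^2)\,\sigma^2 = -{\bf 1}~,
\end{eqnarray}
in agreement with $g_{123} = -\varepsilon_{123}$.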
\subsubsection*{Transformations}
Given data $\underline{w}$, $\underline{v}$ that are consistent in the above sense, let us now
see what symmetries $\underline{\lambda}$, $\underline{\kappa}$ are left.
\begin{itemize}
\item[{\bf (S1)}]
On each patch $\underline{\kappa}$ acts as the gauge symmetries
of the bundle $\underline{v}$.
\item[{\bf (S2)}]
On double intersections ${\cal U}_{\alpha\beta}$ these transformations are fixed to
coincide with the corresponding transformations of the gerbe $k_\alpha=h_{\alpha\beta}$.
\item[{\bf (S3)}]
On triple intersections also the transformations of the gerbe $h$ are fixed to respect the tori
$T_{\alpha\beta\gamma}$.
\item[{\bf (S4)}]
The translations $\eta$ and their differences $\delta \eta$
belong on ${\cal U}_{\alpha\beta}$ to a Cartan subalgebra $\ker{\mathrm {ad}}(\chi_\alpha) \subset
\Omega^{*}({\cal U}_{\alpha\beta},{\bf g})$.
\end{itemize}
In the previous toy model
example the gauge transformations (S1) and (S2) make $\chi$ and $A$
into ordinary connections on the respective coordinate patches.
The remaining shift symmetry $\eta$ can be found just as the $B_i$ were
found above, except that now
$[\eta,\delta\eta]=0$. It follows that $\eta_1=0$, $\eta_2 = a(\sigma^1 + \sigma^2)$, and
$\eta_3 = b(\sigma^1 + \sigma^2)$. It acts then on the triple intersections
just as in the Abelian case, along the fixed torus. Assuming $A_2=0$ we can also
choose $\eta_1 = a A_3 (\sigma^1 - \sigma^2)$ and $\eta_2=\eta_3=0$.
\subsection{Geometrical Interpretation}
The non-Abelian gerbe $\underline{w}$ found in the previous section
provides a tool to study the set of local, non-Abelian
principal bundles $\underline{v}$. The local symmetry of the bundle
is frozen on double intersections so that gauge transformations
on both charts are identical. This transformation then acts
also on the gerbe $\underline{w}$. The two-form $B$ can be assumed
to commute with the connection $\chi$ on two-fold intersections. $B$
provides us with an Abelian connection $A$ on double intersections. The translations
$\eta$ are Abelian on triple intersections as well, and act on $A$ in the same way
as in the Abelian case.
The gerbe $\underline{w}$ would then look exactly like ${\mathrm{rank}}~G$ copies
of Abelian gerbes, were it not for the fact that $B$ is
generally Lie algebra valued
outside double intersections, and that $h$ can mix the diagonal elements of $A$
on double intersections. The crucial non-Abelianity resides
in the principal
bundles $\underline{v}$, and $\underline{w}$ should be seen as
an {\em almost} Abelian
obstruction for extending $\underline{v}$ into a global bundle.
If the local bundles in $\underline{v}$ are trivial, then their
sections can be conjugated to the Cartan subalgebras fixed on
various double intersections. The transition functions $\phi$
still do not have to be trivial, but they act as isomorphisms between these tori.
In particular, $g_{\alpha\beta\gamma}$ is an automorphism of the torus associated
to $\chi_\alpha$, and maps the torus back to itself thus permuting
the diagonal elements. In this way, the gerbe can be used to describe
a {\em braid}.
\subsubsection*{Limitations}
\label{rajotus}
{\v{C}ech}-cohomology does not depend on the choice of
cover if the cover is fine enough \cite{Spanier}. However, in our
discussions the cover is very particular.
For instance, if there were enough
three-fold intersections to cover the whole space
the whole construction would collapse to $r$ copies
of Abelian gerbes. For our considerations it is, however, quite
sufficient to know that there does exist a cover independent formulation
of non-Abelian gerbes \cite{Giraud, Finlay} that describes obstructions to
extend local bundles into global ones. We choose one of these configurations
together with a cover that is as simple as possible but still carries the
interesting information. In other words we smooth the system as
much as possible, and try to push
the obstructions to trivializing it into as
small and isolated neighbourhoods as possible.
We have found a very particular differential geometry description of these
objects as well. Insofar as it gives an extension of Abelian hypercohomology,
the restrictions on the fields are under control.
One should ask, however, whether the parametric
tori $T_{\alpha\beta\gamma}$ are actually necessary data.
The observation that $g_{\alpha\beta\gamma}$ can become
non-Abelian in general gives hope that
there actually might be a formulation where this data
becomes superfluous. However, from the physical
point of view the tori seem to be necessary, much as the
physical Cartan subalgebras in D-particle scattering,
as we shall see presently.
\section{String backgrounds and gerbes}
In this section we make contact with the string
theory considerations of Section \ref{current},
where we coupled
the NS two-form fields and the Chan--Paton
vector fields to a non-Abelian current carried by a world-sheet. Three
different symmetries acted on these fields: {(G1)}, {(G2)}, and {(G3)}
in the notation of
Section \ref{gaugesymm}.
We also have an essentially Abelian gauge field from the RR sector, and a
gauge symmetry associated to it.
On the
geometrical
side there is
associated to the gerbe a local principal
bundle $\underline{v} = [\phi,\chi]$. This should be
identified with
the Chan--Paton bundle on a D-brane. The
obstruction to extend this bundle is $g$.
The gauge symmetry {(G3)} is then just the action of
$\underline\kappa$.
On two-fold intersections we have the essentially Abelian
gauge field $A$. This should be the RR gauge
field for the D-particle, or the D6-brane.
The Chan--Paton gauge transformations
were correlated to the RR gauge transformations $h$ on
these two-fold intersections.
If the action of these gauge transformations is not
Abelian it seems that an isospin rotation
on the Chan--Paton sector induces a redefinition of
which Cartan subalgebra
the RR fields live in.
We already noticed that the gauge
transformations $h$ in $\underline{\lambda} = [h,\eta]$
connect the Chan--Paton
transformations and the RR gauge transformations.
Also the transformations generated by $\eta$
play an important role. As they shift the $B$-field
by the curvature $F(\eta)$, cf.~(\ref{B-eta}), they are
the natural generalization of the NS symmetry (G1).
The $\eta$ transformation also acts on the RR field in the way
NS transformation does.
As was pointed out in \cite{Kalkkinen:1999uz} the Abelian
version of the cocycle condition (\ref{B-A}) guarantees that
the right gauge invariant field strength is
the same as in massive IIA supergravity, namely
\begin{eqnarray}
{\cal F}^{[2]} &=& F(A_{\alpha\beta}) + B_\beta
\end{eqnarray}
It then readily follows that the two-form $B$ in $\underline{w}$ is the
NS two-form.
The NS gauge invariant combination in open string
theory $B + F(A_{\mathrm CP})$ appears here as well, but in the form
${\cal F} =
B_{\alpha} + F(\chi_\alpha)$. Its curvature is $H = D(\chi) {\cal F}$,
as it should be, but $\underline{\lambda}$ does not seem to implement
the NS symmetry (G1) correctly. Fortunately, all of the previous
calculations on double intersections remain unchanged even if
we extend the action of $\underline{\lambda}$ onto $\underline{v}$.
Then we have to assume
again $[\chi,\eta]=[\eta,\eta]=0$, which
makes $\eta$ into an effectively Abelian connection
so that $F(\chi-\eta) = F(\chi) - F(\eta)$, and ${\cal F}$ is again invariant.
However, this would be pushing the NS symmetry too far. Though NS symmetry
is present for Abelian currents coupled to the world-sheet --
which we have again correctly reproduced above in hypercohomology --
it is not there for non-Abelian currents, as it heavily relies on
the Abelianity of $F(A_{\mathrm CP})$.
One should therefore think of the NS-symmetry $(G1)$ rather
as a freedom to redefine
the connection $\chi$ by shifting it with a suitably Abelian
form. Our construction therefore necessitates a nontrivial
non-Abelian extension of the NS-symmetry.
There is exactly the same interplay
between Abelian and non-Abelian currents in the effective supergravity
Lagrangians and the above generalized hypercohomology.
Let us finally consider conserved charges. The local bundles
$\underline{v}$ are classified
by the Chern class ${\mathrm{Ch}}(F(\chi_\alpha))$. The bundles in $\underline{w}$
also have nontrivial first Chern class
$\mathrm{ch}_1(F(A_{\alpha\beta}))$. The invariant quantity
associated to $B$ is ${\mathrm{tr}} ~H = {\mathrm{tr}} ~D(\chi)B$.
Consider its integral over a sphere $S^3$ that is divided into
two discs ${\cal U}_\alpha$,
${\cal U}_\beta$, whose boundaries $S^2$ coincide. Then
\begin{eqnarray}
Q_{{\mathrm NS}} = \int_{S^3} {\mathrm{tr}} ~H = \int_{S^2} {\mathrm{tr}} ~(B_\alpha -
B_\beta) = \int_{S^2} {\mathrm{tr}} ~F(A_{\alpha\beta}) ~. \label{varaus}
\end{eqnarray}
Thus NS charge is non-trivial, if ${\mathrm{tr}} F(A_{\alpha\beta}) $ has
monopole number, i.e.~there are D6-branes \cite{Kalkkinen:1999uz}.
The NS charge is well defined under the $\eta$ shifts as well, because
\begin{eqnarray}
\int_{S^2} {\mathrm{tr}}~\delta F(\eta_{\alpha\beta}) = \int_{S^2}
{\mathrm{tr}}~{\mathrm d}~\delta\eta_{\alpha\beta} =0 ~.
\end{eqnarray}
For fixed bases of RR fields these formulae yield
charges that do not depend on $\eta$ or the choice of
homology cycles, even if the traces are dropped in (\ref{varaus}).
\section{Conclusions}
We started by studying a branched cover of space-time and showed
how the NS two-form fields are made to carry the same Lie algebra indices
that the Chan--Paton gauge fields have.
Much in the same way that the latter fields are promoted to non-Abelian
Lie algebra fields in the case of a stack of
coinciding D-branes, we argued that there should appear additional light
degrees of freedom from strings that connect D-branes on different branches
of the space-time. A DBI action argument was also used to indicate which
symmetries there should be present.
The curvature of the NS $B$ field appears
on the level of effective supergravity as the
characteristic class of a gerbe.
In order to set the stage for addressing dynamical
issues concerning this non-Abelian
$B$ field it is therefore necessary to
generalize the Abelian hypercohomology
construction. This we did, and the resulting structure
incorporates strikingly well, and in particular
without introducing unphysical degrees of freedom,
all the relevant supergravity fields and symmetries.
This construction sheds light on the difficulties encountered
in trying to describe perturbatively for instance the exotic $N=(0,2)$
theories in six dimensions. In the case of non-Abelian Yang--Mills the
right object to study in supergravity seems to be the Wilson line,
i.e.~the holonomies of the principal bundle. It seems therefore that
the right strategy to attack the dynamical problem here should be, analogously, to
understand the holonomies of the gerbe using the techniques developed here.
These very same holonomies arise also in guaranteeing that the string world-sheet
measure is anomaly free. For instance
the analysis in \cite{Freed:1999vc}
was essentially concerned with defining
the holonomy of an Abelian gerbe.
\vspace{1cm}
\noindent
{\bf Acknowledgements}: I thank L.~Bonora, R.~Iengo, and F.~Thompson
for useful discussions, and in particular
P.~Tran-Ngoc-Bich for collaboration in the early stages of this project.
This work was supported in part by the European Union TMR program CT960045.
\section{Introduction}\label{sec:intro}
Image segmentation is a fundamental process in several medical applications. Diagnosis, treatment planning and monitoring, as well as pathology characterization, benefit from accurate segmentation. In this paper we are interested in brain sub-cortical structures located in the frontostriatal system. Previous studies have shown the involvement of the frontostriatal structures in different neurodegenerative and neuropsychiatric disorders, including schizophrenia, Alzheimer's disease, attention deficit, and subtypes of epilepsy~\cite{Chudasama2006frontostriatal}. Segmenting these parts of the brain enables a physician to extract various volumetric and morphological indicators, facilitating the quantitative analysis and characterization of several neurological diseases and their evolution.
In the past few years, deep learning techniques, and particularly Convolutional Neural Networks (CNNs), have rapidly become the tool of choice for tackling challenging computer vision tasks. CNNs were popularized by LeCun, after delivering state-of-the-art results on hand-written digit recognition~\cite{lecun1998gradient}. However, they fell out of favor in the following years, mostly due to hardware and training data limitations. Nowadays, the availability of large-scale datasets (\emph{e.g}\onedot ImageNet), powerful GPUs and appropriate software libraries has rekindled the interest in deep learning and has made it possible to harness their power. Krizhevsky \emph{et al}\onedot \cite{krizhevsky2012imagenet} published results demonstrating clear superiority of deep architectures over hand-crafted features or shallow networks, for the task of image classification. Since then, CNNs have helped set new performance records for many other tasks: object detection, texture recognition and object semantic segmentation just to name a few.
Our work is similar in spirit to~\cite{prasoon2013deep}, but with some notable differences. In~\cite{prasoon2013deep} the authors train one CNN for each of the three orthogonal views of MRI scans, for knee cartilage segmentation, with the loss being computed on the concatenated outputs of the three networks. The inputs to each CNN are $28\times 28$ image patches and the output is a softmax probability of the central pixel belonging to the tibial articular cartilage. In contrast, our method operates on full 2D image slices, exploiting context information to accurately segment regions of interest in the brain. In addition, we use \emph{fully convolutional} CNNs~\cite{long2014fully} to construct dense segmentation maps for the whole image, instead of classifying individual patches. Furthermore, our method handles multiple class labels instead of delivering a foreground-background segmentation, and it does that efficiently, performing a single forward pass in~$5\,ms$.
CNNs are characterized by large receptive fields that allow us to exploit context information across the spatial plane. Processing 2D slices individually, however, means that we remain agnostic to \emph{3D context}, which is important, since we are dealing with volumetric data. The obvious approach of operating directly on the 3D volume instead of 2D slices would drastically reduce the amount of data available for training, making our system prone to overfitting, while increasing its computational requirements. Alternatively, we construct a Markov Random Field on top of the CNN output in order to impose volumetric homogeneity on the final results. The CNN scores are considered as unary potentials of a multi-label energy minimization problem, where spatial homogeneity is propagated through the pair-wise relations of a 6-neighborhood grid. For inference we choose the popular alpha-expansion technique that leads to guaranteed optimality bounds for the type of energies we define~\cite{boy01}.
\section{Using CNNs for Semantic Segmentation}\label{sec:cnns}
Our network is inspired by the Deeplab architecture that was recently proposed for semantic segmentation of objects~\cite{chen2014semantic}. Due to limited space, we refer the reader to~\cite{chen2014semantic} for details. One obvious and straightforward choice for adapting the Deeplab network to our task, would be to simply fine-tune the last three convolutional layers that replace their fully connected counterparts in the VGG-16 network, while initializing the rest of the weights to the VGG-16 values. This is a common approach when adapting an already existing architecture to a new task, but given the very different nature of natural RGB images and MR image data (RGB \vs grayscale, varying \vs black background), we decided to train a fully convolutional network from scratch.
Training a deep network from scratch presents us with some challenges. Medical image datasets tend to be smaller than natural image datasets, and segmentation annotations are generally hard to obtain. In our case, we only have a few 3D scans at our disposal, which increases the risk of overfitting. In addition, the repeated pooling and sub-sampling steps that are applied in the input images as it flows through a CNN network, decrease the output resolution, making it difficult to detect and segment finer structures in the human brain. To address these challenges, we make a series of design choices for our network: first, we opt for a shallower network, composed of five pairs of convolutional/max pooling layers. We sub-sample the input only for the first two max-pooling layers, and keep a stride of $1$ for the remaining layers, introducing holes, as in~\cite{chen2014semantic}. This allows us to keep increasing the effective receptive field of filters, without further reducing the resolution of the output response maps. For a $256\times256$ input image, the total sub-sampling factor of the network is $4$, resulting in a $64\times 64\times L$ array, where $L$ is the number of class labels. A $1-$pixel stride is used for all convolutional layers and $0.5$ activation probability for all dropout layers. The complete list of layers and important parameters is given in~\reftab{tab:architecture}.
At test time, a 2D image is fed to the network and the output is a three-dimensional array of probability maps (one for each class), obtained via a softmax operation. To obtain a brain segmentation at this stage, we simply resize the output to the input image dimensions using bilinear interpolation and assign at each pixel the label with the highest probability. However, we still need to impose volumetric homogeneity to the solution. We propose to do it using Markov Random Fields.
\begin{table}[t!]
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Block & conv kernel & \# filters & hole stride & pool kernel & pool stride & dropout\\\hline
1 & 7$\times$7 & 64 & 1 & 3$\times 3$ & 2 & no \\\hline
2 & 5$\times$5 & 128 & 1 & 3$\times 3$ & 2 & no \\\hline
3 & 3$\times$3 & 256 & 2 & 3$\times 3$ & 1 & yes \\\hline
4 & 3$\times$3 & 512 & 2 & 3$\times 3$ & 1 & yes \\\hline
5 & 3$\times$3 & 512 & 2 & 3$\times 3$ & 1 & yes \\\hline
6 & 4$\times$4 & 1024 & 4 & no pooling & & yes \\\hline
7 & 1$\times$1 & 39 & 1 & no pooling & & no \\\hline
\end{tabular}
}
\caption{Layers used in our architecture. All convolutional layers have a stride of one pixel; a hole stride of ``1'' means that we introduce no holes.}
\label{tab:architecture}
\end{table}
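To make the architecture of \reftab{tab:architecture} concrete, we sketch below one possible transcription into present-day PyTorch. This is only an illustration under our own assumptions: the class and variable names are invented, the hole stride is realized as a convolution dilation, and the padding is chosen to reproduce the stated $256\times256 \rightarrow 64\times64$ sub-sampling; it is not the original implementation.
\begin{verbatim}
import torch.nn as nn

class SubcorticalNet(nn.Module):
    """Sketch of the Table 1 network: five conv/pool blocks, with
    sub-sampling only in blocks 1-2 and dilated ("hole") convolutions
    afterwards, followed by the two classifier convolutions."""
    def __init__(self, num_labels=39):
        super().__init__()
        def block(cin, cout, k, dil, pool_stride, drop):
            layers = [nn.Conv2d(cin, cout, k, padding=dil * (k // 2),
                                dilation=dil),
                      nn.ReLU(inplace=True),
                      nn.MaxPool2d(3, stride=pool_stride, padding=1)]
            if drop:
                layers.append(nn.Dropout2d(0.5))
            return layers
        self.features = nn.Sequential(
            *block(1,    64, 7, 1, 2, False),   # block 1: 256 -> 128
            *block(64,  128, 5, 1, 2, False),   # block 2: 128 -> 64
            *block(128, 256, 3, 2, 1, True),    # block 3: hole stride 2
            *block(256, 512, 3, 2, 1, True),    # block 4
            *block(512, 512, 3, 2, 1, True),    # block 5
            nn.Conv2d(512, 1024, 4, padding=6, dilation=4),  # block 6
            nn.ReLU(inplace=True),
            nn.Dropout2d(0.5),
            nn.Conv2d(1024, num_labels, 1))     # block 7: class scores
    def forward(self, x):          # x: (N, 1, 256, 256) grayscale slice
        return self.features(x)    # scores: (N, num_labels, 64, 64)
\end{verbatim}
At test time the resulting $64\times64\times L$ score maps would then be bilinearly up-sampled to the input resolution and passed through a softmax, as described above.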
\subsection{Multi-label segmentation using CNN-based priors}\label{sec:segmentation}
For every slice of a 3D image, the output of the proposed CNN is a softmax map that indicates the probability of every pixel belonging to a given brain structure $l \in \mathcal{L}$ (label). We consider the volume $P^{\mathrm{CNN}}_{i}(l):\mathcal{L} \rightarrow [0,1]$, formed by the stacked CNN output slices, as a prior on the 3D brain structures, where $i$ indicates a voxel from the original image.
Let $\mathcal{G}=\langle \mathcal{V},\mathcal{E} \rangle$ be a graph representing a Markov Random Field, where nodes in $\mathcal{V}$ are variables (voxels) and $\mathcal{E}$ is a standard 6-neighborhood system defining a 3D grid. Variables $i \in \mathcal{V}$ can take labels $l_i$ from a labelspace $\mathcal{L}$. A labeling $\mathcal{S}= \{l_i \mid i \in \mathcal{V} \}$ assigns one label to every variable. We define the energy $E(\mathcal{S})$ which consists of unary potentials $V_i$ and pair-wise potentials $V_{ij}$ such that it is minimum when $\mathcal{S}$ corresponds to the best possible labeling.
Unary terms are defined as $V_i(l_i) = -\log(P^{\mathrm{CNN}}_{i}(l_i))$, and they assign low energy to high probability values. Pair-wise terms encode the spatial homogeneity constraint by simply encouraging neighboring variables to take the same semantic label. In order to align the segmentation boundaries with intensity edges, we make this term decay with the difference between the intensities $I_i$ and $I_j$ associated with the given voxels. The pair-wise formulation is $V_{i,j}(l_i, l_j) = w_{ij}\,[l_i \neq l_j]$ where $w_{ij} = \exp\left(-\frac{\mid I_i - I_j \mid^2}{2 \sigma^2}\right)$. Finally, the energy minimization problem is defined as:
\begin{equation}
\mathcal{S}^* = \operatornamewithlimits{argmin} E(\mathcal{S}) = \operatornamewithlimits{argmin} \sum\limits_{i \in \mathcal{V}} V_i(l_i) + \lambda \hspace{-2mm} \sum\limits_{(i,j) \in \mathcal{E}} \hspace{-2mm} V_{i,j}(l_i, l_j).
\end{equation}
$\mathcal{S}^*$ represents the optimal label assignment. Note that this energy is a metric in the space of labels $\mathcal{L}$; thus, it is guaranteed that using the alpha-expansion technique we can find a solution $\hat{\mathcal{S}}$ whose energy lies within a factor of 2 of the optimal energy (i.e. $E(\hat{\mathcal{S}}) \leq 2\,E(\mathcal{S}^*)$). Alpha-expansion is a well-known move-making technique for approximate inference using graph cuts, which has been shown to be accurate in a broad range of vision problems. We refer the reader to \cite{boy01} for a complete discussion of energy minimization using alpha-expansion. A minimal illustration of how the energy is assembled is sketched below.
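The sketch below builds the unary and pair-wise terms of the energy above and minimizes it with a few sweeps of iterated conditional modes (ICM). ICM is a simpler local minimizer standing in for alpha-expansion (which requires a dedicated graph-cut solver); NumPy and the variable names are our assumptions.
\begin{verbatim}
# Illustrative sketch (NumPy assumed). prob: (L, D, H, W) stacked CNN
# softmax volume; I: (D, H, W) intensity volume. ICM stands in for
# alpha-expansion, which needs a graph-cut solver.
import numpy as np

def mrf_refine(prob, I, lam=1.0, sigma=0.1, sweeps=5):
    unary = -np.log(prob + 1e-12)         # V_i(l) = -log P_i^CNN(l)
    labels = prob.argmax(axis=0)          # CNN-only initialization
    L = prob.shape[0]
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for _ in range(sweeps):
        cost = unary.copy()
        for off in offsets:               # 6-neighborhood on the 3D grid
            nb_lab = np.roll(labels, off, axis=(0, 1, 2))
            nb_int = np.roll(I, off, axis=(0, 1, 2))
            w = np.exp(-(I - nb_int) ** 2 / (2 * sigma ** 2))
            # pay lam*w_ij when a candidate label differs from neighbor's
            differ = np.arange(L)[:, None, None, None] != nb_lab[None]
            cost += lam * w[None] * differ
        labels = cost.argmin(axis=0)      # greedy per-voxel update
    # np.roll wraps at the volume borders; acceptable for an illustration
    return labels
\end{verbatim}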
\section{Experiments and Discussion}\label{sec:results}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{figures/3plots_IBSR.pdf}
\caption{Average Dice coefficient, Hausdorff distance, and contour mean distance on eight subcortical structures of IBSR dataset. The proposed CNN-based method outperforms the RF-based approach (better viewed in color and magnified).}
\label{fig:IBSR}
\end{figure*}
\begin{figure}[t!]
\includegraphics[scale=1.05]{figures/3plots_RE.pdf}
\caption{The average Dice coefficient, Hausdorff distance, and contour mean distance on left and right putamen structure of RE dataset. The proposed CNN-based method generates more accurate segmentation results compared to the RF-based approach (better viewed in color and magnified).}
\label{fig:MTL}
\end{figure}
We used the proposed method to segment a group of sub-cortical structures located at the frontostriatal network, including thalamus, caudate, putamen and pallidum. We evaluated our approach on two brain MRI datasets.
The first one is a publicly available dataset provided by the Internet Brain Segmentation Repository (IBSR)~\cite{Rohlfing2012Image}. It contains $18$ labeled 3D T1-weighted MR scans with slice thickness of around $1.3~mm$. In this work we use the subset of $8$ primarily subcortical labels, including left and right thalamus, caudate, putamen, and pallidum. The second dataset is obtained from a Rolandic Epilepsy (RE) study, including $17$ children with epilepsy and $18$ matched healthy individuals. For each participant, T1-weighted magnetic resonance images (MRI) were acquired with a $3$ T scanner (Philips Achieva) with an in-plane resolution of $256 \times 256$ and slice thickness of $1~mm$. The left and right putamen structures were manually annotated by an experienced user. For both datasets, we process volumes slice by slice, after resizing them to $256 \times 256$ pixels. We treat these 2D slices as individual grayscale images to train our CNN.
In the first experiment, we compare the performance of our segmentation method using CNN priors with an approach based on Random Forest (RF) priors, where the same MRF refinement is applied. The RF-based per-voxel likelihoods are computed in the same way as in~\cite{Alchatzidis2014Discrete}. Then, the RF probability maps are considered as the unary potentials of a Markov Random Field and alpha-expansion is used to compute the most likely label for each voxel, as explained in \refsec{sec:segmentation}.~\reffig{fig:IBSR} and~\reffig{fig:MTL} show the average Dice coefficient, Hausdorff distance, and contour mean distance between output segmentations and the ground truth for different structures. These results show that the CNN-based approach achieves higher Dice compared to the RF-based method, while producing lower Hausdorff and contour mean distance.
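For reference, the Dice coefficient used throughout is simply the normalized overlap between predicted and ground-truth masks; a minimal implementation follows (the label constants in the comment are hypothetical):
\begin{verbatim}
import numpy as np

def dice(seg, gt, label):
    # Dice overlap for one structure: 2|A & B| / (|A| + |B|)
    a, b = (seg == label), (gt == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# e.g. average of left/right putamen, as reported in the figures:
# score = 0.5 * (dice(seg, gt, PUTAMEN_L) + dice(seg, gt, PUTAMEN_R))
\end{verbatim}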
In the second experiment, we compare the accuracy of our proposed method with two publicly available state-of-the-art\ automatic segmentation toolboxes, Freesurfer~\cite{Fischl2002Whole} and FSL-FIRST~\cite{Patenaude2011Bayesian}. In \reftab{tab:IBSRMTL} we report the average Dice coefficient for the left and right structures; these results show that our method provides better segmentations compared to the state-of-the-art\ for three sub-cortical structures in both the IBSR and RE datasets. However, Freesurfer results in better segmentation for the caudate in the IBSR dataset, which could be attributed to the limitation of the CNN in capturing thin tail areas of the caudate structures. In~\reffig{fig:visualRes} we show qualitative results.
\subsection{CNN Training and Evaluation Details}\label{sec:training}
The input to our network is a single 2D slice from a 3D MRI scan, along with the corresponding label map. We apply data augmentation to avoid overfitting: we use horizontally flipped versions of the input images, as well as versions translated by 5, 10, 15, and 20 pixels along the $x$ and $y$ axes. Other transformations, such as rotation, could be considered as well. The MR image data are centered and the background always takes zero values, so we do not perform mean image subtraction as is usually the case.
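A minimal sketch of this augmentation scheme (NumPy assumed; we use wrap-around shifts, which is harmless here since the background is zero):
\begin{verbatim}
import numpy as np

def augment(img, lab):
    # yields the original slice, its horizontal flip, and copies
    # translated by +-5, 10, 15, 20 pixels along each image axis
    yield img, lab
    yield np.fliplr(img), np.fliplr(lab)
    for t in (5, 10, 15, 20):
        for axis in (0, 1):
            for s in (t, -t):
                yield (np.roll(img, s, axis=axis),
                       np.roll(lab, s, axis=axis))
\end{verbatim}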
In the case of IBSR, we split the available data into three sets. Each time, we use two of the sets as training data (approximately $100K$ training samples) and the third set as test data. One of the training data volumes is left out and used as validation data. Similarly, we split RE into two subsets of equal size, using one for training and one for testing, each time. We train on both datasets for $35$ epochs starting with a learning rate of $0.01$ and dropping it at a logarithmic rate until $0.0001$. For training, we use standard SGD with a momentum of $0.9$ and a softmax loss. For all our experiments we used MATLAB\ and the deep learning library MatConvNet~\cite{vedaldi2014matconvnet}. Code, computed probability maps, and more results can be found at~\url{https://github.com/tsogkas/brainseg}.
We also experimented with CNNs trained on 2D slices from the other two views (sagittal and coronal) but the resulting models performed poorly. The problem is rooted in the inherent symmetry of some brain structures and the fact that the CNN is evaluated on individual slices, ignoring 3D structure. For instance, when processing slices across sagittal view, the right and left putamen appear at roughly the same positions in the image. They are also very similar in terms of shape and appearance, which fools the system into assigning the same label to both regions. This simple example demonstrates the need for richer priors that take into account the full volume structure to assign class labels.
\section{Conclusion}\label{sec:conclusion}
\begin{figure}[t!]
\includegraphics[scale=0.4]{figures/visualResults.png}
\caption{2D slice segmentation (IBSR). \textbf{Left:} Groundtruth. \textbf{Middle:} RF-based results. \textbf{Right:} CNN-based results.}
\label{fig:visualRes}
\end{figure}
In this paper, we proposed a deep learning framework for segmenting frontostriatal sub-cortical structures in MR images of the human brain. We trained a fully convolutional neural network for segmentation of 2D slices and treated the output probability maps as a proxy for the respective voxel likelihoods. We further improved segmentation results by using the CNN outputs as potentials of a Markov Random Field (MRF) to impose spatial volumetric homogeneity.
Our experiments show that the proposed method outperforms approaches based on other learned priors, as well as state-of-the-art\ segmentation methods.
However, we also note some limitations: first, the current model is not able to accurately capture thin tail areas of the caudate structures. Second, symmetric structures confound the CNN training process when considering views which are parallel to the plane of symmetry. Third, graph-based methods have to be used to impose volumetric consistency, since training is done on 2D slices. Different network layouts, taking the volumetric structure into account, may help overcome these limitations.
\begin{table}[t!]
\caption{The average Dice coefficient of the three methods on different brain structures. Values are reported as the average of the left and right structures.}
\tiny
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
& Proposed & Freesurfer & FSL \\ \hline
IBSR-Thalamus & \bf0.87 & 0.86 & 0.85 \\ \hline
IBSR-Caudate & 0.78 & \bf0.82 & 0.68 \\ \hline
IBSR-Putamen & \bf0.83 & 0.81 & 0.81 \\ \hline
IBSR-Pallidum & \bf0.75 & 0.71 & 0.73 \\ \hline
RE-Putamen & \bf0.89 & 0.74 & 0.88 \\ \hline
\end{tabular}
\label{tab:IBSRMTL}}
\end{table}
\bibliographystyle{IEEEbib}
\section{Introduction}
It was James Clerk Maxwell who first used the quaternion algebra
of Sir William Rowan Hamilton to deal with the equations of
electrodynamics. Since then, the range of applications of
quaternions has steadily grown. Bi-quaternion algebra, that is,
the algebra of William Kingdon Clifford over 2-dimensional complex
space, turns out to be especially useful in physical applications.
The ground for such extensive use of quaternions is that this
formalism provides simple algebraic tools for handling the
spinors of relativistic physics without any recourse to the cumbersome
index technique \cite{Berezin-Kurochkin-Tolkachev-1989}.
The article aims to demonstrate the effectiveness of this
algebraic technique in approaching the symmetries of Maxwell-like
equations, equivalent in the sense of Seiberg -- Witten
\cite{Seiberg-Witten-1999} to the electrodynamic equations in a
noncommutative space-time. As known \cite{Douglas-Nekrasov-2001},
\cite{Seiberg-Witten-1999},
\cite{Chaichian-Kulish-Nishijima-Tureanu-2004},
interest in field theory models in a noncommutative space-time
has grown notably after the creation in
\cite{Seiberg-Witten-1999} of a general algorithm relating usual
Yang-Mills gauge models to their noncommutative counterparts.
A great many new physical problems have appeared for
investigation; moreover, the question of the hypothetical coordinate
non-commutativity has become of a practically testable nature.
Noticeable progress in describing symmetry of noncommutative
spaces was achieved on the base of twisted Poincar\'e group
\cite{Chaichian-Kulish-Nishijima-Tureanu-2004},
\cite{Alvarez-Vazquez-2003}, \cite{Fiore-Wess-2007},
\cite{Chaichian-Kulish-Nishijima-Tureanu-2008}.
For instance, the map of Seiberg -- Witten relates the
noncommutative extension of electro\-dynamics to the usual
microscopic Maxwell theory with special nonlinear constitutive
relations. Examining all possible sym\-metries of these new
constitutive relations seems to be a significant point in order
to discern the effects of space-time non-commutativity in
observable electromagnetic non-linear effects.
The problem of form-invariance of the non-commutativity structural
equations (see below) was considered in
\cite{Alvarez-Vazquez-2003},\cite{Chaichian-Kulish-Nishijima-Tureanu-2008}.
Several simple noncommutative parameters were listed which allow
for the existence of some residual Lorentz symmetry -- the latter is
recognized to have the structure $SO(2) \otimes SO(1,1)$. It was
claimed in \cite{Alvarez-Vazquez-2003} that in the case of an
arbitrary noncommutative matrix no residual Lorentz symmetry
exists.
As known, the commutator for space-time coordinates $\hat{x}^{\mu}$ in the Weyl-Moyal space
is defined by an antisymmetric matrix $\theta^{\mu \nu}$ whose elements are real numbers:
\begin{eqnarray}
[ \hat{x}^{\mu}, \hat{x}^{\nu} ] = i \; \theta^{\mu \nu}
\;.
\nonumber
\end{eqnarray}
To any operator $f(\hat{x})$ there corresponds the Weyl symbol $f(x)$ defined on commutative Minkowski
space with coordinates $x^{\mu}$. To the product of operators corresponds an operation $*$
\begin{eqnarray}
(f * g)(x) =
\left.
f(x) \; \mbox{exp} \; ({i \over 2} \theta^{\mu \nu} \overleftarrow{\partial}_{\mu}
\overrightarrow{\partial}_{\nu} )\; g(x') \right |_{x'=x}\; .
\label{1}
\end{eqnarray}
\noindent From (\ref{1}) it follows
\begin{eqnarray}
\; [ x^{\mu} , x^{\nu} ] _{*} = x^{\mu} * x^{\nu} - x^{\nu} * x^{\mu} = i \; \theta ^{\mu \nu} \; .
\label{2}
\end{eqnarray}
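As a quick illustration of (\ref{1}) and (\ref{2}), the following sketch (sympy assumed) computes the star product to first order in $\theta$ for two coordinates, with $\theta^{xy} = -\theta^{yx} = \theta$, and recovers the commutator:
\begin{verbatim}
import sympy as sp

x, y, theta = sp.symbols('x y theta', real=True)

def star(f, g):
    # f*g = f g + (i/2) theta^{mu nu} d_mu f d_nu g + O(theta^2)
    first = (sp.I * theta / 2) * (sp.diff(f, x) * sp.diff(g, y)
                                  - sp.diff(f, y) * sp.diff(g, x))
    return sp.expand(f * g + first)

print(sp.simplify(star(x, y) - star(y, x)))   # -> I*theta
\end{verbatim}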
Consistent use of the operation $*$ permits one to define a twisted analogue of the Poincar\'e algebra [3-6]
and makes it possible to construct representations of the Poincar\'e group.
It was shown in \cite{Chaichian-Kulish-Nishijima-Tureanu-2008} that for a special
particular choice of $\theta^{\mu \nu }$ ``... the twisted Poincar\'e symmetry
of noncommutative (quantum) field theory is reduced to the residual $O(1,1) \otimes SO(2)$ symmetry, but
still carrying representations of the full Lorentz group. ... The meaning of the twisted Poincar\'e symmetry
in NC QFT becomes transparent:
it represents actually the invariance with respect to the stability group of $\theta_{\mu \nu }$,
while the quantum fields carry representations of the full Lorentz group and the Hilbert space of states has the richness of
particle representations of the commutative QFT''.
In this context it is important to know the full residual Lorentz symmetry for
an arbitrary matrix $\theta_{\mu \nu}$, because it was claimed in \cite{Alvarez-Vazquez-2003}
that in the case of an arbitrary noncommutative matrix no residual symmetry exists.
There exist several different views \cite{Amelino...-2007} on the transformation of the matrix $\theta^{\mu\nu}$
under the Lorentz group; the most radical attitude is to consider $\theta_{\mu \nu}$ as a set of new fundamental constants, like
the minimal length. In fact, our consideration does not depend on these different views.
In the paper, we use a quaternionic technique as the main tool for
describing all Lorentz subgroups leaving invariant any 2-rank
antisymmetric tensor. The treatment is given in the most general
form, irrespective of the explicit vector constituents of the
tensor. We place these quite conventional mathematical facts in
the context of Maxwell equations in noncommutative
space-time. As known, according to Seiberg -- Witten
\cite{Seiberg-Witten-1999}, in the first order approximation in
the parameters $\theta^{\mu\nu}$ we get the ordinary Maxwell equations
and special nonlinear constitutive relations. Just the symmetry
properties of these nonlinear constitutive relations are the main
subject of the paper. This permits us to make more clear and complete
the known results on the residual Lorentz symmetry $SO(2) \otimes
SO(1,1)$.
As mentioned, several particular examples of such small (or
stability) subgroups were noticed in the literature, so our
analysis extends and completes previous considerations. In a
sense, the problem may be solved with the
help of old and well-elaborated techniques in the theory of the
Lorentz group.
In its main parts, our examination of the problem
is based on the use of the bi-quaternion formalism
\cite{Berezin-Kurochkin-Tolkachev-1989}. A brief translation to the
more traditional technique \cite{Fedorov-1980} developed for the
Lorentz group will be given too: instead of
the antisymmetric tensor $\theta_{\mu\nu}$ we use the
corresponding
3-vector under the complex
orthogonal group SO(3.C), which is equivalent to a symmetric
2-rank spinor
under SL(2.C)\footnote{A more detailed treatment
of the problem in the frame of the
Rieman-Zilberstein-Majorana-Oppenheimer formalism and the
conventional spinor formalism
will be published elsewhere.}.
In the context of the general study of various dual symmetries in
noncommutative field theory \cite{Aschiery-2001},
\cite{Ganor-Rajesh-Sethi-2000}, \cite{Reya-Ungeb-2007}, one other
problem will be considered: it is demonstrated explicitly that
the known nonlinear constitutive equations arising from
noncommutative electrodynamics in the first order approximation
are not invariant under continuous dual rotations; instead, only
invariance under a discrete dual transformation exists, which
contrasts with the claim of the paper \cite{Aschiery-2001}.
\section{ Residual Lorentz symmetry of the noncommutative Maxwell theory, quaternion treatment }
It is known that the extended Maxwell equations in noncommutative
space-time, by means of the Seiberg -- Witten map in the first order
approximation in the parameters $\theta^{\mu\nu}$, provide us with the
ordinary Maxwell equations and special nonlinear constitutive
relations:
\begin{eqnarray}
\mbox{div} \; \vec{ B} = 0 \; , \qquad \mbox{rot} \;\vec{E} =
-{\partial \vec{B} \over \partial t} \; ; \label{1.2a}
\\
\mbox{div}\; \vec{D} = 0\; , \qquad
\mbox{rot} \; \vec{H} = {\partial \vec{D} \over \partial t} \; ,
\label{1.2b}
\end{eqnarray}
\noindent and constitutive relations
\begin{eqnarray}
\vec{D} = \vec{E} + [ \; ( \vec{\epsilon} \vec{E}) - (
\vec{\theta} \vec{B})\; ]\; \vec{E} +
[ \; ( \vec{\theta} \vec{E}) + (\vec{\epsilon} \vec{B})\; ]\; \vec{B} +
(\vec{E} \vec{B})\;
\vec{\theta} + {1\over 2}(\vec{E}^{2}-
\vec{B}^{2}) \; \vec{\epsilon} \; ,
\nonumber
\\
\vec{H} = \vec{B} + [ \; ( \vec{\epsilon} \vec{E}) - ( \vec{\theta} \vec{B})\; ]\; \vec{B} -
[ \; ( \vec{\theta} \vec{E}) + ( \vec{\epsilon} \vec{B})\; ]\; \vec{E} -
( \vec{E} \vec{B})\; \vec{\epsilon} + {1\over 2}( \vec{E}^{2}-
\vec{B}^{2}) \; \vec{\theta} \; ,
\label{1.3a}
\end{eqnarray}
\noindent and the inverse relations (within the accuracy of first
order terms)
\begin{eqnarray}
\vec{E} = \vec{D} +
[ \; \vec{\theta} \vec{H} - \vec{\epsilon} \vec{D} \; ] \; \vec{D} -
[ \; \vec{\theta} \vec{D} + \vec{\epsilon} \vec{H} \; ] \;
\vec{H} - ( \vec{D} \vec{H} ) \; \vec{\theta} +{1 \over 2} \; (
\vec{H}^{2} - \vec{D}^{2} ) \; \vec{\epsilon} \; , \nonumber
\\
\vec{B} = \vec{H} +
[ \; \vec{\theta} \vec{H} - \vec{\epsilon} \vec{D} \; ] \; \vec{H} +
[ \; \vec{\theta} \vec{D} + \vec{\epsilon} \vec{H} \; ] \;
\vec{D} +
( \vec{D} \vec{H} )\; \vec{\epsilon} +{1 \over 2}
\; ( \vec{H}^{2} - \vec{D}^{2} ) \; \vec{\theta} \; .
\label{1.3b}
\end{eqnarray}
\noindent The conventional notation is used:
\begin{eqnarray}
E^{m} = F^{m0}\; , \;\; B^{k} = -{1 \over 2} \epsilon^{klm}
F_{lm}\; , \qquad \epsilon^{m} = \theta^{m0}\; , \;\; \theta^{k}
= -{1 \over 2} \epsilon^{klm} \theta_{lm}\; ; \nonumber
\end{eqnarray}
\noindent $\vec{\epsilon}$ and $\vec{\theta}$ appear to be
parameters of an effective nonlinear medium. In general, each of
equations (\ref{1.2a}) and (\ref{1.2b}) exhibits a 20-dimensional
Lie group symmetry \cite{Nikitin-1983}; the main question is in which
manner the presence of the nonlinear constitutive equations (\ref{1.3a}) and
(\ref{1.3b}) constricts this symmetry.
In solving the problem we will apply the quaternion technique \cite{Berezin-Kurochkin-Tolkachev-1989}.
Any element in the bi-quaternion algebra (the algebra over complex
numbers) can be presented as\footnote{Ordinary 3-vectors are
denoted by $\vec{a}$, whereas $\underline{p}$ designates the
vector part of a quaternion.}
\begin{eqnarray}
q = q_{0} e_{0} + q_{a} \; e_{a} = q_{0} + \underline{q} =
q_{s} + q_{\mbox{v}}\; , \nonumber
\end{eqnarray}
\noindent where basic quaternions obey
\begin{eqnarray}
e_{0} e_{a} = e_{a} e_{0}\; , \qquad e_{0}^{2} = e_{0}, \qquad
e_{a} e_{b} = - \delta_{ab} \; e_{0}+ \epsilon_{abc} \; e_{c}\; , \nonumber
\end{eqnarray}
\noindent so the product of two quaternions is given by
\begin{eqnarray}
q\; p = (q_{0} \; p_{0} - \vec{q} \; \vec{p} ) \; e_{0}+ (q_{0}\;
\vec{p} + p_{0} \; \vec{q} + \vec{q} \times \vec{p}) \;
\vec{e}\; . \nonumber
\end{eqnarray}
\noindent Two special operations for bi-quaternions are defined:
quaternion conjugation
\begin{eqnarray}
\bar{q} = q_{0} - e_{a}\; q_{a} = q_{0} - \underline{q}\; ,
\qquad q\; \bar{q} = \bar{q}\; q = q_{0}^{2} + \vec{q}^{\;2} \;,
\qquad \stackrel{-}{(q p)}= \bar{p} \; \bar{q} \nonumber
\end{eqnarray}
\noindent and complex conjugation
\begin{eqnarray}
q^{*} = q_{0}^{*} - q_{a}^{\;*} e_{a} = q_{0}^{*} -
\underline{q}^{*} \; . \nonumber
\end{eqnarray}
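These rules are straightforward to put on a computer; a minimal sketch (NumPy assumed, with helper names of our own choosing) realizing the product and the two conjugations:
\begin{verbatim}
import numpy as np

def qmul(q, p):
    # q p = (q0 p0 - q.p) e0 + (q0 p + p0 q + q x p).e
    q0, qv = q; p0, pv = p
    return (q0 * p0 - qv @ pv,
            q0 * pv + p0 * qv + np.cross(qv, pv))

def qbar(q):   # quaternion conjugation: q0 - q_vec
    return (q[0], -q[1])

def qstar(q):  # complex conjugation of the components
    return (np.conjugate(q[0]), np.conjugate(q[1]))

q = (1.0 + 0j, np.array([0, 1j, 2], dtype=complex))
print(qmul(q, qbar(q)))   # -> (q0^2 + q_vec^2, [0, 0, 0])
\end{verbatim}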
\noindent With the use of notation
\begin{eqnarray}
\nabla = -i \partial_{t} e_{0} + \underline{e} \; \nabla = -i \partial_{t}
+ \underline{\nabla}\; , \nonumber
\\
\underline{f} = \underline{B} - i \underline{E} \; ,\qquad
\underline{h} = \underline{H} - i \underline{D}
\end{eqnarray}
\noindent Maxwell equations in media can be presented in the form
of the quaternion equation
\cite{Berezin-Kurochkin-Tolkachev-1989}
\begin{eqnarray}
\nabla \; [ ( \underline{B} - i \underline{E}) +(\underline{H} - i
\underline{D} )]+
\stackrel{----------}
{ \{ \nabla \;[ ( \underline{B} - i \underline{E}) -(\underline{H} - i \underline{D} )] \} } ^{*} = 0 \; .
\label{1.4}
\end{eqnarray}
The Lorentz invariance of these equations is realized by the
following transforms\footnote{The quaternion $L$ corresponds to a
spinor matrix
$B=k_{0} -i \vec{\sigma} \;\vec{k}\; \; \in SL(2.C)$ of the complex linear group
SL(2.C), the spinor covering of the Lorentz group; the restrictive
relation $L\;\bar{L}=1 $ is equivalent to singling out the special
linear group by imposing $\mbox{det}B = 1$.}
\begin{eqnarray}
\nabla ' = L \nabla \bar{L}^{*}\; , \qquad \qquad L \; \bar{L} =
k_{0}^{2} + \underline{k}^{2} = k_{0}^{2} + \vec{k}^{\;2} = 1 \;
, \nonumber
\\
(\underline{B}' - i \underline{E}') = L^{*} (\underline{B} - i
\underline{E}) \bar{L}^{*}\; , \qquad (\underline{H}' - i
\underline{D}') = L^{*} (\underline{H} - i \underline{D})
\bar{L}^{*}\; . \label{1.5}
\end{eqnarray}
The constitutive relations in quaternionic form look
\begin{eqnarray}
(\underline{B} - i \underline{E}) = (\underline{H} - i
\underline{D} ) + [ ( \underline{H} + i \underline{D} ) \;
(\underline{\theta} + i \underline{\epsilon} ) ]_{S} \;
(\underline{H} - i \underline{D} ) + {1 \over 2} ( \underline{H}
+ i \underline{D} )^{2}_{S} \; (\underline{\theta} - i
\underline{\epsilon} ) \; , \nonumber
\\
(\underline{H} - i \underline{D}) = (\underline{B} - i
\underline{E} ) - [ ( \underline{B} + i \underline{E} ) \;
(\underline{\theta} + i \underline{\epsilon} ) ]_{S} \;
(\underline{B} - i \underline{E} ) - {1 \over 2} ( \underline{B}
+ i \underline{E} )^{2}_{S} \; (\underline{\theta} - i
\underline{\epsilon} ) \; .\nonumber
\\
\label{1.6}
\end{eqnarray}
Under the Lorentz group the quaternion $ \underline{\Phi} =
(\underline{\theta} - i \underline{\epsilon} )$ transforms
according to the law
\begin{eqnarray}
\underline{\Phi}' = L^{*} \; \underline{\Phi}\; \bar{L}^{*}\; .
\label{1.7}
\end{eqnarray}
We have arrived at a key relationship determining the small Lorentz
group for the noncommutativity object:
\begin{eqnarray}
L^{*} \; \underline{\Phi}\; \bar{L}^{*} = \underline{\Phi} \; ,
\qquad \mbox{or} \qquad L^{*} \; \underline{\Phi}=
\underline{\Phi}\; L^{*} \;. \label{1.8}
\end{eqnarray}
\noindent It describes all inertial observers to whom the effects
of non-commutativity will look exactly the same. It is
convenient to introduce a new variable $\Phi^{*}=\varphi $; then
eq. (\ref{1.8}) reads
\begin{eqnarray}
(k_{0} + \underline{k}) \; \underline{\varphi}=
\underline{\varphi}\; (k_{0} + \underline{k}) \; . \label{1.9}
\end{eqnarray}
\noindent Evidently, eq. (\ref{1.9}) is satisfied if and only
if the vector quaternions $\underline{k}$ and $\underline{\varphi} $ are
proportional to each other:
\begin{eqnarray}
L_{\varphi} = (k_{0} + w \; \underline{\varphi}) \; , \qquad
\underline{\varphi} =
(\underline{\theta} + i \underline{\epsilon} ) \; .
\label{1.9a}
\end{eqnarray}
To proceed further in the description of the subgroup (\ref{1.9a}) we
should distinguish between two cases: $\underline{\varphi}^{2}
\neq 0$ and $\underline{\varphi}^{2} = 0$.
In the first case one can introduce a new parametrization in terms
of a complex angle and a unit quaternion:
\begin{eqnarray}
\underline{\varphi}^{2} \neq 0\;, \qquad k_{0} = \cos \chi \; ,
\qquad \underline{\varphi} = \sqrt{\underline{\varphi}^{2} } \;
{ \underline{\varphi} \over \sqrt{\underline{\varphi}^{2} }} =
\sin \chi \; \hat{\underline{\varphi}}\; , \qquad
\hat{\varphi}^{2} = +1 \; ; \label{1.10}
\end{eqnarray}
\noindent so that the small Lorentz group with simple Abelian
multiplication law is given by
\begin{eqnarray}
L_{\varphi} = \cos \chi + \sin \chi \; \hat{\underline{\varphi}}\;
, \qquad \chi'' = \chi' + \chi \; .
\end{eqnarray}
Taking the matrix realization of the quaternion units, $e_{0}=I, e_{a} =
-i \sigma_{a}$, we get an explicit spinor form for $L_{\varphi} $
\begin{eqnarray}
L_{\varphi} = \cos \chi - i \; \sin \chi \; \vec{\sigma} \; (
\vec{n} + i \vec{m}) \; , \qquad \vec{n} + i \vec{m} = { (
\vec{\theta} + i \; \vec{\epsilon} ) \over \sqrt{ \vec{\theta}^{\;2}
- \vec{\epsilon}^{\;2} + 2i\; \vec{\theta} \; \vec{\epsilon} }} \; .
\label{1.10a}
\end{eqnarray}
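The statements above are easy to check numerically; the following sketch (NumPy assumed) builds a random nonisotropic $\underline{\Phi}$, forms $L_{\varphi}$ with a complex angle, and verifies both $L\bar{L}=1$ and the invariance condition (\ref{1.8}):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def qmul(q, p):                    # bi-quaternion product
    q0, qv = q; p0, pv = p
    return (q0*p0 - qv @ pv, q0*pv + p0*qv + np.cross(qv, pv))
qbar  = lambda q: (q[0], -q[1])                             # quat. conj.
qstar = lambda q: (np.conjugate(q[0]), np.conjugate(q[1]))  # complex conj.

theta, eps = rng.normal(size=3), rng.normal(size=3)
Phi  = (0j, theta - 1j*eps)        # noncommutativity quaternion
phiv = np.conjugate(Phi[1])        # varphi = Phi^*, here nonisotropic
phat = phiv / np.sqrt(phiv @ phiv) # unit quaternion, phat.phat = 1
chi  = 0.3 + 0.7j                  # arbitrary complex angle
L    = (np.cos(chi), np.sin(chi) * phat)

s, v = qmul(L, qbar(L))            # L Lbar = 1 ?
print(np.isclose(s, 1), np.allclose(v, 0))
s, v = qmul(qmul(qstar(L), Phi), qbar(qstar(L)))  # L* Phi Lbar* = Phi ?
print(np.isclose(s, 0), np.allclose(v, Phi[1]))
\end{verbatim}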
Particular examples in which the physical interpretation of the
$\chi$-parameter is evident can be pointed out immediately: Euclidean
rotations and Lorentz boosts along the vector $\vec{n}$:
\begin{eqnarray}
\vec{n} \neq 0 \;, \; \vec{m}= 0\; , \; \chi = \alpha + i \; 0\;
,\qquad L_{\varphi} = \cos \alpha - i \; \sin\alpha \;
\vec{\sigma} \; \vec{n} \;; \nonumber
\\
\vec{n} \neq 0 \;, \; \vec{m}= 0\; , \; \chi = 0 + i\; \beta \;
,\qquad \;\;\; \; L_{\varphi} = \mbox{ch}\; \beta + \;
\mbox{sh} \; \beta \; \vec{\sigma} \; \vec{n} \; .
\end{eqnarray}
\noindent It should be added that, with the help of a special
Lorentz transformation, the case of an arbitrary
nonisotropic $\theta^{\mu \nu}$ can be brought to a simpler form:
\begin{eqnarray}
\vec{n} + i \vec{m} \;, \qquad \vec{n}^{2} - \vec{m}^{2} = 1=
\mbox{inv} \; , \;\; \vec{n}\; \vec{m} = 0=\mbox{inv} \qquad
\Longrightarrow \nonumber
\\
\vec{n}' + i \; 0 \;, \qquad \vec{n}^{'2}= 1 =\mbox{inv} \; , \qquad
L'_{\varphi} = \cos \chi + i \; \sin \chi \; \vec{\sigma} \; \vec{n}' \; .
\nonumber
\end{eqnarray}
Thus, for an arbitrary $\theta^{\mu \nu}$-tensor we arrive at the
corresponding small Lorentz group $SO(2) \otimes SO(1,1)$; just
this structure was previously described for special choices of the
non-commutativity matrix in \cite{Alvarez-Vazquez-2003},
\cite{Chaichian-Kulish-Nishijima-Tureanu-2008}.
In the second \underline{(isotropic)} case the normalization
condition $L\; \bar{L}=1 $ gives
\begin{eqnarray}
\underline{\varphi}^{2} = 0\; , \qquad k_{0}^{2} + w^{2} \;
\underline{\varphi}^{2} = 1 \;, \qquad k_{0} = \pm 1\; ; \nonumber
\end{eqnarray}
\noindent therefore now the small Lorentz group (see (\ref{1.9a}))
is specified by the relations
\begin{eqnarray}
L = \pm (1 + w\; \underline{\varphi} ) \; , \qquad
\underline{\varphi}^{2}= 0\; , \qquad w'' = w' + w \; ,
\label{1.13}
\end{eqnarray}
\noindent where $w$ is any complex number. This is the Abelian
group of displacements in the complex plane, $T_{2}$.
For readers preferring the Maxwell theory in vector notation,
let us give a translation to this language. The electromagnetic vectors
make up two complex 3-vectors under the complex orthogonal group
$SO(3.C)$:
\begin{eqnarray}
\vec{f} = \vec{B} -i\; \vec{E} \; , \qquad \vec{h} =
\vec{H} - i \; \vec{D} \; ,\qquad \vec{K} = \vec{\epsilon} -
i \; \vec{\theta} \; . \label{3.1a}
\end{eqnarray}
\noindent The complex orthogonal group may be defined through the $2
\rightarrow 1$ mapping from $SL(2.C)$ -- its elements are given
by
\begin{eqnarray}
SO(3.C)\; , \qquad O(k) =\left | \begin{array}{lll}
1 -2 (k_{2}^{2} + k_{3}^{2}) & -2k_{0}k_{3} + 2k_{1}k_{2} & +2k_{0}k_{2} + 2k_{1}k_{3} \\
+2k_{0}k_{3} + 2k_{1}k_{2} & 1 -2 (k_{3}^{2} + k_{1}^{2}) & -2k_{0}k_{1} + 2k_{2}k_{3} \\
-2k_{0}k_{2} + 2k_{1}k_{3} & +2k_{0}k_{1} + 2k_{2}k_{3} & 1 -2 (k_{1}^{2} + k_{2}^{2})
\end{array} \right |
\label{3.1b}
\end{eqnarray}
\noindent governs their behavior under the Lorentz group in accordance
with $ O(k) \; \vec{f} = \vec{f}\; '\; , \; O(k) \; \vec{h} =
\vec{h}' . $ One may note straightforwardly the identity
\begin{eqnarray}
O(k_{0}, \vec{k} ) \lambda \; \vec{k} = \lambda \; \vec{k} \;
\label{3.1c}
\end{eqnarray}
\noindent which is the basis for exploring the problem of small groups
of complex 3-vectors.
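Both (\ref{3.1b}) and (\ref{3.1c}) can be verified directly; a small numerical sketch (NumPy assumed):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def O_of_k(k0, k):        # the matrix of eq. (3.1b)
    k1, k2, k3 = k
    return np.array([
      [1-2*(k2**2+k3**2), -2*k0*k3+2*k1*k2,   2*k0*k2+2*k1*k3],
      [2*k0*k3+2*k1*k2,   1-2*(k3**2+k1**2), -2*k0*k1+2*k2*k3],
      [-2*k0*k2+2*k1*k3,  2*k0*k1+2*k2*k3,   1-2*(k1**2+k2**2)]])

k  = rng.normal(size=3) + 1j*rng.normal(size=3)
k0 = np.sqrt(1 - k @ k)   # the condition k0^2 + k^2 = 1
O  = O_of_k(k0, k)
print(np.allclose(O @ O.T, np.eye(3)))  # complex orthogonal: True
print(np.allclose(O @ k, k))            # identity (3.1c): True
\end{verbatim}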
\section{ On discrete dual symmetry }
Let us turn to the Maxwell equations in quaternion form and rewrite
them as follows
\begin{eqnarray}
\nabla ( \underline{f} + \underline{h}) +
\stackrel{--------}{[\nabla ( \underline{f} -
\underline{h})]^{*}}= 0 \; , \label{2.1}
\end{eqnarray}
\noindent where $\underline{f} = \underline{B} - i \underline{E}
\; , \; \underline{h} = \underline{H} - i \underline{D} \;$ .
This equation is invariant under dual rotation:
\begin{eqnarray}
\underline{f} + \underline{h} = \underline{G}\; , \qquad \underline{G}' = e^{i \chi} \; \underline{G} \; ,
\nonumber
\\
\underline{f} - \underline{h} = \underline{R}\; , \qquad \underline{R}' = e^{-i \chi} \; \underline{R} \; .
\label{2.2}
\end{eqnarray}
\noindent We adhere to \cite{Abe} and take dual rotation for
$\underline{K} = \underline{\theta} - i \underline{\epsilon}$ in
the form $ \underline{K} ' = e^{i \chi} \; \underline{K} \;$. In
these variables, the constitutive relations read
\begin{eqnarray}
\underline{f}= \underline{h} + (\underline{h}^{*}
\underline{K}^{*})_{s} \underline{h}+ {1 \over 2} ( \underline{h}
^{*} \underline{h}^{*})_{s}\; \underline{K} \; , \nonumber
\\
\underline{h}= \underline{f} - (\underline{f}^{*}
\underline{K}^{*})_{s} \underline{f}- {1 \over 2} ( \underline{f}
^{*} \underline{f}^{*})_{s}\; \underline{K} \; . \label{2.3}
\end{eqnarray}
\noindent Summing and subtracting these two equations we get
\begin{eqnarray}
0 = -{1 \over 2} ( \underline{R}^{*} \underline{K}^{*})_{s} \underline{G} +
{1\over 2} (\underline{G}^{*} \underline{K}^{*})_{s} \underline{R}
- {1 \over 2} ( \underline{G}^{*} \underline{R}^{*})_{s}
\underline{K} - {1 \over 2} (\underline{R}^{*}
\underline{G}^{*})_{s} \underline{K} \; , \nonumber
\\
2\underline{R} = {1 \over 2} ( \underline{R}^{*}
\underline{K}^{*})_{s} \underline{R} + {1\over 2} (\underline{G}^{*}
\underline{K}^{*})_{s} \underline{G} + {1 \over 2} (
\underline{G}^{*} \underline{G}^{*})_{s} \underline{K} + {1 \over
2} (\underline{R}^{*} \underline{R}^{*})_{s} \underline{K} \; . \label{2.4}
\end{eqnarray}
Requiring invariance of these two relations with respect to the dual
rotation
\begin{eqnarray}
\underline{G}= e^{-i\chi} \underline{G}' \; , \qquad
\underline{G}^{*} = e^{i\chi} \underline{G}^{'*} \; , \nonumber
\\
\underline{R}= e^{i\chi} \underline{R}' \; , \qquad
\underline{R}^{*} = e^{-i\chi} \underline{R}^{'*} \; , \nonumber
\\
\underline{K} = e^{-i\chi} \underline{K'} \; , \qquad
\underline{K}^{*} = e^{i\chi} \underline{K}^{'*} \; , \nonumber
\end{eqnarray}
\noindent we arrive at two equations: $ e^{i\chi} = e^{-3i\chi} \;
, \; e^{-i\chi} = e^{+3i\chi} $
with the simple solution
$ e^{i\chi} = 1,-1, +i, -i \; . $ Therefore, only a discrete dual
transformation leaves the nonlinear constitutive
equations invariant;
the nontrivial case corresponds to $e^{i\chi}= \pm i $.
Thus, the status of the dual symmetry in noncommutative electrodynamics
differs from that in the ordinary linear Maxwell theory in
commutative space; this fact remains to be interpreted in physical
terms.
\vspace{5mm}
The authors are grateful to Professor Ya.A. Kurochkin for discussions
and advice. This work was partially supported by the Fund for
Basic Research of Belarus, grant F08R-039.
\section{Introduction}
An understanding of the gauge and other symmetries in string theory
is of the utmost importance in understanding the physical significance
of strings. This is lacking at the moment. From a practical point
of view we would like the symmetries to be manifest in the computational
scheme also. An approach that looks promising to us in this respect
is the loop variable approach \cite{BS1} which is a
generalization of the sigma
model renormalization group method\cite{CL,CC,AS,FT,DS,JP,BM}.
However the work in
\cite{BS1,BS2} deals with the free theory. One needs to extend it
to include interactions. There are several issues that arise:
one is the question of modifying the gauge transformations.
The second is the question of massive modes and finally there is the
issue of going off shell. There is a well defined answer to these
questions in string field theory \cite{W1}
but we would like to approach it in
the loop variable framework because of the computational simplicity.
The loop variable approach was developed as an extension of
the results of \cite{BS3,BS4}
to gauge invariant interactions.
In \cite{BS3} it was shown that the equations of motion
of the tachyon
in string
theory can be written as a proper time equation by
analogy with point particles. The connection with the renormalization
group follows from the fact that the proper time $\tau$ in string theory
is related to the coordinate $z$ of the sigma model by
$z= e^{\tau + i \sigma}$ and so $\frac{d}{d\tau}$ is a generator
of scale transformations. It was also shown
in \cite{BS3} that if one keeps a
finite cutoff one finds that instead of obtaining the low energy
non polynomial effective equations of motion where the massive
modes are
integrated out, one gets an equation
in which the massive modes are present and
which, for an appropriate choice of the cutoff,
is quadratic in the fields.
For the special case of a tachyon we showed in \cite{BS4} that the off
shell
3-tachyon vertex of string field theory can be reproduced if we keep a
finite cutoff. In the language of vertex operators a finite cutoff
is equivalent to a hole of finite radius on the world sheet. If one
lets the radius go to zero one recovers the usual punctured world
sheet. In this case the vertex operator has to be of dimension (1,1)
or equivalently the particle has to be on shell. If we keep a finite
radius, on the other hand, the particle can be off shell.
In the language of the renormalization group, if one is far
away from the fixed point and one has all the irrelevant operators
then, effectively, one has a cutoff in the theory. When the
cutoff goes to zero one is pushed towards the neighbourhood of
a fixed point where only the
marginal and relevant operators are present. Conversely,
if one is to keep a finite radius (cutoff) then one should keep all
the massive modes. All this analysis has been done for the tachyon.
If one
keeps track of the reparametrizations of the boundary of this hole
in the world sheet, then one needs extra variables in the theory
and it turns out that
this enables one to write down gauge invariant (free)
equations for the massive modes \cite{BS1,BS2}.
In order to extend the results obtained for the tachyon to higher
mass states what
needs to be done is to generalize
this construction to the interacting case. Fortunately, for the massless
vector one does not need all this machinery to maintain gauge
invariance. In this paper we concentrate on the massless case and
for simplicity we stay close to the mass shell. It will
turn out that the proper time formalism can be extended to describe
this situation in a straightforward way. We will do it both for the
point particle and the string. The results of \cite{BS3,BS4}
suggest that it should be possible to extend this off the mass shell
also. We will also discuss briefly the propagation of
a gauge (point) particle.
This paper is organized as follows: In Section 2 we describe briefly
three different schemes for deriving free gauge invariant
equations in the sigma model formalism. In Section 3 we describe
the proper time formalism for a particle in a background vector field.
The mechanism
of gauge invariance in the interacting case can be understood from this
example. In Section 4 we extend this to strings and discuss the
mechanism of gauge invariance there. In Section 5 we give some
concluding remarks and point out the similarity with Witten's
formulation of the background independent open string
equation.
\newpage
\section{Gauge Invariance in the Sigma Model Formalism}
\setcounter{equation}{0}
Let us describe three different ways of deriving the equations of motion
for a massless vector field, i.e. Maxwell's equations, in the open
string. They each involve imposing some requirements on the vertex
operator:
\begin{equation}
\int dz V(x) \equiv A_{\mu}(x)\mbox {$ \partial$} _{z} X^{\mu} \equiv
\int dz \int dk A_{\mu} (k) e^{ikX} \mbox {$ \partial$} _{z} X^{\mu}
\end{equation}
\underline{Method I}:
We require that $\mbox {$\frac{\delta}{\delta \sigma}$} V(x) \mid _{\sigma =0} =0$ where the \mbox{$\sigma $}
- dependence arises due to ultraviolet divergences that we usually
remove by normal ordering. Thus:
\begin{equation}
V(x) = : V_{N}(x, \mbox{$\sigma $} ):
\end{equation}
To get the \mbox{$\sigma $} - dependence of the expression in (2.1) a simple
method is to consider the vertex operator
$e^{i(kX + A \partial X)}$ and write it as
\begin{equation}
exp(i(kX + A \mbox {$ \partial$} X) + \frac{k^{2}}{2} <XX> + A.k <X \mbox {$ \partial$} X> )
\end{equation}
\[
= exp(i(kX + A \mbox {$ \partial$} X) + k^{2} \sigma + A.k \mbox {$ \partial$} \sigma )
\]
We have used $<XX> = 2 \sigma $ and $ <X \mbox {$ \partial$} X> = \mbox {$ \partial$} \sigma $ .
Expanding the
exponent and keeping the term linear in $A$ we get
\begin{equation}
A_{\mu}(k) e^{ikX} \mbox {$ \partial$} _{z} X^{\mu}
= A_{\mu }(k) :e^{ikX} \mbox {$ \partial$} _{z}X^{\mu}
:e^{k^{2} \sigma }
\end{equation}
\[
-ik.A:e^{ikX}:\mbox {$ \partial$} _{z} \mbox{$\sigma $} e^{k^{2} \sigma }
\]
Varying w.r.t. \mbox{$\sigma $} gives
\begin{equation}
(k^{2} A_{\mu} (k) - k_{\mu} k.A ) :e^{ikX}\mbox {$ \partial$} X^{\mu}: =0
\end{equation}
which is nothing other than $\mbox {$ \partial$} _{\mu} F_{\mu \nu} = 0$ in momentum
space. Note that the crucial point (for gauge invariance) in this
derivation is the fact that \mbox{$\sigma $} depends on $z$.
This is already a generalization of the usual $\beta$ - function method
where we require $ \frac{dV}{dln a} = 0$ , where $a$ is a fixed
cutoff. One way to think of this is that
the flat world sheet cutoff $a$ is being replaced by $ae^{\sigma}$.
To lowest order in \mbox{$\sigma $} this is sufficient. To get results accurate
to higher orders one can replace the cutoff by the geodesic distance,
as has been done for instance in \cite{BE}. There are other ways
of obtaining the higher order pieces also. Another crucial feature
is that in deriving (2.5) one has to perform an integration by
parts. This assumes that there are no surface terms. This will
not be true when we include interactions.
\underline{Method II} : We impose
\begin{equation}
L_{0} V = 0 = L_{1} V
\end{equation}
where $L_{n}$ are the Virasoro generators. [$L_{n} V=
0 $ trivially
for $n>1$].
Naively this imposes two requirements on the vertex operator:
\begin{equation}
k^{2}A^{\mu}=0 \, \, and \, \, k.A=0
\end{equation}
the so called 'physical state' conditions. However note that we have the
freedom to add to $V$ vertex operators of the form
\begin{equation}
B(k) k . \mbox {$ \partial$} X e ^{ikX}
= B(k) \mbox {$ \partial$} _{z} e^{ikX} = L_{-1}(Be^{ikX})
\end{equation}
i.e. a total derivative.
Thus (2.7) becomes
\begin{equation}
k^{2}A^{\mu} + k^{\mu} k^{2}B =0
\end{equation}
and
\[
k.A + k^{2}B =0
\]
In the first equation we can replace
$k^{2} B $ by $-k.A$ and obtain eqn(2.5): $k^{2}A^{\mu} - k^{\mu}k.A=0$.
The role played by the Liouville mode is taken over by the
auxiliary field $B$.
\underline{Method III}
We require $ \{ Q, cV \} =0$ where $Q$ is the BRST operator and $c$
is the ghost (fermionic) field. Using
\begin{equation}
Q= \oint dz c(z)[-1/2 \mbox {$ \partial$} X \mbox {$ \partial$} X + \mbox {$ \partial$} c b ]
\end{equation}
and $V$ as before we get
\begin{equation}
\{Q,cV\}= 1/2(A.k \mbox {$ \partial ^{2}$} c c(z) - i k^{2} A_{\mu} \mbox {$ \partial$} X^{\mu} \mbox {$ \partial$} c(z) c(z))
\end{equation}
Setting the RHS of (2.11) to zero we would get the usual physical state
conditions (2.7). However we can add to $cV$ another operator of the
same dimension and ghost number:
\begin{equation}
W = B(k) \mbox {$ \partial$} _{z} c e^{ikX}
\end{equation}
and
\begin{equation}
\{Q,W\}= (B c \mbox {$ \partial ^{2}$} c + ik \mbox {$ \partial$} X B c \mbox {$ \partial$} c ) e^{ikX}
\end{equation}
Thus we should actually require that
$\{ Q, cV+W \} =0$ and this gives two equations:
\begin{equation}
A.k/2 - B =0
\end{equation}
and
\[
k^{2}/2 A^{\mu} - k^{\mu} B =0
\]
which, combined together, give Maxwell's equation. Note that this
method is very similar to method II in that we need an auxiliary field
$B$.
Each of these methods can be generalized to the massive cases as
well. Before doing so, let us describe the gauge transformations.
In method III it is obvious:
\begin{equation}
\delta (cV) = [Q, \Lambda ]
\end{equation}
where $\Lambda$ has ghost number zero, since $\{ Q, [Q, \Lambda ] \} =0$
identically (in 26 dimensions). That is we can add to the vertex
operator $cV$ the piece $[Q, \Lambda ]$ and it does not affect the
BRST invariance properties.
Thus letting $\Lambda = \Lambda _{0} e^{ikX}$ we get
\begin{equation}
[Q, \Lambda ] = cik^{\mu} \Lambda \mbox {$ \partial$} X^{\mu} e^{ikX} +
k^{2}/2 \mbox {$ \partial$} c e^{ikX}
\Lambda
\end{equation}
which gives
\begin{equation}
\delta A_{\mu} = k_{\mu} \Lambda \, \, , \delta B = (k^{2}/2) \Lambda
\end{equation}
This method is obviously the sigma model version of Witten's string
field theory equation\cite{W1}:
\begin{equation}
Q \Psi = 0
\end{equation}
and has the gauge invariance :
\begin{equation}
\delta \Psi = Q \Lambda
\end{equation}
The generalization to higher mass levels is immediate - it is just a
matter of writing down the relevant vertex operators. Although we
will not need it in this paper we will, for future reference,
give very briefly the results
for the next mass level. The general vertex operator is
\begin{equation}
W= [ S^{\mu} c \mbox {$ \partial ^{2}$} X^{\mu} + S^{\mu \nu} c \mbox {$ \partial$} X^{\mu} \mbox {$ \partial$} X^{\nu}
+ D \mbox {$ \partial ^{2}$} c +
\end{equation}
\[
+ B^{\mu} \mbox {$ \partial$} c \mbox {$ \partial$} X^{\mu} + E c \mbox {$ \partial$} c b ] e^{ikX}
\]
The equations are $\{ Q, W \} =0 $.
\begin{equation}
-(k^{2}/2 +1) S^{\mu} + B^{\mu} + ik^{\mu} D =0
\end{equation}
\[
-S^{\mu} + ik^{\mu} S^{\mu \nu} + ik^{\mu} D + B^{\mu} =0
\]
\[
ik.S/3 + S^{\mu}_{\mu} /6 + D + 2/3 E =0
\]
\[
(1+ k^{2}/2 )D + ik.B/2 -3/2 E =0
\]
\[
(k^{2}/2 +1)S^{\mu \nu} - ik^{\mu} B^{\nu} + 1/2 \delta ^{\mu \nu} E=0
\]
and the gauge transformations are $[Q, \Lambda ] $ with
\begin{equation}
\Lambda = [\Lambda ^{\mu} \mbox {$ \partial$} X^{\mu} + \Lambda cb]
\end{equation}
which gives:
\begin{equation}
\delta S^{\mu} = \Lambda ^{\mu} - ik^{\mu} \Lambda
\end{equation}
\[
\delta D = -ik.\Lambda /2 -3/2 \Lambda
\]
\[
\delta E = -(k^{2}/2 +1) \Lambda
\]
\[
\delta S^{\mu \nu} = i/2(k^{(\mu} \Lambda ^{\nu )} ) +1/2 \delta
^{\mu \nu} \Lambda
\]
\[
\delta B^{\mu} = (k^{2} /2 \, \, +1) \Lambda ^{\mu}
\]
(2.21) is invariant under (2.23) only in 26 dimensions.
In method II the gauge transformation evidently corresponds to the
freedom of adding a piece $L_{-1} B e^{ikX}$ to the vertex operator
$A_{\mu} \mbox {$ \partial$} X^{\mu}
e^{ikX}$. The point is that this ambiguity is already
allowed for by the addition of (2.8) and hence {\em a fortiori}
is an invariance of the equations of motion.
The generalization to higher mass levels
would be to add
\begin{equation}
L_{-n} \Psi _{n}
\end{equation}
to the vertex operator $V$ and then impose
\begin{equation}
L_{m} (V+ \Sigma _{n} L_{-n} \Psi _{n} ) =0
\end{equation}
The equations obtained on eliminating the $\Psi _{n}$ are
guaranteed to have gauge invariance of the form $V \rightarrow
V + L_{-n} \Lambda _{n}$ \cite{BP}.
This is the sigma
model version of the Banks-Peskin string field
theory. Of course, as shown there, this naive generalization, while it
has all the gauge invariances, does not correspond to string theory.
One has to get rid of many redundant fields and gauge invariances
associated with those fields. The end result is a fairly involved
expression for the equation of motion\cite{BP}.
Nevertheless one could, if one
so desired, transcribe these results to the sigma model framework.
Finally, in method I gauge invariance corresponds to the freedom
to add total derivatives of the form $\mbox {$ \partial$} _{z} \Lambda(X)$
to the action (2.1).
The generalization to massive modes is what is described in detail in
\cite{BS1,BS2}. It involves introducing an infinite number of new
variables $x_{n}$ and vertex operators are expressed as derivatives
in $x_{n}$ rather than $z$. The freedom to add total derivatives
in $z$ is generalized to that of adding total derivatives in $x_{n}$.
This method is closest in spirit to the renormalization group since
in the end we still require $\mbox {$\frac{\delta}{\delta \sigma}$} V =0$. The gauge transformations
in this method are fairly simple\cite{BS1,BS2}. We will not describe
it here since we are not going to discuss the massive modes.
In this section we have described three approaches to understanding
the issue of gauge invariance in the sigma model language, at the
free level. We now have to generalize this to
the interacting level. The BRST method (III) has been generalized
in the string field theory language to the interacting level \cite{W1}
and in a
form more closely related to
sigma model and two dimensional field theory \cite{AA,AC,W2,WL,W3}.
We are looking
for an analogous generalization for the first method. At the
free level there appear to be certain advantages to this method and
the hope is that this may be true at the interacting level also.
In this paper we will restrict ourselves to the massless vector
(and the tachyon) - so we will not need the extra variables
used in the loop variable generalization of the first method.
\newpage
\section{The Proper Time Formalism and Gauge Invariance for Point
Particles}
\setcounter{equation}{0}
The proper time formalism for free particles is well known
\cite{F,S,N,M,SM,T}. In \cite{BS3}
we modified it to describe a self-interacting scalar particle.
It was then
shown that one could write a very similar equation for strings and this
led
directly to a proof of the proportionality of the equations of motion
and the $\beta$ - function (for the tachyon). Describing gauge
theories in the first quantized formalism is a little harder. A lot of
work
has been done in applying the BRST formalism to this end \cite{SB}.
In this
section
we want to describe a point particle in a background gauge field using
the proper time formalism. We will also discuss briefly
the propagation of a gauge particle itself (albeit a free one) which is a
little trickier.
The proper time equation for a massless free relativistic particle
is
\begin{equation}
\frac {\mbox {$ \partial$} \phi [X, \tau]}{\mbox {$ \partial$} \tau} = \Box \phi [X, \tau ] =0
\end{equation}
The solution to the first part of the equation is
\begin{equation}
\phi [X, \tau ] = \int dX_{i} \int _{X(0) =X_{i}} ^{X(T)=X_{f}}
{\cal D} X e^{i/2\int ^{T}_{0} d \tau ( \frac{\partial X}
{\partial \tau })^{2}}
\phi [X,0]
\end{equation}
The kernel in equation (3.2) is the evolution operator in
proper time. Integrating over $T$ from $0$ to $\infty$ sets
$\frac{d \phi }{d \tau} =0$ in eqn.(3.1) and gives us the
Klein Gordon propagator. We will use (3.2) and require $ \frac{d \phi}
{d\tau} =0$ as in \cite{BS3}. We can, if we want, now modify
the action to include various backgrounds and then requiring
$\frac{d \phi}
{d \tau} =0$ should give the required generalization of (3.1) to the
interacting equation. In \cite{BS3}
this
was done for a self interacting scalar field.
Following \cite{BS3} we write
\begin{equation}
\phi (k',\tau ) = \int dk < e^{ik'X(\tau )}e^{ikX(0)}> \phi (k,0)
\end{equation}
However, unlike in \cite{BS3},
the expectation value is calculated using the action
\begin{equation}
\int ^{T}_{0} d \tau [ 1/2 (\frac {\mbox {$ \partial$} X }{\mbox {$ \partial$} \tau })^{2} +
A_{\mu} \frac {\mbox {$ \partial$} X^{\mu}}{\mbox {$ \partial$} \tau }]
\end{equation}
The free two point function is given by :
\begin{equation}
<X^{\mu} ( \mbox {$ \tau _{1}$} ) X^{\nu} ( \mbox {$ \tau _{2}$} ) > = \delta ^{\mu \nu} \mid \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$}
\mid
\end{equation}
To lowest order we get using momentum conservation
\begin{equation}
\phi (k, \tau ) = e^{ k^{2} \tau } \phi (k,0)
\end{equation}
Requiring $\frac{d \phi}{d \tau} \mid _{\tau =0} = 0$
gives $k^{2} \phi =0$
- the massless Klein Gordon equation. To next order we have to calculate
\begin{equation}
\int ^{T} _{0} d \mbox {$ \tau _{1}$} < e^{ik' X(\tau )}\mbox{$\dot{X}$} ( \mbox {$ \tau _{1}$} ) e^{ipX( \mbox {$ \tau _{1}$} )}
e^{ikX (0)} >
\end{equation}
In (3.7) we have written $A_{\mu} (x) \frac{\partial X^{\mu}}
{\partial \tau} $
as $\int dp \, A_{\mu}(p) e^{ipX(\mbox {$ \tau _{1}$} )} \mbox{$\dot{X^{\mu}}$} (\mbox {$ \tau _{1}$} )$. The
range of integration is restricted from 0 to $T$. We can
simplify the calculation by exponentiating $\mbox{$\dot{X}$} (\mbox {$ \tau _{1}$} )$ into
$e^{i(p.X(\tau_{1}) + p_{1}. \dot{X} (\tau _{1}))}$ and we will
remember in the end to keep the piece linear in $p_{1}$.
We get, for (3.7),
\begin{equation}
\int d \mbox {$ \tau _{1}$} exp ( k'.p( \tau - \mbox {$ \tau _{1}$} ) -k'.p_{1} + p.k \mbox {$ \tau _{1}$} +
p_{1}.k + k'.k \tau )
\end{equation}
The linear piece in $p_{1}$ gives
\begin{equation}
(p_{1}.k - p_{1}.k') \int ^{\tau }_{0} d \mbox {$ \tau _{1}$} exp((k'.p +k'.k)\tau
-k'.p \mbox {$ \tau _{1}$} + k.p \mbox {$ \tau _{1}$} )
\end{equation}
which in turn gives (using $k+k' +p =0$)
\begin{equation}
(p_{1}.k -p_{1}.k') e^{-k'^{2} \tau}[\frac{ e^{(k.p-k'.p) \tau } -1}
{p.(k-k')}]
\end{equation}
Setting $k'^{2}=0$ and requiring $\frac{d}{d\tau} \mid _{\tau =0} =0$
gives the piece (replacing $p_{1}$ with $A_{\mu} (p)$)
\begin{equation}
(A.k - A.k')\phi (k) =(2A(p).k + A(p).p)\phi (k)
\end{equation}
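The elementary $\mbox {$ \tau _{1}$}$-integral behind (3.10) can be double-checked symbolically (sympy assumed): we verify that the bracket in (3.10) is the correct primitive with the correct lower limit.
\begin{verbatim}
import sympy as sp

a, tau = sp.symbols('a tau')       # a stands for (k - k').p
bracket = (sp.exp(a*tau) - 1) / a  # the bracket of eq. (3.10)
print(sp.simplify(sp.diff(bracket, tau) - sp.exp(a*tau)))  # -> 0
print(bracket.subs(tau, 0))                                # -> 0
\end{verbatim}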
To next order we have to calculate
\begin{equation}
\int ^{\tau}_{0} d \mbox {$ \tau _{1}$} \int ^{\tau _{1}}_{0} d \mbox {$ \tau _{2}$}
<e^{ik'.X(\tau )} e^{i (p.X(\mbox {$ \tau _{1}$} ) + p_{1} \dot{X} (\mbox {$ \tau _{1}$} ))}
e^{i(qX(\mbox {$ \tau _{2}$} ) + q_{1} \dot{X} ( \mbox {$ \tau _{2}$} ) )} e^{ik' X(0) }>
\end{equation}
In calculating this expression we need correlators like
$<\mbox{$\dot{X}$} (\tau _{1}) \mbox{$\dot{X}$} (\tau _{2})>$
and it is important to keep track of the absolute value prescription
in (3.5) (otherwise the correlator vanishes). To lowest order
in momentum we have
\begin{equation}
\lim _{\epsilon \rightarrow 0}
p_{1}.q_{1} \int _{0}^{\tau} d \mbox {$ \tau _{1}$} \int _{0}^{\mbox {$ \tau _{1}$}} d \mbox {$ \tau _{2}$}
<[\frac{X(\mbox {$ \tau _{1}$} + \epsilon ) - X( \mbox {$ \tau _{1}$} - \epsilon )}{2\epsilon}]
[\frac{X(\mbox {$ \tau _{2}$} + \epsilon ) - X( \mbox {$ \tau _{2}$} - \epsilon )}{2 \epsilon}]>
\end{equation}
As long as $\tau _{2} < \mbox {$ \tau _{1}$} - 2 \epsilon $
the correlator is zero. Otherwise it gives
\begin{equation}
\int _{\mbox {$ \tau _{1}$} - 2 \epsilon}^{\mbox {$ \tau _{1}$}} d \mbox {$ \tau _{2}$} (2(\mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} ) - 4 \epsilon )
=-4 \epsilon ^{2}
\end{equation}
Thus (3.13) gives $-p_{1}.q_{1} \tau $ and acting on it with
$\frac{d}{d \tau}$ gives $-p_{1}.q_{1} $ or $-A^{2}$. Adding all three
contributions gives $(i \mbox {$ \partial$} - A ) ^{2}\phi $ the Klein Gordon
equation in a background electromagnetic field.
The other pieces from (3.12) give zero when we act with
$\frac{d}{d \tau } \mid _{\tau =0}$ on them.
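The point-splitting computation (3.13)--(3.14) can also be spot-checked numerically (NumPy assumed): with $<X(a)X(b)> = \mid a-b \mid$, the regularized $<\dot{X}\dot{X}>$ correlator has support only for $\mid \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} \mid < 2\epsilon$, and its double integral tends to $-\tau$.
\begin{verbatim}
import numpy as np

tau, eps, n = 1.0, 1.0e-3, 4000
dt1, total = tau / n, 0.0
for t1 in np.linspace(dt1, tau, n):
    # the correlator vanishes for t1 - t2 > 2*eps, cf. the text
    t2 = np.linspace(max(0.0, t1 - 4*eps), t1, 400)
    c = (np.abs(t1+eps - (t2+eps)) - np.abs(t1+eps - (t2-eps))
         - np.abs(t1-eps - (t2+eps)) + np.abs(t1-eps - (t2-eps)))
    c /= (2*eps)**2
    total += np.sum(0.5*(c[:-1] + c[1:]) * np.diff(t2)) * dt1
print(total)   # ~ -1.0 = -tau, i.e. the term contributes -p1.q1
\end{verbatim}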
From (3.4) one can see that the construction is gauge invariant.
The transformation $A_{\mu} \rightarrow A_{\mu} + \mbox {$ \partial$} _{\mu} \Lambda$
does not leave the action invariant but
results in a boundary term :
\begin{equation}
\int _{0} ^{T} d \tau \mbox{$\dot{X}$} \frac{d \Lambda }{d X} = \Lambda (T) -
\Lambda (0)
\end{equation}
This results in a phase,
which can be compensated by a gauge transformation
\begin{equation}
\phi (\tau ) \rightarrow e^{i\Lambda (\tau )} \phi (\tau )
\end{equation}
As explained in the last section, gauge invariance at the free level
is due
to the freedom to add total derivatives. However if there are
boundary terms then the action is not invariant. This is the
situation when one has interactions.
We then have to compensate by the transformation (3.16). This is
the origin of inhomogeneous terms , i.e. those of the form
$\delta \phi = i \Lambda \phi $, (as against terms of the
form $\delta A_{\mu} = \mbox {$ \partial$} _{\mu} \Lambda $) - they arise from
boundaries of the integration region.
It is not obvious in the calculation of the covariant Klein Gordon
equation that the interaction terms $ A_{\mu } \mbox {$ \partial$} ^{\mu} \phi ,\,
\mbox {$ \partial$} . A \phi \, , \, A.A \phi $
also arise in this manner (from surface terms), but this is in
fact the case.
In the next section we will repeat
the calculation in a way that makes this
fact manifest.
One can now
ask the following question: We understand how gauge invariance
is maintained as far as background gauge fields are concerned. What
about deriving equations of motion
for the gauge particle itself (i.e. Maxwell's or Yang Mills equations)
in this formalism?
This is a little tricky since we do not usually treat the electromagnetic
field in first quantized form. However motivated by strings we can
extend the previous discussion and consider an object of the form
\begin{equation}
<k_{1}.\mbox{$\dot{X}$} (\tau ) e^{ik.X(\tau )} A_{1}.\mbox{$\dot{X}$} (0) e^{ip.X(0)}>
\end{equation}
and require $\frac{d}{d \tau} \mid _{\tau =0}=0$ as before.
\footnote{In string theory \mbox{$\dot{X}$} acts on the ground state
and excites it to a vector state. There is no such interpretation
for a point particle. Perhaps we can think of $\mbox{$\dot{X}$} \mid 0>$
as a current source for a photon. For our purposes
we will not worry about interpreting it but
we will formally treat it just as in string theory since that is our
real interest in any case.} We immediately run into a problem - that
of gauge invariance. In eqn.(3.4) the vertex operator $\mbox{$\dot{X^{\mu}}$} (\tau)$
was integrated over $\tau$. So it was a gauge invariant expression
(except
for surface terms which we took care of by transforming $\phi$).
$\mbox{$\dot{X^{\mu}}$} (0) $ in the unintegrated form
has no such gauge invariance. We will therefore modify
(3.17) to
\begin{equation}
\int d \mbox {$ \tau _{1}$} \int d \mbox {$ \tau _{2}$}
<k_{1}.\mbox{$\dot{X}$} (\mbox {$ \tau _{1}$} ) e^{ik.X(\mbox {$ \tau _{1}$} )} A_{1}.\mbox{$\dot{X}$} (\mbox {$ \tau _{2}$} ) e^{ip.X(\mbox {$ \tau _{2}$} )}>
\end{equation}
This construction is gauge invariant but now the proper time equation
makes no sense - since \mbox {$ \tau _{1}$} and \mbox {$ \tau _{2}$} are both integrated over.
One must generalize the proper time prescription. We can proceed as follows:
We know that $<X( \tau) X(0)>= \mid \tau \mid $. Let us treat the
entity $<X(\tau ) X(0)>$ as a {\em field} $\Sigma ( \tau )$ and require
$\frac{\delta}{\delta \Sigma} =0$. Here $\Sigma$ plays the same
role as the Liouville mode $\mbox{$\sigma $}$ in section 2. As in sec.2 the integrals
$\int d \mbox {$ \tau _{1}$} \int d \mbox {$ \tau _{2}$} $ allow us to
integrate by parts. In that case (3.18) gives
\begin{equation}
\int d \mbox {$ \tau _{1}$} \int d \mbox {$ \tau _{2}$} [
k_{1}.A (p) \mbox {$ \partial$} _{\mbox {$ \tau _{1}$}} \mbox {$ \partial$} _{\mbox {$ \tau _{2}$}}
<X(\mbox {$ \tau _{1}$} ) X(\mbox {$ \tau _{2}$} )>
\end{equation}
\[
+k_{1}.p A.k \mbox {$ \partial$} _{\mbox {$ \tau _{1}$}}
<X(\mbox {$ \tau _{1}$} ) X(\mbox {$ \tau _{2}$} )>
\mbox {$ \partial$} _{\mbox {$ \tau _{2}$}}
<X(\mbox {$ \tau _{1}$} ) X(\mbox {$ \tau _{2}$} )> ]
e^{k.p
<X(\mbox {$ \tau _{1}$} ) X(\mbox {$ \tau _{2}$} )> }
\]
\[
=
\int d \mbox {$ \tau _{1}$} \int d \mbox {$ \tau _{2}$} [
k_{1}.A (p) \mbox {$ \partial$} _{\mbox {$ \tau _{1}$}} \mbox {$ \partial$} _{\mbox {$ \tau _{2}$}}
\Sigma ( \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} )
\]
\[
+k_{1}.p A.k \mbox {$ \partial$} _{\mbox {$ \tau _{1}$}}
\Sigma ( \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} )
\mbox {$ \partial$} _{\mbox {$ \tau _{2}$}}
\Sigma ( \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} )]
e^{k.p
\Sigma ( \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} )}
\]
Varying w.r.t $\Sigma$ gives
\begin{equation}
(k_{1}.A k.p - k_{1}.p A.k )\mbox {$ \partial$} _{\mbox {$ \tau _{1}$} } \mbox {$ \partial$} _{\mbox {$ \tau _{2}$}} \Sigma ( \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} )
e^{k.p \Sigma ( \mbox {$ \tau _{1}$} - \mbox {$ \tau _{2}$} )}
\end{equation}
Set $p = - k $ (momentum conservation) and look at the
coefficient of $k_{1}^{\mu}$: it gives Maxwell's equation
$\mbox {$ \partial$} _{\mu} F^{\mu \nu} =0$. The same method obviously
works for strings also
since we never needed the explicit form of the two point function of $X$.
\newpage
To summarize this section, we have derived the gauge invariant
equation for a scalar using the proper time method.
We have also shown how the proper time formalism can be
used for gauge particles at the free level. Both these
can be immediately generalized to strings.
\newpage
\section{Proper Time Formalism and Gauge Invariance for Strings}
\setcounter{equation}{0}
We now apply the proper time formalism to strings: Replace $\tau$ by
$ln z$ to get
\begin{equation}
[\frac{d}{dlnz} -2]<e^{ik'X(z)} e^{ikX(0)}>\phi (k) =0
\end{equation}
In sec.2 we derived equations of motion by requiring that the
vertex operator have dimension one.
In eqn.(4.1) we have two vertex operators, so the correlator has dimension two
and hence should fall off as $1/z^{2}$, as (4.1) indicates.
We will calculate the expectation value using the action
\begin{equation}
1/2 \int d^{2}z \mbox {$ \partial$} _{z}X \bar {\mbox {$ \partial$}} _{\mbox{$\bar{z}$}}X + \int _{0}^{w}
A_{\mu} \mbox {$ \partial$} _{z}
X^{\mu}
\end{equation}
The action has the gauge invariance
\begin{equation}
A_{\mu} \rightarrow A_{\mu} + \mbox {$ \partial$} _{\mu} \Lambda \, , \, \phi
\rightarrow e^{i\Lambda } \phi
\end{equation}
as in the point particle case.
The two point function is :
\begin{eqnarray}
<X(z_{1}) X(z_{2})> & = & ln (z_{1}-z_{2}) , z_{1} \neq z_{2} \\
& = & ln(ae^{\sigma}) , z_{1}=z_{2}
\end{eqnarray}
However we will just leave it as $<X(z_{1}) X(z_{2})>$ till the end of the
calculation. To lowest order we get from (4.1) $(k^{2} -2) \phi$.
At the next order we have
\begin{equation}
<e^{ik'X(z)} \int _{w} ^{z} dz_{1} A_{\mu} \mbox {$ \partial$} _{z} X^{\mu} (z_{1})
e^{ikX(z_{1})}
e^{ipX(w)}>
\end{equation}
which gives
\begin{equation}
\int _{w}^{z} dz_{1} [iA.k'\mbox {$ \partial$}_{z_{1}}
<X(z)X(z_{1})>
+iA.p\mbox {$ \partial$}_{z_{1}}
<X(z_{1}) X(w)>
]
\end{equation}
\[
exp (k.k'
<X(z)X(z_{1})> +
k.p
<X(z_{1}) X(w)> +
k'.p
<X(z) X(w)> )
\]
To lowest order we get the surface terms:
\begin{equation}
iA.k'[<X(z) X(z)> - <X(z)X(w)>] +
\end{equation}
\[
iA.p[<X(z)X(w)>-<X(w)X(w)>]
\]
\[
=-i(A.k' -A.p)ln(\frac{z-w}{a})
\]
This contributes $-i(A.k' - A.p)$ to the equation of motion.
At the next order we have
\begin{equation}
<e^{ik'X(z)}\int _{w} ^{z} du \int _{w}^{u} dv A(k)\mbox {$ \partial$} X(u) e^{ikX(u)}
A(q)\mbox {$ \partial$} X(v) e^{iqX(v)}e^{ipX(w)} >
\end{equation}
Again to lowest order in momenta we get
\begin{equation}
\int _{w}^{z} du\int _{w}^{u} dvA(k)A(q)<\mbox {$ \partial$} _{u} X(u) \mbox {$ \partial$} _{v} X(v)>
\end{equation}
\[
=\int _{w} ^{z} du \, A(k)A(q) [<\mbox {$ \partial$} _{u} X(u) X(u)> - <\mbox {$ \partial$} _{u} X(u) X(w) >]
\]
\[
=\int _{w} ^{z} du A(k)A(q)
[1/2 \mbox {$ \partial$} _{u} <X(u) X(u)> - \mbox {$ \partial$} _{u} <X(u) X(w)>]
\]
\[
=A(k)A(q)[1/2[<X(z)X(z)>- <X(w)X(w)>]
\]
\[
-<X(z)X(w)> + <X(w)X(w)>]
\]
\begin{equation}
=A.Aln(\frac{z-w}{a})
\end{equation}
Adding up all the pieces we get $(\mbox {$ \partial$} - A ) ^{2} \phi = 0$.
In following the steps from (4.6) to (4.10) one can see how
each contribution
is the surface term
in an integral and how they conspire to reproduce
the gauge invariance as described in eqn.(4.3). All this works
exactly the same way as for the point particle since we never really
needed to know the functional form of the two point function. In
fact as indicated at the end of the last section we could have
just required $\frac{\delta}{\delta <X(z)X(w)>} = 2$ instead of
$\frac{d}{dln(z-w)} =2$.
In this section we have concentrated on understanding
the features that are
common to particles and strings, in particular, those that deal
with the massless gauge invariance. We have shown that the
proper time formalism can be made gauge
invariant.
\footnote{We can derive Maxwell's equation also in the string case
just as was done at the end of the last section by requiring
$\frac{\delta}{\delta <X(z)X(w)>} \int dz \int dw < \mbox {$ \partial$} _{z} X e^{ik.X}
\mbox {$ \partial$} _{w} X e^{ip.X}> = 0$.}
In this section we kept only the lowest order (in momentum) terms.
For point particles if we had similarly kept only the lowest
order terms the result (i.e. the Klein-Gordon equation)
would still be exact, as the calculation in Section 3 shows.
Thus the higher order terms must vanish. This is not so for
strings, however. There are higher order corrections
to the Klein Gordon equation that ought to be evaluated. Some of
these have
been calculated in various approximation schemes\cite{FT2,AA,AC}.
It should be possible, however, to do it in a systematic
way where the degree to which the massive modes are integrated out
can be controlled. The parameter that controls this would be
the cutoff of the two dimensional field theory. The
proper time formalism \cite{BS3,BS4}
appears to be a way of implementing this
idea.
\newpage
\section{Conclusion}
\setcounter{equation}{0}
In this paper we have attempted to understand gauge invariance
in the framework of the renormalization group, both at the free level
and in the interacting case. Our aim is to have an understanding at
the computational level rather than a formal proof of gauge
invariance. To this end we have made some progress in understanding
gauge invariance of the massless particle at the interacting level
provided we stay close to the mass shell.
One can also address these questions in the
BRST framework. We saw in the second section the similarities
between the two approaches at the free level. In fact proceeding
to the interacting theory we can see that eqn.(4.1) is very similar
to the equation based on the Batalin-Vilkovisky formalism used
in \cite{W2,WL,W3}. Instead of $d/dlnz$ acting on the two point
function
one can have $Q_{BRST}$ act on it. Witten's antibracket is
essentially the Zamolodchikov metric, i.e. the two point function. If we
were to include ghosts and use $cV$ instead of $V$ in (4.1) ($c$ being
the reparametrization ghost) we would have Witten's antibracket. In
fact we have already seen in Sec.~3 that when dealing with gauge particles
the vertex operator should be integrated over. Thus we should have
$\int dz V$ (which has the same dimension as $cV$). This formalism is
therefore very similar to that of \cite{W2,WL,W3}.
We would like to extend the results of this paper
by going off shell and including the massive modes.
This issue can be hopefully addressed in this
formalism, just as was done for the case of the tachyon, by keeping
a finite cutoff. As we change the value of the cutoff one should
be able to interpolate continuously from a string field
theory where all the modes are present to a low energy effective action
obtained via the sigma model formalism. Presumably the extra coordinates
of \cite{BS1} will need to be introduced to maintain reparametrization
invariance. We hope to return to these questions.
\underline{Acknowledgement}: I would like to thank W. Siegel
for many useful discussions. Most of this work was done while the
author was visiting the Institute for Theoretical Physics at
Stony Brook. I would like to thank the members of the ITP, and
especially M. Rocek,
for their hospitality.
\newpage
\section{Introduction}
Semiconductor nanowires emerged a few years ago as promising thermoelectric devices~\cite{Hicks1993}. In comparison to their bulk counterparts,
they provide opportunities to enhance the dimensionless figure of merit $ZT=S^2\sigma T/\kappa$, which governs the efficiency of thermoelectric
conversion at a given temperature $T$. Indeed, they allow one to reduce the phonon contribution $\kappa_{ph}$ to thermal conductivity
$\kappa$~\cite{Hochbaum2008,Boukai2008,Martin2009}. On the other hand, through
their highly peaked density of states they offer the large electron-hole asymmetry required for the enhancement of the thermopower $S$~\cite{Mahan1996,Tian2012}.
This makes them now rank, with other nanostructured materials, among the best thermoelectrics in terms of achievable values of $ZT$. Yet, maximizing
the figure of merit is not the ultimate requirement on the quest for improved thermoelectrics. The actual electric power that can be extracted from a
heat engine (or conversely the actual cooling power that can be obtained from a Peltier refrigerator) is also of importance when thinking of
practical applications. From that point of view, nanowire-based thermoelectric devices are also promising: they offer the scalability needed for
increasing the output power, insofar as they can be arranged in arrays of nanowires in parallel.\\
\indent The main purpose of this and the subsequent paper~\cite{Bosisio2013} is the determination of the
thermopower in a single semiconductor nanowire. From the theory side, this question has mainly been discussed at room temperature
when the semi-classical Boltzmann
theory can be used~\cite{Lin2000,Mingo2004,Neophytou2011} or in the ballistic regime~\cite{Liang2010} when disorder is completely
neglected. The goal was to describe the thermoelectric properties of nanowires at room temperature where the quantum effects become negligible, and
in particular to probe the role of their geometry (diameter, aspect ratio, orientation, ...). From the experimental side, investigations have been
carried out by varying the carrier density in the nanowire with an external gate electrode~\cite{Liang2009,Zuev2012,Tian2012,Moon2013,Wu2013,Roddaro2013}.
Different field effect transistor device configurations can be used: either the nanowire and its metallic contacts are deposited on one side of
an insulating layer, while a metallic back-gate is put on the other side (see for instance Refs.~\cite{Moon2013,Brovman2013}), or one can take
a top-gate covering only the nanowire (see for instance Ref.~\cite{Poirier1999}).
Recently, Brovman \textit{et al} have measured at room temperature the thermopower of Silicon and Silicon-Germanium nanowires and
observed a strong increase when the nanowires become almost depleted under the application of a gate voltage~\cite{Brovman2013}. Interestingly, this work
points out the importance of understanding thermoelectric transport near the band edges of semiconductor nanowires. It also reveals a lack of
theoretical framework in this field, a gap that we aim to fill.\\
\indent To that end, we shall first identify, as a function of the temperature $T$ and the applied gate voltage $V_g$, the dominant mechanism
of electronic transport through a given nanowire. At low temperature $T<T_x$, transport is dominated by elastic tunneling processes and quantum
effects must be properly handled. Due to the intrinsic disorder characterizing doped semiconductors, the electronic transport is much affected
by Anderson localization while electron-phonon coupling can be neglected inside the nanowire. Above the activation temperature
$T_x$, electron-phonon coupling inside the nanowire starts to be relevant. One enters the inelastic Variable Range Hopping (VRH) regime~\cite{Mott1979}
where phonons help electrons to jump from one localized state to another, far away in space but quite close in energy. At temperatures higher than
the Mott temperature $T_M$, the VRH regime ceases and one has simple thermal activation between nearest neighbor localized states.
The different regimes are sketched in Fig.~\ref{fig_Tscale} for a nanowire modeled by a one-dimensional (1D) tight-binding
Anderson model. Note that they are highly dependent on the gate voltage $V_g$. The inelastic VRH regime will be addressed in a subsequent paper~\cite{Bosisio2013}.\\
\indent In this work, we focus our study on the low-temperature elastic regime or, more precisely, on a subregion $T<T_s$ inside the elastic regime
in which the thermopower can be evaluated using the Landauer-B\"uttiker scattering formalism and Sommerfeld expansions. An experimental study of
the gate dependence of the electrical conductance of Si-doped GaAs nanowire in this elastic coherent regime can be found in Ref.\cite{Poirier1999}.\\
\indent We will mainly consider nanowires of size $N$ larger than their localization length $\xi$, characterized by exponentially small values of the electrical conductance. Obviously, this drastically reduces the output power associated with the thermoelectric conversion.
Nevertheless, the advantage of considering the limit $N \gg \xi$ is twofold: first, the typical transmission at an energy $E$ is simply given by $\exp [-2N/\xi]$ in this limit, and second, at weak disorder, $\xi(E)$ is analytically known. This makes it possible to derive analytical expressions describing the typical behavior of the thermopower.
To avoid the exponential reduction of the conductance at large $N/\xi$, one should take shorter lengths ($N \approx \xi$). To study thermoelectric conversion in this crossover regime would require the use of the scaling theory discussed in Refs.~\cite{Anderson1980, Pichard1986}.
Furthermore, another reason to consider $N \gg \xi$ is that the delay time distribution (which probes how the scattering matrix depends on
energy) has been shown to have a universal form~\cite{Texier1999} in this limit. We expect this to be also the case for the
fluctuations of the thermopower (which probe how the transmission depends on energy). This gives the theoretical reasons for focusing
our study on the limit $N \gg \xi$.\\
\indent The outline of the manuscript is as follows. Section~\ref{section_LB} is a reminder about the Landauer-B\"uttiker formalism which allows one
to calculate thermoelectric coefficients in the coherent regime. In section~\ref{section_model}, we introduce the model and outline the numerical
method used in this work, which is based on a standard recursive Green's function algorithm. Our results are presented in
sections~\ref{section_Styp}, \ref{section_distrib} and~\ref{section_Tc}. Section~\ref{section_Styp} is devoted to the study of
the typical behavior of the thermopower as the carrier density in the nanowire is modified with the gate voltage. We show that the thermopower
is drastically enhanced when the nanowire is being depleted and we provide an analytical description
of this behavior in the localized limit. In section~\ref{section_distrib}, we extend the study to the distribution of the thermopower. We show
that the thermopower is always Lorentzian
distributed, as long as the nanowire is not completely depleted by the applied gate voltage and provided it is long enough with respect to the
localization length. Interestingly, the mesoscopic fluctuations grow larger and larger as the carrier density in the nanowire
is lowered and the typical thermopower increases. As a matter of course, this ceases to be true when the gate voltage is so large that the
nanowire, almost emptied of carriers, behaves eventually as a (disordered) tunnel barrier. In that case, the thermopower distribution is found
to be Gaussian with tiny fluctuations. The evaluation of the ``crossover temperature'' $T_s$ (see Fig.~\ref{fig_Tscale}) is the subject of
section~\ref{section_Tc}. Finally, we draw our conclusions in section~\ref{section_ccl}.
\begin{figure}
\centering
\includegraphics[keepaspectratio,width=0.8\columnwidth]{fig_Tscale2.eps}
\caption{\label{fig_Tscale}
(Color online) For a Fermi energy taken at the band center ($E_F=0$), the different regimes of electronic transport are given as a
function of a positive gate voltage $V_g$. From bottom to top, one can see the elastic regime ($T<T_x$, blue), the inelastic VRH regime
($T_x<T<T_M$, gray) and the simply activated regime ($T>T_M$, red). The temperature scales $T_s$, $T_x=\xi/(2\nu N^2)$ and $T_M=2/(\xi\nu)$
are plotted for the 1D model introduced in Sec.~\ref{section_model} with $E_F=0$, $W=t$ and $N=1000$. $T_s$ is given for $\epsilon=0.01\%$
(see Sec.~\ref{section_Tc}). Transport exhibits the bulk behavior of the nanowire impurity band as long as $V_g$ does not exceed a value of
order $1.5 t$ and its edge behavior in the interval $1.5t < V_g < 2.5t$. When $V_g> 1.5 t$, the bulk weak-disorder expansions (see
section~\ref{section_model}) cease to be valid for $W=t$, while $ V_g > 2t+W/2=2.5t$ is necessary for completely depleting the nanowire
in the limit $N \to \infty$. This paper is restricted to the study of region (I), corresponding to low
temperatures $T<T_s$ at which the Sommerfeld expansion can be applied for the calculation of the thermoelectric coefficients. The VRH region~(II) will be studied in Ref.~\cite{Bosisio2013}.
}
\end{figure}
\section{Thermoelectric transport coefficients in the Landauer-B\"uttiker formalism}
\label{section_LB}
We consider a conductor connected via reflectionless leads to two reservoirs $L$ (left) and $R$ (right) in equilibrium at temperatures
$T_L$ and $T_R$, and chemical potentials $\mu_L$ and $\mu_R$. To describe the thermoelectric transport across the conductor, we use the
Landauer-B\"uttiker formalism~\cite{Datta1995}. The heat and charge transport are supposed to be mediated only by electrons and the phase
coherence of electrons during their propagation through the conductor is supposed to be preserved. In this approach, the dissipation of
energy takes place exclusively in the reservoirs while the electronic transport across the conductor remains fully elastic.
The method is valid as long as the phase-breaking length (mainly associated to electron-electron and electron-phonon
interactions) exceeds the sample size. From a theoretical point of view, it can be applied to (effective) non-interacting models. In this
framework, the electric ($I_e$) and heat ($I_Q$) currents flowing through the system are given by~\cite{Sivan1986,Butcher1990}
\begin{align}
I_e&=\frac{e}{h}\int\! dE \,\mathcal{T}(E)[f_L(E)-f_R(E)] \label{eq_Igeneral}\\
I_Q&=\frac{1}{h}\int\! dE \,(E-\mu_L)\mathcal{T}(E)[f_L(E)-f_R(E)] \label{eq_Jgeneral}
\end{align}
where $f_\alpha(E)=(1+\exp[(E-\mu_\alpha)/(k_B T_\alpha)])^{-1}$ is the Fermi distribution of the lead $\alpha$ and $\mathcal{T}(E)$ is the
transmission probability for an electron to tunnel from the left to the right terminal. $k_B$ is the Boltzmann constant, $e<0$ the electron
charge and $h$ the Planck constant. The above expressions are given for spinless electrons and shall be doubled in case of spin degeneracy.\\
\indent We now assume that the deviations $\Delta\mu=\mu_L-\mu_R$ and $\Delta T=T_L-T_R$ from the equilibrium values $E_F\approx\mu_L\approx\mu_R$
and $T\approx T_L\approx T_R$ are small. Expanding the currents in Eqs.~(\ref{eq_Igeneral},\,\ref{eq_Jgeneral}) to first order in $\Delta\mu$ and
$\Delta T$ around $E_F$ and $T$, one obtains~\cite{Butcher1990}
\begin{equation}
\begin{pmatrix}
I_e \\
I_Q
\end{pmatrix} =
\begin{pmatrix}
L_0 & L_1 \\
L_1 & L_2
\end{pmatrix}
\begin{pmatrix}
\Delta \mu/eT \\
\Delta T/T^2
\end{pmatrix}
\ee
where the linear response coefficients $L_i$ are given by
\begin{equation}
\label{eq_coeffLi}
L_i=\frac{e^2}{h}T\int\! dE \,\mathcal{T}(E)\left(\frac{E-E_F}{e}\right)^i\left(-\frac{\partial f}{\partial E}\right)\,.
\ee
The electrical conductance $G$, the electronic contribution $K_e$ to the thermal conductance $K$, the Seebeck coefficient $\mathcal{S}$
(or thermopower) and the Peltier coefficient $\Pi$ can all be expressed in terms of the Onsager coefficients $L_i$ as
\begin{align}
G&\equiv\left.\frac{eI_e}{\Delta \mu}\right|_{\Delta T=0}=\frac{L_0}{T}\label{eq_dfG}\\
K_e&\equiv\left.\frac{I_Q}{\Delta T}\right|_{I_e=0}=\frac{L_0L_2-L_1^2}{T^2L_0}\label{eq_dfkappa}\\
\mathcal{S}&\equiv-\left.\frac{\Delta \mu}{e\Delta T}\right|_{I_e=0}=\frac{L_1}{TL_0}\label{eq_dfS}\\
\Pi&\equiv\left.\frac{I_Q}{I_e}\right|_{\Delta T=0}=\frac{L_1}{L_0}~~.\label{eq_dfPi}
\end{align}
The Seebeck and Peltier coefficients turn out to be related by the Kelvin-Onsager relation~\cite{Onsager1931,Casimir1945}
\begin{equation}
\label{eq_relPiS}
\Pi=\mathcal{S}T
\ee
as a consequence of the symmetry of the Onsager matrix. Note that, by virtue of Eq.~\eqref{eq_coeffLi}, in presence of particle-hole
symmetry we have $\mathcal{S}=\Pi=0$. Further, the link between the electrical and thermal conductances is quantified by the Lorenz
number $\mathcal{L}=K_e/GT$.\\
\indent In the zero temperature limit $T\to 0$, the Sommerfeld expansion~\cite{Ashcroft1976} can be used to estimate the
integrals~\eqref{eq_coeffLi}. To the lowest order in $k_BT/E_F$, the electrical conductance reduces
to $G\approx\frac{e^2}{h}\mathcal{T}(E_F)$ (ignoring spin degeneracy) while the thermopower simplifies to
\begin{equation}
\label{eq_SeebeckMott}
\mathcal{S}\approx\frac{\pi^2}{3}\frac{k_B}{e}\,k_BT\,\left.\frac{\mathrm{d}\ln\mathcal{T}}{\mathrm{d}E}\right|_{E_F}\,.
\ee
The Lorenz number $\mathcal{L}$ takes in this limit a constant value,
\begin{equation}
\label{eq_WFlaw}
\mathcal{L}\approx\mathcal{L}_0\equiv\frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2,
\ee
as long as $|\mathcal{S}|\ll\sqrt{\mathcal{L}_0}\simeq 156\,\mathrm{\mu V.K^{-1}}$. This reflects the fact that the electrical and thermal
conductances are proportional and hence cannot be manipulated independently, an important although constraining property known as the
Wiedemann-Franz (WF) law. This law is known to be valid for non-interacting systems if the low temperature Sommerfeld expansion is
valid~\cite{Balachandran2012,Vavilov2005}, when Fermi liquid (FL) theory holds~\cite{Ashcroft1976,Chester1961} and for metals at room
temperature~\cite{Ashcroft1976}, while it can be largely violated in interacting systems due to non-FL behaviors~\cite{Kane1996,Wakeham2011}.
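For concreteness, the following minimal numerical sketch (ours, not taken from this work; units $e=k_B=h=1$, spinless electrons, and an arbitrary user-supplied transmission function) evaluates the coefficients $L_i$ of Eq.~\eqref{eq_coeffLi} by direct integration and returns $G$, $\mathcal{S}$, $K_e$ and the Lorenz number. Such a direct integration is needed whenever the validity of the Sommerfeld expansion is questioned, as in Sec.~\ref{section_Tc}.
\begin{verbatim}
# Minimal sketch (ours): direct numerical evaluation of the Onsager
# coefficients L_i for an arbitrary transmission function T(E).
# Units: e = k_B = h = 1 (the e^2/h prefactor is dropped).
import numpy as np

def onsager(transmission, E_F, T, i, n_grid=4001, window=30.0):
    """L_i ~ T Int dE T(E) (E-E_F)^i (-df/dE), spinless electrons."""
    E = E_F + np.linspace(-window, window, n_grid) * T   # Fermi window
    minus_df = 1.0 / (4.0 * T * np.cosh((E - E_F) / (2.0 * T))**2)
    return T * np.trapz(transmission(E) * (E - E_F)**i * minus_df, E)

def coefficients(transmission, E_F, T):
    L0, L1, L2 = (onsager(transmission, E_F, T, i) for i in range(3))
    G = L0 / T                                # electrical conductance
    S = L1 / (T * L0)                         # Seebeck coefficient
    Ke = (L0 * L2 - L1**2) / (T**2 * L0)      # electronic thermal cond.
    return G, S, Ke, Ke / (G * T)             # last entry: Lorenz number
\end{verbatim}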
\section{Model and method}
\label{section_model}
The system under consideration is sketched in Fig.~\ref{fig_model}(a). It is made of a 1D disordered nanowire coupled via perfect
leads to two reservoirs $L$ (left) and $R$ (right) of non-interacting electrons, in equilibrium at temperature $T_L=T+\Delta T$ [$T_R=T$] and
chemical potential $\mu_L=E_F+\Delta\mu$ [$\mu_R=E_F$]. The nanowire is modeled as a 1D Anderson chain of $N$ sites, with lattice spacing $a=1$.
Its Hamiltonian reads,
\begin{equation}
\label{eq_modelAnderson1D}
\mathcal{H}=-t\sum_{i=1}^{N-1}\left(c_i^{\dagger}c_{i+1}+\text{h.c.}\right)+\sum_{i=1}^{N}\epsilon_i c_i^{\dagger}c_i\,,
\ee
where $c^{\dagger}_i$ and $c_i$ are the creation and annihilation operators of one electron on site $i$ and $t$ is the hopping energy.
The disorder potentials $\epsilon_i$ are (uncorrelated) random numbers uniformly distributed in the interval $[-W/2,W/2]$. The two sites at
the ends of the nanowire are connected with hopping term $t$ to the leads which can be 1D semi-infinite chains or 2D semi-infinite square
lattices, with zero on-site potentials and the same hopping term $t$. The simpler case of the Wide Band
Limit (WBL) approximation, where the energy dependence of the self-energies of the leads is neglected, is also considered. Finally, an extra
term
\begin{equation}
\mathcal{H}_{gate}=\sum_i V_g c_i^{\dagger}c_i
\ee
is added in the Hamiltonian~\eqref{eq_modelAnderson1D} to mimic the presence of an external metallic gate.
It allows one to shift the whole impurity band of the nanowire.
\begin{figure}
\centering
\includegraphics[keepaspectratio,width=0.75\columnwidth]{fig_sys.eps}
\caption{\label{fig_model}
(Color online) (a) Sketch of the system: a 1D nanowire made of $N$ sites is connected to two leads at its extremities. An external gate voltage
$V_g$ is applied. (b) Band diagram. The impurity band of the nanowire (in blue) can be shifted by the application of $V_g$ in order to probe
either the bulk, the edges or the outside of the impurity band at Fermi energy $E_F$. Here, the leads are bidimensional (conduction band of the
leads in red) and hence, $E_F\in[-4t,4t]$.}
\end{figure}
\subsection{Recursive Green's function calculation\\ of the transport coefficients}
In the Green's function formalism, the transmission $\mathcal{T}(E)$ of the system at an energy $E$ is given by the Fisher-Lee
formula~\cite{Datta1995}
\begin{equation}
\label{eq_FisherLee}
\mathcal{T}(E)=\mathrm{Tr}[\Gamma_L(E)G(E)\Gamma_R(E)G^\dagger(E)]
\ee
in terms of the retarded single particle Green's function $G(E)=[E-\mathcal{H}-\Sigma_L-\Sigma_R]^{-1}$ and of the retarded self-energies
$\Sigma_L$ and $\Sigma_R$ of the left and right leads.
The operators $\Gamma_\alpha=i(\Sigma_\alpha-\Sigma_\alpha^\dagger)$ describe the coupling between the conductor and the lead $\alpha=L$ or $R$.
A standard recursive Green's function algorithm~\cite{Lassl2007} allows us to compute the transmission $\mathcal{T}(E)$. The logarithmic
derivative $\mathrm{d}\ln\mathcal{T}/\mathrm{d}E$ can be calculated as well with the recursive procedure, without need for a discrete
evaluation of the derivative. It yields the thermopower $\mathcal{S}$ in the Mott-Sommerfeld approximation~\eqref{eq_SeebeckMott}. Hereafter, we will refer to a dimensionless thermopower
\begin{equation}
\label{eq_df_S}
S=-t\left.\frac{\mathrm{d}\ln\mathcal{T}}{\mathrm{d}E}\right|_{E_F}
\ee
which is related, in the Mott-Sommerfeld approximation, to the true thermopower $\mathcal{S}$ as
\begin{equation}
\mathcal{S}=\frac{\pi^2}{3}\left(\frac{k_B}{|e|}\right)\left(\frac{k_BT}{t}\right)S\,.
\ee
We now discuss the expressions of the self-energies $\Sigma_L(E)$ and $\Sigma_R(E)$ of the left and right leads which are to be given as
input parameters in the recursive Green's function algorithm. The nanowire of length $N$ sites is supposed to be connected on one site at
its extremities to two identical leads, which are taken 1D, 2D or in the WBL approximation. Hence, the self-energies $\Sigma_\alpha$ (as well
as the operator $\Gamma_\alpha$) are $N\times N$ matrices with only one non-zero component (identical for both leads) that we denote with
$\Sigma$ (or $\Gamma$). When the wide-band limit is assumed for the leads, $\Sigma$ is taken equal to a small constant imaginary number
independent of the energy $E$. When the leads are two 1D semi-infinite chains or two 2D semi-infinite square lattices, $\Sigma$ is given by
the retarded Green's function $G_\mathrm{lead}$ of the lead under consideration evaluated at the site $X$ (in the lead) coupled to the nanowire,
$\Sigma=t^2\langle X | G_\mathrm{lead} | X \rangle$. Knowing the expressions of the retarded Green's functions of the infinite 1D chain and the
infinite 2D square lattice~\cite{Economou2006}, it is easy to deduce $G_\mathrm{lead}$ for the semi-infinite counterparts by using the method of
mirror images. For 1D leads, one finds $\Sigma(E)=-te^{ik(E)}$ where $E=-2t\cos k$ and $k$ is the electron wavevector~\cite{Datta1995}. For 2D
leads, the expression of $\Sigma(E)$ is more complicated (see Appendix~\ref{app_SelfNRJ}). As far as the Fermi energy $E_F$ is not taken near the
edges of the conduction band of the leads, the thermopower behaviors using 1D and 2D leads coincide with those obtained using the WBL approximation
(see Sec.~\ref{section_Styp}). This shows us that the dimensionality D becomes irrelevant in that limit, and we expect that taking
3D leads will not change the results.
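For readers wishing to reproduce the transmissions without implementing the recursive algorithm of Ref.~\cite{Lassl2007}, here is a transparent dense-matrix sketch (ours; adequate for short chains only, and using a finite difference for $\mathrm{d}\ln\mathcal{T}/\mathrm{d}E$ instead of the exact recursive derivative used in this work). It assembles the Hamiltonian~\eqref{eq_modelAnderson1D} with the 1D-lead self-energy $\Sigma(E)=-te^{ik(E)}$ and evaluates Eqs.~\eqref{eq_FisherLee} and~\eqref{eq_df_S}:
\begin{verbatim}
import numpy as np

def transmission(E, eps, Vg=0.0, t=1.0):
    N = len(eps)
    H = np.diag(eps + Vg) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    k = np.arccos(np.clip(-E / (2.0 * t), -1.0, 1.0))
    sigma = -t * np.exp(1j * k)            # Sigma(E) = -t e^{ik(E)}
    Sig = np.zeros((N, N), dtype=complex)
    Sig[0, 0] = sigma                      # left lead on site 1
    Sig[-1, -1] = sigma                    # right lead on site N
    G = np.linalg.inv((E + 0j) * np.eye(N) - H - Sig)
    Gamma = -2.0 * sigma.imag              # Gamma = i(Sigma - Sigma^+)
    return Gamma**2 * abs(G[0, -1])**2     # Fisher-Lee, 1D contacts

rng = np.random.default_rng(0)
W, N, E_F = 1.0, 200, 0.0
eps = rng.uniform(-W / 2.0, W / 2.0, N)
dE = 1e-5                                  # finite-difference step
S = -(np.log(transmission(E_F + dE, eps))
      - np.log(transmission(E_F - dE, eps))) / (2.0 * dE)
\end{verbatim}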
\subsection{Scanning the impurity band of the Anderson model}
\label{subsec_dos}
The density of states per site $\nu(E)$ of the Anderson model, obtained by numerical diagonalization of the Hamiltonian~\eqref{eq_modelAnderson1D},
is plotted in Fig.~\ref{fig_model2}(a) in the limit $N \to \infty$. It is non-zero in the interval $[E_c^{-},E_c^{+}]$ where $E_c^{\pm}=\pm(2t+W/2)$
are the edges of the impurity band. In the bulk of the impurity band (\textit{i.e.} for energies $|E|\lesssim 1.5t$), the density of states is given
with a good precision by the formula derived for a clean 1D chain (red dashed line in Fig.~\ref{fig_model2}(a)),
\begin{equation}
\label{eq_dstOfStateBulk}
\nu_b(E)=\frac{1}{2\pi t\sqrt{1-(E/2t)^2}}~.
\ee
As one approaches the edges $E_c^{\pm}$, the disorder effect cannot be neglected anymore. The density of states is then well
described by the analytical formula obtained by Derrida and Gardner around $E_c^{\pm}$, in the limit of weak disorder and large $N$
(see Ref.~\cite{Derrida1984}),
\begin{equation}
\label{eq_dstOfStateEdge}
\nu_e(E)=\sqrt{\frac{2}{\pi}}\left(\frac{12}{tW^2}\right)^{1/3}\frac{\mathcal{I}_1(X)}{[\mathcal{I}_{-1}(X)]^2}
\ee
where
\begin{equation}
X=(|E|-2t)t^{1/3}(12/W^2)^{2/3}
\label{eq_scaling-variable}
\ee
and
\begin{equation}
\label{eq_integralIn}
\mathcal{I}_n(X)=\int_0^{\infty} y^{n/2}\,e^{-\frac{1}{6}y^3+2Xy}\,dy\,.
\ee
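The integrals $\mathcal{I}_n(X)$ converge rapidly and are easy to evaluate by quadrature; a minimal sketch (ours) of Eq.~\eqref{eq_dstOfStateEdge}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I_n(n, X):
    """The I_n(X) integrals above; the y^(-1/2) endpoint singularity
    for n = -1 is integrable and handled by adaptive quadrature."""
    return quad(lambda y: y**(n / 2.0) * np.exp(-y**3 / 6.0 + 2.0 * X * y),
                0.0, np.inf)[0]

def nu_edge(E, W=1.0, t=1.0):
    X = (abs(E) - 2.0 * t) * t**(1.0 / 3.0) * (12.0 / W**2)**(2.0 / 3.0)
    return (np.sqrt(2.0 / np.pi) * (12.0 / (t * W**2))**(1.0 / 3.0)
            * I_n(1, X) / I_n(-1, X)**2)
\end{verbatim}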
\indent In this paper, we study the behavior of the thermoelectric coefficients as one probes at the Fermi energy $E_F$ electron transport either
inside or outside the nanowire impurity band, and more particularly in the vicinity of its band edges. Such a scan of the impurity band can be
done in two ways. One possibility is to vary the position of the Fermi energy $E_F$ in the leads. Doing so, we modify the distance between $E_F$
and the band edges $E_c^{\pm}$ but also the one between $E_F$ and the band edges of the leads. This can complicate the analysis of the data, the
dimensionality of the leads becoming relevant when $|E_c^{\pm}-E_F|\to 0$. To avoid this complication, we can keep $E_F$ fixed far from
$E_c^{\pm}$ and vary the gate voltage $V_g$ (see Fig.~\ref{fig_model}(b)).
\begin{figure}
\centering
\includegraphics[keepaspectratio,width=0.8\columnwidth]{fig_dos_xi.eps}
\caption{\label{fig_model2}
(a) Density of states per site $\nu$ as a function of energy $E$ for the 1D Anderson model~\eqref{eq_modelAnderson1D} with disorder amplitude $W/t=1$.
The circles correspond to numerical data (obtained with $N=1600$). The red dashed line and the blue line are the theoretical
predictions~\eqref{eq_dstOfStateBulk} and~\eqref{eq_dstOfStateEdge}, expected in the bulk and at the edges of the nanowire conduction band
for $N \to \infty$. (b) Localization length $\xi$ of the 1D Anderson model~\eqref{eq_modelAnderson1D} (with $W/t=1$) as a function of energy $E$. The circles
correspond to numerical data (obtained with Eq.~\eqref{eq_typtrasm}). The red dashed line and the blue line are the theoretical
predictions~\eqref{eq_xsi_bulk} and~\eqref{eq_xsi_edge} obtained in the limit $N \to \infty$.}
\end{figure}
\subsection{Localization length of the Anderson model}
In the disordered 1D model~\eqref{eq_modelAnderson1D} we consider, all eigenstates are exponentially localized, with a localization length $\xi$.
As a consequence, the typical transmission of the nanowire drops off exponentially with its length $N$. More precisely, when $N\gg\xi$
(localized limit), the distribution of $\ln\mathcal{T}$ is a Gaussian~\citep{Pichard1990,Pichard1991} centered around the value
\begin{equation}\label{eq_typtrasm}
[\ln\mathcal{T}]_0(E)=-\frac{2N}{\xi(E)}\,,
\ee
as long as the energy $E$ of the incoming electron is inside the impurity band of the nanowire. The inverse localization length $1/\xi$
can be analytically obtained as a series of integer powers of $W$ when $W \to 0$. To the leading order (see e.g.~\cite{Kramer1993}), this
gives
\begin{equation}
\label{eq_xsi_bulk}
\xi_b(E)\approx \frac{24}{W^2}\left(4t^2-E^2\right)\,.
\ee
The formula is known to be valid in the weak disorder limit inside the bulk of the impurity band (hence the index $b$). Strictly speaking, it fails
in the vicinity of the band center $E=0$ where the perturbation theory does not converge~\cite{Kappus1981} but it gives nevertheless a good
approximation. As one approaches one edge of the impurity band, the coefficients characterizing the expansion of $1/\xi$ in integer powers of $W$
diverge and the series has to be reordered. As shown by Derrida and Gardner~\cite{Derrida1984}, this gives (to leading order in $W$) the non
analytical behavior $1/\xi \propto W^{2/3}$ as one edge is approached, instead of the analytical behavior $1/\xi \propto W^2$ valid in the bulk of
the impurity band. More precisely, one finds in the limit $W \to 0$ that
\begin{equation}
\label{eq_xsi_edge}
\xi_e(E)=2\left(\frac{12t^2}{W^2}\right)^{1/3}\frac{\mathcal{I}_{-1}(X)}{\mathcal{I}_{1}(X)}
\ee
as $E$ approaches the band edges $\pm 2t$. The integrals $\mathcal{I}_i$ and the parameter $X$ have been defined in Eq.~\eqref{eq_integralIn} and
Eq.~\eqref{eq_scaling-variable}. As shown in Fig.~\ref{fig_model2}(b), both formulas~\eqref{eq_xsi_bulk} and~\eqref{eq_xsi_edge} are found to be in very
good agreement with our numerical evaluation of $\xi(E)$, in the respective range of energy that they describe, even outside a strictly weak
disorder limit ($W=t$ in Fig.~\ref{fig_model2}(b)).
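Numerically, $\xi(E)$ can be extracted exactly along these lines (a sketch of ours, reusing the transmission() routine sketched in the previous subsection): sample $\ln\mathcal{T}$ over disorder realizations and invert Eq.~\eqref{eq_typtrasm}:
\begin{verbatim}
import numpy as np

def xi_numeric(E, W=1.0, N=400, samples=200, seed=1):
    rng = np.random.default_rng(seed)
    logT = [np.log(transmission(E, rng.uniform(-W / 2.0, W / 2.0, N)))
            for _ in range(samples)]
    return -2.0 * N / np.median(logT)     # invert [lnT]_0 = -2N/xi

E = 1.0                                   # inside the bulk of the band
print(xi_numeric(E), 24.0 * (4.0 - E**2)) # compare with xi_b(1) = 72
\end{verbatim}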
\section{Typical thermopower}
\label{section_Styp}
We compute numerically the thermopower $S$ for many realizations of the disorder potentials $\epsilon_i$ in Eq.~\eqref{eq_modelAnderson1D}, and we
define the \emph{typical} value $S_0$ as the median of the resulting distribution $P(S)$. As it will be shown in Sec.~\ref{section_distrib},
$P(S)$ is typically a smooth symmetric function (Lorentzian or Gaussian), and thus its median coincides with its most probable value. We study
the behavior of $S_0$ as one scans the energy spectrum of the nanowire by varying the position of the Fermi energy $E_F$ in the leads or the
gate voltage $V_g$.\\
\indent In Fig.~\ref{fig_Styp}(a), the typical thermopower $S_0$ of a long nanowire in the localized regime ($N \gg \xi$) is plotted as a function
of $E_F$ without gate voltage ($V_g=0$). Since $S_0\to -S_0$ when $E_F\to -E_F$, data are shown for positive values of $E_F$ only. In the figure,
three different kinds of leads are considered: 1D leads, 2D leads or leads in the WBL approximation. In all cases, as expected, we find that $S_0=0$
at the center of the conduction band of the leads ($E_F=0$). Indeed, the random potentials being
symmetrically distributed around a zero value, one has a statistical particle-hole symmetry at the band center and the thermopower can only be a statistical
fluctuation around a zero typical value. As $E_F$ is increased, the statistical particle-hole symmetry breaks down and $S_0$ gets finite. Here
$S_0>0$ because charge transport is dominated by holes for $E_F>0$. When the wide band limit is assumed for both leads (triangles in
Fig.~\ref{fig_Styp}(a)), we find that the typical thermopower $S_0$ increases with $E_F$ and reaches a maximum just before $E_c^+=2t+W/2$, the asymptotic
$N\to\infty$ value for the edge ($E_c^+=2.5\,t$ in Fig.~\ref{fig_Styp}(a) where $W=t$) before decreasing. The same curve is obtained with 1D [2D]
leads as long as the Fermi energy $E_F$ remains far enough below the upper band edge of the $D$-dimensional leads. When $E_F$ approaches $2t$ [$4t$], the
typical thermopower $S_0$ of the nanowire is found to increase drastically, contrary to the WBL case (of course, no data are available for
$|E_F|\geq 2t\,[4t]$, charge transfer being impossible outside the conduction band of the leads). This singularity at the band edge of the leads
can be easily understood using Eqs.~\eqref{eq_FisherLee} and~\eqref{eq_df_S} and noticing that for 1D [2D] leads,
$\mathrm{d}\ln\Gamma/\mathrm{d}E\to -\infty$ as
$E\to 2t~[4t]$. This is obvious in the case of 1D leads where $\Gamma(E)=2t\sqrt{1-(E/2t)^2}$ and it can also be shown for 2D leads. We will see
in Sec.~\ref{section_Tc} that this apparent divergence of the thermopower is actually only valid in an infinitesimally small range of temperatures
above $0\,$K.\\
\begin{figure}
\centering
\includegraphics[keepaspectratio,width=\columnwidth]{fig_Styp.eps}
\caption{\label{fig_Styp}
(Color online) Typical value of the dimensionless thermopower per unit length, $S_0/N$, as a function of the Fermi energy $E_F$ at $V_g=0$ (a) and
as a function of the gate voltage $V_g$ at $E_F=0$ (b). In panel~(a), the data were obtained at fixed $N=500$, by using either 1D
leads~({\large$\circ$}), 2D leads~({\tiny{\color{red}$\square$}}) or the wide-band limit approximation~({\scriptsize{\color{blue}$\blacktriangle$}}).
With 1D [2D] leads, the typical thermopower shows a divergent behavior at the band edge of the leads (black [red] vertical dashed line).
In panel~(b), 1D leads are used. The symbols stand for different lengths of the nanowire ($N=200$~({\large{\color{red}$\circ$}}),
$800$~({\tiny{\color{DarkGreen}$\square$}}) and $1600$~({\scriptsize{\color{blue}$\blacklozenge$}})). The full black line, the full red line
and the dashed black line correspond respectively to the theoretical fits~\eqref{eq_S0bulk},~\eqref{eq_S0edge} and~\eqref{eq_S0TB} expected when
$E_F$ probes the bulk, the edge and the outside of the impurity band. In both panels, $W/t=1$. The arrows indicate the position of the edge of the
impurity band of the nanowire.}
\end{figure}
\indent With the gate voltage $V_g$, we can explore the impurity band of the nanowire while keeping $E_F$ fixed. The behavior of $S_0$ as a function
of $V_g$ is shown in Fig.~\ref{fig_Styp}(b) for $E_F=0$ and 1D leads. It is found to be identical to the behavior of $S_0$ as a function of $E_F$
obtained at $V_g=0$ in the WBL approximation. This remains true if 2D leads are used in Fig.~\ref{fig_Styp}(b) and we have no doubt that it also
remains true with 3D leads. Moreover, the results are unchanged if $E_F$ is fixed to any other value, as long as it does not approach too closely
one edge of the conduction band of the leads (but it can be chosen close enough to one band edge to recover the continuum limit of the leads). Our
main observation is that the typical thermopower $S_0$ increases strongly when the Fermi energy probes the region around the edges of the impurity
band of the nanowire. Qualitatively, this is due to the fact that the typical transmission of the nanowire drops when the edges are
approached: this sharp decrease results in an enhancement of the typical thermopower, the thermopower being essentially a measure of the energy dependence
of the transmission. A quantitative description of this behavior can also be obtained. Indeed, since the distribution of the transmission
$\mathcal{T}$ is log-normal in the localized regime~\citep{Pichard1990,Pichard1991} and the thermopower $S$ is calculated for each disorder configuration
with the Mott approximation~\eqref{eq_df_S}, one expects to have
\begin{equation}
S_0=-t\left.\frac{\mathrm{d}[\ln\mathcal{T}]_0}{\mathrm{d}E}\right|_{E_F}
\ee
where $[\ln\mathcal{T}]_0$ is the median of the $\ln\mathcal{T}$ Gaussian distribution (which in this case coincides with the most probable value).
Moreover, according to Eq.~\eqref{eq_typtrasm}, the energy dependence of $[\ln\mathcal{T}]_0$ is given by the
energy dependence of the localization length, \textit{i.e.} by Eqs.~\eqref{eq_xsi_bulk} and~\eqref{eq_xsi_edge}. This allows us to derive the
following expressions for the typical thermopower in the bulk and at the edges:
\begin{equation}
\label{eq_S0bulk}
S_0^b=N\frac{(E_F-V_g)\,W^2}{96t^3[1-((E_F-V_g)/2t)^2]^2},
\ee
\begin{equation}
\label{eq_S0edge}
S_0^e=2N\left(\frac{12t^2}{W^2}\right)^{1/3}\left\{ \frac{\mathcal{I}_{3}(X)}{\mathcal{I}_{-1}(X)}-\left[\frac{\mathcal{I}_{1}(X)}{\mathcal{I}_{-1}(X)}\right]^2\right\},
\ee
where now $X$ is modified to
\begin{equation}
X=(|E_F-V_g|-2t)t^{1/3}(12/W^2)^{2/3}
\ee
in order to take into account the effect of the gate voltage $V_g$.
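For instance, the bulk expression~\eqref{eq_S0bulk} follows in one line from Eqs.~\eqref{eq_typtrasm} and~\eqref{eq_xsi_bulk}:
\[
S_0^b = 2Nt\,\frac{\mathrm{d}}{\mathrm{d}E}\left(\frac{1}{\xi_b}\right)\bigg|_{E_F}
= 2Nt\,\frac{W^2}{24}\,\frac{2(E_F-V_g)}{\left[4t^2-(E_F-V_g)^2\right]^2}
= N\,\frac{(E_F-V_g)\,W^2}{96\,t^3\left[1-\left((E_F-V_g)/2t\right)^2\right]^2}\,,
\]
the gate voltage entering through the rigid shift $E \to E - V_g$ of the impurity band.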
When the outside of the impurity band, rather than the inside, is probed at $E_F$ (i.e. when the wire is completely depleted), no more states
are available in the nanowire to tunnel through. Electrons coming from one lead have to tunnel directly to the other lead through the disordered
barrier of length $N$. We have also calculated the typical thermopower of the nanowire in that case, assuming that the disorder effect is
negligible (see Appendix~\ref{appThermopowerCTB}). We find
\begin{equation}
\label{eq_S0TB}
\frac{S_0^{TB}}{N} \underset{N\to\infty}{\approx} -\frac{1}{N}\frac{2t}{\Gamma(E_F)}\left.\frac{\mathrm{d}\Gamma}{\mathrm{d}E}\right|_{E_F}\mp\frac{1}{\sqrt{\left(\frac{E_F-V_g}{2t}\right)^2-1}}
\ee
with a $+$ sign when $E_F\leq V_g-2t$ and a $-$ sign when $E_F\geq V_g+2t$. Fig.~\ref{fig_Styp}(b) shows a very good agreement between the
numerical results (symbols) and the expected behaviors (Eqs.~\eqref{eq_S0bulk},~\eqref{eq_S0edge} and~\eqref{eq_S0TB}). One consequence of these
analytical predictions is that the peak in the thermopower curves gets higher and narrower as the disorder amplitude is decreased (and vice-versa).
\section{Thermopower distributions}
\label{section_distrib}
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=\columnwidth]{fig_distrib_lor.eps}
\caption{\label{fig_distribS_lor}
(Color online) Top panels: probability distributions of the rescaled thermopower $(S-S_0)/N$ at $V_g=0$ (a) and $V_g=2t$ (b), with $W=t$, $E_F=0$
and 1D leads. In each panel, the different symbols correspond to various lengths of the chain ($N\approx\xi$~({\tiny{\color{black}$\triangle$}}),
$N\approx 10\,\xi$~({\small{\color{green}$\circ$}}), $N\approx 50\,\xi$~({\scriptsize{\color{red}$\square$}}) and
$N\approx 100\,\xi$~({\tiny{\color{red}$\blacksquare$}}), respectively $N=100$, $1000$, $5000$ and $10000$ in (a) and
$N=10$, $100$, $500$ and $1000$ in (b). The distributions obtained for $N\geq 50\,\xi$ collapse on a single curve which is well fitted by a
Lorentzian distribution function (thick blue lines). The widths $\Lambda/N$ of the Lorentzian fits are plotted as a function of $V_g$ in panel
(c), for $N=200$ ({\scriptsize{\color{magenta}$\square$}}), $1000$ ({\scriptsize{\color{green}$\blacklozenge$}}), $5000$
({\Large{\color{red}$\circ$}}) and $10000$ ({\large{\color{blue}$\bullet$}}), together with the density of states per site at $E_F$,
$t\nu_F$, of the closed chain (red line). The probability distributions of the rescaled thermopower $(\Delta_F/2\pi t)(S-S_0)$, obtained in the
large $N$ limit ($N\approx 100\,\xi$) and for various sets of parameters ($W=0.5t$ and $V_g=2t$~({\large{\color{red}$\diamond$}}), $W=t$ and
$V_g=0$~({\Large{\color{black}$\circ$}}), $W=t$ and $V_g=2t$~({\scriptsize{\color{green}$\square$}}), $W=2t$ and
$V_g=0$~({\small{\color{brown}$\times$}}), and $W=2t$ and $V_g=2.3t$~({\scriptsize{\color{DarkGreen}$\blacktriangledown$}}), with
$E_F=0$ in all cases), are shown in panel (d). They all collapse on the blue line which is the Lorentzian function $y=1/[\pi(1+x^2)]$. }
\end{figure}
In the coherent elastic regime we consider, the sample-to-sample fluctuations of the thermopower around its typical value are expected to be large.
The most striking illustration occurs at the center of the impurity band of the nanowire ($E_F=V_g$), when the typical thermopower is zero due to
statistical particle-hole symmetry but the mesoscopic fluctuations allow for large thermopower anyway. Van Langen \textit{et al} showed
in Ref.~\cite{VanLangen1998} that in the localized regime $N\gg\xi$ without gate ($V_g=0$) and around the band center ($E_F\approx 0$), the
distribution of the low-temperature thermopower is a Lorentzian,
\begin{equation}
\label{eq_distrib_lor}
P(S)=\frac{1}{\pi}\frac{\Lambda}{\Lambda^2+(S-S_0)^2}\,,
\ee
with a center $S_0=0$ and a width
\begin{equation}
\label{eq_width_lor}
\Lambda=\frac{2\pi t}{\Delta_F}
\ee
given by $\Delta_F=1/(N\nu_F)$, the average mean level spacing at $E_F$. This was derived under certain assumptions leading to $S_0=0$.
As we have shown, $S_0=0$ is exact only at the impurity band center ($E_F=0$ when $V_g=0$) and remains a good approximation as long as
one stays in the bulk of the impurity band. But the distribution $P(S)$ is no longer centered around zero as one approaches the band edge.
\\
\indent We propose here to investigate how the thermopower distribution $P(S)$ is modified when not only the bulk, but also the edges (or
even the outside) of the impurity band are probed at the Fermi energy $E_F$. To fix the ideas, we set the Fermi
energy to $E_F=0$ and the disorder amplitude to $W=t$ (so that the band edges are $V_g+E_c^\pm=V_g\pm2.5t$). First, we check in
Fig.~\ref{fig_distribS_lor}(a) that at $V_g=0$ and in the localized regime, the thermopower distribution is indeed a Lorentzian with a width
$\Lambda\propto N$. We note that very long chains of length $N\approx 50\xi$ ($\xi\approx100$ here) are necessary to converge to the
Lorentzian~\eqref{eq_distrib_lor}. Moreover, we have checked that it is also in this limit that the delay time distribution converges towards
the universal form predicted in Ref.~\cite{Texier1999}.
Then we increase the gate potential up to $V_g=2t$ to approach the edge $E_c^-$ of the impurity band and find that the thermopower distribution
remains a Lorentzian in the localized regime ($N\gtrsim 50\xi$) with a width $\Lambda\propto N$, as shown in Fig.~\ref{fig_distribS_lor}(b). It
turns out actually that the fit of the thermopower distribution with a Lorentzian (in the large $N$ limit) is satisfactory in a broad range of
gate potentials $|V_g|\lesssim 2.25t$, as long as the Fermi energy $E_F=0$ probes the impurity band without approaching too closely
its edges $V_g+E_c^\pm$. In Fig.~\ref{fig_distribS_lor}(c), we show in addition that in this regime, the widths $\Lambda$ of the Lorentzian fits to
the thermopower distributions $P(S)$ obey $\Lambda/(2\pi Nt)=\nu_F$, \textit{i.e.} Eq.~\eqref{eq_width_lor}.
Therefore (Fig.~\ref{fig_distribS_lor}(d)), we can use this parameter to rescale all the distributions obtained in a broad range of parameters,
on the same Lorentzian function $y=1/[\pi(1+x^2)]$. A direct consequence of~Eq.~\eqref{eq_width_lor} is that the mesoscopic fluctuations of the
thermopower are maximal for $|E_F-V_g|\approx 2t$.\\
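In the bulk, this can be made explicit by inserting the clean density of states~\eqref{eq_dstOfStateBulk} into Eq.~\eqref{eq_width_lor}:
\[
\frac{\Lambda}{N} = 2\pi t\,\nu_F \approx \frac{1}{\sqrt{1-\left((E_F-V_g)/2t\right)^2}}\,,
\]
which indeed grows without bound as $|E_F-V_g| \to 2t$.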
\indent When the gate voltage $|V_g|$ is increased further, the number of states available at $E_F$ in the nanowire decreases exponentially and
eventually vanishes: one then approaches a regime where the nanowire becomes a long tunnel barrier and where
the thermopower fluctuations are expected to be smaller and smaller. In this limit, we find that the thermopower distribution is no longer a
Lorentzian but becomes a Gaussian,
\begin{equation}
\label{eq_distrib_gauss}
P(S)=\frac{1}{\sqrt{2\pi}\lambda}\exp\left[-\frac{(S-S_0)^2}{2\lambda^2}\right]\,,
\ee
provided the chain is long enough. This result is illustrated in Figs.~\ref{fig_distribS_gauss}(a) and~\ref{fig_distribS_gauss}(b) for
two values of $V_g$. The Gaussian thermopower distribution is centered around a typical value $S_0$ given by Eq.~\eqref{eq_S0TB} and its
width $\lambda$ is found with great precision to increase linearly with $\sqrt{N}$ and $W$. To be more precise, we find that the dependency of
$\lambda$ on the various parameters is mainly captured by the following formula
\begin{equation}
\label{eq_width_gauss}
\lambda\approx0.6\frac{Wt\sqrt{N}}{\left(E_F-V_g\right)^2-\left(2t+W/4\right)^2}\,,
\ee
at least for $0.5t\lesssim W \lesssim 4t$, $ 2.35t\lesssim |E_F-V_g|\lesssim 6t$ and $N\gtrsim 100$ (see Fig.~\ref{fig_distribS_gauss}(c)).
We stress that Eq.~\eqref{eq_width_gauss} is merely a compact way of describing our numerical data. In particular, the apparent divergence
of $\lambda$ when $|E_F-V_g|\to 2t+W/4$ is meaningless and in fact, it occurs outside the range of validity of the fit. To double-check the
validity of Eq.~\eqref{eq_width_gauss}, we have rescaled with the parameter $\lambda$ given by Eq.~\eqref{eq_width_gauss}, a set of thermopower
distributions obtained in the disordered tunnel barrier regime, for various $W$ and $V_g$. All the resulting curves (plotted in
Fig.~\ref{fig_distribS_gauss}(d)) are superimposed on the unit gaussian distribution, except the one for the smallest disorder value $W=0.5t$ for
which the fit~\eqref{eq_width_gauss} to $\lambda$ is satisfactory but not perfect.
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=\columnwidth]{fig_distrib_gauss.eps}
\caption{\label{fig_distribS_gauss}
(Color online) Top panels: probability distributions of the rescaled thermopower $(S-S_0)/\sqrt{N}$ at $V_g=2.35t$ (a) and $V_g=2.6t$ (b),
with $W=t$, $E_F=0$ and 1D leads. In each panel, the distributions are plotted for various lengths of the chain
($N=10$~({\normalsize{\color{black}$\ast$}}), $50$~({\tiny{\color{green}$\square$}}), $200$~({\small{\color{blue}$\bullet$}}),
$500$~({\normalsize{\color{blue}$\circ$}}) and $1000$~({\small{\color{blue}$\blacktriangle$}})) and collapse at large $N$ on one single curve,
well fitted by a Gaussian distribution (red line). The widths $\lambda/\sqrt{N}$ of the Gaussian fits are plotted as a function of $V_g$ in
panel (c), for various lengths ($N=50$ (triangle), $200$ (circle), $400$ (square), $800$ (diamond) and $1600$ (star)) and two disorder
amplitudes ($W=t$ (empty symbols) and $W=4t$ (full symbols)). The solid and dashed lines are the fits given by Eq.~\eqref{eq_width_gauss},
respectively for $W=t$ and $W=4t$. Panel (d): collapse of the thermopower distributions, obtained with $N=500$ and various parameters
($W=0.5t$ and $V_g=2.25t$ ({\tiny{\color{magenta}$\square$}}), $W=0.5t$ and $V_g=5t$ ({\small{\color{DarkGreen}$\blacktriangledown$}}),
$W=t$ and $V_g=2.5t$ ({\small{\color{black}$\bullet$}}), $W=t$ and $V_g=5t$ ({\normalsize{\color{green}$\ast$}}), and $W=4t$ and
$V_g=4t$ ({\scriptsize{\color{blue}$\lozenge$}})), after a rescaling by $\lambda$ as given in Eq.~\eqref{eq_width_gauss}. The red line is
the Gaussian distribution $y=(1/\sqrt{2\pi})\exp(-x^2/2)$.}
\end{figure}
To identify precisely the position of the crossover between the Lorentzian regime and the Gaussian regime, we introduce now the parameter
$\eta$,
\begin{equation}
\label{eq_df_eta}
\eta=\frac{\int dS|P(S)-P_G(S)|}{\int dS|P_L(S)-P_G(S)|}\,,
\ee
which measures, for a given thermopower distribution $P(S)$ obtained numerically, how close it is to its best Gaussian fit $P_G(S)$ and
to its best Lorentzian fit $P_L(S)$\footnote{One could be tempted to compare an arbitrary thermopower distribution $P(S)$ to the Lorentzian
and Gaussian distributions given in Eqs.~(\ref{eq_distrib_lor}\,-\,\ref{eq_width_lor}) and~(\ref{eq_distrib_gauss}\,-\,\ref{eq_width_gauss})
respectively. However, to define $\eta$ for any set of parameters, one should extend to the outside of the spectrum the
formula~\eqref{eq_width_lor} for the width $\Lambda$ of the Lorentzian, and to the inside of the spectrum the formula~\eqref{eq_width_gauss}
for the width $\lambda$ of the Gaussian. We avoid this problem by taking instead the best Lorentzian and Gaussian fits to $P(S)$ in the definition
of $\eta$. It allows us to distinguish whether $P(S)$ is a Lorentzian or a Gaussian (or neither) but of course, the precise form of $P(S)$ is
not probed by $\eta$ as defined.}. If $P(S)$ is a Lorentzian, $\eta=1$ while $\eta=0$ if it is a Gaussian. Considering first the case where $E_F=0$
and $W=t$, we show in the left panel of Fig.~\ref{fig_eta} that $\eta$ converges at large $N$ for any $V_g$ (inset). The asymptotic values of
$\eta$ (given with a precision of the order of $0.05$ in the main panel) undergo a transition from $\eta\approx 1$ to $\eta\approx 0$ when $V_g$ is
increased from $0$ to $4t$. This reflects the crossover from the Lorentzian to the Gaussian thermopower distribution already observed in the top
panels of Figs.~\ref{fig_distribS_lor} and~\ref{fig_distribS_gauss}. We see in addition that the crossover is very sharp around the value
$V_g\approx 2.3t$, indicating a crossover which remains inside the impurity band of the infinite nanowire, since the band is not shifted enough when
$V_g\approx 2.3t$ to make the Fermi energy coincide with the band edge $V_g+E_c^- = V_g-2.5t$. We have obtained the same results for other values of
the disorder amplitude. After checking the convergence of $\eta$ at large $N$, we observe the same behavior of the asymptotic values of $\eta$ as
a function of $V_g$, for any $W$. Only the position of the crossover is disorder-dependent. Those results are summarized in the right panel of
Fig.~\ref{fig_eta} where one clearly sees the crossover (in white) between the Lorentzian regime (in blue) and the Gaussian regime (in red).
It occurs around $V_g\approx 1.92t+0.34W$, not exactly when $E_F=V_g+E_c^-$, but in a region where the number of states available at $E_F$ in the
nanowire becomes extremely small. To be precise, we point out that the values of $\eta$ in the 2D colorplot are given with a precision of the
order of $0.1$. Hence, one cannot exclude that the white region corresponding to the crossover actually reduces into a single line $V_g^c(W)$.
One could also conjecture the existence of a third kind of thermopower distribution (neither Lorentzian nor Gaussian) associated with this critical
value $V_g^c$. Our present numerical results do not allow us to favor one scenario (sharp crossover) over the other (existence of a critical edge
distribution).\\
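In practice, $\eta$ can be estimated as in the following sketch (ours; because of the heavy Lorentzian tails, the binning and fit windows require more care than shown here):
\begin{verbatim}
# Sketch (ours) of the eta diagnostic: fit the sampled thermopower
# histogram by its best Lorentzian and Gaussian, then form the ratio
# of L1 distances defined in the text.
import numpy as np
from scipy.optimize import curve_fit

def lorentz(s, s0, lam):
    return lam / (np.pi * (lam**2 + (s - s0)**2))

def gauss(s, s0, lam):
    return np.exp(-(s - s0)**2 / (2 * lam**2)) / (np.sqrt(2*np.pi) * lam)

def eta(samples, bins=200):
    p, edges = np.histogram(samples, bins=bins, density=True)
    s = 0.5 * (edges[1:] + edges[:-1])
    # initial guess: median and half the interquartile range
    guess = [np.median(samples),
             0.5 * np.subtract(*np.percentile(samples, [75, 25]))]
    pL, _ = curve_fit(lorentz, s, p, p0=guess)
    pG, _ = curve_fit(gauss, s, p, p0=guess)
    num = np.trapz(np.abs(p - gauss(s, *pG)), s)
    den = np.trapz(np.abs(lorentz(s, *pL) - gauss(s, *pG)), s)
    return num / den
\end{verbatim}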
\begin{figure}
\centering
\begin{psfrags}
\psfrag{XXX}{\footnotesize{$W$ (in unit of $t$)}}
\psfrag{YYY}{\footnotesize{$V_g$ (in unit of $t$)}}
\psfrag{Y}{\footnotesize{$\eta$}}
\includegraphics[keepaspectratio,trim = 0mm 0mm 2mm 3mm, clip, width=\columnwidth]{fig_eta.eps}
\end{psfrags}
\caption{\label{fig_eta}
(Color online) Left panel: in the inset, $\eta$ parameter as a function of $N/\xi$ for various gate voltages
($V_g=1.9\,t$~($\circ$), $2.35\,t$~({\tiny{\color{blue}$\square$}}) and $2.5\,t$~({\color{violet}$\diamond$})), at $E_F=0$ and $W/t=1$.
The horizontal lines show the convergence of $\eta$ at large $N$. The asymptotic values are plotted in the main panel as a function of $V_g$.
Right panel: $\eta$ parameter in the limit of large $N$ as a function of $V_g$ and $W$, at $E_F=0$. Upon shifting the spectrum of the nanowire
with $V_g$, the thermopower distribution moves from a Lorentzian distribution for $V_g\lesssim V_g^c$ ($\eta\approx 1$, blue) to a Gaussian
distribution for $V_g\gtrsim V_g^c$ ($\eta\approx 0$, red), where $V_g^c=1.92t+0.34W$ (dashed line).}
\end{figure}
\section{Temperature range of validity of the Sommerfeld expansion}
\label{section_Tc}
All the results discussed in this paper have been obtained in the low temperature limit, after expanding the thermoelectric coefficients to
the lowest order in $k_BT/E_F$. To evaluate the temperature range of validity of this study, we have calculated the Lorenz number
$\mathcal{L}=K_e/GT$ beyond the Sommerfeld expansion, and looked at its deviations from the WF law $\mathcal{L}=\mathcal{L}_0$
(see Eq.~\eqref{eq_WFlaw}): We have computed numerically the integrals~\eqref{eq_coeffLi} entering Eqs.~\eqref{eq_dfG} and~\eqref{eq_dfkappa},
deduced $\mathcal{L}(T)$ for increasing values of temperature, and then recorded the temperature $T_s$ above which $\mathcal{L}(T)$ differs from
$\mathcal{L}_0$ by a percentage $\epsilon$, $\mathcal{L}(T_s)=\mathcal{L}_0(1\pm\epsilon)$. We did it sample by sample and deduced the temperature
$T_s$ averaged over disorder configurations. Our results are summarized in Fig.~\ref{fig_Tc}.\\
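Schematically, the extraction of $T_s$ can be implemented as follows (a sketch of ours, built on the coefficients() helper sketched at the end of Sec.~\ref{section_LB}):
\begin{verbatim}
# Sketch (ours) of the T_s extraction: raise T until the Lorenz
# number deviates from L_0 by eps.  Units e = k_B = 1, L_0 = pi^2/3.
import numpy as np

def T_s(transmission, E_F, eps_tol, T_grid):
    L0_WF = np.pi**2 / 3.0
    for T in T_grid:                       # increasing temperatures
        G, S, Ke, L = coefficients(transmission, E_F, T)
        if abs(L / L0_WF - 1.0) > eps_tol:
            return T                       # first deviation by eps
    return T_grid[-1]                      # no deviation found
\end{verbatim}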
\indent In panel~(a), we analyze how sensitive $T_s$ is to the precision $\epsilon$ on the Lorenz number $\mathcal{L}$. We find that $T_s$
increases linearly with $\sqrt{\epsilon}$, $T_s(\epsilon)=T_s^*\sqrt{\epsilon}$, at least for $\epsilon\leq 2\%$. This is not surprising since
the Sommerfeld expansion leads to $\mathcal{L}-\mathcal{L}_0\propto (k_BT)^2$, when one does not stop the expansion to the leading order in
temperature ($\mathcal{L}=\mathcal{L}_0$) but to the next order.\\
\indent The main result of this section is shown in Fig.~\ref{fig_Tc}(b) where we have plotted the temperature $T_s$ as a function of the gate
voltage $V_g$, for chains of different lengths, at fixed $E_F=0$ and $W=t$. As long as the Fermi energy probes the inside of the spectrum without
approaching its edges too closely ($|V_g|\leq 2t$), $T_s$ is found to decrease as $V_g$ is increased. More precisely, we find in the large $N$ limit
($N\gtrsim 10\xi$) that $Nk_BT_s\propto \nu_F^{-1}$ with a proportionality factor depending on $\epsilon$ (solid line in Fig.~\ref{fig_Tc}(b)).
The temperature $T_s$ is hence given by (a fraction) of the mean level spacing at $E_F$ in this region of the spectrum ($k_BT_s\propto\Delta_F$).
When $V_g$ is increased further, $T_s$ reaches a minimum around $|V_g|\approx 2.1t$ and then increases sharply. Outside the spectrum, this increase
of $T_s$ with $V_g$ is well understood as follows: Since in the tunnel barrier regime, the transmission behaves (upon neglecting the disorder effect)
as $\mathcal{T}\propto\exp(-N\zeta)$, with $\zeta=\cosh^{-1}[|E-V_g|/(2t)]$, the temperature scale below which the Sommerfeld expansion of
integrals~\eqref{eq_coeffLi} holds is given by $k_BT_s\propto[N\left.\frac{\mathrm{d}\zeta}{\mathrm{d}E}\right|_{E_F}]^{-1}$, which yields
$Nk_BT_s\propto t\sqrt{[(E_F-V_g)/(2t)]^2-1}$. Our numerical results are in perfect agreement with this prediction (dashed line in
Fig.~\ref{fig_Tc}(b)).\\
\indent In Fig.~\ref{fig_Tc}(c), we investigate the behavior of $T_s$ when the spectrum of the nanowire is either scanned by varying $V_g$ at
$E_F=0$ or by varying $E_F$ at $V_g=0$. We find that $T_s$ only depends on the part of the impurity band which is probed at $E_F$ (\textit{i.e.}
the curves $T_s(V_g)$ and $T_s(E_F)$ are superimposed), except when $E_F$ approaches closely one edge of the conduction band of the leads. In
that case, $T_s$ turns out to drop fast to zero as it can be seen in Fig.~\ref{fig_Tc}(c) for the case of 1D leads ($T_s\to 0$ when $E_F\to 2t$).
This means that the divergence of the \textit{dimensionless} thermopower $S$ observed in Fig.~\ref{fig_Styp}(a) is only valid in an infinitely
small range of temperature above $0\,\mathrm{K}$. It would be worth figuring out whether or not a singular behavior of the thermopower at the band
edges of the conduction band persists at higher temperatures.\\
\indent Let us finally give an order of magnitude in Kelvin of the temperature scale $T_s$. In Fig.~\ref{fig_Tc}(b), the lowest $T_s$ reached
around $V_g\approx 2.1t$ is about $Nk_BT_s^{min}/t\sim 0.001$ for $\epsilon=0.004\%$. Asking for a precision of $\epsilon=1\%$ on $\mathcal{L}$,
we get $Nk_BT_s^{min}/t\sim 0.016$. For a bismuth nanowire of length $1\,\mu\mathrm{m}$ with effective mass $m^*=0.2m_e$ ($m_e$ the electron mass)
and lattice constant $a=4.7$\,\AA, the hopping term evaluates to $t=\hbar^2/(2m^*a^2)\sim 0.84\,\mathrm{eV}$ and hence
$T_s^{min}\sim 72\,\mathrm{mK}$. The same calculation for a silicon nanowire of length $1\,\mu\mathrm{m}$ with $m^*=0.2m_e$ and
$a=5.4$\,\AA~yields $T_s^{min}\sim 64\,\mathrm{mK}$. Such temperatures being commonly reached in the laboratory, the results discussed in this paper
should be amenable to experimental checks.
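\indent These estimates follow from elementary arithmetic, as the short sketch below (illustrative only) makes explicit; it reproduces the quoted values up to rounding of the inputs.
\begin{verbatim}
# Sketch: t = hbar^2/(2 m* a^2) and T_s^min from N kB T_s^min/t ~ 0.016
# (the epsilon = 1% value quoted above), for a wire of length 1 micron.
hbar, me, kB, e = 1.0546e-34, 9.109e-31, 1.381e-23, 1.602e-19

for name, a in [("Bi", 4.7e-10), ("Si", 5.4e-10)]:
    t = hbar**2/(2*0.2*me*a**2)      # hopping energy (J), m* = 0.2 m_e
    N = 1e-6/a                       # number of sites
    Ts = 0.016*t/(N*kB)              # in Kelvin
    print(name, "t =", round(t/e, 2), "eV; T_s^min =",
          round(Ts*1e3), "mK")
\end{verbatim}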
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=\columnwidth]{fig_Tc2.eps}
\caption{\label{fig_Tc}
(Color online) Temperature scale $T_s$ above which the WF law breaks down. (a)~$Nk_BT_s/t$ as a function of the desired precision
$\epsilon$ on $\mathcal{L}$. The critical temperatures were extracted for different values of $V_g$ ($V_g=t$~({\large{\color{black}$\circ$}}),
$1.5t$~({\tiny{\color{red}$\square$}}), $2.02t$~({\tiny{\color{green}$\triangle$}})), with $E_F=0$, $W/t=1$, $N=500$ and 1D leads. The
solid lines are fits $T_s=T_s^*\sqrt{\epsilon}$. (b)~$Nk_BT_s/t$ (extracted for $\epsilon=4\times 10^{-5}$) as a function of $V_g/t$, for
chains of different length ($N=150$~({\large{\color{black}$\circ$}}), $300$~({\scriptsize{\color{magenta}$\triangle$}}),
$500$~({\normalsize{\color{green}$\ast$}}), $1500$~({\small{\color{blue}$\square$}}) and $3000$~({\tiny{\color{red}$\blacksquare$}})),
with $E_F=0$, $W/t=1$ and 1D leads. The solid line is $4.04\times 10^{-4}/(\nu_F t)$, the dashed line is $4.37\times 10^{-3}\sqrt{(V_g/2t)^2-1}$
and the arrow indicates the position of the edge of the impurity band. (c)~$Nk_BT_s/t$ (extracted for $\epsilon=4\times 10^{-5}$) as a function
of $E_F/t$ at $V_g=0$ ({\scriptsize{\color{red}$\square$}}) and as a function of $V_g/t$ at $E_F=0$ ({\large{\color{black}$\circ$}}), with $N=150$,
$W/t=1$ and 1D leads. Dashed lines are guides to the eye.}
\end{figure}
\section{Conclusion}
\label{section_ccl}
We have systematically investigated the low-temperature behavior of the thermopower of a single nanowire, gradually depleted with a gate voltage
in the field effect transistor device configuration. Disorder-induced quantum effects, unavoidable in the low-temperature coherent regime, were
properly taken into account. We have provided a full analytical description of the behavior of the typical thermopower as a function of the gate
voltage and have confirmed our predictions by numerical simulations. Our results show that the typical thermopower is maximized when the Fermi
energy lies in a small region inside the impurity band of the nanowire, close to its edges. Moreover, since thermoelectric conversion strongly
varies from one sample to another in the coherent regime, we have carefully investigated the mesoscopic fluctuations of the thermopower around
its typical value. We have shown that the thermopower is Lorentzian-distributed inside the impurity band of the nanowire and that its fluctuations
follow the behavior of the density of states at the Fermi energy when the gate voltage is varied. In the vicinity of the edges of the impurity band
and outside the band, the thermopower was found Gaussian-distributed with tiny fluctuations.\\
\indent The thermopower enhancement which we predict around
the edges is in qualitative agreement with the recent experimental observation reported in Ref.~\cite{Brovman2013}, using silicon and
germanium/silicon nanowires in the field effect transistor device configuration. We stress, however, that those measurements were carried out
at room temperature, and not in the low temperature coherent regime which we consider. To describe them, inelastic effects must be included.
It will be the purpose of our next paper~\cite{Bosisio2013}. The low temperature coherent regime considered in this paper has been studied
in Ref.~\cite{Poirier1999}, where the conductances $G$ of half a micron long Si-doped GaAs nanowires have been measured at $T=100\,\mathrm{mK}$ in the field
effect transistor device configuration. Assuming Eq.~\eqref{eq_SeebeckMott} for evaluating the thermopower $\mathcal{S}$ from $\ln G(V_g)$, the typical
behavior and the fluctuations of $\ln G(V_g)$ given in Ref.~\cite{Poirier1999} are consistent with the large enhancement of $\mathcal{S}$ near the band
edges which we predict.\\
\indent Electron-electron interactions were not included in our study. A comprehensive description of the thermopower of a 1D disordered
nanowire should definitely consider them. Nevertheless, first, we expect that the drastic effects of electronic correlations in 1D leading to the formation
of a Luttinger liquid are somewhat mitigated by the presence of disorder. Second, the gate modulation of the thermopower we predict here is mainly
due to a peculiar behavior of the localization length close to the edges of the impurity band, and experimentally, coherent electronic transport
in gated quasi-1D nanowires turned out to be well captured with one-electron interference models~\cite{Poirier1999}. Of course, one could think of
including electronic interactions with appropriate numerical 1D methods but, regarding the issue of thermoelectric conversion in nanowires,
we believe the priority rather lies in a proper treatment of the phonon activated inelastic regime.\\
\indent Finally, let us discuss the potential of our results for future nanowire-based thermoelectric applications. To evaluate the efficiency of
the thermoelectric conversion~\cite{Callen} in a nanowire, one also needs to know its thermal conductance $K$. Below the temperature $T_s$, the electron
contribution $K_e$ to $K$ is related to the electrical conductance $G$ by the WF law. This gives $(\pi^2k_B^2 T)/(3h)\, [2 \exp\{-2N/\xi\}]$
for the typical value of $K_e$. The evaluation of the phonon contribution $K_{ph}$ to the thermal conductance of a nanowire is beyond the scope
of the Anderson model used here, since {\it static} random site potentials are assumed. In one dimension, one can expect $K_{ph}$
to be also much smaller than the thermal conductance quantum $(\pi^2k_B^2 T)/(3h)$ which characterizes the ballistic phonon regime~\cite{Pendry1983,Kirczenow1998}.
However, it remains unlikely that $K$ could be made as exponentially small as $G$, as would be required for a large figure of merit $ZT$ in a single insulating nanowire
at low temperature.\\
\indent Similarly, if we were to look at the delivered (electric) output power, we would find that a large length $N$ makes it
vanish, as the electrical conductance in this regime is exponentially small. Indeed, looking at the power factor $\mathcal{Q}=\mathcal{S}^2G$,
which is a measure of the maximum output power~\cite{VandenBroeck2005}, we realize that the enhancement of $\mathcal{S}$ at the edge of the impurity
band would not be enough to compensate the exponentially small values of $G$. Obviously, the optimization of the power factor $\mathcal{Q}$ for a single
nanowire requires shorter lengths ($N\approx\xi$), while the optimization of the thermopower $\mathcal{S}$ requires large sizes
($N\gg\xi$). Moreover, because of the strong variation of the localization length as the energy varies inside the impurity band, the optimization
of the power factor for a given size $N$ also requires not getting too close to the edges of the impurity band. This illustrates the fact that a
compromise always has to be found when thinking of practical thermoelectric applications. A way to optimize both the efficiency and the output power could
consist in taking a large array of nanowires in parallel instead of a single one. Since the conductances $G$ in parallel add while the thermopower
$\mathcal{S}$ does not scale with the number of wires (at least if we take for $\mathcal{S}$ its typical value, neglecting the sample to sample
fluctuations), the compromise could favor the limit of long nanowires with applied gate voltages such that electron transport occurs near the edges
of impurity bands. Nowadays, it is possible to grow more than $10^8$ InAs nanowires~\cite{Persson2009} per $\mathrm{cm}^2$, a large number which could balance
the smallness of the conductance of an insulating nanowire.\\
\indent Actually, when thinking of practical applications, the results of the present paper are
rather promising regarding Peltier refrigeration. Indeed, our conclusions drawn here for the thermopower at low temperature also hold for the Peltier
coefficient, the two being related by the Kelvin-Onsager relation $\Pi=\mathcal{S}T$. One could imagine building Peltier modules with doped nanowires
for cooling down a device at sub-Kelvin temperatures in a coherent way. Besides, whether it be for energy harvesting or Peltier cooling, it would be
worth considering more complicated setups using the nanowire as a building block (e.g. arrays of parallel nanowires in the field effect transistor
device configuration) in order to reach larger values of output electric/cooling power.
\acknowledgments{Useful discussions with G. Benenti, O. Bourgeois, C. Gorini, Y. Imry, K. Muttalib and H. Shtrikman are gratefully acknowledged. This work has been supported by CEA through the DSM-Energy Program (project E112-7-Meso-Therm-DSM).}
\section{Introduction}
An interesting recent development in nonequilibrium statistical mechanics is to study the motion of ``probes'' in contact with a nonequilibrium environment \cite{simi,jsp,stef,carlo}. The probe can be a colloid, a real additional particle immersed in the bath or, more abstractly, the probe can also represent a macroscopic variable which is moving on a slower time-scale than the relevant bath degrees of motion. The bath consists of a huge number of particles that interact with the probe via some macroscopic interaction, i.e., involving many particles. While the problem is thus formally identical with deriving Brownian motion or Langevin dynamics \cite{vK,bha}, in the present study the bath is out of thermal equilibrium (even before the probe disturbs it). The bath-particles are driven and dissipate energy in yet another (now equilibrium) environment at fixed temperature. The question is to find the relevant changes in systematic and fluctuating forces on the probe due to the nonequilibrium condition of the bath.\\
There are many realizations of such systems having three levels (probe, nonequilibrium medium and thermal environment) of description, but much remains to be explored when the medium is out of equilibrium. Here we take a bath dynamics where the nonequilibrium is due to random resettings of the particles at a position $A$. Stochastic resetting \cite{ss} has recently been studied in connection with algorithmic searches \cite{ma2,ma1,ma3,ma4,rold,pal,pal2} or as an elementary solvable example of a system with an out-of-equilibrium steady state \cite{seifert}. Here the resetting stands for the nonequilibrium aspect of the bath and we think of our model as being similar to a bath in a randomly pulsating volume, or a gas that is maintained out of equilibrium by random kicks. The technical advantage of the present set-up is that between any two resettings each particle undergoes an undriven motion in contact with the probe. We can calculate basically everything exactly for a linear version of the bath with resettings, which thus represents a useful reference case for further explorations. Despite its simplicity, we find that the model exhibits an interesting phenomenology. In a first regime we find that the resetting induces an equilibrium-like dynamics on the probe, although it already differs from the pure equilibrium case: the friction on the probe is reduced by the resetting and the fluctuation--dissipation relation is broken. That motivates the introduction of an effective temperature in that case. In a second regime the influence of the nonequilibrium becomes more severe: the noise felt by the probe becomes nonGaussian, and can even exhibit power-law distributed jumps (third regime). That occurs when the bath would be unstable in the absence of resetting: the resetting stabilizes the bath particles against an inverted harmonic well, a mechanism that produces heavy tails in the position distribution of the bath particles. \\
We start in the next section with the model, and point out what quantities in the probe-bath system are relevant for the induced motion. The linearity of the model replaces the more general regime of linear response around the bath nonequilibrium steady condition, as was outlined first in \cite{jsp}.
In Section \ref{mre} we give the main results in the scaling regime of infinite baths. The detailed calculations follow in Section \ref{Sec:SingleParticleBathCase} with the rescaling in Section \ref{Sec:ManyParticleBath}.
\section{Model and questions}
We start with the set-up which will provide the logic of the arguments and will point to the calculations that need to be performed.
\subsection{Coupled dynamics}
For simplicity we use one-dimensional notation.\\
A point probe at position $q_t\in {\mathbb R}$ interacts with a large number $N$ of bath particles at positions $x^i_t\in {\mathbb R}$
following the potential,
\begin{eqnarray}
\text{U}(x,q) = \sum_{i} U(x^i , q) \quad , \quad U(x,q) = \frac{a_1}{2} q^2 + \frac{a_2}{2} x^2 + \frac{a_3}{2} (x-q)^2 \, .\label{ene}
\end{eqnarray}
All interactions are linear and the probe is confined by taking $a_1 + a_3 >0$. The parameters $a_2$ and $a_3$ are not necessarily both positive, and we will be interested in the case where $b=a_2+a_3 < 0$.
The probe couples to the bath only through the collective coordinate $ \sum_{i=1}^N x^i$, so that we may think of $q$ as being trapped near the center of mass of the bath when $a_3>0$ is large.\\
The bath dynamics is described by an overdamped diffusion with random resetting \cite{ss,ma2,ma1,ma3,ma4,rold}. For the latter we pick a rate $r\geq 0$ to select a sequence of random times $t_k^i$, independently for each bath particle, with waiting times $t_{k+1}^i - t_k^i$ exponentially distributed with rate $r$. At these times we reset the position of that particle to a fixed position $A$. In other words, at each of these times, the particle, then at position $x^i$, jumps instantaneously to $x=A$. The equation of motion is therefore, for $i=1,\ldots, N$,
\begin{equation}\label{odm}
\gamma \dot{x}^i_t = - a_2\, x^i_t - a_3\,(x^i_t-q_t)+ \sqrt{2 \gamma T}\, \xi^i_t + \sum_{k} \gamma \delta(t-t_k^i) (A - x_t^i) \,
\end{equation}
with damping coefficient $\gamma$ and environment temperature $T$.
There is no direct mutual interaction between the bath particles but there is with the probe at position $q_t$ at time $t$.\\
The probe dynamics is Newtonian,
\begin{equation}\label{pd}
M \ddot{q}_t - g_t(\dot{q}_t,q_t) = - a_1 q_t - a_3 \sum_{i=1}^{N} (q_t- x_t^i ) \,
\end{equation}
where $g_t$ is an additional arbitrary time-dependent force, possibly also random but which plays no further role for the present paper. The objective is to ``integrate out'' the $N$ bath particles, where we will decide later about the time ($t$) and energy scale ($M$) to be considered. \\
\begin{figure}
\centerline{\includegraphics[width=12cm]{mainfig2}}
\caption{A probe (purple) at position $q$ interacts linearly with a large number of bath particles (red) and an external parabolic well (that can be replaced by an arbitrary potential). The bath particles feel a parabolic well (eventually repulsive) centered around the origin $x=0$, are in contact with an external (equilibrium) bath at temperature $T$ and are driven out of equilibrium by a resetting process: with rate $r$ the position of each bath particle is independently reset to the position $x=A$.}
\label{fig:Main}
\end{figure}
The equations \eqref{odm}--\eqref{pd} specify the coupled model dynamics. See Fig.~\ref{fig:Main} for a cartoon of the coupled system. The initial condition is far in the past, say at time $-\infty$, so that we only care here for the stationary nonequilibrium condition. When the probe is released and interacts with the bath the particles there back-react to its motion providing the statistical forces that determine the induced law of the probe's stochastic motion.
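The coupled dynamics \eqref{odm}--\eqref{pd} is also straightforward to simulate directly. The following minimal Python sketch (parameter values are illustrative, not tied to any result of this paper) integrates it with an Euler--Maruyama scheme, implementing each resetting as a jump occurring with probability $r\,\textrm{d} t$ per time step.
\begin{verbatim}
# Sketch: direct simulation of the probe + resetting-bath dynamics.
import numpy as np

rng = np.random.default_rng(0)
N, a1, a2, a3 = 100, 1.0, 0.5, 0.5
gamma, T, M, r, A = 1.0, 1.0, 1.0, 0.2, 0.0
dt, nsteps = 1e-3, 200000

x = np.full(N, A)                    # bath positions
q, p = 0.0, 0.0                      # probe position and momentum
for _ in range(nsteps):
    # overdamped bath step
    F = -a2*x - a3*(x - q)
    x += (F/gamma)*dt + np.sqrt(2*T*dt/gamma)*rng.standard_normal(N)
    # resetting: each particle independently with probability r*dt
    x[rng.random(N) < r*dt] = A
    # Newtonian probe step (g_t = 0)
    p += (-a1*q - a3*np.sum(q - x))*dt
    q += (p/M)*dt
\end{verbatim}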
\subsection{The induced motion}
Let $\langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle$ denote the average bath-particle position given the probe's history from arbitrarily far in the past. For obtaining the probe's induced motion from \eqref{pd} we use the strategy of \cite{jsp} and we start by the decomposition,
\begin{equation}\label{deo}
x_t^i = \langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle + \eta_t^i, \quad \eta_t^i:= x_t^i - \langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle \, ,
\end{equation}
formally leading to a reduced probe dynamics (assuming $g_t\equiv0$ in \eqref{pd}),
\begin{eqnarray}\label{deo2}
M \ddot{q_t} = -(a_1 +N a_3) q_t + a_3 N \langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle + a_3 \sum_{i=1}^N \eta_t^i \, .
\end{eqnarray}
The physics behind that decomposition is that $\langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle$ corresponds to the systematic force exerted by the out-of-equilibrium bath on the probe, containing notably the thermodynamic force and the friction, while $\eta_t^i$ is the fluctuating force.
The noise $\eta_t^i$, with mean zero, is the result of a last resetting of the $i$-th particle at a random time $s(t)<t$ after which it evolves under the thermal noise $\xi_{t'}, s(t)\leq t'\leq t$, following the equation
\begin{equation}\label{odma}
\gamma \dot{x}^i_t = - a_2\, x^i_t - a_3\,(x^i_t-q_t)+ \sqrt{2 \gamma T}\, \xi^i_t
\end{equation}
(equation \eqref{odm} without last sum).
Under some scaling limit the sum of noises in \eqref{deo2} will converge to the noise on the probe. That noise is Gaussian when $a_2 + a_3 +\gamma\,r/2 >0$, and the covariance will be calculated explicitly in Section \ref{var}. If not (a case that only exists in the presence of the nonequilibrium, $r>0$), the second moment in the stationary distribution of the bath particles diverges and the noise is another stable distribution (requiring another scaling procedure). In the most severe case $a_2 + a_3 +\gamma\,r \leq 0$ the average value $\langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle$ also diverges and we have to abandon the decomposition \eqref{deo}; the noise on the probe then exhibits power-law distributed jumps. Details will be given in the next section. \\
For estimating $\langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle$ (when it exists), we use that for $r>0$ each bath-particle has been reset at $A$ with probability one, so that
\begin{equation}\label{est}
\left\langle x_t^i | \{q_{t'}\}^t_{-\infty}\right\rangle = \int_{-\infty}^t \textrm{d} s \,r\,e^{-(t-s)r}\,\langle x_t^i | \{q_{t'}\}^t_{s}\rangle^0 \, ,
\end{equation}
where the time $s=s(t)$ in the integration stands for the last resetting time of the $i$-th particle and the last expectation $\langle \cdot\rangle^0$ in the integrand is for the undriven dynamics
\eqref{odma} started at time $s$ with $x_s^i=A$. We thus need to know the right-hand side of \eqref{est}, which can be computed exactly for this linear dynamics.\\
The main interest is to have an exactly solvable model of a probe motion in contact with a nonequilibrium bath. The typical question that arises here is what the possible differences from the equilibrium case are, in terms of friction, noise and stability.
\section{Main results}\label{mre}
We present here the main findings; the remaining sections give the detailed derivations.
\subsection{Stationary bath distribution}
Assuming that the probe is fixed at some position $q_t=q$ in \eqref{odm}, the position of the bath particle reaches at large times the stationary distribution\footnote{The (non-stationary) distribution of $x_t$ conditioned on the probe history is obtained by replacing $q \frac{\lambda}{b} (1 - e^{-\frac{\tau b}{\gamma}}) \to \frac{\lambda}{\gamma} \int_{0}^\tau dt' q_{t-t'} e^{-\frac{t' b}{\gamma}}$ in the exponential in \eqref{Eq:ProbabilityDistribution}.},
\begin{eqnarray} \label{Eq:ProbabilityDistribution}
p_\text{stat}(x|q) = \int_{0}^{\infty} r \,\textrm{d} \tau \,e^{-r\tau} \frac{1}{\sqrt{2\pi\, T \,(1- e^{- 2\frac{\tau b}{\gamma}})/b}} \exp\{- \frac{(x-A e^{- \frac{\tau b}{\gamma}} - q \frac{\lambda}{b} (1 - e^{-\frac{\tau b}{\gamma}}))^2}{ 2T (1- e^{- 2\frac{b\tau}{\gamma}})/b}\} \, ,
\end{eqnarray}
where we have introduced $b=a_2+a_3$ and $\lambda=a_3$. That defines a probability distribution whenever the resetting rate $r>0$, even if $b \leq 0$. In that last case the bath particle is stabilized by the resetting. Indeed, irrespective of the sign of $b$, the argument of the last exponential in \eqref{Eq:ProbabilityDistribution} converges to a constant for large $\tau$, and the convergence is thus ensured by the first exponential $e^{-r\tau}$. We note that the distribution \eqref{Eq:ProbabilityDistribution} was already obtained in \cite{pal} where stochastic resetting in various potential landscapes was considered. For $x\rightarrow A$, there is always a cusp. The resetting induces a jump in the first derivative: the coefficients $c_{\pm} = \lim_{x \to A^\pm} \partial_x p_\text{stat}(x|q ,A)$ satisfy $c_+ - c_- = -\frac{r \gamma}{T}$, independently of all the other parameters but with in general $c_+ \neq - c_-$. For $q=0=A$, $p_\text{stat}(x=0|q = 0 ,A=0) - p_\text{stat}(x|q = 0 ,A=0) \simeq \frac{r \gamma}{2T} |x| $. That follows from studying the stationary Fokker-Planck equation satisfied by $p_\text{stat}$.\\
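As a consistency check, \eqref{Eq:ProbabilityDistribution} can be compared with a histogram obtained from a direct simulation of a single resetting particle at fixed $q$. The sketch below (with arbitrary parameter values) does this in the case $b<0$, where the resetting is what stabilizes the particle.
\begin{verbatim}
# Sketch: Monte-Carlo check of the stationary distribution.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
b, lam, gamma, T, r, q, A = -0.5, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0

def p_stat(x):                       # numerical evaluation of the
    f = lambda tau: (r*np.exp(-r*tau)            # integral above
        /np.sqrt(2*np.pi*T*(1 - np.exp(-2*tau*b/gamma))/b)
        *np.exp(-(x - A*np.exp(-tau*b/gamma)
                  - q*lam/b*(1 - np.exp(-tau*b/gamma)))**2
                /(2*T*(1 - np.exp(-2*tau*b/gamma))/b)))
    return quad(f, 0, np.inf, limit=200)[0]

dt, x, samples = 1e-3, A, []
for i in range(400000):
    x += (-b*x + lam*q)/gamma*dt + np.sqrt(2*T*dt/gamma)*rng.normal()
    if rng.random() < r*dt:
        x = A
    samples.append(x)
# np.histogram(samples, density=True) then matches p_stat(x)
\end{verbatim}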
The bulk and asymptotic properties of that nonequilibrium distribution however strongly depend on the sign of $b$. Remember that $b=a_2 + a_3$ and $bx^2/2$ in \eqref{ene} gives the harmonic (anti-)well for the bath particles near the origin.\\
Consider first the case $q=A=0$ and $b \neq 0$. If $b>0$, then one gets a Gaussian decay $p_\text{stat}(x|q = 0 ,A=0) \sim e^{-b\frac{x^2}{2T}}$ (up to subdominant terms) which is the same as in equilibrium ($r=0$).\\
If $b <0$ (where the bath would be unstable under \eqref{odma}), then,
\begin{eqnarray} \label{Eq:FatTail}
p_\text{stat}(x|q = 0 ,A=0) \sim_{|x| \to \infty} G \frac{\gamma r/|b|}{(2T/|b|)^{\frac{\gamma r}{2b}}}\,\,\frac{1}{|x|^{1 -\gamma r/b}} \, ,
\end{eqnarray}
with $G = \frac{1}{2 \sqrt{\pi}} \int_{0}^{+\infty} \textrm{d} y y^{\frac{\gamma r/b-3}{2}} e^{-1/y} = \frac{\Gamma(\frac{3-\gamma r/b}{2})}{2\sqrt{\pi}} $. The distribution thus develops a fat tail when $b <0$, as the result of a competition between the distance $\Delta x \sim \sqrt{\frac{T}{|b|}} e^{|b|\tau/\gamma}$ traveled by the bath particle between two resetting events at times $t$ and $t+\tau$, and the probability $\sim e^{-r\tau}$ to observe such a resetting. Note that the first, respectively the second moment of the stationary distribution thus ceases to exist when $b$ is too negative, more precisely when $r\gamma+b \leq 0$, respectively $r\gamma+2b \leq 0$. Finally the (physically relevant) intermediate case is obtained taking $b \to 0^+$. In that case we obtain
\begin{eqnarray}
p_\text{stat}(x|q = 0 ,A=0) =\frac{\gamma r }{4T\sqrt{\pi}} \int_{0}^{\infty} \frac{\textrm{d} y}{\sqrt{y}}\, e^{-\frac{x^2}{y} - \frac{r \gamma}{4T}y} = \sqrt{\frac{r\gamma}{4T}}\, e^{-\sqrt{\frac{\gamma r}{T}} |x| } \, ,
\end{eqnarray}
the known result for the stationary distribution of a free particle under resetting \cite{ma2}. The distribution $p_\text{stat}(x|q = 0 ,A=0)$ is plotted in Fig.~\ref{fig:plot123} for various choices of parameters.\\
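The last identity is also easy to verify numerically, e.g. with the following short sketch (parameter values arbitrary).
\begin{verbatim}
# Sketch: check that (gamma*r/(4*T*sqrt(pi))) *
#   int_0^inf dy y^(-1/2) exp(-x^2/y - gamma*r*y/(4*T))
# equals sqrt(gamma*r/(4*T)) * exp(-sqrt(gamma*r/T)*|x|).
import numpy as np
from scipy.integrate import quad

gamma, r, T = 1.0, 0.7, 1.3
for x in [0.1, 1.0, 3.0]:
    lhs = gamma*r/(4*T*np.sqrt(np.pi))*quad(
        lambda y: np.exp(-x**2/y - gamma*r*y/(4*T))/np.sqrt(y),
        0, np.inf)[0]
    rhs = np.sqrt(gamma*r/(4*T))*np.exp(-np.sqrt(gamma*r/T)*abs(x))
    print(x, lhs, rhs)               # the two columns agree
\end{verbatim}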
The general case $q\neq 0\neq A$ adds various complications and the distribution is not symmetric anymore; see Fig.~\ref{fig:plot45} for some examples. The large $x$ behavior remains qualitatively similar but the tails are now asymmetric ({\it i.e.,} the prefactors depend on the direction). For example, in the case $b<0$ we have in general
\begin{eqnarray} \label{Eq:FatTail2}
p_\text{stat}(x|q ,A) \sim_{ x \to \pm \infty} G_\pm \frac{\gamma r/|b|}{(2T/|b|)^{\frac{\gamma r}{2b}}} \,\,\frac{1}{|x|^{1 -\gamma r/b}} \, ,
\end{eqnarray}
with $G_{\pm} = \frac{1}{2 \sqrt{\pi}} \int_{0}^{+\infty} dy y^{\frac{\gamma r/b-3}{2}} e^{- \frac{1}{y}(1 \mp (A - \frac{q\lambda}{b}) \sqrt{\frac{-b y}{2T}})^2} $. If $0<q \lambda -Ab$ (which is the force felt by the bath particle just after resetting), then $G_+ >G_-$ as it is more likely for the bath particle to quickly diverge (under the effect of the repulsive parabolic well) towards the region $x>0$.
\begin{figure}
\centerline{\includegraphics[width=16cm]{plot123.pdf}}
\caption{Plot of the stationary distribution for the bath particles when both the probes and the resetting are at the origin $q=A=0$ in linear scales (left), logarithmic scale (middle) and log-log scale (right), for all parameters set to $1$ and $b=1$ (blue line), $b=0$ (black dashed line) and $b=-1$ (red line). For $b>0$ the large $x$ behavior is Gaussian-like, while for $b<0$ the distribution exhibits a fat-tail. In the case $b=0$ the decay is simply exponential.}
\label{fig:plot123}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=5.5cm]{plot4bis} \quad \includegraphics[width=5.5cm]{plot5bis}}
\caption{Plot of the stationary distribution for the bath particles for $q=2$ and $A=1$. All parameters of the models are set to unity except $\lambda=1$ on the left, and $\lambda=-1$ (repulsive probe-bath interaction) on the right; $b=1$ (blue line), $b=0$ (black dashed line) and $b=-1$ (red line), with also $b=2$ on the right (blue dotted line).}
\label{fig:plot45}
\end{figure}
\subsection{Exact reduced probe dynamics in the large $N$ limit}
\subsubsection{Gaussian case}
We first discuss the case where $r\gamma + 2b >0$, $b = a_2 + a_3$. We rescale
\begin{eqnarray} \label{Eq:ScalingLimit}
a_3 = a_3'/N \quad , \quad a_2 = a_2'/N \quad , \quad t = N t' \quad , \quad r = r'/N \quad , \quad M = N^2 M' \, ,
\end{eqnarray}
for which $r'\,t' = r\,t$: the mean number of resettings of each bath particle is kept constant in the rescaled time. We obtain the reduced probe dynamics exactly in that scaling limit. Our result is that $Q_{t'} := \lim_{N \to \infty} q_{N t'}$ follows the stochastic equation of motion (assuming $g_t\equiv0$ in \eqref{pd})
\begin{eqnarray}\label{redd}
M' \ddot{Q}_t + \int_0^{+\infty}\textrm{d}{\tau}\, K_{r'}'(\tau) \dot{Q}_{t-\tau} = F_\text{tot}(Q_t) + \zeta_t \, ,
\end{eqnarray}
where for each $r\geq 0$ ($r=0$ being the equilibrium case, also included in this result),
\begin{itemize}
\item
The total force acting on the particle is
\begin{eqnarray} \label{Eq:IntroFtot}
&& F_\text{tot}(Q) = - a_1 Q -\frac{a_2' a_3'}{a_2' +a_3'} Q - \frac{a_3'^2}{a_2'+a_3'} \frac{\gamma r'}{\gamma r' + a_2'+a_3'} \left( Q - \frac{a_2'+a_3' }{a_3' } A \right) \, .
\end{eqnarray}
The first and second terms on the right-hand side can be identified as the systematic (thermodynamic) force exerted by the bath particles on the probe, equal to $-\partial_Q {\cal F}[T,Q]$ with ${\cal F}[T,Q]$ the equilibrium free-energy (per particle) of the bath with the probe fixed at position $Q$: ${\cal F}[T,Q]= \frac{a_3\,a_2}{2 (a_2+a_3)} Q^2 - \frac{T}{2} \log( \frac{2 \pi T}{a_2+a_3})$. The third term is only present in the out-of-equilibrium $r>0$ case and cannot simply be related to a free-energy function; see \cite{prl}. If we had $a_2<0$ with still $ a_2 + a_3 >0$, so that for small $a_1>0$ the origin would be an unstable fixed point for the probe dynamics $M' \ddot{Q} = F_\text{tot}(Q_t)$ in the case $r=0$ (equilibrium), a resetting process at the origin (third term with $A=0$) could stabilize it. If $A \neq 0$ the equilibrium position $Q^*$ defined by $F_\text{tot}(Q^*) = 0$ is shifted from the origin.
\item
The friction kernel is
\begin{eqnarray}\label{frk}
K_{r'}'(t) = \frac{a_3'^2}{a_2'+a_3' + \gamma r'} e^{- (r' + \frac{a_2'+a_3'}{\gamma}) t},\qquad t\geq 0
\end{eqnarray}
with the resetting parameter $r'$ lowering the overall amplitude of the friction kernel and the correlation time. That is an example of out-of-equilibrium shear thinning, as the probe moves more easily through the medium when the resetting rate $r$ grows.
\item
The noise $\zeta_t$ is Gaussian with mean zero and covariance
\begin{eqnarray} \label{Eq:Intro:NoiseCorrelations}
\langle \zeta_t\, \zeta_{t'} \rangle = T\, \frac{a_3'^2}{a_2' +a_3' +\gamma r'/2} e^{- (r' + \frac{a_2'+a_3'}{\gamma})|t-t'|} \label{rdf} \, .
\end{eqnarray}
There is the equilibrium decorrelation time $\gamma/(a_2' + a_3')$ to which is added the time-scale of the resetting, and the noise becomes more white for larger resetting rate $r$. The amplitude of the noise suggests the introduction of an effective temperature, as we show in Section \ref{eft}; a minimal numerical sketch of this reduced dynamics is given after this list.
\end{itemize}
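Since the friction kernel \eqref{frk} is a single exponential and the noise \eqref{Eq:Intro:NoiseCorrelations} is exponentially correlated, the reduced dynamics \eqref{redd} admits a standard Markovian embedding: the combination $u_t=\zeta_t-\int_0^{\infty}\textrm{d}\tau\, K'_{r'}(\tau)\dot{Q}_{t-\tau}$ obeys a closed linear stochastic equation. The following minimal Python sketch (primes dropped, illustrative parameter values) exploits this to simulate \eqref{redd} without storing the probe history.
\begin{verbatim}
# Sketch: simulation of the reduced probe dynamics via the auxiliary
# variable u_t = zeta_t - int_0^inf K(tau) Qdot(t - tau) dtau, with
#   du = -nu*u*dt - K0*Qdot*dt + sqrt(2*nu*Teff*K0)*dW.
import numpy as np

rng = np.random.default_rng(2)
a1, a2, a3, gamma, T, r, A, M = 1.0, 0.5, 0.5, 1.0, 1.0, 0.2, 0.0, 1.0
b = a2 + a3
nu = r + b/gamma                       # kernel decay rate
K0 = a3**2/(b + gamma*r)               # kernel amplitude K(0)
Teff = T*(b + gamma*r)/(b + gamma*r/2) # effective temperature

def Ftot(Q):
    return (-a1*Q - a2*a3/b*Q
            - (a3**2/b)*gamma*r/(gamma*r + b)*(Q - b/a3*A))

dt, Q, P, u = 1e-3, 0.0, 0.0, 0.0      # u starts off-stationary;
for _ in range(500000):                # discard an initial transient
    u += -nu*u*dt - K0*(P/M)*dt + np.sqrt(2*nu*Teff*K0*dt)*rng.normal()
    P += (Ftot(Q) + u)*dt
    Q += (P/M)*dt
# the histogram of Q relaxes to exp(-V_eff(Q)/T_eff)
\end{verbatim}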
\subsubsection{NonGaussian cases}\label{ngc}
When $a_2 + a_3 =b<0$ and $ -b< \gamma r < -2b $, the fat tail \eqref{Eq:FatTail} in the distribution of bath particles renders the second moment of $x$ divergent while the first moment remains finite\footnote{ The transition point $\gamma r =-2b$ requires some special care and is not considered here.}. The sum of noises $\zeta_t = \lambda \sum_{i=1}^N \eta_t^i$ can still be rescaled however to converge to a nonGaussian noise whose one-time distribution is a generalized stable distribution (see e.g. \cite{GCLT}). We now take a scaling limit with the fat-tail exponent
\begin{eqnarray}
\alpha = - \frac{\gamma r}{b}
\end{eqnarray}
of the one-time distribution of $\eta_t^i$ fixed. We choose $a_2 \sim \frac{a_2'}{N^{2/\alpha}}$, $a_3=\lambda \sim \frac{a_3'}{N^{2/\alpha}}$, $r \sim \frac{r'}{N^{2/\alpha}}$, $t \sim t' N^{2/\alpha}$ and $M \sim M' N^{4/\alpha}$. Under that rescaling the systematic part of the reduced dynamics (force and friction) converges to $0$ and we obtain in the limit (assuming $g_t\equiv 0$ in \eqref{pd}) the equation,
\begin{eqnarray}\label{14}
M' \ddot{Q}_t = - a_1 Q_t + \zeta_t \, ,
\end{eqnarray}
with $\zeta_t$ a noise such that its one time distribution is a generalized $\alpha$-stable distribution $S(\alpha , 0 , c , 0)$ with scale factor
\begin{eqnarray} \label{Eq:StableDistc}
c = \left( \frac{ \sqrt{\pi} \Gamma((3+\alpha)/2)}{2 \sin(\frac{\pi \alpha}{2}) \Gamma(\alpha)} \right)^{1/\alpha} \sqrt{ \frac{2 T (a_3')^2}{|a_2'+a_3'|} } \, .
\end{eqnarray}
That means that the characteristic function of the noise is $\langle e^{i \omega \zeta_t } \rangle =e^{ - c^{\alpha} |\omega|^{\alpha}} $. The result \eqref{14} remains true even when $\gamma r \leq -b$ and the decomposition \eqref{deo} does not make sense (because $\langle x^i_t \rangle = \infty$). Then, the noise is given by $\zeta_t = \lim_N \frac{a_3'}{N^{2/\alpha}} \sum_{i=1}^N x_t^i$ and still has the law $S(\alpha , 0 , c , 0)$. In general there is no known explicit expression for the probability density $p(\zeta)$ of the distribution $S(\alpha , 0 , c , 0)$ beyond its asymptotic decay $p(\zeta) \sim_{|\zeta| \to \infty} \frac{c^{\alpha} \sin(\frac{\pi \alpha}{2} )\Gamma(1+\alpha)}{\pi |\zeta|^{1+\alpha}}$. A notable exception is the case $\alpha=1$ where one obtains the Cauchy distribution with $p(\zeta) = \frac{1}{\pi c(1+\zeta^2/c^2)}$.
The characterization of the full process (in particular its time-correlations) is more difficult. Still, we note that its nature strongly changes at $\alpha = 1$ $(\gamma r = -b)$. When $\alpha < 1$ (i.e., $\gamma r < -b$) the process $\zeta_t$ exhibits power-law distributed jumps (with the same tail exponent $\alpha$). Indeed in that regime a single particle, e.g. the one that is the furthest away from the origin, can contribute to a finite fraction of the noise $\lambda \sum_{i=1}^N x_t^i = \zeta_t$: $\zeta_t$ and $\lambda \text{max}_i | x_t^i|$ are of the same order (i.e. $O(1)$ with the rescaling). Then, upon resetting of the position of that particle at time $t_r$ (which occurs at least with rate $r$), the noise $\zeta_t$ changes by a finite amount $ \zeta_{t_r^+}-\zeta_{t_r^-} = \lambda (A - \text{max}_i | x_{t_r}^i|) = O(1)$, a jump which is power-law distributed with the same exponent $\alpha$ (the distribution of the maximum of power-law distributed random variables is a Fr\'echet distribution displaying the same power-law tail). In the case $\alpha >1$ on the other hand, the noise $\zeta_t$ is continuous at large $N$, as in the Gaussian case. Indeed in that case each noise $\eta^i_t$ contributes a vanishing fraction of the noise $\zeta_t$ in the large $N$ limit (for example the contribution of the maximum is now $O(N^{\frac{1-\alpha}{\alpha}})$) and the jumps in the individual noises $\eta^i_t$ (still stemming from the resetting) are washed out in the total noise felt by the probe. A plot of some typical realizations of the noise in the cases $\alpha<1$ and $\alpha>1$ is given in Fig.~\ref{fig:plot6}.
\begin{figure}
\centerline{\includegraphics[width=13cm]{plot6bis}}
\caption{Simulation of the (rescaled) noise felt by the probe with here $A=q=0$ fixed, all the (rescaled) parameters set to unity except $a_2+a_3$ that is fixed to ensure the fat-tail exponent $\alpha = 0.5$ (left) or $\alpha = 1.5$ (right). Here we use $N=10^6$ particles. In both cases the one-time distribution of the noise is non-Gaussian with a fat-tail $p(\zeta)\sim \zeta^{-1-\alpha}$. In the case $\alpha >1$ the noise is continuous. In the case $\alpha<1$ it exhibits jumps that are power-law distributed with the same tail exponent $\alpha$.}
\label{fig:plot6}
\end{figure}
\subsection{Effective temperature and relaxation in the Gaussian case}\label{eft}
We go back to the case where $2(a_2+a_3) + \gamma r>0$.\\
A salient signature of the nonequilibrium nature of the induced probe dynamics is that the fluctuation--dissipation theorem is broken by the resetting. Taking the ratio of the noise correlations and of the friction kernel leads to the definition of an effective temperature
\begin{eqnarray} \label{Eq:IntroTeff}
T_\text{eff} && := \frac{\langle \zeta(t)\, \zeta(t-\tau) \rangle}{K_{r'}'(\tau) } = T \frac{a_2 +a_3 +\gamma r}{a_2 +a_3 +\gamma r/2} > T \, ,
\end{eqnarray}
that is larger than the temperature of the external equilibrium bath whenever $r>0$. We can also define the effective potential felt by the probe
\begin{eqnarray} \label{Eq:Intro:Veff}
V_\text{eff}(Q) := \frac{1}{2}\left( a_1 +\frac{a_2' a_3'}{a_2' +a_3'} \right) Q^2 + \frac{1}{2} \frac{a_3'^2}{a_2'+a_3'} \frac{\gamma r'}{\gamma r' + a_2'+a_3'} \left( Q - \frac{a_2'+a_3' }{a_3' } A \right)^2 \, ,
\end{eqnarray}
for which $F_\text{tot}(Q) = -\partial_Q V_\text{eff}(Q)$. As a consequence the probe reaches the stationary distribution $p(Q) \sim e^{-\frac{V_\text{eff}(Q)}{T_\text{eff}} }$. Nothing fundamentally distinguishes this probe dynamics from one obtained in the absence of resetting, unless one also has access to, {\it e.g.}, the real temperature.\\
We can also ask what is the most efficient (more precisely, the fastest) way to reach a given stationary distribution for the probe if all parameters are fixed except the resetting rate $r$ and the global confining potential parameters $a_1$ and $a_2$. We thus take $A=0$ and keep constant $\gamma$, $T$ and the length scale $\ell$ defined by
\begin{eqnarray} \label{Eq:ConstraintTherm}
\ell^2 = \frac{T_\text{eff}(r,a_i)}{\kappa_\text{eff}(r,a_i)} \, ,
\end{eqnarray}
with $T_\text{eff}$ given in \eqref{Eq:IntroTeff} and $\kappa_\text{eff}=a_1 +\frac{a_2' a_3'}{a_2' +a_3'} + \frac{a_3'^2}{a_2'+a_3'} \frac{\gamma r'}{\gamma r' + a_2'+a_3'}$ the stiffness of the effective confining potential \eqref{Eq:Intro:Veff}. Here $\ell$ controls the width of the stationary distribution of the probe $p(Q) \sim e^{-Q^2/(2\ell^2)}$. To evaluate the relaxation time of the probe we also need the effective friction coefficient defined by (dropping the primes everywhere now),
\begin{eqnarray}
\gamma(r,a_i) = \int_0^{+\infty}\textrm{d}\tau\,K_{r}(\tau) =\gamma \frac{a_3^2}{(a_2+a_3+\gamma r)^2} \, .
\end{eqnarray}
In a Markovian approximation of the reduced dynamics \eqref{redd} the relaxation time of the probe is then given by
\begin{eqnarray}
\tau(r,a_i) = \frac{\gamma(r,a_i)}{\kappa_\text{eff}(r,a_i)} = \frac{\gamma \ell^2}{T} \frac{a_3^2 (a_2+a_3+\gamma r /2)}{(a_2+a_3+\gamma r)^3} \, ,
\end{eqnarray}
and one should remember that there is a relation \eqref{Eq:ConstraintTherm} linking $a_1$, $a_2$ and $r$ in this setting. Note that at equilibrium $r=0$, $\tau = \frac{\gamma \ell^2}{T} \frac{a_3^2}{(a_2+a_3)^2}$ is independent of the stiffness $a_1$ of the external parabolic well. Out of equilibrium one gets a dependence on the three `confining' parameters $(r,a_1,a_2)$ and it is now possible to tune the relaxation time of the probe. In particular, keeping $a_2$ constant and tuning $a_1$ as a function of $r$ so as to satisfy the constraint \eqref{Eq:ConstraintTherm}, it is clear that one gets a relaxation time that strictly decreases with $r$, and for $r \to \infty$, $\tau(r,a_i) \simeq \frac{\gamma \ell^2}{2T} \frac{a_3^2}{(\gamma r)^2} $. Keeping $a_1$ constant and tuning $a_2$ to satisfy \eqref{Eq:ConstraintTherm} leads to a more complicated dependence of $\tau(r,a_i)$ on $r$ but we always find that the relaxation time monotonically decreases with $r$.
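The tuning discussed above is easily made explicit: the short sketch below (illustrative parameter values, primes dropped) fixes $\ell^2$, solves the constraint \eqref{Eq:ConstraintTherm} for $a_1$ at fixed $a_2$, and evaluates $\tau(r,a_i)$.
\begin{verbatim}
# Sketch: relaxation time tau(r) at fixed l^2 = T_eff/kappa_eff,
# tuning a_1 with a_2 held constant.
gamma, T, a2, a3, l2 = 1.0, 1.0, 0.5, 0.5, 1.0
b = a2 + a3
for r in [0.0, 0.5, 1.0, 2.0, 5.0]:
    Teff = T*(b + gamma*r)/(b + gamma*r/2)
    keff = Teff/l2                   # imposed by the constraint
    a1 = keff - a2*a3/b - (a3**2/b)*gamma*r/(gamma*r + b)
    fric = gamma*a3**2/(b + gamma*r)**2
    print(r, round(a1, 4), round(fric/keff, 6))  # tau decreases with r
\end{verbatim}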
\section{Single particle bath case} \label{Sec:SingleParticleBathCase}
The computations that lead to the previously announced results all rest on calculations for a single bath particle interacting with the probe, on which we focus in this section.
\subsection{Solution for the bath particle between resetting times}
For $N=1$ (one bath-particle with position $x_t$) we have the coupled equations of motion \eqref{odm}--\eqref{pd} that we rewrite here for clarity (with $g_t\equiv 0$), in terms of the parameters $b=a_2+a_3$, $\kappa=a_1+a_3$ and $\lambda=a_3 $,
\begin{eqnarray} \label{Eq:1ParticleAndProbeLinearDynamics}
&& M \ddot{q}_t = - \kappa q_t + \lambda x_t \nonumber \\
&& \gamma \dot{x}_t = - b x_t + \lambda q_t + \sqrt{2 \gamma T} \xi_t + \sum_{k} \gamma \delta(t-t_k) (A - x_t) \label{sbp} \ .
\end{eqnarray}
We remember that $\xi_t$ is a standard white noise $\langle \xi_{t} \xi_{t'} \rangle = \delta(t-t')$, and the $t_k$ are positive random variables which are just the times at which an exponential clock with rate $r>0$ rings: $t_0$ has a pdf $p(t_0) = r e^{- r t_0}$ and the waiting times $\Delta t_i =t_i -t_{i-1}$, $i \geq 1$ are also iid with an exponential distribution $p(\Delta t_i) = r e^{- r \Delta t_i}$. We take the resetting to the position $A$.
When the last resetting happened at time $s<t$, the solution of \eqref{sbp} is, given the probe trajectory $\{q_{t'}\}_s^t$,
\begin{eqnarray} \label{Eq:Sol:OrnsteinUhlenbeck}
x_t = A e^{-\frac{t-s}{\gamma_b}}+ \int_{s}^{t} (\frac{q_{t'}}{\gamma_{\lambda}} + \sqrt{D}\xi_{t'} ) e^{-\frac{t-t'}{\gamma_b}} dt' \, ,
\end{eqnarray}
where we introduced the time scales $\gamma_{b} = \gamma/b$ and $\gamma_{\lambda} = \gamma/\lambda$ and the diffusion coefficient $D = 2T/\gamma$.
Hence $(x_{t'})_{t' \in [s,t]}$ is a Gaussian stochastic process fully characterized by its first two cumulants
\begin{eqnarray} \label{Eq:DefPhi1Phi2}
&& \phi_1(s,A;t) := \langle x_t |\{q_{t'}\}_s^t \rangle = A e^{-\frac{t-s}{\gamma_b}}+ \int_{s}^{t} \frac{q_{t'}}{\gamma_{\lambda}}e^{-\frac{t-t'}{\gamma_b}} dt' \, , \\
&& \phi_2(s;t_1,t_2) := \langle x_{t_1} x_{t_2} \rangle_c =_{t_1 \leq t_2} D \int_{s}^{t_1} e^{- \frac{t_1+t_2 - 2t'}{\gamma_b}} dt' = \frac{D \gamma_b}{2} \left( e^{- \frac{|t_2-t_1|}{\gamma_b}} - e^{- \frac{t_1+t_2 - 2 s}{\gamma_b}} \right) \, , \nonumber
\end{eqnarray}
all assuming that the last resetting was at time $s<t_1,t_2,t$. The $\phi_1$ and $\phi_2$ will be the building blocks of our computations, and the notation $\{q_{t'}\}_s^t$ emphasizes the dependence on the probe trajectory $q_{t'}$ for $t' \in [s,t]$.
\subsection{Stationary distribution of the bath particle}
Here we suppose that $q_t=q$ is fixed. To get the distribution of $x_t$, note that conditioned on the last resetting having occurred at time $s$, this distribution is Gaussian with the moments \eqref{Eq:DefPhi1Phi2}. Integrating over the distribution of $\tau =t-s$ (i.e. the time since the last resetting) we get
\begin{eqnarray}
p_\text{stat}(x|q) = \int_{0}^{+\infty} r d\tau e^{-r\tau} \frac{1}{\sqrt{2 \pi \phi_2(t-\tau;t,t)}} e^{- \frac{(x-\phi_1(t-\tau,A;t))^2}{2 \phi_2(t-\tau;t,t)}} \, .
\end{eqnarray}
That implies \eqref{Eq:ProbabilityDistribution}, the stationary distribution of an Ornstein-Uhlenbeck process with resetting as already studied in \cite{pal}.
We next perform the change of variables $y = 2T (1- e^{-2 \tau b/\gamma})/b \geq 0$. We get for \eqref{Eq:ProbabilityDistribution},
\begin{equation}
p_\text{stat}(x|q = 0 ,A=0) = \frac{r\, \gamma}{4T}\,\int_{0}^{C_b} \textrm{d} y \,\frac{(1- \frac{b y}{2T})^{-1+ \frac{\gamma r}{2b}}}{ \sqrt{\pi y}} \exp\{ - \frac{\left(x- \frac{q \lambda}{b} -(A-\frac{q \lambda}{b})\sqrt{1-\frac{b y}{2T}} \right)^2}{y} \} \, ,
\end{equation}
with $C_b = 2T/b$ for $b>0$ and $C_b= +\infty$ for $b <0$. In both cases the integral can be expressed in terms of hypergeometric functions. Moreover, in the large $x$ limit the integral is dominated by large $y$. If $b>0$, then $y$ is bounded from above and one gets a Gaussian decay. If $b <0$, then, rescaling $y \to x^2 y$, we get \eqref{Eq:FatTail} and \eqref{Eq:FatTail2}.
To get the non-analyticity of the stationary distribution around $x=A$, note that the distribution $P(x)=p_\text{stat}(x|A,q)$ must solve the stationary Fokker-Planck equation (this is directly obtained from the dynamics \eqref{Eq:1ParticleAndProbeLinearDynamics})
\begin{eqnarray}
0 = -\partial_x \left(\frac{\lambda q - b x}{\gamma} P(x) \right) + \frac{D}{2}\partial_x^2 P(x) - r P(x) + r \delta(x-A) \, .
\end{eqnarray}
The Dirac distribution must be compensated by a jump in the first derivative at $x=A$: introducing $c_\pm = \lim_{x \to A^\pm} \partial_x P(x) $ we obtain $\frac{D}{2} \partial_x^2 P = \frac{D}{2} (c_+-c_-) \delta(x-A) + \text{more regular terms}$. Hence $c_+-c_- = -\frac{2 r}{D} = -\frac{r \gamma}{T}$.
\subsection{Effective force and friction}
From here we can use \eqref{est} to compute
\begin{eqnarray} \label{Eq:FirstMomentExact}
\langle x_t | \{q_s\}_{-\infty}^t \rangle = \int_0^{+\infty} r \textrm{d} \tau e^{-r \tau} \phi_1(t-\tau, \{q_{t'}\}_{t-\tau}^t ; t) \, .
\end{eqnarray}
In this simple linear case ``linear response is exact'' and we can compute exactly the first moment in the steady state.
After some calculation we find
\begin{eqnarray} \label{Eq:ExactResulKernel}
&& \lambda \langle x_t | \{q_s\}_{-\infty}^t \rangle = F(q_t) - \int_{0}^{\infty} K(\tau)\, \dot{q}_{t-\tau}\, \textrm{d}\tau \nonumber \\
&& F(q) = \lambda \left( A\, \frac{r}{r_b} + \frac{\gamma_b}{\gamma_{\lambda}} \left( 1 - \frac{r}{r_b} \right) q \right) \nonumber \\
&& K(t) = \lambda \frac{\gamma_b}{\gamma_{\lambda}} \left( 1 - \frac{r}{r_b} \right) e^{- r_b t} \ ,
\end{eqnarray}
with $r_b = r + 1/\gamma_b$. This formula is valid when $r_b > 0$; otherwise the systematic force is not defined (this is due to the fat tail in the distribution of $x$, see \eqref{Eq:FatTail}). Note that this formula is also valid for $r =0$ (equilibrium case). We see here that the friction term is decreased by the nonequilibrium driving. There are two mechanisms responsible for this decrease: (i) a shorter memory time $\gamma_b \to \frac{\gamma_b}{1+r \gamma_b}$; (ii) a smaller overall amplitude $\lambda \frac{\gamma_b}{\gamma_{\lambda}} \to \lambda \frac{\gamma_b}{\gamma_{\lambda}} \left( 1 - \frac{r}{r_b} \right)$.
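The static limit of \eqref{Eq:ExactResulKernel} offers a simple numerical check: for a probe frozen at $q$, the stationary mean position of the bath particle should equal $F(q)/\lambda = A\, r/r_b + \frac{\gamma_b}{\gamma_{\lambda}}(1-r/r_b)\, q$. The sketch below (arbitrary parameter values) verifies this.
\begin{verbatim}
# Sketch: check the stationary mean of the resetting dynamics
# against the static limit of the exact formula.
import numpy as np

rng = np.random.default_rng(3)
b, lam, gamma, T, r, q, A = 1.0, 0.8, 1.0, 1.0, 0.5, 2.0, 1.0
gb, gl = gamma/b, gamma/lam
rb = r + 1.0/gb
dt, nsteps, burn = 1e-3, 2*10**6, 10**5

x, acc, cnt = A, 0.0, 0
for i in range(nsteps):
    x += (-b*x + lam*q)/gamma*dt + np.sqrt(2*T*dt/gamma)*rng.normal()
    if rng.random() < r*dt:
        x = A
    if i > burn:
        acc += x; cnt += 1
print("simulated:", acc/cnt)
print("predicted:", A*r/rb + (gb/gl)*(1 - r/rb)*q)
\end{verbatim}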
\subsection{The one-particle noise}\label{var}
For the joint distribution of $x_{t_1},x_t$ for $t_1 \leq t$ we need to consider both the time $\tau$ elapsed since the last resetting before $t$ and the time $\tau_1$ elapsed since the last resetting before $t_1$. In general these are not correlated, except if $\tau \geq t-t_1$, in which case $\tau_1 = t_1 - (t- \tau) = t_1 - t + \tau$. We thus have to distinguish two cases which lead to two terms in the correlation function,
\begin{eqnarray}
\langle x_{t_1} x_{t} | \{q_s\}_{-\infty}^t\rangle = && \int_{0}^{t-t_1} r e^{-r\tau} \textrm{d}\tau\int_{0}^{+\infty} re^{-r\tau_1} d\tau_1\phi_1(t_1-\tau_1,A ; t_1) \phi_1(t-\tau,A ; t) \nonumber \\
&& + \int_{t-t_1}^{+\infty} r \textrm{d}\tau e^{-r\tau} \left[ \phi_1(t-\tau,A ; t_1) \phi_1(t-\tau,A ; t) + \phi_2(t-\tau; t_1,t) \right] \, , \nonumber
\end{eqnarray}
which leads to
\begin{eqnarray}
\langle x_{t_1} x_{t} | \{q_s\}_{-\infty}^t \rangle = && \int_{0}^{t-t_1} re^{-r\tau} \textrm{d}\tau \phi_1(t-\tau,A ; t) \langle x_{t_1} | \{q_s\}_{-\infty}^{t_1} \rangle \nonumber \\
&& + \int_{t-t_1}^{+\infty} r \textrm{d}\tau e^{-r\tau} \left[ \phi_1(t-\tau,A ; t_1) \phi_1(t-\tau,A ; t) + \phi_2(t-\tau; t_1,t) \right] \, . \nonumber
\end{eqnarray}
Note that the first term can be rewritten as
\begin{eqnarray}
&& \int_{0}^{t-t_1} re^{-r\tau} \textrm{d}\tau \phi_1(t-\tau,A ; t) \langle x_{t_1} | \{q_s\}_{-\infty}^{t_1} \rangle \nonumber \\
&& = \langle x_{t} |\{q_s\}_{-\infty}^{t_1} \rangle \langle x_{t_1} |\{q_s\}_{-\infty}^{t_1} \rangle - \int_{t-t_1}^{+\infty}re^{-r\tau} \textrm{d}\tau \phi_1(t-\tau,A ; t) \, \langle x_{t_1} | \{q_s\}_{-\infty}^{t_1} \rangle \, . \nonumber
\end{eqnarray}
Hence the connected moment is
\begin{eqnarray}
\langle x_{t_1} x_{t} | \{q_s\}_{-\infty}^t \rangle_c = && - \int_{t-t_1}^{+\infty}re^{-r\tau} \textrm{d}\tau \phi_1(t-\tau,A ; t) \, \langle x_{t_1} |\{q_s\}_{-\infty}^{t_1} \rangle \nonumber \\
&& + \int_{t-t_1}^{+\infty} r \textrm{d}\tau e^{-r\tau} \left[ \phi_1(t-\tau,A ; t_1) \phi_1(t-\tau,A ; t) + \phi_2(t-\tau; t_1,t) \right] \, , \nonumber
\end{eqnarray}
and we obtain the decomposition
\begin{eqnarray} \label{Eq:DefQ1Q2}
&& \langle x_{t_1} x_{t} | \{q_s\}_{-\infty}^t \rangle_c = Q_1(t-t_1) + Q_2(t_1,t|\{q_s\}_{-\infty}^{t_1}) \\
&& Q_1(t-t_1) = \int_{t-t_1}^{+\infty} r \textrm{d}\tau e^{-r\tau} \phi_2(t-\tau ; t_1,t) = \frac{D \gamma_b}{2} \left( 1- \frac{r}{r_b'} \right) e^{- r_b(t-t_1)} \nonumber \\
&& Q_2(t_1,t | \{q_s\}_{-\infty}^{t_1}) = \int_{t-t_1}^{+\infty} r \textrm{d}\tau e^{-r\tau} \phi_1(t-\tau,A ; t) \left( \phi_1(t-\tau,A ; t_1) - \langle x_{t_1} | \{q_{-\infty}^{t_1}\} \rangle \right) \, , \nonumber
\end{eqnarray}
with $r_b'=r+2/\gamma_b$. That result is only valid in the regime $r_b' >0$. For $r_b' \leq 0$ the second moment does not exist because of the fat tail \eqref{Eq:FatTail} in the distribution of $x_t$. The correlations are generated by two mechanisms: (i) the simple probe-independent correlations that originate from the Gaussian correlations of the free process when there is no resetting time in between $t_1$ and $t$ and that are taken into account in $Q_1(t_1,t)$; (ii) the correlations induced by the correlations between the last resetting times before $t$ and $t_1$ (if $t-\tau < t_1$ then $\tau_1 = t_1- t + \tau$) and that are contained in $Q_2(t_1,t | \{ q_{-\infty}^{t_1} \}) $ and explicitly depend on the probe position before $t_1$.
\smallskip
While an explicit expression for $Q_1$ was already given above, getting an explicit expression for $Q_2$ is more tedious. In the large $N$ scaling limit presented in Section \ref{Sec:ManyParticleBath} only $Q_1$ contributes. At finite $N$, $Q_2$ also contributes. In that case an explicit expression for $Q_2$ can be obtained in the limit of a slowly moving probe: taking $q_s = q_t$ in \eqref{Eq:DefQ1Q2} we get
\begin{eqnarray}
Q_2(t_1,t | q_t) && = e^{-r(t-t_1)} \int_{0}^{+\infty} \int_{0}^{+\infty} r^2 d\tau d\tau' e^{-r(\tau +\tau')} \left( A e^{- \frac{t-t_1 + \tau}{\gamma_b}} + q_t \frac{\gamma_b}{\gamma_{\lambda}} \left(1 - e^{-\frac{t-t_1 + \tau}{\gamma_b}} \right)\right) \nonumber \\
&& \left( A\left( e^{-r \tau} - e^{-r \tau'} \right) + \frac{\gamma_b}{\gamma_{\lambda}} q_t \left( e^{-\frac{\min(\tau,\tau')}{\gamma_b}} -e^{-\frac{\max(\tau,\tau')}{\gamma_b}} \right) \right) \nonumber \, .
\end{eqnarray}
After some calculations we find
\begin{eqnarray} \label{Eq:ResultSecondCumulantConstantProbe}
Q_2(t_1,t |q_t) && = e^{-r_b(t-t_1)} \left( A - q_t \frac{\gamma_b}{\gamma_{\lambda}} \right) \left(A \frac{r \gamma _b}{4 r^2 \gamma _b^2+6 r \gamma _b+2} + q_t \frac{\gamma_b}{\gamma_{\lambda}} \frac{r \gamma _b}{r^2 \gamma _b^2+3 r \gamma _b+2} \right) \nonumber \\
&& + 2 e^{-r(t-t_1)} \left( \frac{\gamma_b}{\gamma_{\lambda}} q_t \right)^2 \left( \frac{1}{-2 r \gamma _b-1}+\frac{1}{r \gamma _b+1} \right) \, .
\end{eqnarray}
\section{Many-particles bath} \label{Sec:ManyParticleBath}
\subsection{Gaussian case}
In the case of a bath made of $N \gg 1$ particles, we consider the reduced equation for the probe dynamics \eqref{deo2} under the scaling limit \eqref{Eq:ScalingLimit} that we recall here:
\begin{eqnarray} \label{Eq:StartingPointBath}
&& M \ddot{q_t} = -(a_1 +N a_3) q_t + a_3 N \langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle + a_3 \sum_{i=1}^N \eta_t^i \\
&& a_3 = a_3'/N \quad , \quad a_2 = a_2'/N \quad , \quad t = N t' \quad , \quad r = r'/N \quad , \quad M = N^2 M' \quad , \quad Q_{t'}= q_{Nt'} \, . \nonumber
\end{eqnarray}
Under this limit and using \eqref{Eq:FirstMomentExact} and \eqref{Eq:ExactResulKernel} (assuming $a_2'+a_3'+\gamma r'>0$) it is immediate to see that the first two terms on the right hand side of \eqref{Eq:StartingPointBath} converge as
\begin{eqnarray}
\lim\limits_{N \to \infty} -(a_1 +N a_3) q_t + a_3 N \langle x_t^i | \{q_{t'}\}^t_{-\infty}\rangle = F_\text{tot}(Q_{t'}) - \int_{0}^{+\infty} \textrm{d}\tau K'_{r'}(\tau) \dot{Q}_{t'-\tau} \, ,
\end{eqnarray}
where $F_\text{tot}$ and the kernel $K'_{r'}$ were given in \eqref{Eq:IntroFtot} and \eqref{frk}. To evaluate the limit of the noise term $\sum_{i=1}^{N} \eta^i_t $, note that the individual noises are effectively uncorrelated when conditioned on a given probe history. The sum thus converges to a Gaussian noise:
\begin{eqnarray}
\lim_{N \to \infty} \sum_{i=1}^{N} \eta^i_{N t'} = \zeta(t') \, ,
\end{eqnarray}
with $\zeta(t)$ a Gaussian noise with average $0$ and two-times correlation function given by (assuming $a_2'+a_3'+\gamma r'/2>0$), for $t_1' < t'$,
\begin{eqnarray}
\langle \zeta(t') \zeta(t_1') \rangle &=& \lim_{N \to \infty} N \lambda^2 \langle x_{Nt_1'} x_{Nt'} |\{q_{t'}\}_{-\infty}^{Nt'} \rangle \nonumber \\
& = & \lim_{N \to \infty} N \lambda^2 \left( Q_1(N(t'-t_1')) + Q_2(Nt_1',Nt'| \{q_{-\infty}^{Nt_1'} \}) \right) \, ,
\end{eqnarray}
where we have used the decomposition \eqref{Eq:DefQ1Q2}. It is immediate to see that the first term converges as
\begin{eqnarray}
\lim_{N \to \infty} N \lambda^2 Q_1(N(t'-t_1')) = T_\text{eff} \,K'_{r'}(t'-t_1') \, ,
\end{eqnarray}
in terms of the kernel \eqref{frk} and the effective temperature \eqref{Eq:IntroTeff}. On the other hand, it can be seen directly from the expression of $Q_2$ given in \eqref{Eq:DefQ1Q2} and of $\phi_1$ given in \eqref{Eq:DefPhi1Phi2} that the other contribution to the noise correlation becomes very small: $ N \lambda^2 Q_2(Nt_1',Nt'| \{q_{s}\}_{-\infty}^{Nt_1'}) = O(1/N^3)$. That follows from the fact that $\phi_1 = O(1/N)$ in this scaling limit. Hence $\langle \zeta(t') \zeta(t_1') \rangle = T_\text{eff} K'_{r'}(t'-t_1')$, reproducing the result given in \eqref{Eq:Intro:NoiseCorrelations}. The fact that $Q_2$ does not contribute in this limit explains why we can obtain an exact result here without needing a slow probe approximation.
\medskip
{\bf Remark} Note that the effective temperature interpretation only holds in the large $N$ limit. For small $N$ the noise is obviously nonGaussian but it also de-correlates on two different time scales (see \eqref{Eq:DefQ1Q2} and \eqref{Eq:ResultSecondCumulantConstantProbe}), while the friction kernel always (for finite $N$ as well) only involves the unique time-scale $1/(r+1/\gamma_b)$.
\subsection{NonGaussian case}
We now discuss the rescaling to be performed in the nonGaussian case $\gamma r \leq -2b$. In general the sum of $N$ iid random variables $y_i$ distributed with a pdf exhibiting a fat tail $p(y) \sim_{y \to \pm \infty} |y|^{-1-\alpha}$ with $\alpha \leq 2$ converges to a generalized stable distribution if the sum is rescaled by $N^{1/\alpha}$ (also subtracting the mean if $\alpha \geq 1$). Here the noise felt by the probe is $\zeta_t = \lambda \sum_{i=1}^N (x^i_t - \langle x^i_t | \{q_{t'}\}_{-\infty}^t \rangle)$ (without the average value subtracted in the case $\gamma r \leq -b$), and the exponent of the fat tail in the distribution of $x^i_t$ is $\alpha = - \frac{\gamma r}{b} $. Taking into account the prefactor in front of the tail (see \eqref{Eq:FatTail}) shows that $x^i_t$ is of order $1/\sqrt{|b|}$ (the other constants being fixed). Hence we must take
\begin{eqnarray}
\frac{\lambda}{\sqrt{|b|}} \sim \frac{1}{N^{1/\alpha}} \, ,
\end{eqnarray}
which makes us choose $a_2 \sim \frac{a_2'}{N^{2/\alpha}}$, $a_3 \sim \frac{a_3'}{N^{2/\alpha}}$, $r \sim \frac{r'}{N^{2/\alpha}}$ (to conserve the value of the tail exponent), $t \sim t' N^{2/\alpha} $ (to keep $rt$ finite) and $M \sim M' N^{4/\alpha}$ (to keep $M \ddot{q}_t$ finite) as in Section \ref{ngc}.
In order to fully characterize the one-point distribution of $\zeta_t$ using a generalized central limit theorem (GCLT, see \cite{GCLT}), we also need the prefactors of the tails of the distribution of bath particles given the probe history $p(x^i_t | \{ q_{t'} \}_{-\infty}^t )$ (which is not stationary). In principle these prefactors could depend on the full probe history but in the scaling limit considered here they do not depend on $q_t$. That can be seen directly from the expression of these prefactors that we gave in the case of a constant probe position: from \eqref{Eq:FatTail2} we obtain that the random variable $y_i = N^{1/\alpha} \lambda x^i_t $ (with the average value subtracted in the case $\alpha >1$) has a symmetric fat tail in the $N \to \infty$ limit: $p(y) \sim C/|y|^{1+\alpha} $ with
\begin{eqnarray}
C = \lim_{N \to \infty} \frac{1}{N^{1/\alpha} |\lambda|} G_{\pm} \frac{\gamma r/|b|}{(2T/|b|)^{\gamma r/(2b)}} (\lambda N^{1/\alpha})^{1+\alpha} = \alpha G \left( \frac{2 T a_3^2}{|a_2+a_3|} \right)^{\alpha/2} \, ,
\end{eqnarray}
where we recall that $G= \frac{\Gamma(\frac{3+\alpha}{2})}{2\sqrt{\pi}}$. Here we have derived the result in the case where the probe stays at a constant position but it is clear that this also holds in the general case. Using the GCLT of \cite{GCLT} we can conclude that the one-time distribution of the noise is a stable distribution $S(\alpha , 0 , c , 0)$ with
\begin{eqnarray}
c= \left( \frac{\pi C}{\alpha \sin(\frac{\pi \alpha}{2}) \Gamma(\alpha)} \right)^{1/\alpha} \, ,
\end{eqnarray}
which leads to \eqref{Eq:StableDistc}. Here by definition $\zeta$ has a stable distribution $S(\alpha , 0 ,c , 0)$ if its characteristic function is $\langle e^{i \omega \zeta} \rangle = e^{ - c^{\alpha} |\omega|^{\alpha}} $.
{\bf Remark} The reason why the asymmetry in the tails of the distribution due to the non-zero value of the probe and of the resetting position does not appear in that scaling limit is that the temperature strongly dominates the short-time behavior of the evolution of the bath particles (which can be checked by rescaling the Langevin equation \eqref{Eq:1ParticleAndProbeLinearDynamics}). More precisely the bath particles feel the influence of the deterministic terms after a time of order $\gamma/|b|$, a time at which they have diffused over a distance of order $\sqrt{D\gamma/|b|} = \sqrt{2T/|b|} \sim N^{1/\alpha}$. At that time the influence of the position of the resetting is lost since $A=O(1)$. Scaling instead $A \sim N^{1/\alpha}$, it can readily be seen from \eqref{Eq:FatTail2} that the asymmetry in the tail is conserved under the scaling limit, leading for the probe to a noise with an asymmetric generalized stable one-time distribution. A similar remark can be made for the Gaussian regime.
\section{Conclusion}
Adding random position resetting to overdamped bath particles with linear dynamics yields an exactly solvable model for the probe evolution. A Langevin dynamics can be derived in the large-bath scaling limit, and an interesting nonequilibrium effect arises: when the bath particles would run off to infinity under the equilibrium dynamics (without resetting) because of an inverted harmonic well, the resetting stabilizes them, and the resulting effective dynamics of the probe carries a non-Gaussian noise, possibly with jumps having power-law tails. \\
\noindent {\bf Acknowledgment}: We are very grateful to Satya Majumdar for introducing us to the resetting dynamics and for useful suggestions on the paper. T.T. has been supported by the InterUniversity Attraction Pole phase VII/18 dynamics, geometry and statistical physics of the Belgian Science Policy.
\section{Introduction}
\label{sect:introduction}
The RWI (Lovelace \& Hohlfeld 1978, Toomre 1981,
Papaloizou \& Pringle 1984, 1985) is a non-axisymmetric hydrodynamic shear instability
associated with axisymmetric extrema of vortensity (Lovelace et
al. 1999, Li et al. 2000). The localized extremum behaves as a potential well that, if deep enough, can trap
modes in its corotational resonance. The trapped modes are linearly
unstable and eventually saturate into large-scale vortices (Hawley
1987, Li et al. 2001, Tagger 2001). Essentially, the RWI is the
form the Kelvin-Helmholtz instability takes in differentially
rotating disks, with the upshot being the conversion of
the surplus shear into vorticity.
The RWI was originally proposed in the context of galaxies (Lovelace
\& Hohlfeld 1978), and later in accretion disk tori (Papaloizou \&
Pringle 1984,1985). First sought as the elusive source of accretion in
disks, the interest in it decreased after the discovery of the
magnetorotational instability (MRI, Balbus \& Hawley 1991). It has
been reintroduced in the protoplanetary disk landscape by Varni\`ere \&
Tagger (2006), who suggested that the RWI would naturally develop in the
transition between poorly ionized zones that are ``dead'' to the MRI
and the magnetized zones that are ``active'' to it. This is because the
transition constitutes a gradient in turbulent viscosity, which in turn
would lead to RWI-unstable density bumps at the dead zone boundaries.
Subsequently, Lyra et al. (2008b, 2009ab) presented proof-of-concept
models showing that the particle concentration in these vortices
is strong enough to assemble bound clumps of solids, in the Mars-mass
range. Although those models were idealized (2D, no collisional
fragmentation of the particles),
they showed for the first time that planet formation was feasible in
physically-motivated vortices. This scenario has since been shown to hold as increasing
sophistication is sought. The RWI was shown to exist in three
dimensions with vertical stratification in
barotropic (Meheut et al. 2010, 2012ab), polytropic (Lin
2012a), non-barotropic (Lin 2013), and self-gravitating disks (Lin
2012b). Vortices have also been studied in local models, that can more
easily afford high resolution, showing that the ``elliptic
instability'' that destroys an isolated vortex (Kerswell 2002, Lesur \&
Papaloizou 2009) is well balanced by a vorticity source, leading to
vortex survival (Lesur \& Papaloizou 2010, Lyra \& Klahr 2011).
A critical test was to replace the alpha viscosity
approximation (Shakura \& Sunyaev 1973) by magnetohydrodynamics, with a properly modeled active
zone. This was done by Lyra \& Mac Low (2012), who replaced
the jump in alpha viscosity by a jump in resistivity, letting the MRI naturally evolve in the
active zone. The RWI was excited, leading to a vortex on the dead side
of the transition. This result was confirmed by Faure et al. (2014),
with a dynamically-varying resistivity profile in a
thermodynamically-evolving disk.
\begin{figure*}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{fig1.png}}
\end{center}
\caption[]{Suite of MHD simulations. From left to right, the
transition width is $h_1$=0.1, 0.2, 0.4 and 0.8. The resulting resistivity
profiles are shown in the upper panels. The dashed vertical lines
correspond to $2H$ around the jump center at $r=2.5$. The model with $h_1=0.8$ corresponds to
a smooth jump from $r\approx 1$ to $r\approx 4$, i.e., over
$\approx$15 scale heights. Although at different times, all models
develop a sharp density bump that goes unstable to the RWI,
producing a Rossby vortex (middle panels).
The panels, from left to right, correspond to density snapshots at 100, 100, 122,
and 232$T_0$, where $T_0$ is the orbital period at $r=1$. The
dashed vertical lines correspond to $2H$ around the density maximum. The
reason for RWI excitation in these cases is because even though the resistivity jump is smooth, the
transition in Maxwell stress remains sharp (lower panels).
The transition in Reynolds stress is somewhat smoother, but
turbulent stresses do not blur gradients in the same way Laplacian viscosity
does, and the density bump remains RWI-unstable.}
\label{fig:results-mhd}
\end{figure*}
\begin{figure*}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{fig2.png}}
\end{center}
\caption[]{Two-dimensional hydrodynamical alpha-disks, with viscosity transitions
(upper panels) equivalent to the resistivity ones, and $\alpha_\nu\approx 0.02$ in the
``active'' zone (the subscript $\nu$ underscores that this
$\alpha$ corresponds to a Laplacian viscosity, not to turbulent
stresses). The lower panels show the density, for snapshots at the same times as in
\fig{fig:results-mhd}. Only the first ones ($h_1$=0.1 and $h_1$=0.2) develop the RWI.}
\label{fig:results-visc}
\end{figure*}
One of the remaining questions toward more realism in this scenario is
how sharp the transition needs to be, and how realistic
it is to expect such a gradient. Although linear theory predicts that any
extremum of vortensity is unstable to the RWI, one finds in practice
that the extremum has to be sharp enough to trap modes in the
corotational resonance. The required width is problem-dependent,
but as a rule-of-thumb, Li et al. (2000) find that a 10\%-20\% radial
variation in density over a length scale comparable to the pressure
scale height ($H$) is sufficient to trigger it. Lyra et al. (2009a) have shown
that viscous jumps up to $2H$ in width are necessary, with wider jumps
not exciting the RWI up to 500 orbits. This result was subsequently
confirmed by Regaly et al. (2012), up to $\approx$20\,000 orbits.
Two scale heights is enough in the transition from the inner active to
the outer dead zone, because, in this case, the transition is indeed
sharp. It comes about when the temperature reaches $\approx$900\,K,
enabling collisional ionization of alkali metals (especially
potassium). The model of Lyra \& Mac Low (2012) was
locally-isothermal, using a static sharp resistivity transition
(essentially a Heaviside function), while Faure et al. (2014)
used a thermodynamical model where the resistivity drops to
zero below the temperature threshold of MRI activation.
The theme of this paper is the transition in the outer disk. This is
more problematic because there is no sharp threshold. The ionization
increases gradually as X-rays (and perhaps cosmic rays) reach the
midplane. The resistivity thus decreases very smoothly,
over a very wide range, from $r\approx10$\,AU to $r\approx40$\,AU
(Dzyurkevich et al. 2013). For aspect ratios $h=H/r=0.1$, this corresponds to about 15 scale heights, and
therefore the RWI is not expected to occur. However, a smooth
resistivity jump does not necessarily imply an equally smooth
transition in turbulent stress. In fact, Sano \& Stone (2002) show shearing
box simulations with net vertical field and resistivity where an
abrupt transition is seen. In their figure 9 they plot Maxwell stress
versus Elsasser number, which is a magnetic Reynolds number
defined as
\begin{equation}
\varLambda \equiv v_{_{\rm A}}^2/\eta\varOmega.
\end{equation}
\noindent In this definition,
\begin{equation}
v_{_{\rm A}}\equiv B/\sqrt{\mu_0\rho}
\end{equation} is the Alfv\'en
speed, with $\v{B}$ the magnetic field, $\rho$ the density and $\mu_0$ the
magnetic permeability of vacuum; $\eta$ is the resistivity and $\varOmega$
the Keplerian angular frequency. The figure shows that the Maxwell stress is constant for
$\varLambda \geq 1$, yet it drops by two orders of magnitude for $\varLambda =
0.1$. In this work we explore the connection of this result to
global disk calculations and, in particular, its implications for the RWI.
\section{Model}
\label{sect:model}
\begin{figure*}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{fig3.png}}
\end{center}
\caption[]{Density bumps at selected snapshots, for the MHD
simulations (upper panels) and alpha-disk equivalents (lower
panels). The dashed lines bracket $2H$ around the density maximum of
the last snapshot shown (red line). The RWI ``rule-of-thumb'' of 10-20\% variation in density
over a width comparable to the scale height is satisfied for the MHD
turbulent runs, even for the wide transition model of $h_1=0.8$. In the
equivalent viscous laminar model the density bumps diffuse over a
much larger width, and the RWI is not excited.}
\label{fig:mhdvsvisc}
\end{figure*}
We perform three-dimensional MHD simulations in the cylindrical
approximation, i.e., neglecting the disk vertical stratification
and switching off gravity in that direction. The equations solved are
\begin{eqnarray}
\pderiv{\rho}{t} &=& -\left(\v{u}\cdot\v{\nabla}\right)\rho -\rho{\del\cdot\v{u}}, \label{eq:continuity}\\
\pderiv{\v{u}}{t} &=& -\left(\v{u}\cdot\v{\nabla}\right)\v{u} -\frac{1}{\rho}\v{\nabla}{p} - \v{\nabla}\varPhi + \frac{\v{J}\times\v{B}}{\rho}, \label{eq:navier-stokes}\\
\pderiv{\v{A}}{t} &=& \v{u}\times\v{B} -\eta\mu_0\v{J} \label{eq:induction}\\
p&=&\rho c_s^2\label{eq:eos}.
\end{eqnarray}
\noindent where $\v{u}$ is the velocity, $\v{A}$ is the
magnetic potential ($\v{B}=\del\times{\v{A}}$), $\v{J}=\mu_0^{-1}\del\times{\v{B}}$ is the current density, $p$ is the
pressure, and $c_s$ is the sound speed. The gravitational potential $\varPhi=-GM_\star/r$ where
$G$ is the gravitational constant, $M_\star$ is the stellar mass, and
$r$ is the cylindrical radius. The resistivity is a radial function of
position. We use smooth step functions
\begin{equation}
\eta(r) = \eta_0 - \frac{\eta_0}{2}\left[\tanh\left(\frac{r-r_1}{h_1}\right) -\tanh\left(\frac{r-r_2}{h_2}\right)\right]
\label{eq:eta-jump}
\end{equation}
\noindent in order to mimic the effect of a dead zone. The resistivity passes
from $\eta_0$ to zero over a width $h_1$ centered at an arbitrarily
chosen distance $r_1$. The second transition is for buffer
purposes, raising the resistivity above the MRI-triggering threshold near the outer
boundary of the domain.
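For reference, the following minimal sketch (ours, not part of the simulation setup) evaluates the profile of \eq{eq:eta-jump}; the default parameter values anticipate those quoted later in this section.
\begin{verbatim}
import numpy as np

def eta_profile(r, eta0=5.0e-4, r1=2.5, h1=0.8, r2=10.0, h2=0.8):
    """Resistivity step profile: resistive dead zone inside r1,
    MRI-active annulus between r1 and r2, resistive outer buffer."""
    return eta0 - 0.5 * eta0 * (np.tanh((r - r1) / h1)
                                - np.tanh((r - r2) / h2))
\end{verbatim}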
We solve the equations with the {\sc Pencil Code} {\footnote{The code,
including improvements done for the present work, is publicly
available under a GNU open source license and can be downloaded at
http://www.nordita.org/software/pencil-code}} which integrates
the evolution equations with sixth order spatial derivatives, and
a third order Runge-Kutta time integrator. Sixth-order
hyper-dissipation terms are added to \eq{eq:continuity}-\eq{eq:induction},
to provide extra dissipation near the grid scale, explained in Lyra et
al. (2008a). They are needed because the high order scheme of
the Pencil Code has little overall numerical dissipation (McNally et
al. 2012).
\subsection{Initial Conditions}
We model a disk similar to Lyra \& Mac Low (2012) in cylindrical
coordinates $(r,\phi,z)$. The disk ranges over $r$=[0.4,9.6]$r_0$,
$\phi$=[-$\pi$,$\pi$], and $z$=$[-0.1,0.1]$. The resolution is
[$N_r$,$N_\phi$,$N_z$]=[512,512,32]. The radial spacing is
non-uniform, keeping a constant number of
points per radial scale height ($H/\Delta r=16$), where $\Delta
r =\Delta r(r)$ is
the (local) radial resolution. The vertical direction is
unstratified, and the main purpose of its presence is to resolve the
MRI.
We use units such that
\begin{equation}
GM_\star = r_0 = \varOmega_0 = \rho_0 = \mu_0 = 1.
\label{eq:units}
\end{equation}
The density and sound speed are set as radial power-laws
\begin{equation}
\rho = \rho_0 \left(\frac{r}{r_0}\right)^{-q_\rho}; \quad \quad c_s^2 = c_{s0}^2 \left(\frac{r}{r_0}\right)^{-q_{_T}}
\end{equation}
\noindent with $q_\rho=1.5$ and $q_{_T}=1.0$. The reference
sound speed is set at $c_{s0}$=0.1, and Gaussian-distributed noise is added to the
velocities, cell by cell, at rms equal to $\ttimes{-3}$ times the
local sound speed. The initial angular velocity profile is corrected by the thermal pressure gradient
\begin{equation}
\dot\phi^2 = \varOmega^2 + \frac{1}{r\rho}\frac{\partial{p}}{\partial{r}}
\label{eq:centrifugal}
\end{equation} \noindent where $\varOmega = \varOmega_0\,(r/r_0)^{-q}$ with $q=1.5$
is the Keplerian angular velocity.
The magnetic field is set as a net vertical field, with two MRI wavelengths
resolved in the vertical range. The constraint $\lambda_{\rm MRI} = 2\pi v_{_{\rm A}}\varOmega^{-1} = L_z/2$
translates into a radially varying field
\begin{equation}
v_{_{\rm A}} = \frac{L_z\varOmega}{4\pi}\sqrt{\mu_0 \rho} = B_0 \left(\frac{r}{r_0}\right)^{-(q+q_\rho/2)},
\end{equation}
\noindent i.e., a field falling as a 9/4 power-law, with $B_0= v_{_{\rm A}} \approx \xtimes{1}{-2}$
in code units. Since the magnetic field is not needed in the
resistive inner disk, we bring it smoothly to zero inward of $r=2.5$.
The dimensionless plasma beta parameter $\beta =
2c_s^2/v_{_{\rm A}}^2$ ranges from $\ttimes{3}$ to $\ttimes{4}$ in the active
zone. In this configuration, the MRI grows and saturates
quickly in 3 local orbits. We use reflective boundaries, with a buffer
zone of width $H$ at each radial border, that drives the quantities to the initial condition
on a dynamical timescale.
In the presence of resistivity, the behavior of the MRI is
controlled by the magnetic Reynolds number ${\rm Re}_M = Lv_{_{\rm A}}/\eta$, where $L$ is a
relevant length scale. Setting $L=v_{_{\rm A}}/\varOmega$ defines the
Elsasser number $\varLambda$. Because the MRI exists for
arbitrarily large wavelengths (Balbus \& Hawley 1991), a more relevant length scale, that controls
the excitation of the MRI, is the longest wavelength present in the
vertical domain. For stratified disks, this wavelength is $L=H$.
In the unstratified case, it is simply $L=L_z$.
Once this wavelength is in the resistive range, the MRI is
quenched (e.g., Pessah 2010, Lyra \& Klahr 2011). That defines a
``box'' Lundquist number,
\begin{equation}
\mathrm{Lu} \equiv \frac{L_zv_{_{\rm A}}}{\eta}.
\end{equation}
We set the reference resistivity $\eta_0$ so that
$\mathrm{Lu}$ is unity at $r_1$,
thus quenching the MRI inward of this radius. This constraint
translates into $\eta_0 = L_z v_{_{\rm A}} = \xtimes{5}{-4}$. We
place the resistivity jump starting at $r_1=2.5$, and
vary the transition width $h_1$. For $h_1=0.8$, the resistivity jump
goes from $\mathrm{Lu}\approx 0.1$ to 1 from the
inner disk to $r=2.5$, and thence to $\mathrm{Lu}\approx 10$ at
$r\approx 4$. Considering $r_0=10$\,AU, this corresponds to the
physical dead-to-active transition in the outer disk (Dzyurkevich et
al. 2013). We fix $r_2$=10 and $h_2=0.8$ for
the second transition.
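A rough numerical check of this setup (ours; normalizations are approximate and may differ from the simulations) verifies that this choice indeed places $\mathrm{Lu}$ of order unity near the jump, reusing \texttt{eta\_profile} from the sketch above:
\begin{verbatim}
import numpy as np

Lz, eta0 = 0.2, 5.0e-4            # vertical extent and reference resistivity (code units)
r = np.linspace(0.5, 9.5, 400)
Omega = r ** -1.5                 # Keplerian, with GM_star = r0 = 1
vA = Lz * Omega / (4.0 * np.pi)   # from lambda_MRI = 2 pi vA / Omega = Lz / 2
Lu = Lz * vA / np.maximum(eta_profile(r, eta0), 1e-12)
print("Lu(r = 2.5) =", np.interp(2.5, r, Lu))   # of order unity near the jump
\end{verbatim}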
\section{Results}
\label{sect:results}
\begin{figure}
\begin{center}
\resizebox{\columnwidth}{!}{\includegraphics{fig4.png}}
\end{center}
\caption[]{Cartesian projection of the MHD model with $h_1=0.8$, with the
smooth resistivity jump from $r\approx 1$ to $r\approx 4$,
i.e. roughly 15 scale heights. Although the resistivity jump is
smooth, the resulting transition in Maxwell stress is
sharp, triggering the Rossby vortex. Notice also the conspicuous spiral
pattern propagating into the dead zone. For an animation of the simulation,
click \href{https://www.youtube.com/watch?v=XBH5o1q9pZI}{\blue{here}}.}
\label{fig:cartdisk}
\end{figure}
We show in \fig{fig:results-mhd} the results of the suite of MHD
simulations. The only difference between the models is the width
of the transition from the (inner) dead zone to the active zone. From left to right, the transition width $h_1$ is
0.1, 0.2, 0.4, and 0.8. The upper panels show the corresponding
resistivity jumps; the dashed vertical lines mark a width of $2H$
around $r=2.5$. For the latter two simulations, the
transition in resistivity is significantly wider than $2H$.
The middle panels show the density in the
midplane, normalized by the initial density. The RWI was excited in
all models. The vortex is seen as a non-axisymmetric density
enhancement around (but not exactly at) the resistivity transition
center at $r=2.5$. The white dashed lines correspond to a $2H$
width centered at the density maximum. That the $2H$ lines box the structures
is evidence that they are indeed vortices (whose size is limited by
shocks and therefore approximately $2H$; cf. Lyra \& Lin 2013).
The snapshots are at 100, 100, 122, and 232 orbits.
In the lower panels we show the turbulent alpha values,
i.e., the magnetic and kinetic turbulent stresses normalized by the
local pressure (Shakura \& Sunyaev 1973). We define them, respectively, as
\begin{equation}
\alpha_M = \frac{\langle{v_{_{\rm A}}}_r^\prime \, {v_{_{\rm A}}}_\phi^\prime\rangle}{c_s^2}, \qquad \alpha_R = \frac{\langle u_r^\prime \, u_\phi^\prime\rangle}{c_s^2},
\end{equation}
\noindent where the angled brackets represent vertical and azimuthal average. The
prime represents a fluctuation from a mean, defined as $\xi^\prime = \xi
- \langle\xi\rangle$, where $\xi$ is an arbitrary quantity. As
seen in the figure, both stresses level off at $\alpha \approx 0.01$. The Maxwell
stress is well confined to the active zone; the Reynolds stress in the
dead zone is roughly an order of magnitude lower than in the active
zone.
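In terms of the simulation snapshots, these averages amount to the following operation (a schematic of ours; the array layout $(N_r, N_\phi, N_z)$ and the names are assumptions):
\begin{verbatim}
import numpy as np

def turbulent_alpha(f_r, f_phi, cs2):
    """Normalized turbulent stress <f_r' f_phi'>/cs^2, with fluctuations
    taken about the azimuthal + vertical mean at each radius."""
    fr_p = f_r - f_r.mean(axis=(1, 2), keepdims=True)
    fp_p = f_phi - f_phi.mean(axis=(1, 2), keepdims=True)
    return (fr_p * fp_p).mean(axis=(1, 2)) / cs2

# alpha_M = turbulent_alpha(vA_r, vA_phi, cs2)   # from Alfven velocity components
# alpha_R = turbulent_alpha(u_r, u_phi, cs2)     # from gas velocities
\end{verbatim}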
We show in \fig{fig:results-visc} the corresponding 2D
viscous alpha-disk simulations. The viscosity profile corresponds to that of
the resistivity, but with minima and maxima swapped,
\begin{equation}
\nu(r) = \frac{\nu_0}{2} \left[\tanh\left( \frac{r-r_1}{h_1}\right) - \tanh\left( \frac{r-r_2}{h_2} \right) \right].
\end{equation}
\noindent The values of $r_1$, $h_1$, $r_2$ and $h_2$ are identical to those
used in the MHD models. The second jump is not needed in the
viscous calculation, but kept for symmetry
reasons. The value of $\nu_0$ is chosen consistently with the value of
$\alpha\approx0.02$ (for the combined kinetic and magnetic stresses) derived
from the MHD calculation.
The snapshots were taken at the
same times. In agreement with previous alpha-disk works on the RWI,
but in contrast to the MHD runs shown in \fig{fig:results-mhd}, only in the first two simulations,
where the viscosity jump is sharper than $2H$, was the RWI
triggered.
\section{Discussion}
\label{sect:discussion}
According to RWI theory, the instability will be triggered if a
quantity $\mathcal{L}$, associated with vortensity, has an extremum
somewhere in the disk. In the context of locally isothermal disks
$\mathcal{L}$ is half the inverse of vortensity
\begin{equation}
\mathcal{L} = \frac{\rho}{2\omega_z},
\end{equation}
\noindent where $\omega_z$ is the vertical vorticity. Amplitude and sharpness
of the transition also play a role, although a strict criterion has not
been derived. Linear analysis predicts instability if any extremum
exists, but empirically, it is found that critical amplitude and sharpness
exist for the onset of instability. Li et al. (2000) report that as a rule-of-thumb the
instability is triggered when the density varies by
10\%-20\% over lengths scales comparable to the scale height $H$;
a condition subsequently confirmed in later simulations with different
numerical schemes and different resolutions.
We show in \fig{fig:mhdvsvisc} that this
rule-of-thumb is met in the MHD simulations for the
transition widths used. The figure shows, for each width used, the
density bump in the MHD runs and alpha-disk equivalent, in three
different snapshots, labeled in the figure. In the viscous run (lower
panels) the smoothness of the resulting density bump is strongly correlated with
the width of the viscous jump. The same does not happen for the MHD
run, where the density bump is sharp in all cases, and, although the
rate of mass accumulation slows down as the resistivity transition widens, the
smoothness of the bump is only weakly correlated with the width of the
resistivity gradient, remaining sharp in all cases tested. We did not explore further than the transition
width $h_1=0.8$ because, taking $r_0$=10\,AU, that width
corresponds to a transition over 30\,AU, which is the physical value
expected for the dead-active transition in the outer disk (Dzyurkevich
et al. 2013). The
conclusion is that in physical disks, even the smooth outer resistivity transition
can excite the RWI.
That even
this smooth transition can lead to excitation of the RWI is at
first sight unexpected. How can the density bump be so sharp? The
solution seems to be in the Maxwell stress. Although the resistivity
is smooth, the transition in Maxwell stress remains sharp, a feature that is absent in
the alpha-disk viscous equivalent. The origin of this behavior is a
property of the MRI. The MRI is excited and maintained for
$\mathrm{Lu} >1$, which always constitutes a sharp transition to turbulence. So, in
essence, it is a property of turbulent flows. As long as the critical
wavelength is not within the resistive range, the MRI will be
excited. This is in agreement with the shearing box results of
Sano \& Stone (2002). The novelty of this work is to show that this
non-uniform $\eta$--$\alpha$ relationship triggers the RWI even for
weak gradients of $\eta$. Maxwell stresses drive the gas inwards (away from
the pressure maximum), placing it at the transition, at
the laminar side. Inviscid, the density bump is slightly blurred by
Reynolds stresses, but does not spread viscously as in the alpha-disk
case, and thus remains sharp. Because the vortex is in the
resistive side of the transition, it does not get destroyed by
the magneto-elliptic instability (Mizerski \& Bajer 2009, Lyra \&
Klahr 2011, Mizerski \& Lyra 2012).
We show in \fig{fig:cartdisk} a Cartesian projection of the
MHD model with $h_1=0.8$. The Rossby vortex is conspicuous as
a crescent-shaped overdensity at $r\approx 2$. Notice also the spiral
pattern in the dead zone. Reminiscent of planet-induced spirals, in
this case the spiral is the result of waves propagating inwards, from the
turbulence in the active zone.
\section{Conclusions}
\label{sect:conclusions}
The RWI requires a sharp extremum
of potential vorticity. In alpha disks, this bump comes about only at
sharp viscosity transitions, sharper than $2H$. This, although making
the inner active/dead zone transition attractive for the RWI, has hindered the appeal of
the outer dead/active zone transition as a RWI location.
Sano \& Stone (2002) found that, in shearing boxes, the Maxwell stress
increases sharply with Elsasser number in the range below unity. Bringing the
result to global disks, we have found that the required sharpness of the viscosity transition
for RWI is a feature of alpha disks. We have increased the
width of the resistivity transition without finding an RWI cutoff.
Resistivity transitions can be as smooth as 30 AU in width
in the outer disk and still excite the RWI. This is because, once the
MRI is excited, its growth rates
can be reduced by resistivity, but the
resulting amplitude at saturation is only weakly affected. As a result,
the transition in Maxwell stress remains sharp, driving mass to the dead side of the transition. Once there, without
Laplacian viscosity to smooth it, the density bump remains sharp,
triggering the RWI as it collects mass.
Our finding has importance for the interpretation of observations of
dust asymmetries in transitional disks (van der Marel et al. 2013,
Casassus et al. 2013, Isella et al. 2012, 2013, van der Plas et
al. 2014), for which the best explanation is a vortex. A vortex can be brought
about by RWI in gaps carved by planets (de Val Borro 2007, Lyra et
al. 2009b, Lin \& Papaloizou 2011ab, Lin 2012), by convective overstability (Klahr \& Hubbard
2014, Lyra 2014), or by RWI at dead zone boundaries (Varni\`ere \&
Tagger 2006, Lyra et al. 2008b, 2009a). A planetary gap is an exciting possibility (Zhu
\& Stone 2014), but before an undetected planet is invoked, other
alternatives ought to be dealt with. In the case of Oph IRS 48, convective overstability fails to
operate, because the disk is supposedly too radiatively efficient; as
for dead zone boundaries, the transition in resistivity was
thought too smooth to lead to RWI. We have shown that the latter is not a
deterrent. RWI in the outer dead/active transition may be the culprit
for the vortex of Oph IRS 48.
Notice also the spiral pattern in the dead zone in
\fig{fig:cartdisk}. Similar spiral patterns have been observed in
actual disks (SAO 206462, Muto et al. 2012), and attributed to unseen
planets. In our simulation, these are simply spiral density waves that
propagate inward, launched by the turbulence in the active zone. The
spiral pattern comes about because without turbulent interference,
they propagate coherently. A proper comparison to the
observations would require the addition of particles in
order to study how
they are trapped in these spirals, which we leave for future
work. A priori, we expect the resulting
particle concentration to be stronger than the trapping
in planetary spirals, that are stationary in the reference frame of the
planet and thus only trap the very well-coupled dust (Lyra et
al. 2009b). In contrast, these dead zone spirals rotate at the disk's
velocity and hence should be able to capture more loosely coupled
particles (Ataiee et al. 2013).
We caution that our models have several limitations. They are
unstratified MHD calculations with static Ohmic resistivity profiles.
Stratification should have little effect on the linear growth of the
RWI, which is essentially 2-dimensional (Umurhan 2010, Meheut et al.
2012ab, Lin 2012ab, 2013). However stratification will influence the
outcome by setting a physical scale for the resistivity cutoff: in
unstratified models the results depend on box size, because the
Lundquist number is a function of $L_z$. In stratified models this
artificial parameter is replaced by the density scale height $H$.
Furthermore the saturated RWI may be severely affected by the vertical
structure. Lin (2014) finds that Rossby vortices are transient in
disks with a higher-viscosity surface layer (Gammie 1996) if the
density bump spreads out once the vortex forms. Here the vortex is
long-lived only if the accretion viscosity is low throughout the
column. However, Lin (2014) also finds that if the viscosity is such
as to maintain the density bump, the vortex is sustained despite the
layered structure. This latter situation is more similar to our case,
where the turbulent stresses bring material to the dead zone edge,
strengthening the density bump. Further study of vortex lifetimes in
layered surroundings is warranted, especially treating the ambipolar
and Hall terms, which can greatly alter the vertical structure of the
weakly-ionized annuli in protostellar disks (Wardle 1999, Bai \& Stone
2011, 2013, Wardle \& Salmeron 2012, Mohanty et al. 2013, Kunz \& Lesur 2013, Lesur et al.
2014). The resistivity's time-dependence has less effect, judging
from the results of Faure et al. (2014).
The layered structure of the magnetic activity potentially has a
special impact at the dead zone edge, where the stratification can
yield a vertically-averaged stress that varies more smoothly with
radius than the midplane profile represented in our unstratified
calculations. A smoother stress profile would lead to a broader
density bump, weakening our conclusion. However, dead zone structure
calculations treating the Ohmic and ambipolar terms in prescribed
surface density profiles indicate the transition from dead to active
zone forms a vertical wall up to $\pm H$ (fig. 4a of Dzyurkevich et
al. 2013). Between $H$ and $2H$, the boundary bends further from the
star, but is set by the ambipolar diffusion, which will weaken as the
density bump builds up. Over time, mass is thus likely to accumulate
in a narrow range of radii near the position of the dead zone's edge in
the midplane. The feedback between the evolving density distribution
and the evolving diffusivities deserves further investigation.
Our results are evidence of the practical importance of developing a
detailed picture of the underlying turbulence mechanisms in
protoplanetary disks. As demonstrated here, such models can have a
critical impact on how observations are understood.
\begin{acknowledgements}
This work was performed in part at the Jet Propulsion
Laboratory, under contract with the California Institute of Technology
funded by the National Aeronautics and Space Administration
(NASA) through the Sagan Fellowship Program executed by the NASA
Exoplanet Science Institute.
The research leading to these results has received funding from the
People Programme (Marie Curie Actions) of the
European Union's Seventh Framework Programme
(FP7/2007-2013) under REA grant agreement 327995.
We acknowledge discussions with Min-Kai Lin and Zhaohuan Zhu.
\end{acknowledgements}
\section{Introduction}
Discovery strategies for light, color-singlet technipions
at hadron colliders have been investigated.
As pointed out by Eichten and Lane\cite{eichtenlane}
they can be copiously produced through $s$-channel
technirho production:
$$ q \overline q^\prime \to \rho^\pm_{T} \to V_1 V_2 $$
where $V_1 V_2 = W^\pm Z$, $W^\pm \pi^0_T$, $\pi^\pm_T Z$,
or $\pi^\pm_T \pi^0_T$; and through
$$ q \overline q \to \rho^0_{T} \to V_1 V_2 $$
where $V_1 V_2 = W^+ W^-$, $W^\pm \pi^\mp_T$,
or $\pi^+_T \pi^-_T$.
The modes where $W$ and $Z$ are produced and subsequently
detected in their leptonic decays are straightforward and
largely free of background; see for example the ATLAS
Technical proposal\cite{atlastp}.
In this study, the dijet decays of the technipion have
been investigated:
\begin{eqnarray*}
\pi^0_T & \to & b\overline b \\
\pi^\pm_T & \to & c \overline b
\end{eqnarray*}
These will generally be expected to dominate as
long as the $t \overline t$ and $ t \overline b$
modes are kinematically accessible; in Topcolor
models the top decay modes can remain forbidden even for
larger masses.
\section{Signal and Background}
For definiteness the following process has been considered:
$$ q \overline q^\prime \to \rho_T \to W(\ell \nu) \pi_T (b\overline b),$$
with $m_{\rho_T} = 210\,$GeV and $m_{\pi_T}=115\,$GeV.
The signal is thus a $W$
(reconstructed from lepton plus missing transverse energy)
together with two jets, with a resonance in the dijet mass $m_{jj}$.
The backgrounds are $W+$jets and $t\overline t$. (The latter has not yet been
included in the study since it is small compared with the $W+$jets
process for $n=2$ jets, even if single $b$-tagging is applied).
The signal cross sections are large: about 5~pb at the Tevatron, and
35~pb at the LHC\cite{lane}.
The main issue is therefore dijet mass resolution and
$b$-tagging.
Signal and background events were simulated using ISAJET. The signal
topology was generated using the TCOLOR process with the $WZ$ final state;
the $Z$ mass was set to $m_{\pi_T}$ and the decay to $b\overline b$ was
forced.
\begin{figure}[t]
\leavevmode
\begin{center}
\vspace{2.5cm}
\resizebox{!}{8cm}{%
\includegraphics{motivate_eta_e_cut.ps}}
\end{center}
\vspace{-2.5cm}
\caption{Reconstructed lepton pseudorapidity distributions for signal
and $W+$jets processes.}
\label{fig:eta_e}
\end{figure}
Detector acceptance and resolution were modelled using a fast
simulation\cite{sscsim}.
Energy was deposited in cells of size $\Delta\eta \times \Delta\phi =
0.1 \times 0.1$ and was smeared for
detector resolution: $15\%/\sqrt{E{\rm(GeV)}} \oplus 0.5\%$ for EM, and
$50\%/\sqrt{E{\rm(GeV)}} \oplus 5\%$ for hadronic energy.
Transverse shower spreading and calorimeter leakage
were also modeled. Jets were found (up to $|\eta| = 4$)
from the calorimeter towers using a cone of $R=0.7$. Missing transverse
energy was calculated from the sum of the calorimeter towers over
$|\eta| \leq 5$.
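A minimal sketch of such Gaussian energy smearing (ours; the function and variable names are assumptions) is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def smear_energy(E, stoch, const):
    """Gaussian smearing with relative resolution stoch/sqrt(E) (+) const,
    the two terms added in quadrature."""
    sigma_rel = np.hypot(stoch / np.sqrt(E), const)
    return E * (1.0 + sigma_rel * rng.standard_normal(np.shape(E)))

E_em  = smear_energy(np.array([25.0, 50.0]), 0.15, 0.005)  # EM calorimeter
E_had = smear_energy(np.array([25.0, 50.0]), 0.50, 0.05)   # hadronic calorimeter
\end{verbatim}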
Events were selected which satisfied the following criteria:
\begin{itemize}
\item A good $W \to \ell \nu$ candidate, defined as:
\begin{itemize}
\item[$\circ$] lepton with $p_T^\ell > 25$~GeV/c, $|\eta^\ell| < 1.1$, and
isolated (transverse energy within $R < 0.4$ less than
10\% of the lepton $p_T$);
\item[$\circ$] $E_T^{miss} > 25$~GeV;
\item[$\circ$] Transverse mass $m_T$ satisfying $50 < m_T < 100$~GeV;
\end{itemize}
\item At least two jets with $E_T > 20$~GeV and $|\eta^j|<2.5$.
\end{itemize}
The lepton was required to be central, since (as Fig.~\ref{fig:eta_e} shows)
this gives some improvement in signal-to-background.
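For illustration, the $W$-candidate part of this selection can be written compactly (our sketch; input names are assumptions), using the usual transverse mass $m_T = \sqrt{2 \, p_T^\ell \, E_T^{miss} (1 - \cos\Delta\phi)}$:
\begin{verbatim}
import numpy as np

def w_candidate_mask(pt_l, eta_l, iso_frac, met, dphi_l_met):
    """Boolean mask implementing the lepton + missing-E_T selection above."""
    mt = np.sqrt(2.0 * pt_l * met * (1.0 - np.cos(dphi_l_met)))  # W transverse mass
    return ((pt_l > 25.0) & (np.abs(eta_l) < 1.1) & (iso_frac < 0.10)
            & (met > 25.0) & (mt > 50.0) & (mt < 100.0))
\end{verbatim}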
For $b$-tagging it was assumed that single tagging would be performed
with an efficiency of 50\% and a mistag rate of 1\% for light quark jets.
The cross sections obtained for the Tevatron and LHC are listed in
Table~\ref{table}.
It will be seen that the signal-to-background ratio
for this process is rather better at the Tevatron.
Figure~\ref{fig:jjmass} shows the invariant mass distribution obtained
for the leading jet pair in signal events. The peak has a
resolution of about 15~GeV with tails from jet combinatorics.
\begin{figure}[t]
\leavevmode
\begin{center}
\vspace{2.5cm}
\resizebox{!}{8cm}{%
\includegraphics{jjmass.ps}}
\end{center}
\vspace{-2.5cm}
\caption{Leading dijet invariant mass distribution for
$W(\ell\nu)\pi_T(b \overline b)$ events at the LHC.}
\label{fig:jjmass}
\end{figure}
\begin{figure}[t]
\leavevmode
\begin{center}
\vspace{2.5cm}
\resizebox{!}{8cm}{%
\includegraphics{techni_tev1.ps}}
\end{center}
\vspace{-2.5cm}
\caption{Leading dijet invariant mass distribution for
technipion signal (dark) over the $W+$jets background
(light) at the Tevatron, before $b$-tagging. Vertical
scale is events/10~GeV/2~fb$^{-1}$. The background has
been smoothed to simulate the full statistics.}
\label{fig:tev1}
\end{figure}
\begin{figure}[t]
\leavevmode
\begin{center}
\vspace{2.5cm}
\resizebox{!}{8cm}{%
\includegraphics{techni_tev2.ps}}
\end{center}
\vspace{-2.5cm}
\caption{Leading dijet invariant mass distribution for
technipion signal (dark) over the $W+$jets background
(light) at the Tevatron, after $b$-tagging. Vertical
scale is events/10~GeV/2~fb$^{-1}$.}
\label{fig:tev2}
\end{figure}
\begin{figure}[t]
\leavevmode
\begin{center}
\vspace{2.5cm}
\resizebox{!}{8cm}{%
\includegraphics{techni_lhc2.ps}}
\end{center}
\vspace{-2.5cm}
\caption{Leading dijet invariant mass distribution for
technipion signal (dark) over the $W+$jets background
(light) at the LHC, after $b$-tagging. Vertical
scale is events/10~GeV/0.5~fb$^{-1}$.}
\label{fig:lhc2}
\end{figure}
\begin{table}[h]
\begin{center}
\caption{Cross sections for signal and $W+$jets background.}
\label{table}
\begin{tabular}{lcc}
\hline
\hline
&$W \pi_T$ &$W+$jets \\
\hline
LHC \\
\hline
$\sigma\cdot B$ &3.7~pb &2200~pb\\
with 2 jets &1.7~pb &250~pb\\
with $b$-tag &0.85~pb &2.5~pb\\
\hline
Tevatron \\
\hline
$\sigma\cdot B$ &0.53~pb &170~pb\\
with 2 jets &0.10~pb &2.5~pb\\
with $b$-tag &0.05~pb &0.025~pb\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
Figure~\ref{fig:tev1} shows the invariant mass distribution obtained
for signal and background at the Tevatron, before $b$-tagging.
Once $b$-tagging is applied, the signal becomes much more apparent,
as seen in Fig.~\ref{fig:tev2}. A clear excess is visible with $S:B\sim 5$
in the peak, which could easily be discovered in Run II with 2~fb$^{-1}$.
For comparison, Fig.~\ref{fig:lhc2} shows the situation at the LHC after
$b$-tagging.
\section{Kinematic Properties of the Events}
We note that there are significant differences in kinematic
distributions between signal and background events.
Cuts on these distributions may be used to further improve the
background rejection,
or they may be used as a way to confirm the
presence of a signal by (for example) observing
differences in these variables as a function of
dijet mass.
Some variables of interest are:
\begin{itemize}
\item
Transverse momentum of the leading dijet system, $p_T^{jj}$;
\item
Pseudorapidity of the leading dijet system, $\eta^{jj}$;
\item
$\Delta\phi$ between the leading two jets;
\item
Dijet asymmetry $A = (E_{T1}-E_{T2})/(E_{T1} + E_{T2})$.
\end{itemize}
The dijet pseudorapidity and asymmetry do not offer much discriminating
potential, but the $\Delta\phi$ and transverse momentum of the dijet system
are distinctly different between signal and background,
as can be seen from Fig.~\ref{fig:kinematics}.
Requiring, for example, $\Delta\phi > 2.3$, and $p_T^{jj} < 45$~GeV/c
retains 60\% of the signal while rejecting 74\% of
the $W+$jets background.
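A sketch of these variables and cuts (ours; it assumes $p_T$-ordered jets with the leading jet first):
\begin{verbatim}
import numpy as np

def dijet_variables(pt1, phi1, pt2, phi2):
    """Kinematic discriminants for the leading (1) and subleading (2) jets."""
    dphi = np.abs(np.mod(phi1 - phi2 + np.pi, 2.0 * np.pi) - np.pi)
    px = pt1 * np.cos(phi1) + pt2 * np.cos(phi2)
    py = pt1 * np.sin(phi1) + pt2 * np.sin(phi2)
    pt_jj = np.hypot(px, py)              # transverse momentum of the dijet system
    asym = (pt1 - pt2) / (pt1 + pt2)      # dijet asymmetry A
    return dphi, pt_jj, asym

def passes_kinematic_cuts(dphi, pt_jj):
    return (dphi > 2.3) & (pt_jj < 45.0)  # cuts quoted in the text
\end{verbatim}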
\begin{figure}[t]
\leavevmode
\begin{center}
\vspace{2.5cm}
\resizebox{!}{8cm}{%
\includegraphics{kinematics.ps}}
\end{center}
\vspace{-2.5cm}
\caption{Distributions of $p_T^{jj}$, $\eta^{jj}$, asymmetry
$A$ and $\Delta\phi$ for
technipion signal (shaded) and $W+$jets background
(outline) at the Tevatron.}
\label{fig:kinematics}
\vspace{2.85cm}
\end{figure}
\section{Conclusions}
Light, color-singlet technipions, produced in association with a vector
boson through $s$-channel
technirho production, can be discovered at hadron colliders
in the $b\overline b$ decay mode. The signal to background ratio
is somewhat better at the Tevatron, but this physics can also be addressed
at the LHC. Tagging of $b$-quarks is important to reduce the $W+$jets
background.
The kinematic properties of signal and background
events are significantly different and simple cuts can be used to further
improve the signal to background ratio.
\section{Introduction}
Galaxy discs have been known to follow an exponential decline in the radial surface brightness since the study by \citet{freeman1970}. Disc formation is thought to be closely connected to the initial formation of galaxies, the exponential nature being a result of comparable time-scales for viscous evolution and star formation (e.g. \citealt{lin1987}; \citealt{yoshii1989}). The outermost faint regions of galaxies were first studied in detail in edge-on systems by \citet{vanderkruit1979}, who found that the exponential decay of the surface brightness does not always continue infinitely outwards. Instead a sharp change was found with a much steeper surface brightness decline in the outermost parts of the disc.
More recently the advance in observations has enabled detailed studies of the faint outer regions of more face-on disc galaxies. Similar, yet not as sharp changes in the surface brightness profiles have been found in many face-on galaxies in the optical (\citealt{erwin2005}; \citealt{pohlen2006}; \citealt{erwin2008}; \citealt{gutierrez2011}), and in the infrared (\citealt{munozmateos2013}). Also, most discs are found to be best described as double exponentials with a change of slope between the two exponential subsections. Discs can thus be divided into three main types (\citealt{pohlen2006}; \citealt{erwin2008}): single exponential discs with no break (type I), discs where the slope is steeper beyond the break (type II), and discs where the outer slope is shallower (type III). The type II breaks in face-on galaxies generally appear much further in than the ``truncations'' in edge-on galaxies (see for example \citealt{martinnavarro2012}), thus most likely representing a different feature.
One of the first explanations for type II breaks in the surface brightness profiles was given by \citet{vanderkruit1987}, relating it to angular momentum conservation during the initial collapse of the gas at galaxy formation. An alternative explanation involves bars, via the influence of the Outer Lindblad Resonance, which is connected to the formation of outer rings (classified as ``II.o-OLR'', \citealt{pohlen2006}, see also \citealt{munozmateos2013}). Indeed, \citet{erwin2008} noted that in some galaxies with outer rings the break radius of a type II profile is similar to the radius of the outer ring. Furthermore, the presence of a star formation threshold has also been associated with type II profiles in galaxies in which the break radius is larger than the outer ring radius (type ``II.o-CT'', \citealt{pohlen2006}; see also \citealt{schaye2004}; \citealt{elmegreen2006}; \citealt{christlein2010}). Nevertheless, possible connections between breaks and different structural components, such as rings and spirals, have not yet been systematically studied.
Studies of disc breaks in galaxies have become increasingly important with the discovery of stellar migration in discs (e.g. \citealt{sellwood2002}; \citealt{debattista2006}; \citealt{roskar2008a,roskar2008b}; \citealt{schonrich2009a,schonrich2009b}; \citealt{minchev2012}). The idea of migration has changed the paradigm that stars born in the disc do not travel radially far from their place of birth. On the contrary, stars can travel several kiloparsecs radially both inwards and outwards. Evidence of this process has been found in the solar neighbourhood, where the wide metallicity and age distributions of stars \citep{edvardsson1993} can be explained by radial migration \citep{roskar2008b}. The effects of star formation and radial migration can not necessarily be considered separately (e.g. \citealt{roskar2008a}). In these simulations the type II break is caused by a drop of the star formation rate beyond the break due to the reduced amount of cooled gas. However, the outer disc is simultaneously populated by stars radially migrating from the inner disc. In older stellar populations the outer slope is shallower than in younger populations, possibly being a result of more extended radial spreading due to the longer duration of stellar migration (e.g. \citealt{radburnsmith2012}). Colour profiles have also shown that especially for type II profiles the discs become increasingly redder after the break radius (\citealt{azzollini2008}; \citealt{bakos2008}), also consistent with stellar migration. This interpretation is not unique because in the fully cosmological simulations \citet{sanchezblazquez2009} see reddening beyond a break radius in the disc also without stellar migration, and argue that it could simply be due to a change in the star formation rate around the break. In their simulation the presence of stellar migration can smooth the mass profile of the galaxy up to a point where it appears as a single exponential. However, using similar simulations \citet{roskar2010} noted that cosmological simulations can not yet definitely tell the relative roles of radial migration and star formation in the outer regions of galaxies. Alternatively, the larger radial velocity dispersion of old stars (e.g. for solar neighbourhood \citealt{holmberg2009}) could also explain the observed properties of the outer discs up to a point.
Type III profiles remain more ambiguous. Sometimes they are associated with an outer spheroidal or halo component in the galaxy, thus not being a disc feature at all (type ``III-s'', \citealt{pohlen2006}; see also \citealt{bakos2012}). \citet{comeron2012} has proposed that $\gtrsim 50 \%$ of type III profiles could also be created by superposition of a thin and thick disc, when the scalelength of the thick disc is larger than that of the thin disc. Extended UV emission has been found in many galaxies beyond the optical disc (\citealt{gildepaz2005}; \citealt{thilker2005}; \citealt{zaritsky2007}). In such cases the increased star formation at the outskirts of the galaxies could give rise to some of the observed type III profiles. Perhaps the most intriguing possibility of type III profile formation comes from the environmental effects. Galaxies live in a hierarchical universe where galaxy mergers are common. These mergers, and also mild gravitational interactions, between galaxies can certainly change the appearance of the involved galaxies. Already in the early simulations of \citet{toomre1972} close encounters between galaxies were shown to significantly perturb the outer discs of the involved galaxies, and as a result tidal tails and bridges formed. They also showed that the masses of the galaxies affect the outcome, and the less massive galaxy is more strongly perturbed. Furthermore, predictions from more recent simulations have shown that type III profiles could be a result of minor mergers (e.g. \citealt{younger2007}; \citealt{laurikainen2001}).
\citet{pohlen2006} made the first attempt to examine the galaxy environments of the different break types by counting the number of neighbouring galaxies from SDSS within 1 Mpc projected radius, for a recession velocity difference to the target galaxy of $|\Delta v|< 350$ km s$^{-1}$, and absolute magnitude of $M_{\text{r'}}< -16$ magnitudes. They concluded that their criteria for the environment were often too harsh to truly characterise it. More recently \citet{maltby2012} compared field and cluster galaxies at higher redshifts ($z_{phot}>0.055$), and found no differences in the break types in different environments. However, they focused only on the outermost disc regions and possibly missed a significant fraction of profile breaks. Therefore, the question of the influence of galaxy environment on disc profile type remains open.
We study the disc and break parameters measured from the radial surface brightness profiles. We aim to systematically associate disc breaks with specific structural components of galaxies, such as rings, lenses, and spirals. In addition, we perform a detailed environmental analysis searching for possible connections among the different disc profile types with the galaxy density and the presence of nearby perturbers. As a database we use 3.6 $\mu m$ images from the Spitzer Survey of Stellar Structure in Galaxies (S$^4$G, \citealt{sheth2010}) and $K_{\text{s}}$-band images from the Near Infrared S0-Sa galaxy Survey (NIRS0S, \citealt{laurikainen2011}). The 3.6 $\mu m$ and $K_{\text{s}}$-band images are basically free of extinction, particularly at the large disc radii of most interest here. Both bands trace the old stellar population, which is important because of the expected wavelength dependency of the disc scalelengths (\citealt{bakos2008}; \citealt{radburnsmith2012}). In the environmental analysis we use the 2 Micron All Sky Survey (2MASS) Extended Source Catalog (XSC, \citealt{jarrett2000}) and the 2 Micron All Sky Survey Redshift Survey (RSC, \citealt{huchra2012}).
The outline of this paper is as follows. In section \ref{sample-selection} we introduce the sample selection criteria for our study, in section \ref{analysis-methods} we describe the data processing, the analysis methods, and the classification of the profile types. In section \ref{env-ana-methods} we describe the environmental study. In sections \ref{results} and \ref{env_effects} we present the main results of the surface brightness profile analysis and the results of the environmental analysis, respectively. The results are discussed in section \ref{discussion}, and summarised in section \ref{sum-conclusion}. The general parameters of the galaxies, including the environmental parameters, are presented in appendix \ref{app:sample} in Table \ref{app:a}, and the parameters of the discs and breaks in Table \ref{app:b}.
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\linewidth]{sample_nir.pdf}
\caption{The \textit{left panel} shows the Hubble type distribution of the sample galaxies, and the \textit{right panel} the absolute $B$-band magnitude distribution of the galaxies.}
\label{sample_histo}
\end{center}
\end{figure*}
\section{Sample selection}
\label{sample-selection}
We use the Spitzer Survey of Stellar Structure in Galaxies (S$^4$G, \citealt{sheth2010}), which consists of more than 2300 galaxies observed at 3.6 and 4.5 $\mu m$ wavelengths with the IRAC instrument \citep{fazio2004}. The sample for the survey was selected using values from HyperLeda \citep{paturel2003}, for the radial velocity ($V_{radio} < 3000$ km s$^{-1}$), the total corrected blue magnitude ($m_{Bcorr}<15.5$), the isophotal angular diameter in blue ($D_{25} > 1.0$ arcmin), and the galactic latitude ($|b| > 30^{\circ}$). Many gas-poor early-type galaxies do not have radio-based measurements of the radial velocities in HyperLeda, and so are not part of the S$^4$G sample. To fill this gap, we include the Near Infrared S0-Sa galaxy Survey (NIRS0S, \citealt{laurikainen2011}) in our study. NIRS0S is a $K_{\text{s}}$-band study and the sample selection is based on the Third Reference Catalogue of Bright Galaxies \citep{devaucouleurs1991} for the morphological types $-3 \le T \le 1$, total magnitude $B_T \le 12.5$ mag, and inclination $i < 65^{\circ}$. Additional galaxies slightly fainter than the main selection criteria allow are included in the NIRS0S sample, so that in total the full sample contains 215 galaxies (13 ellipticals, 139 S0s, 30 S0/a, 33 Sa, and one later type galaxy). NIRS0S partly overlaps with the S$^4$G survey, having 93 galaxies in common.
To select our sample from these two surveys we use the following selection criteria:
\begin{itemize}
\item Hubble stage $-3 \le T \le 7$ from Buta et al. (in preparation) for S$^4$G galaxies and from \citet{laurikainen2011} for NIRS0S galaxies,
\item Galaxy magnitude in $K_{\text{s}}$-band $m \le 9.5$, taken from 2MASS (isophotal apparent magnitude measured within an elliptical aperture defined at $K_{\text{s}} = 20$ mag arcsec$^{-2}$ isophote \citep{jarrett2000}),
\item minor/major axis ratio of the disc $b/a > 0.5$.
\end{itemize}
The very late-type disc galaxies ($T>7$) are excluded because the study of disc structures becomes increasingly harder due to their patchy nature. The above magnitude limit is used because the completeness limit in the 2MASS Redshift Survey \citep{huchra2012} is 11.75 magnitudes in the $K_{\text{s}}$-band. With this selection we are able to search the environments for neighbouring galaxies that are at most two magnitudes fainter than the primary galaxies (see also section \ref{quantifying}). The inclination criterion is applied because we want to restrict to nearly face-on or moderately inclined galaxies, for which the profile shapes can be determined in a reliable manner. For galaxies that appear both in S$^4$G and NIRS0S we use the S$^4$G images because they are deeper. For S$^4$G galaxies we chose to use the 3.6 $\mu m$ images, due to the wavelength range being closer to the $K_{\text{s}}$-band used in NIRS0S.
This results in a sample of 439 galaxies, 336 from S$^4$G and 103 from NIRS0S. We exclude 111 galaxies for which the surface brightness profile analysis was unreliable, due to bright foreground stars in or near the disc, or when there are large gradients in the image. The final sample has 328 galaxies, 248 from S$^4$G and 80 from NIRS0S. The histogram in Figure \ref{sample_histo} displays the final distribution of the Hubble types, and absolute $B$-band magnitudes \citep{paturel2003}. The general properties of the galaxies are listed in appendix Table \ref{app:a}.
\section{Analysis of the surface brightness profiles}
\label{analysis-methods}
\subsection{Initial data processing and analysis}
\label{data-processing}
The data processing and analysis of the S$^4$G images consists of four pipelines\footnote{The images and many data products are publicly available at the IRSA website. \url{http://irsa.ipac.caltech.edu/data/SPITZER/S4G/} } \citep{sheth2010} through which all the galaxies in the survey are processed. The first pipeline (P1) creates the mosaicked science ready images. The second pipeline (P2) uses SExtractor \citep{bertin1996} to create masks for the foreground stars and image artefacts. These masks were also visually inspected and edited if necessary. The third pipeline (P3) performs automated photometry. The basic parameters of the S$^4$G galaxies in our sample (centre, orientation, ellipticity, background level) were obtained from pipeline four performed on the 3.6 $\mu m$ images (P4, Salo et al. in preparation), which produces two dimensional multi-component decompositions of the galaxies. In P4 the background sky level is measured in 10-20 locations outside the galaxy and a mean of the medians is calculated. The standard deviation, $\sigma_{sky}$, of the sky measurements is also calculated. P4 also gives the position angles and inclinations of the discs. We check visually that the orientation parameters used for the disc do not erroneously correspond to large spherical halos. The measurement regions are mostly outside possible outer rings, in which region the discs are likely to be close to axisymmetric. In cases where the galaxy seems to end at an outer ring, the orientation and ellipticity of the outer disc are considered uncertain ($\sim 15 \%$ of the S$^4$G galaxies in the sample, e.g. NGC2859). However, even in these cases the measurements were included in further analysis. Detailed morphological classifications of the S$^4$G galaxies are taken from \citet{buta2010} and Buta et al. (in preparation), which are used to identify the structural components.
For the NIRS0S galaxies we use the cleaned and flux calibrated $K_{\text{s}}$-band images \citep{laurikainen2011}. The values of $\sigma_{sky}$, galaxy centres, position angles, disc inclinations, and the morphological classification are from \citet{laurikainen2011}.
\subsection{Radial surface brightness profiles}
\label{fitting}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\linewidth]{aperture_correction}
\caption{In the \textit{left panel} the aperture magnitudes inside the isophote defined by $\mu_{3.6 \, \mu m} = 22.5$ mag arcsec$^{-2}$ are shown for the 93 galaxies that are in both the S$^4$G and NIRS0S surveys. The median difference in the magnitudes ($\mu^{\text{AB}}_{3.6}$-$K^{\text{Vega}}_{\text{S}}$) is 2.566, which can be used to convert the NIRS0S magnitudes and surface brightnesses to the system used in S$^4$G, as shown by the example in the \textit{right panel}.}
\label{aperture_correction}
\end{center}
\end{figure*}
The break features in the galactic discs are studied using azimuthally averaged surface brightness profiles. The profiles are created by running the IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} task \emph{ellipse}, with the centre, ellipticity, and position angle fixed to the values for the outer discs. For S$^4$G galaxies the surface brightness profiles were converted to AB-system magnitudes in the following manner:
\begin{equation}
\mu = -2.5 \, \log_{10} \left[ I_{\nu} (\text{MJy str}^{-1}) \right] + 20.472.
\end{equation}
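The value 20.472 is the zero-point of the flux-to-magnitude conversion; it follows from the definition of the AB magnitude scale \citep{oke1974}, $m_{\text{AB}} = -2.5\,\log_{10}\left(f_{\nu}/3631\,\text{Jy}\right)$, combined with the conversion $1\,\text{sr} = 4.2545 \times 10^{10}\,\text{arcsec}^2$, so that for $I_{\nu}$ in MJy\,sr$^{-1}$ the constant term is
\begin{equation}
-2.5 \, \log_{10} \left( \frac{10^{6}\,\text{Jy}\,\text{sr}^{-1}}{3631\,\text{Jy} \times 4.2545 \times 10^{10}\,\text{arcsec}^{2}\,\text{sr}^{-1}} \right) \approx 20.472 .
\end{equation}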
Aperture correction was not applied to the surface brightness profiles of S$^4$G galaxies (see the IRAC handbook\footnote{ \url{http://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/30/} }, and \citealt{munozmateos2013}) because in our tests the effect was found to be insignificant compared to other error sources. From the flux calibrated NIRS0S images the surface brightness can be calculated as:
\begin{equation}
\mu = -2.5 \, \log_{10} \left( ADU \right).
\end{equation}
The flux calibration of the NIRS0S images is based on the 2MASS Vega system. In order to convert the NIRS0S surface brightnesses to the AB-system and to evaluate colour differences we took all 93 galaxies common in NIRS0S and S$^4$G. We calculated the total magnitudes of the galaxies inside the elliptical apertures defined by the surface brightness level of $\mu_{3.6 \, \mu m} = 22.5$ mag arcsec$^{-2}$ in the S$^4$G images, for both surveys. The median difference in the aperture magnitudes between NIRS0S and S$^4$G was found to be 2.566. Although the magnitude difference depends also on the 3.6-2.2 $\mu m$ colour of the galaxy, the relation was found to be very linear with colour (see Fig. \ref{aperture_correction} left panel). After adding the conversion factor to the NIRS0S surface brightnesses, the profiles derived from the images of both surveys are very similar (see Fig. \ref{aperture_correction} right panel), except that the S$^4$G images are on average about two magnitudes deeper (see Fig. 4. in \citealt{laurikainen2011}). All surface brightness profiles and magnitudes of the NIRS0S galaxies are converted with this value for the rest of this study.
The sky measurement uncertainty starts to affect the surface brightness profiles at large radii. The region of the surface brightness profile that falls below the level defined by the standard deviation of the sky ($\sigma_{sky}$) was excluded from the analysis. In cases where the outermost region of the profile was obviously uncertain, the outer limit was placed before the $\sigma_{sky}$ limit was reached. The radial surface brightness profiles can typically be followed out to a surface brightness of $\mu = 26.4 \pm 1.2$ mag arcsec$^{-2}$ for the S$^4$G galaxies, and $\mu = 24.7 \pm 0.6$ mag arcsec$^{-2}$ for the NIRS0S galaxies, in both cases expressed in 3.6 $\mu m$ AB magnitudes.
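In practice this truncation criterion amounts to converting $\sigma_{sky}$ to a limiting surface brightness with the same zero-point; a one-line check, assuming $\sigma_{sky}$ is in the same intensity units as the profile:
\begin{verbatim}
def mu_limit(sigma_sky, zp=20.472):
    """Surface brightness at which the profile meets the 1-sigma sky level."""
    return -2.5 * np.log10(sigma_sky) + zp
\end{verbatim}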
\subsection{Profile functions and the fitting procedure}
\label{proffunc}
To derive the properties of the discs we adopted the methods used in the previous studies of disc breaks (\citealt{erwin2005}; \citealt{pohlen2006}; \citealt{erwin2008}; \citealt{munozmateos2013}). We ignore the inner regions ($ r \, < \, r_{in}$) of the radial surface brightness profiles, where the bulges/bars/inner rings dominate. To recover the properties of the discs, different model functions, depending on the profile shape, are fitted to the radial surface brightness profiles.
The model functions we use to derive the parameters of the discs are the exponential function \citep{freeman1970}, the double-exponential function introduced by \citet{erwin2008}, and the generalization of this introduced by \citet{comeron2012}. The exponential function is
\begin{equation}
I(r) = I_0 \exp{\left(- \frac{r}{r_s} \right)},
\label{function_1}
\end{equation}
where $I_0$ is the surface brightness in the centre of the disc, and $r_s$, the scalelength of the disc. The double-exponential function is the following
\begin{equation}
I(r) = S \, I_0 \, e^{-\frac{r}{h_i}} \, \left[ 1 + e^{\alpha \left( r - R_{br} \right)} \right]^{\frac{1}{\alpha} \left( \frac{1}{h_i} - \frac{1}{h_o} \right) }.
\label{function_2}
\end{equation}
Now $I_0$ is the central surface brightness of the inner exponential section, $h_i$ and $h_o$ are the scalelengths of the inner and outer discs, $R_{br}$ is the break radius, and $\alpha$ is a parameter defining how smooth the break between the inner and outer slopes is. The parameter $S$ is a scaling factor
\begin{equation}
S^{-1}= \left( 1 + e^{- \alpha R_{br}} \right)^{\frac{1}{\alpha} \left(\frac{1}{h_i} - \frac{1}{h_o} \right) }.
\end{equation}
Some of the galaxies in our sample have two breaks (i.e. three exponential sections), and for those we use a generalization of function \ref{function_2}
\begin{equation}
I(r)=S \, I_0 \, e^{- \frac{r}{h_1}} \prod_{i=2}^{i=n} \left\lbrace \left[ 1 + e^{\alpha_{i-1,i}(r-r_{i-1,i})} \right]^{\frac{1}{\alpha_{i-1,i}} \left( \frac{1}{h_{i-1}}-\frac{1}{h_i} \right) } \right\rbrace ,
\label{function_3}
\end{equation}
where the scaling factor $S$ is
\begin{equation}
S^{-1} = \prod_{i=2}^{i=n} \left\lbrace \left[ 1 + e^{-\alpha_{i-1,i} \, r_{i-1,i}} \right]^{\frac{1}{\alpha_{i-1,i}} \left( \frac{1}{h_{i-1}} -\frac{1}{h_i} \right)} \right\rbrace .
\end{equation}
Here the parameter $n \, (\ge 2)$ defines the number of exponential sections in the disc, $h_{i-1}$ and $h_{i}$ are the exponential scalelengths inside and outside of the break, $r_{i-1,i}$ is the break radius between the exponential sections with slopes $h_{i-1}$ and $h_i$, and $\alpha_{i-1,i}$ controls the sharpness of that break. The parameter $I_0$ is again the central surface brightness of the innermost section.
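For reference, the profile functions above can be transcribed compactly in Python; the generalised form below reduces to the double-exponential for a single break and to the pure exponential for an empty break list. This is our own sketch (written in log space to avoid overflow of the exponential terms at large radii), not the code actually used in the fits:
\begin{verbatim}
import numpy as np

def broken_exp(r, i0, h, r_br, alpha=0.5):
    """Disc profile with n = len(h) exponential sections and
    len(r_br) = n - 1 breaks; h holds the scalelengths, r_br the
    break radii, i0 the central surface brightness (intensity units)."""
    ln_i = np.log(i0) - r / h[0]
    for hi, ho, rb in zip(h[:-1], h[1:], r_br):
        expo = (1.0 / hi - 1.0 / ho) / alpha
        # log of [ (1 + e^{alpha (r - rb)}) / (1 + e^{-alpha rb}) ]^expo,
        # i.e. the bracket term together with its share of the scaling S
        ln_i += expo * (np.logaddexp(0.0, alpha * (r - rb))
                        - np.logaddexp(0.0, -alpha * rb))
    return np.exp(ln_i)
\end{verbatim}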
These functions were fitted to the surface brightness profiles using the IDL routine \emph{mpcurvefit} \citep{markwardt2009}, which uses a non-linear least-squares method. The free parameters in the fits are the disc central surface brightnesses, the scalelengths, and the break radii. The parameter describing the smoothness of the break is kept fixed at $\alpha=0.5$, a typical value found for this parameter (\citealt{erwin2008}; see also \citealt{comeron2012}).
The fitting was done in the following steps:
\begin{enumerate}
\item The inner limit of the fit range, $r_{in}$, is visually identified from the images and the radial surface brightness profiles. The region inside $r_{in}$ is not included in the fit. Figure \ref{prototypes} illustrates how the inner radius is selected for three galaxies. The inner limits are drawn with vertical dashed lines, while the visual estimates of bar lengths (if a bar is present) are indicated with vertical triple-dot dashed lines.
\item The outer limit of the fit range, $r_{out}$, is defined as the point after which the sky measurement uncertainty starts to dominate. In some cases it was selected visually when it was obvious that there was a contamination in the image, as discussed in section \ref{fitting}.
\item The user identifies the number of sections with exponential slopes. That also determines which model function needs to be used (function \ref{function_1}, \ref{function_2} or \ref{function_3}).
\item In the case of a single exponential profile, function \ref{function_1} is fitted to the range defined by [$r_{in}$, $r_{out}$].
\item In the case of a more complicated profile, the initial values of the fit are first estimated by having the user define segments corresponding to the different exponential slopes in the profile. A single exponential is then fitted independently to each of these segments. The resulting central surface brightnesses and disc scalelengths are used as initial values for function \ref{function_2} or \ref{function_3}, which is fitted to the range defined by [$r_{in}$, $r_{out}$].
\end{enumerate}
Possibly the largest source of uncertainty in the disc parameters is the selection of the galaxy region where the different functions are fitted (see also \citealt{munozmateos2013}). We follow a Monte Carlo approach to estimate how the fit-region selection affects the parameters of the discs. In practice we vary the fit-region delimiters ($r_{in}$, $r_{out}$) within a range centred on the original user-selected value, with a width of $0.15 \times R_{24}$, where $R_{24}$ is the radius of the $\mu_{3.6 \, \mu m} = 24$ mag arcsec$^{-2}$ isophote; this level was selected because it can be reached also with the shallower NIRS0S data. We draw 500 new values for the region delimiters from a uniform distribution, automatically refit the surface brightness profile with these delimiter values, and calculate the standard deviation $\sigma$ of the resulting disc parameters. The type of the profile (single exponential, one break, or two breaks) and the initial values for the fit are kept the same as in the user-selected fit. The parameters from the fits and the calculated uncertainties ($\pm 1 \sigma$) are given in appendix Table \ref{app:b}. The reported uncertainties take into account the possibility of including some of the bulge or bar region, which we try to avoid. In addition, the outer limit of the fit region can extend further out than allowed by the sky-uncertainty criterion described above, so the possible contribution of the background sky variation is also included in the uncertainties.
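Schematically, the region-variation Monte Carlo can be written as follows, with \texttt{scipy} standing in for the IDL \emph{mpcurvefit} and \texttt{broken\_exp} taken from the sketch above; the one-break case is shown, and all names are illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng()

def mc_fit(r, i_r, r_in, r_out, r24, p0, n_draws=500, width=0.15):
    """Refit while drawing the region delimiters uniformly within a
    window of total width 0.15*R_24 centred on the selected values.
    p0 holds the initial guesses (i0, h1, h2, rb) from the segment fits."""
    params = []
    for _ in range(n_draws):
        lo = r_in + width * r24 * (rng.random() - 0.5)
        hi = r_out + width * r24 * (rng.random() - 0.5)
        m = (r >= lo) & (r <= hi)
        f = lambda rr, i0, h1, h2, rb: broken_exp(rr, i0, [h1, h2], [rb])
        try:
            p, _ = curve_fit(f, r[m], i_r[m], p0=p0)
            params.append(p)
        except RuntimeError:      # fit did not converge; skip this draw
            continue
    return np.std(params, axis=0)  # 1-sigma spread of the disc parameters
\end{verbatim}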
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{prototypes}
\caption{Examples of the different disc types. In the radial surface brightness profiles in the right panels the \textit{horizontal dashed line} marks the sky uncertainty limit, and the \textit{red dashed line} over the surface brightness profile is the fitted model using the functions explained in the text. The \textit{vertical dashed lines}, and the \textit{dashed ellipses in the images on the left}, mark the inner and outer radii of the fitted area. The break radius is indicated in all panels with a \textit{dot-dashed line}. Finally, the visual estimate of the bar radius (if present) is shown in the surface brightness profiles with a \textit{vertical triple-dot dashed line}.}
\label{prototypes}
\end{center}
\end{figure*}
\subsection{Classification of the profiles}
The profiles were classified using the main break types I, II, or III, and in the case of multiple breaks, with a combination of types II and III (e.g. \citealt{pohlen2006}; \citealt{erwin2008}), based on the behaviour of the disc scalelengths. Type I is the classical single exponential disc. Type II is a downbending dual-exponential disc, where the outer disc has a shorter scalelength than the inner disc. Type III is an upbending dual-exponential disc, where the outer disc has a longer scalelength than the inner disc. Examples of the three disc types are shown in Figure \ref{prototypes}.
In previous studies, types II and III have also been divided into several subtypes. For type II discs this is based on the location of the break in relation to the bar length: when the break is at, or inside, the bar radius the profile is called II.i, and when the break is beyond the bar radius it is called II.o \citep{erwin2005}. We use the II.i class to separate these profiles from pure type I and II.o profiles, both of which are intrinsically different from type II.i. We classify the type II.o profiles as type II without separating the subclasses II.o-OLR and II.o-CT. We take a similar approach with the type III profiles and do not use the subclasses III-d and III-s. The profile types for each galaxy are given in appendix Table \ref{app:b}.
\section{Environmental analysis}
\label{env-ana-methods}
For the environmental analysis we use the 2MASS Extended Source Catalog (XSC, \citealt{jarrett2000}) as a basis, due to its completeness in the local Universe. We used objects with apparent $K_{\text{s}}$-band isophotal magnitudes measured in elliptical apertures defined at the $K_{\text{s}} = 20$ mag arcsec$^{-2}$ isophote (k\_m\_k20fe), and with semi-major axis lengths corresponding to this isophote (r\_k20fe). All the angular distances between galaxies used in the environmental analysis are calculated from the XSC coordinates.
\subsection{Redshift data}
Redshifts for the XSC objects come from the 2MASS Redshift Survey (RSC, \citealt{huchra2012}), and are 97.6\% complete for the XSC galaxies up to $K_{\text{s}} \le 11.75$ mag (44,599 galaxies). The data release also includes redshifts for 196,963 XSC galaxies that are beyond the main catalogue limits of the redshift survey (i.e. $K_{\text{s}} > 11.75$ mag, $E(B-V)>1$ mag, or near the Galactic plane). The RSC matches the previous large redshift surveys with the XSC objects, and therefore no additional cross-matching between the XSC and other databases is necessary.
\subsection{Quantifying the environments}
\label{quantifying}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\linewidth]{rs_comp_areas}
\caption{We plot (\textit{left panel}) the redshift completeness in the selected areas around the primary galaxies in 0.5 magnitude bins, and (\textit{right panel}) the number of galaxies in the same magnitude bins.}
\label{areas_rs}
\end{center}
\end{figure*}
We have evaluated galactic environments using two complementary parameters, the projected surface number density of the neighbouring galaxies (e.g. \citealt{dressler1980}; \citealt{cappellari2011}), and the Dahari parameter \citep{dahari1984}, which estimates the tidal interaction strength between galaxies. These parameters were calculated using galaxies found in an area defined by a projected radius of 1 Mpc at the distance of the primary galaxy, and within a recession velocity interval of $\pm 1000$ km s$^{-1}$ around the primary galaxy. In galaxy-poor environments the radius of the search area was increased in 1 Mpc steps until at least three companion galaxies were found in the area, while still keeping the velocity interval at $\pm 1000$ km s$^{-1}$. This results in 229 galaxies with a 1 Mpc search radius, 45 with 2 Mpc, 16 with 3 Mpc, and 6 with 4 Mpc.
The redshift completeness of the environments was estimated by restricting to the XSC galaxies in the search areas. Then, in 0.5 mag $K_{\text{s}}$-band bins, the number of galaxies with an available redshift was divided by the number of all XSC galaxies within the bin. The resulting histograms of the redshift completeness in these bins, as well as the number of XSC galaxies, are shown in Figure \ref{areas_rs}. We confirm that the redshift data are essentially complete for the XSC galaxies up to a $K_{\text{s}}$-band magnitude of $\sim 12$, as also stated by \citet{huchra2012} for the RSC as a whole.
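The completeness histograms amount to a binned ratio of two magnitude distributions; a minimal sketch, with illustrative bin edges and array names of our own choosing:
\begin{verbatim}
import numpy as np

def completeness(k_all, k_with_z, bins=np.arange(6.0, 14.5, 0.5)):
    """Fraction of XSC galaxies with a redshift, in 0.5 mag K_s bins."""
    n_all, _ = np.histogram(k_all, bins=bins)
    n_z, _ = np.histogram(k_with_z, bins=bins)
    with np.errstate(divide="ignore", invalid="ignore"):
        return n_z / n_all        # NaN where a bin is empty
\end{verbatim}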
The projected surface density of galaxies, using the previously described restriction for the neighbouring galaxies, was calculated with the formula:
\begin{equation}
\Sigma_3^A = \log_{10} \left( \frac{N_{gal}}{\pi R_3^2} \right),
\label{surf_dens}
\end{equation}
where $N_{gal}=3$ and $R_3$ is the projected distance to the third nearest neighbour galaxy, given in Mpc. We use the logarithm because the parameter varies by orders of magnitude among the galaxies. The surface density estimated at the radius of the third nearest neighbouring galaxy was selected because it probes the density in small galaxy groups. Estimators using distances to the tenth nearest galaxy work better for surface densities in larger groups or in galaxy clusters, as discussed by \citet{cappellari2011} (e.g. 24 of our sample galaxies are members of Virgo).
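In code, the density estimate is a one-liner once the projected neighbour distances are known; a sketch, assuming the input holds the distances (in Mpc) of the companions within the search volume:
\begin{verbatim}
import numpy as np

def sigma3(proj_dist_mpc):
    """Projected surface density out to the third nearest neighbour."""
    r3 = np.sort(np.asarray(proj_dist_mpc))[2]
    return np.log10(3.0 / (np.pi * r3**2))
\end{verbatim}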
We use the Dahari parameter ($Q$, \citealt{dahari1984}) to estimate the gravitational interaction strength. It estimates the tidal force produced by the companion galaxy divided by the internal binding force of the primary galaxy. The Dahari parameter is defined as
\begin{equation}
Q = \frac{\text{F}_{\text{tidal}}}{\text{F}_{\text{binding}}}= \left( \frac{M_c \, D_p}{S^3} \right) \left( \frac{M_p}{D_p^2} \right)^{-1} = \left( \frac{D_c}{D_p} \right)^{\gamma} \, \left( \frac{D_p}{S} \right)^{3},
\label{dahari_prop}
\end{equation}
where $D_p$ and $D_c$ are the diameters, and $M_p$ and $M_c$ the masses, of the primary and companion galaxies, and $S$ is their projected separation. The value of $\gamma$, the exponent relating galaxy mass to diameter ($M \propto D^{\gamma}$), is not well known. Here we adopt $\gamma = 1.5$ (\citealt{rubin1982}; \citealt{dahari1984}; \citealt{verley2007}). With this choice the Dahari parameter reduces to
\begin{equation}
Q_i \equiv \frac{\left( D_p \, D_c \right)^{1.5}}{S^3}.
\label{dahari_eq}
\end{equation}
The galaxy diameters were taken from the XSC as twice the semi-major axis length of the ellipse at the $K_{\text{s}}=20$ mag arcsec$^{-2}$ isophote (r\_k20fe). Following \citet{verley2007} we take $Q$ as the logarithm of the sum of all $Q_i$,
\begin{equation}
Q = \log_{10} \left( \sum_{i=1}^n Q_i \right),
\label{dahari_log}
\end{equation}
where $n$ is the number of galaxies found in the search area within a recession velocity of $\pm 1000$ km s$^{-1}$ of the primary galaxy. The environmental parameters for the individual galaxies are listed in appendix Table \ref{app:a}.
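A corresponding sketch for the tidal parameter, assuming the primary diameter, the companion diameters, and the projected separations are given in consistent (angular or physical) units:
\begin{verbatim}
import numpy as np

def dahari_q(d_p, d_c, s):
    """Summed Dahari parameter Q for one primary galaxy of diameter d_p,
    given arrays of companion diameters d_c and projected separations s."""
    q_i = (d_p * np.asarray(d_c))**1.5 / np.asarray(s)**3
    return np.log10(q_i.sum())
\end{verbatim}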
\section{Properties of the breaks and discs}
\label{results}
\subsection{Overall statistics of the breaks}
\label{overall_stat}
\begin{table}
\centering
\caption{Statistics of the break types in the surface brightness profiles, compared with those from \citet{gutierrez2011}. We also give the fractions of the profile types separately for barred and non-barred galaxies, as well as the bar fraction for each profile type. The uncertainties are calculated using binomial statistics and denote the $\pm 1 \sigma$ uncertainties. The overall percentages of the profile types sum to more than 100\% because seven galaxies have two breaks in their discs.}
\begin{tabular}{@{}lrr|r}
\multicolumn{3}{l}{All galaxies (328)} & \citet{gutierrez2011} (183) \\
\hline
Type & Fraction & N & Fraction \\
\hline
Type I & $32 \% \pm 3 \%$ & 104 & $21 \% \pm 3 \%$ \\
Type II.i & $7 \% \pm 2 \%$ & 24 & \hspace{-5pt} \rdelim\{{2}{15.3mm}[{$50 \% \pm 4 \%$}]\\
Type II & $42 \% \pm 3 \%$ & 138 & \\
Type III & $21 \% \pm 2 \%$ & 69 & $38 \% \pm 4 \%$ \\
\hline \\
\multicolumn{4}{l}{Barred galaxies (204)} \\
\hline
Type & Fraction & N & Bar fraction \\
\hline
Type I & $25 \% \pm 3 \%$ & 51 & $49 \% \pm 5 \%$ \\
Type II.i & $12 \% \pm 2 \%$ & 24 & $100 \% \pm 0 \%$\\
Type II & $48 \% \pm 4 \%$ & 100 & $72 \% \pm 4 \%$ \\
Type III & $16 \% \pm 3 \%$ & 33 & $48 \% \pm 6 \%$ \\
\hline \\
\multicolumn{4}{l}{Non-barred galaxies (124)} \\
\hline
Type & Fraction & N & \\
\hline
Type I & $42 \% \pm 5 \%$ & 53 & \\
Type II.i & $0 \% \pm 0 \%$ & 0 & \\
Type II & $30 \% \pm 4 \%$ & 38 & \\
Type III & $28 \% \pm 4 \%$ & 36 & \\
\hline
\end{tabular}
\label{overall-stat}
\end{table}
In our sample, discs with a single exponential slope appear in approximately one third of the galaxies ($32 \% \pm 3 \%$, see Table \ref{overall-stat}). The most common disc type is type II ($42 \% \pm 3\%$). The type II.i profiles, in which the break is seen inside or at the bar radius, are rare ($7 \% \pm 2 \%$). The type III profiles are the least common of the main profile types ($21 \% \pm 2\%$). Seven of the galaxies have two breaks (II+III or III+II). These galaxies were counted twice, which explains why the total percentage exceeds 100\% in Table \ref{overall-stat}.
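As a check on the quoted uncertainties, the binomial error on a fraction $f$ of $N$ galaxies is $\Delta f = \sqrt{f(1-f)/N}$ (see also the caption of Fig. \ref{break_distribution_simple}); for example, for the type I profiles
\begin{equation*}
f = \frac{104}{328} \approx 0.317, \qquad \Delta f = \sqrt{\frac{0.317 \times 0.683}{328}} \approx 0.026,
\end{equation*}
reproducing the quoted $32 \% \pm 3 \%$.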
In Table \ref{overall-stat} we compare the fractions of the main profile types with those obtained by \citet{gutierrez2011} at optical wavelengths. Clearly, in our sample the fraction of type I profiles is higher, and the fraction of type III profiles is lower. Type II.i and III profiles show no connection with Hubble type (Fig. \ref{break_distribution_simple}, upper panel). Type I profiles are more common in early-type disc galaxies ($T < -1$), and again show a slight increase among the late-type disc galaxies ($T > 5$). Type II profiles are found in about half of the galaxies in each bin, except in the earliest types ($T < -1$).
The fraction of barred galaxies in our sample, $\sim 62 \%$, based on the morphological classifications by Buta et al. (in preparation) for S$^4$G and by \citet{laurikainen2011} for NIRS0S, is consistent with the fraction typically found in the local Universe ($\sim 60-70 \%$, \citealt{eskridge2000}; \citealt{whyte2002}; \citealt{laurikainen2004}; \citealt{menendezdelmestre2007}; \citealt{marinova2007}). When the bar fractions of the different profile types are examined (Table \ref{overall-stat}), we see that type I and III profiles are found equally often in barred and non-barred galaxies. However, the type II profiles are more common among the barred galaxies (bar fraction $72 \% \pm 4 \%$). By construction of our classification criteria, all type II.i profiles have bars.
In the lower panels of Figure \ref{break_distribution_simple} we show the fractions of the main profile types (I, II, and III) in barred and non-barred galaxies, as a function of Hubble type. It appears that all late-type galaxies in our sample with $T \ge 6$ are barred. Types I and III (Fig. \ref{break_distribution_simple}, lower left and right panels, respectively) show similar behaviour in barred and non-barred galaxies. Type II profiles (Fig. \ref{break_distribution_simple}, lower middle panel) are more common in barred galaxies, especially in Hubble types $T < 3$. In the non-barred galaxies they are rare among the early types, with the fraction gradually rising for Hubble types $T > 0$.
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\linewidth]{break_distribution_total}
\caption{The distribution of the disc-profile types among morphological types. Error bars denote $\pm 1 \sigma$ errors, calculated with the binomial formula $\Delta f = \sqrt{f(1-f)/N}$, where $N$ is the total number of galaxies in the bin. Small offsets are applied to the Hubble type values in the plots for clarity.}
\label{break_distribution_simple}
\end{center}
\end{figure*}
\subsection{Connection with the structural components}
\label{connections}
\begin{figure*}
\begin{center}
\includegraphics[width=0.92\linewidth]{rings_breaks_withnir}
\caption{The radius of the feature closest to each break is shown against the break radius. Feature dimensions are from \citet{comeron2014} and \citet{laurikainen2011}. Type II breaks are shown on the \textit{left}, and type III breaks on the \textit{right}. The \textit{solid lines} show where the break radius equals the feature radius. The \textit{dashed line} in the left panel shows a simple linear fit to the inner rings, with a slope of 2. The error bars in the upper left corners of the panels show the mean uncertainties of the break radius; they are not large enough to significantly affect the similarity between the break radius and the feature dimension.}
\label{rings_breaks}
\includegraphics[width=0.92\linewidth]{rings_lenses_breaks_histo_nir}
\caption{The distribution of type II breaks among morphological types, and the connection with lenses, outer rings, and spiral structures. The galaxies in which no structure could be associated with the break are also shown. In the \textit{left panel} all type II breaks are shown, and in the \textit{right panel} only type II profiles found in barred galaxies are shown.}
\label{rings_breaks_histo}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{prototypes_type2}
\caption{Same as figure \ref{prototypes}, but for type II examples only.}
\label{prototypes_type2}
\end{center}
\end{figure*}
During the fitting of the surface brightness profiles the galaxies were visually inspected, and the connection of the break radii with the structural components was studied. In the identification of the structures we rely on the classifications given in \citet{laurikainen2011} and Buta et al. (in preparation). The type II.i profiles are connected with bars, and we do not discuss this type further below.
\subsubsection{Type II breaks}
\label{type2connections}
We found three distinct groups of type II profiles that together cover $\sim 94 \%$ of that type: 1) breaks connected with inner and outer lenses, 2) breaks connected with outer rings or outer pseudorings, and 3) breaks connected with the edges of bright star formation regions in the spirals or with the apparent outer radius of the spirals. The last group is based on visual inspection of the images, with the break radii drawn over the galaxies, whereas the dimensions of the rings and lenses were taken from \citet{comeron2014} and \citet{laurikainen2011}. In barred galaxies only those rings and lenses with radii equal to or larger than the bar radius were considered, grouped into inner and outer rings/lenses, respectively. Nuclear rings/lenses were omitted.
Some type II breaks in the early-type disc galaxies appear to be associated with inner or outer lenses; this was found in $\sim 8 \%$ (11 cases) of all type II profiles. In these galaxies the break radii coincide with the lens radii, as can be seen in Figure \ref{rings_breaks}, left panel. This connection is not surprising, since lenses are defined as having a flat luminosity distribution with a fairly sharp outer edge, which in turn appears as a type II break in the radial surface brightness profile.
We found that for the galaxies with type II profiles and an outer ring/pseudoring, these structures are without exception related to the breaks, with the ring radii coinciding with the break radii (Fig. \ref{rings_breaks}, left panel, and Fig. \ref{prototypes_type2}, upper panels, for an example of such a galaxy). It appears that $\sim 48 \%$ (66 cases) of the type II profiles are associated either with an outer ring, an outer pseudoring, or an outer ringlens. When the profile types are studied as a function of the morphological type (Fig. \ref{rings_breaks_histo}), it is obvious that most of the type II profiles in Hubble types $ -2 \lesssim T \lesssim 2$ are related to outer rings (see also Fig. \ref{break_distribution_simple}, upper panel). In this Hubble type range type II profiles are particularly common in barred galaxies (Fig. \ref{break_distribution_simple}, lower middle panel).
In later type galaxies ($T \gtrsim 3$) the breaks are largely connected with the brightest star formation regions in the outer spiral arms (19 cases, $\sim 14\%$ of type II profiles, see Fig. \ref{prototypes_type2}, middle panels), or with the apparent outermost radii of the spirals (34 cases, $\sim 24\%$ of type II profiles, Fig. \ref{prototypes_type2}, lower panels). In the latter case, the observed outermost part of the surface brightness profile is usually a featureless disc (see Fig. \ref{prototypes_type2}, lower panels). These two groups, associated with the spiral structures, form $\sim 38 \%$ of all type II breaks in our sample. These breaks are slightly less common in the barred than in the non-barred galaxies (see Fig. \ref{rings_breaks_histo}, right panel).
Only for $\sim 6 \%$ (9 cases) of type II breaks could no apparent morphological feature be connected with the break. \citet{erwin2005} proposed that some type II profiles could be caused by slight asymmetries, and indeed some of the cases in our study where no connection with a morphological feature was found could be due to this. After all, a majority of galaxies are asymmetric in the outskirts at some level (e.g. \citealt{zaritsky2013}).
In galaxies that have no outer structures, the inner rings are systematically slightly smaller than the break radii, but still correlate with them. This is shown by the grey dashed line in Fig. \ref{rings_breaks} (left panel), which is a rough linear fit to the inner rings with a slope of 2.0. That both the inner and outer rings correlate with the break radii is not unexpected if both rings are resonance structures. The outer rings are known to have a radius of around twice that of the inner rings (e.g. \citealt{kormendy1979}; \citealt{athanassoula1982}; \citealt{buta1986}; \citealt{sellwood1993}; \citealt{buta1996}; \citealt{rautiainen2000}; \citealt{comeron2014}).
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{break_surf_mag}
\caption{In the \textit{left panel} the distribution of the surface brightness at the break radius is shown for types II and III, and in the \textit{right panel} the distributions of the break radius scaled by the B-band 25 mag arcsec$^{-2}$ isophotal radius ($R_{break} / R_{25}$) are shown.}
\label{break_surf_mag}
\includegraphics[width=0.9\linewidth]{break_strenght}
\caption{The two parameters that have been used in the literature to describe the breaks: the ratio between the disc scalelengths outside and inside the break ($\log_{10} (h_o / h_i)$), and the ratio between the break radius and the scalelength of the disc inside the break ($R_{break} / h_i$). In the \emph{left panel} both type II and III profiles are shown, while in the \emph{right panel} only type II profiles are shown, with different symbols indicating the structural component with which the break is associated.}
\label{break_strength}
\end{center}
\end{figure*}
\subsubsection{Type III breaks}
We found that roughly 1/3 of all type III profiles in our sample can be directly connected with structures such as rings and lenses. However, in the majority of galaxies the type III features cannot be associated with any distinct structure. Also, independent of the component with which the type III profile is connected, the structures are 10 kpc or smaller in size, in contrast to the structures associated with type II profiles, which extend up to $\sim 30$ kpc. Type III breaks found in the outer parts of galaxies most probably have some other explanation.
\begin{figure*}
\begin{center}
\includegraphics[width=0.75\textwidth]{hin_muin}
\caption{The inner disc scalelengths against the inner disc central surface brightness (\textit{top panel}), and the outer disc central surface brightness (\textit{lower panel}). The points for type I and II.i profiles are the same in both panels. The lines show a fit to the points of the different profile types: \emph{solid line} for type I, \emph{triple-dotted dashed line} for type II.i, \emph{dashed line} for type II, and \emph{dotted line} for type III. The error bars show the mean uncertainties of the disc scalelengths and central surface brightnesses.}
\label{hin_muin}
\end{center}
\end{figure*}
\subsection{Parameters of the breaks}
\label{parameters-breaks}
The surface brightness at the break radius is one of the parameters determining the properties of the breaks. Generally, type II and III breaks are found at similar surface brightnesses, although the medians indicate that type III breaks appear at slightly lower surface brightness (see Fig. \ref{break_surf_mag}, left panel). In addition, there is no large difference in radius between type II and type III breaks, although the median $R_{break}/R_{25}$ is slightly larger for type III profiles (Fig. \ref{break_surf_mag}, right panel).
Two parameters have mainly been used in the literature to characterise the break strengths in galaxies: the ratio of the outer and inner disc scalelengths around the break ($h_o/h_i$), and the ratio of the break radius and the inner disc scalelength ($R_{break}/h_i$) (e.g. \citealt{vanderkruit1987}; \citealt{pohlen2006}; \citealt{maltby2012}; \citealt{martinnavarro2012}; \citealt{comeron2012}). The distributions of these parameters are shown in Figure \ref{break_strength} (left panel) for type II and III profiles, clearly distinguishing the two groups. It is worth noting that type II profiles have a tail of low $R_{break}/h_i$ and $h_o/h_i$ ratios, indicating that the profile is flat inside the break.
We find that the flat inner discs are associated with outer rings or the outer regions of bright star formation regions in the spiral arms (Fig. \ref{break_strength} right panel). The breaks connected with the apparent outer edges of the spiral structures are also well separated from the breaks connected with strong star formation regions.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{breaks_total_dahari}
\caption{Distribution of the Dahari parameter. In the \textit{left panel} a histogram is shown, and in the \textit{right panel} the cumulative distribution. The p-values are from a two-sided Kolmogorov--Smirnov test, indicating the probability that the two compared samples are drawn from the same parent distribution.}
\label{breaks_total_dahari}
\includegraphics[width=0.9\textwidth]{breaks_total_surf}
\caption{Same as Figure \ref{breaks_total_dahari}, but for the surface density of galaxies. The small value $P=0.02$ for the comparison between type I and II profiles indicates that they occur in statistically different environments as measured by $\Sigma_3^A$.}
\label{breaks_total_surf}
\end{center}
\end{figure*}
\subsection{Parameters of the discs}
\label{parameters-discs}
What makes the type II and III profiles deviate from the single exponential profile is still an open question. We show that the discs of type I galaxies follow the same scaling relation as the inner discs of type II and III galaxies, when the central surface brightness ($\mu_0$) is plotted against the scalelength of the disc ($h$) (Fig. \ref{hin_muin}, upper panel). This scaling relation is similar to that obtained previously for the global disc parameters of S0s and bright spirals (\citealt{dejong1996}; \citealt{graham2001}; \citealt{graham2008}; \citealt{gadotti2009}; \citealt{laurikainen2010}). The inner discs of type II profiles extend to larger scalelength values than those of type III profiles. For the disc parameters outside the break radius we see a large dispersion. This behaviour is largely related to the definition of the profile types, and particularly for type II profiles it might be a manifestation of several different origins of the break. Type II.i profiles are fairly similar to pure type I profiles, but have on average higher extrapolated central surface brightnesses. We also compared the values of $\mu_0$ and $h$ of the inner and outer discs individually for barred and non-barred galaxies of the main types, but found no differences. It is also worth noting that the estimated uncertainties are not large enough to affect the scaling relations found.
\section{Environmental properties}
\label{env_effects}
We found that the distributions of the environmental parameters, namely the surface density of the galaxies ($\Sigma_3^A$) and the Dahari parameter ($Q$) (Figs. \ref{breaks_total_dahari} and \ref{breaks_total_surf}), are fairly similar for all three main disc profile types. The statistical tests (Kolmogorov--Smirnov $P$ and $D$) for both parameters are given in Table \ref{env-overall-stat}, where the mean values of the parameters for the profile types are also given. There is some hint in the parameter $\Sigma_3^A$ that type I profiles appear in denser galaxy environments than types II and III. This difference is statistically significant when type I and II profiles are compared (see Fig. \ref{breaks_total_surf} and Table \ref{env-overall-stat}).
Next we studied possible connections between the environmental properties and the various parameters describing the discs and breaks. Evidence that the environment affects the surface brightness profiles was found in the correlation of the inner and outer disc scalelengths ($h_i$ and $h_o$) with the Dahari parameter. This correlation is statistically significant for the inner and outer discs of type III profiles (Fig. \ref{dahari_outer}, right upper and lower panels; Spearman's rank correlation test $P=0.0002$ and $P=0.0019$, respectively). The correlation is statistically significant also when $h$ is plotted against the Dahari parameter for type I profiles ($P=0.026$), although the scatter is very large. When this correlation is re-plotted separating the early-type galaxies ($T \le 1.5$, Fig. \ref{dahari_outer}, lower left panel), the scatter is significantly reduced ($P=0.003$). It is worth noting that no correlation is found for the disc or break parameters when using the surface density of the galaxies ($\Sigma_3^A$).
Our sample includes some visually identified interacting galaxies; examples are NGC1097 (type II, $Q = -0.67$, $\Sigma_3^A = 0.72$), NGC0772 (type III, $Q = -1.06$, $\Sigma_3^A = 0.41$), and NGC5427 (type II, $Q = -0.52$, $\Sigma_3^A = 0.20$). Our sample also includes NGC3893 (Kar 302 A), one of the M51-type galaxy pairs studied by \citet{laurikainen2001}. This galaxy has a type I profile, and both of its environmental parameters are above average ($Q = -1.39$, $\Sigma_3^A = 1.80$).
Previously, \citet{pohlen2006} counted the number of neighbouring galaxies from SDSS within a 1 Mpc projected radius. To select likely companions they applied criteria on the recession velocity difference to the target galaxy and on the absolute magnitude ($| \Delta v | < 350$ km s$^{-1}$ and $M_{\text{r'}}< -16$ mag, respectively). They did not find a connection between the environment and the profile types. More recently, \citet{maltby2012} compared the outer disc scalelengths ($h_{o}$) and the break strengths ($\log_{10} \, (h_{o} / h_{i}) $) between field and cluster spiral galaxies, at redshifts ($z_{phot} > 0.055$) higher than those in our sample ($z_{phot} \lesssim 0.020$). They focused only on breaks that appeared in the surface brightness profiles at $24.0 < \mu < 26.5$ mag arcsec$^{-2}$ in the V-band, corresponding roughly to $22.5 < \mu < 25.0$ mag arcsec$^{-2}$ at 3.6 $\mu m$. Compared to our Figure \ref{break_surf_mag} this means that they mostly missed the type II and III breaks that appear at surface brightnesses $\mu_{3.6 \mu m} < 22.5$ mag arcsec$^{-2}$. They did not find any differences in the profile types between the field and cluster galaxies.
\citet{erwin2012} and \citet{roediger2012} have argued that S0 galaxies in the Virgo cluster do not show any type II breaks at all. Our sample includes 24 galaxies that are in the Virgo cluster catalogue of \citet{binggeli1985}, of which 11 are S0s. Three of these galaxies have a type II profile (see Figure \ref{4596} for one example), and one galaxy has an outer type II and an inner type III profile. The remaining galaxies comprise four type I and three type III profiles. Clearly, even from our small sample we can say that type II profiles are not completely absent among the Virgo cluster S0 galaxies.
The environmental parameters were also examined separately for the main profile types in barred and non-barred galaxies, but no differences were found. The estimated uncertainties described in Section \ref{proffunc} do not influence the correlations found. Type II.i profiles are rare and were not included in the environmental analysis.
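The two-sample comparisons and rank correlations used here are standard; a minimal, self-contained sketch with synthetic placeholder data (drawn, purely for illustration, from the means and dispersions of Table \ref{env-overall-stat}, not the actual measurements):
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp, spearmanr

rng = np.random.default_rng(0)
q_type1 = rng.normal(-3.16, 1.42, 104)   # placeholder Q values, type I
q_type2 = rng.normal(-3.53, 1.37, 138)   # placeholder Q values, type II

D, P = ks_2samp(q_type1, q_type2)        # two-sided K-S test

# Spearman rank correlation, e.g. scalelength h versus Q for one type:
h_placeholder = rng.normal(3.0, 1.0, 104)
rho, P_corr = spearmanr(q_type1, h_placeholder)
\end{verbatim}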
\begin{table}
\begin{center}
\caption{The environmental parameters of the galaxies, as well as the two-tailed Kolmogorov--Smirnov test values. The parameter P gives the probability that the two compared distributions are drawn from the same parent distribution, and the parameter D is the maximum deviation between the cumulative distributions.}
\label{env-overall-stat}
\begin{tabular}{@{}l c c c}
\multicolumn{4}{l}{Dahari parameter $Q$} \\
\hline
Type & Mean $\pm \, 1 \sigma$ & & \\
\hline
Type I & $-3.16 \pm 1.42$ & & \\
Type II & $-3.53 \pm 1.37$ & & \\
Type III & $-3.26 \pm 1.44$ & & \\
\hline
\multicolumn{4}{c}{ } \\
\hline
Type & Compared to & P & D \\
\hline
Type I & Type II & 0.29 & 0.15 \\
Type I & Type III & 0.92 & 0.08 \\
Type II & Type III & 0.39 & 0.13 \\
\hline
\multicolumn{4}{c}{ } \\
\vspace*{20pt} \\
\multicolumn{4}{l}{Surface density $\Sigma_3^A$} \\
\hline
Type & Mean $\pm \, 1 \sigma$ & & \\
\hline
Type I & $0.89 \pm 0.94 $ & & \\
Type II & $0.56 \pm 0.77 $ & & \\
Type III & $0.74 \pm 0.76$ & & \\
\hline
\multicolumn{4}{c}{ } \\
\hline
Type & Compared to & P & D \\
\hline
Type I & Type II & 0.02 & 0.20 \\
Type I & Type III & 0.08 & 0.19 \\
Type II & Type III & 0.22 & 0.15 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=0.85\textwidth]{dahari_scale}
\caption{The disc scalelengths as a function of the Dahari parameter ($Q$) for type I profiles (\textit{left panels}), and the inner (\textit{upper middle and right panels}) and outer (\textit{lower middle and right panels}) disc scalelengths as a function of the Dahari parameter ($Q$) for type II and III profiles. The correlations are statistically significant for type I discs ($P=0.026$), and for the inner and outer discs of type III ($P=0.0003$ and $P=0.0019$, respectively). The lines show simple linear fits to the points. In the lower left panel the type I profiles are divided by Hubble type. The correlation for the early-type galaxies ($T< 1.5$) with a type I disc is statistically significant ($P=0.003$), and the line shows a simple linear fit to the points in this bin. The error bars show the mean uncertainties of the disc scalelengths.}
\label{dahari_outer}
\includegraphics[width=0.85\textwidth]{prototypes_4596}
\caption{Image and radial surface brightness profile of NGC4596. Note the clear type II break that is associated with the outer ring.}
\label{4596}
\end{center}
\end{figure*}
\section{Discussion}
\label{discussion}
One of the big puzzles in the structure formation of galaxies has been, and still is, why the large majority of discs in galaxies are exponential. Were the galaxies formed like that, or did this happen later through internally or externally driven processes? In this study we examine the extent to which the observed deviations from the exponential profile can be associated with observed morphological structures such as bars, rings, and lenses. Such associations can be used to assess the presence of internal dynamical processes that can redistribute mass. We also study in which way the galaxy environment is related to the profile type.
\subsection{Association to structure components of the galaxies}
\subsubsection{Type II}
\label{ii_dis_comp}
We found that almost all ($\sim 94 \%$) of type II breaks are associated with distinct morphological structures. In early-type disc galaxies ($T<3$) the breaks are typically connected with outer rings (outer ring/pseudo-ring/ringlens, $\sim 48\%$ of type II breaks) or lenses ($\sim 8\%$ of type II breaks). In later type galaxies the breaks are more likely associated with the outer edges of star formation regions, or with the apparent outer edges of the spirals (in total $\sim 38 \%$ of type II breaks). This leaves only $\sim 6 \%$ of type II breaks that could not be associated with any visible structure.
A possible connection between type II breaks and outer rings was proposed by \citet{pohlen2006}. We have systematically studied this connection, using existing ring size measurements from \citet{comeron2014} and \citet{laurikainen2011}. We find a strong correlation between the ring and break radii (Fig. \ref{rings_breaks}), suggesting that the rings are causing the breaks (see also appendix A in \citealt{kim2014}). This connection is expected if the outer rings are associated with the Outer Lindblad Resonance of the bar. The bar effectively redistributes material and angular momentum in the disc, forming the rings through the resonances and leading to deviations from a single exponential surface brightness profile. In this study outer lenses are also found to be connected with type II breaks in the early-type disc galaxies, in a similar manner as the outer rings (see Fig. \ref{rings_breaks}). This is natural if the outer lenses are also largely resonance structures, as suggested by \citet{laurikainen2013}.
Additionally, we see a connection between the inner rings and type II breaks in those galaxies in which no outer rings or lenses are visible (see Fig. \ref{rings_breaks}). In those cases the breaks are nearly twice as large as the inner rings. This factor of two difference in size corresponds to that given in resonance theory for the inner and outer rings (e.g. \citealt{sellwood1993}; \citealt{buta1996}; \citealt{athanassoula2012}). It is possible that these galaxies have faint outer rings not apparent in the images, or that distinct outer rings have already disappeared.
In later type galaxies (see Fig. \ref{break_distribution_simple}, lower middle panel), type II profiles are generally of a different nature, being connected with the star formation regions in spiral arms or with the apparent outer edges of the spiral structures (Fig. \ref{rings_breaks_histo}). While some of this change could be explained by a slightly lower bar fraction compared to the early-type galaxies ($T < 3$), the bar fraction of type II profiles in late-type spirals is still as high as $40-50 \%$ (see Fig. \ref{break_distribution_simple}, lower middle panel) and thus cannot be the main factor. Strong star formation in the spiral arms inside the break radius was found to be related to $\sim 14 \%$ of all type II profiles; in these cases the spiral arms continue also outside the break radius. Star formation thresholds have been considered as one possible mechanism for type II break formation (see for example \citealt{schaye2004}). Breaks have also been detected in radial surface brightness profiles based on H$\alpha$ emission line measurements of late-type disc galaxies, at $\sim 0.7 \, R_{25}$ \citep{christlein2010}, which coincides well with many of the type II breaks of this study (Fig. \ref{break_surf_mag}, right panel). Additionally, coupled bar-spiral resonances \citep{tagger1987} might be causing the observed breaks in some barred type II galaxies \citep{munozmateos2013}.
Galaxies where the breaks are connected with the apparent outer radii of spiral structures comprise $\sim 24 \%$ of all type II profiles. In these cases the galaxy extends outwards from the break radius, but the outer region is featureless. This featureless region could consist of migrated old stars. Some evidence of migration is found in the colour profiles of galaxies (\citealt{azzollini2008}; \citealt{bakos2008}): for type II profiles the discs outside the break become increasingly redder, indicating older stellar populations. In principle it is also possible that the outer discs are redder simply because of a change in the star formation profile of the disc, without any need for stellar radial migration \citep{sanchezblazquez2009}. Moreover, \citet{sanchezblazquez2009} claim that migration would remove the break from the stellar mass distribution. This is inconsistent with our results, which show a similar fraction of type II breaks in the near infrared (and thus in the stellar mass distribution, see Table \ref{overall-stat}) as seen in previous studies at optical wavelengths (e.g. \citealt{gutierrez2011}). Simulations and star count observations of NGC7793 by \citet{radburnsmith2012} show that the ratio of the outer and inner disc scalelengths ($h_o/h_i$) increases with older stellar populations (i.e. the break gets smoother), as the oldest stars have had more time to migrate to radii where less in-situ star formation is expected. Thus the break in the stellar mass profile of the galaxy is also smoothed, but they found that it remains visible in the radial surface brightness profile of the old stellar population, which matches the $3.6 \mu m$ profiles of our study. Similar behaviour with increasing stellar age was not seen in the simulations of \citet{sanchezblazquez2009}, which did not take migration into account. It is worth noting that for these type II profiles, associated with the outer edges of the spiral structure, the value $R_{break}/h_i = 2.61 \pm 0.82$ is in agreement with the value of 2.6 derived from the radial migration simulation models of \citet{roskar2008a}. However, an analysis of stellar populations is beyond the scope of this paper.
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\textwidth]{median_profs}
\caption{The median profiles of the 242 S$^4$G galaxies in the sample that have only one break in the surface brightness profile. In the \textit{left panel} the median profiles of type I, II, and III are shown, and in the \textit{right panel} type II and III median profiles are compared with type I median profile. The major axis has been scaled by the B-band 25 mag arcsec$^{-2}$ isophotal level radius.}
\label{median_profiles}
\end{center}
\end{figure*}
\subsubsection{Type III}
In approximately one third of the type III profiles we could associate the break radius directly with galaxy structures such as inner/outer lenses or outer rings. Multiple exponential subsections associated with lenses have previously been studied in early-type disc galaxies by \citet{laurikainen2005} and \citet{laurikainen2009,laurikainen2011}. They showed that in such galaxies the exponential subsections appear at fairly high surface brightnesses. Our results fit into this picture, as some of the type III breaks are indeed at relatively high surface brightness, and also because the type III breaks connected with structural components appear at fairly small radii (Figs. \ref{break_surf_mag}, left panel, and \ref{rings_breaks}, right panel).
Other structural components have also been proposed to create the type III profiles. \citet{bakos2012} found, using deep optical images for a small sample of face-on galaxies, that stellar halos systematically produce type III profiles at surface brightness levels of $\mu_{r'} \sim 28$ mag arcsec$^{-2}$. This surface brightness level corresponds roughly to a surface brightness of $\mu_{3.6 \mu m} \sim 27$ mag arcsec$^{-2}$, and is typically one magnitude fainter than the reach of our data. Nevertheless, bright stellar halos are seen in some galaxies in the S$^4$G sample. Additionally, the superposition of thin and thick discs has been proposed by \citet{comeron2012} to create some of the type III profiles. The type III profile would be formed when the thin disc has a lower scalelength than the thick disc.
In addition to the formation scenarios mentioned above, star formation in extended gas discs might also be involved, in at least some fraction of the type III breaks. The discovery of extended UV-discs, that continue far beyond the optical discs, indicates the presence of star formation at large galactocentric distances in about 25\% of disc galaxies (e.g. \citealt{gildepaz2005}; \citealt{thilker2005}; \citealt{zaritsky2007}). In some of the cases the extended UV-discs might be a result of galaxy interactions (for example M83, \citealt{gildepaz2007}), but often the galaxies are isolated.
\subsection{Scaling relations}
In Figure \ref{median_profiles} (right panel) we show the median surface brightness profiles of types II and III normalised by the median type I profile, scaled with the B-band 25 mag arcsec$^{-2}$ isophotal radius. The outer discs of type II profiles are closer to the single exponential discs, whereas for type III profiles it is the inner discs that are more similar to the single exponential discs. These differences between the profile types are seen also in the obtained scaling relations, although they mainly arise from the behaviour of the disc scalelengths. For example, in Fig. \ref{hin_muin} (upper panel) the type I and III profiles overlap in scalelength, while in Fig. \ref{hin_muin} (lower panel) types I and II overlap with each other. The inner parts of type II profiles are flatter than the single exponential profiles; as discussed in the next section, this is most probably related to bar-induced secular evolution in galaxies. Whether the extended outer discs of type III profiles are manifestations of environmental effects will be discussed in Section \ref{env-disc}.
\citet{gutierrez2011} compared the disc scalelengths ($h$) and central surface brightnesses ($\mu_0$) of the inner and outer parts of type II and III profiles with the values of type I profiles. They divided their sample into early-type ($T\le3$) and late-type galaxies ($T>3$). Contrary to our study, they found that in both Hubble type bins the inner discs of type II profiles are similar to those of type I profiles. Also, in their study the outer parts of type II profiles, and the inner parts of type III profiles, had shorter scalelengths and brighter $\mu_0$ than type I profiles. Using the same Hubble type bins, we find that the inner disc scalelengths of type III profiles are similar to those of type I profiles in both early and late-type galaxies (K--S test value $P=0.11$ in both bins). In late-type galaxies the outer disc scalelengths of type II profiles are also similar to those of type I discs ($P=0.28$). These differences between the studies could arise from the different wavelengths used.
The scatter among the points in the $\mu_0$ versus $h_o$ diagram (upper left corner in Fig. \ref{hin_muin}, lower panel) shows no connection with the structural components of the galaxies or with the galaxy absolute magnitudes. Thus, the scatter could be related to the various formation scenarios of the breaks. In principle it could be due to small fitting errors in the outermost parts of the discs that influence the extrapolated disc central surface brightnesses (see also \citealt{munozmateos2013}). However, our estimated uncertainties are not large enough to explain this scatter.
\subsection{Bar related secular evolution of the discs}
Associating the disc breaks with the different morphological structures, and considering the scaling relations discussed above, it is obvious that bars must play an important role in redistributing matter in galactic discs. Bars can transfer angular momentum among stars and therefore cause the disc to spread, while the gas in the disc is driven towards the bar resonances (for a recent review see \citealt{athanassoula2012}). In Section \ref{ii_dis_comp} we discussed that nearly half ($\sim 48$ \%) of type II profiles are connected with outer rings, and that a correlation exists between the break radius and the inner ring radius, both of which are thought to be formed at the bar resonances. Further evidence of the influence of bars on the discs is seen in the bar fractions of the main profile types (see Table \ref{overall-stat}), and in the distributions of the breaks in barred and non-barred galaxies (see Fig. \ref{break_distribution_simple}, lower panels). While types I and III are found equally in barred and non-barred galaxies ($\sim 50$ \% bar fraction), type II profiles are more common in barred galaxies ($\sim 72$ \% bar fraction). Most of the profiles in barred galaxies of Hubble types $T<3$ are of type II, and the majority of type II breaks are connected with outer rings (see Fig. \ref{rings_breaks_histo}).
The outer profiles of type II are similar to those of types I and II.i, both among the barred and non-barred galaxies. In fact, the whole type II.i surface brightness profile is similar to that of type I, except inside the bar region, where the surface brightness falls below the extrapolated disc. These intermediate cases between types I and II are caused by the effect of the bar on the underlying stellar disc, and similar profiles have been seen in N-body simulations (e.g. \citealt{athanassoula2002}; \citealt{valenzuela2003}).
Outer rings are not common in early-type S0 galaxies (S0$^o$, S0$^+$), where they seem to be replaced by lenses \citep{laurikainen2013}. Lenses in these galaxies appear in both barred and non-barred galaxies. \citet{laurikainen2013} discussed that lenses in these galaxies might largely be structures formed in an earlier phase of galaxy evolution, when the galaxies were still barred. We also find that seven of the 66 type II breaks associated with outer rings appear in non-barred galaxies. The formation of rings in non-barred galaxies is not well understood, but based on our study they are connected with the type II breaks in the same way as in barred galaxies.
It seems that bars in later type disc galaxies trigger the break formation in a different manner. We find that in Hubble types $T>3$ most of the type II breaks are connected with star formation in spiral arms, or with the apparent outer edges of the spirals (Fig. \ref{rings_breaks_histo}). These type II profiles appear mostly in the non-barred galaxies (Fig. \ref{break_distribution_simple} lower middle panel).
We conclude that in the early-type disc galaxies ($T<3$) bars tend to flatten the disc profile inside the break radius via redistribution of matter, making the profiles deviate from type I. In the late-type ($T>3$) galaxies, on the other hand, the breaks are caused by star formation in the disc.
\subsection{Environmental effects}
\label{env-disc}
Minor mergers in simulations are able to produce realistic type III profiles in galactic discs (e.g. \citealt{younger2007}; \citealt{elichemoral2011}). Major mergers are known to produce shells, loops, and tails, which in some cases can also be associated with type III profiles, in particular at low surface brightnesses (e.g. $\mu_v =26-29$ mag arcsec$^{-2}$, \citealt{janowiecki2010}). An example of a galaxy with shells in our sample is NGC0474 (see also \citealt{kim2012}), which shows a type III break in the disc associated with the shell structures.
Instead of looking for such direct evidence of galaxy interactions, we calculate the surface density of galaxies ($\Sigma_3^A$) and the Dahari parameter ($Q$), in order to evaluate possible environmental effects on galaxies. For the early-type galaxies ($T<1.5$) with a type I profile (Fig. \ref{dahari_outer}, lower left panel) we find a statistically significant correlation between the disc scalelength ($h$) and the Dahari parameter ($Q$). In principle this could be a manifestation of the well-known morphology-density relation (e.g. \citealt{oemler1974}; \citealt{davis1976}; \citealt{dressler1980}): the early-type galaxies live in denser galaxy environments, where tidal effects are more frequent. Early-type disc galaxies are also on average slightly brighter, which could explain the larger scalelengths (e.g. \citealt{binggeli1988}; \citealt{courteau2007}). However, as no correlation was found between $h$ and the galaxy density parameter $\Sigma_3^A$, it is more likely that the type I profiles are created in small galaxy groups where the tidal effects are efficient in modifying the discs. Alternatively, they could be relics of major mergers. Based on our study alone, however, it cannot be ruled out that a majority of type I profiles are relics of the initial disc formation, from the epoch when galaxy haloes controlled the relation between the scalelength and galaxy brightness.
For type III profiles we find that both the inner ($h_i$) and the outer ($h_o$) disc scalelengths correlate with the Dahari parameter (Fig. \ref{dahari_outer}, right panels): when the tidal effect increases, both scalelengths increase. The increase in $h_o$ with increasing tidal force is consistent with the picture in which the companion galaxies disturb the outer discs, creating the observed type III profiles. What happens to the outer disc depends strongly on the orbital parameters of the encounter, the gas content, and the relative masses of the encountering galaxies (e.g. \citealt{laurikainen2001}; \citealt{younger2007}). More specifically, the type III profile could result from a tidal encounter when the main galaxy has a significant supply of gas and the encounter occurs in a prograde orbit with moderate orbital angular momentum. In this picture, the similar increase of $h_i$ in type III profiles with increasing tidal force is less clear. The simulations of \citet{laurikainen2001} and \citet{younger2007} predict that in a minor merger $h_i$ remains similar to that of the initial disc before the encounter, with only $h_o$ increasing. On the other hand, \citet{elichemoral2011} have shown that rotationally supported inner components (e.g. discs and rings) can be formed from the accreted material of satellite galaxies. Therefore, it is possible that the observed properties of type III profiles could, at least partly, be triggered by tidal encounters in small galaxy groups.
We did not find similar correlations between the surface density of the galaxies ($\Sigma_3^A$) and the parameters of the disc breaks. This means that tidal interactions with nearby companions are more likely to affect the discs than the surrounding galaxy density is. This could also explain why the previous studies of \citet{pohlen2006} and \citet{maltby2012} found no connection between the profile types and the environment: they used more global measures of galaxy density (number of galaxies in an aperture, field/cluster comparison) that do not directly indicate whether the primary galaxy has a close companion.
\section{Summary and conclusions}
\label{sum-conclusion}
We present a detailed study of the disc surface brightness profiles of 248 galaxies using the 3.6 $\mu m$ images that form part of the Spitzer Survey of Stellar Structure in Galaxies (S$^4$G, \citealt{sheth2010}). Additionally, 80 galaxies were taken from the Near Infrared S0-Sa galaxy Survey (NIRS0S, \citealt{laurikainen2011}, observed at $K_{\text{s}}$-band). Using the radial surface brightness profiles we measured the properties of the main disc break type, first defined by \citet{erwin2005}. We associate the breaks with possible structural components in these galaxies using existing size measurements of rings and lenses. In addition, we carried out an environmental study of the sample galaxies using the 2 Micron All Sky Survey Extended Source Catalog (XSC), and the 2 Micron All Sky Survey Redshift Survey (RSC), and calculated the parameters describing the environmental galaxy density ($\Sigma_3^A$) and the Dahari parameter ($Q$) for the tidal interaction strength.
Our main results are summarised as follows:
\begin{itemize}
\item The fractions of the different profile types in the near-infrared ($3.6 \, \mu m$ and $K_{\text{s}}$-band) are: type I $32 \pm 3$ \%, type II $42 \pm 3$ \%, and type III $21 \pm 2$ \%. We also find type II.i profiles in $7 \pm 2$ \% of the sample. In seven galaxies we see two breaks; these galaxies are counted twice, which explains why the total percentage exceeds 100 \%.
\item The inner parts of type III profiles are found to resemble single exponential discs, while for type II profiles it is the outer disc that more closely resembles a single exponential disc. This suggests that in galaxies with type II profiles the evolution of the inner parts has been more significant, while in galaxies with type III profiles it is the outer disc that has undergone substantial evolution.
\item $\sim 56 \%$ of type II profiles can be directly connected to outer lenses ($\sim 8 \%$) or to outer rings, pseudorings, and ringlenses ($\sim 48 \%$). Almost all of the type II profiles with Hubble types $T<3$ are associated with these structures, with break radii that coincide with the location of these structures. Therefore, in galaxies of Hubble types $T<3$ the breaks are most likely associated with the resonances of bars.
\item $\sim 38 \%$ of type II profiles can be visually connected to either the outer edges of intense star formation in the spiral arms or to the apparent outer radii of the spirals. These profiles appear mainly in Hubble types $T>3$, where they account for nearly all of the type II profiles.
\item Only approximately 1/3 of type III profiles could be associated with distinct morphological structures in the galaxies, such as lenses or outer rings.
\item For type III profiles a correlation was found between the inner and outer disc scalelengths ($h_i$ and $h_o$) and the Dahari parameter ($Q$), indicating that encounters with nearby galaxies are partly responsible for the upbending part of these profiles.
\item The disc scalelengths ($h$) and central surface brightnesses ($\mu_0$) of the inner- and outer discs were found to be similar in barred and non-barred galaxies when the main types (I, II, III) are studied individually.
\end{itemize}
\section*{Acknowledgements}
We thank the referee for comments that have significantly improved the manuscript. The authors wish to thank the entire S$^4$G team for their efforts with this project. J.L. gratefully acknowledges financial support from the Vilho, Yrj\"o ja Kalle V\"ais\"al\"a foundation of the Finnish Academy of Science and Letters. J.L, E.L, H.S, and S.C acknowledge the support from Academy of Finland. E.A. and A.B. acknowledge the CNES (Centre National d'Etudes Spatiales - France) for financial support. We acknowledge financial support from the People Programme (Marie Curie Actions) of the European Union's FP7 2007-2013 to the DAGAL network under REA grant agreement number PITN-GA-2011-289313.
This research is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. We are grateful to the dedicated staff at the Spitzer Science Center for their help and support in planning and execution of this Exploration Science program. We also gratefully acknowledge support from NASA JPL/Spitzer grant RSA 1374189 provided for the S$^4$G project.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of SAOImage DS9, developed by Smithsonian Astrophysical Observatory.
\subsection{Datasets and Systems Settings}
\vspace{-0.1cm}
We evaluate on two simultaneous
speech-to-speech translation directions:
Chinese$\leftrightarrow$English.
For training, we use the text-to-text
parallel corpora available
from WMT18\footnote{\scriptsize\url{http://www.statmt.org/wmt18/translation-task.html}}
(24.7M sentence pairs).
We also annotate a portion of the Chinese and English speeches
from the LDC United Nations Proceedings Speech corpus
\footnote{\scriptsize\url{https://catalog.ldc.upenn.edu/LDC2014S08}} (LDC-UN)
as a speech-to-text corpus.
This corpus includes speeches recorded in 2009-2012
from United Nations conferences in six official UN
languages.
We transcribe the speeches and then translate
the transcriptions as references.
The speech recordings include not only the source speech
but also the corresponding professional simultaneous
interpreters' renditions at the conference.
We therefore also transcribe the human simultaneous
interpretations for the En$\rightarrow$Zh direction;
these are not used in our model but are compared against in
the following experiments.
\begin{table}[h!]\centering
\small
\begin{tabular}{|c|c|c|c|}\hline
\multicolumn{2}{|c|}{} & En$\rightarrow$Zh & Zh$\rightarrow$En \\\hline
\multirow{3}{*}{Train} & \# of speeches & 58 & 119 \\%\hline
& \# of words & 63650 & 61676 \\%\hline
& Total time & 6.81 h & 9.68 h \\\hline
\multirow{3}{*}{Dev}& \# of speeches & 3 & 6\\%\hline
& \# of words & 1153 & 2415 \\%\hline
& Total time & 0.27 h & 0.35 h \\\hline
\multirow{3}{*}{Test} & \# of speeches & 3 & 6 \\%\hline
& \# of words & 3053 & 1870 \\%\hline
& Total time & 0.39 h & 0.30 h \\\hline
\end{tabular}
\caption{Statistics of LDC-UN dataset (source-side).}
\vspace{-0.4cm}
\label{tab:data}
\end{table}
Table \ref{tab:data} shows the statistics of our
speech-to-text dataset.
We train our models using both the WMT18 training set and
the LDC UN speech-to-text training set.
We validate and test the models only on the LDC-UN dataset.
For Chinese text, we use the jieba
\footnote{\scriptsize\url{https://github.com/fxsjy/jieba}}
Chinese segmentation tool.
We apply BPE~\cite{sennrich+:2015} to all texts in order
to reduce the vocabulary sizes, setting the vocabulary size
to 16K for both Chinese and English.
Our Transformer is essentially the same as the base Transformer model
\cite{vaswani+:2017}.
As mentioned in Section \ref{sec:asr}, we use an
anonymous real-time speech recognizer from a
well-known cloud platform
as the speech recognition module
for both English and Chinese.
During speech-to-speech simultaneous translation decoding,
after receiving an ASR input, we first normalize the
punctuation and tokenize the input (or apply Chinese
segmentation for Zh$\to$En translation).
The last token is always removed before feeding the encoder
of the translation model because it is very unstable.
For the latency measurement we use the
Penn Phonetics Lab Forced Aligner (P2FA) \cite{yuan2008speaker} to automatically
annotate the time-stamps of both Chinese and English words
on the source and target sides.
For the incremental text-to-speech system, we follow
\cite{ma2019incremental}
and take the Tacotron 2 model~\cite{shen+:2018} as our phoneme-to-spectrogram model, training it with an additional {\em guided attention loss}~\cite{tachibana+:2018}, which speeds up convergence.
Our vocoder is the same as that in the Parallel WaveGAN paper~\cite{yamamoto+:19}, consisting of 30 dilated residual convolution blocks with three exponentially increasing dilation cycles, 64 residual and skip channels, and a convolution filter size of 3.
For English, we use a proprietary speech dataset containing 13,708 audio clips (i.e., sentences) from a female speaker and the corresponding transcripts.
For Chinese, we use a public speech dataset\footnote{\url{https://www.data-baker.com/open_source.html}} containing 10,000 audio clips from a female speaker and the transcripts.
\vspace{-0.2cm}
\subsection{Speech-to-Speech Simul.~Translation}
\vspace{-0.2cm}
\begin{figure}[t]
\centering
\vspace{-.9cm}
\begin{tabular}{c}
\begin{minipage}[t]{1.0 \linewidth}
\begin{center}
\subfigure[Chinese-to-English simultaneous translation]{
\includegraphics[width=5.9cm]{figs/zh-en.pdf}
\label{fig:zh-en}
}
\end{center}
\end{minipage}
\\[-0.3cm]
\begin{minipage}[t]{1.0 \linewidth}
\begin{center}
\subfigure[English-to-Chinese simultaneous translation]{
\includegraphics[width=6.5cm]{figs/en-zh.pdf}
\label{fig:en-zh}
}
\end{center}
\end{minipage}
\end{tabular}
\vspace{-0.3cm}
\caption{Translation quality and latency (pBAL)
of the proposed simultaneous speech-to-speech translation
systems compared with the baselines. For all SAT-$k$ and wait-$k$ models, $k=\{3, 5, 7\}$ from bottom to top.}
\label{fig:result}
\vspace{-0.5cm}
\end{figure}
Fig.~\ref{fig:result} shows the final results of our proposed
models and the baselines.
For translation quality, we use the ``multi-bleu.pl''
\footnote{\scriptsize\url{https://github.com/moses-smt//mosesdecoder/blob/master/scripts/generic/multi-bleu.perl}}
script to calculate BLEU scores.
Since punctuation is soundless,
we remove all punctuation marks from both hypotheses and
references before BLEU evaluation.
We follow \cite{xiong+:2019} to concatenate the translations
of each talk into one sentence to measure BLEU scores.
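For reference, this preprocessing can be sketched in a few lines of Python; we assume each talk is given as a list of sentence-level hypothesis strings, and the exact punctuation set shown is illustrative rather than our actual list:
\begin{verbatim}
import re
import string

CJK_PUNCT = "\u3001\u3002\uff01\uff1f\uff0c\uff1b\uff1a"

def talk_to_line(sentences):
    # Remove soundless punctuation and concatenate one talk's
    # translations into a single line for multi-bleu.pl scoring.
    table = str.maketrans("", "", string.punctuation + CJK_PUNCT)
    joined = " ".join(s.translate(table) for s in sentences)
    return re.sub(r"\s+", " ", joined).strip()
\end{verbatim}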
For Chinese-to-English simultaneous translation,
we compare our models with the naive wait-$k$,
wait-$k$ with SAT decoding (using only the
self-adaptive inference of Sec.~\ref{sec:SAI}),
segment-based models \cite{oda+:2014,xiong+:2019}, and a full-sentence translation model.
All these models share one iTTS system.
For the segment-based model,
since our streaming ASR API does not provide any punctuation
before the final step,
we use the final punctuation marks to segment the partial
streaming inputs and then translate each partial segment
as a full sentence with a full-sentence translation model.
The results show that our proposed SAT-$k$ models
achieve much lower latency than these baselines
without sacrificing quality.
Fig.~\ref{fig:result}(b) shows the results of
En$\to$Zh simultaneous translation.
Besides the baselines used in the Zh$\to$En
experiments, we also compare our system with the
professional human interpreters' translations.
Our proposed models also outperform all the baselines
as well as the human interpreters.
Compared with wait-$k$, our models reduce latency more in
Zh$\to$En than in En$\to$Zh
because English sentences are generally longer than Chinese
ones, so latency accumulates more easily in Zh$\to$En
(also shown in Fig.~\ref{fig:acc_latency}).
\begin{figure}[h!]
\centering
\vspace{-0.2cm}
\includegraphics[width=7.cm,height=4.cm]{figs/zhen_accumulate_lag.pdf}
\captionof{figure}{Latency for sentences at different
indices in the Chinese-to-English dev set.}
\label{fig:acc_latency}
\vspace{-0.4cm}
\end{figure}
\vspace{-0.2cm}
\subsection{Human Evaluation on Speech Quality}
\vspace{-0.2cm}
\begin{table}[h!]\centering
\small
\begin{tabular}{|c|c|c|}\hline
Method & En$\to$Zh & Zh$\to$En \\\hline
wait-$3$ & $3.56 \pm 0.09$ & $3.68 \pm 0.08$ \\\hline
wait-$3$ + SAT decoding & $3.81 \pm 0.08$ & $3.96 \pm 0.04$\\\hline
SAT-$3$ & $3.83 \pm 0.07$ & $3.97 \pm 0.07$\\\hline
Segment-based & $3.79 \pm 0.15$ &$3.99 \pm 0.07$ \\\hline
Full sentence & $3.98 \pm 0.08$ &$4.03 \pm 0.03$ \\\hline
Human & $3.85 \pm 0.05$ & - \\\hline
\end{tabular}
\caption{MOS evaluations
of fluency for different
target speeches generated
by different methods.}
\label{tab:mos}
\vspace{-0.4cm}
\end{table}
In Table~\ref{tab:mos},
we evaluate our synthesized speech with Mean Opinion Scores (MOS)
given by native speakers,
a standard metric in TTS.
Each speech sample received 10 human ratings on a scale from 1 to 5,
with 5 being the best.
In both Zh$\leftrightarrow$En directions,
the wait-$3$ models have the lowest MOS due to their many unnatural pauses
(see Sec.~\ref{sec:naive}).
Our proposed SAT-$3$ model and wait-$3$
with SAT decoding achieve fluency similar
to the full-sentence models and
even to human interpreters.
\vspace{-0.2cm}
\subsection{Examples}
\begin{sidewaysfigure}
\centering
\begin{minipage}{1.0\textheight}
\includegraphics[width=25cm]{figs/tikz.pdf}
\captionof{figure}{Decoding results of proposed simultaneous speech-to-speech Chinese-to-English translation system
and baselines.}
\label{fig:zhen_example}
\end{minipage}
\begin{minipage}{1.0\textheight}
\includegraphics[width=25cm]{figs/tikz_enzh.pdf}
\captionof{figure}{Decoding results of proposed simultaneous speech-to-speech English-to-Chinese translation system
and baselines.}
\label{fig:enzh_example}
\end{minipage}
\end{sidewaysfigure}
Fig.~\ref{fig:zhen_example} shows a Zh$\to$En
decoding example. Here the wait-$3$ models'
outputs have much longer latency than
SAT-$3$ because their beginnings are delayed by the translation of
the previous sentence(s) and their tails are also very long.
The En$\to$Zh example in Fig.~\ref{fig:enzh_example} is similar.
Although the streaming ASR has a very long delay,
the SAT-$3$ model still keeps the latency to roughly 4.5\,s;
all pauses on the target side are natural ones arising from punctuation.
By contrast, the human interpreter's translation
has the longest latency.
\section{Introduction}
\input{intro}
\vspace{-0.2cm}
\section{Preliminaries}
\vspace{-0.2cm}
\label{sec:prelim}
\input{prelim}
\section{Self-Adaptive Translation}
\label{sec:train}
\input{train}
\vspace{-0.2cm}
\section{Paragraph-Based Boundary Aware Latency}
\vspace{-0.2cm}
\label{sec:metric}
\input{metric}
\vspace{-0.2cm}
\section{Experiments}
\label{sec:exps}
\input{exps}
\vspace{-0.2cm}
\section{Conclusions}
\vspace{-0.2cm}
We proposed Self-Adaptive Translation (SAT)
for simultaneous
speech-to-speech translation, which flexibly adjusts the translation length
to avoid latency accumulation and unnatural pauses.
In both Zh$\leftrightarrow$En directions, our method generates fluent,
low-latency
target speech with high translation quality.
\balance
\subsection{Streaming Automatic Speech Recognition}
\label{sec:asr}
We use an anonymous real-time speech recognizer as the speech recognition module.
As shown in Fig.~\ref{fig:pipeline}, streaming ASR is the first step of the entire pipeline,
converting the growing source acoustic signal from the speaker into a sequence of
tokens $\ensuremath{\bm{x}}\xspace=(x_1, x_2, ...)$ in real time, with about 1 second of latency.
Table~\ref{tab:Zh-En} shows an example of English streaming ASR,
which generates the English outputs incrementally.
Each row in the table represents the streaming ASR output at one step.
Note that streaming ASR sometimes revises some of the tail outputs from the
previous step (e.g., the 3rd and 4th steps in Table~\ref{tab:Zh-En}).
To obtain more stable outputs,
our system excludes the last word of the ASR output
(except at the final step).
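A minimal sketch of this stabilization, assuming the recognizer yields a growing token list together with a flag marking its final step (function and argument names are ours):
\begin{verbatim}
def stabilized_prefix(asr_tokens, is_final_step):
    # Drop the unstable last token, except at the final ASR step.
    return asr_tokens if is_final_step else asr_tokens[:-1]
\end{verbatim}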
\vspace{-0.2cm}
\subsection{Simultaneous Machine Translation}
\vspace{-0.2cm}
As an intermediate step between the source speech recognition and target speech synthesis modules,
the goal of this step is to translate all the available source-language tokens from streaming ASR
into the target language.
Many text-to-text simultaneous translation models \cite{Gu+:2017,ma+:2019,Arivazhagan+:2019,ma2019monotonic}
have been proposed recently.
Different from a conventional full-sentence translation model,
which encodes the entire source sentence $\ensuremath{\bm{x}}\xspace=(x_1,...x_m)$ into a sequence of
hidden states and decodes sequentially conditioned on those hidden states and the previous predictions
as $p(\ensuremath{\bm{y}}\xspace \mid \ensuremath{\bm{x}}\xspace) = \textstyle\prod_{t=1}^{|\ensuremath{\bm{y}}\xspace|} p(y_t \mid \ensuremath{\bm{x}}\xspace,\, \ensuremath{\bm{y}}\xspace_{<t})$
to form the final hypothesis $\ensuremath{\bm{y}}\xspace = (y_1,...,y_t)$, simultaneous translation
makes predictions from partial, growing inputs before the source sentence finishes.
Without loss of generality, and regardless of the
actual design of the translation policy,
simultaneous translation can be represented
in a prefix-to-prefix fashion as follows:
\vspace{-0.2cm}
\begin{equation}
p_g(\ensuremath{\bm{y}}\xspace \mid \ensuremath{\bm{x}}\xspace) = \textstyle\prod_{t=1}^{|\ensuremath{\bm{y}}\xspace|} p(y_t \mid \ensuremath{\bm{x}}\xspace_{\leqslant g(t)},\, \ensuremath{\bm{y}}\xspace_{<t})
\label{eq:gensentscore2}
\vspace{-0.1cm}
\end{equation}
where $g(t)$ can be used to represent any arbitrary fixed or
adaptive policy, denoting the number of processed source tokens at time step $t$.
We choose the wait-$k$ policy \cite{ma+:2019}
as our baseline
for its simplicity and strong performance.
More specifically, in this paper our wait-$k$ policy is defined as follows:
\vspace{-0.2cm}
\begin{equation}
\ensuremath{{g_\text{wait-$k$}}}\xspace(t) =\min\{k+t-1, \, |\ensuremath{\bm{x}}\xspace|\}
\label{eq:policy}
\vspace{-0.1cm}
\end{equation}
This policy starts to decode after the first $k$ source words and
then emits one target token each time one more source token
is received.
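As a concrete illustration, the policy in Eq.~\ref{eq:policy} can be simulated with the following Python sketch; here \texttt{decode\_step} is a hypothetical stand-in for one decoder step of the underlying translation model, not our actual implementation:
\begin{verbatim}
def g_wait_k(t, k, src_len):
    # Source tokens visible when emitting the t-th target token.
    return min(k + t - 1, src_len)

def wait_k_decode(src, k, decode_step, eos="</s>"):
    tgt, t = [], 1
    while True:
        y = decode_step(src[:g_wait_k(t, k, len(src))], tgt)
        if y == eos:
            return tgt
        tgt.append(y)
        t += 1
\end{verbatim}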
\vspace{-0.1cm}
\subsection{Incremental Text-to-Speech}
\vspace{-0.1cm}
As the last step of the entire pipeline,
the goal of iTTS is to incrementally generate
the target speech audio and play it to the audience
instantly as translated words become available.
Different from conventional full-sentence TTS, which requires the availability of the entire
sentence, iTTS usually has a delay of 1-2 words but provides similar audio quality to
full-sentence TTS.
Compared with previous source-sentence segment-based SSST systems \cite{oda+:2014,xiong+:2019}, our system can achieve word-level latency.
We adapt the iTTS framework of \cite{ma+:2019} to our pipeline to generate
target speech audio from the translated tokens $\ensuremath{\bm{y}}\xspace_t$ at each time step $t$.
\subsection{Naive Solution is Problematic}
\label{sec:naive}
To alleviate the varying speech rate problem,
one naive solution is to adjust the target-side speech rate
based on the source speaker's rate.
However, as shown in Table~\ref{tab:speed}, this solution is problematic:
speeding up the target-side speech usually requires the audience
to focus more intently on the translated speech,
and it can disrupt the audience's comprehension of the translation
\cite{gordon2014recognition}.
Similarly, slowing down the speech only creates overlong
phoneme pronunciations, which sound unnatural and lead to confusion.
\begin{table}[!]\centering
\small
\begin{tabular}{|c|c|}\hline
Speech Rate & MOS \\\hline
$0.5 \times$ & $2.00 \pm 0.08$ \\\hline
$0.6 \times$ & $2.32 \pm 0.08$ \\\hline
$0.75 \times$ & $2.95 \pm 0.07$ \\\hline
Original & $4.01 \pm 0.08$ \\\hline
$1.33 \times$ & $3.34 \pm 0.08$ \\\hline
$1.66 \times$ & $2.40 \pm 0.09$ \\\hline
$2.0 \times$ & $2.06 \pm 0.04$ \\ \hline
\end{tabular}
\caption{Mean Opinion Score (MOS) evaluations
of naturalness for speech at different rates, altered with ffmpeg.
The original English speech is synthesized by
our incremental text-to-speech system.}
\label{tab:speed}
\vspace{-0.3cm}
\end{table}
Inspired by human interpreters \cite{he+:2016,Raja+:2000},
who often summarize the content in order to catch up
with the speaker, or produce wordier translations in order to wait for the speaker,
an ideal translation model should be able to adjust the length of the translated
sentence, changing the speech duration on the target side
so as to fundamentally avoid further delays and unnatural pauses.
\vspace{-0.2cm}
\subsection{Self-Adaptive Training}
\begin{figure}[bt!]
\centering
\includegraphics[width=7.5cm]{figs/ideal_tgt_src_ratio_sents.pdf}
\captionof{figure}{Tgt/src length ratio for English-to-Chinese task in training data (red) and ideal testing cases (blue).}
\label{fig:lengthratio}
\vspace{-0.5cm}
\end{figure}
Translations between different language pairs
have various tgt/src length ratios;
e.g., the English-to-Chinese ratio is roughly 0.85
(with small variations between datasets).
However, this length ratio merely reflects the average statistics of the entire dataset
and, as shown by the red line in Fig.~\ref{fig:lengthratio}, the ratio
distribution of individual
sentences is quite wide around this average.
As shown in Fig.~\ref{fig:comparison} and discussed in the earlier sections,
overly short and overly long translations are undesirable in simultaneous speech-to-speech
translation.
Ideally, we prefer the system
to have a similar amount of
initial wait and tail delay when translating each sentence.
With this design,
the translation tail of the previous sentence
fits exactly into
the initial delay window of the following sentence,
causing no extra latency and no intermittent speech.
\begin{figure}[tb!]
\vspace{-0.5cm}
\centering
\includegraphics[width=7cm]{figs/catoon.pdf}
\captionof{figure}{Illustration of the conventional wait-$k$ (red) and SAT-$k$ (yellow) training policies. In SAT, we force the tail length to be $k$, equal to the initial latency $k$. In the above example, $k=1$.}
\label{fig:comparison}
\vspace{-0.5cm}
\end{figure}
Based on the above observation, we propose to use different training policies for
sentences with different tgt/src ratios.
As shown in Fig.~\ref{fig:comparison},
we start from a fixed delay of $k$ tokens and
then force the model to have the same
number of tokens in the initial wait and the final tail,
amortizing the extra tokens over the middle steps.
More specifically, when the tail is longer than the fixed initial wait,
we move the extra words into earlier steps, so that some steps before the
tail decode more than one word at a time.
As a result, there are some one-to-many mappings
between source and target,
and the model learns to generate longer translations from shorter source text.
Conversely, when the tail is shorter,
we perform extra reading on the source side, and the
model learns to generate
shorter translations through this many-to-one policy.
Formally, we define our SAT training policy as follows:
\vspace{-0.6cm}
\begin{equation}
\ensuremath{{g_\text{wait-$k$, $c$}}}\xspace(t) = \min\{k+t-1-\floor{ct},\; |\ensuremath{\bm{x}}\xspace|\}
\label{eq:policyc}
\vspace{-0.2cm}
\end{equation}
where $c$ is the compensation rate, determined by the tgt/src length ratio after deducting the $k$ tokens of the
source initial wait and the target tail.
For example, when the tgt/src length ratio is 1.25, then
$c={|tgt|-k\over|src|-k}-1 =1.25-1=0.25$,
which corresponds to decoding 5 target
words for every 4 source words,
and the model learns to generate wordier translations.
When the target side is shorter than the source side, $c$ becomes negative, and the model learns to
decode fewer tokens than it reads.
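For concreteness, Eq.~\ref{eq:policyc} and the per-sentence compensation rate can be written as the following Python sketch (variable names are ours):
\begin{verbatim}
import math

def g_sat(t, k, c, src_len):
    # Eq. 3: source tokens read before emitting
    # the t-th target token.
    return min(k + t - 1 - math.floor(c * t), src_len)

def compensation_rate(src_len, tgt_len, k):
    # Per-sentence rate after reserving k tokens for the
    # source initial wait and the target tail; c > 0 yields
    # 1-to-many steps, c < 0 yields many-to-1 steps.
    return (tgt_len - k) / (src_len - k) - 1.0
\end{verbatim}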
\begin{figure}[!h]
\centering
\includegraphics[width=5.8cm]{figs/mapping.pdf}
\captionof{figure}{Different translation policies for different choices of $c$. Green boxes represent
a many-to-1 policy; yellow boxes denote a 1-to-1 policy; purple boxes show a 1-to-many policy.}
\label{fig:mapping}
\vspace{-0.4cm}
\end{figure}
Note that the tgt/src length ratio in our case is determined by
the corresponding sentence itself instead of the corpus-level
tgt/src length ratio, which is a crucial difference from the
catchup algorithm of \cite{ma+:2019},
where some short translations are trained with an inappropriately positive $c$.
This may seem a minor difference, but it enables the model to learn
something entirely different from catchup.
The blue line in Fig.~\ref{fig:lengthratio} represents the tgt/src length ratio
for the ideal simultaneous speech-to-speech translation examples in our
training set, i.e., those with the same speech duration on the source and target sides.
When the source and target speech durations match,
no latency accumulates from previous sentences to the following ones.
As we can see, our training data covers the entire tgt/src length-ratio distribution
of the ideal cases, indicating that, by adjusting the compensation rate $c$
over our training corpus,
our model learns to generate translations of the appropriate length on the target
side and thus avoids accumulated latency.
As shown in Fig.~\ref{fig:lengthratio}, there are many different choices of $c$
for different sentences, and each sentence is trained with its own
compensation rate, which makes its training policy differ from those of sentences with other values of $c$.
Hence, as shown in Fig.~\ref{fig:mapping},
our trained model implicitly learns
many different policies; when we choose
a compensation rate $c$ during inference,
the model generates a translation whose length corresponds to that
compensation rate in training.
More specifically, assume we have a source sentence, for example in Chinese,
of length $m$; a
conventional full-sentence or wait-$k$ model would normally translate it into
an English sentence of length about $1.25 \times m$.
With SAT, however, the output length can be changed through $c$,
following
the policy in Eq.~\ref{eq:policyc} during decoding.
When $c$ is negative, SAT generates a translation shorter than $1.25 \times m$;
conversely, when $c$ is positive,
SAT generates a longer one.
The compensation rate $c$ thus functions as the key
for selecting outputs of different lengths.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\begin{minipage}[t]{1.0 \linewidth}
\vspace{-0.6cm}
\begin{center}
\subfigure[Tail length vs.~test-time compensation rate]{
\includegraphics[width=6.5cm]{figs/sat_waitk_tail_len.pdf}
\label{fig:tail_len}
\vspace{-0.6cm}
}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\\
\vspace{-0.4cm}
\begin{minipage}[t]{1.0 \linewidth}
\begin{center}
\subfigure[Translation length $|\ensuremath{\bm{y}}\xspace|$ vs.~test-time compensation rate]{
\includegraphics[width=6.5cm]{figs/sat_waitk_total_len.pdf}
\label{fig:tgt_len}
\vspace{-0.6cm}
}
\end{center}
\end{minipage}
\end{tabular}
\caption{Translation length analysis on the Chinese-to-English
task using a single SAT-$3$ model and wait-$k$ models.}
\vspace{-0.4cm}
\end{figure}
Figs.~\ref{fig:tail_len}--\ref{fig:tgt_len} show the effectiveness of our proposed
model, which can adjust both the tail length and the total translation length
with different values of $c$.
\subsection{Self-Adaptive Inference}
\label{sec:SAI}
The previous section discusses the importance of $c$, which is easy to obtain at training time;
at inference time, however, we do not know the optimal choice of $c$ in advance,
since the fluency and latency criteria also depend on the finish time of each word on both sides.
Therefore, the streaming ASR and iTTS modules play important roles
in determining a decoding policy that yields fluent, low-latency translated speech,
and we use their states to select the appropriate
policy on the fly.
When the speech is faster,
streaming ASR sends multiple tokens to SAT at some steps,
but SAT still
generates only one token at a time on the target side and passes it to iTTS instantly.
This decoding behavior functions like a negative $c$, i.e.,
a many-to-one translation policy.
In this case SAT generates succinct
translations, and iTTS can therefore finish the translated speech in less time,
since there are fewer tokens.
Conversely, when the speaker talks at a slower pace, only one token at a time is
fed into SAT,
which translates it into one target token and delivers it to iTTS;
this is the one-to-one translation policy.
When iTTS is about to finish playing the newly generated speech and no new
token has arrived from streaming ASR,
SAT forces the decoder to generate one extra token
and feeds it to iTTS,
which becomes a one-to-many translation policy
(counting the token decoded in the previous step).
When the speaker makes a long pause and there are still no new tokens from streaming ASR,
the SAT decoder continues to translate until a pause token (e.g., a punctuation mark) is generated.
This pause token forms a necessary, natural pause on the speech side and does not change
the understanding of the translated speech.
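The decision rules above can be summarized in the following sketch; \texttt{asr}, \texttt{decode\_step}, and \texttt{itts} are hypothetical interfaces to the three modules rather than our actual implementation:
\begin{verbatim}
def self_adaptive_inference(asr, decode_step, itts,
                            pause_tokens):
    # The states of streaming ASR and incremental TTS select
    # many-to-one / one-to-one / one-to-many on the fly.
    src, tgt = [], []
    while not asr.finished() or asr.has_new_tokens():
        if asr.has_new_tokens():
            # Several tokens may arrive at once (many-to-one);
            # we still emit exactly one target token.
            src.extend(asr.pop_new_tokens())
            tgt.append(decode_step(src, tgt))
            itts.feed(tgt[-1])
        elif itts.about_to_finish() and (
                not tgt or tgt[-1] not in pause_tokens):
            # No new source yet: force one extra target token
            # (one-to-many) until a pause token is generated,
            # which gives a natural break.
            tgt.append(decode_step(src, tgt))
            itts.feed(tgt[-1])
        else:
            asr.wait()  # idle until new audio arrives
    return tgt
\end{verbatim}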
M dwarfs are main sequence stars with masses ranging from $0.6\,M_{\odot} \geq M_{*}\geq\,0.08\,M_{\odot}$, radii between $0.6\,R_{\odot}\geq R_{*}\geq\, 0.1\,R_{\odot}$, and $T_\mathrm{eff}$ from $3800\,K\geq T_\mathrm{eff}\geq\,2300\,K$ \citep{delfosse2000accurate}. They are the most abundant stars in the universe: about 7 in every 10 stars in the Milky Way are M dwarfs, and they account for most of its stellar mass \citep{henry2006solar}. M dwarfs are the faintest main-sequence stars, with the largest and brightest of them having only $0.1\,L_{\odot}$. However, despite their relative dimness, M dwarfs can be the key to a bright future in areas of astronomy such as galactic chemical evolution and exoplanet science.
As M dwarfs have a longer lifespan than larger and brighter main-sequence stars, and as they represent such a large fraction of the stars in our galaxy, M dwarf characterization is essential for our understanding of galactic history. Some examples are \cite{lepine2007revised}, which uses spectroscopic indices and the kinematic measurements of both disk dwarfs and halo subdwarfs to revise the metallicity classes for these objects, and \cite{bochanski2007exploring}, which uses a spectroscopic sample of low-mass stars to investigate the properties of the thin and thick disks.
M dwarfs are very good targets for surveys dedicated to the discovery of Earth-like planets. Due to their faintness, their habitable zone is much closer to the star than it is for solar-type stars, and, due to their small radius and mass, the relative difference in size between any potential exoplanet and the host star is much smaller than for more massive stars. As the two most successful current planet-detection methods, radial velocity and transit, are indirect, detecting exoplanets in the habitable zone is orders of magnitude easier for M dwarfs than for solar-type stars. Several programs therefore focus primarily on finding potentially habitable planets around M dwarfs, such as MEarth \citep{irwin2014mearth}, and others, such as CARMENES \citep{alonso2015carmenes} or HARPS \citep{bonfils2013harps}, have also targeted M dwarfs with the purpose of planet detection. There have also been many studies detailing how life might be possible on planets around M dwarfs \citep[e.g.][]{segura2005biosignatures,scalo2007m,france2013ultraviolet}.
Stellar characterization through spectroscopy can provide the scientific community with fundamental stellar parameters such as effective temperature, metallicity, and surface gravity. These stellar parameters can be used, for example, to discover more about the history of our galaxy through the study of metal-poor stars \citep{suda2008stellar}, to improve our understanding of giant stars \citep{ness2016spectroscopic}, or to characterize exoplanet host stars and, indirectly, exoplanets \citep[e.g.,][]{Sousa2011a,santos2013sweet}.
Spectroscopic analysis of M dwarfs in the NIR has been a developing topic over the last few years, with works at lower resolution such as \cite{rojas2010metal,rojas2012metallicity}, a characterization of 133 M dwarfs using K-band (2.0-2.4\,micron, $S/N > 200$, $R\sim2700$) spectra, \cite{mann2013prospecting}'s work with $1300<R<2000$ optical and infrared spectra, and \cite{terrien2015near}'s catalog of 886 M dwarfs in the full NIR (0.8-2.4 micron, $R\sim2000$). More recently, works have moved to higher resolution, with publications such as \cite{veyette2017physically} providing an analysis of 29 M dwarfs from NIRSPEC (Keck II) Y-band ($\sim 1\mu m $, $R\sim 25\,000$) spectra with the help of PHOENIX models. The high-resolution studies of \cite{onehag2012m}, using CRIRES $R\sim50\,000$ J-band spectra, as well as the more recent characterization by \cite{passegger2019carmenes} of 300 stars, by fitting PHOENIX models to their high-resolution ($R=80\,000-100\,000$) CARMENES optical and near-infrared spectra, further expanded our understanding of the spectra of these stars.
Our work positions itself as a continuation and extension of these works, providing a new pipeline for spectroscopic parameter derivation from the mid-high resolution ($R\sim 22\,000$) H-band (1.5-1.7\,micron) stellar spectra of the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE,][]{majewski2017apache}.
This paper is structured as follows: Section \ref{Data} contains a description of both APOGEE and our sample stars, detailing the available previous literature analysis of them; Section \ref{method} details the methodology used for our characterization of stellar spectra; Section \ref{Results} contains our main results, including both an H-R diagram of our 313 characterized stars and an example of a synthesized stellar spectrum; Section \ref{Discussion} includes our analysis of the derived parameters, including literature comparisons; and Section \ref{Conclusions} details our main conclusions and future possibilities for the pipeline.
This work comes as a follow-up to \cite{sarmento2020derivation}, using similar methods and expanding the parameter space of its analysis into late-K and early-M dwarf stars.
\section{Data \label{Data}}
All M dwarf spectra used for stellar characterization in this paper come from the APOGEE survey. APOGEE is an H-band (1.5-1.7 micron) Sloan Digital Sky Survey program that obtains $R \sim 22\,500$ stellar spectra with a 300-fiber spectrograph. It is split between APOGEE-N, using the Sloan 2.5\,m telescope at the Apache Point Observatory in New Mexico \citep{gunn20062}, and APOGEE-S, which uses the 2.5\,m duPont telescope at the Las Campanas Observatory in Chile \citep{bowen1973optical}. It targets mostly red giants and provides public spectra for more than 200\,000 stars in its Data Release 14 \citep[DR14,][]{holtzman2018apogee}. APOGEE has also observed FGK and M dwarfs for calibration purposes or as part of ancillary programs \citep{zasowski2013target}.
Their spectroscopic parameters (effective temperature, surface gravity, microturbulence, macroturbulence, rotation, overall metal abundance $[M/H]$, relative $\alpha$-element abundance $[\alpha/M]$ (determined by simultaneously fitting lines of O, Mg, Si, S, Ca, and Ti), and carbon $[C/M]$ and nitrogen $[N/M]$ abundances) have been derived with the APOGEE Stellar Parameter and Chemical Abundances Pipeline \citep[ASPCAP,][]{perez2016aspcap}. ASPCAP also determines abundances for 24 different elements. APOGEE's Data Release 16 \citep[DR16,][]{APOGEE_Dr16, 2020ApJS..249....3A,2020AJ....160..120J} is the most recent release of APOGEE data, and it is the first one to include results obtained with APOGEE-2. Spectra for more than 430\,000 stars are included in this data release. The previously released data were analyzed again, with small changes to the data processing and ASPCAP analysis procedures. Compared with previous data releases, the most important of these changes is an expansion of the parameter space of the characterized stars, with published calibrated $\log g$ values up to 6.0\,dex rather than 4.5\,dex, and $T_\mathrm{eff}$ down to 3200\,K. ASPCAP works by searching and interpolating a grid of synthetic spectra to find the best match for each observed spectrum, adopting the parameters of the synthetic spectrum as the preliminary parameters for each star. These parameters are then calibrated to follow the theoretical models, with the calibrated values being the final parameters for each star.
In addition to the spectra characterized with ASPCAP DR16, we decided to apply our method to the spectra of the same stars as published in APOGEE DR14 \citep{holtzman2018apogee}. These results are included in the paper for reference and comparison between the two data releases, and because they were the main results of the first author's thesis. The main differences between the two data releases, in addition to small cleanups of the spectra such as the removal of cosmic rays, are the parameter calibrations, which, for $\log g$ and $T_\mathrm{eff}$, are now extended to better characterize the M dwarf parameter range.
\subsection{Sample selection}
All stars in our sample were chosen as part of APOGEE's ancillary M dwarf program. The goal of this program was to constrain the rotational velocities and compositions of over 1400 M dwarfs and to detect their low-mass companions through RV variability, as published in \cite{deshpande2013sdss}. The targets are drawn primarily from two sources, the LSPM-North catalog of nearby stars \citep{lepine2005catalog} and the catalog of nearby M dwarfs by \cite{lepine2011all}. In order to obtain their final selection, magnitude and color cuts ($7 < H < 12$; $V-K > 5.0$; $0.4 < J-H < 0.65$; $0.1 < H-K_s < 0.42$) were applied to stars from those catalogs that fell on APOGEE's observation fields. In addition to both of these proper-motion selected catalogs, additional known planet hosts and other stars with available rotational speed, metallicity, or radial velocity estimates were also included for calibration purposes. Although \cite{deshpande2013sdss} published rotational velocities for these M dwarfs, they were unable to provide stellar parameters for all stars in the sample due to the inability to model the strong $H_{2}O$ bands found in the spectra.
The sample analyzed in this work was filtered by $S/N>200$ to remove the lowest-quality spectra from the sample. We chose this value as a limit because it both provided us with enough stars to perform a statistical analysis and kept the computational analysis time practical. The distribution of APOGEE's reported $S/N$ for the stars in our final sample is shown in the bottom panel of Fig. \ref{Histograms_Mdwarfs}. In addition to this $S/N$ selection, 21 additional stars characterized by \cite{Souto2020} and 9 stars with confirmed exoplanets were included in the sample, despite having a lower $S/N$ than the cutoff applied to the rest of our analyzed stars. In order to test the lower bounds of our method, 7 M dwarfs with ASPCAP $T_\mathrm{eff} < 3000\,$K were included in the sample as well. Our final M dwarf sample contains 313 stars.
\subsection{Sample characterization}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{SNR_Mag_Mdwarfs}}
\caption{Histogram presenting both the H magnitude and the $S/N$ (from APOGEE) of our M dwarf sample stars.}
\label{Histograms_Mdwarfs}
\end{figure}
We display both the magnitude distribution and the APOGEE-estimated $S/N$ of our 313 sample stars in Fig. \ref{Histograms_Mdwarfs}. The figure shows that most of our sample has high $S/N$ spectra, and that fainter stars tend to have lower $S/N$, as expected. The small clump of brighter ($H$ magnitude 4-6), $S/N<300$ stars corresponds to the M dwarfs from \cite{Souto2020} that have been characterized with interferometry.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Declination.pdf}}
\caption{Map showing the location in the sky of our sample M dwarfs. The Kepler target field is highlighted in red.}
\label{declination_Mdwarfs}
\end{figure}
We display the location in the sky of all M dwarfs in the sample, showing both their Right Ascension (RA) and Declination (Dec), in Fig. \ref{declination_Mdwarfs}. Most of the stars are located in the northern hemisphere (Dec $> 0$\,deg), as they were observed by APOGEE-2N. Nineteen stars in our sample are located in the Kepler field \citep{latham2005kepler}, fourteen of which have been observed by the Kepler Space Telescope.
\subsection{Available literature values for the sample\label{SampleParams}}
We cross-matched our M dwarf sample with different surveys and works related to spectroscopic stellar characterization. We found 8 works with stars in common with our sample that serve as our literature comparisons: ASPCAP \citep{perez2016aspcap}, \cite{terrien2015near}, \cite{gaidos2014trumpeting}, \cite{Hejazi2019Chemical}, \cite{Souto2020}, \cite{passegger2019carmenes}, \cite{rajpurohit2018carmenes}, and \cite{rajpurohit2018apogee}. This subsection explains these works and their determined parameters for the stars in common with our M dwarf sample.
\subsubsection{ASPCAP}
ASPCAP \citep{perez2016aspcap}, as the pipeline dedicated to the derivation of spectroscopic parameters for stars observed with APOGEE, has published estimates for the stars in our sample as well. The ASPCAP pipeline provides both a preliminary set of output parameters for each observed star and a final set of calibrated parameters focused on its main target of giant stars. However, the calibrated parameter ranges do not include all M dwarfs in our sample, as the lower boundary for $T_\mathrm{eff}$ is 3200\,K. The ASPCAP values can still be an important reference for our pipeline, as they have been shown to be consistent by \cite{schmidt2016examining}, who performed an extensive analysis of both the $T_\mathrm{eff}$ and $[M/H]$ values ASPCAP produced for 3834 M dwarfs, comparing them to photometric and interferometric parameters for the same stars. Based on interferometric data from \cite{Boayajian2013}, color-$T_\mathrm{eff}$ relations from \cite{mann2015constrain}, and the infrared flux technique published by \cite{casagrande2008m}, they concluded that the ASPCAP $T_\mathrm{eff}$ is accurate to about 100\,K for stars between 3550-4200\,K, and that the $[M/H]$ values are accurate to about 0.18\,dex. This means that, despite ASPCAP's main focus being giant stars and not M dwarfs, we can take its $T_\mathrm{eff}$ and $[M/H]$ measurements as consistent for the M dwarfs in our sample, and we will thus use them as comparison values for our analysis.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Teff_MH_ASPCAP.pdf}}
\caption{ASPCAP $T_\mathrm{eff}$ and $[M/H]$ values for the M dwarfs in the sample, with overplotted PARSEC isochrones \citep{bressan2012parsec} for an age of 5\,Gyr and different $\log g$ values (scale is in dex). Points are color-coded based on the $\log g$ value published for each star.}
\label{Mdwarfs_ASPCAP_params}
\end{figure}
From the 313 stars in our sample, ASPCAP has published final, calibrated $[M/H]$, $T_\mathrm{eff}$, and $\log g$ values for 283 stars. These values are displayed in Fig. \ref{Mdwarfs_ASPCAP_params}, with the color indicating the ASPCAP $\log g$ value for each star. The figure shows that their $T_\mathrm{eff}$ values are concentrated around 3600-3800\,K, and most $[M/H]$ values lie between -0.4 and 0.0\,dex. The metallicity distribution follows an approximately Gaussian shape, with an average value of -0.29\,dex and a standard deviation of 0.27\,dex. These results show that a significant fraction of the stars in our sample are early M dwarfs. According to the ASPCAP results, our sample is, on average, more metal-poor than the solar neighborhood, which has mean metallicity values around -0.20\,dex \citep{holmberg2007geneva}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Teff_MH_4_sources.pdf}}
\caption{$T_\mathrm{eff}$ (TOP, a) and $[M/H]$ (BOTTOM, b) values from 4 literature sources: \cite{gaidos2014trumpeting} (black, $T_\mathrm{eff}$ for 40 stars, $[Fe/H]$ for 24 stars), \cite{Hejazi2019Chemical} (orange, filled bars, 19 stars), \cite{terrien2015near} (red, 45 stars), and an aggregate of \cite{souto2017}, \cite{souto2018stellar}, and \cite{Souto2020} (blue, 24 stars). The number of stars is displayed on a logarithmic scale for visibility.}
\label{3sources}
\end{figure}
\subsubsection{\cite{terrien2015near}}
From the 886 stars characterized in \cite{terrien2015near}, 45 are included in our sample. Their spectra have wide coverage ($0.8-2.4\,\mathrm{\mu m}$) but low resolution ($R\sim2000$). Their $T_\mathrm{eff}$ values are determined by the analysis of water indices in the K band and have uncertainties above 100\,K, and their $[M/H]$ values are measured using empirical spectroscopic calibrations with uncertainties around 0.12\,dex. Their published parameters are shown in Fig. \ref{3sources}. Comparing their parameter distribution with the full ASPCAP sample, we find that \cite{terrien2015near} is more representative of later-type M dwarfs, having a significant fraction of stars with published $T_\mathrm{eff}$ between 3200-3500\,K.
\subsubsection{\cite{souto2017}, \cite{souto2018stellar}, \cite{Souto2020}}
We find that some of the best characterized stars in the APOGEE sample are the M dwarfs analyzed by \cite{souto2017}, \cite{souto2018stellar}, and \cite{Souto2020}. These works use spectral synthesis and a combination of \textit{MARCS} and \textit{PHOENIX} model atmospheres to determine both stellar parameters and chemical abundances for 8 different elements from APOGEE spectra. Their reported uncertainties are $T_\mathrm{eff}\pm100\,$K, $\log g\pm0.2\,$dex, and $[Fe/H]\pm0.1\,$dex. A total of 24 different M dwarfs were characterized by this method: Kepler 138 and Kepler 186 in \cite{souto2017}, Ross 128 in \cite{souto2018stellar}, and 21 other stars in \cite{Souto2020}. Among all the stars in our sample previously characterized in the literature, the most metal-poor one is 2M03150093+0103083, with $[Fe/H]=-0.91$\,dex, characterized by \cite{Souto2020}.
\subsubsection{\cite{gaidos2014trumpeting}}
\cite{gaidos2014trumpeting} analyzed optical spectra of nearby K and M dwarfs with $R\sim1000$ and determined their spectroscopic parameters by fitting model spectra to observations. Their estimated errors vary between stars, but are around 100\,K for $T_\mathrm{eff}$ and 0.12\,dex for $[Fe/H]$. From the 2970 stars with $d<50$\,pc observed by \cite{gaidos2014trumpeting}, 40 are included in our sample, and their parameters are shown in Fig. \ref{3sources}. The reduced number of stars, combined with the lack of $[Fe/H]$ values for 16 of them, does not allow for an extensive statistical analysis of this stellar subsample. Nevertheless, it is possible to notice that \cite{gaidos2014trumpeting} reported $T_\mathrm{eff}$ values between 3100-3500\,K for a greater number of stars in this subsample than in the main ASPCAP one, helping us characterize the lower $T_\mathrm{eff}$ stars in our sample. \cite{gaidos2014trumpeting} also found metallicities for stars in our sample across a larger parameter space ($-0.6<[Fe/H]<+0.4$\,dex) than ASPCAP.
\subsubsection{\cite{Hejazi2019Chemical}}
\cite{Hejazi2019Chemical} published chemical properties for 1544 high proper-motion M dwarfs and subdwarfs derived from low-medium resolution ($R\sim 2000-4000$) optical spectra. A template-fit method was developed, based on the measurement of TiO and CaH molecular bands near $7000\,\AA$. The analysis of 48 binary systems suggests precision levels of $\pm 0.22$\,dex for $[M/H]$, $\pm 0.08$\,dex for $[\alpha/Fe]$, and $\pm 0.16$\,dex for the combined index $[M/H] + [\alpha/Fe]$. We find 19 stars in common between our observed sample and their study. From Fig. \ref{3sources}, we can verify that the characterized stars are dispersed across a wide range of temperatures, with a stronger concentration of stars around $T_\mathrm{eff} = 3600$\,K. As for the metallicities, we find stars both above and below solar neighborhood values, but no star within $-0.1<[M/H]<+0.1\,$dex, which must be considered when comparing our results to theirs.
\subsubsection{\label{passegger}\cite{passegger2019carmenes}}
We cross-matched our sample with the 300 stars characterized by \cite{passegger2019carmenes} and found 14 stars in common. As mentioned in Section \ref{intro}, this work derived parameters for M dwarfs through a $\chi^2$ minimization, fitting \textit{PHOENIX-SESAM} models to high-resolution ($R\sim90\,000$) CARMENES spectra in both the visible and infrared wavelength ranges. Their reported uncertainties depend on each star's rotational velocity and, for NIR spectra, are as low as 51\,K for $T_\mathrm{eff}$, 0.07\,dex for $\log g$, and 0.16\,dex for $[Fe/H]$.
\subsubsection{\cite{rajpurohit2018carmenes}}
Another study based on CARMENES spectra is \cite{rajpurohit2018carmenes}. This work focused on matching BT-Settl model spectra to CARMENES optical and near-infrared observations of 292 M dwarfs. Since this work determines parameters by matching observed spectra to a grid of previously computed models, the reported uncertainties correspond to the grid step sizes, 100\,K for $T_\mathrm{eff}$ and 0.1\,dex for both $\log g$ and $[M/H]$. We find 12 stars in common between our sample and the one characterized by \cite{rajpurohit2018carmenes}, all of which are part of the 14 stars later characterized by \cite{passegger2019carmenes} and listed in Section \ref{passegger}.
\subsubsection{\cite{rajpurohit2018apogee}}
The same technique was applied to 45 M dwarfs observed with APOGEE in \cite{rajpurohit2018apogee}. We have 8 stars in common, six of which have $T_\mathrm{eff}<3300\,$K and are among the coldest stars in our sample. The reported uncertainties for the parameters of these stars are $T_\mathrm{eff}\pm100\,$K, $\log g\pm0.3-0.5\,$dex, and $[M/H]\pm0.05\,$dex. A comparison between our derived parameters and the previously available ones for these stars is shown in Fig. \ref{Raj_Pass_comp}.
Another example of a large-scale, high-resolution study of spectroscopic parameters for M dwarfs in the infrared is \cite{lopez2019effective}, but, unfortunately, it shares no stars with APOGEE. The best spectroscopic parameters available in the literature for our sample stars are the ones cited in this section and the ASPCAP values. Therefore, comparisons between our results and literature values are made against the works mentioned above.
\section{\label{method} Methodology}
This section describes the steps in our method that are specific to M dwarfs. The full rundown of our pipeline is available in \cite{sarmento2020derivation}. In summary, our first step is to normalize the observed APOGEE spectrum of each star using the template method and a synthetic spectrum. We then use custom software built on \textit{iSpec} \citep{blanco2014determining,ispec2019sbc} and \textit{Turbospectrum} \citep{plez1998,plez2012turbospectrum} to derive the atmospheric parameters through a $\chi^2$ minimization algorithm that compares observed and synthetic spectra. These synthetic spectra are created using \textit{MARCS} \citep{MARCS} stellar atmospheric models and a custom line list. The best fitting synthetic spectra are selected through $\chi^2$ minimization, using MPFIT \citep{markwardt2009non} internally through \textit{iSpec}. Finally, the parameters of the best matching synthetic spectrum are taken as our measurements of the stellar parameters of each star.
We created routines for the download, format conversion, and normalization of the sample spectra, and have made the pipeline as automated as possible, while the $\chi^2$ minimization and the synthesis of the best fitting spectra are done with \textit{Turbospectrum}, controlled by \textit{iSpec} python code. The custom routines are programmed in Python 3 for the 2020 distribution of iSpec. They are included in a public repository together with the full results, line lists, line masks, and everything required to replicate the process described in this paper.
\subsection{\label{lineMdwarf} Line list}
The line list used for M dwarf spectral syntheses is more complex than the one used for the spectra of FGK stars, as these stars have many water ($\textrm{H}_{2}\textrm{O}$) lines that are not visible in hotter stars. These water lines form a thick blanket that makes the flux continuum harder to identify, as shown in Fig. \ref{teff_comp}. We use the water line list first published in \cite{barber2006high}, retrieved from Bertrand Plez's personal website \footnote{http://www.pages-perso-bertrand-plez.univ-montp2.fr/}. We note here that a more recent work on water lines has been published by \cite{polyansky2018exomol}, but preference was given to the water line list from \cite{barber2006high}, as it was already formatted for Turbospectrum and of a more computationally manageable size. A total of 1\,263\,825 water lines were included in the M dwarf line list, along with the 85\,334 lines from other molecules and atoms already used in \cite{sarmento2020derivation}, made from a combination of the Vienna Atomic Line Database \citep[VALD,][]{piskunov1995vald} and the APOGEE line list \citep{shetrone2015sdss}. This resulted in a final list of 1\,349\,159 lines.
\subsection{\label{MdwarfNorm} Spectra Normalization}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Synthetics_ispec_comparison_2.pdf}}
\caption{Comparison between synthetic spectra made with different $T_\mathrm{eff}$. The spectra correspond to syntheses made with $T_\mathrm{eff}$ of 4200\,K (gray), 3900\,K (red), 3600\,K (orange), 3300\,K (orange), and 3000\,K (black).}
\label{teff_comp}
\end{figure}
As we can see in Fig. \ref{teff_comp}, a $300\,\mathrm{K}$ difference in $T_\mathrm{eff}$ can mean a 0.05 difference in the continuum flux of a star, which is enough to turn an accurate fit into a poor one. Therefore, in order to accurately normalize an M dwarf's H-band spectrum, an accurate initial input $T_\mathrm{eff}$ for that star is required. Normalizations with templates synthesized with $T_\mathrm{eff}$ closer to the star's real $T_\mathrm{eff}$ will result in continuum values closer to the real ones, meaning the synthetic spectra will more closely resemble the observed ones. Therefore, the first important step towards deriving a star's atmospheric parameters is to have good estimates of its $T_\mathrm{eff}$ and metallicity.
Given the lack of a consistent literature source for the $T_\mathrm{eff}$ of our sample stars, and in order to minimize the effect of a visual choice of $T_\mathrm{eff}$ based on the normalized spectra, the spectrum of each star in our sample was normalized using synthetic spectra with 148 different combinations of $T_\mathrm{eff}$, $\log g$, and $[M/H]$, corresponding to values taken from PARSEC isochrones \citep{bressan2012parsec} for stars with ages between $10^9$ and $10^{10}$ years. This approach avoids unrealistic combinations of output parameters. We selected isochrones with a 0.2\,dex spacing in $[M/H]$, rounding $T_\mathrm{eff}$ values to the nearest multiple of 100\,K and $\log g$ values to the nearest multiple of 0.1\,dex. These choices keep the number of normalizations computationally manageable while sampling our expected M dwarf parameter space evenly, as our sample stars are expected to have parameters approximating one of these combinations. The full list of parameter combinations is available in Appendix \ref{table:parameter_normalization}.
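As an illustration, and not part of the published pipeline code, the construction of this template grid can be sketched as follows. The snippet assumes a list of $(T_\mathrm{eff}, \log g, [M/H])$ triplets already read from the PARSEC isochrone tables; the variable names are hypothetical and the isochrone I/O is not shown.
\begin{verbatim}
# Sketch of the template-grid construction: round isochrone points
# onto the grid described above and drop duplicate combinations.
def build_template_grid(isochrone_points):
    # isochrone_points: iterable of (teff, logg, mh) triplets taken
    # from PARSEC isochrones with [M/H] already in 0.2 dex steps.
    grid = set()
    for teff, logg, mh in isochrone_points:
        grid.add((round(teff / 100.0) * 100.0,  # nearest 100 K
                  round(logg, 1),               # nearest 0.1 dex
                  round(mh, 1)))
    return sorted(grid)

# Example with made-up isochrone points:
points = [(3712.0, 4.87, 0.0), (3689.0, 4.91, 0.0),
          (3214.0, 5.04, -0.2)]
print(build_template_grid(points))
# -> [(3200.0, 5.0, -0.2), (3700.0, 4.9, 0.0)]
\end{verbatim}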
Afterwards, we compared each of the normalized spectra to the template used to create the normalization. By calculating and minimizing the $\chi^2$ across our line mask regions (see Section \ref{linemaskMd}) between the observed and the template spectra, the parameters of the best normalization template are determined. Fig. \ref{norm_chi2} displays the $\chi^2$ values for different normalization parameters for star 2M05201152+2457212. The figure shows how strongly the $\chi^2$ values depend on the normalization template, and how they can indicate the quality of the template choice. It is also clear that templates with similar parameters have similar $\chi^2$ values. This indicates that these values are close to the real stellar parameters, and that the normalization with the template with $T_\mathrm{eff}=3700\,\mathrm{K}$, $\log g=4.7\,\mathrm{dex}$, and $[M/H] = 0.0\,\mathrm{dex}$ can be used for this star in the next steps of the pipeline.
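A minimal sketch of this template-selection step, assuming flux arrays sampled on a common wavelength grid and a boolean line-mask array (all names are hypothetical), is:
\begin{verbatim}
import numpy as np

def chi2(obs_flux, template_flux, mask):
    # Chi-square between an observed spectrum, normalized with a given
    # template, and that same template, inside the line-mask regions.
    d = obs_flux[mask] - template_flux[mask]
    return float(np.sum(d * d))

def best_template(normalized_fluxes, template_fluxes, params, mask):
    # normalized_fluxes[i] is the observed spectrum normalized with
    # template i; params[i] is its (teff, logg, mh) triplet.
    scores = [chi2(o, t, mask)
              for o, t in zip(normalized_fluxes, template_fluxes)]
    i = int(np.argmin(scores))
    return params[i], scores[i]
\end{verbatim}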
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Normalization_dr16_ChiSQ.pdf}}
\caption{TOP (a): Scatter plot displaying the $\chi ^2 $ values measured for different $T_\mathrm{eff}$ and metallicity combinations for templates for the normalization of star 2M05201152+2457212. BOTTOM (b): Similar plot with $T_\mathrm{eff}$ and $\log g$ for the same star.}
\label{norm_chi2}
\end{figure}
\subsection{\label{linemaskMd} Line mask}
As explained in \cite{sarmento2020derivation}, a line mask instructs \textit{iSpec} on which spectral regions should be considered for comparing observed and synthetic spectra. A line mask specifically made for M dwarf stars was therefore required for the code to match an appropriate model to the observations. The star Ross128 (2M11474440+0048164), as an APOGEE M dwarf with $S/N\sim229$ that has previously been characterized by \cite{souto2018stellar}, was chosen as the basis for this line mask. A synthetic spectrum was generated using the parameters found for the star by \cite{souto2018stellar} ($T_\mathrm{eff} = 3231\,\mathrm{K}$, $[Fe/H] = 0.03\,\mathrm{dex}$, and $\log g=4.96\,\mathrm{dex}$). This synthetic spectrum was used to normalize the observed one and was then compared with it. Regions and spectral lines with a good agreement between both spectra were selected for the line mask displayed in Fig. \ref{MdwarfMask}. The areas where the synthesis did not reproduce the observed spectrum well were discarded from the final line mask shown. We note that, despite the slight errors in the Al lines around 1675\,nm, we included these lines in our line mask. This was done because the two Al lines are the strongest lines across our wavelength range, and they are very well reproduced in other stars (see, for example, Figs. \ref{Mdwarf_1}, \ref{Mdwarf_2}, and \ref{Mdwarf_5} in the appendix).
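This region selection can be illustrated with the following sketch, which flags fixed-width windows where the synthetic spectrum reproduces the observation to within a tolerance. The window width and tolerance are illustrative values only, not the criteria actually used to build the mask.
\begin{verbatim}
import numpy as np

def select_mask_windows(wave, obs, synth, width=0.5, tol=0.02):
    # Split the spectrum into windows of `width` nm and keep those
    # where the mean absolute flux difference between observation
    # and synthesis is below `tol`.
    windows = []
    edges = np.arange(wave.min(), wave.max() + width, width)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (wave >= lo) & (wave < hi)
        if sel.any() and np.mean(np.abs(obs[sel] - synth[sel])) < tol:
            windows.append((lo, hi))
    return windows
\end{verbatim}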
\subsection{Free parameters}
The free parameters used for the syntheses of M dwarf spectra are the same as the ones used in \cite{sarmento2020derivation} for the spectra of FGK stars: effective temperature ($T_\mathrm{eff}$), surface gravity ($\log g$), metallicity ($[M/H]$), microturbulent velocity ($v_{mic}$), projected rotational velocity ($v \sin i$), and spectral resolution. The input values used for $T_\mathrm{eff}$, $\log g$, and $[M/H]$ are the ones used to create the template spectra for the normalization (see Section \ref{MdwarfNorm}). The initial values for the remaining free parameters are 1.06\,km/s for $v_{mic}$, 1.6\,km/s for $v \sin i$, and 22\,000 for the resolution. We note that, despite leaving the resolution as a free parameter, our output values for it remain close to 22\,000.
However, an important change required to keep the output parameters limited to realistic values is the restriction of the possible output parameters to a given parameter space. Restricting the parameters to an interval around the starting values allows the code to take the first guess given by the normalization template into account while still fine-tuning the parameters towards optimal values for $\chi^2$ minimization. The output parameters are limited to the initial template parameters $\pm 350\,$K (for $T_\mathrm{eff}$), $\pm 0.2\,$dex ($\log g$), and $\pm 0.3\,$dex ($[M/H]$). We found these ranges through trial and error, testing the method with multiple stars. They balance the information given by the normalization template against the freedom required for the pipeline to find the best fit for a given spectrum.
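Schematically, and assuming a simple dictionary layout rather than the actual \textit{iSpec} interface, these restrictions amount to:
\begin{verbatim}
# Sketch of the output-parameter restriction around the template
# values; the layout is illustrative, not the real iSpec interface.
def fit_bounds(teff0, logg0, mh0):
    return {
        "teff": (teff0 - 350.0, teff0 + 350.0),  # +/- 350 K
        "logg": (logg0 - 0.2, logg0 + 0.2),      # +/- 0.2 dex
        "mh":   (mh0 - 0.3, mh0 + 0.3),          # +/- 0.3 dex
    }

print(fit_bounds(3700.0, 4.7, 0.0))
\end{verbatim}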
\subsection{Error estimation \label{errors}}
Due to the way our pipeline is set up, we have two major possible error sources: one associated with a poor choice of normalization template, and one connected to Turbospectrum and the derivation of the best fit synthetic spectrum for each star. In this section, we try to quantify the errors associated with both of these sources across our M dwarf parameter space.
\subsubsection{Normalization template errors \label{NormErrors}}
As the shape of the normalized spectra, and consequently the derived parameters, are strongly influenced by the normalization template used, we decided to test our pipeline by running multiple normalizations for the same observed star. A sample of 4 test stars was selected, covering a wide range of stellar parameters. For each test star, we ran the pipeline with all normalization templates with $\chi^2$ within a $10\%$ margin of the best template found for that star, using the $\chi^2$ values as an indication that the template is appropriate for the studied star. We ran the pipeline 20 times for each normalization template, injecting Gaussian noise at the level implied by the reported $S/N$. The noise is injected so the deterministic pipeline does not return the same output parameters on every iteration, allowing us to gauge how the parameters can change across multiple observations of the same star. We note that these tests result in an underestimation of the actual errors, as errors affecting the continuum normalization will produce correlations across multiple pixels; they therefore serve as a minimum estimate of the actual errors in our measurements. Table \ref{table:diffnormalizations} summarizes our results for the test sample of 4 stars, including the average and standard deviation obtained across the 20 runs of the pipeline for each normalization template per star.
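The template short-listing used in these tests can be sketched as follows, with template_scores standing in for a hypothetical list of (parameters, $\chi^2$) pairs produced by the normalization step:
\begin{verbatim}
# Keep every normalization template whose chi-square lies within
# 10% of the best value found for the star.
def templates_within_margin(template_scores, margin=0.10):
    best = min(score for _, score in template_scores)
    return [params for params, score in template_scores
            if score <= best * (1.0 + margin)]
\end{verbatim}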
The displayed results demonstrate that the output parameters can vary significantly with the template used for the normalization of the spectra. They also demonstrate the difficulty in assessing which template works best for a given star, as $\chi^2$ values can be very similar even for spectra with varying parameters. The standard deviation measured within a single template also shows the consistency of the pipeline itself for a given spectrum, which is further explored in the following section.
\subsubsection{Errors associated with $S/N$}
In order to estimate the errors associated with the $S/N$ inherent to the spectra and with the method used to find the best fit for each normalized spectrum, a sample of 20 M dwarfs with different $S/N$ and stellar parameters was selected. The pipeline was run 20 times per star, with Gaussian noise corresponding to the reported $S/N$ of each star injected into the observed spectra before normalizing them with the method described in Section \ref{MdwarfNorm}. Similar to the tests described in Section \ref{NormErrors}, these are also underestimations of the possible errors, as only uncertainties associated with individual pixels are taken into account. In order to have a more statistically relevant sample size, 80 additional iterations of the pipeline were run for each of 4 selected stars that cover a wide parameter range, for a total of 100 pipeline iterations per star. The results of these tests are summarized in Table \ref{table:synthmatchtest}.
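A minimal sketch of the noise-injection step, assuming flux holds the observed flux array and snr the APOGEE-reported $S/N$ (both hypothetical names), is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def inject_noise(flux, snr):
    # Draw independent Gaussian noise per pixel with
    # sigma = flux / (S/N). Because the draws are uncorrelated
    # between pixels, tests based on this step underestimate
    # correlated continuum-normalization errors.
    flux = np.asarray(flux, dtype=float)
    return flux + rng.normal(0.0, flux / float(snr))
\end{verbatim}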
As shown in the table, the output of the pipeline is very consistent across multiple iterations, with standard deviations below 12\,K in $T_\mathrm{eff}$, 0.03\,dex in $[M/H]$, and 0.09\,dex in $\log g$ for all analyzed stars. There is no strong $S/N$ effect on the measured standard deviations, as the values remain very consistent across all stars in the sample. These tests suggest that most of the possible errors in the final results come from the templates used for the normalization of each observed spectrum, and show that $\log g$ is the parameter with the largest associated error.
Given the errors measured in the displayed tests and the size of the grid used for the selection of the normalization template, we estimate the overall errors of our pipeline to be around $\pm 100\,$K for $T_\mathrm{eff}$, $\pm 0.1$\,dex for $[M/H]$, and $\pm 0.2\,$dex for $\log g$, with a large fraction of these errors coming from the inherent uncertainty in the choice of normalization template for a given star. This choice is limited by the PARSEC evolutionary code, as it is the only publicly available one covering the M dwarf parameter space with good coverage. Another possible error source is the line mask used for the $\chi^2$ calculation, as $\chi^2$ values can be very similar for different normalization templates, and changes in the line mask can result in the selection of different templates as well.
\begin{landscape}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{2M11474440+0048164_ispec_comparison.pdf}}
\caption{Comparison between the observed spectrum of Ross 128 (2MASS J11474440+0048164), normalized using the template method (black), and a synthetic spectrum created using the literature \citep{souto2018stellar} parameters for that star (red). The parameters used to generate the synthetic spectrum were $T_\mathrm{eff}=3231\,\mathrm{K}$,$\log g=4.96\,\mathrm{dex}$, $[M/H] = +0.03\,\mathrm{dex}$. The ASPCAP preliminary parameters for this star are $T_\mathrm{eff}=3212\,\mathrm{K}$,$\log g=4.45\,\mathrm{dex}$, $[M/H] = -0.60\,\mathrm{dex}$. The areas in gray represent the ones included in the line mask created from these spectra.}
\label{MdwarfMask}
\end{figure}
\end{landscape}
\begin{table*}
\caption{Average and standard deviation for stellar parameters measured for a sample of 4 stars, using different normalization templates and across 20 iterations of the pipeline for each star/normalization template combination. All displayed $T_\mathrm{eff}$ values are in K, while $\log g$ and $[M/H]$ are in dex.}
\label{table:diffnormalizations}
\centering
\begin{tabular}{c c c c | c c c c}
\hline\hline
{} & \multicolumn{3}{c}{Normalization} & \multicolumn{4}{c}{iSpec Output}\\
Star & $T_\mathrm{eff}$ & $\log g$ & $[M/H]$ & $T_\mathrm{eff}$ & $\log g$ & $[M/H]$ & $\chi^2$ \\
\hline
2M00243855+5119224 & 3700 & 4.9 & -0.6 & $ 3681 \pm 3 $ & $ 5.16 \pm 0.02 $ & $ -0.49 \pm 0.01 $ & $ 0.0455 \pm 0.0007 $ \\
2M00243855+5119224 & 3700 & 4.9 & -0.8 & $ 3671 \pm 2 $ & $ 5.12 \pm 0.02 $ & $ -0.55 \pm 0.02 $ & $ 0.0464 \pm 0.0006 $ \\
2M00243855+5119224 & 3700 & 5.0 & -0.8 & $ 3674 \pm 4 $ & $ 5.14 \pm 0.02 $ & $ -0.54 \pm 0.02 $ & $ 0.0462 \pm 0.0008 $ \\
\hline
2M01195227+8409327 & 3100 & 5.0 & -0.2 & $ 3113 \pm 7 $ & $ 4.99 \pm 0.01 $ & $ -0.22 \pm 0.03 $ & $ 0.196 \pm 0.002 $ \\
2M01195227+8409327 & 3100 & 5.1 & -0.2 & $ 3134 \pm 10 $ & $ 5.07 \pm 0.01 $ & $ -0.29 \pm 0.04 $ & $ 0.198 \pm 0.003 $ \\
2M01195227+8409327 & 3200 & 5.0 & -0.2 & $ 3201 \pm 4 $ & $ 5.05 \pm 0.04 $ & $ -0.22 \pm 0.02 $ & $ 0.201 \pm 0.003 $ \\
2M01195227+8409327 & 3200 & 5.1 & -0.4 & $ 3202 \pm 1 $ & $ 5.13 \pm 0.01 $ & $ -0.38 \pm 0.01 $ & $ 0.199 \pm 0.003 $ \\
\hline
2M03431519+5006558 & 3300 & 4.8 & 0.0 & $ 3296 \pm 1 $ & $ 4.93 \pm 0.01 $ & $ 0.03 \pm 0.01 $ & $ 0.191 \pm 0.002 $ \\
2M03431519+5006558 & 3300 & 4.9 & 0.0 & $ 3301 \pm 1 $ & $ 4.94 \pm 0.01 $ & $ 0.03 \pm 0.01 $ & $ 0.190 \pm 0.002 $ \\
2M03431519+5006558 & 3300 & 5.0 & -0.2 & $ 3275 \pm 10 $ & $ 4.88 \pm 0.04 $ & $ -0.06 \pm 0.05 $ & $ 0.204 \pm 0.004 $ \\
2M03431519+5006558 & 3400 & 4.9 & -0.2 & $ 3352 \pm 10 $ & $ 5.02 \pm 0.03 $ & $ -0.05 \pm 0.03 $ & $ 0.188 \pm 0.003 $ \\
\hline
2M23460112+7456172 & 3600 & 5.1 & -0.8 & $ 3578 \pm 4 $ & $ 5.20 \pm 0.01 $ & $ -0.55 \pm 0.03 $ & $ 0.069 \pm 0.001 $ \\
2M23460112+7456172 & 3700 & 5.0 & -0.8 & $ 3678 \pm 4 $ & $ 5.20 \pm 0.01 $ & $ -0.64 \pm 0.03 $ & $ 0.072 \pm 0.001 $ \\
2M23460112+7456172 & 3700 & 5.1 & -1.0 & $ 3675 \pm 4 $ & $ 5.20 \pm 0.01 $ & $ -0.70 \pm 0.03 $ & $ 0.072 \pm 0.001 $ \\
2M23460112+7456172 & 3800 & 4.9 & -0.6 & $ 3770 \pm 2 $ & $ 5.10 \pm 0.01 $ & $ -0.68 \pm 0.02 $ & $ 0.080 \pm 0.001 $ \\
2M23460112+7456172 & 3900 & 4.9 & -0.8 & $ 3845 \pm 3 $ & $ 5.10 \pm 0.01 $ & $ -0.82 \pm 0.02 $ & $ 0.088 \pm 0.001 $ \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{Average and standard deviation for stellar parameters measured across multiple iterations for 20 selected stars representative of the full sample. 20 iterations were made for each of 16 test stars, while 100 iterations were run for each of 4 selected stars. All displayed $T_\mathrm{eff}$ values are in K, while $\log g$ and $[M/H]$ are in dex. The ``Runs'' column displays the number of pipeline iterations made for each spectrum.}
\label{table:synthmatchtest}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
Star & $T_\mathrm{eff}$ & $\log g$ & $[M/H]$ & $S/N$ & Runs\\
\hline
2M00391896+5508132 & $ 3329 \pm 5 $ & $ 5.05 \pm 0.02 $ & $ -0.72 \pm 0.04 $ & 767 & 20\\
2M02465257+5619505 & $ 3506 \pm 6 $ & $ 4.69 \pm 0.02 $ & $ -0.29 \pm 0.02 $ & 346 & 20\\
2M04244284+4537062 & $ 3859 \pm 4 $ & $ 4.68 \pm 0.03 $ & $ -0.44 \pm 0.01 $ & 257 & 20\\
2M04422854+5818015 & $ 3312 \pm 5 $ & $ 5.05 \pm 0.01 $ & $ -0.26 \pm 0.03 $ & 615 & 20\\
2M05201152+2457212 & $ 3667 \pm 7 $ & $ 4.59 \pm 0.03 $ & $ -0.06 \pm 0.03 $ & 677 & 20\\
2M06070493+1403109 & $ 3292 \pm 3 $ & $ 5.08 \pm 0.02 $ & $ -0.21 \pm 0.01 $ & 258 & 20\\
2M06572462+0651440 & $ 3307 \pm 8 $ & $ 5.03 \pm 0.03 $ & $ -0.24 \pm 0.04 $ & 197 & 20\\
2M08050361+4121251 & $ 3340 \pm 7 $ & $ 4.91 \pm 0.03 $ & $ -0.68 \pm 0.06 $ & 222 & 20\\
2M10562960+4858264 & $ 4063 \pm 12 $ & $ 4.76 \pm 0.02 $ & $ 0.21 \pm 0.01 $ & 328 & 20\\
2M11152550+0003159 & $ 3424 \pm 4 $ & $ 4.87 \pm 0.03 $ & $ -0.77 \pm 0.04 $ & 294 & 20\\
2M13552585+2556161 & $ 4013 \pm 4 $ & $ 4.51 \pm 0.01 $ & $ -0.48 \pm 0.01 $ & 177 & 20\\
2M14535251+1739448 & $ 3801 \pm 3 $ & $ 4.67 \pm 0.02 $ & $ -0.51 \pm 0.01 $ & 619 & 20\\
2M16495034+4745402 & $ 3754 \pm 6 $ & $ 4.69 \pm 0.02 $ & $ -0.01 \pm 0.01 $ & 1147 & 20\\
2M18055545+0316213 & $ 3573 \pm 2 $ & $ 4.63 \pm 0.01 $ & $ 0.00 \pm 0.01 $ & 229 & 20\\
2M19004176+0310312 & $ 3512 \pm 6 $ & $ 4.95 \pm 0.03 $ & $ -0.77 \pm 0.09 $ & 170 & 20\\
2M23134861+1227072 & $ 3560 \pm 2 $ & $ 5.00 \pm 0.01 $ & $ 0.00 \pm 0.01 $ & 172 & 20\\
\hline
2M03190939+0130543 & $ 2972 \pm 3 $ & $ 4.70 \pm 0.02 $ & $ 0.22 \pm 0.01 $ & 161 & 100 \\
2M04552111+5017249 & $ 3882 \pm 3 $ & $ 4.66 \pm 0.03 $ & $ -0.87 \pm 0.02 $ & 177 & 100 \\
2M09301445+2630250 & $ 3385 \pm 2 $ & $ 4.67 \pm 0.02 $ & $ 0.14 \pm 0.01 $ & 658 & 100 \\
2M21400112+5408179 & $ 3592 \pm 3 $ & $ 5.03 \pm 0.02 $ & $ -0.55 \pm 0.02 $ & 563 & 100 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{Output parameters for the 6 stars with ASPCAP $T_\mathrm{eff} \leqslant 3000$\,K. All $T_\mathrm{eff}$ values are in K, while $[M/H]$ and $\log g$ are in dex.}
\label{table:Coldest}
\centering
\begin{tabular}{c| c c c| c c c}
\hline\hline
{} & \multicolumn{3}{c}{iSpec Output} & \multicolumn{3}{c}{Normalization}\\
Star & $T_\mathrm{eff}$ & $\log g$ & $\textrm{[M/H]}$ & $T_\mathrm{eff}$ & $\log g$ & $\textrm{[M/H]}$ \\
\hline
2M02081366+4949023 & 2886 & 4.80 & 0.25 & 2900 & 5.0 & 0.4 \\
2M05392474+4038437 & 2619 & 4.93 & 0.28 & 2600 & 5.1 & 0.4 \\
2M06481555+0326243 & 2831 & 5.17 & -0.31 & 2800 & 5.2 & -0.2 \\
2M13481341+2336486 & 3003 & 5.09 & -0.21 & 3000 & 5.1 & -0.2 \\
2M14562713+1755001 & 3117 & 5.05 & -0.28 & 3100 & 5.0 & -0.2 \\
2M20032651+2952000 & 3027 & 4.76 & 0.40 & 3000 & 4.9 & 0.4 \\
\hline
\end{tabular}
\end{table*}
\subsection{Summary}
This section contains a small summary of the steps required to derive M dwarf stellar parameters using our pipeline.
\begin{itemize}
\item 148 different synthetic spectra are generated using parameter combinations taken from PARSEC isochrones and expected for M dwarfs. We note that the MARCS models in this parameter space are provided in steps of 100\,K for $T_\mathrm{eff}$, 0.25\,dex for $[Fe/H]$, and 0.5\,dex for $\log g$, and we interpolate between them to create the grid whenever necessary. This interpolation is done entirely within \textit{iSpec}, and we have not edited that part of the code.
\item The observed combined spectrum for each sample star is downloaded from APOGEE, and normalized using each of the previously generated synthetic template spectra.
\item The best normalization template for each star is chosen based on a $\chi^2$ comparison between each template and its resulting normalized spectrum.
\item \textit{iSpec} \citep{blanco2014determining,ispec2019sbc}, as a shell for \textit{Turbospectrum} \citep{plez1998,plez2012turbospectrum}, is used to generate synthetic spectra from a set of starting values. These synthetic spectra are created using \textit{MARCS} \citep{MARCS} stellar atmospheric models and a custom line list.
\item Specific line masks are required for the analysis of the spectra of stars of different spectral types; they include the wavelength regions most relevant for spectral parameter determination.
\item The synthetic spectra are matched to the normalized observed ones through a $\chi^2$ minimization algorithm based on MPFIT \citep{markwardt2009non}. The algorithm runs on a list of previously selected wavelength regions defined as the line mask.
\item $\chi^2$ minimization is used to find the best match between a synthetic spectrum and a given observed one, interpolating between the available \textit{MARCS} models whenever necessary. The spectral parameters used to generate the best matching synthetic spectrum are then taken as the parameters of the observed star.
\end{itemize}
\section{Results \label{Results}}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{HR_isoc_Tmatch.pdf}}
\caption{HR diagram with $T_\mathrm{eff}$ and $\log g$ values from the iSpec output parameter distribution, with overplotted PARSEC isochrones \citep{bressan2012parsec} for an age of 5\,Gyr and different $[M/H]$ values (scale is in dex). Points are color-coded based on the metallicity value derived for each star. Plotted parameters were obtained from APOGEE DR16 spectra.}
\label{HR_Mdwarfs_Results}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Teff_MH_iSpecDr16.pdf}}
\caption{$T_\mathrm{eff}$ and $[M/H]$ values from the iSpec parameter distribution for the stars in the M dwarf sample, with overplotted PARSEC isochrones \citep{bressan2012parsec} for an age of 5\,Gyr and different $\log g$ values (scale is in dex). Points are color-coded based on the surface gravity value derived for each star. Plotted parameters were obtained from APOGEE DR16 spectra.}
\label{Teff_MH_ispec}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Teff_MH_iSpecDr14.pdf}}
\caption{$T_\mathrm{eff}$ and $[M/H]$ values from the iSpec parameter distribution for the stars in the M dwarf sample, with overplotted PARSEC isochrones \citep{bressan2012parsec} for an age of 5\,Gyr and different $\log g$ values (scale is in dex). Points are color-coded based on the surface gravity value derived for each star. Plotted parameters were obtained from APOGEE DR14 spectra.}
\label{Teff_MH_ispecDR14}
\end{figure}
The spectra of the 313 M dwarfs in our sample were analyzed with the method detailed in Section \ref{method}, and the derived $T_\mathrm{eff}$, $\log g$, and $[M/H]$ are plotted in Figs. \ref{HR_Mdwarfs_Results} and \ref{Teff_MH_ispec}. The full results are available in the appendix, in Table \ref{table:MdwarfResultsDR16}. The errors included in the plots are $\pm 100\,$K for $T_\mathrm{eff}$, $\pm 0.2\,$dex for $\log g$, and $\pm 0.1\,$dex for $[M/H]$. Fig. \ref{Teff_MH_ispecDR14} shows the parameters derived with APOGEE DR14, and the full list of parameters derived with that data release is included in Table \ref{table:MdwarfResultsDR14}, in the appendix.
The isochrone comparisons presented in Fig. \ref{HR_Mdwarfs_Results} show that, as expected, for most of the sample our methodology's output $T_\mathrm{eff}$ and $\log g$ agree with the values predicted by the PARSEC models for M dwarfs. Despite this, the output $\log g$ values for a small number of stars are either overestimated or underestimated when compared to the isochrone predictions. Some weak trends are also present across the results for the full sample in Fig. \ref{Teff_MH_ispec}, with thin columns of stars with very similar effective temperatures but different metallicity values. We do not know exactly what causes these trends. We find these values to cluster around multiples of 100\,K, and the \textit{MARCS} models have spacings of 100\,K for $2500\,$K$ < T_\mathrm{eff} < 4000\,$K (compared with 250\,K for $4000\,$K$ < T_\mathrm{eff} < 8000\,$K), but further tests with the interpolation of the models have not shown any issues.
The lower limit of our analysis is around $T_\mathrm{eff} = 3000\,$K. From the sample of stars with ASPCAP $T_\mathrm{eff} < 3000\,$K, we found that for 3 of them the normalization template with the lowest $\chi^2$ converges to synthetic spectra with very high metallicity values ($[M/H]=0.4\,$dex; see Table \ref{table:Coldest} for the full results derived for these stars). This corresponds to the highest possible $[M/H]$ in our normalization method, and to the limit of the \textit{MARCS} models used to generate synthetic spectra. The output parameters for these three stars are very similar to one another, and are outside the expected values for M dwarfs. We do not think these output parameters represent reliable measurements for these stars, and they are included here only as a demonstration of the limitations of our method for stars with $T_\mathrm{eff} < 3000\,$K. We think that these limitations are caused by our optimization of the line list to reproduce the spectra of stars with higher $T_\mathrm{eff}$, missing possible opacity sources for lower temperature stars, with the pipeline compensating for these deficiencies by decreasing the $T_\mathrm{eff}$ and increasing the $[M/H]$ of the synthetic spectra. These changes increase the strength of the water lines present in the spectrum, but the resulting parameters are not reliable.
We find the spectra of stars with $3000\,$K$ < T_\mathrm{eff} < 3500\,$K to be especially challenging to reproduce synthetically due to the degeneracy in continuum absorption between multiple spectral parameters such as temperature, metallicity, and surface gravity. Absorption in the continuum can be caused by any of these parameters, with the pipeline sometimes estimating lower metallicity and higher temperature values than the star has in reality.
We present an example of a synthesized stellar spectrum for a star with $T_\mathrm{eff} \sim 3500\,$K in Fig. \ref{Mdwarf_3}. Here, the spectrum of star 2M18244689-0620311 (BD-06 4756B \footnote{\url{http://simbad.u-strasbg.fr/simbad/sim-id?Ident=2Mass+j18244689-0620311}}) is displayed, showing both the observed (black) and best matching (red) spectra. This star was characterized in \cite{Souto2020} as having $T_\mathrm{eff}=3376$\,K, $\log g=4.77$\,dex, and $[M/H] = +0.21$\,dex. The available ASPCAP parameters for this star are $T_\mathrm{eff}=3502$\,K, $\log g=4.63$\,dex, and $[M/H] = -0.17$\,dex. The displayed spectrum was obtained by normalizing the APOGEE observed one with a synthetic spectrum with $T_\mathrm{eff} = 3500\,$K, $\log g = 4.9$\,dex, and $[M/H] = -0.2\,$dex, and the derived spectroscopic parameters for this star are $T_\mathrm{eff}=3484\pm100$\,K, $\log g=4.85\pm0.2$\,dex, and $[M/H] = -0.15\pm0.1$\,dex.
The presence of a wide water line band is noticeable across the full spectrum, especially in the full first order (two top rows), resulting in a jagged and depressed continuum down to around 0.9 instead of the expected 1.0. The modeling of these water lines is fundamental for the creation of accurate synthetic spectra of late M dwarfs ($T_\mathrm{eff}<3500$\,K). This star is slightly below solar metallicity, while still having pronounced molecular and elemental lines. This can be noticed in the strong Al line around 1675\,nm, a line accurately matched by the synthetic spectrum. The synthesized spectrum matches the observed one across the full wavelength range, with lines of multiple elements and molecules being correctly synthesized. This fact, combined with the relative agreement of our pipeline's derived parameters with the available literature analyses, increases our confidence in the derived parameters for this star.
Additional examples of output spectra obtained by our pipeline are displayed in the appendix, in Figs. \ref{Mdwarf_1} to \ref{Mdwarf_6}, with more stars across our parameter space.
\section{Analysis \label{Discussion}}
\subsection{ASPCAP comparison \label{ASPCAP_comp}}
\begin{landscape}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{2M18244689-0620311_3500_-02.pdf}}
\caption{Comparison between the APOGEE spectrum of the star 2M18244689-0620311 (black, normalized using a synthetic spectrum with $T_\mathrm{eff} = 3500\,$K, $\log g = 4.9$\,dex, and $[M/H] = -0.2\,$dex) and the best fitting synthetic spectrum (red, solid line) for the APOGEE wavelength range. The areas in gray are the ones used for $\chi^2$ minimization by our pipeline's algorithm. The best fitting parameters derived were $T_\mathrm{eff}=3507\pm100$\,K, $\log g=4.80\pm0.2$\,dex, and $[M/H] = -0.19\pm0.1$\,dex. The available ASPCAP parameters are $T_\mathrm{eff}=3600\pm59$\,K, $\log g=5.09\pm0.12$\,dex, and $[M/H] = -0.06\pm0.01$\,dex. This star was characterized in \cite{Souto2020} as having $T_\mathrm{eff}=3376$\,K, $\log g=4.77$\,dex, and $[M/H] = +0.21$\,dex.}
\label{Mdwarf_3}
\end{figure}
\end{landscape}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics[height = 19cm,keepaspectratio=true]{3parameter_comparison_ALL_IN_ONE_dr16_new.pdf}}
\caption{Comparisons between our output parameters and the ones available in the literature, colored according to either our method's measured $[M/H]$ ($T_\mathrm{eff}$ plots, left column, see scale at (f)) or the $T_\mathrm{eff}$ for each star ($[M/H]$, center column, and $\log g$, right column, see color scale at (l)). From top to bottom: comparisons with ASPCAP (a, b, c), \cite{terrien2015near} (d, e), \cite{Souto2020} (g, h, i), \cite{gaidos2014trumpeting} (j, k), and \cite{Hejazi2019Chemical} (m, n, o). Temperature values are displayed in K, while metallicity and surface gravity are in dex.}
\label{All_comparisons}
\end{figure*}
\twocolumn
\begin{table}
\caption{Median differences between our results and literature parameters. All displayed ASPCAP parameters ($T_\mathrm{eff}$, $[M/H]$ and $\log g$) are from their calibrated values. Median difference is calculated by subtracting literature parameters from our results. SD stands for standard deviation and MAD for Median Absolute Deviation. Temperature values are displayed in Kelvin, while metallicity and surface gravity are in dex.}
\label{table:All_comp_table}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Parameter & Stars & Median & SD & MAD \\
\hline
\multicolumn{5}{c}{ASPCAP} \\
\hline
$T_\mathrm{eff}$ & 283 & -63 & 70 & 58 \\
$[M/H]$ & 283 & +0.0 & 0.15 & 0.08 \\
$\log g$ & 283 & -0.07 & 0.14 & 0.11 \\
\hline
\multicolumn{5}{c}{\cite{terrien2015near}} \\
\hline
$T_\mathrm{eff}$ & 45 & +92 & 86 & 57 \\
$[M/H]$ & 45 & -0.19 & 0.15 & 0.08 \\
\hline
\multicolumn{5}{c}{\cite{Souto2020}} \\
\hline
$T_\mathrm{eff}$ & 24 & +109 & 85 & 77 \\
$[M/H]$ & 24 & -0.23 & 0.09 & 0.11 \\
$\log g$ & 24 & +0.03 & 0.20 & 0.11 \\
\hline
\multicolumn{5}{c}{\cite{gaidos2014trumpeting}} \\
\hline
$T_\mathrm{eff}$ & 40 & +56 & 91 & 81 \\
$[M/H]$ & 24 & -0.17 & 0.19 & 0.19\\
\hline
\multicolumn{5}{c}{\cite{Hejazi2019Chemical}} \\
\hline
$T_\mathrm{eff}$ & 19 & +59 & 81 & 78 \\
$[M/H]$ & 19 & -0.43 & 0.36 & 0.32 \\
$\log g$ & 19 & -0.09 & 0.14 & 0.19 \\
\hline
\end{tabular}
\end{table}
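For reference, the summary statistics in Table \ref{table:All_comp_table} can be computed as sketched below, where ours and lit are hypothetical arrays holding our values and the literature values for the stars in common:
\begin{verbatim}
import numpy as np

def comparison_stats(ours, lit):
    # Differences are taken as our results minus literature values.
    diff = np.asarray(ours) - np.asarray(lit)
    median = np.median(diff)
    sd = np.std(diff)
    # MAD here is the median absolute deviation about the median.
    mad = np.median(np.abs(diff - median))
    return median, sd, mad
\end{verbatim}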
As mentioned in Section \ref{SampleParams}, final ASPCAP spectroscopic parameters are not available for all stars in our sample. From the full 313 star sample, we have ASPCAP calibrated values for $T_\mathrm{eff}$, $[M/H]$, and $\log g$ for 283 of them. Therefore, all comparisons shown and discussed here focus on those 283 stars. We do not show raw/spectroscopic parameters for consistency reasons. Fig. \ref{All_comparisons} (a) shows a comparison between the $T_\mathrm{eff}$ values for our sample M dwarfs as derived by our pipeline and the calibrated values published by ASPCAP for the same stars, (b) presents the same comparison for the ASPCAP calibrated $[M/H]$ values, and (c) shows a comparison between the $\log g$ values derived with our pipeline and the calibrated ones published by ASPCAP. The errors included are estimated based on our normalization grids and are $\pm 100\,$K for $T_\mathrm{eff}$, $\pm 0.1\,$dex for $[M/H]$, and $\pm 0.2\,$dex for $\log g$ (see Section \ref{errors}). The points are colored according to other parameters measured by our method in order to highlight any correlation or trend between errors across multiple output parameters.
In Fig. \ref{All_comparisons} (a), and relative to $\Delta T_\mathrm{eff}$, we find a small trend of more negative $\Delta T_\mathrm{eff}$ for stars with $T_\mathrm{eff} < 3500\,$K. For stars above this $T_\mathrm{eff}$, we find very similar values between our analysis and the parameters published by ASPCAP. Additionally, we find no correlation between $\Delta T_\mathrm{eff}$ and each star's metallicity values. Overall, we find this parameter to have a good agreement with ASPCAP published results, with the median difference, SD and MAD all being within our error margins.
As for Fig. \ref{All_comparisons} (b) and $[M/H]$, a trend is present here, as we find a negative $\Delta [M/H]$ for the stars in our sample with $T_\mathrm{eff}>3400\,$K and a positive one for stars with $T_\mathrm{eff}<3400\,$K. This translates to an overestimation of metallicity for the colder stars in our sample, when compared to ASPCAP values. Both lower $T_\mathrm{eff}$ and increasing $[M/H]$ can have a similar effect on the spectral shape, lowering the continuum position due to an increased strength of the water molecular lines. Therefore, metallicity overestimations can be caused by normalization errors and/or correspond to effective temperature underestimations. We have to note that we are using the $T_\mathrm{eff}$ values determined by our pipeline to color the points.
For $\log g$, Fig. \ref{All_comparisons} (c) shows that a small trend is present in the data, with negative $\Delta \log g$ values for stars with lower $\log g$. The overall distribution is not very different, with a median difference of -0.07\,dex between our parameters and the ones published by ASPCAP, although with SD and MAD around our uncertainty levels. We find a wide range of differences in $\log g$, with the comparisons varying between -0.6\,dex and +0.4\,dex. Comparisons with the isochrones in Fig. \ref{HR_Mdwarfs_Results} show that our values for this parameter are within the expectations for main-sequence stars around these temperatures. A direct comparison between Figs. \ref{Mdwarfs_ASPCAP_params}, \ref{Teff_MH_ispec}, and \ref{Teff_MH_ispecDR14} shows how the ASPCAP calibrated $\log g$ values differ from our derived parameters for our M dwarf sample. While our values using data from DR16 follow the isochrones approximately, both the results with DR14 and the ASPCAP parameters vary widely, especially for M dwarfs with $T_\mathrm{eff}<3800\,$K / $\log g > 4.9\,$dex. This indicates that our method provides more plausible surface gravity values, given our current stellar modeling knowledge as reflected in the overplotted PARSEC isochrones. It also shows how the spectra provided by APOGEE improved between DR14 and DR16, lending credence to the more recent data release being of higher quality than the previous one.
Overall, we find that the best agreement between our parameters and the ones published by ASPCAP is found for $[M/H]$, despite some trends being present alongside $T_\mathrm{eff}$ and $\log g$. All of these comparisons are made against calibrated ASPCAP parameters. The trends found should not be ignored, as they can reveal inherent biases in the method and the output parameters.
\subsection{Other literature comparisons}
\subsubsection{Comparison with \cite{terrien2015near}}
Fig. \ref{All_comparisons} (d) and (e) show a comparison between the parameters derived with our pipeline and the ones published in \cite{terrien2015near} for the 45 stars in common between both samples. Table \ref{table:All_comp_table} shows the median difference, standard deviation, and median absolute deviation found when comparing both distributions. As \cite{terrien2015near} does not publish values for $\log g$, the space reserved for the plot of that parameter (right column) is left empty. We find the differences between the two distributions to be around our uncertainty levels, with the SD of the $[M/H]$ differences slightly above them (0.15\,dex, with a MAD of 0.08\,dex, vs 0.10\,dex) and the $T_\mathrm{eff}$ deviations slightly below them (SD of 86\,K and MAD of 57\,K vs 100\,K).
In the case of $T_\mathrm{eff}$, we find no large-scale trend or deviation between both parameter distributions, with $\Delta T_\mathrm{eff} < 250\,$K for all compared stars and no correlation with $[M/H]$. However, the fact that we find $\Delta T_\mathrm{eff}\geq 0\,$K for almost the full sample, as well as a median difference of $+92\,$K, points towards small biases in either parameter distribution. As for $\Delta [M/H]$, it has a wider distribution, reaching $\pm 0.3\,$dex, and its median difference of $-0.19\,$dex can point towards systematic trends in our output data. There is also a linear trend towards negative $\Delta [M/H]$ for stars with $[M/H] < -0.2$\,dex and $T_\mathrm{eff} \sim 3300\,$K.
\subsubsection{Comparison with Souto's parameters}
Figs. \ref{All_comparisons} (g), (h), and (i) show a comparison between the parameters derived by our pipeline and the ones published in \cite{souto2017, souto2018stellar, Souto2020} for 24 M dwarfs. We should note that, as Souto et al. published $[Fe/H]$ for their characterized stars while our method measures $[M/H]$, this difference in measured quantities can explain some of the discrepancies. The differences between the results published by both methods are also summarized in Table \ref{table:All_comp_table}.
We find clear differences between both parameter distributions, with lower metallicity values ($\Delta [M/H] = -0.23\,$dex) and higher temperatures ($\Delta T_\mathrm{eff} = +109\,$K) for our stars. These trends are in line with the comparison with \cite{terrien2015near} above, with negative $\Delta [M/H]$ and positive $\Delta T_\mathrm{eff}$, the metallicity differences being above our uncertainty levels. As in the previous comparison, we find no cross-parameter trends, as the $[M/H]$ differences seem to be independent of $T_\mathrm{eff}$ and vice versa.
We also find a linear trend with $\log g$ itself: lower $\Delta \log g$ for stars with $\log g < 4.8\,$dex and higher $\Delta \log g$ for stars with $\log g>4.8\,$dex (with a correlation coefficient of $\rho = 0.71$).
\subsubsection{Comparison with \cite{gaidos2014trumpeting}}
We present, in Fig. \ref{All_comparisons} (j), a comparison between our $T_\mathrm{eff}$ results and the ones published in \cite{gaidos2014trumpeting} for the 40 stars in common between both samples. Fig. \ref{All_comparisons} (k) shows a similar comparison for $[M/H]$ for the 24 stars in the common sample with that parameter available. Similar to the case of \cite{terrien2015near}, we have no reference value for $\log g$ from this source, so the space for the plot of that parameter (right column) is kept empty. Additionally, Table \ref{table:All_comp_table} presents the median, standard deviation, and median absolute deviation found when comparing both parameter distributions.
Similar to the previous comparisons, we find trends towards both positive $\Delta T_\mathrm{eff}$ and negative $\Delta [M/H]$. We find the standard (91\,K) and mean absolute (81\,K) deviations to be around our estimated uncertainties for temperature, while the median difference (+56\,K) is below them. As for metallicity, the deviations are slightly above our uncertainty levels. There is a small trend towards lower $\Delta [M/H]$ for colder objects, but since the number of stars is so small, it might be just a statistical artifact.
\subsubsection{Comparison with \cite{Hejazi2019Chemical}}
We present, in Figs. \ref{All_comparisons} (m), (n) and (o), a comparison between our output values and the ones published in \cite{Hejazi2019Chemical} for 19 stars in common between both samples. Additionally, Table \ref{table:All_comp_table} presents the differences between results obtained by both methods. We find positive $\Delta T_\mathrm{eff}$, with standard (81\,K) and mean absolute (78\,K) deviations around our estimated uncertainties for the parameter. We also find strong trends in both $\log g$ and $[M/H]$ distributions, with deviations significantly above our estimated uncertainty values.
As for discrepancies in $T_\mathrm{eff}$, Fig. \ref{All_comparisons} (m) shows that the $\Delta T_\mathrm{eff}$ distribution between both methods is not very large, with most stars having a $\Delta T_\mathrm{eff} \sim \pm 120\,$K. The strongest outlier is star 2M07404603+3758253, which we find to have $T_\mathrm{eff} = 3325\,$K and $[M/H] = -0.62\,$dex, while \cite{Hejazi2019Chemical} published $T_\mathrm{eff} = 3200\,$K and $[M/H] = +0.4\,$dex for the same star \footnote{A plot comparing our best synthetic fit to the observed APOGEE spectra is included in the Appendix as Fig. \ref{Mdwarf_7}.}.
A $[M/H]$ comparison shows a linear trend in Fig. \ref{All_comparisons} (n), with $\Delta [M/H] \sim -1.0$\,dex for metal-poor stars, and $\Delta [M/H] \sim 0.0$\,dex for stars around solar metallicity. We also find significantly lower values for metallicity than \cite{Hejazi2019Chemical}, with an average $\Delta [M/H] = -0.43\,$dex. Despite their reported uncertainties of $\pm0.22\,$dex, this is a significant difference. Similar to the comparison with \cite{gaidos2014trumpeting} (Fig.\ref{All_comparisons} (k)) and unlike the ASPCAP comparison (Fig.\ref{All_comparisons} (b)), there is a trend towards lower $\Delta [M/H]$ for colder objects. Possible explanations include the lower resolution of their analysis and the fact that it was done in the optical and not in the near-infrared, but the fact that the trends are very similar across all literature comparisons presented here leads us to believe in the presence of an inherent bias in our pipeline towards low $[M/H]$ output parameters.
For $\log g$, we find a linear trend towards measuring lower $\Delta \log g$ for stars with $\log g < 4.9\,$dex, and a higher $\Delta \log g$ for stars with $\log g>4.9\,$dex. This is very similar to the $\Delta \log g$ plot for our Souto values comparison (see Fig.\ref{All_comparisons} (i)) and can point towards some issues with the $\log g$ determined by our method.
\subsubsection{Comparison with \cite{rajpurohit2018apogee}}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{All_comparisons_This-Raj_Pass_14stars.pdf}}
\caption{Comparison of derived parameters between our method and the literature, for the eight stars in common with \cite{rajpurohit2018apogee} (APOGEE), the twelve stars in common with \cite{rajpurohit2018carmenes} (CARMENES), and the fourteen stars in common with \cite{passegger2019carmenes} (our pipeline - literature).}
\label{Raj_Pass_comp}
\end{figure*}
We display a comparison between our output parameters and the results published by \cite{rajpurohit2018apogee} for 8 stars in common in Fig. \ref{Raj_Pass_comp}. Due to the low number of stars compared, no real statistical analysis can be made from this comparison. We find some small differences in the results across the stellar sample, with the clearest example being star 2M06320207+3431132, with an output $T_\mathrm{eff}=3465\,$K (our results) vs 3200\,K. Similar to the previous comparisons, we find an overall positive $\Delta T_\mathrm{eff}$, albeit close to our uncertainty levels. Unlike the previous results, we actually find a positive $\Delta [M/H]$ as well, while the largest difference between our results and theirs is in surface gravity, with some stars having $\Delta \log g\sim-0.6\,$dex.
\subsubsection{Comparison with \cite{rajpurohit2018carmenes} and \cite{passegger2019carmenes}}
As our sample has only 12 stars in common with \cite{rajpurohit2018carmenes} and 14 stars in common with \cite{passegger2019carmenes}, we compare our results with the parameters from each of them in Fig \ref{Raj_Pass_comp}. No overall trends are present in $ T_\mathrm{eff}$, with the exception of a single star, 2M11474440+0048164, for which \cite{rajpurohit2018carmenes} gives $T_\mathrm{eff}=3500\,$K while our analysis outputs $T_\mathrm{eff}=3202\,$K. This star, however, is characterized in \cite{passegger2019carmenes} as having $T_\mathrm{eff}=3267\,$K, and by \cite{souto2018stellar} as having $T_\mathrm{eff}=3231\,$K. Both these literature values are much closer to our results, suggesting that the \cite{rajpurohit2018carmenes} value is an outlier. As for metallicity, we find $\Delta [M/H] \sim - 0.3\,$dex, in line with the other literature comparisons, keeping in mind that the comparison with \cite{passegger2019carmenes} is based on their values for $[Fe/H]$ rather than $[M/H]$. As for $\log g$, we find our output parameters to be very similar to the ones published by \cite{passegger2019carmenes}, but some differences are present in the comparison with \cite{rajpurohit2018carmenes}, with our values being up to $0.8\,$dex below theirs for 3 stars in common. However, similarly to the $T_\mathrm{eff}$ for star 2M11474440+0048164, the $\log g$ values published by \cite{passegger2019carmenes} for these stars are also much closer to our measurements, with $\Delta \log g \sim 0.05\,$dex.
\section{Conclusions \label{Conclusions}}
Building on the method previously presented in \cite{sarmento2020derivation}, we derived stellar atmospheric parameters for 313 M dwarfs with $3000\,$K$ < T_\mathrm{eff}\pm100\,$K$ < 4200\,$K, $4.5\,$dex$< \log g\pm0.2\,$dex$ < 5.3\,$dex and $-1.05\,$dex $ < [M/H]\pm0.1\,$dex$ < 0.56\,$dex. The pipeline uses \textit{iSpec} as a spectroscopic framework to control the \textit{Turbospectrum} code, and requires both a complete line list in the studied wavelength range and a line mask tailored to the spectral type of the analyzed stars. We include both of these resources with the paper, as well as all the derived stellar parameters, for future reference.
A series of literature comparisons demonstrates the difficulty of finding accurate parameters for M dwarfs. Positive $\Delta T_\mathrm{eff}$ values are found across multiple literature comparisons, except for the ASPCAP values, although usually within uncertainty levels. However, the additional presence of negative $\Delta [M/H]$ values in most of our literature comparisons suggests the existence of issues with the temperature and metallicity determination in our pipeline, as the effects of changes in these parameters in the M dwarf regime are not independent. We also note that our results depend on the assumptions of the PARSEC evolutionary code. More analysis needs to be done on this subject.
Despite the existence of these trends, the close match between synthesized and observed spectra for a wide range of M dwarf stars shows the power of the pipeline as a method for parameter determination. Future works could apply the method to near-infrared spectra observed with other instruments with better resolution than APOGEE, such as CARMENES \citep[$R=80\,000-100\,000$,][]{quirrenbach2014carmenes}, GIANO \citep[$R\sim50\,000$,][]{origlia2014high}, SPIROU \citep[$R\sim75\,000$,][]{artigau2014spirou}, NIRPS \citep[$R\sim90\,000-100\,000$,][]{wildi2017nirps}, and CRIRES+ \citep[$R\sim100\,000$,][]{dorn2014crires+}. Expanding the characterized parameter space, through the analysis of a larger M dwarf sample, together with an expansion to more complete and detailed molecular line lists, are other possibilities to improve the performance of our method.
\begin{acknowledgements}
This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds (PTDC/FIS-AST/28953/2017,
PTDC/FIS-AST/7073/2014, PTDC/FIS-AST/32113/2017, UID/FIS/04434/2013)
and by FEDER - Fundo Europeu de Desenvolvimento Regional through COMPETE2020 -
Programa Operacional Competitividade e Internacionalização
(POCI-01-0145-FEDER-028953, POCI-01-0145-FEDER-016880, POCI-01-0145-FEDER-032113, POCI-01-0145-FEDER-007672).
This work was supported by Fundação para a Ciência e a Tecnologia (FCT) through the research grants UID/FIS/04434/2019, UIDB/04434/2020 and UIDP/04434/2020.
This research has made use of NASA’s Astrophysics Data System.
P.S. acknowledges the support by the Bolsa de Investigação PD/BD/128050/2016.
E.D.M. acknowledges the support by the Investigador FCT contract IF/00849/2015/CP1273/CT0003 and in the form of an exploratory project with the same reference.
B.R-A acknowledges funding support from FONDECYT through grant 11181295.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Diffusion-weighted magnetic resonance imaging (DWI) is a biomedical imaging
technique that creates images that are sensitive to the direction and distance
of water diffusion within millimeter-scale voxels in the human brain \emph{in
vivo}. Repeated in several different directions, diffusion sensitization can
be used to make inferences about the microstructural properties of brain
tissue in different locations, about the trajectories of organized bundles of
axons, or fascicles, and about the connectivity structure of the brain. This is
because water molecules freely diffuse along the length of nerve cell axons,
but are restricted by cell membranes and myelin along directions orthogonal to
the axon's trajectory. This technique has therefore been used in many clinical
and basic research applications [1].
To make inferences about the directions and relative fractions of
different fascicles within each region of the brain, mixture models
are employed. The signal within each volumetric pixel (or voxel) of
approximately $2 \times 2 \times 2\,mm^3$ is deconvolved with a kernel function,
$f_{\kappa}$, assumed to represent the signal from every individual
fascicle [2]. A set of weights, $w_i$ provides an estimate of the
fiber orientation distribution function (fODF) in each voxel, a
representation of the direction and volume fraction of different
fascicles in each voxel. However, many different algorithms have been
proposed to perform this deconvolution. In choosing a model and an algorithm, the
main consideration is the \emph{accuracy} of the model with respect to
the ground truth. Accuracy is defined as the average \emph{error} of
the model fit to the ground truth; error can be assessed by comparing
model fits with a known physical structure, such as excised neural
tissue that is placed in the MRI device in a particular configuration
[3]. However, direct assessment of error, and hence model accuracy, is
not applicable in human brain \emph{in vivo}. Hence, a useful proxy
for accuracy is the \emph{precision}, or equivalently, the
\emph{reliability} of the model--how much the model fit varies due to
noise. Precision can be estimated by fitting the model to several
replicate datasets with independent noise; computing the difference
between the fitted models produces a quantity which we refer to as the
\emph{replicate error}. While precision does not guarantee accuracy,
the inverse statement holds--an \emph{imprecise} model will also be
\emph{inaccurate}.
Both \emph{error} and \emph{replicate error} require the specification
of a distance function or divergence on the space of fODFs.
In turn, we can quantify \emph{model inaccuracy} as average error
and \emph{model imprecision} as average replicate error.\footnote{
Note that the definition of accuracy and precision resemble
but subtly differ from the statistical concepts of \emph{bias} and \emph{variance}.
Bias refers to the difference between the average model fit and the ground truth.
However, in non-Euclidean spaces, there may not exist an operation for averaging
multiple model fits, making the concept of bias inapplicable.
Meanwhile, variance is defined as half the average \emph{squared} distance
between two model fits, in contrast to imprecision, which is
the average distance between two model fits.}
Previous DWI studies have used numerical simulations to assess the
fits of algorithms to the fODF [2, 4, 5], and the angular error (AE),
quantified as the sum of the minimal arc distances between the true
directions and estimated directions, is commonly used as a measure of
inaccuracy in these studies. AE has an intuitive appeal, but its
application to fODFs with multiple non-zero weights is problematic,
since angular error ignores the relative weights of the directions,
and also fails to penalize fODFs with an incorrect number of
directions. However, the fODF is naturally interpreted as a probability distribution of
directions. Thus, any distance between probability distributions
could be used to measure distances between fODFs. In the present
study, we examine three commonly used distances or divergences: total
variation (TV), Kullback-Leibler divergence (KL), and earth mover's
distance (EMD). We demonstrate that the EMD has several advantages
over other measures of discrepancy between fODFs.
\section{Methods and Theory}
\subsection{Models}\label{ss:models}
We model the diffusion signal using a \emph{sparse fascicle model}
(SFM). Originating from work by [6] and further developed by Behrens
et al, Dell'Acqua et al and Tournier et al [2, 7, 8], these models
describe every voxel of the white matter as composed of $k$ distinct
populations of fibers, where $k$ is an integer greater or equal to 1.
The directions of the fibers are unit vectors $v_1,\hdots,v_k$, and we
do not distinguish between a vector $v$ and its mirror image $-v$,
because DWI measurements are antipodally symmetric. The weights of
the fibers are real positive numbers $w_1,\hdots,w_k$ and add to 1,
reflecting the fractional volume occupied by the fiber population.
The signal measured in direction $x_i$ is:
\[
y_i \sim Rician(\tilde{S}_0 \sum_{j=1}^k w_j e^{-\kappa(v_j^T x_i)},\sigma^2)
\]
where $\tilde{S}_0$ is a scaling parameter, $\kappa$ is a free
parameter which is assumed constant given fixed experimental
parameters (gradient field strength, pulse duration, etc.), and the
Rician distribution [9] is defined by $ \sqrt{(\mu + Z_1)^2 + Z_2^2} \sim
Rician(\mu,\sigma^2)$ for $Z_i \stackrel{iid}{\sim} N(0,\sigma^2)$.
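As a concrete illustration, the following minimal Python/\texttt{numpy} sketch simulates Rician-distributed SFM measurements. The measurement directions, fiber configuration, and the squared-dot-product form of the kernel (the Stejskal-Tanner-style form used in later sections) are our choices for the example rather than fixed by the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sfm_signal(X, V, w, S0=1.0, kappa=1.5):
    # Noise-free SFM signal: S0 * sum_j w_j exp(-kappa (v_j . x_i)^2).
    return S0 * np.exp(-kappa * (X @ V.T) ** 2) @ w

def rician(mu, sigma, rng):
    # Rician(mu, sigma^2) draws: sqrt((mu + Z1)^2 + Z2^2), Z iid normal.
    z1 = rng.normal(0.0, sigma, size=np.shape(mu))
    z2 = rng.normal(0.0, sigma, size=np.shape(mu))
    return np.sqrt((mu + z1) ** 2 + z2 ** 2)

# Two orthogonal fibers with equal weights, as in the simulations below.
V = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
w = np.array([0.5, 0.5])
X = rng.normal(size=(150, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit measurement directions
y = rician(sfm_signal(X, V, w, kappa=1.5), sigma=0.2, rng=rng)
\end{verbatim}
Here $\sigma = 0.2$ corresponds to the value $\sigma^2 = 0.04$ used in the simulation studies below.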
Under the general framework of the SFM, one arrives at more specific models by
making particular assumptions about the number of fibers and their properties.
One might adopt the assumption of a particular lower bound for the angular
separation between distinct fiber populations, a minimal threshold on the
proportion of a fiber in a voxel, or an upper limit on the number of distinct
fibers in a voxel. Furthermore, it is necessary to specify the parameter
$\kappa$; one can estimate $\kappa$ from the data, or rely on a biophysical
model. In the simulation studies, we will treat $\kappa$ and $\sigma^2$ as
known parameters.
The SFM can be formulated as a Bayesian model, by specifying priors on
the number of fibers, the directions of the fibers, and the weights of
the fibers. For reasons of computational tractability, we assume
$k=2$ fibers and that each fiber has a weight of 0.5, with a direction
which is independently uniformly distributed. The posterior
distribution for this model can be easily computed, by discretizing
the projective plane. Supposing the data is also generated by the
same priors, the Bayesian posterior allows one to obtain optimal point
estimates. However, one could consider the Bayesian model as a useful
approximation to the truth even when the priors are incorrect.
Inference of the SFM is simplified considerably if one is willing to
model the signal as having a Gaussian distribution rather than a
Rician distribution. Under the assumption of Gaussianity, the fODF
$\hat{f}$ is estimated through non-negative least squares (NNLS):
\[
\hat{f} = \sum_{j=1}^p \frac{\beta_j}{\sum_i \beta_i}\delta_{u_j}
\]
\begin{equation}\label{obj}
\beta = \text{argmin}_{\beta > 0} \sum_{i=1}^n\left|y_i-\sum_{j=1}^p \beta_j e^{-\kappa(u_j^T x_i)}\right|^2
\end{equation}
where $u_1,\hdots,u_p$ are points from an arbitrarily fine sampling of
the projective plane.\footnote{ It is common to apply
regularization, such as an $L_1$ penalty [2], or elastic net penalty
[12], to the objective function \eqref{obj}. However, NNLS yields
useful estimates even without regularization; hence we neglect the
regularized variants of NNLS in this paper.} The NNLS method does
not constrain the number of directions with positive weights.
However, one can choose to use best-$\hat{K}$-subset regression
(B$\hat{K}$S)
\footnote{Finding the best set of $\hat{K}$ directions is an NP-hard
problem in general. However, two considerations make it feasible in
the application of DWI imaging. One, there are scientific reasons
to assume that $\hat{K}$ is a small number, e.g. from two five.
Two, we are willing to tolerate a small angular error in the chosen
directions. These two factors mean that a brute force search is
possible, though still computationally expensive. This is in
contrast to the general problem of best subset regression, which
often requires the use of greedy search or convex approximation.}
to constrain the number of directions to $\hat{K}$: $\beta =
\text{argmin}_{\beta > 0, ||\beta||_0 = \hat{K}}
\sum_{i=1}^n\left|y_i-\sum_{j=1}^p \beta_j e^{-\kappa(u_j^T
x_i)}\right|^2$. Here $||\cdot||_0$ is the $L_0$ pseudonorm, which
counts the number of nonzero components in the vector.
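A minimal NNLS sketch, continuing the simulation above: the grid of candidate directions is a random placeholder (in practice a deterministic sampling of the sphere would be used), and \texttt{scipy.optimize.nnls} solves the objective \eqref{obj}.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
U = rng.normal(size=(500, 3))                  # candidate directions u_1..u_p
U /= np.linalg.norm(U, axis=1, keepdims=True)

def fit_nnls(y, X, U, kappa=1.5):
    # Design matrix: one column per candidate direction u_j.
    A = np.exp(-kappa * (X @ U.T) ** 2)
    beta, _ = nnls(A, y)
    return beta / beta.sum(), beta             # fODF weights over U, raw beta
\end{verbatim}
With the simulated data above, \texttt{fodf, beta = fit\_nnls(y, X, U)} returns the normalized fODF weights together with the unnormalized coefficients, whose sum serves as the estimate $\hat{S}_0 = ||\beta||_1$.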
Figure \ref{fig:model_demo} illustrates example fODFs estimated by the Bayesian
posterior mean, NNLS and best-2-subset regression (B2S).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4, trim = 50mm 50mm 50mm 50mm, clip]{figure_demo_bayes.png}
\includegraphics[scale=0.4, trim = 50mm 50mm 50mm 50mm, clip]{figure_demo_nnls.png}
\includegraphics[scale=0.4, trim = 50mm 50mm 50mm 50mm, clip]{figure_demo_bs.png}
\caption{fODFs estimated by the Bayesian posterior mean (left), NNLS (center),
and Best-2-subset regression (B2S; right) given the same data. Sticks
indicate true directions entered into simulation. Light edges on polygons
indicate positive probability mass. At one end, the Bayesian posterior
mean is continuous, while at the other, B2S produces a very sparse estimate,
with NNLS having intermediate sparsity. }
\label{fig:model_demo}
\end{figure}
\subsection{Distances for Probability Distributions}\label{ss:distances}
The total variation metric for distributions $P$ and $Q$ is defined as
$d_{TV}(P,Q) = \sup_A |P(A)-Q(A)|$ where $A$ is an arbitrary
measurable set and is easy to compute: Given vectors $p$ and $q$ which
are histograms for $P$ and $Q$ respectively, TV is approximated by
$\frac{1}{2}|p-q|_1$, with $|\cdot|_1$ the $L_1$ vector norm. Another
commonly used characterization of distance between distributions is
the Kullback-Leibler divergence: $KL(P,Q) = \int \log(dP/dQ)dP$. We
will use the symmetrized KL divergence, defined by $SKL(P,Q) =
\frac{1}{2}KL(P,Q) + \frac{1}{2}KL(Q,P)$. Note that while TV is a
distance metric, neither the KL divergence nor the symmetrized KL divergence
is a metric. Both TV and KL divergence are unsuitable for comparing
distributions which are mixtures of Dirac delta functions. If $P$
and $Q$ are two distributions with disjoint support, the
total variation distance $d_{TV}(P,Q)$ will be equal to 1, while
the KL divergence $KL(P,Q)$ will be infinite, regardless of how
close or how far the atoms of $P$ and $Q$ are from each
other. Therefore, rather than applying total variation or KL
divergence directly, one can first apply \emph{kernel smoothing} to
the distributions, then compute the distance between the smoothed fODFs
[10]. Here, we use Gaussian smoothing, parameterized by $\lambda > 0$,
and we write the convolution of $P$ with the Gaussian kernel with mean zero
and variance $\lambda^{-1}$ by $P \star \phi_\lambda$.
Hence we define the smoothed TV distance as $d_{TV,\lambda}(P,Q) =
d_{TV}(P \star \phi_\lambda,Q \star \phi_\lambda)$ and the smoothed
symmetrized KL divergence as $SKL_\lambda(P,Q) =
SKL(P \star \phi_\lambda,Q \star \phi_\lambda)$. Figure
\ref{fig:schematic} illustrates the calculation of smoothed TV
distance in a one-dimensional setting.
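In one dimension, the smoothed distances can be sketched as follows. This assumes the distributions are discretized as histograms on a common grid, and the conversion of $\lambda$ (kernel variance $\lambda^{-1}$) into a kernel width in grid units is our convention for the example.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_tv(p, q, lam, dx=1.0):
    p, q = np.asarray(p, float), np.asarray(q, float)
    sigma = np.sqrt(1.0 / lam) / dx    # kernel variance 1/lam, grid units
    ps, qs = gaussian_filter1d(p, sigma), gaussian_filter1d(q, sigma)
    return 0.5 * np.abs(ps - qs).sum()

def smoothed_skl(p, q, lam, dx=1.0, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    sigma = np.sqrt(1.0 / lam) / dx
    ps = gaussian_filter1d(p, sigma) + eps
    qs = gaussian_filter1d(q, sigma) + eps
    ps, qs = ps / ps.sum(), qs / qs.sum()
    return 0.5 * (np.sum(ps * np.log(ps / qs)) +
                  np.sum(qs * np.log(qs / ps)))
\end{verbatim}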
The earth mover's distance (EMD), or 1-Wasserstein distance, can be
interpreted as the minimal amount of work needed to transform $P_1$
into $P_2$, by optimally transporting the mass from $P_1$ to the mass
in $P_2$. The work is measured by the total distance times mass
transported; a general definition can be found in [11]. Figure
\ref{fig:schematic} illustrates the calculation of EMD in a
one-dimensional setting. In contrast to the TV distance or the KL
divergence, the EMD depends on the notion of a distance or cost
between two points: in other words, it incorporates the geometry of
the underlying space. The EMD between two distributions $P_1$ and $P_2$ can
be computed by linear programming in the special case that $P_1$ and
$P_2$ are mixtures of Dirac deltas; i.e., $P_i = \sum_{j=1}^{k^i}
w^i_j \delta_{v^i_j}$. Then
\begin{equation}\label{eq:emd}
d_{EMD}(P_1,P_2) = \min_x \sum_{i=1}^{k^1}\sum_{j=1}^{k^2} c_{ij}x_{ij} \text{ subject to }
x_{ij} \geq 0,\ \
\sum_{i=1}^{k^1} x_{ij} = w_j^2,\ \
\sum_{j=1}^{k^2} x_{ij} = w_i^1
\end{equation}
where $c_{ij} = d(v_i^1,v_j^2)$ for a suitable distance metric $d$.
Here $x_{ij}$ is understood as the amount of mass moved from the point
$v_i^1$ in $P_1$ to the point $v_j^2$ in $P_2$.
The 2-Wasserstein distance (2WD) has a similar definition to EMD,
replacing $d_{EMD}$ with $d_{Was2}^2$ and replacing $c_{ij}$ with
$c_{ij}^2$ in equation \eqref{eq:emd}.
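The following sketch solves the linear program \eqref{eq:emd} with \texttt{scipy.optimize.linprog}; we take $d$ to be the arc length between axes on the projective plane, $\arccos |v^T w|$, which is one natural choice given the antipodal symmetry of DWI measurements. Setting \texttt{squared=True} yields the squared 2-Wasserstein distance.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def emd(w1, V1, w2, V2, squared=False):
    # Cost matrix: arc length between axes (antipodally symmetric).
    C = np.arccos(np.clip(np.abs(V1 @ V2.T), 0.0, 1.0))
    if squared:
        C = C ** 2                              # squared 2-Wasserstein cost
    k1, k2 = C.shape
    A_eq = np.zeros((k1 + k2, k1 * k2))
    for i in range(k1):
        A_eq[i, i * k2:(i + 1) * k2] = 1.0      # sum_j x_ij = w1_i
    for j in range(k2):
        A_eq[k1 + j, j::k2] = 1.0               # sum_i x_ij = w2_j
    b_eq = np.concatenate([w1, w2])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun
\end{verbatim}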
Because the EMD and 2-Wasserstein distance (2WD) are equipped with a notion of
geometrical distance, either can be used to quantify how two mixtures of
Dirac delta functions are ``close'' even though none of the delta
functions overlap, and in contrast to the KL and TV metrics, does not
require the choice of an arbitrary smoothing parameter.
It is possible to state a number of additional properties of the
aforementioned distance metrics when they are applied in Euclidean
space.
First is the concept of \emph{scale equivariance}. Given a
probability distribution $P$, one can define \emph{scaling by a
constant} $\lambda > 0$ by defining the scaled measure $\lambda P$
\[
(\lambda P)(A) = P(\frac{1}{\lambda}A)
\]
recalling that $\lambda A$ is defined as $\lambda A = \{\lambda x: x \in A\}$.
Then the property of scale equivariance is defined as
\[
d(P,Q) = \frac{1}{\lambda} d(\lambda P, \lambda Q)
\]
for all probability distributions $P$, $Q$.
It is easy to prove that EMD and 2-Wasserstein distance
satisfy scale equivariance.
Meanwhile, total variation satisfies \emph{scale invariance}
rather than scale equivariance, which is the property that
\[
d(P,Q) = d(\lambda P,\lambda Q).
\]
But smoothed total variation satisfies neither scale equivariance
nor scale invariance, due to the smoothing parameter.
The second concept is that of \emph{robustness to outliers}.
Given a probability distribution $P$ with mass concentrated in a small ball (say, the unit ball),
one can consider \emph{contamination} of the distribution with a point mass located at a distant point $x$.
That is, consider transforming $P$ to $(1-\epsilon)P + \epsilon \delta_x$ for $x$ with $||x||$ large.
The robustness of the distance metric $d$ is determined by the behavior of the quantity
\[
d(P,(1-\epsilon)P + \epsilon \delta_x)
\]
as $\epsilon \to 0$, $||x|| \to \infty$.
We have for small $\epsilon$ and large $x$ that
\begin{align*}
d_{EMD}(P,(1-\epsilon)P + \epsilon \delta_x) &\approx \epsilon ||x||\\
d_{2WD}(P,(1-\epsilon)P + \epsilon \delta_x) &\approx \sqrt{\epsilon} ||x||\\
d_{TV,\lambda}(P,(1-\epsilon)P + \epsilon \delta_x) &\approx \epsilon
\end{align*}
Both EMD and smoothed TV have a linear dependence on $\epsilon$ while 2-Wasserstein has
a square root dependence on $\epsilon$. This means that 2-Wasserstein is much less
robust to contamination for small $\epsilon$.
Meanwhile, only smoothed TV has an $O(1)$ dependence on $||x||$, meaning that
smoothed TV is the most robust to outliers.
While the two properties of scale equivariance and robustness to outliers
are only defined for Euclidean spaces, we will see that they are still useful
for understanding the properties of the distance metrics in non-Euclidean settings,
such as the projective plane.
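These two properties are easy to verify numerically in one dimension, where \texttt{scipy.stats.wasserstein\_distance} computes the EMD. This is only a sketch; the sample sizes and constants are arbitrary choices.
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
P, Q = rng.normal(size=1000), rng.normal(size=1000)

# Scale equivariance: d(lam P, lam Q) = lam * d(P, Q).
lam = 7.0
print(wasserstein_distance(lam * P, lam * Q) /
      wasserstein_distance(P, Q))               # ~ lam

# Contamination: d(P, (1-eps) P + eps delta_x) ~ eps * |x| for EMD.
eps, x = 1e-3, 1e3
w_cont = np.append(np.full(len(P), (1 - eps) / len(P)), eps)
d = wasserstein_distance(P, np.append(P, x), None, w_cont)
print(d, eps * x)                               # comparable magnitudes
\end{verbatim}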
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.18]{figure_schematic_1.pdf}
\includegraphics[scale=0.18]{figure_schematic_2.pdf}
\includegraphics[scale=0.18]{figure_schematic_3.pdf}
\includegraphics[scale=0.18]{figure_schematic_4.pdf}
\caption{Schematic illustrating the difference between EMD and
kernel-smoothed TV distance. Left to right: (i) probability
distribution $P_1$, a mixture of Dirac deltas (solid spikes), with
kernel-smoothed form $P_{1,gauss,100}$ superimposed (dotted lines);
(ii) distribution $P_2$ with smoothed form $P_{2,gauss,100}$
superimposed; (iii) computation of $d_{EMD}(P_1,P_2)$; (iv)
computation of $d_{TV,gauss,100}(P_1,P_2) =
d_{TV}(P_{1,gauss,100},P_{2,gauss,100})$}
\label{fig:schematic}
\end{figure}
\subsection{Distances for fODFs}
All of the aforementioned distance metrics can be adapted for the projective plane,
and thus used to measure distances between fODFs.
Furthermore,
both the EMD and 2WD equipped with the arc-length distance can be viewed
as an extension of angular error (AE). If we take fODFs consisting of
a single Dirac delta, both the angular error and the EMD distance
between the fODFs are equal to the arc length distance between their
directions: hence in figure \ref{fig:rotation_distance}, we see that
EMD distance is linear with respect to AE; in contrast, RMSE,
$TV_{gauss,1}$ and $TV_{gauss,100}$ are concave with respect to AE.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.30, trim = 10mm 0mm 10mm 10mm, clip]{figure_ae_vs_emd.pdf}
\includegraphics[scale=0.30, trim = 10mm 0mm 10mm 10mm, clip]{figure_ae_vs_rmse.pdf}
\includegraphics[scale=0.30, trim = 10mm 0mm 10mm 10mm, clip]{figure_ae_vs_tv10.pdf}
\includegraphics[scale=0.30, trim = 10mm 0mm 10mm 10mm, clip]{figure_ae_vs_tv100.pdf}
\caption{Distance between fODFs consisting of a single Dirac delta, $f_1 = \delta_v$ and $f_2 = \delta_w$, as a function
of $d_{arc}(v,w)$.
Left to right: EMD, RMSE, $TV_{gauss,10}$, $TV_{gauss,100}$}
\label{fig:rotation_distance}
\end{figure}
\subsection{Prediction error}\label{ss:cvrmse}
Unlike other measures of accuracy, the prediction error of a model can
be evaluated without knowing the ground truth, since it uses the
observed data as a noisy version of the ground truth [12].
Furthermore, prediction error can be calculated using a single data
set, via \emph{cross-validation}.
Given data $y_1,\hdots,y_n$ corresponding to measurement directions
$x_1,\hdots,x_n$, one estimates the quantity $\tilde{S}_0$, e.g. by
fitting the NNLS model and setting $\hat{S}_0 = ||\beta||_1$. The set
of measurement directions $x_1,\hdots,x_n$ is partitioned into $K$
disjoint sets $A_1,\hdots,A_K$ of roughly equal size, and resampled
fODFs $\tilde{f}^1,\hdots,\tilde{f}^K$ are computed, where $\tilde{f}^j$ is obtained
by estimating the fODF based only on the directions \emph{not} in the
set $A_j$. Each of the resampled fODFs $\tilde{f}^j$ is used to make
a prediction on the measurements in the \emph{left-out set} $\{y_i: i
\in A_j\}$. The cross-validated RMSE (CVRMSE) is computed as:
\[
CVRMSE = \sqrt{\frac{1}{n}\sum_{j=1}^K \sum_{i \in A_j} \left(y_i - \hat{S}_0\int_v \exp(-\kappa (x_i' v)^2) d\tilde{f}^j(v)\right)^2}
\]
Alternatively, if two or more replicate measurements $y^1,y^2$ are available,
one can also evaluate the replicated RMSE (RRMSE), defined by $RRMSE =
\sqrt{\frac{1}{n}\sum_{i=1}^n \left(y^2_i - \hat{S}_0\int_v \exp(-\kappa (x_i' v)^2)
d\tilde{f}^1(v)\right)^2}$, where $\tilde{f}^1$ is estimated from the first
replicate. The CVRMSE and RRMSE differ only slightly in terms
of mean; the RRMSE has smaller variance.
Supposing that the model is correctly specified, CVRMSE is a nearly
unbiased estimate of the root mean integrated squared error (RMISE)
from the noise-free signal. The RMISE is defined as the $L_2$ distance
between smoothed measures: $d_{RMISE}(f_1,f_2) = \sqrt{\int_v
(f_{1,ST,\kappa}(v) - f_{2,ST,\kappa}(v))^2 dv}$ where the smoothing
kernel is computed from the Stejskal-Tanner equation [1] with a single
shape parameter $\kappa$: $f_{ST,\kappa}(v) = \int_w
\exp(-\kappa(v'w)^2) df(w)$.
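A minimal $K$-fold sketch, reusing the NNLS fit above. The fold assignment and the value of $K$ are arbitrary here; since NNLS returns unnormalized weights $\beta$ with $\hat{S}_0 = ||\beta||_1$, predicting with the raw $\beta$ is equivalent to $\hat{S}_0\int \exp(-\kappa(x'v)^2)\,d\tilde{f}(v)$.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def cvrmse(y, X, U, kappa=1.5, K=10, rng=None):
    rng = rng or np.random.default_rng(0)
    folds = np.array_split(rng.permutation(len(y)), K)
    A = np.exp(-kappa * (X @ U.T) ** 2)         # full design matrix
    sq_err = 0.0
    for held_out in folds:
        train = np.setdiff1d(np.arange(len(y)), held_out)
        beta, _ = nnls(A[train], y[train])      # fit on K-1 folds
        sq_err += np.sum((y[held_out] - A[held_out] @ beta) ** 2)
    return np.sqrt(sq_err / len(y))
\end{verbatim}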
\subsection{Resampled Barycenters}\label{ss:bary}
If one had an accurate Bayesian model of the data, one could obtain an
optimal estimate of the fODF with respect to expected EMD inaccuracy
by obtaining the \emph{Wasserstein barycenter} of the posterior
distribution:
\begin{equation}\label{bayes_bary}
\hat{f} = \text{argmin}_{\hat{f}} \int_f d_{EMD}(f,\hat{f}) dp(f|y),
\end{equation}
where $p(f|y)$ is the posterior distribution of the fODF
with respect to the data $y$. The precise form of the posterior
distribution appearing in \eqref{bayes_bary} depends on the particular
prior used. Bayesian approaches for DWI imaging [8] commonly use priors
consisting of mixtures of $K$ dirac deltas, where $K$ also possibly has a prior distribution.
The numerical computation of the Bayesian barycenter
can be achieved by obtaining a large number of posterior samples
$f^1,\hdots,f^N$ from the posterior, then solving
\begin{equation}\label{discrete_bary}
\hat{f} = \text{argmin}_{\hat{f}} \frac{1}{N}\sum_{i=1}^N d_{EMD}(f^i,\hat{f})
\end{equation}
A variety of approaches exist for solving the equation \eqref{discrete_bary},
including linear programming\footnote{
In the case that fODFs $\hat{f}^1,\hdots,\hat{f}^K$ are
mixtures of Dirac deltas, it is possible to compute the Wasserstein
barycenter using standard linear program solvers. Let $v_i^j$ be the
$j$th direction in the fODF $\hat{f}^i$, and $w_i^j$ its corresponding
weight, and let $k_i$ denote the number of directions in $\hat{f}^i$.
Let $u_1,\hdots,u_p$ be a dense sampling on the projective plane.
Then the Wasserstein barycenter is found by the following
optimization problem:
\[
\text{minimize} \sum_{\ell=1}^p \sum_{i=1}^K \sum_{j=1}^{k_i}
x_{\ell i j} d_{arc}(u_\ell, v_i^j)
\]
\[
\text{subject to } \sum_{\ell=1}^p x_{\ell i j} = w_i^j
\text{ , }
w_\ell \geq 0 \text{ , }\sum_{j=1}^{k_i} x_{\ell i j} = w_\ell
\]
for $\ell = 1,\hdots,p$, $i = 1,\hdots,K$ and $j = 1,\hdots,k_i$. The
output of the optimization problem is the values of the variables
$x_{\ell i j}$ for $\ell = 1,\hdots,p$, $i=1,\hdots,K$ and $j=
1,\hdots,k_i$.
The Wasserstein barycenter $\hat{f}_{bary}$ can then be computed as follows.
\begin{enumerate}
\item Compute $w_1,\hdots,w_p$, by $w_\ell =
\sum_{j=1}^{k_1} x_{\ell 1 j}$ (by the constraints, the same value is obtained for every $i$)
\item Let $\hat{f}_{bary} = \sum_{\ell=1}^p w_\ell\delta_{u_\ell}$
\end{enumerate}
}, and a recent approach by Cuturi [14]. However, obtaining the
posterior draws $f^1,\hdots,f^N$ may be extremely expensive.
One can bypass the computational cost of computing the posterior by exploiting the
connection between Bayesian inference and resampling techniques. Efron
[13] demonstrates a close connection between the parametric bootstrap
and Bayesian posteriors for uninformative priors. The parametric
bootstrap can be immediately applied to our setting: given an
estimated fODF $\hat{f}^0$, and an estimate of the noise
$\hat{\sigma}^2$, generate synthetic bootstrap data $y^1,\hdots,y^K$
by $y^j_i \sim Rician\left(\int_v \exp(-\kappa (v'x_i)^2)
d\hat{f}^0(v), \hat{\sigma}^2\right)$. Fitting the model to each
synthetic bootstrap replicate $y^1,\hdots,y^K$, obtain bootstrap
estimates of the fODF $\hat{f}^1,\hdots,\hat{f}^K$. Treating these
bootstrap estimates as a sample from an approximate posterior, compute
$\hat{f} = \min_f \sum_{i=1}^B d_{EMD}(f,\hat{f}^B)$. An alternate
approach, and one which appears to be more effective in simulations,
is to use $K$-fold partioning rather than parametric bootstrap: that
is, to obtain $\hat{f}^1,\hdots,\hat{f}^K$ using the approach
described in \ref{ss:cvrmse}.
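A sketch of the barycenter LP from the footnote above, for fold-wise estimates supported on a few atoms each and a small grid $U$. The LP has $p\sum_i k_i + p$ variables, so this dense formulation is only practical for small $p$; sparse constraint matrices would be needed at realistic grid sizes.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def wasserstein_barycenter(fodfs, U):
    # fodfs: list of (weights, directions) pairs; U: p x 3 grid.
    p = len(U)
    sizes = [len(w) for w, _ in fodfs]
    n_x = p * sum(sizes)                        # transport variables x
    cost = [np.arccos(np.clip(np.abs(U @ V.T), 0, 1)).ravel()
            for _, V in fodfs]
    c = np.concatenate(cost + [np.zeros(p)])    # w_l has zero cost
    rows, b, off = [], [], 0
    for (w, _), k in zip(fodfs, sizes):
        for j in range(k):                      # sum_l x_lj = w_j
            row = np.zeros(n_x + p)
            row[off + j: off + p * k: k] = 1.0
            rows.append(row); b.append(w[j])
        for l in range(p):                      # sum_j x_lj = w_l
            row = np.zeros(n_x + p)
            row[off + l * k: off + (l + 1) * k] = 1.0
            row[n_x + l] = -1.0
            rows.append(row); b.append(0.0)
        off += p * k
    res = linprog(c, A_eq=np.array(rows), b_eq=np.array(b),
                  bounds=(0, None), method="highs")
    w_bar = res.x[n_x:]
    return w_bar / w_bar.sum()                  # barycenter weights over U
\end{verbatim}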
\subsection{K-fold replicate error}
The definition of replicate error requires at least two replicate
measurements of the same voxel, $y^{(1)}$ and $y^{(2)}$: then given a
distance function $d$, the replicate error is defined as $RE =
d(\hat{f}^{(1)},\hat{f}^{(2)})$. However, one can measure K-fold replicate error (K-RE)
using a single set of measurements by using K-fold partitioning.
Given a single set of measurements $y$, obtain
$\hat{f}^1,\hdots,\hat{f}^K$ according to the $K$-fold partitioning
procedure described in \ref{ss:cvrmse}. Then define the $K$-fold
replicate error as follows:
\[
\text{K-RE}_d = \frac{K-1}{\sqrt{K}}
\left[\frac{1}{K(K-1)/2}\sum_{1 \leq i < j \leq K}
d(\hat{f}^i,\hat{f}^j)\right]
\]
The correction factor, $\frac{K-1}{\sqrt{K}}$, is used to reduce the
dependence of the calculated replicate error on the arbitrary choice of
$K$. Supposing the correction were not employed, the $K$-fold
replicate error would be asymptotically proportional to $\sqrt{K}/(K-1)$,
which is the product of the square root of the relative sample size,
$\sqrt{K/(K-1)}$, and the square root of the proportion of directions
not shared between different folds, $\sqrt{1/(K-1)}$.
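Given the fold-wise estimates, the corrected average of pairwise distances is straightforward (a sketch; \texttt{dist} can be, e.g., the EMD routine sketched earlier, applied to weight/direction pairs):
\begin{verbatim}
import numpy as np
from itertools import combinations

def k_fold_replicate_error(fodfs, dist):
    # fodfs: the K fold-wise estimates; dist: a distance between fODFs.
    K = len(fodfs)
    pairwise = [dist(fodfs[i], fodfs[j])
                for i, j in combinations(range(K), 2)]
    return (K - 1) / np.sqrt(K) * np.mean(pairwise)
\end{verbatim}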
\section{Results and Discussion}
\subsection{Comparison of models and accuracy measures}
We compare measures of accuracy applied to simulated estimates of fiber
orientation distribution functions (fODF)
obtained from different models. The measures we consider are angular
error (AE), root mean integrated squared error (RMISE), earth mover's
distance (EMD), total variation (TV) with $\lambda = \{1,10,100\}$ and
symmetric Kullback-Liebler (SKL) with with $\lambda = \{1,10,100\}$\footnote{
The choice of smoothing parameters $\lambda$ for TV and SKL are
somewhat arbitrary; we are not aware of any previous use of smoothed
distances in the DWI literature.}.
The ground truth fODF consists of two
orthogonal directions with equal weights; the data was generated using
parameters $\kappa=1.5$ and $\sigma^2 = 0.04$.
These parameters are typical for DWI simulations [2,4,5].
The simulations used the 150 measurement directions $x_1,\hdots,x_{150}$
employed in actual DWI measurements. We then fit a Bayesian model, best-2-subset and
NNLS. The Bayesian prior was specified as described in
\ref{ss:models}, and the cross-validated barycenter was computed as
described in \ref{ss:bary} with $K=20$ folds. Figure
\ref{fig:model_demo} displays sample model fits; Table \ref{fig:table}
provides a table of the measures of accuracy of each model as averaged
over 1000 random trials.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.5]{plot_test11.png}
\caption{Simulated comparison of distance metrics. Models: Bayesian posterior
mean (bys), Bayesian posterior Wasserstein barycenter (bry),
best-2-subset (B2S), nonnegative least squares (NNLS), K-fold barycenter
(cv). For comparison, see also Figure 1.}
\label{fig:table}
\end{figure}
RMISE most strongly favors continuous estimates, such as the Bayes
posterior mean. AE is undefined for continuous estimates and favors
non-sparse estimates, such as NNLS and K-fold barycenter. On the
opposite side of the spectrum, EMD favors sparse estimates, such as
best-2-subset and the posterior barycenter. TV and SKL do not clearly
favor sparse or non-sparse models. TV and SKL rank the models
similarly regardless of the smoothing parameter used, but the
smoothing parameter does influence the contrast between different
methods. In the case of oversmoothing, all models have close to the
minimum inaccuracy, as can be seen in the inaccuracies calculated
using $TV_1$ and $SKL_1$. In the case of undersmoothing, all models
have close to the maximum inaccuracy, as seen in the inaccuracies
calculated using $TV_{100}$ and $SKL_{100}$. In the case of $TV$, we
see that the ratio $\max \text{err}/\min \text{err}$ is equal to 1.3 for $TV_1$, 1.3
for $TV_{10}$, and 1.2 for $TV_{100}$. In comparison, the ratio $\max
\text{err}/\min \text{err}$ is equal to 1.9 for EMD.
The K-fold barycenter outperforms NNLS in all measures considered
here: a somewhat surprising result, given that the K-fold barycenter
was solely motivated by the goal of minimizing the inaccuracy as
measured by EMD.
\subsection{Correlation of error with replicate error}
In a similar simulated experiment with $k=2$ fiber directions, uniform random
weight $w_1$ and $w_2 = 1-w_1$, $\sigma^2 = 0.04$ and varying $\kappa$, we
compare the correlations between errors $\text{err} = d(f,\hat{f}^1)$ and
replicate errors $RE = d(\hat{f}^1,\hat{f}^2)$.
We find that the correlation between the EMD-based error and EMD-based replicate error,
$\text{Corr}(\text{err}_{EMD},RE_{EMD})$ is above 0.4 for a range of parameter values
$\kappa$ from 0.1 to 2--higher than the minimum correlations for other
distances. Figure \ref{fig:table2} contains correlations, as computed from
10000 simulations, for several values of $\kappa$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.5]{plot_test13.png}
\caption{Correlation of error and replicate error for data generated from a
simulation of a two-direction fODF. For each metric, the values at different
$\kappa$ are presented, and 'min' denotes the minimal correlation among the
different values of $\kappa$ for that metric. The column for RRMSE contains values for
$\text{Corr}(RRMSE,RE_{RMISE})$. All other columns contain the correlation
between $\text{err}$ (error) and $RE$ (replicate error) when both quantities are
evaluated using the given distance/divergence.}
\label{fig:table2}
\end{figure}
Given the practical utility of RRMSE, it is surprising to see its low
correlation with the true RMISE regardless of $\kappa$. Although
RRMSE is a close-to-unbiased estimate of $\text{err}_{RMISE}$, this may
come at the cost of greater variability. In contrast, $RE_{RMISE}$ has
a much higher correlation with $\text{err}_{RMISE}$. At first glance $RE_{RMISE}$ appears to
be a very similar procedure to $RRMSE$, but while $RRMSE$ compares the
signal from replicate 1 with the raw data of replicate 2, $RE_{RMISE}$
compares the signal from replicate 1 with the signal from replicate 2.
The distance measure with the highest correlation between $\text{err}$ and
$RE$ varies depending on $\kappa$. For $\kappa=0.1$, EMD has the
highest correlation, $0.45 \pm 0.01$; for $\kappa = 1$, $TV_1$ has the
highest correlation: $0.55 \pm 0.01$, slightly higher than EMD ($0.52
\pm 0.01$), while for $\kappa = 2$, $TV_{10}$ has the highest
correlation: $0.55 \pm 0.01$. In both $TV$ and $SKL$ we see that the
choice of smoothing parameter which maximizes the correlation depends
on $\kappa$. Meanwhile, the 2-Wasserstein distance, which does not
use smoothing, nevertheless has poor correlation between $\text{err}$ and $RE$
at $\kappa=2$, and is consistently dominated by EMD.
To summarize, the correlation of
$\text{Corr}(\text{err}_{EMD},RE_{EMD})$ is consistently comparable to the
highest correlation of any other distance. Distances with
fixed smoothing kernels suffer from degraded correlation at one of
the extremes of the parameter range, $\kappa = 0.1$ or $\kappa = 2$,
while the 2-Wasserstein distance also suffers from degraded correlation at high $\kappa$ even though it does not employ smoothing;
in contrast, EMD is robust across $\kappa$.
These results can seemingly be explained by the fact that EMD has a
combination of \emph{scale equivariance} and \emph{robustness to
outliers} as defined in section ~\ref{ss:distances}. Even though
the two properties were only defined in the Euclidean setting, they
can be extended in a `local' sense to any manifold via the fact that
manifolds resemble Euclidean space in a small neighborhood of any
point. In the particular application of DWI fiber deconvolution, the
consequence of scale equivariance is that the increased error due to
increased noise level will be reflected both in the error and the
replicate error. Interestingly, though, we found correlations between
error and replicate error in the simulation even when holding the
noise level fixed. This can be explained by the fact that even if the noise level
is held fixed, changes in the parameter $\kappa$ or the fiber
configuration can mimic the effect of increased noise. Thus, the fact
that smoothed TV and smoothed SKL are \emph{not} scale equivariant
explains their inconsistent performance across $\kappa$. Meanwhile,
the poor performance of 2-Wasserstein distance for $\kappa=2$ can be
explained by the poor robustness of 2-Wasserstein distance to
outliers. When $\kappa$ is low, relative to the sample size (number of
measurement directions), the NNLS algorithm finds very few `spurious'
directions. However, when $\kappa$ is high relative to sample size, a
noise spike in a single measurement direction can cause NNLS to weight
an essentially arbitrary direction in a direction orthogonal to the
direction of the noise spike. This leads to the production of
`outliers' for high $\kappa$, which inflate the variance of the
relative error as measured by 2-Wasserstein distance. On the other
hand, these directional `artifacts' can be removed by means of
post-processing; hence it would still be interesting to revisit the
application of the 2-Wasserstein distance on post-processed NNLS
estimates.
\subsection{Application to DWI data measured \emph{in vivo} }
DWI data was acquired in a healthy human participant in a 3T MRI instrument, at
the Stanford Center for Neurobiological and Cognitive Imaging. Data was
acquired at $2 \times 2 \times 2\,mm^3$ resolution with a b-value of 2000 s/$mm^2$. The data consists of
two sets of replicate measurements\footnote{The data is available to download
at: http://purl.stanford.edu/ng782rw8378}. We identified regions of interest
for analysis in the corpus callosum (CC), a region of the brain known to
contain only one major fascicle, connecting the two cerebral hemispheres, and
in the centrum semiovale (CSO), a part of the brain in which multiple fascicles
cross. We compute K-fold replicate error ($K=10$), and replicate error of the
fODF estimates. We also compute CVRMSE as a direct estimate of accuracy (with
regard to RMISE). A value of $\kappa = 2.2$ was estimated using
cross-validation on a separate subset of the data.
\begin{figure}[htbp]
\centering
\begin{tabular}{ccc}
K-RE & RE & CVRMSE\\
\includegraphics[scale=0.2]{figure_cccso_1a.png} &
\includegraphics[scale=0.2]{figure_cccso_1b.png} &
\includegraphics[scale=0.2]{figure_cccso_1c.png}
\\
\includegraphics[scale=0.2]{figure_cccso_2a.png} &
\includegraphics[scale=0.2]{figure_cccso_2b.png} &
\includegraphics[scale=0.2]{figure_cccso_2c.png}
\end{tabular}
\caption{Empirical measures of replicate error (EMD) and prediction
error (CVRMSE) by brain region. Top row: Sagittal view of the corpus
callosum (CC), a part of the brain that contains fibers connecting
the two cerebral hemispheres. Bottom row: Axial view of the centrum
semiovale (CSO), a part of the brain that contains multiple crossing
fiber populations.
\label{fig:cvemd}
\end{figure}
CVRMSE appears to vary more smoothly across both regions of interest
in the white matter. On the other hand, measures of replicate error
(both RE and K-RE) show more coherent spatial variation. Both CVRMSE
and replicate error are sensitive both to the configuration of the
fibers in the measurement voxel and to the noise in the measurement,
but their sensitivities to these factors differ. While CVRMSE is
primarily sensitive to noise, EMD-based replicate error is more
sensitive to the configuration of the underlying tissue (i.e. single
fiber population, or more populations of fibers). The spatial
variations in EMD across the corpus callosum ROI represent, therefore,
variations in the degree to which different parts of the measurement
contain partial volumes of other neighboring parts of the
tissue. These other parts may contain either cerebrospinal fluid
(the fluid that surrounds and pervades the brain), or
fibers oriented in other directions than the corpus callosum
fibers. The measurement noise, on the other hand, is dominated by
physiological factors, and instrumental factors that vary very little
across space. Hence, the relative smoothness of the variation of
CVRMSE across these regions.
\section{Conclusions}
In this paper we address the question of selecting an error metric for
fODF estimation. Through simulations, we illustrate the differences
between EMD and alternative metrics, such as smoothed total variation
and RMISE. EMD favors sparse estimates of the fODF, and is an
intuitive extension of angular error, which is commonly used to
characterize accuracy in the DWI literature. These properties favor
the use of EMD in theoretical work and simulations. In practice, one
might only be able to measure replicate error, or K-fold replicate
error. Use of the EMD in practical applications, on empirical data, is
supported by the consistent correlation of approximately 0.4 between
replicate error and error as measured by EMD across a wide range of
experimental conditions and biological factors (embodied in the model
parameterization by $\kappa$). Other error metrics, such as smoothed
total variation distance, may have higher correlation between
replicate error and error, but this depends on the smoothing
parameter $\lambda$. EMD has a unique combination of scale
equivariance and robustness to outliers, which further supports the
use of EMD-based replicate error as a proxy for EMD-based error. The
use of EMD as an error metric motivates the use of Wasserstein
barycenters as estimates of the fODF: while the K-fold barycenter is
motivated as an approximation to the Bayesian posterior barycenter, we
find in simulations that the K-fold barycenter outperforms NNLS in all
measures of accuracy considered, hence meriting more detailed
investigation of its properties.
\subsubsection*{Acknowledgments}
The authors thank Trevor Hastie, Brian Wandell, Eero Simoncelli,
Justin Solomon, Leo Guibas and Shuo Xie for useful discussions, and
the anonymous referees for their helpful suggestions. CZ was supported
through an NIH grant 1T32GM096982 to Robert Tibshirani and Chiara
Sabatti, AR was supported through NIH fellowship F32-EY022294. FP was
supported through NSF grant BCS1228397 to Brian Wandell.
\subsubsection*{References}
[1] Le Bihan D, Mangin JF, Poupon C, Clark CA, Pappata
S, Molko N, Chabriat H. (2001). Diffusion tensor imaging:
concepts and applications. \emph{Journal of magnetic resonance imaging},
13(4), 534-546.
[2] Tournier J-D, Calamante F, Connelly A (2007). Robust determination of the
fibre orientation distribution in diffusion MRI: non-negativity constrained
super-resolved spherical deconvolution. {\it Neuroimage} 35:1459–72
[3] Tournier, J.-D., Yeh, C.-H., Calamante, F., Cho, K.-H., Connelly, A., and
Lin, C.-P. (2008). Resolving crossing fibres using constrained spherical
deconvolution: validation using diffusion-weighted imaging phantom
data. NeuroImage, 42: 617–25.
[4] Basser PJ. Quantifying errors in fiber-tract direction and diffusion tensor
field maps resulting from MR noise. Proc. Int. Soc. Magn. Reson. Med. 1997
[5] Aganj I, Lenglet C, Jahanshad N, Yacoub E, Harel N, Thompson PM,
Sapiro G. (2011). A Hough transform global probabilistic approach to
multiple-subject diffusion MRI tractography. \emph{Medical image
analysis}, 15(4), 414-425.
[6] Frank L. Anisotropy in high angular resolution diffusion-weighted
MRI. \emph{Magnetic Resonance in Medicine} Volume 45, Issue 6, pages
935–939, June 2001
[7] Dell’Acqua F, Rizzo G, Scifo P, Clarke RA, Scotti G, Fazio F (2007). A
model-based deconvolution approach to solve fiber crossing in
diffusion-weighted MR imaging. {\it IEEE Trans Biomed Eng} 54:462–72
[8] Behrens TEJ, Berg HJ, Jbabdi S, Rushworth MFS, and Woolrich MW (2007). Probabilistic
diffusion tractography with multiple fiber orientations: What can we
gain? {\it NeuroImage} (34):144-45.
[9] Gudbjartsson, H., and Patz, S. (1995). The Rician distribution of noisy MR
data. {\it Magn Reson Med}. 34: 910–914.
[10] Parzen E. On the estimation of a probability density function and mode.
\emph{The Annals of Mathematical Statistics}. 33(3): 1065-1076, 1962.
[11] Rubner, Y., Tomasi, C. Guibas, L.J. (2000). The earth mover's distance as a
metric for image retrieval. {\it Intl J. Computer Vision}, 40(2), 99-121.
[12] Rokem A, Yeatman J, Pestilli F, Kay K, Mezer A, van der Walt S,
Wandell B. (2013). Evaluating the accuracy of models of diffusion MRI
in white matter. Submitted.
[13] Efron B. Bayesian inference and the parametric bootstrap.
\emph{The Annals of Applied Statistics} 6 (2012), no. 4,
1971--1997.
[14] Cuturi M, Doucet A. Fast computation of Wasserstein
barycenters. \emph{JMLR W\&CP} 32 (1) : 685–693, 2014
\end{document}
\section{Longitudinal Observations}
We first show the number of papers published at ISCA
in Figure~\ref{fig:iscaNumberOfSubmissions}, where we observe
an increase in the number of papers published in the 1980s and again in the 2010s, with the highest peaks observed in 1992 and 2018. It would be interesting to see whether the number of papers published at ISCA will decline in the coming years, completing another cycle. In Figure~\ref{fig:iscaAvgNumAuth} we observe that the average number of authors per paper has also increased over time. Based on past trends, it appears that this trend may continue for a few more years, and the average number of authors per submission will likely keep increasing.
In Figure~\ref{fig:AcademiaVsIndustry}, we see a long-term decline in the fraction of papers whose first authors are from industry. We believe that this trend is driven by two important forces. First, there has been a decline of publication-oriented research activities in industry in favor of product development. Second, the amount of ``paper engineering'' effort needed to write a competitive paper for acceptance at ISCA has been increasing over time. While graduate students can spend this effort, it is not clear that industry researchers can justify it.
We then make a few observations on longitudinal trends in the ISCA community. In Figure~\ref{fig:topichistory}, we see three types of long-term patterns. The first type of topics, such as \emph{data flow}, \emph{RISC}, \emph{parallel processing}, \emph{instruction-level parallelism}, \emph{multicore processor}, \emph{microprogramming}, \emph{branch prediction}, and \emph{shared memory}, receives intensive coverage for a limited period of time. We suspect the interest in these topics subsides once they are considered either mature or no longer of interest. Please note that the scale of the y-axis (the number of paper abstracts mentioning the topic) varies among topics. The second type of topics, such as \emph{memory access}, \emph{speculate}, \emph{operating systems}, \emph{programming languages}, \emph{cache coherence}, \emph{CPU}, and \emph{memory systems}, receives periodic surges of coverage. Because these topics concern architecture support for key parts of computing systems, they draw renewed research interest whenever the industry migrates to a new technology or paradigm. The third type, such as \emph{GPU}, \emph{power consumption}, \emph{FPGA}, \emph{energy efficiency}, and \emph{DRAM}, has received increasing coverage in recent years. Although these topics may turn out to belong to the first type when we look back a few years from now, it is too early to tell.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figures/iscaNumberOfSubmissions.png}
\caption{Number of Papers Each Year}
\label{fig:iscaNumberOfSubmissions}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figures/iscaAvgNumAuth.png}
\caption{Average Number of Authors in a Paper}
\label{fig:iscaAvgNumAuth}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{figures/iscaAcademiaVsIndustry.png}
\caption{Percentage of industry vs. academia affiliation of first authors}
\label{fig:AcademiaVsIndustry}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{figures/iscatrendsforfewterms.png}
\caption{A long-term history of selected top topics}
\label{fig:topichistory}
\end{figure}
\section{Detailed Analysis of Trends Over the Years}
Each figure in this section presents a cloud of representative phrases that appeared in abstracts over a span of five years, except that the first and last figures cover a span of three years. We count the number of papers whose abstracts contain a match to each query phrase. We expect some small but potentially significant errors in the calculated weights for the phrases, because we use only abstracts and because our natural language processing pipeline misses some semantic relations between phrases.
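For concreteness, the paper-level counting step can be sketched as follows. This is a simplified stand-in for our pipeline, which additionally resolves some semantic relations between phrases; the case-insensitive, whole-phrase regex matching is an assumption of this sketch.
\begin{verbatim}
import re
from collections import Counter

def phrase_counts(abstracts, phrases):
    # Number of abstracts containing at least one case-insensitive,
    # whole-phrase match for each query phrase.
    counts = Counter()
    for phrase in phrases:
        pat = re.compile(r"\b" + re.escape(phrase) + r"\b",
                         re.IGNORECASE)
        counts[phrase] = sum(1 for a in abstracts if pat.search(a))
    return counts
\end{verbatim}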
\subsection{ISCA in the 1970s}
The 1960s were the formative years of the computer industry, when only a few companies such as Burroughs, IBM, Control Data, UNIVAC, Wang Laboratories and Digital Equipment Corporation were able to produce products that were programmed in machine language and/or early high-level languages such as FORTRAN and COBOL. Later, in the 1970s, more industry players such as Cray Research joined them. During this time, many of the languages that we use today were developed. While high-level programming languages and compilers were being developed to improve productivity, early efforts to map high-level languages to stored-program machines, commonly referred to as the von Neumann architecture, resulted in high memory usage and long instruction sequences. Researchers began to propose designs where programming languages are supported by hardware features. As shown in Figure~\ref{fig:1973-1975}, the phrases \emph{programming language} and \emph{hardware implementation} were the No. 1 and No. 2 phrases during the period from 1973 to 1975.
\begin{figure}[h!]
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca1973.png}
\caption{Word cloud visualization of topics - 1973-75}
\label{fig:word-cloud-1973-1975}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop1973.png}
\caption{Top topics 1973-75}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta1975.png}
\caption{Topics with the most change in coverage from 1973-75 to 1976-80}
\label{fig:delta1973-1975}
\end{subfigure}
\caption{Phrases and trends, 1973-1975.}
\label{fig:1973-1975}
\end{figure}
The concept of Instruction Set Architecture (ISA) was pioneered by IBM in the 1960s. Implementing the same ISA across multiple hardware generations allows the same software to work across different machine generations. Binary codes from earlier IBM models such as the 1401, 7040 and 7094 were also able to run unchanged on the System/360 series.
During that time, microprogramming was the primary vehicle for a processor to interpret the instructions of the ISA and execute them on the native hardware. The success of the IBM System/360 product line solidified the role of ISA and microprogramming in the computer architecture community.
In the early 1970s, numerous designs were proposed to improve the efficiency of running programs written in those high-level languages, but only a handful were actually implemented \cite{hllc1}. Burroughs E-mode machines for Algol 60, the Burroughs B2000 for COBOL, LISP machines and the Intel 432 for Ada are some of those examples. A number of ideas based on microprogramming can be found in the ISCA papers published in those years that underlie these efforts, which is reflected in the high ranking of the phrase \emph{microprogramming}. By 1969, the debate over virtual memory had also concluded, when IBM showed consistently good performance of their automatic virtual memory system compared with manual overlay systems. This was also reflected in ISCA publications during that time, where virtual memory was one of the top trends as shown in Figure~\ref{fig:1973-1975}.
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca1975.png}
\caption{Word cloud visualization of topics - 1976-80}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop1975.png}
\caption{Top topics 1976-80}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta1980.png}
\caption{Topics with the most changes in coverage from 1976-80 to 1981-85}
\end{subfigure}
\caption{Top Phrases and Trends, 1976-80}
\label{fig:1976-1980}
\end{figure}
As more industry players designed computer products in the 1970s, there was also an increased interest in developing new operating systems and providing improved support for operating systems that offer reliable, secure, and usable environments for the users. As a result, as shown in Figure~\ref{fig:delta1973-1975}, the coverage of ISCA topics in operating systems, virtual memory and memory systems also increased in the late 1970s. Operating systems continued to be one of the top trends in the late 1970s, as shown in Figure~\ref{fig:1976-1980}.
There was also a burgeoning interest in parallel computing with topics like data flow, parallel processing, and processing elements. Both types of topics would increase in later years.
During the 1970s, the computer industry went through a decade of innovation in the mini-computer movement. Companies such as Digital Equipment Corporation introduced mini-computers that were affordable for smaller companies and academic departments. Researchers and engineers could access these minicomputers through Cathode-Ray Tube (CRT) terminals on their desks rather than the card punchers in computing centers, which greatly improved their productivity. These minicomputers had new instruction sets that were implemented with microcode, which further stimulated the coverage of topics like CPU, instruction set, machine language, instruction execution, low cost, microprogramming, and writeable control.
With more accessibility to researchers, these mini-computers also accelerated the development of high-level languages. Given the poor performance of high-level language implementations, one of the lessons learned was that performance depends not only on the language but also on computation, algorithms and memory access. Researchers began to investigate how these facets of a program can also be reflected in the ISA, which was implemented by a native hardware microarchitecture through microprogramming. Additionally, the desire to better support operating systems further motivated the introduction of sophisticated instructions that help data movement, security, and reliability. Previously, the control unit in a processor was hardwired as combinational logic and finite state machines. However, the need for more complex and powerful instructions made it difficult, time-consuming and costly to design such hardwired processors. Since there were also very few CAD tools for hardware development and verification, this path was less productive in the late 1970s. This, in part, further contributed to the popularity of microprogramming at that time. Microcode simplified processor design by allowing the implementation of control logic as a microcode routine stored in memory instead of a dedicated circuit. The VAX ISA by Digital Equipment Corporation consisted of more than 300 instructions. A number of complex instructions were introduced to support high-level languages and operating systems in order to bridge the semantic gap. However, it was observed that compilers rarely used those instructions, and they were kept mainly to support ``legacy codes'' in low-level libraries. The VAX polynomial-evaluate and CALL instructions are examples of such instructions. As shown in Figure~\ref{fig:1976-1980}, the period between 1976 and 1980 was the time when CPU design and microprogramming were at their peak. Programming languages, operating systems, processing elements, and fault tolerance continued to receive strong attention.
\subsection{ISCA in the 1980s}
\begin{figure}[h!]
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca1980.png}
\caption{Word cloud visualization of topics - 1981-85}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop1980.png}
\caption{Top topics 1981-85}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta1985.png}
\caption{Topics with the most change in coverage from 1981-85 to 1986-90}
\end{subfigure}
\caption{Top topics and trends, 1981-85}
\label{fig:1981-1985}
\end{figure}
In the 1980s, the computer architecture community started to embrace parallel processing and high-performance computing. As shown in Figure~\ref{fig:1981-1985}, the ISCA community paid a great deal of attention to the interconnection network between processing elements. The driving applications behind those systems were military and scientific applications including image processing, astrophysics and weather prediction. Individual processors were not capable of providing the required computation speed at that time.
In industry, mini-supercomputers from Sequent and Alliant began to gain popularity. There was also a burgeoning interest in massively parallel systems such as the Connection Machine from Thinking Machines. Three major components of those multiprocessing systems were processing elements, shared memory and the interconnection network. Making efficient use of multiprocessing systems required a communication network that does not become the bottleneck. A number of ideas were introduced in ISCA publications around circuit-switched networks, packet-switched networks, multi-stage networks, binary tree networks, bus traffic, resource scheduling and routing protocols. This is reflected in the No. 1 and No. 4 rankings of the \emph{interconnect network} and \emph{parallel processing} topics during the early 1980s, as shown in Figure~\ref{fig:1981-1985}. During this period, challenges in parallel programming also motivated research in data flow architectures and data-driven execution, where researchers proposed hardware mechanisms to identify instructions that are ready for execution and schedule them as soon as possible. This is reflected in the No. 2 ranking of the \emph{data flow} topic during this period.
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca1985.png}
\caption{Word cloud visualization of topics from 1986-90}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop1985.png}
\caption{Top topics 1986-90}
\label{fig:1986-1990(top)}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta1990.png}
\caption{Topics with most changes in coverage from 1986-90 to 1991-95}
\end{subfigure}
\caption{Top topics and trends, 1986-1990}
\label{fig:1986-1990}
\end{figure}
Supporting shared memory in a parallel processing system has its own challenges, including scaling the system to thousands of processors without dramatically increasing memory access latency and memory interference. Memory hierarchies with multiple levels were proposed, including caches for prefetching and reusing data, which in turn required developing coherence protocols and consistency models to reduce the burden on programmers of managing data values across the system. This is reflected in the significant increase in related topic coverage from 1981-85 to 1986-90, as shown in Figure~\ref{fig:1981-1985}(c). We also observe that, as shown in Figure~\ref{fig:1986-1990} and Figure~\ref{fig:1991-1995}, this trend continued even into the early 1990s.
In the 1980s, the RISC movement, which started with the CRAY-1 machine and the IBM 801 project, advocated ISAs with simple instructions, resulting in uniform instructions that are easier to decode and pipeline. The debate about the advantages and disadvantages of simpler instructions made \emph{ISA} one of the top topics during the period of 1986-90, as shown in Figure~\ref{fig:1986-1990}.
The RISC designs matched well with the transistor budget of the microprocessor chips during this time. Companies began to produce chips based on new ISAs such as SPARC by Sun Microsystems, MIPS by MIPS, Inc., Spectrum by Hewlett-Packard, and the i960 by Intel. However, the simpler instruction sets also increased the demand for instruction-fetch memory bandwidth. Pipelining allowed the CPU clock frequency to improve much faster than that of the memory system. As a result, the memory system began to become an important bottleneck in overall system performance, and many researchers began to pay attention to memory accesses/references, virtual addresses, memory systems, main memory, and memory hierarchies.
As the number of transistors further increased over time following Moore's Law, there was also a resurgence of interest in cache design. Although caches were used extensively in mainframe computers and minicomputers in the 1970s, microprocessors started to have a barely sufficient number of transistors in the 1980s to incorporate caches on chip to mitigate the memory access bottleneck. This was reflected in the increased ISCA coverage of topics such as cache memories, instruction caches, cache hierarchy, block sizes, cache misses, and trace-driven simulation for cache performance studies (see Figure~\ref{fig:cloud91-95}).
From 1986 to 1990, researchers also published intensively on architectures that supported Prolog, a programming language for rule-based inference in artificial intelligence applications, as shown in Figure~\ref{fig:1986-1990(top)}. This was mostly stimulated by the increased DARPA funding in the early 1980s for AI architecture projects in response to the Japanese Fifth Generation Computer Systems project. However, the interest in AI and Prolog would soon diminish due to the lack of practical AI applications.
\subsection{ISCA in the 1990s}
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca1990.png}
\caption{Word cloud visualization of topics - 1991-95}
\label{fig:cloud91-95}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop1990.png}
\caption{Top topics 1991-95}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.46\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta1995.png}
\caption{Topics with the most change in coverage from 1991-95 to 1996-2000}
\end{subfigure}
\caption{Top topics and trends, 1991-95}
\label{fig:1991-1995}
\end{figure}
In the early 1990s, computer architecture researchers continued to publish extensively on \emph{shared memory} and \emph{shared memory multiprocessors}. The increasing commercial use of \emph{database} applications and scientific applications in the early 1990s further stimulated studies on \emph{shared memory} server designs and \emph{message-passing} clusters. Both trends eventually disrupted mainframes in the database market and the traditional vector supercomputers in the scientific computing market. However, it is also interesting to note that the coverage of these topics in ISCA would decrease dramatically in the next 5-year period.
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca1995.png}
\caption{Word cloud visualization of topics - 1996-2000}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop1995.png}
\caption{Top topics 1996-2000}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta2000.png}
\caption{Topics with the most change in coverage from 1996-2000 to 2001-05}
\end{subfigure}
\caption{Top topics and trends, 1996-2000}
\label{fig:1996-2000}
\end{figure}
Thanks to relentless scaling according to Moore's Law, the number of transistors available to industry design teams increased to a level that allowed designers to adopt techniques that had previously been used only in mainframe computers and supercomputers. Research in the 1980s set the foundation for exception handling in processors using out-of-order and speculative execution techniques. During the 1991-95 period, as shown in Figure~\ref{fig:1991-1995}, computer architects used newly available transistors to build \emph{high-performance processors} with hardware schedulers for detecting \emph{instruction-level parallelism}, performing \emph{instruction re-ordering}, \emph{branch predictions} and \emph{speculative execution}, together with \emph{pipelining} for increased \emph{clock frequency}, and also to bridge the gap between memory latency and processing time. These innovations resulted in deeper pipelines and wider issue widths in a generation of superscalar processors.
The industry design of superscalar processors reached its peak during the late 1990s, as Intel, AMD, MIPS/SGI, Sun, IBM, and Hewlett-Packard all came out with superscalar microprocessor products in the mid-1990s. As shown in Figure~\ref{fig:1996-2000}, \emph{high-performance} processor design techniques such as \emph{hardware speculation} and \emph{branch prediction} received significant attention from the ISCA community during 1996-2000. The superscalar processors introduced at that time included the Intel Pentium, the MIPS R8000 and the IBM POWER series.
However, the increased clock frequency and execution throughput of these processors placed even more pressure on the \emph{memory system}, which motivated more studies on \emph{memory access}, \emph{data caches}, \emph{cache sizes}, \emph{block sizes}, and \emph{miss rates/ratios}. During this time, the computer architecture community started to converge on using trace-driven simulation based on \emph{SPEC benchmarks} for studying processor pipelines as well as \emph{cache memories}.
It is interesting to note, though, that, as shown in Figure~\ref{fig:1996-2000}(c), the coverage of \emph{instruction-level parallelism}, \emph{branch prediction}, \emph{superscalar processor}, \emph{shared memory}, and \emph{memory systems} would drop significantly in the next 5-year period.
The decade from 1991 to 2000 was a dark age for research in massively parallel computing systems and special-purpose acceleration hardware. The industry not only rode Moore's Law with its exponentially increasing number of transistors, but also started to deviate from Dennard scaling, trading more power consumption for super-linear performance improvement over time. This strategy resulted in such fast advancement in microprocessor performance that it eclipsed any benefit of massively parallel computing systems or special-purpose hardware accelerators. Thus, the term ``Killer Micros'' became prominent, indicating that the fast-advancing microprocessor performance killed off the research and development of massively parallel computing and special-purpose hardware acceleration. But all of this would change in the next decade, as we discuss below.
\subsection{ISCA in the 2000s}
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca2000.png}
\caption{Word cloud visualization of topics 2001-05}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop2000.png}
\caption{Top topics 2001-05}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta2005.png}
\caption{Topics with the most change in coverage from 2001-05 to 2006-10}
\end{subfigure}
\caption{Top topics and trends, 2001-2005}
\label{fig:2001-2005}
\end{figure}
The fact that \emph{SPEC} became the top topic phrase for the period of 2001-05 indicates that by this time the computer architecture community had fully embraced quantitative approaches to computer architecture research. We observe that \emph{SPEC CPU2000}, or its predecessor SPEC CPU95, became the de facto standard for measuring processor and/or memory-hierarchy performance in the 2000s. The benchmarks were used to measure a wide variety of system designs, including \emph{multiprocessing systems} with multi-level \emph{memory hierarchies} and \emph{memory address translation} overheads in cloud and server applications, during a period when the internet was becoming popular.
The early 2000s also saw the peak of \emph{speculation techniques} in superscalar processors, VLIW processors, and memory systems. The industry was producing VLIW/EPIC processors such as the Intel Itanium. Computer architecture researchers were publishing extensively on architectural support for compile-time control \emph{speculation} and data \emph{speculation}. Researchers also published extensively on \emph{register file} design for both VLIW/EPIC and wide-issue superscalar processors. These processors required register files with a very large number of ports to support simultaneous accesses by many instructions at different stages of the processor pipeline. This triggered the coverage of large register files with multiple read and write ports. A number of ISCA publications at that time looked at various aspects of register file design, including organization, access time, power consumption and cost.
From 1995 to 2005, the industry achieved super-linear scaling of \emph{high performance} designs, especially in clock frequency, at the cost of increasing power consumption. As we mentioned earlier, this was accomplished by deviating from the Dennard scaling principle of linearly scaling performance in each generation of technology while keeping power consumption constant. By 2005, the power consumption of microprocessors had reached the limit of practical heat dissipation mechanisms. As a result, computer architecture researchers began to focus on \emph{energy efficiency}, which would be one of the highly ranked ISCA topics with the most increased coverage from 2001-05 to 2006-10, as shown in Figure~\ref{fig:2001-2005}(c). On the other hand, the coverage of superscalar processors and register files would drop significantly in the next 5-year period. It is interesting to note that Figure~\ref{fig:2001-2005}(c) presents one of the most dramatic shifts of topic coverage throughout ISCA history.
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca2005.png}
\caption{Word cloud visualization of topics - 2006-2010}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop2005.png}
\caption{Top topics - 2006-2010}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta2010.png}
\caption{Topics with the most change in coverage from 2006-10 to 2011-15}
\end{subfigure}
\caption{Top topics and trends, 2006-10}
\label{fig:2006-2010}
\end{figure}
The period of 2006-2010 is the start of the era of \emph{chip multiprocessors}, a.k.a. \emph{multicore processors}. Before the availability of commercial multicore processors, superscalar processors were packed with more and more \emph{functional units}, and instructions were dispatched to the available functional units. However, this way of scaling performance hit a practical barrier as industry design teams struggled to exploit sufficient instruction-level parallelism to productively utilize additional functional units for increased performance. The industry made a major pivot from uni-processor clock frequency and instruction-level parallelism scaling to multicore scaling around 2003. The clock frequency and instruction-level parallelism of each CPU core would largely remain the same, whereas the number of cores would increase over time. In fact, in some designs the clock frequency was even reduced to lower power consumption and accommodate more cores within a given power budget. This turn away from clock frequency and instruction-level processing was reflected in the reduced coverage of topics like \emph{superscalar processor} and \emph{register file} from 2001-05 to 2006-10, as shown in Figure~\ref{fig:2001-2005}(c).
IBM launched the POWER4, the first dual-core processor, in 2001. Compaq developed the Piranha system for high-performance servers by integrating eight simple Alpha processor cores along with a two-level cache hierarchy onto a single chip. Sun Microsystems launched Niagara, an eight-core Web server CPU, in 2005. On-chip multiprocessing systems were thus studied in detail by the ISCA community in varying contexts, including cache hierarchies, power consumption, communication overheads, thread-to-core assignment for Simultaneous Multithreading (SMT), soft errors under low voltage, cache coherence protocols, interconnect networks, QoS, task scheduling and power management. The strong interest of the ISCA research community in supporting this movement was reflected in the high coverage of topics such as \emph{chip multiprocessors}, \emph{multicore processors}, \emph{power consumption}, and \emph{energy efficiency} during the period of 2006-10, as shown in Figure~\ref{fig:2006-2010}. It is also interesting to note that the term \emph{chip multiprocessor} gave way to \emph{multicore processor}, which was reflected in the drop of its coverage from 2006-10 to 2011-15, as shown in Figure~\ref{fig:2006-2010}(c).
\subsection{ISCA in the 2010s}
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca2010.png}
\caption{Word cloud visualization of topics, 2011-2015}
\label{fig:wordcloud-2011-2015}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop2010.png}
\caption{Top topics, 2011-2015}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendDelta2015.png}
\caption{Topics with the most change in coverage from 2011-15 to 2016-18}
\end{subfigure}
\caption{Top topics and trends, 2011-15}
\label{fig:2011-2015}
\end{figure}
During the 2011-15 period, as shown in Figure~\ref{fig:2011-2015}, \emph{power consumption} and \emph{energy efficiency} became the top topics for ISCA authors. Meanwhile, advances in mobile devices and internet technology fueled an exponential growth in the tech industry, with a variety of applications that generated huge amounts of data. GPUs with hundreds of processing cores, already in use by the industry for graphics processing, proved to be high-throughput devices for processing large amounts of data. They became more general purpose with the introduction of the CUDA programming model in 2007. The GPU equivalents of CPU cores, called Streaming Multiprocessors, run at about half the clock frequency of CPU cores to achieve higher energy efficiency. The savings in power consumption enabled GPU designers to provision much higher memory bandwidth and thread-level parallelism. A major challenge was to program these massively parallel processors.
A movement to empower application developers to develop parallel applications started in 2007, with libraries and education materials from NVIDIA and academic institutions such as the University of Illinois at Urbana-Champaign, the University of California, Davis and the University of Tennessee, Knoxville. By 2011, there was strong momentum in GPU libraries and applications in high-performance computing. During the period of 2011-15, China, the US, and Japan began to build top supercomputers based on CUDA GPUs. Examples were Tianhe-1 in China, Tsubame at Tokyo Tech, Titan at Oak Ridge National Laboratory, and Blue Waters at the University of Illinois at Urbana-Champaign.
These powerful GPU solutions and the CUDA programming model support also paved the way for machine learning. Using CUDA GPUs, a team from the University of Toronto trained AlexNet on 1.2 million images and won the ImageNet competition in 2012 by a large margin over the second-place team. This victory ignited wide interest in neural networks for computer vision and other cognitive applications. NVIDIA capitalized on its investment and quickly developed cuDNN. Several other fields, such as personalized medicine, genomics, physics and economics, also realized that GPUs may help process large amounts of existing data for scientific breakthroughs. Some of the Top10 supercomputers were also equipped with GPUs. However, power consumption for large computing clusters was still a bottleneck. In 2011-15, we saw that power consumption was the biggest concern, and the number of ISCA publications addressing this challenge in the context of data centers increased. Research ideas and solutions were proposed by the ISCA community, from industry and academia, at different layers of the computing stack: circuits, architecture and algorithms.
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/isca2015.png}
\caption{Word cloud visualization of topics - 2016-18}
\label{fig:wordcloud-2016-2018}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/iscaTechtrendTop2015.png}
\caption{Top topics, 2016-2018}
\end{subfigure}
\caption{Top topics and trends, 2016-18}
\label{fig:2016-2018}
\end{figure}
In the most recent period of 2016-18, we saw that machine learning based on neural networks has made its way into many applications and has become part of real-life systems. In Figure~\ref{fig:wordcloud-2016-2018}, we see this influence in the ISCA publication trends. Computer architects acted quickly and addressed the related challenges to build machines that can process cognitive workloads in an efficient manner. Naturally, because of the increasing interest in machine learning and neural networks, GPUs gained tremendous attention from the ISCA community. GPUs have made training and learning more practical in terms of time and energy consumption. The desire to train more models has motivated the development of specialized hardware for tensor processing, referred to as TPUs and Tensor Cores.
However, processing large amounts of data with GPUs and tensor processing hardware is efficient only when there is enough reuse of data, primarily because of the memory wall. For applications with random data access patterns and poor cache hit rates, GPUs are not suitable. Even for applications with regular access patterns, what we have witnessed so far is underutilized GPUs because of insufficient data reuse in the applications. As data grows exponentially, we should also expect an exponential growth in storage requirements, data movement and energy consumption. All these concerns are clearly visible in the ISCA publication trends during 2016-18. None of the existing technologies has been shown to scale to processing this large amount of data within the required power and throughput budgets. Radical changes are required from top to bottom, both in software and hardware.
\section{Outlook of Future Computer Architecture Research}
One of the key questions for the computer architecture community in the coming decade is how computer architects will scale performance by 100x while staying within the required power and cost budgets. For storing the large amounts of data to be processed, we expect a departure from existing storage devices to relatively lower-cost and lower-latency storage solutions. SSDs are already widely used in consumer electronics and are now making their way into high-performance computing solutions. Another trend is for main memory to be extended through 3D integration. Today, top-of-the-line NVIDIA GPUs are already equipped with high-bandwidth stacked DRAM. To restrict the amount of data movement, an emerging trend towards in-memory-computing and near-memory-computing solutions can be observed.
In a few years, we expect to witness a compute hierarchy parallel to the existing memory hierarchy. However, a million-dollar question is how to address the complexity of compute logic at different levels of the compute hierarchy given a variety of applications. One solution along this line is to put a logic layer underneath stacked DRAM, which has captured a lot of attention from industry and academia. However, what compute logic goes where in the compute hierarchy is still an open research question. We will also likely witness parallelism from a massive number of simple, energy-efficient processing cores running at lower clock frequencies, distributed not only across all levels of the memory hierarchy but at multiple levels within a memory die, to reach bandwidth at the scale of TB/s. One fundamental idea is to restrict data movement to local compute units when possible, increase parallelism, lower power consumption, and achieve high bandwidth and high throughput.
We will continue to see accelerators working along with general-purpose processors. In the end, the whole system will consist of heterogeneous computing cores interconnected via an interconnection fabric. One of the daunting challenges is to program such devices, which requires rethinking the whole software stack to ease programmers' lives. Since the late 1970s, we have been developing high-level languages. We think that computer scientists are now going to have another iteration of this design cycle. We will see the development of even higher-level languages, including domain-specific languages (DSLs), and new compilers that support a variety of those domain-specific languages and hardware platforms using a hierarchy of intermediate representations. In the next few years, we will also likely see an increasing interest in using machine learning as a tool to depart from the paradigm of hand-crafted rules towards semi-automation. It may help compilers, schedulers, branch predictors and other computing system operations learn from the behavior of applications running in the cloud or at the edge. However, the benefits of such a paradigm shift are still to be investigated.
\section{About this Study}
This study began with a research project, called DISCVR, conducted at the IBM-ILLINOIS Center for Cognitive Computing Systems Research (c3sr.com). The goal of DISCVR was to build a practical NLP-based AI solution pipeline to process large amounts of PDF documents and to evaluate end-to-end NLP capabilities in understanding large amounts of unstructured data such as PDF files. This in turn helps us better understand the computation patterns and requirements of modern AI solutions on the underlying computing systems.
While building such a prototype, an early use case came to us thanks to the 2017 IEEE/ACM International Symposium on Microarchitecture (MICRO-50) Program Co-chairs, Drs. Hillery Hunter and Jaime Moreno. They asked us whether we could perform a data-driven analysis of the past 50 years of MICRO papers and show some interesting historical perspectives on MICRO's 50 years of publication. Because of the limited amount of time, we were only able to produce some preliminary results, which we delivered in an invited talk during that year's MICRO opening reception. The talk generated some interesting discussions, but the lack of insights from those early results limited the usefulness of that work.
That undertaking has, however, planted a seed in our C3SR center. We learned two important lessons from that experience: (1) building an AI solution that truly understands unstructured data is hard, in spite of the many claimed successes in natural language understanding; and (2) providing a data-driven perspective on computer architecture research is a very interesting and fun project.
Since then, we have continued to push those two frontiers of research at the C3SR center. On the first topic, we built a prototype paper-review matching system, called the C3SR System for Reviewer Assignment (CSRA), and used that system to help the Program Co-chairs of ISCA 2019, Drs. Hillery Hunter and Erik Altman, with their paper review assignment task. On the second topic, we decided to conduct a more thorough study based on all past ISCA papers, which resulted in this article.
We recognize that we have just scratched the surface of natural language understanding of unstructured data, and there are many more aspects that we can improve (and we are still working on them). But even with our current study, we felt there were enough interesting findings to be worth sharing with the community. Hence we decided to write this article to summarize our findings so far, based only on ISCA publications. Our hope is to generate further interest from the community in this topic, and we welcome collaboration from the community to deepen our understanding both of computer architecture research and of the challenges of NLP-based AI solutions. For example, a similar study can be conducted for other conferences (such as MICRO) and in other research areas (such as CVPR and SIGKDD).
\section{Acknowledgement}
We would like to thank Mr. Abdul Dakkak and Ms. Cheng Li from the C3SR center for their early work on DISCVR and their contribution to the MICRO-50 data analysis. We would also like to thank Mr. Jong Yoon Lee, Ms. Hongyu Gong and Ms. Renee Zhu from the C3SR center for their contributions to the CSRA project, which had a direct impact on the data analysis used in this article. We would also like to thank Drs. Hillery Hunter, Jaime Moreno and Erik Altman from IBM Research for their encouragement and feedback on our work.
\section{Introduction}
{}~~~~~The membership of solutions in a certain function space
is a characteristic property used in studying the asymptotic
behavior of solutions of differential equations.
Many works are concerned with the connection between the properties
of solutions and stability.
We mention here the monographs [1-3] on ordinary differential equations
and the works [4-9] on functional differential equations.
For impulsive equations this problem was investigated
in [10-12] in the ordinary case and in [13] for equations
with delay.
The present paper deals with the following problems:
admissibility of a pair of spaces for a differential operator,
i.e. conditions under which this operator acts between the corresponding
function spaces;
admissibility of a pair of spaces for a differential equation,
i.e. conditions under which all solutions belong to a certain space
provided that the right-hand side belongs to the other space;
and the connection between admissibility and exponential stability
for impulsive differential equations.
All function spaces considered are the space of locally integrable functions
and its subspaces.
Explicit conditions for the existence of integrable solutions and
for exponential stability are obtained as corollaries of these results.
The present paper is organized as follows.
In section 2 the equation studied is described and the hypotheses
are introduced.
Section 3 deals with auxiliary results.
In particular the solution representation formula is given and
the properties of certain spaces of functions differentiable
on the half-line are described.
The proofs of these results are presented in the concluding section 7.
In section 4 admissibility of a pair of spaces is considered.
In section 5 stability problems are investigated.
Finally, section 6 gives explicit stability results.
In conclusion we note that the present work can be
treated as a continuation of [13], which dealt with the same problems in
the space of essentially bounded functions.
\section{Preliminaries}
{}~~~~~Let $0 = \tau_0 < \tau_1 < \dots $ be fixed points,
$\lim _{j \rightarrow \infty} \tau _j = \infty , $
${\bf R}^n$ be the space of $n$-dimensional column vectors
$x = col (x_1, \dots ,x_n)$ with the norm
$\parallel x \parallel = \max _{1 \leq i \leq n} \mid x_i \mid $,
by the same symbol $\parallel \cdot \parallel$ we
denote the corresponding matrix norm,
$E_n$ is an $n \times n$ unit matrix,
$\chi _e : [0, \infty ) \rightarrow {\bf R}
$ is the characteristic function of the set $e$:
$\chi_e (t) = 1$ if $t \in e$, and $\chi_e (t) = 0$
otherwise.
$\bf L$ is a space of Lebesgue measurable functions
$x: [0, \infty) \rightarrow {\bf R}^n$ integrable
on any finite segment $[t, t+1]$,
${\bf L}_{\infty} \subset {\bf L}~ $ is a Banach space of essentially bounded
functions $x: [0, \infty ) \rightarrow
{\bf R}^n , \parallel \! x \! \parallel _{{\bf L}_{\infty}} =
\mbox{ess} \sup _ {t \geq 0} \parallel x(t) \parallel ,$
\newcommand{{\bf L}_p}{{\bf L}_p}
\newcommand{{\bf D}_p}{{\bf D}_p}
\newcommand{{\bf M}_p}{{\bf M}_p}
${\bf L}_p \subset {\bf L} ~(1 \leq p < \infty )$
is a Banach space of functions $x: [0, \infty) \rightarrow
{\bf R}^n$ such that $\int_0^{\infty} \parallel x(t) \parallel ^p dt
< \infty $, with a norm
$$ \parallel x \parallel _{{\bf L}_p} =
\left( \int_0^{\infty} \parallel x(t) \parallel ^p dt \right) ^{1/p}, $$
${\bf M}_p \subset {\bf L}$ is a Banach space of functions
$x: [0, \infty) \rightarrow {\bf R}^n$ such that
$$\mu = \sup_{t>0} \left( \int_t^{t+1}
\parallel x(s) \parallel ^p ds \right) ^{1/p} < \infty, ~
1 \leq p < \infty, ~ \parallel x \parallel _{{\bf M}_p} = \mu. $$
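Note that ${\bf M}_p$ contains both ${\bf L}_p$ and ${\bf L}_{\infty}$:
for $x \in {\bf L}_{\infty}$ we have
$$ \left( \int_t^{t+1} \parallel x(s) \parallel ^p ds \right) ^{1/p}
\leq \parallel x \parallel _{{\bf L}_{\infty}} , $$
while for $x \in {\bf L}_p$ the left-hand side does not exceed
$\parallel x \parallel _{{\bf L}_p}$; hence
$\parallel x \parallel _{{\bf M}_p} \leq
\min \{ \parallel x \parallel _{{\bf L}_{\infty}},
\parallel x \parallel _{{\bf L}_p} \} $.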
${\bf PAC} (\tau _1, \dots , \tau_j, \dots ) $
is a linear space of
functions $x: [0, \infty) \rightarrow {\bf R}^n$
absolutely continuous on each interval $[\tau_j, \tau_{j+1} )$,
with jumps at the points $\tau_j$.
We assume that functions in {\bf PAC} are right continuous.
The same function spaces will be considered for intervals different
from $[0, \infty)$ if it does not lead to misunderstanding.
For spaces of matrix valued functions we use the same notation as for
vector valued functions.
We consider a delay differential equation
\begin{equation}
\dot{x} (t) + \sum_{k=1}^m {A_k (t) x[h_k(t)]} =
f(t),~ t \geq 0, ~ x(t) \in {\bf R}^n , \label{e1}
\end{equation}
\begin{equation}
x(\xi ) = \varphi (\xi), \xi < 0, \label{e2}
\end{equation}
with impulsive conditions
\begin{equation}
x(\tau _j) = B_j x(\tau _j - 0) + \alpha_j ,~ j = 1,2, \dots, \label{e3}
\end{equation}
under the
following assumptions:
(a1) $0 = \tau_0 < \tau_1 < \tau _2 < \dots $ are fixed points,
$\lim_{j \rightarrow \infty} \tau _j = \infty $ ;
(a2) $ f \in {\bf L}, ~ A_k \in {\bf L}, ~k=1,2, \dots, m $ ;
(a3) $h_k : [0,\infty) \rightarrow {\bf R} $
are Lebesgue measurable functions, $$h_k (t) \leq t,~
k = 1, \dots , m;$$
(a4) $\varphi : (- \infty, 0) \rightarrow {\bf R}^n $
is a Borel measurable bounded function;
(a5) $ B_j \in {\bf R}^{n \times n}, ~B = \sup_j \parallel B_j \parallel
< \infty;$
(a6) $K = \sup _{t,s >0}
\left\{ \frac{i(t,s)}{t-s}, ~i(t,s) \neq 1 \right \} < \infty. $
Here $i(t,s)$ is the number of points $\tau _j$ belonging
to the interval $(s,t).$
We denote $b = \max \{B, 1 \}, ~ I = \max \{K ,1 \}. $
\vspace{5 mm}
\underline{\sl Remark.}
One can easily see that (a6) is satisfied if
$\tau_{j+1} - \tau_j \geq \rho > 0$.
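Indeed, if $\tau_{j+1} - \tau_j \geq \rho$ and $i(t,s) = j \geq 2$,
then the interval $(s,t)$ contains $j$ consecutive points
$\tau_{k+1} < \dots < \tau_{k+j}$, hence
$$ t-s > \tau_{k+j} - \tau_{k+1} \geq (j-1) \rho ,
{}~~\frac{i(t,s)}{t-s} < \frac{j}{(j-1) \rho} \leq \frac{2}{\rho} , $$
so that $K \leq 2/ \rho $.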
\underline{\sl Definition.}
A function $x \in {\bf PAC}$
is said to be {\bf a solution of the impulsive equation
(1),(2),(3)} with the initial function $\varphi (t)$
if (1) is satisfied for almost all $t \in [0, \infty)$
and the equalities (3) hold.
\vspace{5 mm}
Below we use a linear differential operator
\begin{equation}
({\cal L} x)(t) = \dot{x} (t) + \sum_{k=1}^m {A_k (t) x[h_k(t)]} ,
{}~x(\xi) = 0, ~ \xi < 0. \label{e4}
\end{equation}
\section{Auxiliary results}
{}~~~~~In [13] the solution representation formula for (\ref{e1})-(\ref{e3})
is presented under more restrictive conditions than (a1)-(a6).
Precisely, instead of (a2) it was assumed that $f$ and $A_k$ are in
${\bf L}_{\infty}$.
However, the proof of this formula carries over to the more
general case $f,A_k \in {\bf L}$.
Thus the following result is valid.
\newtheorem{uess}{Lemma}
\begin{uess}
{\em [13]} ~Suppose the hypotheses (a1)-(a6) hold.
Then there exists one and only one solution of the equation (\ref{e1})
-(\ref{e3})
satisfying $x(0)= \alpha_0$ and it can be presented as
\begin{equation}
x(t) = \int_0^t {X(t,s) f(s) ds}
- \sum_{k=1}^m {\int_0^t {X(t,s) A_k(s) \varphi [h_k(s)] ds}} +
\sum_{0 \leq \tau_j \leq t} {X(t, \tau_j) \alpha _j}, \label{e5}
\end{equation}
with $\varphi (\zeta) = 0,$ if $\zeta \geq 0. $
The matrix $X(t,s)$ in (\ref{e5}) for a fixed $s$ as a function of $t$
is a solution of the problem
$$
\dot{x} (t) + \sum_{k=1}^m {A_k (t) x[h_k(t)]} =
0,~ t \geq s, ~ x(t) \in {\bf R}^{n \times n},
$$ $$
x(\xi ) = 0,~ \xi < s,~ x(s)=E_n; ~
x(\tau _j) = B_j x(\tau _j - 0), ~ \tau_j > s .
$$
We assume $X(t,s) = 0, ~t<s$.
\end{uess}
\underline{\sl Definition.}
The matrix $X(t,s)$ is said to be {\bf a fundamental matrix},
$X(t,0)$ is said to be {\bf a fundamental solution}.
The operator
$$(Cf)(t) = \int_0^t X(t,s) f(s) ds $$
is said to be {\bf the Cauchy operator} of the equation (1)-(3).
For studying the equation (\ref{e1})-
(\ref{e3}) we introduce an auxiliary equation
\begin{equation}
({\cal L}_0 x)(t) \equiv
\dot{x} (t) + a x(t) = z(t), ~t \geq 0, ~x(t) \in {\bf R}^n,
\label{e6}
\end{equation}
\begin{equation}
x(\tau_j) = B_j x(\tau_j - 0). \label{e7}
\end{equation}
By
$$(C_0 z)(t) = \int_0^t X_0 (t,s) z(s) ds $$
the Cauchy operator of the equation (\ref{e6}),(\ref{e7}) is denoted.
\begin{uess}
{\em [13]} ~Suppose the hypotheses (a5) and (a6) hold
and $\nu = a- I \ln b > 0$.
Then
$$ \parallel X_0 (t,s) \parallel \leq e^{- \nu (t-s)}. $$
\end{uess}
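For the reader's convenience we sketch the computation behind this
estimate. Between the impulse points the homogeneous equation (\ref{e6})
contributes the scalar factor $e^{-a(t-s)}$, and each point
$\tau_j \in (s,t]$ contributes the jump matrix $B_j$, so that
$$ X_0 (t,s) = e^{-a(t-s)} \prod_{s < \tau_j \leq t} B_j , $$
where the product is taken in the order of decreasing indices. Hence
$$ \parallel X_0 (t,s) \parallel \leq e^{-a(t-s)} \, b^{\, i(t,s)} , $$
and the estimate of the lemma is then obtained by bounding $i(t,s)$
with the help of the hypotheses (a5),(a6); see [13] for the details.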
For each space ${\bf L}_p$ we construct a subspace of ${\bf PAC}$
as follows.
Denote by ${\bf D}_p$ a linear space of functions $x \in {\bf PAC}$
satisfying (\ref{e7}) and such that $x \in {\bf L}_p, ~\dot{x} \in {\bf L}_p $.
This space is normed, with a norm
$$ \parallel x \parallel _{{\bf D}_p} = \parallel x \parallel _{{\bf L}_p} +
\parallel \dot{x} \parallel _{{\bf L}_p} . $$
\begin{uess}
Suppose the hypotheses (a5) and (a6) hold.
Then ${\bf D}_p, ~1 \leq p < \infty,$ is a Banach space.
\end{uess}
The proof is presented in section 7.
\underline{\sl Remark.} Lemma 3 remains valid
if ${\bf L}_p$ is replaced by a Banach space ${\bf B} \subset {\bf L}$
provided that the topology in ${\bf B}$ is stronger than the
topology in ${\bf L}$.
In particular ${\bf B} = {\bf L}_{\infty}$ or
${\bf B} = {\bf M}_p$ are suitable.
The following assertion supplements Lemma 3.
\begin{uess}
Suppose the hypotheses (a5) and (a6) hold and $a- I \ln b > 0$.
Then the set
$\tilde{{\bf D}_p} = \{ x \in {\bf PAC} \mid ~ \dot{x} + ax \in {\bf L}_p ,
{}~x(\tau_j)=B_j x(\tau_j -0) \} $
coincides with ${\bf D}_p$, and the norm
\begin{equation}
\parallel x \parallel _{\tilde{{\bf D}_p}} = \parallel x(0) \parallel +
\parallel \dot{x} + ax \parallel _{{\bf L}_p} \label{e8}
\end{equation}
is equivalent to the norm $\parallel \cdot \parallel _{{\bf D}_p} $.
\end{uess}
The proof is also in section 7.
\section{Admissibility of pairs}
{}~~~~~\underline{\sl Definition.}
The pair (${\bf D}_p, {\bf L}_p$) is said to be {\bf admissible
for a differential operator } ${\cal L}: {\bf PAC} \rightarrow
{\bf L}$ if ${\cal L} ({\bf D}_p) \subset {\bf L}_p$.
\underline{\sl Definition.}
Suppose the initial function $\varphi$ satisfies the hypothesis (a4)
and it is fixed.
The pair $({\bf L}_p, {\bf D}_p)$ is said to be {\bf admissible} for the equation
(1)-(3) if for any $f \in {\bf L}_p, ~ \alpha_j \in {\bf R}^n$
the solution is in ${\bf D}_p$.
The pair $({\bf L}_p, {\bf D}_p)$ is said to be
{\bf admissible on the whole} for the equation
(\ref{e1})-(\ref{e3})
if for any $f \in {\bf L}_p, ~ \alpha_j \in {\bf R}^n$
and any initial function $\varphi$
satisfying (a4) the solution is in ${\bf D}_p$.
\underline{\sl Remarks.}
1. For ordinary differential equations the admissibility of the pair
$({\bf L}_p, {\bf L}_{\infty})$ is usually considered.
However this admissibility is a consequence of the admissibility of the pair
$({\bf L}_p, {\bf D}_p)$.
In fact, if $x \in {\bf D}_p$ then $\dot{x} + ax \in {\bf L}_p$
for any $a \in {\bf R}$.
Let $a - I \ln b > 0$.
Then by Lemma 2 $x \in {\bf L}_{\infty}$,
therefore the pair $({\bf L}_p,{\bf L}_{\infty})$ is admissible
for the differential equation.
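More precisely, by Lemma 1 applied to (\ref{e6}),(\ref{e7}),
$$ x(t) = X_0 (t,0) x(0) + \int_0^t X_0 (t,s) z(s) ds , ~~
z = \dot{x} + ax , $$
and for $1 < p < \infty$ Lemma 2 and the H\"{o}lder inequality yield
$$ \parallel x(t) \parallel \leq \parallel x(0) \parallel +
\left( \int_0^t e^{- \nu q (t-s)} ds \right) ^{1/q}
\parallel z \parallel _{{\bf L}_p} \leq
\parallel x(0) \parallel + ( \nu q )^{-1/q}
\parallel z \parallel _{{\bf L}_p} , $$
where $\nu = a - I \ln b$ and $q = p/(p-1)$; the case $p=1$ is even
simpler.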
Besides this, under our approach the admissibility of the pair $({\bf L}_p,
{\bf D}_p)$
is treated more naturally than that of the pair $({\bf L}_p, {\bf L}_{\infty})$.
2. It should be noted that the recent monograph of C. Corduneanu
[9] deals
with admissibility of pairs of spaces for integrodifferential equations
(and for general functional differential equations as well).
\vspace{5 mm}
Consider operators
\begin{equation}
(Hx)(t) = \sum_{k=1}^m A_k(t) x[h_k(t)];
{}~x(\xi)=0,~\xi <0, \label{e9}
\end{equation}
$$({\cal L} x)(t) = \dot{x}(t)+(Hx)(t) .$$
Under the hypotheses (a1)-(a3), (a5)-(a6) $H$ acts from
${\bf PAC}$ to ${\bf L}$.
\newtheorem{guess}{Theorem}
\begin{guess}
Suppose the hypotheses (a1)-(a3), (a5),(a6) hold and there exists
$\nu > 0$ such that $A_k^{\nu} \in {\bf M}_p$, where
$$A_k^{\nu} (t) = e^{\nu [t-h_k (t)]} A_k (t), ~1 \leq p< \infty
. $$
Then operators $H$ and ${\cal L}$ act from ${\bf D}_p$ to ${\bf L}_p$
and they are bounded.
\end{guess}
{\sl Proof.}
Let $a = \nu + I \ln b$ and $x \in {\bf D}_p$.
Then $z = \dot{x} + ax \in {\bf L}_p$ and $x$ can be presented as
$$x(t)=X_0(t,0)x(0) + \int_0^t X_0(t,s)z(s)ds . $$
In the sequel we set $y[h(t)] = 0$ if $h(t) < 0$,
and $a^+ = \max \{ a,0 \} $.
Thus we obtain
$$ (Hx)(t) = $$
\begin{equation}
= \sum_{k=1}^m A_k(t) X_0(h_k(t),0) x(0) +
\sum_{k=1}^m \int_0^{h_k^+ (t)} \!\!\!\! A_k(t) X_0(h_k(t),s) z(s)ds .
\label{e10}
\end{equation}
First we will show that the matrix valued function
$$F(t) = \sum_{k=1}^m A_k (t) X_0(h_k(t),0) $$
is in ${\bf L}_p$.
To this end, note that by Lemma 2
$$ \parallel A_k (t) X_0 (h_k(t),0) \parallel \leq
\parallel A_k (t) \parallel e^{- \nu h_k (t)} = $$
$$ = \parallel A_k (t) e^{\nu (t-h_k(t))} \parallel
e^{- \nu t} = \parallel A_k^{\nu } (t) \parallel e^{- \nu t} . $$
Therefore
$$ \int_0^{\infty} \parallel A_k^{\nu} (t) \parallel ^p e^{- \nu pt}
dt \leq \sup_{n \geq 0} \int_n^{n+1} \parallel A_k^{\nu} (t) \parallel
^p dt \sum_{n=0}^{\infty} e^{- \nu pn} \leq $$
$$ \leq \frac{\parallel A_k^{\nu} \parallel_{{\bf M}_p}^p }
{1- e^{- \nu p}} . $$
Hence $F \in {\bf L}_p$.
Denote
$$ (Pz)(t) = \sum_{k=1}^m
\int_0^{h_k^+ (t)} A_k (t) X_0
(h_k(t),s) z(s) ds. $$
We will prove that $P$ acts in ${\bf L}_p$ and it is bounded.
To this end we estimate
$$\parallel (Pz)(t) \parallel \leq \sum_{k=1}^m
\int_0^{h_k^+ (t)} \!\!\! \!\! \parallel A_k (t) e^{\nu [t- h_k (t)]}
\parallel
{}~ e^{- \nu (t-s)} \parallel z(s) \parallel ds = $$
$$ = \sum_{k=1}^m \int_0^{h_k^+ (t)}
\parallel A_k^{\nu} (t) \parallel ~e^{- \nu (t-s)}
\parallel z(s) \parallel ~ds. $$
Let $p=1$.
Then
$$ \parallel Pz \parallel _{{\bf L}_1} \leq
\sum_{k=1}^m \int_0^{\infty} \int_0^t
\parallel A_k^{\nu} (t) \parallel
{}~e^{- \nu (t-s)} \parallel z(s) \parallel ~ds~dt = $$
$$ = \sum_{k=1}^m \int_0^{\infty} \left(
\int_s^{\infty} \parallel A_k^{\nu} (t) \parallel ~e^{- \nu (t-s)}
dt \right) \parallel z(s) \parallel ~ds. $$
Since
$$ \int_s^{\infty} \parallel A_k^{\nu} (t) \parallel
{}~e^{- \nu (t-s)} dt \leq
\sum_{n=[s]}^{\infty} \int_n^{n+1}
\parallel A_k^{\nu} (t) \parallel ~e^{- \nu (t-s)} dt \leq $$
$$ \leq e^{\nu s} \sum_{n=[s]}^{\infty}
e^{- \nu n} \int_n^{n+1} \parallel A_k^{\nu} (t) \parallel ~dt
\leq
e^{\nu s} \parallel A_k^{\nu} \parallel_{{\bf M}_1}
\sum_{n=[s]}^{\infty} e^{- \nu n} \leq $$
$$ \leq \parallel A_k^{\nu} \parallel_{{\bf M}_1} \frac{e^{\nu}}
{1- e^{- \nu}}, $$
we obtain
$$\parallel Pz \parallel_{{\bf L}_1} \leq
\frac{e^{\nu}}{1-e^{- \nu}}
\sum_{k=1}^m \parallel A_k^{\nu} \parallel_{{\bf M}_1}
\parallel z \parallel_{{\bf L}_1}. $$
Here and above $[s]$ denotes the greatest integer not exceeding $s$.
Let $1<p< \infty $.
Then similarly we obtain
$$ \parallel Pz \parallel_{{\bf L}_p} \leq
\sum_{k=1}^m \left[ \int_0^{\infty}
\parallel A_k^{\nu } (t) \parallel^p
\left( \int_0^t e^{- \nu (t-s)} \parallel z(s) \parallel ~ds
\right)^p dt\right] ^{1/p} = $$
$$ = \sum_{k=1}^m \left[ \int_0^{\infty} \parallel A_k^{\nu} (t)
\parallel^p \left( \int_0^t e^{- \nu (t-s)/2}
e^{- \nu (t-s)/2} \parallel z(s) \parallel ~ds \right) ^p dt
\right] ^{1/p} \leq $$
$$ \leq \sum_{k=1}^m \left[ \int_0^{\infty}
\parallel A_k^{\nu} (t) \parallel ^p
\left( \int_0^{\infty} e^{- \nu q (t-s)/2} ds \right) ^{p/q}
\right. \times $$
$$ \times
\left. \left( \int_0^t e^{- \nu p (t-s)/2} \parallel z(s) \parallel ^p ~ds
\right) ~dt
\right] ^{1/p} \leq $$
$$ \leq \left( \frac{2}{\nu q} \right) ^{1/q}
\sum_{k=1}^m \left[ \int_0^{\infty} \int_s^{\infty}
\parallel A_k^{\nu} (t) \parallel ^p
e^{- \nu p (t-s)/2}
\parallel z(s) \parallel ^p dt~ds \right] ^{1/p}, $$
where $q = p/(p-1) $.
By repeating the previous argument we obtain
$$ \parallel Pz \parallel
_{{\bf L}_p} \leq
\left( \frac{2}{\nu q} \right) ^{1/q}
\frac{e^{\nu /2}}{(1 - e^{- \nu p/2})^{1/p}}
\sum_{k=1}^m \parallel A_k^{\nu} \parallel_{{\bf M}_p}
\parallel z \parallel_{{\bf L}_p} . $$
Therefore $Pz \in {\bf L}_p$ and the operator $P: {\bf L}_p \rightarrow {\bf L}_p$
is bounded.
The operator $H$ defined by (\ref{e9}), in view of (\ref{e10}), can be presented
as
$$(Hx)(t) = F(t)x(0) +(Pz)(t),~\mbox{~where~} z=\dot{x} + ax . $$
Since
$$\parallel Hx \parallel_{{\bf L}_p} \leq
\parallel F \parallel_{{\bf L}_p} \parallel x(0) \parallel +
\parallel P \parallel _{{\bf L}_p \rightarrow {\bf L}_p}
\parallel \dot{x} + ax \parallel _{{\bf L}_p} \leq $$
$$ \leq \max \{
\parallel F \parallel_{{\bf L}_p},
\parallel P \parallel_{{\bf L}_p \rightarrow {\bf L}_p} \} \parallel x
\parallel_{\tilde{{\bf D}_p} }, $$
then by Lemma 4 $H$ acts from ${\bf D}_p$ to ${\bf L}_p$
and it is bounded.
One can easily see that the admissibility of the pair $({\bf D}_p,
{\bf L}_p)$ for the operator ${\cal L}$ is equivalent to the admissibility
of this pair for $H$.
The proof of the theorem is complete.
\vspace{5 mm}
\underline{\sl Corollary.}
Suppose the hypotheses (a1)-(a3), (a5),(a6) hold,
$A_k \in {\bf M}_p, ~1 \leq p < \infty$ and there exists $\delta
>0$ such that $t-h_k(t) < \delta,~k=1, \dots, m$.
Then $H$ acts from ${\bf D}_p$ to ${\bf L}_p$ and it is bounded.
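Indeed, by (a3) we have $0 \leq t - h_k (t) < \delta$, hence for any
$\nu > 0$
$$ \parallel A_k^{\nu} \parallel _{{\bf M}_p} \leq
e^{\nu \delta} \parallel A_k \parallel _{{\bf M}_p} < \infty , $$
and Theorem 1 applies.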
\vspace{5 mm}
Now we proceed to $({\bf L}_p, {\bf D}_p)$ admissibility conditions
for the problem (\ref{e1}) - (\ref{e3}).
To this end consider an auxiliary equation of the type (\ref{e1}),
(\ref{e2})
$$\dot{x}(t) + \sum_{k=1}^r H_k (t) x[g_k(t)] = f(t),
{}~t \geq 0, ~x(t) \in {\bf R}^n , $$
\begin{equation}
x(\xi ) = \varphi (\xi ), ~ \xi < 0. \label{e11}
\end{equation}
The equation (\ref{e11}) determines a differential operator ${\cal
M}$
\begin{equation}
({\cal M} x)(t) = \dot{x}(t) + \sum_{k=1}^r H_k (t) x[g_k(t)],
{}~x(\xi)=0, ~\xi < 0.
\label{e82}
\end{equation}
Suppose for this equation the hypotheses (a1)-(a4) hold.
By $C_{\cal M}$ we denote the Cauchy operator of this equation.
\begin{uess}
Suppose that for the operators ${\cal L}$ and ${\cal M}$
defined by (\ref{e4}) and (\ref{e82}) the following conditions
are satisfied.
1. The operators ${\cal L}$ and ${\cal M}$ act from ${\bf D}_p$ to
${\bf L}_p$ and they are bounded.
2. $R({\cal M}) = {\bf L}_p $, where $R({\cal M})$ is the range
of the operator ${\cal M} : {\bf D}_p \rightarrow {\bf L}_p $.
3. The operator ${\cal L} C_{\cal M} : {\bf L}_p \rightarrow {\bf L}_p$
is invertible.
Then $R({\cal L}) = {\bf L}_p$ and $C$ acts from ${\bf L}_p$ to ${\bf D}_p$
and it is bounded.
\end{uess}
{\sl Proof.}
Consider an initial value problem
$$ {\cal L} x = f, ~x(0)=0, ~x(\tau_j)= B_j x(\tau_j -0), $$
where $f \in {\bf L}_p$ is an arbitrary function.
Then $x = C_{\cal M} ({\cal L} C_{\cal M} )^{-1} f$
is the solution of this problem.
Therefore $x \in {\bf D}_p$, hence $R({\cal L}) = {\bf L}_p$.
Let ${\bf D}_p ^0 = \{ x \in {\bf D}_p : x(0)=0 \} $.
Then by the Banach theorem on an inverse operator
the operator $C: {\bf L}_p \rightarrow {\bf D}_p ^0$
is bounded.
So the operator $C : {\bf L}_p \rightarrow {\bf D}_p $
is also bounded.
Denote
\begin{equation}
\varphi^h (t) = \left\{ \begin{array}{ll}
\varphi [h(t)], & h(t) <0, \\
0, & h(t) \geq 0 , \end{array} \right.
g(t)= \sum_{k=1}^m A_k(t) \varphi^{h_k} (t).
\label{e12}
\end{equation}
\begin{guess}
Suppose the operators ${\cal L}$ and ${\cal M}$
defined by (\ref{e4}) and (\ref{e82})
satisfy the conditions of Lemma 5.
If the function $g$ defined by (\ref{e12}) is in ${\bf L}_p$
then the pair $({\bf L}_p,{\bf D}_p)$ is admissible for the equation
(\ref{e1})-(\ref{e3}).
If there exists $\delta > 0$ such that
$t-h_k(t) < \delta$
and the restriction of $A_k$ to $[0, \delta ]$
belongs to ${\bf L}_p [0, \delta], ~k=1, \dots, m,$
then the pair $({\bf L}_p,{\bf D}_p)$ is admissible on the whole for
the equation (\ref{e1})-(\ref{e3}).
\end{guess}
{\sl Proof.}
Let $f \in {\bf L}_p$ and $C$ be the Cauchy operator of
(\ref{e1})-(\ref{e3}).
By Lemma 1 solution $x$ of (\ref{e1})-(\ref{e3}) can be presented
as
\begin{equation}
x(t) = (Cf)(t) - (Cg)(t) + \sum_{0 \leq \tau_j \leq t} X(t,
\tau_j)
\alpha_j. \label{e13}
\end{equation}
By Lemma 5 $Cf \in {\bf D}_p, ~Cg \in {\bf D}_p$.
Now we will establish $X(\cdot, \tau_j) \in {\bf D}_p, ~j=1,2, \dots
.$
To this end denote
$$ Y_j (t) = X(t, \tau_j) - X_0 (t, \tau_j), $$
where $X_0 (t,s)$ is the fundamental matrix of (\ref{e6}),(\ref{e7})
and $a - I \ln b > 0$.
Let $f_j (t) = - {\cal L} (X_0 (\cdot, \tau_j)) (t)$.
Then $Y_j$ is a solution of the problem
$${\cal L} y = f_j, ~ t \geq \tau_j,~ y(t) \in {\bf R}^{n \times
n}, $$
\begin{equation}
y(\tau_j)=0, ~ y(\tau_i) = B_i y(\tau_i - 0),
{}~~ i= j+1, \dots .
\label{e14}
\end{equation}
By Lemma 1 the solution of (\ref{e14}) can be presented as
$$ Y_j (t) = (Cf_j)(t), $$
hence
\begin{equation}
X(t,\tau_j) = X_0 (t, \tau_j) + (Cf_j)(t).
\label{e15}
\end{equation}
By Lemma 2 $X_0 (\cdot, \tau_j) \in {\bf D}_p$.
Since by the hypothesis of the theorem pair $({\bf D}_p, {\bf L}_p)$
is admissible for the operator ${\cal L}$ then $f_j \in {\bf L}_p$.
Therefore by Lemma 5 $Cf_j \in {\bf D}_p$.
Thus (\ref{e15}) implies $X(\cdot, \tau_j) \in {\bf D}_p$
and (\ref{e13}) gives that a solution of (\ref{e1})-
(\ref{e3}) is in ${\bf D}_p$.
Admissibility of the pair $({\bf L}_p, {\bf D}_p)$ for the equation
(\ref{e1})-(\ref{e3}) is proven.
Suppose $t-h_k (t) < \delta$.
As $g$ is defined by (\ref{e12}), then $g(t)=0$ for $t> \delta$.
Since for $t \in [0, \delta]$ ~$A_k \in {\bf L}_p [0, \delta] $
and $\varphi^{h_k} \in {\bf L}_{\infty} [0, \delta] $,
then $g \in {\bf L}_p [0, \delta] $.
Therefore $g \in {\bf L}_p [0, \infty) $.
Thus according to the above results the pair $({\bf L}_p, {\bf D}_p)$
is admissible on the whole for (\ref{e1})-(\ref{e3}).
The proof of the theorem is complete.
\section{Admissibility and stability}
{}~~~~~This paper deals with exponential stability only.
Other types of stability and their connection with properties of
the fundamental matrix are presented in [14].
\underline{\sl Definition.}
The equation (\ref{e1})-(\ref{e3})
is said to be {\bf exponentially stable} if there exist positive
constants $N$ and $\lambda$ such that for any initial function
$\varphi, f=0$ and
$\alpha_1= \alpha_2 = \dots =0$
for a solution $x$ of (\ref{e1})-(\ref{e3})
the inequality
$$ \parallel x(t) \parallel \leq
N e^{- \lambda t} \left( \sup_{t<0}
\parallel \varphi (t) \parallel + \parallel x(0) \parallel
\right)
$$
holds.
Thus the representation (\ref{e5}) yields the following
assertion (see [14]).
\begin{guess}
Suppose (a1)-(a6) hold and
there exist positive constants $N$ and $\lambda$
such that the fundamental matrix $X(t,s)$ satisfies the inequality
\begin{equation}
\parallel X(t,s) \parallel \leq N e^{- \lambda (t-s)},
{}~t \geq s > 0,
\label{e16}
\end{equation}
and there exists $\delta > 0$ such that
$t-h_k (t) < \delta, ~k=1, \dots ,m$.
Then equation (\ref{e1})-(\ref{e3}) is exponentially stable.
\end{guess}
The following theorem is a main result of this work.
It connects admissibility of the pair $({\bf L}_p, {\bf D}_p)$
with stability of (\ref{e1})-(\ref{e3}).
\begin{guess}
Suppose for (\ref{e1})-(\ref{e3}) the hypotheses (a1)-(a6)
hold, $A_k \in {\bf M}_p , ~1 \leq p < \infty$, there exists
$\delta > 0$ such that $t-h_k (t) < \delta, ~k=1, \dots, m$
and for the initial function $\varphi \equiv 0$
the pair $({\bf L}_p, {\bf D}_p)$ is admissible for this equation.
Then the equation (\ref{e1})-(\ref{e3}) is exponentially stable.
\end{guess}
{\sl Proof.}
By Theorem 3 it is sufficient to prove that the estimate
(\ref{e16}) exists.
In view of Lemma 1 the fundamental matrix $X(t,s)$ as a function of
$t$ for a fixed $s$ is a solution of the problem
$$\dot{x}(t) + \sum_{k=1}^m A_k (t) x[h_k(t)] = 0,
{}~t \geq s,~ x(t) \in {\bf R}^{n \times n},~ x(s)= E_n, $$
\begin{equation}
x(\xi)= 0, ~ \xi < s, ~ x(\tau_j) = B_j x(\tau_j - 0), ~ \tau_j >
s.
\label{e17}
\end{equation}
Denote
\begin{equation}
Y(t,s) = e^{\lambda(t-s)} X(t,s),
\label{e18}
\end{equation}
where $\lambda > 0$ is a certain number.
Thus
$$Y(s,s) = X(s,s) = E_n \mbox{~~and,~besides,~}
Y(\tau_j,s) = B_j Y(\tau_j - 0,s), ~\tau_j >s. $$
Denote
$$ {\cal L}_s x =
\dot{x} (t) + \sum_{k=1}^m A_k (t) x[h_k(t)], t \geq s,
{}~x(t) \in {\bf R}^{n \times n};$$ $$
{}~x(\xi)=0,~\xi<s.$$
By substituting $x(t) = y(t) e^{- \lambda (t-s)}$
we obtain
$$({\cal L}_s x)(t) = e^{- \lambda (t-s)} \dot{y} (t) -
e^{- \lambda (t-s)} \lambda y(t) +
\sum_{k=1}^m e^{- \lambda [h_k (t) -s]} A_k (t) y[h_k(t)] =
$$ $$ =
e^{- \lambda (t-s)} \left\{ \dot{y} (t) +
\sum_{k=1}^m A_k (t) y[h_k(t)] + \right. $$
$$ \left. + \sum_{k=1}^m e^{\lambda [t- h_k (t)]}
A_k (t) y[h_k (t)] - \sum_{k=1}^m A_k (t) y[h_k(t)] - \lambda
y(t) \right\} = $$
$$ = e^{- \lambda (t-s)} \left\{
({\cal L}_s y)(t) - \lambda y(t) + \sum_{k=1}^m
\left[ e^{\lambda (t-h_k(t))} - 1 \right]
A_k (t) y[h_k(t)] \right\}. $$
Denote
$$({\cal T}_s y)(t) = \sum_{k=1}^m \left[
e^{\lambda (t- h_k(t))} - 1 \right]
A_k (t) y[h_k(t)] - \lambda y (t), ~t \geq s, $$
$$ ({\cal M}_s y)(t) = ({\cal L}_s y)(t)
+ ({\cal T}_s y)(t). $$
Then
$$ ({\cal L}_s x)(t) =
e^{- \lambda (t-s)} ({\cal M}_s y)(t) $$
and $Y(t,s)$ is a fundamental matrix of the problem
${\cal M}_s y \! = \! 0, ~y(\tau_j) = B_j y(\tau_j - 0)$.
The corollary of Theorem 1 gives that the operator ${\cal L}_s$
acts from ${\bf D}_p [s, \infty)$ to ${\bf L}_p [s, \infty)$
and it is bounded.
By the hypothesis of the theorem a solution
of ${\cal L}_s x = f$ together with its derivative is in
${\bf L}_p [s, \infty )$ provided $f \in {\bf L}_p [s, \infty )$.
Therefore the Cauchy operator $C_s$ of this equation
acts from ${\bf L}_p [s, \infty)$ to ${\bf D}_p [s, \infty)$.
Denote ${\bf D}_p ^0 [s, \infty) =
\{ x \in {\bf D}_p [s, \infty) \mid x(s)=0 \} $.
By the hypotheses of the theorem the operator
${\cal L}_s : {\bf D}_p ^0 [s, \infty) \rightarrow {\bf L}_p [s, \infty)$
is bounded.
By Lemma 3 the space ${\bf D}_p [s, \infty)$ is Banach,
therefore its closed subspace ${\bf D}_p ^0 [s, \infty)$
is also Banach.
Thus by the Banach theorem on an inverse operator
the operator $C_s : {\bf L}_p [s, \infty) \rightarrow {\bf D}_p ^0
[s, \infty)$ and, consequently, $C_s : {\bf L}_p [s, \infty)
\rightarrow {\bf D}_p [s, \infty)$ is bounded.
By Theorem 1
$H_s^k$ acts from ${\bf D}_p [s, \infty)$ to
${\bf L}_p [s, \infty)$, where $(H_s^k x)(t) = A_k (t) x(h_k (t)), ~x(\xi)=0, ~\xi
<s $.
{}From the assumption $t- h_k (t) < \delta$
we obtain an estimate
$$ \parallel {\cal T}_s \parallel _{{\bf D}_p [s, \infty )
\rightarrow {\bf L}_p [s, \infty)}
\leq \left( e^{\lambda \delta} - 1 \right)
\sum_{k=1}^m \parallel H_s^k \parallel_{{\bf D}_p [s, \infty)
\rightarrow {\bf L}_p [s, \infty )} + \lambda . $$
The operator ${\cal M}_s C_s = E + {\cal T}_s C_s$,
with $E$ being an identity operator,
has a bounded inverse operator in ${\bf L}_p [s, \infty )$ if
\begin{equation}
\parallel {\cal T}_s C_s \parallel_{{\bf L}_p [s, \infty)
\rightarrow {\bf L}_p [s, \infty)} < 1.
\label{e19}
\end{equation}
We prove that (\ref{e19}) holds for $\lambda$ small enough.
Indeed,
$$ \parallel \! {\cal T}_s C_s \! \parallel _{{\bf L}_p \rightarrow {\bf L}_p}
\leq \parallel \! {\cal T}_s \! \parallel_{{\bf D}_p \rightarrow {\bf L}_p}
\parallel \! C_s \! \parallel _{{\bf L}_p \rightarrow {\bf D}_p }
\leq \! \left[ (e^{\lambda \delta} - 1)
\sum_{k=1}^m \!\! \parallel \! H_s^k \! \parallel + \lambda \right]
\! \parallel \! C_s \! \parallel . $$
Therefore (\ref{e19}) holds for $\lambda$ small enough,
and such $\lambda$ is independent of $s$ since
$\parallel H_s^k\parallel~
\leq~\parallel H_0^k \parallel,
\parallel~C_s~\parallel~\leq~\parallel~C~\parallel $.
Operators ${\cal L}_s$ and ${\cal T}_s$ act continuously
from ${\bf D}_p [s, \infty)$ to ${\bf L}_p [s, \infty )$.
Hence the operator ${\cal M}_s = {\cal L}_s + {\cal T}_s $
also possesses this property.
Thus by Lemma 5 the Cauchy operator $C_{\cal M}^s$
of the equation ${\cal M}_s y = f$
continuously acts from ${\bf L}_p [s, \infty)$ to ${\bf D}_p [s, \infty)$.
Similar to (\ref{e15}) we obtain
\begin{equation}
Y(t,s) = X_0 (t,s) + (C_{\cal M}^s f_s )(t).
\label{e20}
\end{equation}
Here $f_s (t) = - {\cal M}_s (X_0 (\cdot , s)) (t), ~a - I \ln b
> 0 $.
Lemma 2 implies $X_0 (\cdot, s) \in {\bf D}_p [s, \infty )$.
Moreover, this lemma gives the uniform estimate
$ \parallel f_s \parallel_{{\bf L}_p [s, \infty)}
\leq K, $
with $K$ not depending on $s$.
Therefore we obtain estimates independent of $s$
$$ \parallel C_{\cal M}^s f_s \parallel_{{\bf D}_p [s, \infty)}
\leq K \parallel C_{\cal M} \parallel, $$
$$ \parallel C_{\cal M}^s f_s \parallel_{{\bf L}_p [s, \infty)}
\leq K \parallel C_{\cal M} \parallel .$$
and
$$ \parallel \frac{d}{dt}
C_{\cal M}^s f_s \parallel _{{\bf L}_p [s, \infty)}
\leq K \parallel C_{\cal M} \parallel. $$
Denote $z_s = C_{\cal M}^s f_s$.
Since $z_s (s)=0$, then
$z_s = C_0^s (\dot{z}_s + a z_s )$.
By Lemma~2 $C_0^s : {\bf L}_p [s, \infty) \rightarrow
{\bf L}_{\infty} [s, \infty)$
is bounded, therefore
$$\parallel C_{\cal M}^s f_s \parallel_{{\bf L}_{\infty} [s,
\infty)} = \parallel z_s \parallel_{{\bf L}_{\infty} [s, \infty)}
\leq $$
$$ \leq \parallel C_0 \parallel_{{\bf L}_p \rightarrow {\bf L}_{\infty}}
\left( \parallel \dot{z}_s \parallel_{{\bf L}_p [s, \infty )}
+ a \parallel z_s \parallel _{{\bf L}_p [s, \infty)} \right) \leq $$
$$ \leq (1+a) K \parallel C_0 \parallel ~\parallel C_{\cal M}
\parallel . $$
Hence the estimate of the norm of $C_{\cal M}^s f_s$
in ${\bf L}_{\infty} [s, \infty) $ does not depend on $s$.
By Lemma 2 and (\ref{e20}) there exists $N>0$
such that $$ \mathop{\rm vrai\,sup}_{t,s > 0}
\parallel Y(t,s) \parallel \leq N < \infty . $$
Thus (\ref{e18}) implies the exponential estimate (\ref{e16})
for the fundamental matrix of (\ref{e1})-(\ref{e3}).
The proof of the theorem is complete.
\section{Explicit stability results}
We apply Theorems 2 and 4 to obtaining explicit conditions of
exponential stability and of existence of integrable solutions.
To this end we prove an auxiliary result.
\begin{uess}
Suppose there exist $\sigma >0$ and
$\rho > 0$ such that $\rho \leq \tau_{j+1} - \tau_j \leq \sigma,
{}~ \parallel B_j \parallel \leq B < 1 $.
Then for the fundamental matrix $X_1$ of the equation
\begin{equation}
\dot{x} (t) = f(t), ~ x(\tau_j) = B_j x(\tau_j - 0)
\label{e21}
\end{equation}
the inequality
\begin{equation}
\parallel X_1 (t,s) \parallel
\leq e^{- \eta(t -s - \sigma )}
\label{e22}
\end{equation}
holds, where $\eta = - \frac{1}{\sigma} \ln B $.
\end{uess}
{\sl Proof.}
Under the hypotheses of the lemma (see [13])
$$\parallel X_1 (t,s) \parallel \leq \left\{
\begin{array}{ll}
e^{- \eta (t-s)} & , t -s > \sigma, \\
1 & , 0 < t - s \leq \sigma . \end{array} \right. $$
This immediately yields (\ref{e22}).
\vspace{5 mm}
Denote
$$ A_k^{\eta} (t) = A_k (t) e^{\eta (t- h_k (t))} . $$
\begin{guess}
Suppose for the equation (\ref{e1})-(\ref{e3})
the hypotheses (a3),(a4) and
(b1) $ f \in {\bf L}_1 , ~A_k^{\eta} \in {\bf M}_1 $;
(b2) $0 < \rho \leq \tau_{j+1} - \tau_j \leq \sigma ;$
(b3) $ \parallel B_j \parallel \leq B < 1 $;
(b4) $g \in {\bf L}_1, $
where $g$ is defined by (\ref{e12});
(b5) $e^{\eta ( \sigma + 1) } \sum_{k=1}^m
\parallel A_k^{\eta} \parallel_{{\bf M}_1} \leq 1 - e^{- \eta}
$,
where $\eta = - \frac{1}{\sigma} \ln B$,
hold.
Then for any solution $x$ of (\ref{e1})-(\ref{e3})
$x \in {\bf L}_1, ~\dot{x} \in {\bf L}_1 $.
\end{guess}
\begin{guess}
Suppose for the equation (\ref{e1})-(\ref{e3})
the hypotheses (a3),(a4) and
(c1) $f \in {\bf L}_1, ~ A_k \in {\bf M}_1 $;
(c2) $0 < \rho \leq \tau_{j+1} - \tau_j \leq \sigma $;
(c3) $\parallel B_j \parallel \leq B < 1 $ ;
(c4) there exists $\delta > 0$ such that $t - h_k(t) < \delta $ ;
(c5) $ e^{\eta (\sigma + \delta + 1)}
\sum_{k=1}^m \parallel A_k \parallel_{{\bf M}_1}
\leq 1-e^{- \eta},$ where $ \eta =
- \frac{1}{\sigma} \ln B$ ,
hold.
Then the equation (\ref{e1}) - (\ref{e3}) is exponentially
stable.
\end{guess}
\underline{\sl Proof of Theorem 5.}
First we note that the hypotheses of the theorem imply (a1)-(a6).
In particular, (b2) implies (a1) and (a6).
By Theorem 1 the hypotheses of the theorem ensure admissibility
of the pair $({\bf D}_1, {\bf L}_1)$ for the operator ${\cal L}$
defined by (\ref{e4}).
The hypotheses of Theorem 2 are satisfied if operator
${\cal L}C_{\cal M}:{\bf L}_1~\rightarrow{\bf L}_1$
is invertible, where
$C_{\cal M}$ is the Cauchy operator of the problem (\ref{e21}).
Evidently
${\cal L}C_{\cal M}=E+T$, where
$$(Tz)(t)= \sum_{k=1}^m A_k (t)
\int_0^{h_k^+(t)} X_1 (h_k(t),s)z(s)ds. $$
Lemma 6 gives that the operator $C_{\cal M}$
acts from ${\bf L}_1$ to ${\bf D}_1$.
Since by the hypothesis of the theorem $A_k^{\eta}\in~{\bf M}_1$,
the equality $T=H C_{\cal M},$
where $H$ is defined by (\ref{e9}), together with Theorem 1
implies that the operator $T$ acts in ${\bf L}_1$.
Let us estimate the norm of the operator $T$:
$$ \parallel Tz \parallel_{{\bf L}_1} \leq
\sum_{k=1}^m \int_0^{\infty}
\parallel A_k(t) \parallel \int_0^{h_k^+(t)}
e^{- \eta (h_k (t) -s- \sigma)}
\parallel z(s) \parallel ds~dt \leq $$
$$ \leq e^{\eta \sigma}
\sum_{k=1}^m \int_0^{\infty} \int_0^t
\parallel A_k (t) \parallel e^{\eta (t-h_k(t))}
e^{- \eta (t-s)}
\parallel z(s) \parallel ds~dt = $$
$$ = e^{\eta \sigma} \sum_{k=1}^m \int_0^{\infty}
\left( \int_s^{\infty} \parallel A_k^{\eta } (t) \parallel
e^{- \eta t} dt \right) e^{\eta s}
\parallel z(s) \parallel ds \leq $$
$$ \leq e^{\eta \sigma}
\sum_{k=1}^m \int_0^{\infty}
\left( \sum_{i=[s]}^{\infty}
\int_i^{i+1} \parallel A_k^{\eta} (t) \parallel e^{- \eta t}
dt \right) e^{\eta s} \parallel z(s) \parallel ds \leq $$
$$ \leq e^{\eta \sigma} \sum_{k=1}^m \parallel A_k^{\eta}
\parallel_{{\bf M}_1} \int_0^{\infty}
\sum_{i=[s]}^{\infty} e^{-\eta i} e^{\eta s}
\parallel z(s) \parallel ds = $$
$$ = e^{\eta \sigma} \sum_{k=1}^m
\parallel A_k^{\eta} \parallel_{{\bf M}_1}
\frac{e^{\eta}}{1-e^{- \eta}} \parallel z \parallel_{{\bf L}_1}. $$
The hypothesis (b5) implies
$\parallel T \parallel_{{\bf L}_1 \rightarrow {\bf L}_1}<1$,
therefore
${\cal L}C_{\cal M}:{\bf L}_1~\rightarrow~{\bf L}_1$
is invertible.
Hence all hypotheses of Theorem 2 hold.
The proof of the theorem is complete.
\vspace{5 mm}
\underline{\sl Proof of Theorem 6.}
The hypothesis (c4) implies $\varphi (h_k(t))=0$
for $t> \delta$ and, besides,
$\parallel A_k^{\eta} \parallel_{{\bf M}_1} \leq
e^{\eta \delta} \parallel A_k \parallel_{{\bf M}_1}$,
so (c5) yields (b5).
Thus (b1),(b4),(b5) and the other hypotheses of Theorem 5 hold.
By Theorem 4 the equation (\ref{e1})-(\ref{e3})
is exponentially stable.
\underline{\sl Example.}
Consider a scalar equation
$$\dot{x}(t)+a(t)x(\lambda t)=f(t),~t \geq 0,~ 0 < \lambda < 1,$$
\begin{equation}
x(j)=bx(j-0),~j=1,2, \dots,~ \mid b \mid <1 .
\end{equation}
Since $h(t)= \lambda t \geq 0$, one may assume $\varphi
\equiv 0$.
Here $\sigma = 1$, so the constant $\eta$ defined in (b5)
is $\eta = - \ln b$.
Therefore by Theorem 5 all solutions of this equation
are in ${\bf L}_1 $ for any $f \in {\bf L}_1,$
i.e. they are integrable on the half-line
if
$$
a^{\eta} (t) = a(t) e^ {[(\lambda - 1) \ln b] t} \in {\bf M}_1
\mbox{~~ and ~~}
\parallel a^{\eta} \parallel _{{\bf M}_1} \leq (1-b)b^2 . $$
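For illustration, let $b = 1/2$; then $\sigma = 1$, $\eta = \ln 2$,
and (b5) takes the form
$$ e^{2 \ln 2} \parallel a^{\eta} \parallel _{{\bf M}_1} =
4 \parallel a^{\eta} \parallel _{{\bf M}_1} \leq 1 - e^{- \ln 2}
= \frac{1}{2}, $$
i.e. $\parallel a^{\eta} \parallel _{{\bf M}_1} \leq 1/8 = (1-b)b^2$.
This condition is satisfied, for instance, by
$a(t) = c\, 2^{-(1- \lambda)t}$ with $0 < c \leq 1/8$,
for which $a^{\eta} (t) \equiv c$.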
\section{Proofs of Lemmas 3 and 4}
{\bf Lemma 3.}
{\it Suppose (a5) and (a6) hold.
Then ${\bf D}_p, ~1 \leq p < \infty $,
is a Banach space. }
{\sl Proof.}
Let $\{x_j \}$ be a fundamental sequence in ${\bf D}_p$, i.e.
$$\lim_{k,i \rightarrow \infty}
\parallel x_k - x_i \parallel _{{\bf D}_p} = 0. $$
First we will prove that $\{ x_j (0)\}$
converges in ${\bf R}^n$.
The convergence $\parallel y_j \parallel_{{\bf D}_p [0, \infty)}
\rightarrow 0$ implies
$\parallel y_j \parallel_{{\bf D}_p [0,t_0]} \rightarrow 0$ for any
$t_0 >0$.
Hence
$\parallel y_j \parallel_{{\bf L}_1 [0,t_0]} \rightarrow 0,
{}~ \parallel \dot{y}_j \parallel_{{\bf L}_1 [0,t_0]} \rightarrow 0$.
Therefore for $t_0 < \tau_1$ and $y_j = x_k -x_i$ we have
$$ \lim_{k,i \rightarrow \infty}
\parallel x_k - x_i \parallel_{{\bf L}_1 [0,t_0]} = 0,
\lim_{k,i \rightarrow \infty}
\parallel \dot{x}_k - \dot{x}_i \parallel_{{\bf L}_1 [0,t_0]} = 0 .
$$
Consider an identity
$$ x_k (t) - x_i (t) = x_k (0) - x_i (0) +
\int_0^t [\dot{x}_k (s) - \dot{x}_i (s)] ds. $$
Since
$$\lim_{k,i \rightarrow \infty}
\parallel x_k - x_i \parallel_{{\bf L}_1 [0,t_0]} = 0 \mbox{ and}$$
$$\lim_{k,i \rightarrow \infty} \int_0^{t_0} \int_0^t
\mid \dot{x}_k (s) - \dot{x}_i (s) \mid ~ ds~dt \leq
t_0 \lim_{k,i \rightarrow \infty}
\parallel \dot{x}_k - \dot{x}_i \parallel_{{\bf L}_1 [0,t_0]} = 0,
$$
then
$$\lim_{k,i \rightarrow \infty} \parallel x_k (0) - x_i (0)
\parallel_{{\bf L}_1 [0,t_0]} = 0. $$
Since $x_k (0) - x_i (0)$ is a constant vector,
$$ \parallel x_k (0) - x_i (0) \parallel_{{\bf L}_1 [0,t_0]} =
t_0 \parallel x_k (0) - x_i (0) \parallel_{{\bf R}^n}, $$
hence
$$ \lim_{k,i \rightarrow \infty}
\parallel x_k (0) - x_i (0) \parallel_{{\bf R}^n} = 0, $$
i.e. the sequence $\{x_j (0)\}$ is fundamental in ${\bf R}^n$.
Therefore there exists $\beta \in {\bf R}^n$ such that
$\lim_{j \rightarrow \infty} x_j (0) = \beta $ .
Let $f_j = {\cal L}_0 x_j$, where operator ${\cal L}_0$
is defined by (\ref{e6}), $\nu = a - I \ln b > 0$.
Then by Lemma 1
\begin{equation}
x_j (t) = X_0 (t,0) x_j (0) + \int_0^t X_0 (t,s) f_j (s) ds.
\label{e80}
\end{equation}
Lemma 2 yields
$$ \parallel X_0 (t,0) \parallel \leq e^{- \nu t} . $$
Since $\dot{X}_0 (t,0) + a X_0 (t,0)= 0$ then
$$ \parallel \dot{X}_0 (t,0) \parallel \leq a e^{- \nu t} .$$
Therefore the sequence $\{X_0 (t,0) x_j (0)\}$ converges in
${\bf D}_p$ to the function $X_0 (t,0) \beta $.
We now check that the operators ${\cal L}_0 : {\bf D}_p
\rightarrow {\bf L}_p$ and $C_0 : {\bf L}_p \rightarrow {\bf D}_p $ are bounded.
Indeed, for any $x \in {\bf D}_p$
$$ \parallel {\cal L}_0 x \parallel_{{\bf L}_p} \leq
\parallel \dot{x} \parallel_{{\bf L}_p} + a \parallel x \parallel_{{\bf L}_p}
\leq (1+a) \parallel x \parallel _{{\bf D}_p}. $$
By Lemma 2 the operator $C_0$ is bounded in ${\bf L}_p$ [1]; hence, denoting $x = C_0 f$, we obtain
$$ \parallel C_0 f \parallel_{{\bf D}_p} = \parallel x
\parallel_{{\bf L}_p} + \parallel \dot{x} \parallel_{{\bf L}_p} \leq
\parallel C_0 \parallel
_{{\bf L}_p \rightarrow {\bf L}_p}
\parallel f \parallel _{{\bf L}_p} +
\parallel f-ax \parallel _{{\bf L}_p} \leq $$
$$ \leq
[ 1+ \parallel C_0 \parallel (1+a)] \parallel f \parallel_{{\bf L}_p}. $$
Since ${\cal L}_0 : {\bf D}_p \rightarrow {\bf L}_p$
is continuous and ${\cal L}_0 x_j = f_j$ then
$\{ f_j \}$ is a fundamental sequence.
Therefore there exists $f \in {\bf L}_p$ such that $\lim_{j \rightarrow
\infty} f_j = f$.
Let $\tilde{x} = C_0 f,~ \tilde{x}_j = C_0 f_j $.
The continuity of the operator $C_0: {\bf L}_p \rightarrow {\bf D}_p$
implies
$ \parallel \tilde{x}_j - \tilde{x} \parallel_{{\bf D}_p} \rightarrow 0. $
{}From here the sequence
$$ x_j (t) = X_0 (t,0) x_j (0) + \tilde{x}_j (t) $$
converges in ${\bf D}_p$ to
$$ x (t) = X_0 (t,0) \beta + \tilde{x} (t). $$
The proof of the lemma is complete.
\vspace{5 mm}
\underline{\sl Lemma 4.}
{\it Let } $a> I \ln b$.
{\it Then the set}
$$\tilde{{\bf D}_p} = \{ x \in {\bf PAC} \mid \dot{x} + ax \in {\bf L}_p ,
{}~x(\tau_j) = B_j x(\tau_j -0) \} $$
{\it coincides with ${\bf D}_p$.
Besides the norm }
\begin{equation}
\parallel x \parallel_{\tilde{{\bf D}_p}} = \parallel x(0) \parallel +
\parallel \dot{x} + ax \parallel_{{\bf L}_p}
\label{e25}
\end{equation}
{\it is equivalent to the norm in } ${\bf D}_p$.
{\sl Proof.}
Let $x \in \tilde{{\bf D}_p}$ and $z = \dot{x} + ax$.
Then $x(t) = X_0 (t,0) x(0) + (C_0 z )(t)$.
By Lemma 2 $z \in {\bf L}_p$
implies $x \in {\bf L}_p$.
Hence $\dot{x}~=z-ax~\in~{\bf L}_p,$ thus $x \in {\bf D}_p$.
Let $x \in {\bf D}_p$.
Then the inequality
$$ \parallel \dot{x} + ax \parallel_{{\bf L}_p}
\leq (1+a) \parallel x \parallel_{{\bf D}_p} $$
implies
$\dot{x} + ax \in {\bf L}_p$.
Hence $x \in \tilde{{\bf D}_p}$.
Thus $\tilde{{\bf D}_p}={\bf D}_p$.
Formula (\ref{e25}) defines a norm in ${\bf D}_p$.
In fact if $\parallel x\parallel_{\tilde{{\bf D}_p}}~=0$
then $\dot{x}+ax=0,~ x(0)=0$.
Then by Lemma 1 on uniqueness of a solution $x=0$.
Let us prove that the space ${\bf D}_p$
endowed with the norm $\parallel \cdot \parallel _{\tilde{{\bf D}_p}}$
is complete.
Suppose $\{ x_j \}$ is a fundamental sequence
by this norm.
Denote $y_j = \dot{x}_j+ax_j$.
Then the convergence
$$\parallel x_k (0) - x_i (0) \parallel +
\parallel y_k - y_i \parallel _{{\bf L}_p} \rightarrow 0
\mbox{~~for~~} k,i \rightarrow \infty$$
implies $\{x_j (0)\}$ is fundamental in ${\bf R}^n$
and $\{y_j \}$ is fundamental in ${\bf L}_p$.
Therefore these sequences converge in the corresponding spaces.
Consider the equality
\begin{equation}
x_j (t) = X_0 (t,0) x_j (0) + (C_0 y_j)(t).
\label{e60}
\end{equation}
We will prove that the operator $C_0: {\bf L}_p \rightarrow
\tilde{{\bf D}_p} $ is bounded.
Let $x=C_0 f$.
Then $x(0)=0$ and
$$ \parallel C_0 f \parallel_{\tilde{{\bf D}_p}} =
\parallel \dot{x} + ax \parallel_{{\bf L}_p} =
\parallel f \parallel_{{\bf L}_p} . $$
Boundedness of $C_0 : {\bf L}_p \rightarrow \tilde{{\bf D}_p}$
and the equality (\ref{e60}) yield the convergence of $\{x_j \}$
in $\tilde{{\bf D}_p}$.
Consequently this space is complete.
Consider sets
$${\bf D}_p^0 = \{ x \in {\bf D}_p \mid x(0)=0 \} , $$
$$U_n = \{ x=X_0 (t,0) \alpha \mid ~ \alpha \in {\bf R}^n \} . $$
The space $U_n$ is $n$-dimensional, isomorphic to ${\bf R}^n$
and $U_n \subset {\bf D}_p$.
Since
$$x(t) = X_0 (t,0)x(0) + \int_0^t X_0 (t,s) [\dot{x} (s) + ax(s)] ds
, $$
then ${\bf D}_p$ is algebraically isomorphic to the direct sum
${\bf D}_p^0 \oplus U_n$.
Since $U_n$ is finite-dimensional then [15]
the subspace ${\bf D}_p^0$ is closed in ${\bf D}_p$ and in $\tilde{{\bf D}_p}$.
First we will prove equivalence of norms
$\parallel \cdot \parallel_{{\bf D}_p}$ and $\parallel
\cdot \parallel_{\tilde{{\bf D}_p}}$ in ${\bf D}_p^0$.
Let $x~\in~{\bf D}_p^0$.
Then
$$\parallel x \parallel_{\tilde{{\bf D}_p}} =
\parallel \dot{x} + ax \parallel_{{\bf L}_p} \leq (1+a)
\parallel x \parallel_{{\bf D}_p}. $$
{}From here and from the fact that ${\bf D}_p^0$ is a Banach space with
respect to both norms we obtain [15] that in ${\bf D}_p^0$ these norms
are equivalent.
Let $P_1$ and $P_2$ be projectors to subspaces ${\bf D}_p^0$
and $U_n$ correspondingly.
${\bf D}_p^0$ is closed, therefore these projectors
are bounded operators in ${\bf D}_p$ and $\tilde{{\bf D}_p}$.
Let $\parallel x_j \parallel_{{\bf D}_p} \rightarrow 0$.
Then the relations
$$x_j=P_1x_j+P_2x_j,
~ \parallel P_i x_j \parallel _{{\bf D}_p} \leq
\parallel P_i \parallel \parallel x_j \parallel _{{\bf D}_p},
~i=1,2, $$
imply $\parallel P_i x_j \parallel_{{\bf D}_p} \rightarrow 0,
~i=1,2$.
As $P_1 x_j \in {\bf D}_p^0,$
and in ${\bf D}_p^0$ the norms
$\parallel \cdot \parallel_{{\bf D}_p}$ and
$\parallel \cdot \parallel_{\tilde{{\bf D}_p}}$
are equivalent, then
$\parallel P_1 x_j \parallel_{\tilde{{\bf D}_p}} \rightarrow 0$.
Besides this $P_2 x_j \in U_n$.
The space $U_n$ is finite-dimensional and all the norms in it
are equivalent.
Thus
$\parallel P_2 x_j \parallel_{\tilde{{\bf D}_p}} \rightarrow 0$.
Consequently,
$$ \parallel x_j \parallel_{\tilde{{\bf D}_p}} \leq
\parallel P_1 x_j \parallel_{\tilde{{\bf D}_p}}
+ \parallel P_2 x_j \parallel_{\tilde{{\bf D}_p}} \rightarrow 0 .$$
Therefore the norms $\parallel \cdot \parallel_{{\bf D}_p}$
and $\parallel \cdot \parallel_{\tilde{{\bf D}_p}}$ are equivalent,
which completes the proof.
\section{Introduction}
Large scale computation has become an important tool in modern day science.
The applications of such calculations span a wide range of fields, from
atmospheric physics to quantum computing. {\sc Grids} offer a new dimension in existing
large scale computing infrastructure. Generally any large scale computation
involves a collection of machines which are aggregated in the form of clusters
and are located in the vicinity of each other. On the other hand, Grids are
collections of several thousands of computers which are geographically separated by
large distances. Any given computing platform may have a heterogeneous architecture
and may be controlled locally by its own policies. Furthermore, there may not exist
any dedicated networking backbone connecting each element of the Grid, thus
making the standard wired connection the most widely used choice. Apart from its
heterogeneous components and widespread locations, the Grid has a powerful application
porting system which takes care of each and every compute job running on the Grid.
The so-called {\it middleware} accepts jobs from the user and assigns them to
different computing nodes, and at the end of the job the same middleware
returns the desired output back to the user. Such a facility enables the user to perform
calculations without much concern about explicitly porting any job.
As said earlier, the Grid consists of a huge set of heterogeneous compute nodes, which
makes it an ideal tool for a large number of jobs. Since the nodes
are geographically far apart, it is most suitable for non-parallel applications.
Thus, it is evident that such a resource is most efficient if the given problem
can be split into several independent ones. With all this in mind, we demonstrate here
how to handle large scale problems in condensed matter physics, which are otherwise
excessively expensive in terms of time and CPU consumption, using the power of Grids.
The two problems discussed here involve the calculation of electronic structure, which is used
frequently in condensed matter physics. The first problem involves the electronic
structure of quantum dots \cite{pujari,pujarici} while the second one deals with the
evolution of atomic clusters \cite{kaware}. Both problems are addressed using the
commonly used density functional theory (DFT). In the following section (Sec \ref{def})
we discuss the general outline of the problems together with the computational
details involved. In section \ref{suitable} we point out why the selected
problems are suitable for the Grids. We present and discuss in brief the results
obtained from our calculations in section \ref{results}. It will be clear that the
problems involve a lot of compute jobs, the handling of which can at times become very
painstaking. We address this issue in section \ref{management}, where we present
simple solutions for the management and implementation of such jobs. Finally we
conclude in Sec \ref{conc}.
\section{Definition of the problems\label{def}}
In the following subsections we describe in detail the nature of both
problems and the computational procedures involved.
\subsection{Quantum dots }
Quantum dots \cite{reed,ashoori} are {\it zero dimensional} islands of electrons.
They are zero dimensional because the electrons inside the dots are under
confinement in all three dimensions. In fact quantum dots are the
manifestation of confinement of electrons by virtue of an external potential. It is
quite similar to an atom, where the electrons are confined by the Coulomb potential
($1/r$), except that in the quantum dot the potential is tunable from outside.
Hence they are sometimes called {\it artificial atoms}.
Applications of quantum dots span a wide range of fields, from electronics to
biochemistry and from quantum computing to medical treatments. Apart from that,
being tunable in their properties, the quantum dots offer a playground for
physicists, both experimentalists and theorists. Experimentally the dots are
manufactured in a variety of ways, like molecular beam epitaxy, electron beam
lithography, or self assembly via electrochemical means. No matter how they are
manufactured, the dots are always prone to some sort of impurities. To address
this issue, we study a model impurity and its effects on the quantum dots.
Theoretically the quantum dots are investigated by various methods like density
functional theory (DFT),\cite{reimann} configuration interaction (CI),
\cite{pujarici} Quantum Monte Carlo (QMC),\cite{ghosal} the coupled cluster method
(CC) \cite{ideh} and others. Among these, DFT is easy to implement and has proven to
be fairly accurate. In the present work we use spin density functional theory
(SDFT) which is later supported by the CI method.
The confining external potential of the dot is modelled as a 2D square well
potential and is given by \begin{equation} V_{ext} (x,y) \; =\; \left\{
\begin{array}{cc} 0 & 0 \le x \le L; 0 \le y \le L \\ V_0 & {\textrm otherwise}
\end{array} \right. \end{equation}
The studied impurity is modeled using a Gaussian potential given as:
\begin{equation}
V_{imp}\; =\; A e^{-B(x^2+y^2)} \label{eq:imp}
\end{equation}
For any given number of electrons the area of the dot is changed, hence
changing the density parameter $r_s$, which is defined as \[r_s \; =\; L
\sqrt{\frac{1}{\pi N}}\:, \] where $L$ is the length of the dot containing $N$
electrons. It is clear from this equation that $r_s$ is lower for higher density
and vice versa; for example, a four-electron dot with $L = 2\sqrt{\pi}\, a_B^*$
has $r_s = 1$. In our calculations the barrier height $V_0$ is set to
1200 meV. The material of the dot is assumed to be GaAs. We also assume
effective mass approximation with an effective mass m$^*$=0.067 $m_e$, where
$m_e$ is the mass of an electron, and dielectric constant $\epsilon$ =12.9. The
units of length and energy are scaled to effective atomic units: effective Bohr
radius $a_B^*$ = 9.8 nm and effective hartree Ha$^*$=2Ry$^*$ =12 meV. In the
SDFT formalism, the Schr\"odinger equation in the Kohn-Sham scheme reads as
\begin{equation} \left(- \frac{\hbar^2}{2m} \nabla^2 + V_{eff}^\sigma({\bf r})
\right)\psi_i^\sigma({\bf r})= \epsilon_i\psi_i^\sigma({\bf r}). \label{hamilt}
\end{equation}
The equation is solved iteratively where, in each iteration,
the potential (or the density) is improved based on the feedback from earlier
iteration(s) till the input and output potentials (or densities) become
identical. The procedure is called self-consistency. We use a real-space
grid technique for the solution of Eq.~(\ref{hamilt}). For the exchange-correlation
energy, we use the local density approximation.\cite{gori,tanater}
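A minimal sketch of such a self-consistency loop is given below. It is an
illustration only: \texttt{build\_veff} and \texttt{solve\_ks} are hypothetical
stand-ins for the routines that construct the effective potential and
diagonalize the Kohn-Sham Hamiltonian on the real-space grid, and the
production code differs in its details.
\begin{verbatim}
import numpy as np

# Schematic SCF loop (illustration only; build_veff and solve_ks are
# hypothetical stand-ins for the actual real-space-grid routines).
def scf(density, build_veff, solve_ks, mix=0.3, tol=1e-6, max_iter=500):
    for _ in range(max_iter):
        veff = build_veff(density)              # V_ext + V_imp + Hartree + xc
        new_density, energies = solve_ks(veff)  # diagonalize KS Hamiltonian
        if np.max(np.abs(new_density - density)) < tol:
            return new_density, energies        # self-consistency reached
        density = (1 - mix) * density + mix * new_density  # linear mixing
    raise RuntimeError("SCF did not converge")
\end{verbatim}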
To summarise, the goal of this work is to understand the effects of the impurity on
the quantum dots. To gain a better understanding, dots with up to twenty electrons
are considered, with several sizes of the dot. According to DFT, there
exists a unique charge density for the given effective potential of the system
and vice versa; however, {\it a priori} we know neither the effective potential
nor the density. Hence we have to guess one of them. Our technique
initiates the self consistency with one of {\it several hundred} educated
guesses of the charge density in search of energy minima, which assures the
detection of the actual ground state of the system. As will be discussed in
subsequent sections, this problem involves running a large number of {\it jobs} to
obtain the accurate results.
\subsection{Atomic Clusters}
The quest for equilibrium geometries of atomic clusters of gallium - a work done
by Kaware {\it et al} \cite{kaware} - is another example illustrating the
efficient use of the Grid for condensed matter physics.\footnote{The work is carried
out in our lab and the author is grateful to his colleagues for providing the data
prior to publication.} Similar work on a larger scale for sodium clusters has been
carried out by Ghazi {\it et al} \cite{ghazi}, who partially used Grids for their
work.
Atomic clusters are aggregates of atoms. Understandably they are the
building blocks for several nano materials. They are stable, bound and are
artificially created (that is one of the reasons they are different from a
molecule). The main questions of interest are: If $N$ atoms come
together, what kind of shapes will they form? How will that be different from
their {\it bulk} counterpart? What is the stability of such an aggregate? Are they
reactive? What is the magnetic nature? And how is a nanostructure built up
starting from a single atom? And so on. Despite the large number of studies
\cite{baletto}, a clear evolutionary pattern over a wide range of sizes has not
been developed. There is no clear answer to the apparently simple question: how does
a cluster grow {\it atom-by-atom}? To address these and many other questions
Kaware {\it et al} \cite{kaware} simulated a series of clusters containing 13 to
55 gallium (Ga) atoms. They exhaustively studied the growth of these clusters, and
the study revealed a peculiar {\it order-disorder-order} pattern. Their extensive Density Functional calculations involve a
search for not only $\sim$ 40 ground state structures but also $\sim$ 5000
structures of isomers! The sheer extent of the problem demands a computationally
large scale infrastructure, which is made available in the form of Grids.
As stated earlier, the calculations are performed within the Density Functional
framework under the Generalized Gradient Approximation (GGA) \cite{gga}. The aim
of the simulation is to find several equilibrium geometries (where the
forces on each atom are zero) and the lowest energy structure among those,
which is called the ground state geometry. Mathematically the energy $E$ of
a cluster is a function of the potential $V(\vec r)$ which results from the
complicated interactions among the atoms.
\[
E \; =\; \sum_{i < j} V_{ij}(\vec r_{ij}),
\]
where $i$ and $j$ are the indices associated with atoms. This
gives rise to a typical energy landscape shown in figure \ref{landscape}. Each
minimum on the landscape represents an isomer, while the lowest of all minima -
called the global minimum - represents the ground state structure.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{landscape.eps}
\end{center}
\caption { A typical energy landscape depicting the positions of several
local minima, which indicate the isomers, and the global minimum, which refers to the
ground state geometry. \label{landscape}}
\end{figure}
The procedure of finding the isomers is known as {\it simulated annealing},
which involves non-linear optimization. Simulated annealing is the
theoretical analogue of the experimental technique where the system
(cluster) is heated to a high temperature and then cooled down to obtain an
equilibrium geometry. If the system is slowly cooled then it is most likely to
reach its ground state geometry. On the other hand, if it is {\it quenched} it
reaches one of its equilibrium geometries, called an {\it isomer}.
Computationally, in simulated annealing, the cluster is heated (by providing
appropriate kinetic energy to the atoms) to a very high temperature and then
quenched. This results in an equilibrium geometry. However, to find the
ground state, several hundreds of structures are required to be quenched. In
other words, the problem is that of (non-linear) optimization, {\it i.e.}, finding several
hundreds of equilibrium geometries for $n$ interacting atoms. We need to do
this for $n$ ranging from, say, 10 to 50. Thus the total number of independent
executions, {\it i.e.}, the total number of minimizations to be carried out, could
easily run into a few thousands, underlining the suitability of the Grid, as
we shall see in the following section.
\section{Suitability of the grids \label{suitable}}
In this section we illustrate the suitability of the Grids for the given problems. It is
clear from the discussion of the earlier sections that both problems involve a large number
of jobs. Here we quantitatively demonstrate that the number of jobs involved is too large
to run all the jobs on standard local compute machines. Let us first consider the case of
quantum dots. As stated earlier, the number of electrons in a given dot ranges from 2 to 20.
To bring out the effect of $r_s$, the width of the dot is also varied in five steps. As the
calculation is spin polarized, for any given number of total electrons the number of
constituent up and down electrons is also varied. More importantly, for each system ({\it
i.e.} fixed number of up and down electrons, fixed width (and $r_s$) of the dot) it is
necessary to conduct several DFT runs (typically 100) with varying initial `guesses' for
the charge density. Take the example of a ten-electron quantum dot. There are five spin states
possible (from all ten electrons up to five up - five down). For five different widths of
the dot and 100 initial guesses there are about 2500 calculations to be performed! Further,
a similar set of calculations is to be done by adding the {\it impurity} potential,
resulting in about $\sim$ 5000 jobs. Thus the full problem of up to twenty-electron quantum dots with
impurity involves tens of thousands of runs to be carried out in order to get
results of the desired accuracy. Although none of the jobs is CPU or memory intensive, it is
the sheer {\it number} of jobs which makes it difficult to perform the calculation on a
simple compute system.
Similarly, an enormous amount of calculation is involved in the second problem. A typical
calculation involves the search for the ground state of a series of at
least 10 clusters. Each cluster needs several hundred initial geometries to be quenched.
The calculation also involves repetition of the runs for charged clusters (typically 2
charged states). Thus, if we take 400 initial geometries, then the total number of runs
for a series containing 40 clusters becomes $40 \times 400 \times 2 = 32000$.
At this point we summarize the nature of the problems:
\begin{itemize}
\item Both problems involve a large number of runs
\begin{itemize}
\item Hundreds of initial guesses are required for the quantum dots
\item Hundreds to thousands of geometries are to be quenched for the clusters
\end{itemize}
\item Each run is independent of the others.
\item None of the calculations requires any specialized hardware
\item and none has any specific need for parallelism
\end{itemize}
Thus, as can be understood, the peculiarities associated with the problems make
them extremely well suited to be implemented on Grids.
\section{Results and discussion\label{results}}
In this section we briefly demonstrate the results obtained for both
problems. Detailed results are out of the scope of the current paper and we
strongly encourage our readers to refer to our work published
elsewhere. \cite{pujari,pujarici,kaware, ghazi} Below we divide the results into
two subsections as per the problems discussed.
\subsection{Quantum dots}
We use density functional theory to investigate the quantum dots. One of
the major successes of DFT in quantum dots is to pick up a highly
correlated feature like {\it Wigner localization}. \cite{wigner} In such
confined electron systems, at low densities the confinement strength
weakens and the Coulomb interaction dominates over the kinetic energy. As
the kinetic energy reduces, the electrons get localized to their
positions. Our calculations successfully pick up an incipient Wigner
localization, which is shown in figure \ref{wigner}. The figure shows the
total charge densities of four-electron quantum dots for two different
density regimes. The high density regime (small width of the dot) is
shown in figure \ref{wigner}(a) while the low density regime is depicted in
(b). The emergence of four peaks at the four corners is the typical
characteristic of incipient Wigner localization. \cite{akbar}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{n4_rs1.eps}
\includegraphics[width=6cm]{wigner.eps}\\
(a) \hskip 6 cm (b)
\end{center}
\caption{Typical electronic charge densities showing the feature of incipient Wigner
localization for a four-electron quantum dot. (a) The charge density in the low
$r_s$ (high density) regime and (b) that in the low density (high $r_s$) regime. The emergence
of four peaks in (b) is indicative of incipient Wigner localization.\label{wigner}}
\end{figure}
It is of equal interest to analyze the effect of the impurity on the
charge densities seen above. Figure \ref{impurity} shows the evolution
of the charge density of the same quantum dot in the presence of the impurity.
The impurity, being attractive in nature, produces a peak in the charge
density. It should be pointed out that as the size of the dot is
increased the charge in the dot spreads over a larger area while the
charge inside the impurity remains confined within the same region, giving
rise to the relatively large peak seen in figure \ref{impurity}(b).
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{n4_att_rs1_5.eps}
\includegraphics[width=6cm]{n4_att_rs8.eps}\\
(a) \hskip 6 cm (b)
\end{center}
\caption{Evolution of the charge density of the dot as a function of
dot size in the presence of an attractive impurity. (a) The impurity, being attractive
in nature, gives rise to the peak seen at the center. (b) In the low density
regime, the available area for the electrons being sufficiently large, the electrons
spread away and only the electron trapped inside the impurity gives a relatively
large peak. Four small peaks also develop at the four corners.\label{impurity}}
\end{figure}
The impurity is tuned in such a way that it traps an electron inside it, thus
giving rise to a localized magnetic moment. In many quantum dots this
localization is associated with a peculiar anti-ferromagnetic-like coupling, with a
firm unit magnetic moment at the center and four peaks at the corners for
opposite spins. Our DFT analysis indicates that the presence of the impurity may
change the ground state of the quantum dot from magnetic to nonmagnetic and vice
versa. We also observe oscillations in the charge density along the walls
of the dot as a function of the number of electrons.
\subsection{Atomic clusters}
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{ga13.eps} \hskip 1cm
\includegraphics[width=5cm]{ga24.eps}\\
\centerline { (a) \hskip 5cm (b) }
\includegraphics[width=5cm]{ga36.eps} \hskip 1cm
\includegraphics[width=5cm]{ga47.eps}\\
\centerline { (c) \hskip 5cm (d) }
\end{center}
\caption{Evolution of the gallium cluster as we go on increasing the constituent
atoms. The geometries are for (a) Ga$_{13}$, (b) Ga$_{24}$, (c) Ga$_{36}$ and (d)
Ga$_{47}$. \label{na:geom}}
\end{figure}
The main objective here is to obtain several equilibrium geometries of gallium
clusters in the size range of $n=$ 13-55 \cite{kaware}.
The authors examined the evolutionary trends as
the clusters grow. Figure \ref{na:geom} shows a few representative equilibrium
geometries obtained, which highlight the evolution of the shapes of the
clusters with growth in their size.
As can be seen from the figure, the geometries represent several ordered and
disordered structures. It was seen that the addition of a few atoms can drastically
change the order of the system. Similar observations on a larger scale were also
reported by Ghazi {\it et al} \cite{ghazi}. Gallium clusters show the tendency of
forming planar (or slab-like) structures. Further, it was seen that most of the
bonds in the cluster are of sp$^2$ type, unlike in aluminium clusters, which
implies that the gallium clusters do not fit into the simple {\it jellium}-like model.
To examine the stability of the clusters it is instructive to analyze the binding
energy per atom. The binding energy per atom is the amount of energy required to
remove an atom completely from the cluster; thus, the higher the binding energy,
the stronger the cluster. Figure \ref{be} shows the binding energy per atom for
clusters ranging from 13 to 48 atoms. It is clear from the figure that clusters with an
increasing number of atoms are more stable. The binding energy per atom tends to saturate as
the number of atoms increases.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{beperatom.eps}
\end{center}
\caption{Binding energy per atom for gallium clusters ranging from $n=13$ to
48. Increasing binding energy per atom indicates that the larger clusters are more stable
than the smaller ones.\label{be}}
\end{figure}
Based on the conclusions of both works \cite{kaware, ghazi}, it is clear that the growth shows an order-disorder-order pattern. In fact
we found that even in the disordered clusters there are hidden interlinked
ordered structures. The authors observed that between two ordered structures the
growth proceeds via disordered clusters having multicentered icosahedral local
order. The transition from disordered to ordered structure is rather sharp and
occurs merely on changing the number of atoms by two or three. It was also found that the geometries strongly influence the melting temperature of the given cluster.
\subsection{Management of the jobs\label{management}}
A typical problem faced when handling such large scale problems is the
implementation and management of the jobs involved. Altogether we have
several thousands of jobs in the two problems, and it is extremely desirable
to have a tool which can assist in handling such an enormous number of jobs. Understandably,
submitting, monitoring and retrieving each job manually is a tedious and
time consuming procedure, and any web-based application may turn out to be
inefficient. We sought a simple solution in the form of {\it shell scripts}. It
turned out that the scripts are an easy to use, highly customizable and equally
efficient tool for implementing and managing the jobs.
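To give an idea of the approach, a simplified sweep driver of this kind is
sketched below. It is recast in Python for readability; the executable name,
the job-description attributes and the submission command are illustrative
placeholders (the actual client depends on the middleware installation), and
the loop over spin states is omitted for brevity.
\begin{verbatim}
import itertools, pathlib, subprocess

# Illustrative driver: one independent Grid job per parameter combination.
# "dft_dot" and the submission command are placeholders.
widths = [1.0, 1.5, 4.0, 6.0, 8.0]          # dot sizes (set r_s)
for n, L, guess in itertools.product(range(2, 21), widths, range(100)):
    tag = "N%d_L%.1f_g%d" % (n, L, guess)
    pathlib.Path(tag + ".jdl").write_text(
        'Executable = "dft_dot";\n'
        'Arguments  = "--n %d --L %.2f --seed %d";\n'
        'StdOutput  = "%s.out";\n' % (n, L, guess, tag))
    subprocess.run(["glite-wms-job-submit", "-a", tag + ".jdl"])
\end{verbatim}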
\section{Summary \label{conc}}
To summarize, we have successfully employed the Grids for large
scale problems in condensed matter physics. We have demonstrated that the
commonly used Density Functional Theory based calculations can be performed on
Grids. The problems involve a large number of independent jobs to
be carried out, for which the Grid turned out to be most useful. For the management
of the jobs we mainly relied on standard shell scripts instead of any web-based
porting tool.
\section*{Acknowledgments}
The author would like to thank D. G. Kanhere for valuable discussions, Vaibhav Kaware,
Seyed Mohammad Ghazi, Kavita Joshi, Manisha Manerikar and Shahab Zorriasatein for
their contributions and Dr. Stefano Cozzini and Neeta Kshemkalyani for technical
assistance. It is a pleasure to acknowledge EU-India Grid project (Grant No.
RI-031834) for partial financial support as well as Garuda India Grid and C-DAC
for computing resources.
\bibliographystyle{unsrt}
\section{Introduction}
Nuclear deformation is one of the typical
collective motions in nuclear systems.
It is known that ground states of nuclei often have static deformations
in the intrinsic states, which are regarded as spontaneous symmetry breaking of the rotational invariance
due to collective correlations. Needless to say, the broken symmetry in the intrinsic states
is restored in nuclear energy levels, because total angular momenta are good quanta
in energy eigenstates of the finite system.
Not only normal deformations of axial symmetric quadrupole deformations
but also triaxial, octupole, and super deformations have been attracting interests
in these decades. To investigate
those deformation phenomena mean-field approaches have been applied, in particular, for heavy mass nuclei.
In light nuclear systems, further exotic shapes due to cluster structures have been suggested.
For instance, a triangle shape in $^{12}$C and a tetrahedral one in $^{16}$O have been discussed
based on the cluster picture that $^{12}$C and $^{16}$O are considered to be
$3\alpha$ and $4\alpha$ systems. In old days, to understand spectra of $^{12}$C and $^{16}$O
non-microscopic $\alpha$-cluster models have been applied~\cite{wheeler37,dennison54}.
From vibrations of the triangle structure of
thee $\alpha$ particles and the tetrahedral one of four $\alpha$s,
Wheeler has suggested the low-lying $J=3$ states in $^{12}$C and $^{16}$O~\cite{wheeler37},
which are now considered to correspond to
the lowest negative-parity states $^{12}$C($3^-_1$, 9.64 MeV) and $^{16}$O($3^-_1$, 6.13 MeV) established experimentally. In 1970's, cluster structures of the ground and excited states in $^{12}$C and $^{16}$O
have been investigated by using microscopic and semi-microscopic
cluster models~\cite{brink70,ikeda72,OCM,smirnov74,GCM,RGM,suzuki76,fujiwara80,bauhoff84},
a molecular orbital model~\cite{abe71}, and also
a hybrid model of shell model and cluster model~\cite{takigawa71}.
For $^{12}$C, the ground state is considered to have the triangle deformation because of
the $3\alpha$-cluster structure. In addition, a further prominent triangle $3\alpha$ structure has been
suggested in $^{12}$C($3^-_1$, 9.64 MeV).
Although the cluster structure of the ground state, $^{12}$C($0^+_1$), may not be so prominent as that of the
$^{12}$C($3^-_1$), the $J^\pi=0^+_1$ and $3^-_1$ states are often described by the rotation of
the equilateral triangle $3\alpha$ configuration having the $D_{3\text{h}}$ symmetry.
In contrast to the geometric configuration suggested in $^{12}$C($0^+_1$) and $^{12}$C($3^-_1$),
a developed $3\alpha$-cluster structure with no geometric configuration has been suggested
in the $0^+_2$ state assigned to $^{12}$C($0^+_2$, 7.66 MeV) by
(semi-)microscopic three-body calculations of $\alpha$ clusters~\cite{OCM,GCM,RGM,fujiwara80}.
In the $0^+_2$ state, three $\alpha$ particles are weakly interacting like a gas, for which
the normal concept of nuclear deformation may be no longer valid.
For the $0^+_2$ state, Tohsaki {\it et al.} have proposed a new interpretation of
a dilute cluster gas state, where $\alpha$ particles behave as bosonic particles and
condense in the $S$ orbit~\cite{Tohsaki01}.
This state is now attracting a great interest in relation with the
Bose Einstein condensation (BEC) in nuclear matter~\cite{Ropke98}.
Let us consider the cluster phenomena in $^{12}$C from the viewpoint of symmetry breaking.
If the symmetry of the rotational invariance is not broken, a nucleus has a spherical shape.
In the intrinsic state of $^{12}$C($0^+_1$), the spherical shape changes to the triangle shape
via the oblate shape because of the $\alpha$-cluster correlation.
It is the symmetry breaking from the rotational symmetry to the axial symmetry, and to the $D_{3\text{h}}$ symmetry.
In the group theory,
it corresponds to
${\rm O(3)} \rightarrow {\rm O(2)}\times Z_2 \rightarrow D_{3\text{h}}$, where
the symmetry breaking from the continuous (rotational) group to the discrete (point) group occurs.
In the excited state, $^{12}$C($0^+_2$), the system again may have the continuous group O(3) symmetry.
It indicates that the cluster correlations in $^{12}$C($0^+_1$) and $^{12}$C($0^+_2$) may have
different features in terms of symmetry breaking. The triangle shape with the
$D_{3\text{h}}$ symmetry in $^{12}$C($0^+_1$) is characterized by the geometric configuration,
while the $^{12}$C($0^+_2$) has no geometric configuration.
Now a question arises: what is the mechanism of
the symmetry breaking of the continuous group in $^{12}$C($0^+_1$), which is restored
again in $^{12}$C($0^+_2$). One of the key problems is the geometric configuration
because of $\alpha$ correlations in the ground state.
The triangle state has oscillating surface density along the oblate edge.
It can be understood by
the density wave (DW)-like correlation
caused by the $1p$-$1h$ correlation carrying a finite momentum
in analogy to the DW in infinite matter with inhomogeneous periodic densities, which has been
an attractive subject in various field such as nuclear and hadron physics~\cite{overhauser60,brink73,llano79,ui81,tamagaki76,takatsuka78,migdal78,Dautry:1979bk,Deryagin:1992rw,Shuster:1999tn,Park:1999bz,Alford:2000ze,Nakano:2004cd,Giannakis:2004pf,Fukushima:2006su,Nickel:2009ke,Kojo:2009ha,Carignano:2010ac,Fukushima:2010bq} as well as
condensed matter physics~\cite{CDW,SDW}.
Indeed, in our previous work, we interpreted the triangle shape as
the edge density wave on the oblate state by extending the DW concept to
surface density oscillation of finite systems~\cite{KanadaEn'yo:2011qf}.
Then the structures of $^{12}$C($0^+_1$) and $^{12}$C($0^+_2$)
may be associated with the DW and the BEC phases in infinite matter, respectively.
The mechanism of the geometric triangle shape in the finite system
may give a clue to understand an origin of DW in infinite matter.
Similar to $^{12}$C, a geometric configuration with a tetrahedral shape in $^{16}$O has been
discussed in theoretical studies with cluster model calculations~\cite{wheeler37,brink70,bauhoff84} and
also with Hartree-Fock calculations~\cite{eichler70,onishi71,takami95}.
The excited state, $^{16}$O($3^-_1$, 6.13 MeV), is understood by the tetrahedron
vibration or the rotation of the tetrahedral deformation with the $T_d$ symmetry, while
the static tetrahedral shape in the ground state has not been confirmed yet.
The tetrahedron shape in $^{16}$O($0^+_1$) and $^{16}$O($3^-_1$)
is supported in analysis of experimental data such as $E3$ transition strengths
for $3^-_1 \rightarrow 0^+_1$~\cite{Robson:1979zz}
and $\alpha$-transfer cross sections on $^{12}$C~\cite{elliott85}.
The tetrahedral shape tends to be favored in cluster-model calculations~\cite{brink70,bauhoff84}.
However,
Hartree-Fock calculations with tetrahedral deformed mean-field potentials
usually suggest the spherical $p$-shell closed state as the lowest solution for
$^{16}$O except for the calculations using effective interactions with specially strong
exchange forces~\cite{eichler70,onishi71,takami95}.
If the ground state of $^{16}$O has the tetrahedral shape,
it may suggest the breaking of the O(3) symmetry into the $T_d$ symmetry.
In the excited $0^+$ states of $^{16}$O,
$^{12}$C+$\alpha$ cluster structure was suggested in the $0^+_2$ state at 6.05 MeV
~\cite{ikeda72,suzuki76,fujiwara80,sheline60,horiuchi68}. Moreover, in analogy to the 3$\alpha$-cluster gas state of $^{12}$C($0^+_2$),
a 4$\alpha$-cluster gas state with the $\alpha$ condensation feature has been suggested recently
in a $0^+$ state above the 4$\alpha$ threshold~\cite{Funaki:2008gb,Funaki:2010px}.
Similarly to $^{12}$C, the possible tetrahedral shape in $^{16}$O may lead to
symmetry breaking of the continuous group in $^{16}$O($0^+_1$), which is restored
in higher $0^+$ states. Again, the geometric configuration
due to $\alpha$ correlations in the ground state should be
one of the key problems.
Our aim is to clarify the $\alpha$-cluster correlations with geometric configurations
in the ground states of $^{12}$C and $^{16}$O, and understand the mechanism of the symmetry
breaking from continuous (rotational) groups into discrete (point) groups.
We first confirm the problem whether the triangle and tetrahedron shapes
are favored in the intrinsic states of $^{12}$C and $^{16}$O.
For this aim, we use a method of antisymmetrized molecular dynamics (AMD)~\cite{ENYOabc,ENYOsupp,AMDrev} and
perform microscopic many-body calculations
without assuming existence of any clusters nor geometric configurations.
Variation after the spin-parity projections (VAP) is performed in the AMD framework~\cite{ENYO-c12}.
The AMD+VAP method has been proved to be useful to describe
structures of light nuclei. With this method, one of the authors, Y. K-E., has
succeeded to reproduce various properties of ground and excited states
of $^{12}$C~\cite{ENYO-c12,KanadaEn'yo:2006ze}, and confirmed the formation and development of three
$\alpha$ clusters in $^{12}$C in the microscopic calculations with no cluster assumptions
for the first time. The result was supported also by the work using the method of
Fermionic molecular dynamics~\cite{Chernykh07}, which
shows a similar method to the AMD.
In this paper, we apply the AMD+VAP method to $^{16}$O as well as $^{12}$C and analyze
the intrinsic shapes of the ground states.
We show that
the geometric configurations having the approximate $D_{3\text{h}}$ and $T_d$ symmetry arise in the ground states
of $^{12}$C and $^{16}$O, respectively.
To discuss the appearance of the geometric configurations, we perform an analysis using simple
cluster wave functions of the Brink-Bloch (BB) $\alpha$-cluster model~\cite{brink66},
while focusing on the Pauli blocking effect on the rotational motion of an $\alpha$ cluster.
Important roles of the Pauli blocking effect
in the appearance of geometric configurations are described.
We also introduce a schematic model by considering clusters on a Fermi gas core
in a one-dimensional (1D) finite box,
which can be linked with clusters at the surface in a $3\alpha$ system.
By analyzing the 1D-cluster wave function, in particular, looking at
Pauli blocking effects from the core and those between
clusters, we try to conjecture what conditions favor BCS-like, DW-like, and BEC-like correlations.
This paper is organized as follows. In the next section, intrinsic shapes and cluster formation
in the ground states of
$^{12}$C and $^{16}$O are investigated based on the AMD+VAP calculation. In Sec.~\ref{sec:BB-cluster}
a Pauli blocking effect in $\alpha$-cluster systems and its role in $\alpha$-cluster correlations
is described by analysis of BB $\alpha$-cluster wave functions.
In Sec.~\ref{sec:1D-cluster}, the schematic model of clusters on the Fermi gas core
in the 1D finite box is introduced and the roles of
Pauli blocking effects in $\alpha$-cluster correlations are discussed.
Summary and outlook are given in Sec.~\ref{sec:summary}.
The relations between $3\alpha$- and $4\alpha$-cluster wave functions
and triangle and tetrahedral deformed mean-field wave functions are explained in appendix
\ref{app:BB-MF}. In appendix \ref{sec:weak-coupling1} and \ref{sec:weak-coupling2},
features of weak-coupling wave functions in the 1D-cluster model are
described.
\section{Shapes and correlations in the ground states of $^{12}$C and $^{16}$O} \label{sec:AMD+VAP}
We discuss here intrinsic deformations of the ground states of $^{12}$C and $^{16}$O
based on the AMD+VAP calculation.
The AMD method has been applied to various light nuclei and
has been successful in describing
cluster structures as well as shell-model-like structures.
In the present work, the AMD+VAP method, i.e., variation after spin and parity projections in the
AMD framework, is applied
to $^{12}$C and $^{16}$O.
For the details of the framework, the reader is referred to, for instance, Refs.~\cite{AMDrev,ENYO-c12}.
\subsection{Variation after projection with AMD wave function}
In the AMD framework, we set a model space of wave functions and perform
the energy variation to obtain the optimum solution in the model space.
An AMD wave function is given by a Slater determinant of Gaussian wave packets,
\begin{equation}
\Phi_{\rm AMD}({\bf Z}) = \frac{1}{\sqrt{A!}} {\cal{A}} \{
\varphi_1,\varphi_2,...,\varphi_A \},
\end{equation}
where the $i$th single-particle wave function is written as a product of
spatial ($\phi$), intrinsic-spin ($\chi^\sigma$), and isospin ($\chi^\tau$) wave functions as
\begin{align}
\varphi_i&= \phi_{{\bf X}_i} \chi^\sigma_i \chi^\tau_i,\\
\phi_{{\bf X}_i}({\bf r}_j) & = \left(\frac{2\nu}{\pi}\right)^{3/4}
\exp\bigl\{-\nu({\bf r}_j-\frac{{\bf X}_i}{\sqrt{\nu}})^2\bigr\},
\label{eq:spatial}\\
\chi^\sigma_i &= (\frac{1}{2}+\xi_i)\chi_{\uparrow}
+ (\frac{1}{2}-\xi_i)\chi_{\downarrow}.
\end{align}
$\phi_{{\bf X}_i}$ and $\chi^\sigma_i$ are spatial and spin functions, and
$\chi^\tau_i$ is the isospin
function fixed to be up (proton) or down (neutron).
Accordingly, an AMD wave function
is expressed by a set of variational parameters, ${\bf Z}\equiv
\{{\bf X}_1,{\bf X}_2,\cdots, {\bf X}_A,\xi_1,\xi_2,\cdots,\xi_A \}$.
The width parameter $\nu$ is related to the size parameter $b$
as $\nu=1/(2b^2)$ and is chosen to be $\nu=0.19$ fm$^{-2}$, which minimizes the energies of
$^{12}$C and $^{16}$O.
The center positions ${\bf X}_1,{\bf X}_2,\cdots, {\bf X}_A$ of single-nucleon
wave packets are independently
treated as variational parameters. Thus the existence of clusters is not {\it a priori} assumed in the AMD framework.
Nevertheless, the model wave function can describe shell-model structures and
cluster structures because of the antisymmetrizer and the flexibility of the
spatial configurations of Gaussian centers.
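A practical advantage of this parametrization is that all matrix elements can be evaluated analytically. For reference (this explicit expression, standard for Gaussian wave packets, is supplied here for the reader's convenience), the overlap of two single-particle wave packets reads
\begin{equation}
\langle \phi_{{\bf X}_i} | \phi_{{\bf X}_j} \rangle
= \exp\Bigl\{-\frac{1}{2}\bigl({\bf X}^*_i\cdot{\bf X}^*_i+{\bf X}_j\cdot{\bf X}_j\bigr)+{\bf X}^*_i\cdot{\bf X}_j\Bigr\},
\end{equation}
so that the norm and Hamiltonian kernels of the Slater determinant reduce to determinants and cofactors of the $A\times A$ matrix of such overlaps.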
If a cluster structure is favored in a system, the corresponding cluster structure is
automatically obtained in the energy variation.
For even-even nuclei, the ground states are known to be $J^\pi=0^+$ states, i.e., they
are invariant under rotation.
Intrinsic deformation is understood as
spontaneous breaking of the rotational invariance, which is restored in
the $J^\pi=0^+$ ground states. It means that, when an intrinsic state has a deformation,
the ground state is constructed by the spin and parity projections from the intrinsic state.
More generally, spin and parity are good quantum numbers of the energy eigenstates of nuclei
because of the invariance of the Hamiltonian under rotation and parity transformation.
Therefore, to express a $J^\pi$ state, an AMD wave function is projected onto the spin-parity eigenstate,
\begin{equation}
\Phi({\bf Z})=P^{J\pi}_{MK}\Phi_{\rm AMD}({\bf Z}),
\end{equation}
where $P^{J\pi}_{MK}$ is the spin-parity projection operator.
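For concreteness (phase conventions may differ among references), the projection operator can be written in the standard integral form
\begin{equation}
P^{J\pi}_{MK}=\frac{2J+1}{8\pi^2}\int d\Omega\, D^{J*}_{MK}(\Omega)\,\hat{R}(\Omega)\,\frac{1+\pi \hat{P}_r}{2},
\end{equation}
where $\Omega$ denotes the Euler angles, $D^{J}_{MK}$ is the Wigner $D$-function, $\hat{R}(\Omega)$ is the rotation operator, and $\hat{P}_r$ is the parity operator; in practical calculations the angular integral is evaluated numerically.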
To obtain the wave function for a $J^\pi$ state, the energy variation is performed
for the spin-parity projected AMD wave function $\Phi({\bf Z})$ with respect to
variational parameters $\{{\bf Z}\}$. This method is called variation after projection
(VAP). The AMD+VAP method has been applied to various light nuclei for structure study of
ground and excited states.
For the ground states of $^{12}$C and $^{16}$O,
we perform the variation of the energy expectation value
$\langle \Phi({\bf Z})|H|\Phi({\bf Z}) \rangle/\langle \Phi({\bf Z})|\Phi({\bf Z}) \rangle$
for the $J^\pi=0^+$ projected wave function and
get the optimum parameter set $\{{\bf Z}\}$ that minimizes the
energy. Then, the AMD wave function $\Phi_{\rm AMD}({\bf Z})$
given by the optimized $\{{\bf Z}\}$ is regarded as the intrinsic wave function of the ground state.
An AMD wave function is expressed by a single Slater determinant; however,
the spin-parity projected wave function is no longer a Slater determinant
but a linear combination of Slater determinants, except when
the AMD wave function before the projection is already a spin-parity $J^\pi$ eigenstate.
If the intrinsic state has a deformation,
the projected wave function contains certain kinds of correlations beyond the Hartree-Fock
approximation.
\subsection{Intrinsic structures of $^{12}$C and $^{16}$O}
We apply the AMD+VAP method to the ground states of $^{12}$C and $^{16}$O,
and discuss their intrinsic structures.
\subsubsection{Density distribution}
The ground states of $^{12}$C and $^{16}$O have intrinsic deformations.
The density distributions of the intrinsic wave functions $\Phi_{\rm AMD}$ are
shown in Fig.~\ref{fig:c12-o16.dense}.
The result for $^{12}$C shows a triaxial deformation with a triangle feature,
while $^{16}$O has a deformation with a tetrahedral feature.
The quadrupole deformation parameters $(\beta,\gamma)$ evaluated by the quadrupole moments
are $(\beta,\gamma)=(0.31,0.13)$ for $^{12}$C and $(\beta,\gamma)=(0.25,0.09)$ for $^{16}$O.
The triangle and tetrahedral shapes are caused by $\alpha$-cluster correlations. Strictly speaking,
the $\alpha$ clusters are not ideal $(0s)^4$ clusters but somewhat dissociated ones. Moreover, the triangle and tetrahedron
are not regular but distorted,
as one $\alpha$ cluster is situated slightly farther from the other $\alpha$s.
\begin{figure}[th]
\centerline{\epsfxsize=7.5 cm\epsffile{dense-fig.eps}}
\caption{(color on-line) Density distributions in the intrinsic states for the ground states of
(top) $^{12}$C and (bottom) $^{16}$O obtained by the AMD+VAP calculation.
The density integrated on the $z$, $x$, and $y$ axes is plotted on the $x$-$y$, $y$-$z$, and $z$-$x$ planes.
\label{fig:c12-o16.dense}}
\end{figure}
\subsubsection{Oscillation of the surface density}
Let us consider the deformations of $^{12}$C and $^{16}$O
from the viewpoint of symmetry breaking.
The highest symmetry is the sphere, which is realized in the closed $p_{3/2}$-shell and
$p$-shell states of $^{12}$C and $^{16}$O, respectively.
As a higher symmetry can break into a lower symmetry due to multi-nucleon correlations,
the deformation mechanism of $^{12}$C is interpreted as follows.
Because of the $\alpha$-cluster correlations,
the rotational symmetry in the sphere breaks to the axial symmetry in an oblate deformation
and changes into the $D_{3\text{h}}$ symmetry in the regular triangle $3\alpha$
configuration, and then
it breaks to the distorted triangle.
The symmetry change, spherical$\rightarrow$oblate$\rightarrow$triangle, corresponds to
${\rm O(3)}\rightarrow {\rm O(2)}\times Z_2\rightarrow D_{3\text{h}}$.
Similarly, the deformation of $^{16}$O is understood as follows:
the rotational symmetry of the sphere breaks into the $T_d$ symmetry in the regular tetrahedral
$4\alpha$ configuration, which then breaks to the distorted tetrahedron.
The structure change from the sphere to the tetrahedron is the breaking of the O(3) symmetry to
the $T_d$ symmetry. Note that the continuous (rotational) groups
break to the discrete (point) groups in the triangle and tetrahedral deformations.
We discuss the connection of cluster correlations with the density wave, which is
characterized by the static density oscillation at the surface.
The surface density oscillation occurs at the symmetry breaking from the axial symmetry (oblate shape)
to the $D_{3\text{h}}$ symmetry (triangle) in $^{12}$C and
that from the rotational symmetry (sphere) to the $T_d$ symmetry (tetrahedron) in $^{16}$O.
The triangle deformation contains the $(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ component with the
$Y_2^0$ deformation in the density, while the tetrahedral one has
the $(\sqrt{5}Y_3^{0}+\sqrt{2}Y_3^{-3}-\sqrt{2}Y_3^{+3})/3$ component, which can be transformed
to $(Y_3^{-2}+Y_3^{+2})/\sqrt{2}$ by a rotation. Indeed, as described
in appendix \ref{app:BB-MF},
an ideal $3\alpha$($4\alpha$)-cluster wave function with
the triangle (tetrahedral) configuration has
the density having the finite components of $(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$
($(\sqrt{5}Y_3^{0}+\sqrt{2}Y_3^{-3}-\sqrt{2}Y_3^{+3})/3$) and it
can be described by the DW-type particle-hole correlations in case of weak deformations.
Note that the DW in the triangle shape is characterized by
the particle-hole correlations on the Fermi surface carrying the finite angular momentum
$(l,m)=(3,\pm 3)$ and that in the tetrahedral one is given by
the particle-hole correlations with $(l,m)=(3,\pm 2)$ as described in the appendix.
Namely, the symmetry breaking is characterized by the
$(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ component, whose amplitude is
linearly related to the order parameter of the DW correlations as shown
in Eqs.~(\ref{eq:3alpha-density}), (\ref{eq:4alpha-density}), and (\ref{eq:4alpha-density2}).
In other words, the symmetry-broken states have finite $(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ components in the surface density
and show a density oscillation with
wave number three.
To analyze the surface density oscillation in the ground states of $^{12}$C and $^{16}$O,
we perform the multipole decomposition of the intrinsic density obtained with the AMD+VAP calculation
\begin{equation}
\rho(r=R_0,\theta,\phi)=\bar\rho(R_0) \sum_{lm} \alpha_{lm} Y_l^m(\theta,\phi),
\end{equation}
at a certain radius $r=R_0$, and discuss the $(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ components.
We take $R_0$ to be the root-mean-square radius of the intrinsic state.
$\bar\rho(R_0)$ is determined by the normalization $\alpha_{00}=1$, and
$\alpha_{lm}$ satisfies the relation
$\alpha_{lm}=(-1)^{m}\alpha^*_{l-m}$ because the density $\rho(r=R_0,\theta,\phi)$ is real.
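To make the procedure concrete, the following minimal Python sketch (an illustration, not the actual AMD analysis code) extracts $\alpha_{lm}$ from a density on the sphere $r=R_0$ by direct angular quadrature; the toy density, with assumed deformation amplitudes, mimics an oblate shape with a wave-number-three surface oscillation.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm
from scipy.integrate import trapezoid

def rho_toy(theta, phi):
    # toy surface density: oblate (Y_2^0) deformation plus a
    # wave-number-three component (Y_3^{-3} - Y_3^{+3})/sqrt(2)
    y20 = np.real(sph_harm(0, 2, phi, theta))
    y3 = (sph_harm(-3, 3, phi, theta) - sph_harm(3, 3, phi, theta)) / np.sqrt(2.0)
    return 1.0 - 0.3 * y20 + 0.2 * np.real(y3)

theta = np.linspace(0.0, np.pi, 201)      # polar angle
phi = np.linspace(0.0, 2.0 * np.pi, 401)  # azimuthal angle
TH, PH = np.meshgrid(theta, phi, indexing="ij")

def project(l, m):
    # <Y_l^m|rho> = int Y_l^m* rho dOmega; note scipy's argument
    # order sph_harm(m, l, azimuth, polar)
    integrand = np.conj(sph_harm(m, l, PH, TH)) * rho_toy(TH, PH) * np.sin(TH)
    return trapezoid(trapezoid(integrand, phi, axis=1), theta)

rho_bar = project(0, 0)                   # fixes the normalization alpha_00 = 1
for l in range(4):
    for m in range(-l, l + 1):
        a = complex(project(l, m) / rho_bar)
        if abs(a) > 1e-3:
            print(f"alpha({l},{m:+d}) = {a:.4f}")
\end{verbatim}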
The density plot at $r=R_0$ on the $\theta$-$\phi$ plane for $^{12}$C and that
at $r=R_0$ and $\theta=\pi/2$ as a function of
$\phi$ are shown in Fig.~\ref{fig:c12.theta-phi}.
As seen clearly, the density on the oblate edge shows an oscillation
with approximate wave-number-three periodicity, which comes from the $\alpha$-cluster correlation.
In the right panel of Fig.~\ref{fig:c12.theta-phi}, we also plot the density for the ideal $D_{3\text{h}}$ symmetry
given only by the $(l,m)=(0,0)$, $(2,0)$, $(3,3)$, and $(3,-3)$ components
to show the slight distortion of the AMD+VAP result from the ideal $D_{3\text{h}}$ symmetry.
From the coefficients $|\alpha_{lm}|$ shown in Fig.~\ref{fig:ylm}, it is found that the
$Y_l^{\pm 3}$ components are actually finite, indicating that
the axial symmetry is broken to the triangle shape in $^{12}$C.
For the density in $^{16}$O, the $\theta$-$\phi$ plot is shown in Fig.~\ref{fig:o16.theta-phi}, and
the coefficients of the multipole decomposition are shown in Fig.~\ref{fig:ylm}.
The tetrahedral component $\sqrt{5}Y_3^{0}/3+\sqrt{2}Y_3^{+3}/3-\sqrt{2}Y_3^{-3}/3$
is shown by the hatched boxes at $\alpha_{30}$ and $\alpha_{33}$ in Fig.~\ref{fig:ylm}. The open boxes
indicate the distortion components from the tetrahedron.
The distortion appears in the axially symmetric components,
$\alpha_{30}$, $\alpha_{20}$, and $\alpha_{10}$, coming
from the spatial development of one $\alpha$ cluster away from the others, as explained above.
Thus, the
intrinsic states of $^{12}$C and $^{16}$O show
the surface density oscillation with the $(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ component.
The wave number three oscillation characterized by the $(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ component
is understood by the $\alpha$-cluster correlations with triangle and tetrahedral deformations, which are
interpreted as the DWs on the oblate and spherical shapes, i.e., the spontaneous symmetry breaking of
axial symmetry $\rightarrow$ $D_{3\text{h}}$ and rotational symmetry $\rightarrow$ $T_d$ symmetry.
\begin{figure}[th]
\centerline{\epsfxsize=7 cm\epsffile{c12-dense-r-fig.eps}}
\caption{(color on-line) Left: density at $r=R_0$ for
$^{12}$C calculated by the AMD+VAP, plotted on the $\theta$-$\phi$ plane.
Right: the density at $r=R_0$ along the $\theta=\pi/2$ line as a function of $\phi$ (solid line).
The density for the ideal $D_{3\text{h}}$ symmetry
given only by the $(l,m)=(0,0)$, $(2,0)$, $(3,3)$, and $(3,-3)$ components is also plotted (dashed line).
$R_0$ is taken to be 2.53 fm.
}\label{fig:c12.theta-phi}
\end{figure}
\begin{figure}[th]
\centerline{\epsfxsize=3.5 cm\epsffile{o16.dense-r.eps
}
\caption{(color on-line)
Density at $r=R_0$ for
$^{16}$O calculated by the AMD+VAP, plotted on the $\theta$-$\phi$ plane.
$R_0$ is chosen to be 2.81 fm.
\label{fig:o16.theta-phi}}
\end{figure}
\begin{figure}[th]
\centerline{\epsfxsize=6 cm\epsffile{ylm-fig.eps}}
\caption{The coefficients $\alpha_{lm}$ of
the multipole decomposition of the intrinsic density of (top) $^{12}$C and (bottom) $^{16}$O
calculated by the AMD+VAP. The hatched areas in the bottom panel are the
tetrahedron component $\sqrt{5}Y_3^{0}/3+\sqrt{2}Y_3^{+3}/3-\sqrt{2}Y_3^{-3}/3$
defined by $\alpha^{\rm hatch}_{30} \equiv \sqrt{5/2} \alpha_{33}$ and
$\alpha^{\rm hatch}_{33} \equiv \alpha_{33}$. The open area for the $Y_3^{0}$ component
is defined by the relation $\alpha_{30}=\alpha^{\rm open}_{30}+\alpha^{\rm hatch}_{30}$.
\label{fig:ylm}}
\end{figure}
\section{Pauli blocking effect in $\alpha$ correlations} \label{sec:BB-cluster}
As shown in the previous section, the
intrinsic states of $^{12}$C and $^{16}$O obtained with the AMD+VAP calculation
contain the $\alpha$-cluster correlations with the triangle and tetrahedral deformations.
For $^{12}$C, the triangle deformation has been suggested by
$3\alpha$-cluster models, in which the $3\alpha$ state with a regular triangle configuration
is the energy minimum state~\cite{brink70,GCM,fujiwara80}. In contrast to such a geometric configuration as the triangle,
the second $0^+$ state of $^{12}$C is considered to be a cluster gas state where
three $\alpha$ clusters are freely moving in a dilute density
like a gas, without any geometric correlations between clusters~\cite{GCM,RGM,fujiwara80,Tohsaki01}.
The $0^+_2$ state is associated with the $\alpha$ condensation in analogy to BEC because it is regarded as a system of
three $\alpha$s occupying the same $S$ orbit~\cite{Tohsaki01}.
From the viewpoint of symmetry breaking,
the symmetry is broken in the $0^+_1$ state with the triangle deformation and
it is restored in the $0^+_2$ state.
In the $0^+_1$ state, the axial symmetry breaks down
to the $D_{3\text{h}}$ symmetry due to the $3\alpha$-cluster structure.
One of the characteristics of the $0^+_1$ state is the oscillating surface density caused by
the angular correlation of $\alpha$ clusters. The
structure change from the $0^+_1$ to the $0^+_2$ is then expected to be connected with
the transition from the symmetry-broken state with the angular correlation to the symmetric state with
no (or less) correlation between $\alpha$ clusters.
The origin of the angular correlation in the $0^+_1$ state
and the transition into the uncorrelated $0^+_2$ state can be
understood by the Pauli blocking effect between clusters as follows.
Let us here consider motion of an $\alpha$ cluster around a $2\alpha$ core
in the BB 3$\alpha$-cluster model wave function. In the BB 3$\alpha$-cluster model,
$\alpha$ clusters are located around certain positions ${\bf S}_1$, ${\bf S}_2$, and ${\bf S}_3$,
and the wave function is written as
\begin{equation}
\Phi_{BB}({\bf S}_1,{\bf S}_2,{\bf S}_3)=\frac{1}{\sqrt{A!}}{\cal A}\left\{
\Pi_{\tau\sigma} \phi_{{\bf S}_1}{\cal X}_{\tau\sigma} \phi_{{\bf S}_2}{\cal X}_{\tau\sigma}
\phi_{{\bf S}_3}{\cal X}_{\tau\sigma} \right\},
\end{equation}
where ${\cal X}_{\tau\sigma}$ is the spin-isospin wave function with
$\tau=\{p,n\}$ and $\sigma=\{\uparrow,\downarrow\}$.
We assume that 2 $\alpha$ clusters placed at ${\bf S}_1=(0,0,d/2)$
and ${\bf S}_2=(0,0, -d/2)$ form the core.
The third $\alpha$ is placed
at ${\bf S}_3=(0,y,z)$ for $y=r\cos \theta_y,z=r\sin \theta_y$
on the $(y,z)$-plane (see Fig.~\ref{fig:coupling}).
Because of the Pauli blocking effect between clusters,
the motion of the third $\alpha$ around the core is restricted to the Pauli-allowed region.
In particular, when the $\alpha$ exists near the core, its rotational motion is strongly blocked because
of the existence of the other $\alpha$ clusters. The Pauli allowed and forbidden areas
for the rotation of the angle $\theta_y$ of the third $\alpha$ center
in the $(y,z)$-plane are presented in Fig.~\ref{fig:coupling}.
In the figure, the norm ${\cal N}_{3\alpha}(y,z)=\langle \Phi_{BB}({\bf S}_1,{\bf S}_2,{\bf S}_3)|
\Phi_{BB}({\bf S}_1,{\bf S}_2,{\bf S}_3)\rangle$ of the BB wave function
$\Phi_{BB}({\bf S}_1,{\bf S}_2,{\bf S}_3)$ with the parameters,
${\bf S}_1=(0,0,d/2)$, ${\bf S}_2=(0,0, -d/2)$, and ${\bf S}_3=(0,y,z)$ is shown in the $(y,z)$-plane.
$d$ is fixed and taken to be a small value. The norm is normalized by its value for the $\alpha$
on the $y$-axis,
\begin{equation}
\tilde {\cal N}_{3\alpha}(y,z)\equiv \frac{{\cal N}_{3\alpha}(y=r\cos \theta_y,z=r\sin \theta_y) }
{{\cal N}_{3\alpha}(y=r,z=0)}.
\end{equation}
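This norm can be evaluated in closed form: with Gaussian packets, the single-nucleon overlap is $\langle \phi_{{\bf S}_i}|\phi_{{\bf S}_j}\rangle=\exp(-\nu|{\bf S}_i-{\bf S}_j|^2/2)$, and, since the four spin-isospin species are independent, the $3\alpha$ norm factorizes into the fourth power of a $3\times 3$ determinant, up to $S$-independent constants that cancel in $\tilde{\cal N}_{3\alpha}$. A minimal Python sketch follows (the values of $d$ and $r$ below are illustrative assumptions):
\begin{verbatim}
import numpy as np

NU = 0.19  # fm^-2, the width parameter used in the AMD calculation

def norm_3alpha(S):
    # norm of the BB 3-alpha wave function for centers S (3x3 array, fm),
    # up to an S-independent constant that cancels in the normalized ratio
    d2 = np.sum((S[:, None, :] - S[None, :, :]) ** 2, axis=-1)
    B = np.exp(-0.5 * NU * d2)    # Gaussian single-nucleon overlap matrix
    return np.linalg.det(B) ** 4  # one determinant per spin-isospin species

d, r = 2.0, 2.0                   # core distance and third-alpha radius (fm)
core = [[0.0, 0.0, d / 2], [0.0, 0.0, -d / 2]]
ref = norm_3alpha(np.array(core + [[0.0, r, 0.0]]))  # third alpha on y-axis
for th in np.linspace(0.0, np.pi / 2, 7):
    S = np.array(core + [[0.0, r * np.cos(th), r * np.sin(th)]])
    print(f"theta_y = {th:4.2f}  N~_3alpha = {norm_3alpha(S) / ref:5.3f}")
\end{verbatim}
The ratio drops toward zero as the third $\alpha$ rotates toward the $z$-axis, illustrating the forbidden region discussed in the text.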
The area with $\tilde {\cal N}_{3\alpha}\sim 1$ is the allowed region, where
the $\alpha$ feels no Pauli blocking with respect to the rotational motion, while the
$\tilde {\cal N}_{3\alpha}\sim 0$ area is the blocked region, where
it feels the strong Pauli blocking effect from the $\alpha$ clusters of the core.
It means that, when the third $\alpha$ exists near the core, its angular motion is
blocked by the $2\alpha$ core. As a result, the third $\alpha$ is confined to the
Pauli-allowed region around the $y$ axis, and it has an angular correlation relative to the
$2\alpha$ direction.
Consequently, a compact $3\alpha$ state has a geometric structure of the
triangle deformation and it has the surface density oscillation.
On the other hand, as the cluster develops spatially, the Pauli blocking effect becomes weak.
When the $\alpha$ is far enough from the core, it can move freely in the rotational mode.
Then the angular correlation vanishes
and the system transits to an angular-uncorrelated state.
We note that in cluster physics this transition is known as
the change between
strong cluster coupling and weak cluster coupling states, in which
the angular momentum of the inter-cluster motion couples with the inner spins of the clusters
strongly and weakly, respectively (see Fig.~\ref{fig:coupling}). What we call the cluster coupling here
is the coupling between clusters; it differs from the terminology of strong and weak coupling
in the BEC-BCS crossover phenomena, which refers to
the coupling between nucleons in a pair (or in a cluster).
The $0^+_1$ state of $^{12}$C is considered to be the compact $3\alpha$ state in the
strong Pauli blocking regime and corresponds to the angular correlated state
with the surface density oscillation due to the triangle deformation.
In the $^{12}$C($0^+_2$), the clusters develop spatially and the system goes to the
non-angular-correlated state.
Strictly speaking, in the $^{12}$C($0^+_2$), all clusters develop
to form an uncorrelated three-$\alpha$ state associated with the $\alpha$ condensation,
where the radial motion is as important as the angular motion.
Nevertheless, we can say that,
for the restoration of the broken symmetry with the surface density oscillation
in the $0^+_1$ to the rotational symmetry in the $0^+_2$,
the transition from the angular correlated state to the uncorrelated state
in the cluster development is essential.
In the above discussion, we consider the angular motion in the intrinsic (body-fixed) frame, i.e.,
the $y$-$z$ plane. Since the system is symmetric under rotation around the $z$-axis, the motion of the
third $\alpha$ is free for the $z$-rotation. This rotational mode around the $z$ axis is nothing but
the projection onto $J_z=0$, which is usually performed
in the $J^\pi=0^+$ projection of the intrinsic state.
Also in $^{16}$O, the angular motion of an $\alpha$ cluster at the surface
is blocked by the other three $\alpha$s. The Pauli blocking effect between
$\alpha$ clusters causes the angular correlation of the tetrahedral configurations
in a compact $4\alpha$ state. For transition from the angular correlated cluster state of the ground state
to the uncorrelated cluster state like a cluster gas, at least two $\alpha$ clusters need to spatially
develop to move freely in the allowed region without feeling the Pauli blocking effect.
The $4\alpha$-cluster gas state in $^{16}$O has been suggested near the $4\alpha$ threshold energy.
The suggested excitation energy $E_x\sim 15$ MeV is almost twice the $E_x=7.66$ MeV of
the 3$\alpha$-cluster gas state in $^{12}$C. This might correspond to the energy cost of the
spatial development of two $\alpha$s in $^{16}$O compared with
that of one $\alpha$ in $^{12}$C.
\begin{figure}[th]
\centerline{\epsfxsize=7.5 cm\epsffile{coupling.eps}}
\caption{(a) The Pauli allowed and forbidden regions
for the rotation of the angle $\theta_y$ of the third $\alpha$
in the $(y,z)$-plane around the $2\alpha$ core. The norm
$\tilde {\cal N}_{3\alpha}(y=r\cos \theta_y,z=r\sin \theta_y)$ normalized
at $(y=r, z=0)$ is presented. The $\tilde {\cal N}_{3\alpha}\sim 1$ and $\tilde {\cal N}_{3\alpha}\sim 0$ regions
are the Pauli allowed and forbidden regions, respectively.
(b) A schematic figure for the position of the third $\alpha$
around the $2\alpha$ core.
(c) A schematic figure of the strong cluster coupling state for a compact $3\alpha$ state with the
strong Pauli blocking effect, and
(d) that of the weak cluster coupling state. See the text.
\label{fig:coupling}}
\end{figure}
\section{Clusters on a Fermi gas core in one dimensional box} \label{sec:1D-cluster}
\subsection{Concept of one-dimensional cluster model}
As mentioned, the triangle deformation of the $^{12}$C ground state can be interpreted as
the density wave on the axial symmetric oblate state and it is characterized by the
static surface density oscillation with the wave number three on the edge of the oblate intrinsic state.
The symmetry breaking in the ground state originates in
the 4-body correlation in $\alpha$ clusters.
The axial symmetry is broken by the angular correlation between $\alpha$ clusters
due to the Pauli blocking effect.
As $\alpha$ clusters develop, the Pauli blocking weakens and the angular correlation vanishes.
Then the system transits from the symmetry broken state of the $^{12}$C($0^+_1$) to the
symmetry restored state of the $^{12}$C($0^+_2$), where
$\alpha$ clusters are freely moving in a dilute density like a gas.
In this section, we consider Pauli blocking effects
in a schematic model of two clusters on a Fermi gas core
in a one-dimensional (1D) finite box,
and discuss how the translational invariance is broken to form
an oscillating density (inhomogeneous) state and how the broken symmetry is restored
to a uniform density (homogeneous) state.
Let us consider a schematic model for a simplified $3\alpha$ system consisting of
two $\alpha$ clusters around the $\alpha$ core. To concentrate only on
angular correlations between two $\alpha$s, we
ignore the radial coordinate degree of freedom
and assume that two $\alpha$ clusters are moving at a certain distance $r_0$ from the core.
Taking the body-fixed frame so that one of the two
$\alpha$ clusters lies on the $z$-axis and the other $\alpha$ in the $y$-$z$ plane,
the angular motion between the two $\alpha$s can be reduced to
a 1D problem, in which the first cluster sits around the origin and the second cluster
moves in a box of finite size $L=2\pi r_0$ with the periodic boundary condition.
For the core effect, we take into account only the Pauli blocking from the core particles,
which is treated here by a Fermi gas core for simplicity.
We extend the 1D-cluster model and treat the cluster size $b$,
the box size $L$, and the core Fermi momentum $k_c$ as parameters.
In this model, we perform neither energy variation nor calculation of energy eigenstates.
Moreover, the mechanism of cluster formation and the dynamical change of
the cluster structure or the cluster size are beyond our scope. We give a model wave function with fixed parameters $b$, $L$, and $k_c$
based on a simple ansatz, and analyze the Pauli blocking effect between a cluster and the
core and that between two clusters in the given wave function.
From the behavior of the wave function and the features of the Pauli blocking effects, we conjecture how transitions between
the Fermi gas, BCS-like, DW-like, and BEC-like states occur.
\subsection{Formulation of one-dimensional cluster model}
We explain details of the 1D-cluster model wave function.
Two $n$-body clusters are formed on a Fermi gas core
in a 1D box with the box size $L=2\pi r_0$. For an $\alpha$ cluster,
$n=4$ and the cluster consists of $p\uparrow$, $p\downarrow$, $n\uparrow$, and $n\downarrow$.
Basis single-particle wave functions in the periodic boundary condition are given as
$e^{+ik x}/{\sqrt{2L}}$ and $e^{-ikx}/{\sqrt{2L}}$, where the
momentum $k=2\pi \tilde k/L$. Here $\tilde k$ is the dimensionless momentum $\tilde k=kr_0$ and
takes $\tilde k=0,1,2,3,\cdots$. Core nucleons are assumed to form a Fermi surface at $k_c$. It means that
$k\le k_c$ orbits are occupied by core nucleons and they are forbidden single-particle states for
constituent nucleons of clusters. Using the dimensionless parameter $\tilde k_c=k_c r_0$,
the lowest allowed momentum is $k_F =\tilde k_F/r_0$ for $\tilde k_F\equiv \tilde k_c+1$ and
$k\ge k_F$ orbits are allowed.
We assume that the first cluster ($\alpha_1$) wave function is localized around $x=0$
in a Gaussian form,
\begin{align} \label{eq:psi_1}
\psi_{\alpha_1}&=\Pi_{\chi}\phi_0(x_{1\chi}){\cal X}_{\chi}, \\
\chi&=\tau\sigma=\{p\uparrow, p\downarrow,n\uparrow,n\downarrow\},\\
\phi_0(x)&=\frac{1}{\sqrt{2L}}\sum_{k \ge k_F} f(k)\left( e^{+ik x} + e^{-ikx}\right).
\label{eq:spatial-alpha1}
\end{align}
$f(k)$ is the Fourier transformation of the Gaussian with the cluster size $b$ and given as
\begin{equation}
f(k)=n_0 \exp(-\frac{b^2 k^2}{2}),
\end{equation}
where $n_0$ is determined by the condition,
\begin{equation}
\sum_{k \ge k_F} |f(k)|^2=1,
\end{equation}
to normalize $\phi_0(x)$ in the box.
The wave function indicates that the four species of nucleons, $p\uparrow, p\downarrow, n\uparrow, n\downarrow$,
occupy the same spatial orbit with the Gaussian form, partially blocked by the Fermi gas core.
If $b/L$ and $k_F$ are small enough, the wave function is well localized around $x=0$.
Since there are no identical fermions within a cluster,
the antisymmetrizer can be omitted in the expression.
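As a concrete illustration, a minimal Python sketch (with an assumed momentum cutoff) constructs $\phi_0(x)$ in units with $r_0=1$ (so $L=2\pi$) and checks its normalization and localization:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

kF, b = 1, 1.0                  # lowest allowed momentum k_F~ and size b~
k = np.arange(kF, 61)           # allowed momenta k~ >= k_F~ (cutoff assumed)

f = np.exp(-0.5 * b**2 * k**2)  # f(k) ~ exp(-b^2 k^2 / 2)
f /= np.sqrt(np.sum(f**2))      # normalization: sum_k f(k)^2 = 1

L = 2.0 * np.pi                 # dimensionless box size
x = np.linspace(0.0, L, 401)
# phi_0 = (1/sqrt(2L)) sum_k f(k)(e^{ikx}+e^{-ikx}) = sqrt(2/L) sum_k f(k)cos(kx)
phi0 = np.sqrt(2.0 / L) * (f[:, None] * np.cos(np.outer(k, x))).sum(axis=0)
rho = phi0**2                   # one-particle density for each species chi

print("norm           =", trapezoid(rho, x))   # -> 1
print("rho(pi)/rho(0) =", rho[200] / rho[0])   # localization contrast
\end{verbatim}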
Next we consider the second cluster ($\alpha_2$).
Assuming that $\alpha_2$ is localized around a position $x=s$, the wave function is given
by the shifted function,
\begin{align}\label{eq:psi_2s}
\psi^s_{\alpha_2}&=\Pi_{\chi}\phi_s(x_{2\chi}){\cal X}_{\chi},\\
\phi_s(x)&=\frac{1}{\sqrt{2L}}\sum_{k \ge k_F} f(k)\left( e^{+ik (x-s)} + e^{-ik (x-s)}\right).
\end{align}
Then, the normalized wave function of the $2\alpha$ system with the parameter $s$ is written as
\begin{align}
\Psi_s &= {\cal N}^{-2}_{\rm PB}(s)
\frac{1}{\sqrt{8!}}{\cal A} \left\{ \psi_{\alpha_1} \psi^s_{\alpha_2} \right \} ,\\
{\cal N}_{\rm PB}(s)&\equiv\Bigl|\frac{1}{\sqrt{2}}{\cal A}\{\phi_0\phi_s\}\Bigr|^2 \\
&=\frac{1}{2}\int dx_1 dx_2\,
|\phi_0(x_1)\phi_s(x_2)-\phi_0(x_2)\phi_s(x_1)|^2 \\
&= 1- |\langle \phi_0|\phi_s\rangle|^2.
\end{align}
${\cal N}_{\rm PB}(s)$ is the
overlap norm for two identical nucleons. It is
a function of the parameter $s$ for the localization center of the second cluster.
${\cal N}^4_{\rm PB}$ is the overlap norm of the $2\alpha$ system and is an indicator
of the Pauli blocking effect between the two clusters.
In the case that the two clusters feel no Pauli blocking, ${\cal N}^4_{\rm PB}(s)=1$, while
in the case that they feel complete Pauli blocking, ${\cal N}^4_{\rm PB}(s)=0$.
It means that ${\cal N}^4_{\rm PB}(s)$ stands for the Pauli ``allowedness''
for $\alpha_2$ around $s$.
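The allowedness can be computed directly: using the orthogonality of the plane waves in the box, the single-particle overlap reduces to $\langle\phi_0|\phi_s\rangle=\sum_{k\ge k_F} f^2(k)\cos(ks)$. A minimal Python sketch (same assumed cutoff and units as above):
\begin{verbatim}
import numpy as np

kF, b = 1, 1.0
k = np.arange(kF, 61)
f2 = np.exp(-b**2 * k**2)   # f(k)^2
f2 /= f2.sum()              # sum_k f(k)^2 = 1

def allowedness(s):
    # N_PB(s) = 1 - <phi_0|phi_s>^2, and N_PB^4 for the 2-alpha system
    overlap = np.sum(f2 * np.cos(k * s))
    return (1.0 - overlap**2) ** 4

# the allowed positions s_j = pi(2j-1)/(2 kF) give N_PB^4 close to 1
for j in range(1, 2 * kF + 1):
    sj = np.pi * (2 * j - 1) / (2 * kF)
    print(f"s_{j} = {sj:.3f}  N_PB^4 = {allowedness(sj):.3f}  (allowed)")
print(f"s   = 0.000  N_PB^4 = {allowedness(0.0):.3f}  (forbidden)")
\end{verbatim}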
The density distribution for $\chi=\tau\sigma$ particles
in the $2\alpha$ state is
\begin{equation}
\rho^\chi_s(x)=\langle \Psi_s|\sum_{i \in \chi}\delta(x-x_i)| \Psi_s \rangle.
\end{equation}
Note that the density distributions for all kinds of nucleons are the same in the present cluster model
and the total nuclear density is $\sum_\chi \rho^\chi_s(x)=4 \rho^\chi_s(x)$.
In a similar way, the $\chi=\tau\sigma$ density for the one-$\alpha$ state $\psi_{\alpha_1}$ is
\begin{equation}
\rho^\chi_{\alpha_1}(x)=\langle \psi_{\alpha_1}|\sum_{i \in \chi }\delta(x-x_i)| \psi_{\alpha_1}\rangle.
\end{equation}
In the present model, we can express all the formulation with the dimensionless parameters
$\tilde L=L/r_0=2\pi$, $\tilde b=b/r_0$, $\tilde s=s/r_0$, $\tilde x=x/r_0$, $\tilde k=kr_0$,
$\tilde k_c=k_cr_0$, $\tilde k_F=k_F r_0$, and so on.
The dimensionless densities are also defined as $\tilde \rho=\rho r_0$.
Then, the state $|\Psi_s \rangle$ is specified with the dimensionless parameters $(\tilde k_F, \tilde b, \tilde s)$
and does not depend on the scaling factor $r_0$.
Since the number of clusters is fixed to be two,
a larger volume size $L$ corresponds to a lower cluster density.
Because of the scaling, a larger volume size $L=2\pi r_0$ with a fixed cluster size $b$
is equivalent to a smaller cluster size $b$ with a fixed $L$.
That is, the parameter $\tilde b$ indicates the cluster size
relative to the box size and also corresponds to the cluster density.
The $\tilde k_F=\tilde k_c+1$ is the lowest allowed momentum just above the core Fermi momentum $\tilde k_c$,
and it relates to the density of the core particles.
\subsection{Pauli blocking effect from the core to one cluster}
Let us describe the structure change of one cluster on the Fermi gas core
in the 1D-cluster model.
The coefficients $f(\tilde k)$ of the Fourier transformation and
the density distribution $\rho^\chi_{\alpha_1}(\tilde x)$ for the $\alpha_1$ cluster
are shown in Figs.~\ref{fig:kf1} and \ref{fig:kf2} for $\tilde k_F=1$ and $\tilde k_F=2$, respectively.
Because of the Pauli blocking effect from the core as well as the finite volume effect,
the structure of the cluster changes
from the original Gaussian form
depending on the parameters $\tilde b$ and $\tilde k_F$.
In the case of small $\tilde b$, which corresponds to a small cluster size $b$ or a large volume size
$L$, the coefficient $f(\tilde k)$ is widely distributed and has a long
tail toward the high-momentum region, and the density is well localized
around $\tilde x=0$ (and also $\tilde x=2\pi$ for the periodic boundary).
With increase of $\tilde k_F$, the low-momentum components are truncated
and the density shows an oscillating tail. Nevertheless, if the cluster size
$\tilde b$ is small enough, the density is still localized. With increase of the cluster size $\tilde b$,
the density localization weakens and the density approaches a periodic one.
Then, the component of the lowest allowed orbit $\tilde k_F=\tilde k_c+1$ is dominant and
the components of higher orbits decrease. The localization declines more rapidly
in the case of the larger $\tilde k_F$.
It means that, as $\tilde b$ increases or
as $\tilde k_F$ increases,
the spatial correlation between nucleons in a cluster (inner correlation) becomes
weak. We call the weak-localization case, in which the wave function has
a dominant $\tilde k_F$ component and only minor
$\tilde k > \tilde k_F$ components, ``the weak coupling regime'',
and the opposite case of strong localization with significant $\tilde k > \tilde k_F$ components
``the strong coupling regime''. In the weak coupling limit, the one-cluster density goes to
$\cos^2(\tilde k_F \tilde x)/\pi$.
\subsection{Pauli blocking between two clusters}
For two $\alpha$ clusters expressed by the cluster
wave function $\Psi_s$, the clusters $\alpha_1$ and $\alpha_2$ are assumed to be localized around
$\tilde x=0$ and $\tilde x=\tilde s$, respectively. The Pauli blocking effect between $\alpha_1$
and $\alpha_2$ is evaluated by the overlap norm ${\cal N}^4_{\rm PB}(\tilde s)$, which is an indicator
for Pauli allowedness.
The Pauli allowedness ${\cal N}^4_{\rm PB}(\tilde s)\sim 0$ means that
the $\tilde s$ region is blocked by the $\alpha_1$ cluster and is a forbidden area for $\alpha_2$.
The middle panels of Figs.~\ref{fig:kf1} and \ref{fig:kf2} show
the allowedness ${\cal N}^4_{\rm PB}(\tilde s)$ (thin solid lines) as well as ${\cal N}_{\rm PB}(\tilde s)$ (dashed lines)
plotted as functions of $\tilde s$. In principle, the Pauli blocking effect reflects
the probability density of the $\alpha_1$ cluster, and therefore the forbidden region corresponds to the
relatively high-density region of the $\alpha_1$ cluster. In the case of a small cluster size $\tilde b$,
the allowed region is wide and the forbidden region exists only
in the small area close to $\tilde x=0$ and $\tilde x=2\pi$.
That is to say, the second cluster feels almost no Pauli blocking effect except for the region near
the first cluster ($\alpha_1$).
With increase of $\tilde b$, the forbidden area spreads in a wide region around $\tilde x=0$ and $\tilde x=2\pi$, and the allowed region for $\alpha_2$ becomes narrow. In the weak coupling regime, where
the $\alpha_1$ density is periodic,
the allowed region with ${\cal N}^4_{\rm PB}(\tilde s)\sim 1$
also shows the periodicity.
Reflecting the $\cos^2(\tilde k_F x)$ periodicity of $\rho_{\alpha_1}(x)$,
the areas $\tilde s\approx \pi m/\tilde k_F$ ($m=0,\cdots,2 \tilde k_F$) are the forbidden regions, while
the areas $\tilde s=\tilde s_j=\pi (2j-1)/2\tilde k_F$ ($j=1,\cdots,2 \tilde k_F$) are the allowed regions.
As mentioned, the parameter $\tilde b$ corresponds to the cluster size relative to the box size.
When the cluster density is low enough, the Pauli blocking effect is weak and
almost the whole region is allowed for the $\alpha_2$ cluster, except for the
positions close to $\alpha_1$. On the other hand, in the case of high cluster density,
the Pauli blocking effect is strong and the allowed $\tilde s$ region is restricted
to the periodic regions.
\subsection{Transitions from strong coupling to weak coupling regimes}
We first discuss the features of the two cluster wave function $\Psi_s$ with a
fixed parameter $\tilde s$. It means that the center of the second cluster is located around
the fixed position. Later, we will discuss
how the spatial correlations between clusters (inter-cluster correlation)
can be affected by the Pauli blocking.
We choose $\tilde s_j=\pi (2j-1)/2\tilde k_F$ with $j=1$ and $j=\tilde k_F$, which are
the allowed positions $\tilde s_j$ nearest to $\tilde x=0$ and $\tilde x=\pi$ and correspond to the
smallest and largest inter-cluster ($\alpha_1$-$\alpha_2$) distances, respectively.
The density distributions $\rho^\chi_s(\tilde x)$ in the 2$\alpha$-cluster wave function $\Psi_s$
are shown in the right panels of Figs.~\ref{fig:kf1} and \ref{fig:kf2} for $\tilde k_F=1$ and
$\tilde k_F=2$.
In the strong coupling regime, for instance, in the $(\tilde k_F,\tilde b)=(1.0, 0.25)$ state,
the density shows a clear two-peak structure, indicating that
the two clusters are well isolated with almost no overlap.
As $\tilde b$ increases, the overlap region between the clusters gradually increases and
the density changes to an oscillating structure,
in particular in the case of $j=\tilde k_F$.
The density oscillation is remarkable, for instance, in the $(\tilde k_F,\tilde b)=(1.0, 1.0)$ and
$(\tilde k_F,\tilde b)=(2.0, 0.75)$ states, which show the $2\tilde k_F+1$ periodicity.
With further increase of $\tilde b$, the density oscillation weakens and finally
disappears into a uniform density, and the system goes to the Fermi gas limit
with the Fermi momentum $\tilde k_F$.
In the present model, we put the $\alpha_2$ cluster around the fixed position $\tilde s$.
For more realistic wave functions of two $\alpha$ clusters, one should
extend the model by taking into account the motion of the $\alpha_2$ cluster
relative to the $\alpha_1$ cluster.
Microscopically, it corresponds to superposing $\psi^s_{\alpha_2}$ with various values of the parameter
$s$ as
\begin{equation}
\psi_{\alpha_2}=\int d\tilde s F(\tilde s) \psi^{s}_{\alpha_2}.
\end{equation}
The spatial correlation between clusters (inter-cluster correlation)
is expressed by the weight function $F(\tilde s)$. On the other hand, the correlation
between nucleons inside a cluster (inner correlation) is described by the intrinsic structure of a
single-cluster wave function of $\psi_{\alpha_1}$ or $\psi^{s}_{\alpha_2}$ determined by
the parameters $\tilde b$ and $\tilde k_F$, and it is given by hand in the present model, where
the cluster formation and its intrinsic structure are {\it a priori} assumed.
Moreover, the wave function should be projected to the
total momentum $K_G=0$ for the center of mass motion (c.m.m.) of two clusters
so that the translational invariance is restored in the finite system.
In the following discussion, we do not treat the $\alpha_2$-cluster motion explicitly,
but conjecture how the Pauli blocking may restrict the motion of the $\alpha_2$ cluster and affect
the spatial correlations between clusters (inter-cluster correlations).
Since the area of high $\alpha_1$ density is blocked
as mentioned, the $\alpha_2$ cluster may move in the allowed regions with large
${\cal N}^4_{\rm PB}(\tilde s)$.
We assume that the interaction between clusters is weak and that
the Pauli blocking effect, which acts as an effective repulsion, gives the most important
contribution
to the relative motion between clusters.
\subsubsection{BEC-like state}
Let us consider the strong coupling regime, where the cluster size $\tilde b$ is
small enough. The ${\cal N}^4_{\rm PB}(\tilde s)$ curve shows a widely open window of the allowed region,
as in the case of $(\tilde k_F, \tilde b)=(1.0, 0.25)$.
Since the Pauli blocking effect is weak, the
$\alpha_2$ can move almost freely in the wide allowed region.
It means
a strong inner correlation but almost no inter-cluster correlation.
In such an uncorrelated cluster state, the two clusters may condense into
approximately the zero-momentum state in the ground state, similarly to the BEC phase.
\subsubsection{DW-like state}
As $\tilde b$ increases, the open window of the allowed region
closes and $\alpha_2$ can no longer
move freely.
Instead, the allowed region becomes discrete, and
the $\alpha_2$-cluster center may be confined
around the allowed $\tilde s$ values, $\tilde s_j=\pi (2j-1)/2\tilde k_F$ ($j=1,\cdots,2 \tilde k_F$).
For a possible wave function for $\alpha_2$, one may
consider a superposition of wave functions with $\tilde s_j$ as
\begin{equation}
\psi_{\alpha_2}=\sum_j F(\tilde s_j) \psi^{s_j}_{\alpha_2}.
\end{equation}
As seen in the density distributions $\rho^\chi_s(\tilde x)$ of the two-cluster states
shown in Figs.~\ref{fig:kf1} and \ref{fig:kf2},
the density oscillation shows the $2\tilde k_F+1$ periodicity for any $\tilde s_j$ values.
The density oscillation can be remarkable provided that
the amplitude $F(\tilde s_j)$ for $j=\tilde k_F$
(nearest $\tilde s_j$ to the middle point $\tilde x=\pi$) is relatively larger
than those for $\tilde s_j$ around $\tilde s=0$ and $2\pi$
because of the effective repulsion between clusters due to the Pauli blocking effect.
In other words, the possible localization of the amplitude $F(\tilde s_j)$
due to the Pauli blocking effect causes
the spatial correlation between clusters (inter-cluster correlation).
In such a case, the density oscillation shows
the clear $2\tilde k_F+1$ periodicity, whose origin is the DW-type correlations.
Indeed, the state dominantly contains
the coherent $1p$-$1h$ components of a $\pm (\tilde k_F+1)$ particle and a $\pm \tilde k_F$ hole
on the Fermi surface at $\tilde k_F$.
The $1p$-$1h$ correlation carries the momentum $2\tilde k_F+1$
and causes the $2\tilde k_F+1$ periodicity. This correlation is similar to
that of the DW phase.
In the weak coupling approximation, the spatial wave function Eq.~(\ref{eq:spatial-alpha1}) for
an $\alpha$ cluster
is approximated by the dominant
$\tilde k_F$ component with a minor mixing of the $\tilde k_F+1$ component as
\begin{equation}
\phi_0(x)=\sqrt{\frac{2}{L}} \bigl( \cos \tilde k_F \tilde x + \epsilon \cos (\tilde k_F+1) \tilde x \bigr).
\end{equation}
In this approximation, it can be shown that the $2\alpha$ wave function $\Psi_s$ for $j=\tilde k_F$ actually
contains the dominant DW-type $1p$-$1h$ correlation
in the particle-hole representation (see appendix \ref{app:weak-coupling}).
In the opposite case, in which an attractive force acts between the clusters,
the amplitudes $F(\tilde s_j)$ may gather at smaller $\tilde s_j$.
This corresponds to the exciton(Exc)-type correlations described by
the coherent $1p$-$1h$ components of a $\pm (\tilde k_F+1)$ particle and a $\mp \tilde k_F$ hole
on the Fermi surface, as described in appendix \ref{app:weak-coupling}.
In this case, the $1p$-$1h$ pair carries
the momentum $|\pm(\tilde k_F+1)\mp \tilde k_F|=1$, which suggests that the spatial density
oscillation is not so remarkable. Indeed, such a feature is seen in the weaker density oscillation
of $\rho^\chi_s({\tilde x})$ for $\tilde s=\tilde s_j$ ($j=1$) shown
in Fig.~\ref{fig:kf2} (dashed lines in the right panels) for $\tilde k_F=2$.
We should comment that the system with $\tilde k_F=1$ is a special case, in which
the small $\tilde s$ region, for instance, the region $\tilde s < \pi/4$, is forbidden,
so that the Exc-type correlations are suppressed.
\subsubsection{BCS-like state}
With further increase of $\tilde b$, the $\tilde k_F$ component becomes more dominant.
In the case $f(\tilde k) \ll f(\tilde k_F)$ for $\tilde k > \tilde k_F$,
the structure of the Pauli-allowed area for $\alpha_2$ approaches
a purely periodic one
following the periodicity of the $\alpha_1$ density
$\rho_{\alpha_1}(\tilde x)\approx \cos^2(\tilde k_F \tilde x)/\pi$.
Then, a linear combination of $\psi^{s_j}_{\alpha_2}$ with equal weights may be the lowest state
to restore the translational invariance, because the $\Psi_{s}$ for different $\tilde s_j$ may
be energetically degenerate.
It means that the state no longer has the spatial correlation between clusters (inter-cluster correlation);
it corresponds to a BCS-like state. Namely, the BCS-like state has the weak inner correlation
and no inter-cluster correlation.
As described in appendix \ref{app:weak-coupling},
in the weak coupling approximation,
the total c.m.m. momentum $K_G=0$ state projected from
the equal weight linear combination of
two cluster wave functions $\Psi_{s}$ is equivalent to the BCS-like state containing
$2p$-$2h$ configurations of a
$(\tilde k_F+1,-\tilde k_F-1)$ $\chi_\alpha\chi_\beta$ particle pair and a $(\tilde k_F,-\tilde k_F)$ $\chi_\alpha\chi_\beta$ hole pair
[see Eq.~(\ref{eq:BCS})].
In the $2p$-$2h$ state, all kinds of pairing are coherently mixed so as to keep
the spin-isospin symmetry of the $\alpha$ clusters.
\subsubsection{Fermi-gas state and correlations}
In the large $\tilde b$ limit, the excited components with $\tilde k > \tilde k_F$
vanish and the system goes to the Fermi gas (FG) state with the Fermi surface at
$\tilde k_F$.
Needless to say, the FG state has neither inner correlation nor
inter-cluster correlation.
This is nothing but the uncorrelated state. On the other hand,
the correlated states are characterized by configurations with
excited $\tilde k > \tilde k_F$ components.
In the weak coupling regime,
we can recognize DW-like, Exc-like, and BCS-like states, or mixtures of them,
by the correlations in the $1p$-$1h$ and $2p$-$2h$ configurations on the Fermi surface.
\subsubsection{Diagram of structure transitions}
As discussed above, in the present schematic model for two clusters on the Fermi gas core in the 1D box,
the cluster state is expected to show the BEC-like, the DW/Exc-like, BCS-like, or FG behaviors
depending on the cluster size $\tilde b$ and the lowest allowed momentum $\tilde k_F$.
We here adopt criteria to judge the correlation type of a wave function with
given $\tilde b$ and $\tilde k_F$ values, and
show a diagram of structure transitions on the $\tilde b$-$\tilde k_F$ plane.
For the criteria, we define
$\Delta \tilde k$ as the deviation of $\tilde k$ from the lowest allowed momentum $\tilde k_F$,
\begin{equation}
(\Delta \tilde k)^2= \sum_{\tilde k\ge \tilde k_F} f^2(\tilde k) (\tilde k-\tilde k_F)^2.
\end{equation}
It indicates the deviation from the FG state.
In the case $\Delta \tilde k \ll 1$, the $\tilde k \ge \tilde k_F+2$ components are negligible
and the coefficient $\epsilon\equiv f(\tilde k_F+1)$ of the $\tilde k_F+1$ component satisfies $\epsilon \approx \Delta \tilde k$.
In the $\Delta \tilde k=0$ limit, the system goes to the FG state.
As the criteria to classify a wave function with given $\tilde b$ and $\tilde k_F$
into a correlation type, we use
the deviation $\Delta \tilde k$ and the Pauli allowedness ${\cal N}^4_{PB}(\tilde s)$
at the middle point of the box, $\tilde s= \tilde L/2 =\pi$.
In Table \ref{tab:diagram}, we list the criteria for the various correlation types.
In the table, we also show the typical examples of the $\tilde b$ values for $\tilde k_F=1$
and $\tilde k_F=2$ that
satisfy the criterion. The densities and the Pauli allowedness for the corresponding states are already
shown in Figs.~\ref{fig:kf1} and \ref{fig:kf2}.
The BEC-like state is expected to appear when the Pauli allowed region is widely open.
Then, we adopt the condition ${\cal N}^4_{PB}(\tilde s =\pi)>0.8$ for the BEC-like state.
The DW-like and/or Exc-like states may appear when the Pauli allowed region is restricted.
For this condition, we use ${\cal N}^4_{PB}(\tilde s = \pi) <0.1$.
The DW/Exc-like states may change to the BCS-like state
when the $\alpha_1$-cluster density $\rho_{\alpha_1}(x)$ approaches
$\cos^2(\tilde k_F x)/\pi$ and the Pauli-allowed area for $\alpha_2$ becomes
almost periodic. As the criterion for $\rho_{\alpha_1}(\tilde x) \approx
\cos^2(\tilde k_F \tilde x)/\pi$, we adopt the ratio of
the $\alpha_1$ density at $\tilde x=\tilde L/2=\pi$ to that at $\tilde x=0$.
In the weak coupling regime of $\Delta \tilde k \ll 1$,
the ratio $\rho^\chi_{\alpha_1}(\tilde x=\pi)/\rho^\chi_{\alpha_1}(\tilde x=0)$
is approximately given by
$1-4 \epsilon \approx 1-4\Delta \tilde k$; therefore, we use $\Delta \tilde k$ as the measure.
We apply $\Delta \tilde k>0.05$ for the DW/Exc-like states. This approximately
corresponds to the ratio $\rho^\chi_{\alpha_1}(\tilde x=\pi)/\rho^\chi_{\alpha_1}(\tilde x=0)< 0.8$.
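As a quick arithmetic check of this calibration (with an assumed value of $\epsilon$), the exact weak-coupling contrast $(1-\epsilon)^2/(1+\epsilon)^2$ is indeed close to $1-4\epsilon$:
\begin{verbatim}
eps = 0.05
print((1 - eps) ** 2 / (1 + eps) ** 2)  # 0.8190...
print(1 - 4 * eps)                      # 0.80
\end{verbatim}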
With the decrease of $\Delta \tilde k$, the $2\alpha$ wave function may gradually change to the FG state
via the BCS-like state.
Since the weight of the higher momentum components $\tilde k> \tilde k_F$
nearly equals $(\Delta \tilde k)^2$ in the weak coupling regime, we use $\Delta \tilde k$ as the measure
for structure transitions from the DW/Exc-like to the FG state, as listed in the table.
For instance, the condition $\Delta \tilde k < 0.001$ for the FG state indicates
the contamination of $\tilde k> \tilde k_F$ components is less than $10^{-6}$.
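The classification can be automated. The following minimal Python sketch (momentum cutoff assumed) evaluates $\Delta\tilde k$ and ${\cal N}^4_{PB}(\tilde L/2)$ for a given $(\tilde k_F,\tilde b)$ and applies the criteria of Table~\ref{tab:diagram}; it reproduces the examples listed there.
\begin{verbatim}
import numpy as np

def classify(kF, b, kmax=200):
    k = np.arange(kF, kmax + 1)
    f2 = np.exp(-b**2 * k**2)
    f2 /= f2.sum()                                   # f(k)^2, normalized
    dk = np.sqrt(np.sum(f2 * (k - kF) ** 2))         # deviation Delta k~
    npb4 = (1.0 - np.sum(f2 * np.cos(np.pi * k)) ** 2) ** 4  # N_PB^4(L~/2)
    if npb4 > 0.8:
        return "BEC-like"
    if npb4 >= 0.1:
        return "DW/Exc-BEC crossover"
    if dk >= 0.05:
        return "DW/Exc-like"
    if dk >= 0.01:
        return "BCS-DW/Exc crossover"
    if dk >= 0.005:
        return "BCS-like"
    if dk >= 0.001:
        return "FG-BCS crossover"
    return "FG"

for kF, b in [(1, 0.25), (1, 0.5), (1, 1.0), (1, 1.5), (1, 2.0),
              (2, 0.75), (2, 1.25), (2, 1.5), (2, 2.0)]:
    print(f"kF~ = {kF}, b~ = {b:<4} -> {classify(kF, b)}")
\end{verbatim}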
\begin{table}[htb]
\caption{\label{tab:diagram}
Criteria to classify the 1D-cluster wave function with given $\tilde b$ and $\tilde k_F$
into the various types of correlation.
The deviation $\Delta \tilde k$ and the Pauli allowedness ${\cal N}^4_{PB}(\tilde s = \tilde L/2 =\pi)$ are
used for the criteria.
Typical examples of the $\tilde b$ values for $\tilde k_F=1$ and $\tilde k_F=2$ that
satisfy each criterion are also shown.
}
\begin{center}
\begin{tabular}{cccc}
\hline
correlation & criterion & \multicolumn{2}{c}{example} \\
& & $\tilde k_F=1$ & $\tilde k_F=2$ \\
FG & $\Delta \tilde k < 0.001$ & & $\tilde b=2.0$ \\
FG-BCS crossover & $ 0.001 \le \Delta \tilde k < 0.005$ & $\tilde b=2.0$ & $\tilde b=1.5$ \\
BCS-like & $ 0.005 \le \Delta \tilde k < 0.01$ & & \\
BCS-DW/Exc crossover & $ 0.01 \le \Delta \tilde k < 0.05$ & $\tilde b=1.5$ & $\tilde b=1.25$\\
DW/Exc-like & $ 0.05 \le \Delta \tilde k$ and ${\cal N}^4_{PB}(\tilde L/2) <0.1$ &
$\tilde b=1.0$ & $\tilde b=0.75$\\
DW/Exc-BEC crossover & $0.1 \le {\cal N}^4_{PB}(\tilde L/2) < 0.8$ & $\tilde b=0.5$ & $\tilde b=0.5$\\
BEC-like & $0.8 < {\cal N}^4_{PB}(\tilde L/2)$ & $\tilde b=0.25$ & $\tilde b=0.25$ \\
\hline
\end{tabular}
\end{center}
\end{table}
The diagram of the structure transitions between
the FG state, BCS-like state, DW/Exc-like state, and
BEC-like state on the $\tilde k_F$-$\tilde b$ plane is shown in Fig.~\ref{fig:phase}.
For a system with higher $\tilde k_F$, nucleons in a cluster couple more weakly to each other
because of Pauli blocking from core nucleons, and therefore
the FG region is wider in the diagram.
However, one should note that the assumption of a sharp surface for the core
Fermi momentum might be
inadequate, in particular in the case of high $\tilde k_F$. The core Fermi surface may be diffused
in correlated states. If the surface is diffused, the lower orbitals below $\tilde k_F$ are
partially allowed for nucleons in clusters.
Then, the weakening of the cluster localization by the Pauli blocking from core nucleons
can be quenched.
The present model should be extended by incorporating the surface diffuseness of
the core Fermi surface.
We should also comment that, in a real system,
the two parameters $\tilde b$ and $\tilde k_F$ are in general not controllable independently.
As explained before, $\tilde b$ indicates the cluster size relative to the box size (the cluster density),
while $\tilde k_F$ relates to the core density. The size of clusters on the Fermi gas core
should be determined dynamically as a consequence of nuclear interactions between constituent nucleons
in clusters.
It should also be affected by the neighboring clusters, i.e., by the cluster density as well as by $\tilde k_F$.
Moreover, the ratio of cluster nucleons to the core nucleons should be determined dynamically.
The extension of the present model by taking into account
dynamical change of cluster structure with the use of effective nuclear interactions is an
important issue to be solved in future study.
\subsection{Correspondence to finite nuclei}
As shown in the AMD+VAP calculation,
three $\alpha$ clusters are formed in the ground state of $^{12}$C
even though the existence of any clusters is not {\it a priori} assumed in the framework.
Once three $\alpha$ clusters are formed, it can be associated with
the schematic 1D-cluster model.
Angular correlation between two $\alpha$ clusters around the $\alpha$ core
corresponds to the spatial correlation between two $\alpha$ clusters
in the 1D-cluster model with $\tilde k_F=1$.
The cluster size $b$ is 1.62 fm for $\nu=0.19$ fm$^{-2}$ used in the AMD calculations.
If we adopt the r.m.s. matter radius $r_\alpha=1.72$ fm of the core $\alpha$
as the radial size $r_0$, the dimensionless $\tilde b=b/r_0$ is estimated to be $\tilde b=0.94$.
Alternatively, if we use the r.m.s. radius of the cluster positions of the 3 $\alpha$s
evaluated from $r^2_0+r^2_\alpha=R_0^2$, we get $\tilde b=0.87$. In both cases, $\tilde b\sim 1$.
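For transparency, the arithmetic behind these estimates (using the radii quoted above):
\begin{verbatim}
import numpy as np
nu = 0.19                        # fm^-2
b = 1.0 / np.sqrt(2.0 * nu)      # from nu = 1/(2 b^2)   -> 1.62 fm
print(b, b / 1.72)               # b~ with r0 = r_alpha  -> 0.94
r0 = np.sqrt(2.53**2 - 1.72**2)  # from r0^2 + r_alpha^2 = R0^2
print(b / r0)                    # -> 0.87
\end{verbatim}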
As already described,
the state with $(\tilde k_F,\tilde b)=(1.0,1.0)$ in the 1D-cluster model
shows the remarkable density oscillation with the wave number three in the DW-like regime.
It is consistent with the intrinsic structure of the
$^{12}$C($0^+_1$) obtained with the AMD+VAP calculation.
Indeed, the $^{12}$C($0^+_1$) has the $(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ component
and the surface density shown in Fig.~\ref{fig:c12.theta-phi}
is similar to the oscillating density in the 1D-cluster state for $(\tilde k_F,\tilde b)=(1.0,1.0)$
in Fig.~\ref{fig:kf1}.
As discussed in Ref.~\cite{KanadaEn'yo:2011qf}, the $\alpha$ correlation in the pentagon
ground state of $^{28}$Si can be interpreted as the density wave on the $sd$-shell oblate state.
The $^{28}$Si ground state is associated with the 1D-cluster model with $\tilde k_F=2$
considering a core consisting of the spherical $^{16}$O and four nucleons in oblate orbits.
Then, the pentagon shape can be understood again by the DW-like state in the 1D-cluster model
wave function with the $2\tilde k_F+1=5$ periodicity.
In the case of the $\alpha$ correlation in the $^{16}$O ground state, the tetrahedral shape cannot
be connected directly to the 1D problem. However, when we focus only on the
$(Y_3^{-3}-Y_3^{+3})/\sqrt{2}$ component of the intrinsic density in the $^{16}$O ground state,
the density oscillation is characterized by the wave number three periodicity similar to
that of the triangle shape in $^{12}$C($0^+_1$) and the deformation feature is associated with
the DW-type correlation in the 1D cluster model as in $^{12}$C.
\section{Summary and outlook}\label{sec:summary}
We investigated
$\alpha$-cluster correlations in the ground states of $^{12}$C and $^{16}$O while
focusing on the surface density oscillation in the intrinsic states.
The intrinsic states of $^{12}$C and $^{16}$O obtained by the AMD+VAP method show
triangle and tetrahedral shapes, respectively, because of the $\alpha$ correlations.
The formation of $\alpha$ clusters in these states was confirmed in the AMD framework,
in which the existence of clusters is not {\it a priori} assumed.
The intrinsic deformations are regarded as
spontaneous symmetry breaking of the rotational invariance. It was shown that the
oscillating surface density in the triangle and tetrahedral shapes is associated with
that in DW states, caused by the instability of the Fermi surface with
respect to a kind of $1p$-$1h$ correlations.
To discuss the symmetry breaking between uniform density states and the oscillating density states,
a schematic model of a few clusters on a Fermi gas core in a one-dimensional finite box
was introduced. In the model analysis, we conjectured structure transitions from
a Fermi gas state to a DW-like state via a BCS-like state, and further to a BEC-like state,
depending on the cluster size relative to the box size.
In both analyses of the BB-cluster model and the schematic 1D-cluster model,
Pauli blocking effects are found to play an important role in the DW-like state.
The breaking of the translational invariance in the DW-like state
originates in the Pauli blocking effect between clusters, which acts as an effective
inter-cluster repulsion and restricts cluster motion.
In the present analysis with the schematic 1D-cluster model,
we performed neither energy variation nor calculation of energy eigenstates.
Moreover, the mechanism of cluster formation and the dynamical change of
the cluster structure or the cluster size are beyond the scope of the present paper.
The extension of the present model by taking into account
dynamical change of cluster structure with the use of effective nuclear interactions is an
important issue to be solved in future study.
Also the cluster and core formations as well as diffuseness of the core Fermi surface should be
studied in more realistic models.
Furthermore, the assumption that the interaction between clusters is weak and that
the Pauli blocking effect gives the major contribution to
the inter-cluster motion may be too simple.
To clarify which of the BCS-like, DW-like, or Exc-like states is realized,
it is essential to explicitly solve the problem of inter-cluster motion
by taking into account nuclear forces or inter-cluster interactions.
It would be interesting to associate the present picture for clusters in the 1D finite box
with phase transitions in infinite matter.
In the extension of the present model to the infinite-matter problem,
one should take care of the differences between finite systems and infinite systems as follows.
First, the momentum $k$ values in a finite box are discrete because of the boundary condition,
while they take continuous values in infinite matter.
In the description with discrete momenta,
long-range correlations beyond the box size $L$ are not taken into account.
What we call the BCS-type correlation in the present 1D-cluster model is
a correlation with a range of at most the box size.
Second, the total c.m.m. momentum should be projected to zero in a finite system,
while it is not necessarily zero in infinite systems.
In spite of those differences, the 1D-cluster model
may give a hint for understanding the origin of DW in infinite matter.
\begin{figure}[th]
\centerline{\epsfxsize=10 cm\epsffile{k0-fig.eps}}
\caption{(Color on-line) The results of one and two clusters on the Fermi gas core in a
one-dimension box.
Left: The coefficients $f(\tilde k)$ of the Fourier transformation.
Middle: The density $\rho^\chi_{\alpha_1}(\tilde x)$
of the first $\alpha$ located around $\tilde x=0$ (red thin lines),
the PB effect ${\cal N}^4_{PB}(\tilde s)$ for the second $\alpha$ (black solid lines).
${\cal N}_{PB}(\tilde s)$ is also shown (blue dashed lines).
Right: The density $\rho^\chi_{s}(\tilde x)$ (black solid lines). The $\tilde s$ for the center
position of $\alpha_2$ is chosen to be $\tilde s_j$ with $j=\tilde k_F$, which is the closest
$\tilde s_j$ to $\tilde x=\tilde L/2=\pi$ among the allowed $\tilde s$ values.
The density $\rho^\chi_{\alpha_1}(\tilde x)$ is also shown for comparison (red thin lines).
The results for the cluster size
$\tilde b=2.0$, 1.5, 1.25, 1.0, 0.75, 0.5, 0.25 are shown. }
\label{fig:kf1}
\end{figure}
\begin{figure}[th]
\centerline{\epsfxsize=10 cm\epsffile{k1-fig.eps}}
\caption{Same as Fig.~\ref{fig:kf1} but for $\tilde k_F=2$.
The blue dashed lines indicate
the density $\rho^\chi_{s}(\tilde x)$ for the smallest allowed $\tilde s$ value,
$\tilde s_j$ with $j=1$.
The results for the cluster size
$\tilde b=2.0$, 1.5, 1.25, 1.0, 0.75, 0.5, 0.25 are shown.
}
\label{fig:kf2}
\end{figure}
\begin{figure}[th]
\centerline{\epsfxsize=7 cm\epsffile{trans-4b.eps}
}
\caption{Diagram of structure transitions between the Fermi gas state, BCS-like state, DW/Exc-like state, and
BEC-like state in the schematic 1D-cluster model. The criteria for the boundaries are listed in
Table~\ref{tab:diagram}. The conditions ${\cal N}^4_{PB}(\tilde L/2) = 0.1$ and $0.8$ are shown by the solid lines,
while the conditions $\Delta \tilde k=0.001, 0.005, 0.01, 0.05$ are shown by the dashed lines.
}\label{fig:phase}
\end{figure}
\section*{Acknowledgments}
The authors thank the members of the nuclear theory group of the Department of Physics, Kyoto University, for
valuable discussions.
Discussions during the YIPQS long-term workshop ``DCEN2011'' held at YITP were
helpful in advancing this work.
The computational calculations of this work were performed by using the
supercomputers at YITP.
This work was supported by Grant-in-Aid for Scientific Research from Japan Society for the Promotion of Science (JSPS)
Grant Numbers 23340067 and 24740184.
It was also supported by
the Grant-in-Aid for the Global COE Program ``The Next Generation of Physics,
Spun from Universality and Emergence'' from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
\section{Introduction}
The theory of the harmonic analysis on homogeneous spaces is closely
connected with the theory of special functions. This is apparent,
for example, on the two dimensional sphere $S^2=\SO(3)/\SO(2)$,
where the harmonic analysis with respect to the action of the
orthogonal group is contained in the classical theory of the
spherical harmonics. In spherical coordinates the spherical
functions are the Legendre polynomials $P_n(\cos \theta)$. Also the
zonal spherical functions of the sphere $S^n=\SO(n+1)/\SO(n)$ are
given, in spherical coordinates, in terms of Jacobi polynomials
$P_n^{(\alpha,\alpha)}(\cos \theta)$, with $\alpha=(n-2)/2$. More
generally the zonal spherical functions on a Riemannian symmetric
space of rank one can always be expressed in terms of the classical
Gauss' hypergeometric functions; in the case of compact spaces we
get Jacobi polynomials.
As in the scalar case alluded to above, in the matrix setting we also
have these three ingredients: the theory of matrix valued spherical
functions of any $K$-type, the matrix valued hypergeometric function
and the theory of matrix valued orthogonal polynomials. In this
paper we exhibit the interplay among these concepts in the case of
the complex projective space $P_n(\CC)=\SU(n+1)/\U(n)$.
The theory of matrix valued spherical functions goes back to
\cite{T0} and \cite{GV}, based on the foundational papers of
Godement and Harish-Chandra. In \cite{GPT1} we find explicit
expressions for spherical functions, of any $K$-type associated to
complex projective plane $P_2(\CC)=\SU(3)/\U(2)$. This is
accomplished by associating to a spherical function $\Phi$ on $G$ a
vector valued function $H$ defined on a complex affine plane
$\CC^2$, whose entries are given in terms of a special class of
generalized hypergeometric functions ${}_{p+1}\!F_p$.
The matrix valued hypergeometric function was studied in \cite{T1}.
Let $V$ be a $d$-dimensional complex vector space, and let $A,B$ and
$C\in \End(V)$. The hypergeometric equation is
\begin{align}\label{hiper0}
z(1-z)F''(z) +(C-z (A+B+I))F'(z)- AB F(z)=0.
\end{align}
If the eigenvalues of $C$ are not in $-\NN_0$ we define the function
\begin{equation*}
{}_2\!F_1 \!\!\left( \begin{smallmatrix} A\,;\,B\\
C\end{smallmatrix} ; z\right)=\sum_{m=0}^\infty
\frac{z^m}{m!}(C;A;B)_m ,
\end{equation*}
where the symbol $(C;A;B)_m$ is defined inductively by
\begin{align*}
(C;A;B)_0 &=1, \\
(C;A;B)_{m+1}&=(C+m)^{-1}(A+m)(B+m)(C;A;B)_m, \quad m\geq 0.
\end{align*}
The function ${}_2\!F_1 \!\!\left( \begin{smallmatrix} A\,;\,B\\
C\end{smallmatrix} ; z\right)$ is analytic on $|z|<1$ with values in
$\End(V)$. Moreover if $F_0\in V$ then $F(z)= {}_2\!F_1 \!\!\left( \begin{smallmatrix} A\,;\,B\\
C\end{smallmatrix} ; z\right)\!F_0$ is a solution of the
hypergeometric equation \eqref{hiper0} such that $F(0)=F_0$.
Conversely, any solution $F$ analytic at $z=0$ is of this form.
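\begin{remark*}
As an illustration, which we record here only for orientation and do not use later, in the scalar case $d=1$ the inductive definition of the symbol gives $(C;A;B)_m=\frac{(A)_m(B)_m}{(C)_m}$, with $(z)_m=z(z+1)\cdots(z+m-1)$, and therefore
\begin{equation*}
{}_2\!F_1 \!\!\left( \begin{smallmatrix} A\,;\,B\\
C\end{smallmatrix} ; z\right)=\sum_{m=0}^\infty \frac{(A)_m(B)_m}{(C)_m}\,\frac{z^m}{m!}
\end{equation*}
is the classical Gauss hypergeometric series. For $d>1$ the matrices $A$, $B$ and $C$ need not commute, which is why $(C;A;B)_m$ has to be defined recursively and in general does not split into a quotient of Pochhammer symbols.
\end{remark*}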
The theory of matrix valued orthogonal polynomials, without any
consideration of differential equations, goes back to \cite{K1} and
\cite{K2}. In \cite{D}, the study of the matrix valued orthogonal
polynomials which are eigenfunctions of certain second order
differential operators was started. The first explicit examples of
such polynomials are given in \cite{GPT2}, \cite{GPT5} and
\cite{DG}.
Given a self adjoint positive definite matrix valued smooth weight
function $W=W(t)$ with finite moments, we can consider the
sesquilinear form defined for any pair of square matrix valued
polynomial functions $P(t)$ and $Q(t)$ by the numerical matrix
\[
( P,Q)= \int_{\RR}P(t) W(t)Q^*(t)dt,
\]
where $Q^*(t)$ denotes the conjugate transpose of $Q(t)$. This leads
to the existence of a sequence of matrix valued orthogonal
polynomials, that is a sequence $\{P_n(t)\}$, where $P_n$ is a
polynomial of degree $n$ with non singular leading coefficients and
$(P_n,P_m)=0$ if $n\neq m$.
We also consider the sesquilinear form
\begin{equation}
\langle P,Q\rangle=(P^*,Q^*)^*,
\end{equation}
and we say that a differential operator $D$ is symmetric with
respect to $W$ if
\begin{equation}\label{sim}
\langle DP,Q\rangle=\langle P,DQ\rangle,
\end{equation}
for all matrix valued polynomial functions $P$ and $Q$.
Let $D$ be an ordinary linear differential operator with matrix
valued polynomial coefficients of degree less or equal to the order
of derivation. If $D$ is symmetric with respect to $W$ then any
orthogonal sequence $\{P_n\}$, with respect to $(\cdot,\cdot)$,
satisfies
\begin{equation}\label{autofuncion}
DP_n^*=P_n^*\Lambda_n,
\end{equation}
for some numerical matrix $\Lambda_n$.
Assume that the weight function $W=W(t)$ is supported in the
interval $(a,b)$ and let $D$ be a second order differential operator
of the form
\begin{equation}\label{D}
D=A_2(t)\frac{d^2}{dt^2}+A_1(t)\frac{d}{dt}+A_0(t),
\end{equation}
with matrix valued polynomial coefficients $A_j(t)$ of degree less
or equal to $j$. In \cite{GPT5} (see also \cite{DG}) it is proved
that the condition of symmetry for $D$ is equivalent to the
following three differential equations
\begin{equation}\label{partes}
\begin{split}
A_2^*W &= WA_2, \\
A_1^*W &= -WA_1 + 2(WA_2)', \\
A_0^*W &= WA_0 - (WA_1)' +(WA_2)'',\\
\end{split}
\end{equation}
with the boundary conditions
\begin{equation}\label{borde}
\lim_{t\to x}W(t)A_2(t)=0=\lim_{t\to
x}\big(W(t)A_1(t)-A_1^*(t)W(t)\big),\text { for $x=a,b$}.
\end{equation}
Finding explicit solutions of these equations is a highly non
trivial task. In \cite{DG} and \cite{DG2} the authors give some
families of examples. In \cite{PT1} one finds, for each dimension, a
three parameter family of pairs $\{W,D\}$ satisfying \eqref{partes}
and \eqref{borde}. These families arise from the representation
theory of Lie groups. After the change of variable $u=1-t$, the main
result in \cite{PT1} reads:
\begin{thm}\label{D1salto} Let $\alpha,\beta >-1$, $0<k<\beta+1$ and $\ell\in \NN$.
Let $ D$ be the differential operator defined by
$$ D=u(1-u)\frac{d^2}{du^2}+(C-uU)\frac{d}{du}-V,$$
with
\begin{align*}
C& =\sum_{i=0}^\ell (\beta+1+2i)E_{ii}+\sum_{i=1}^\ell iE_{i,i-1},
\quad
U=\sum_{i=0}^\ell (\alpha+\beta+\ell+i+2) E_{ii} , \displaybreak[0]\\
V&= \sum_{i=0}^\ell i(\alpha+\beta+i-k+1)E_{ii}-
\sum_{i=0}^{\ell-1} (\ell-i)(i+\beta-k+1)E_{i,i+1}.
\end{align*}
Then the differential operator $D$ is symmetric with respect to the
weight matrix $W(u)=(1-u)^\alpha u^\beta Z(u)$ given by
$$ Z(u)=\sum_{i,j=0}^\ell\left( \sum_{r=0}^\ell
\textstyle \binom ri \binom rj \binom{\ell+k-r-1}{\ell-r}
\binom{\beta-k+r}{r} (1-u)^{\ell-r}u^{i+j}\right) E_{ij}. $$
\end{thm}
\begin{remark*}
Here, and in other parts of the paper, we use $E_{ij}$ to denote the
matrix with entry $(i,j)$ equal $1$ and $0$ otherwise.
\end{remark*}
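To make the statement concrete, we write out the first non scalar case; this illustration is added here and is not used in the sequel. For $\ell=1$ the weight is
$$ W(u)=(1-u)^\alpha u^\beta
\begin{pmatrix} k(1-u)+\beta-k+1 & (\beta-k+1)u\\ (\beta-k+1)u & (\beta-k+1)u^2 \end{pmatrix},$$
and a direct computation gives
$$\det Z(u)=k(\beta-k+1)\,u^2(1-u)>0 \qquad \text{for } 0<u<1,$$
since $0<k<\beta+1$. Together with $Z_{00}(u)>0$ this exhibits the positive definiteness of $W$ on $(0,1)$.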
This theorem is obtained from the first few steps in the explicit
determination of all matrix valued spherical functions associated to
the $n$-dimensional projective space $P_n(\CC)=\SU(n+1)/\U(n)$. The
idea, also used in \cite{GPT1}, is to cook up from a matrix valued
spherical function a function $H$ which depends on a single variable
$u$. Using that the spherical functions are eigenfunctions of the
Casimir operator of $\SU(n+1)$ we deduce that, after an appropriate
conjugation, $H$ is an eigenfunction of an ordinary linear second
order matrix valued differential operator $D$. The fact that this
operator is symmetric with respect to the weight $W$ is a
consequence of the fact that the Casimir operator is symmetric with
respect to the $L^2$-inner product between matrix valued functions
on $\SU(n+1)$. At this point some readers may find useful to consult
references \cite{GV}, \cite{T0} and \cite{PT1}.
One of the main purposes of this paper is to give explicit
expressions of a sequence of orthogonal polynomials associated to
the weight $W$ given in Theorem \ref{D1salto}. This is accomplished
by studying the vector space $V(\lambda)$ of all vector valued
polynomial solutions of the hypergeometric equation $DF-\lambda
F=0$. This space is non trivial if and only if
$$\lambda=\lambda_j(w)=-w(w+\alpha+\beta+\ell+j+1)-j(\alpha+\beta-k+1+j),$$
for some $w\in \NN_0$ and $j=0,1,\dots , \ell$. If the eigenvalues
$\lambda_j(w)$ are all different then there exists a unique
polynomial solution (up to scalars) of $DF=\lambda F$. In
Proposition \ref{dimVlambda} we compute, in the general case, the
dimension of the space $V(\lambda)$. With this knowledge at hand, we
construct a sequence of polynomials $\{P_w\}$, by choosing the
$j$-th column of $P_w$ as a particular polynomial in
$V(\lambda_j(w))$. In Theorem \ref{orthopoly} we prove that
$\{P_w\}$ is an orthogonal sequence of matrix valued polynomials
such that $DP_w^*=P_w^*\Lambda_w(D)$, where $\Lambda_w(D)$ is the
real valued diagonal matrix
$$\Lambda_w(D)= \sum_{0\leq j\leq \ell} \lambda_j(w) E_{jj}.$$
The matrix spherical functions associated to
$(G,K)=(\SU(n+1),\U(n))$ are eigenfunctions, not only of the Casimir
operator, but also of any element in the algebra $D(G)^G$ of all
differential operators in $G$ which are left and right invariant
under multiplication by elements of $G$. In this case this algebra
is a polynomial algebra in $n$ algebraically independent
generators, one of them can be taken to be the Casimir operator of
$G$. For $n=2$, in \cite{GPT1} the explicit expression of this set
of generators was given and two differential operators $D$ and $E$
which commute were obtained. For a general $n$ we do not have simple
expressions for a complete set of generators of the algebra
$D(G)^G$, beyond the Casimir operator. However in this paper we are
able to find another second order differential operator $E$, which
commutes with $D$ and such that it is symmetric with respect to $W$
(See Theorem \ref{E1salto}). The way in which we obtain this
operator is different from the one used in \cite{PT1} and it is
inspired by the operator $\tilde E$ given in \cite{RT}. Here we only
knew that such an operator should exist; after a trial and error
process we found it and proved that it is symmetric.
The sequence $\{P_w\}$ of matrix valued orthogonal polynomials constructed in
Theorem \ref{orthopoly} also satisfies $EP_w^*=P_w^*
\Lambda_w(E)$ with
\Lambda_w(E)$ with
\begin{align*}
\Lambda_w(E)= \sum_{ j=0}^\ell &
(-w(w+\alpha+\beta+\ell+j+1)(\alpha-\ell+3j)\\
&\quad -j(j+\alpha+\beta-k+1)(\alpha+2\ell+3k)) E_{jj}.
\end{align*}
We also study the algebra generated by the differential operators
$D$ and $E$. In Theorem \ref{alggenDE} we prove that it is
isomorphic to the affine algebra of the union of lines in
$\CC^2$ given by the vanishing of
$$\prod_{j=0}^\ell \left(y-(\alpha-\ell+3j)x+3j(\ell-j+k)(j+\alpha+\beta-k+1) \right).$$
Recently, in \cite{DIG} this situation is considered in the case
$\ell=2$. The authors conjecture that the algebra generated by $D$
and $E$ coincide with the algebra of all differential operators that
have the orthogonal polynomials $P_w$ as simultaneous
eigenfunctions.
\
\noindent {\bf Acknowledgement.} We would like to thank Prof. Juan
Tirao for his continuous encouragement and for many useful comments
and suggestions that helped us to improve this paper.
\section{Orthogonal polynomials associated to the pair $\{W,D\}$}\label{orthogseccion}
The aim of this section is to give explicitly a sequence of matrix valued
orthogonal polynomials associated to the weight function $W$ and the differential operator $D$
introduced in Theorem \ref{D1salto}, i.e. we construct a sequence
$\{P_w\}$ of orthogonal polynomials with respect to $W$, such that
$DP_w^*=P_w^*\Lambda_w(D)$, where $\Lambda_w(D)$ is a real diagonal
matrix.
The columns $\{P_w^j\}_{j=0,\dots \ell}$ of $P_w^*$ are
$\CC^{\ell+1}$-valued polynomials such that $DP_w^j=\lambda_j(w)
P_w^j$ and $(P_w^j,P_{w'}^{j'})=\delta_{w,w'}\delta_{j,j'} n_{w,j}$,
for some positive real number $n_{w,j}$.
\subsection{Polynomial solutions of $DF=\lambda F$}
We start studying the $\CC^{\ell+1}$-vector valued polynomial
solutions of $DF=\lambda F$. We will find all polynomials $F(u)$
such that
\begin{align}\label{hiperecuacion}
u(1-u)F''(u) +(C-u U)F'(u)- (V+\lambda) F(u)=0,
\end{align}
where the matrices $C,U,V$ are given in Theorem \ref{D1salto}. This
equation is an instance of a hypergeometric differential equation
studied in \cite{T1}. Since the eigenvalues of $C$ are not in
$-\NN_0$ the function $ F$ is characterized by $F_0=F(0)$. For
$\vert u\vert<1$ it is given by
\begin{equation}\label{hiper}
F(u)={}_2\!H_1 \!\!\left( \begin{smallmatrix} U\,;\,V+\lambda\\
C\end{smallmatrix} ; u\right)F_0=\sum_{i=0}^\infty
\frac{u^i}{i!}[C;U;V+\lambda]_i F_0,\qquad F_0\in \CC^{\ell+1},
\end{equation}
where the symbol $[C;U;V+\lambda]_i$ is defined inductively by
\begin{align*}
[C;U;V+\lambda]_0 &=1, \\
[C;U;V+\lambda]_{i+1}&=(C+i)^{-1}\left(i(U+i-1)+V+\lambda\right)[C;U;V+\lambda]_i,
\end{align*} for all $i\geq 0$.
There exists a polynomial solution of \eqref{hiperecuacion} if and
only if the coefficient $[C;U;V+\lambda]_i$ is singular for some
$i\in \NN$.
Let us assume that $[C;U;V+\lambda]_{w+1}$ is singular and that
$[C;U;V+\lambda]_w$ is not singular.
Since the matrix $(C+w)$ is invertible, we have that
$[C;U;V+\lambda]_{w+1}$ is singular if and only if
$(w(U+w-1)+V+\lambda)$ is singular. The matrix
$$M_w=(w(U+w-1)+V+\lambda)$$ is upper triangular and
$$(M_{w})_{j,j}=w(w+\alpha+\beta+\ell+j+1)+j(\alpha+\beta-k+1+j)+\lambda.$$
Therefore $[C;U;V+\lambda]_{w+1}$ is singular if and only if
\begin{equation}\label{autovalor}
\lambda=\lambda_j(w)=-w(w+\alpha+\beta+\ell+j+1)-j(\alpha+\beta-k+1+j),
\end{equation}
for some $0\leq j\leq \ell$.
We will distinguish the cases when the eigenvalues $\lambda_j(w)$
are all different (varying $j$ or $w$) or when they are repeated. We
start studying the polynomial solutions of \eqref{hiperecuacion} in
the first case.
\begin{prop} \label{unicoF0} Assume that all eigenvalues $\lambda_j(w)$ are
different. If $\lambda=\lambda_j(w)$, for some $j=0,\dots, \ell$,
then there exists a unique $F_0\in
\CC^{\ell+1}$ (up to scalars) such that $F(u)={}_2\!H_1
\left( \begin{smallmatrix} U\,;\,V+\lambda\\
C\end{smallmatrix} ; u\right)F_0$ is a polynomial function. Moreover
this polynomial is of degree $w$.
\end{prop}
\begin{proof} We have already observed that for
$\lambda=\lambda_j(w)=-w(w+\alpha+\beta+\ell+j+1)-j(\alpha+\beta-k+1+j)$,
the matrix $[C,U,V+\lambda]_{w+1}$ is singular. Then the function
$F(u)=\sum_{i=0}^\infty \frac{u^i}{i!}[C;U;V+\lambda]_i F_0$ is a
polynomial if and only if $F_0$ is a vector such that
\begin{equation}\label{F0condition}
[C,U,V+\lambda]_wF_0\in \ker (M_w);
\end{equation} where
$M_w=w(U+w-1)+V+\lambda_j(w)$. The matrix $[C,U,V+\lambda]_w$ is
invertible, hence $F_0$ is uniquely determined by an element in
the kernel of $M_w$. We have that
\begin{equation}\label{Mw}
M_w=\sum_{0\leq i\leq \ell}\bigl( (i-j)(\alpha+\beta-k+1+i+j+w)
E_{ii} - (\ell-i)(\beta-k+1+i)E_{i,i+1}\bigr).
\end{equation}
Since all eigenvalues $\lambda_j(w)$ are different we have that
$0\neq \lambda_j(w)-\lambda_i(w)=(i-j)(\alpha+\beta-k+1+i+j+w)$ if
$i\neq j$, hence the dimension of the kernel of $M_w$ is one.
Explicitly $(x_0,x_1, \dots, x_\ell)\in \ker(M_w)$ if and only if
\begin{equation}\label{kernelMw}
\begin{split}
&x_i=\textstyle (-1)^{i+j}\binom{\ell-i}{\ell-j}
\frac{(\beta-k+1+i)_{j-i}}{(\alpha+\beta+j+i+w-k+1)_{j-i}} x_j\,
\qquad \text{ for }
i=0,\dots j, \\
&x_{j+1}= x_{j+2}=\cdots =x_{\ell}=0,
\end{split}
\end{equation}
where we use $(z)_r=z(z+1)\dots (z+r-1)$, $(z)_0=1$.
Hence, up to
scalar, $F_0$ is uniquely determined by \eqref{F0condition} and it
is clear that $F(u)={}_2\!H_1 \!\!
\left( \begin{smallmatrix} U\,;\,V+\lambda\\
C\end{smallmatrix} ; u\right)F_0$ is a polynomial of degree $w$ with
leading
coefficient $\frac 1{w!}[C,U,V+\lambda_j(w)]_w F_0$. This completes
the proof of the proposition. \qed
\end{proof}
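\begin{remark*}
As a simple illustration of the proof (added here for the reader's convenience), take $\ell=1$ and $\lambda=\lambda_1(w)$. Then \eqref{kernelMw} says that $\ker(M_w)$ is spanned by
$$\left(-\frac{\beta-k+1}{\alpha+\beta+w-k+2}\,,\;1\right),$$
which is exactly the second row of the leading coefficient of $P_w$ displayed in the proof of Theorem \ref{orthopoly} below.
\end{remark*}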
\smallskip
Now we have to study the case when some eigenvalues are repeated,
that is when there exist $w,w'\in \NN_0$ and $0\leq j,j'\leq \ell$
such that $\lambda_j(w)=\lambda_{j'}(w')$. We start by observing the
following facts.
\begin{lem}\label{autovrepetido}
If $\lambda_j(w)=\lambda_{j'}(w')$ for some $w,w'\in \NN_0$ and $0\leq j,j'\leq
\ell$ then
\begin{enumerate}
\item [ i)] We have $w=w'$ if and only if $j=j'$.
\item [ ii)] If $w'>w$ then $j>j'+1$.
\end{enumerate}
\end{lem}
\begin{proof}
If $ \lambda_j(w)=\lambda_{j'}(w')$ then
\begin{align*}
(w'-w)&(\alpha+\beta+\ell+1+w+w'+j')\\
+& (j'-j)(\alpha+\beta-k+1+j+j'+w)=0.
\end{align*}
\noindent In particular if $w'=w$, we have
$(j'-j)(\alpha+\beta-k+1+j+j'+w)=0$. We observe that $j\neq j'$
implies that $\alpha+\beta-k+1+j+j'+w>0$, because $\alpha>-1$,
$\beta-k+1>0$,
$j+j'\geq 1$ and $w\geq 0$.\\
Similarly if $j'=j$ we have $(w'-w)(\alpha+\beta+\ell+1+w+w'+j)=0$.
Since $\alpha>-1$, $\beta+\ell+1>0$ and $w+w'+j\geq 1$ we obtain
that $(\alpha+\beta+\ell+1+w+w'+j)>0$ and therefore $w=w'$. This
completes the proof of i).
For ii) we start from
$$(w'-w)(\alpha+\beta+\ell+1+w+w'+j')= (j-j')(\alpha+\beta-k+1+j+j'+w),$$
and we observe that the left hand side of this identity, as well as
the factor $(\alpha+\beta-k+1+j+j'+w)$ are positive numbers, by
hypothesis, then we have $j>j'$. Finally suppose that $j=j'+1$ then
$(w'-w)(\alpha+\beta+\ell+w+w'+j)= (\alpha+\beta-k+w+2j),$
equivalently
$$(w'-w-1)(\alpha+\beta+\ell+w+w'+j)=-(w' + \ell-j+k).$$
The left hand side is non negative while the right hand side is
negative because $k>0$, which is a contradiction.
\qed
\end{proof}
\
Let $V(\lambda)$ be the vector space of all
$\CC^{\ell+1}$-vector valued polynomials such that $DP=\lambda P$.
We observe that Proposition \ref{unicoF0} said that if the
eigenvalues $\lambda=\lambda_j(w)$ are all different the dimension
of $V(\lambda)$ is one.
The next proposition generalizes this result to the
case when the eigenvalues $\lambda_j(w)$ are repeated.
\begin{prop}\label{dimVlambda} Let $\alpha,\beta>-1$, $0<k<\beta+1$ and let
$\lambda=\lambda_j(w)$, for some $w\in \NN_0$. Then
\begin{equation}\label{dimension}
\begin{split}
\dim &\{P\in V(\lambda): \deg P\leq w\}\\
&=\text{ card } \{ w': 0\leq w'\leq w\, , \,
\lambda=\lambda_{j'}(w'), \text{ for some } 0\leq j'\leq \ell\}.
\end{split}
\end{equation}
In particular
$$\dim V(\lambda)=\text{card }\{(w,j): \lambda=\lambda_j(w)\}.$$
\end{prop}
\begin{proof} We have already observed that for
$\lambda=\lambda_j(w)$ the function $F=F(u)$ is a polynomial
solution of $DF=\lambda F$ if and only if
$F(u)={}_2\!H_1(C,U,V+\lambda)F_0$ with $F_0\in \CC^{\ell+1}$ such
that $[C,U,V+\lambda]_wF_0\in \ker (M_{w,j}),$ where
$$M_{w,j}=\sum_{0\leq i\leq \ell}\bigl( (i-j)(\alpha+\beta-k+1+i+j+w)
E_{ii} - (\ell-i)(\beta-k+1+i)E_{i,i+1}\bigr)$$
We have that
$(i-j)(\alpha+\beta-k+1+i+j+w)\neq 0 $ if $i\neq j$. Hence the
dimension of $\ker( M_{w,j})$ is one. Moreover it is generated by
$(x_0, \dots , x_\ell)\in \CC^{\ell+1}$ such that
\begin{equation}\label{kernelMw2}
\begin{split}
&x_i=\textstyle (-1)^{i+j}\binom{\ell-i}{\ell-j}
\frac{(\beta-k+1+i)_{j-i}}{(\alpha+\beta+j+i+w-k+1)_{j-i}} \, \qquad
\text{ for }i=0,\dots j-1, \\
&x_j=1\\
&x_{j+1}= x_{j+2}=\cdots =x_{\ell}=0,
\end{split}
\end{equation}
where we use $(z)_r=z(z+1)\dots (z+r-1)$, $(z)_0=1$.
If the eigenvalue $\lambda$ is repeated $s$ times and
$w_1=\min\{w\in \NN_0:\lambda=\lambda_j(w), 0\leq j\leq \ell\}$,
using Lemma \ref{autovrepetido}, we can assume that
$$\lambda=\lambda_{j_1}(w_1)=\cdots =\lambda_{j_s}(w_s)$$ with
$w_1<w_2<\cdots <w_s$ and $j_1>j_{2}+1$, $j_2>j_{3}+1, \dots
,j_{s-1}>j_s+1$.
\smallskip
For $w=w_1$ and $j=j_1$ the matrix $[C,U,V+\lambda]_{w_1}$ is
invertible and $F_0$ is uniquely determined by an element in $\ker
(M_{w_1,j_1})$, which is one dimensional, thus proving
\eqref{dimension} in this case.
Then to prove the proposition for any $w_r$ we proceed by induction
on $1\leq r\leq s$. Thus let us assume that for $2\leq r\leq s$ we
know that
$$\dim\{P\in V(\lambda): \deg P\leq w_{r-1}\}=r-1.$$
Let $M_r=M_{w_r,j_r}$. As we remarked $0\neq P\in V(\lambda)$ is of
degree $w_r$ if and only if $P_0=P(0)$ satisfies $0\neq
[C,U,V+\lambda]_{w_r}P_0\in \ker (M_{r})$.
Let
$$[C,U,V+\lambda]_{w_r}=N_{r}M_{{r-1}}\dots N_1M_{1}N_0,$$
where $N_i$ are invertible matrices. The leading coefficient $P_r$
of such a $P$ is uniquely determined, up to scalar, by the condition
$$M_rN_rM_{{r-1}}\dots N_1M_{1}N_0 P_0=0,$$ because we may assume that
$$P_r=N_rM_{{r-1}}\dots N_1M_{1}N_0 P_0=(x_0, \dots,
x_{j_r-1},1,0,\dots, 0).$$
Now let us prove that there exists $\tilde P\in V(\lambda)$ of
degree $w_r$, by constructing one by downward induction.
Let $v_r=(x_0, \dots, x_{j_r-1},1,0,\dots, 0)\in \ker(M_r)$ and let
$b_r=N_r^{-1}v_r$. The equation $b_r=M_{r-1}v_{r-1}$ has a unique
solution $v_{r-1}$ of the form $v_{r-1}=(z_0, \dots,
z_{j_r+1},0,\dots, 0)$ because $b_r=(y_0, \dots, y_{j_r+1},0,\dots,
0)$ with $y_{j_r+1}\neq 0$ and $M_{r-1}$ is upper triangular with a
unique zero in the main diagonal in the $j_{r-1}$-position.
Similarly let $b_{r-1}=N_{r-1}^{-1}v_{r-1}$, then there exists a
unique $v_{r-2}=(t_0, \dots, t_{j_r+2},0,\dots 0)$ such that
$M_{r-2}v_{r-2}=b_{r-1}$. In this way we construct the sequence
$v_r, v_{r-1}, \dots, v_0$ such that
\begin{align*}
v_r &= N_rb_r= N_rM_{r-1}v_{r-1}= N_rM_{r-1}N_{r-1}M_{r-2}v_{r-2}=
\cdots \\
&= N_rM_{r-1}\dots N_1M_1N_0 v_0.
\end{align*}
Hence $\tilde P={}_2\!H_1(C,U,V+\lambda)v_r$ is a polynomial in
$V(\lambda)$ of degree $w_r$.
Now we observe that
$$\{P\in V(\lambda): \deg P\leq w_r\}= \CC\tilde P \oplus
\{P\in V(\lambda): \deg P\leq w_{r-1}\}.$$ In fact it is clear that
the right hand side is a direct sum contained in the left hand side.
To prove the other inclusion we first observe that if $P\in
V(\lambda)$ and $\deg P<w_r$ then, as we saw, $\deg P\leq w_{r-1}$.
If $P\in V(\lambda)$ is of degree $w_r$ then the leading coefficient
of $P$ is equal to the leading coefficient of $t\tilde P$ for some
$t\in \CC$. Therefore $P-t\tilde P\in \{P\in V(\lambda): \deg P\leq
w_{r-1}\}$. This completes the proof of the proposition.\qed
\end{proof}
\subsection{Matrix valued orthogonal polynomials associated to
$\{W,D\}$.}\label{orthogsubseccion}
We want to construct a
sequence $\{P_w\}_{w\geq 0}$ of matrix valued orthogonal
polynomials with respect to the weight function $W$, with degree of
$P_w$ equal to $w$, with non singular leading coefficient and that
satisfies $DP_w^*=P_w^*\Lambda_w(D)$, where $\Lambda_w(D)$ is a real
diagonal matrix.
Then the columns $\{P_w^j\}_{j=0,\dots \ell}$ of $P_w^*$ are
$\CC^{\ell+1}$-valued polynomials such that $P_w^j$ and
$P_{w'}^{j'}$ are orthogonal to each other if $(j,w)\neq (j',w')$
and they satisfy that $DP_w^j=\lambda_j(w) P_w^j$, where
$$\lambda_j(w)=-w(w+\alpha+\beta+\ell+j+1)-j(j+\alpha+\beta-k+1),$$
for each $w\in\NN_0$, and $j=0,\dots,\ell$.
If an eigenvalue $\lambda=\lambda_j(w)$ is not repeated, then we
choose the unique $F_0\in \CC^{\ell+1}$ such that
\begin{equation}\label{F0}
[C,U,V+\lambda_j(w)]_w F_0=\sum_{0\leq i\leq j}\textstyle
(-1)^{i+j}\binom{\ell-i}{\ell-j}
\frac{(\beta-k+1+i)_{j-i}}{(\alpha+\beta+j+i+w-k+1)_{j-i}}\, e_i
\end{equation}
where $e_i$ denotes
the $i$-th vector in the canonical basis of $\RR^{\ell+1}$. Then we
take
$$P_w^j(u)={}_2\!H_1 \left( \begin{smallmatrix} U\,;\,V+\lambda_j(w)\\
C\end{smallmatrix} ; u\right)F_0=\sum_{i=0}^\infty
\frac{u^i}{i!}[C;U;V+\lambda_j(w)]_i F_0$$ which is a polynomial
function of degree $w$ and satisfies $$DP_w^j(u)=\lambda_j(w)
P_w^j(u).$$ (See Proposition \ref{unicoF0}).
If an eigenvalue $\lambda=\lambda_j(w)$ is repeated we saw that,
$$\lambda= \lambda_{j_1}(w_1)= \lambda_{j_2}(w_2)=\cdots =
\lambda_{j_s}(w_s),$$ with $w_1<w_2<\cdots <w_s$ and $j_r\geq
j_{r+1}+1$, for $1\leq r\leq s-1$.
Let $V_r=\{ P\in V(\lambda): \deg P\leq w_r\}$, for $1\leq r\leq s$.
Then we saw, in the proof of Proposition \ref{dimVlambda}, that
$$0\neq V_1\subsetneq V_2\subsetneq \cdots \subsetneq V_s$$
with $\dim V_s=s$. Now we take, for each $1\leq r\leq s$,
$$0\neq P_{w_r}^{j_r}(u)={}_2\!H_1 \left( \begin{smallmatrix} U\,;\,V+\lambda_j(w)\\
C\end{smallmatrix} ; u\right)F_{0}^{j_r} \in V_r \text{ orthogonal
to }V_{r-1}. $$
In this way, for each $w\in \NN_0$ we have defined $\ell+1$
orthogonal polynomial functions $P_w^0, P_w^1, \dots , P_w^\ell$ of
degree $w$.
\begin{thm}\label{orthopoly}
Let $P_w(u)$ be the matrix whose rows are the vectors
$P_w^j(u)$. Then the sequence $\{P_w(u)\}_{w\in \NN_0}$ is an
orthogonal sequence of matrix valued polynomials such that
$$DP_w^*(u)=P_w^*(u)\Lambda_w,$$ where $\Lambda_w=\sum_{j=0}^\ell \lambda_j(w) E_{jj}$.
\end{thm}
\begin{proof}
Let $(w,j)\neq (w',j')$. If $\lambda_j(w)\neq \lambda_{j'}(w')$ then
$(P_w^j,P_{w'}^{j'})=0$ because $D$ is symmetric. If
$\lambda_j(w)=\lambda_{j'}(w')$ then $(P_w^j,P_{w'}^{j'})=0$ by
construction. Therefore the matrices $P_w$ satisfy $(P_w,
P_{w'})=0$ if $w\neq w'$.
On the other hand we have that for each $w=0,1,2\dots $ the degree
of $P_w(u)$ is $w$ and the leading coefficient of $P_w$ is the non
singular triangular matrix
$$I+\sum_{ s< r}\textstyle (-1)^{r+s}\binom{\ell-s}{\ell-r}
\frac{(\beta-k+1+s)_{r-s}}{(\alpha+\beta+r+s+w-k+1)_{r-s}} E_{rs}.$$
This completes the proof of the theorem. \qed
\end{proof}
\section{The symmetry of the differential operator $E$}\label{seccionE}
The aim of this section is to exhibit another second order ordinary
differential operator which is symmetric with respect to the weight
$W$.
\begin{thm} \label{E1salto} Let $\alpha, \beta>-1$, $0<k<\beta+1$ and $\ell\in \NN$.
Let $ E$ be the differential operator defined by
$$ E=(1-u)(Q_0+uQ_1)\frac{d^2}{du^2}+(P_0+uP_1)\frac{d}{du}-(\alpha+2\ell+3k)V,$$
with
\begin{align*}
Q_0&=\textstyle{\sum_{i=0}^\ell 3iE_{i,i-1}},\displaybreak[0]\\
Q_1&=\textstyle{\sum_{i=0}^\ell (\alpha-\ell+3i)E_{ii}},\displaybreak[0]\\
P_0&=\textstyle{\sum_{i=0}^\ell \big( (\alpha+2\ell)(\beta+1+2i)
-3k(\ell-i)-3i(\beta-k+i) \big)E_{ii}}\\
&\textstyle{\quad-\sum_{i=0}^\ell i(3i+3\beta-3k+3+\ell+2\alpha)E_{i,i-1}},\displaybreak[0]\\
P_1&=\textstyle{\sum_{i=0}^\ell
-(\alpha-\ell+3i)(\alpha+\beta+\ell+i+2)E_{ii}}\\
&\textstyle{\quad +\sum_{i=0}^\ell 3(\beta-k+1+i)(\ell-i)E_{i,i+1}},\displaybreak[0]\\
V&= \textstyle\sum_{i=0}^\ell i(\alpha+\beta-k+1+i)E_{ii}-
\sum_{i=0}^{\ell-1} (\ell-i)(\beta-k+1+i)E_{i,i+1}.
\end{align*}
Then $E$ is symmetric with respect to the
weight matrix $W(u)=(1-u)^\alpha u^\beta Z(u)$, where $Z(u)$ is given by
$$ Z(u)=\sum_{i,j=0}^\ell\left( \sum_{r=0}^\ell
\textstyle \binom ri \binom rj
\textstyle\binom{\ell+k-1-r}{\ell-r}\binom{\beta-k+r}{r}
(1-u)^{\ell-r}u^{i+j}\right) E_{ij}. $$
\end{thm}
\begin{proof}
We need to prove that the equations \eqref{partes} and \eqref{borde}
are satisfied. The equations in \eqref{partes} take the form
\begin{align}
\label{eqI} &(Q_0^*+uQ_1^*)Z-Z(Q_0+uQ_1)=0,\displaybreak[0]\\
\label{eqII}\begin{split} &(P_0^*+uP_1^*)Z+Z(P_0+uP_1)- 2Z( Q_1-Q_0-2uZQ_1)\\ &\qquad
- 2(1-u)Z'(Q_0+uQ_1)-\tfrac{(\beta(1-u)-\alpha
u)}{u}2Z(Q_0+uQ_1)=0,
\end{split} \displaybreak[0]\\
\label{eqIII} \begin{split}&
P_1^*Z+(P_0^*+uP_1^*)Z'-Z'(P_0+uP_1)-ZP_1\\
&\qquad +(\tfrac{\beta}{u}-\tfrac{\alpha}{1-u})\bigl(
(P_0^*+uP_1^*)Z-Z(P_0+uP_1)\bigr)\\
&\qquad -2(\alpha+2\ell+3k)(ZV-V^*Z)=0.
\end{split}
\end{align}
The $ij$-entry in the left hand side of the equation \eqref{eqI} is
$$u(\alpha-\ell+3i)z_{ij}+3(i+1)z_{i+1,j} -
u(\alpha-\ell+3j)z_{ij}-3(j+1)z_{i,j+1}=0,$$
because it is easy to verify that
$$(i+1)z_{i+1,j}-(j+1)z_{i,j+1}=u(j-i)z_{ij}.$$
In order to prove the identity \eqref{eqII} we compute the
$ij$-entry of the matrices involved there:
\begin{align*}
&((P_0^*+uP_1^*)Z)_{ij}=u^{i+j}\sum_{r=\max(i,j)}^\ell\textstyle
\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}(1-u)^{\ell-r}
\Big( (P_0)_{ii}\\ &\quad -
(r-i)(3i+3\beta+\ell+2\alpha-3k+6)-(\alpha-\ell+3i)(\alpha+\beta+\ell+i+2) \Big)\displaybreak[0]\\
& +u^{i+j}\sum_{r=\max(i-1,j-1)}^\ell \textstyle
\binom{r+1}{i}\binom{r+1}{j}\binom{\beta+r}{r+1}\binom{\ell+k-2-r}{\ell-r-1}(1-u)^{\ell-r}\\
&\, \Big(
(r-i+1)(3i+3\beta+\ell+2\alpha-3k+6)+(\alpha-\ell+3i)(\alpha+\beta+\ell+i+2))\Big)\displaybreak[0]\\
& +u^{i+j}\!\!\!\!\sum_{r=\max(i-1,j)}^\ell \textstyle
\binom{r}{i-1}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}(1-u)^{\ell-r}3(\ell-i+1)(\beta+i-k),
\end{align*}
\begin{align*}
&(Z(P_0+uP_1))_{ij}=u^{i+j}\sum_{r} \textstyle
\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}(1-u)^{\ell-r} \Big( (P_0)_{jj}\\
&\quad -(r-j)(3j+3\beta+\ell+2\alpha-3k+6)-(\alpha-\ell+3j)(\alpha+\beta+\ell+j+2) \Big)\displaybreak[0]\\
&+ u^{i+j}\sum_{r} \textstyle
\binom{r+1}{i}\binom{r+1}{j}\binom{\beta+r}{r+1}\binom{\ell+k-2-r}{\ell-r-1}(1-u)^{\ell-r}\\
& \big( (r-j+1)(3j+3\beta+\ell+2\alpha-3k+6)+(\alpha-\ell+3j)(\alpha+\beta+\ell+j+2) \big)\displaybreak[0]\\
&+u^{i+j}\sum_{r}\textstyle
\binom{r}{i}\binom{r}{j-1}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}(1-u)^{\ell-r}3(\ell-j+1)(\beta+j-k),
\end{align*}
\begin{align*}
&\left(Z(Q_1-Q_0-2uQ_1)\right)_{ij}\displaybreak[0]\\ &
=u^{i+j}\sum_{r} \textstyle
\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}(1-u)^{\ell-r}(-\alpha+\ell-3r)\displaybreak[0]\\
&\quad + u^{i+j}\sum_{r} \textstyle
\binom{r+1}{i}\binom{r+1}{j}\binom{\beta+r}{r+1}\binom{\ell+k-2-r}{\ell-r-1}(1-u)^{\ell-r}(3r+3j+2\alpha-2\ell+3),
\end{align*}
\begin{align*}
&\left(\textstyle{\frac{(\beta(1-u)-\alpha
u)}{u}}Z(Q_0+uQ_1)\right)_{ij}\\& \quad =u^{i+j}\sum_{r} \textstyle
\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}
(1-u)^{\ell-r} \alpha(\ell-\alpha-3r)\displaybreak[0]
\\& \quad + u^{i+j}\sum_{r}\textstyle
\binom{r+1}{i}\binom{r+1}{j}\binom{\beta+r}{r+1}
\binom{\ell+k-2-r}{\ell-r+1}(1-u)^{\ell-r}(\alpha+\beta)(3r+\alpha-\ell+3),
\end{align*}
\begin{align*}
&((1-u)Z'(Q_0+uQ_1))_{i,j}\displaybreak[0]\\
& =u^{i+j}\sum_{r} \textstyle
\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-2-r}{\ell-r+1}
(1-u)^{\ell-r}(r-\ell)(\alpha-\ell+3r)\displaybreak[0] \\
& +u^{i+j}\sum_{r}\textstyle
\binom{r+1}{i}\binom{r+1}{j}\binom{\beta+r}{r+1}\binom{\ell+k-2-r}{\ell-r+1}(1-u)^{\ell-r}\Big(3(r-j+1)(\ell-r+i+j)
\displaybreak[0]\\
&\quad \quad \quad \quad +(\alpha-\ell+3j)(\ell-r+i+j-1)\Big).
\end{align*}
By using the previous results we get that the identity \eqref{eqII}
is equivalent to
\begin{align*}
&\sum_{r=j}^{\ell}\!\!\textstyle
\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}(1-u)^{\ell-r}
\textstyle{\frac{
3(r+1)(r+\beta-k+1)(\ell-r)(2r+2-i-j)}{(r-i+1)(r-j+1)}}\displaybreak[0]\\
&+(1-u)^{\ell-j+1}\textstyle\binom{j}{i}\binom{\beta+j-1}{j}
\binom{\ell+k-1-j}{\ell-j}3(\ell-j+k)(j-i) \displaybreak[0] \\
&-\!\!\sum_{r=j-1}^{\ell-1}\!\!\textstyle
\binom{r+1}{i}\binom{r+1}{j}\binom{\beta+r}{r+1}\binom{\ell+k-2-r}{\ell-r-1}(1-u)^{\ell-r}
3(\ell-r+k-1)(2r+2-i-j)\\ &=0,
\end{align*}
which easily follows.
In order to prove the identity \eqref{eqIII} we compute
\begin{align*}
&(ZV-V^*Z)_{ij}\\
&=(i-j)u^{i+j-1}\Big(-\sum_{r}\textstyle{\binom{r}{i}
\binom{r}{j}\binom{\beta+r-1}{r}}\binom{\ell+k-1-r}{\ell-r}
(1-u)^{\ell-r}(\alpha+\ell-r+1)\\
&+\sum_{r}\textstyle{
\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}}(1-u)^{\ell-r+1}
(\alpha+\beta+i+j+\ell-r+1)\Big),
\end{align*}
\begin{align*}
&(P_1^*Z-ZP_1)_{ij}=(i-j)u^{i+j-1}\Big(\sum_{r}
\textstyle{\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}}\\
& \qquad (1-u)^{\ell-r+1}(3i+4\alpha+3\beta+3k+6-3r+5\ell+3j)\displaybreak[0]\\
& -\sum_{r}
\textstyle{\binom{r}{i}\binom{r}{j}\binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}}
(1-u)^{\ell-r}(4\alpha+5\ell+3k+6-3r)\Big),
\end{align*}
\begin{align*}
&\Big(\big( \frac{\beta(1-u)-\alpha u}{u(1-u)}\big)((P_0^*+uP_1^*)Z-Z(P_0+uP_1))+ (P_0^*+uP_1^*)Z'\displaybreak[0]\\
&-Z'(P_0+uP_1)\Big)_{i,j} =u^{i+j-1}(i-j)\sum _{r}\textstyle{\binom
ri\binom rj \binom{\beta+r-1}{r}\binom{\ell+k-1-r}{\ell-r}}
(1-u)^{\ell-r}\displaybreak[0]\\
&\quad \quad\quad \big(2( \alpha+2\ell+3k )(
\alpha+\ell-r+2)+2\alpha+\ell+6-3k-3r\big)\displaybreak[0]\\
&-u^{i+j-1}(i-j)\sum _{r}\textstyle\binom ri \binom r
j\binom{\beta+r-1}{r}
\binom{\ell+k-1-r}{\ell-r}(1-u)^{\ell-r+1}\big(\displaybreak[0]\\
&
2(\alpha+2\ell+3k)(\alpha+\beta+\ell-r+i+j+3)+3(i+j+\beta-3k+2-r-\ell)
\big ).
\end{align*}
Now it is easy to verify that \eqref{eqIII} is satisfied.
Finally the boundary conditions \eqref{borde} can be easily checked,
and this concludes the proof of the theorem. \qed
\end{proof}
\section{The algebra of differential operators}\label{seccionalgebra}
Most of the results of this section are due to J. Tirao and they are
taken from \cite{T2}.
Let $W=W(x)$ be an $L\times L$ matrix weight function with finite
moments and let $\{P_n\}$ be any sequence of matrix valued
orthogonal polynomials associated to a weight function $W$.
Let $$V_n=\{F\in M_{L\times L}(\CC)[x]:\deg(F)\leq n \}$$ be the
set of all matrix valued polynomials in the variable $x$ of degree
less or equal to
$n$.
\begin{prop}\label{decomposition}
We have the following decomposition of $V_n$
$$V_n=\bigoplus_{j=0}^n P_j^*M_{L\times L}(\CC) .$$
\end{prop}
\begin{proof} It is clear that $\sum_{j=0}^n P_j^* M_{L\times L}(\CC) $ is a
subspace of $V_n$ and that for $n=0$ they are the same.
Let us denote by $M_n$ the leading coefficient of $P_n^*$.
If $H=A_nx^n+A_{n-1}x^{n-1}+\cdots + A_0$ is a polynomial in
$V_n$ then $H-P_n^*M_n^{-1}A_n$ is a
polynomial of degree $\leq n-1$. Thus, by induction in $n$ we
obtain that $H\in\sum_{j=0}^n P_j^*M_{L\times L}(\CC)$.
In order to prove that this sum is a direct sum we assume that
$P_0^*A_0^*+\cdots +P_n^*A_n^*=0$. By comparing, inductively the
coefficients of $x^n, x^{n-1}, \dots x^0$ we obtain that $A_n=
\cdots = A_0=0$. \qed
\end{proof}
\medskip
Let $\mathcal D$ be the algebra of all differential operators of
the form
\begin{equation}\label{operadorordens}
D= F_s(x) \frac{d^s}{dx^s}+F_{s-1}(x)
\frac{d^{s-1}}{dx^{s-1}}+\cdots + F_1(x) \frac{d}{dx}+ F_0(x)
\end{equation}
with $F_j$ a polynomial function of degree less or equal to $j$.
\begin{thm}\label{Vn}
Let $\{P_n\}$ be any sequence of matrix valued orthogonal
polynomials associated to $W$.
If $D\in \mathcal D$ is symmetric with respect to
$W$ then $DP_n^*=P_n^*\Lambda_n$, for some matrix $\Lambda_n$.
\end{thm}
\begin{remark*}
We recall that $D$ is symmetric with respect to $W$ if $\langle DP,
Q\rangle=\langle P, DQ\rangle $, for all $P,Q$ polynomials. The
sequence $\{P_n\}$ is orthogonal with respect to $(\, , )$. The
bilinear forms $\langle \, , \rangle$ and $(\,,)$ are related by
$\langle P,Q\rangle= (P^*,Q^*)^*$.
\end{remark*}
\begin{proof}
Since $D\in \mathcal D$ the operator $D$ preserves the vector
spaces $V_n$, for each $n\geq 0$.
For $n=0$ we have that $DP_0^*\in V_0$, thus
$DP_0^*=P_0^*\Lambda_0$. By induction we assume that
$DP_j^*=P_j^*\Lambda_j$, for each $0\leq j\leq n-1$. By Proposition
\ref{decomposition} we have that $DP_n^*=\sum_{i=0}^n P_i^*A_i$.
Thus, for each $0\leq j\leq n-1$ we have
$$\langle DP_n^*, P_j^*\rangle \textstyle
=\sum_{i=0}^n\langle P_i^*A_i, P_j^*\rangle= \sum_{i=0}^n
(P_i,P_j)^*A_i= (P_j,P_j)^*A_j.$$ On the other hand, since $D$ is
symmetric we obtain
$$\langle DP_n^*, P_j^*\rangle = \langle P_n^*, DP_j^*\rangle =
\langle P_n^*, P_j^*\Lambda_j\rangle = \left((P_n,
P_j)\Lambda_j\right)^* =0.$$ Thus $(P_j,P_j)^*A_j=0$ for each
$0\leq j\leq n-1$, which implies that $A_j=0$ because the matrix
$(P_j,P_j)$ is non singular. Therefore $DP_n^*=P_n^*\Lambda_n$ and
this concludes the proof. \qed
\end{proof}
\smallskip
Given $\{P_n\}$ any sequence of matrix valued orthogonal
polynomials associated to the weight $W$, we define
\begin{equation}\label{algebraDW}
\mathcal D(W)=\{D\in \mathcal D: DP_n^*=P_n^* \Lambda_n(D),\forall
n\geq 0, \text{ for some matrix } \Lambda_n(D)\}.
\end{equation}
\begin{prop}\label{propDW} We have
\begin{enumerate}
\item $\mathcal {D} (W)$ is a subalgebra of
$\mathcal D$ which does not depend on the sequence $\{P_n\}$.
\item For each $n\in \NN_0$, the function $\Lambda_n:\mathcal
D(W)\longrightarrow M_{L\times L}( \CC)$
given by $D\mapsto \Lambda_n(D)$
is a representation of the algebra $\mathcal D(W)$.
\item The family $\{\Lambda_n\}_{n\geq 0}$ separates points of $\mathcal
D(W)$. That is, if $D_1$ and $D_2$ are distinct points of $\mathcal
D(W)$, then there exists $n_0\geq0$ such that
$\Lambda_{n_0}(D_1)\neq\Lambda_{n_0}(D_2)$.
\end{enumerate}
\end{prop}
\begin{proof}
It is easy to verify that $\mathcal D(W)$ is a subalgebra of $\mathcal{D}$.
To prove that it is independent of the sequence $\{P_n\}$ we take
another sequence of orthogonal polynomials $\{Q_n\}$. Then
$Q_n=A_nP_n$, for some non singular matrix $A_n$. Then we have
$DQ_n^*=DP_n^*A_n^*= P_n^*\Lambda_n(D) A_n^*= Q_n^* \Upsilon_n(D),$
where $\Upsilon_n(D)=(A_n^*)^{-1}\Lambda_n(D) A_n^*$.
If $D_1$ and $D_2$ are in $\mathcal D(W)$ then $$D_1D_2 P_n^*=D_1(P_n^*
\Lambda_n(D_2))=P_n^* \Lambda_n(D_1)\Lambda_n(D_2).$$ Hence
$\Lambda_n(D_1D_2)=\Lambda_n(D_1)\Lambda_n(D_2)$.
Let us assume that there exists $D\in \mathcal D(W)$ such that
$\Lambda_n(D)=0$ for all $n\geq0$. To prove (3) we have to verify
that $D=0$. By hypothesis we have that $D=\sum_{i=0}^s F_i(x)
\frac{d^i}{dx^i}$
satisfies $DP_n^*=0$, for all $n\geq 0$. For $n=0$ we
obtain $F_0P_0^*=0$, thus $F_0=0$. \\
By induction, we may assume that $F_i=0$ for $0\leq i\leq j-1$, with
$j\leq s$. Then $0=DP_j^*=\sum_{i=j}^s
F_i(x)\frac{d^i(P_j^*)}{dx^i}=F_j(x)\,j!\,M_j$, where $M_j$ is the
leading coefficient of $P_j^*$, which is non singular. Therefore
$F_j=0$. This concludes the proof. \qed
\end{proof}
\begin{cor}\label{autovconmutan} The operators $D_1$ and $D_2$ in the algebra $\mathcal
D(W)$ commute if and only if the matrices $\Lambda_n(D_1)$
and $\Lambda_n(D_2)$ commute for all $n\in \NN_0$.
\end{cor}
\begin{proof}
By Proposition \ref{propDW}, (3) we have that $D_1D_2=D_2D_1$ if and only if
$\Lambda_n(D_1D_2)=\Lambda_n(D_2D_1)$ for all $n$. From Proposition \ref{propDW}, (2) we get
$\Lambda_n(D_1D_2)=\Lambda_n(D_1)\Lambda_n(D_2)$. \qed
\end{proof}
\smallskip
\begin{prop}\label{Pro2} Let $\{Q_n\}$ be the sequence of
monic orthogonal polynomials. Let $D=\sum_{i=0}^s
F_i(u)\frac{d^i}{du^i}$ such that $DQ_n^*= Q_n^*\Gamma_n.$ Then
\begin{equation}\label{lambda}
\Gamma_n=\sum_{0\leq i\leq s }[n]_i A_i^i\quad\quad \text{for
all}\quad n\ge0,
\end{equation}
where $A_i^i$ is the coefficient of $x^i$ in the polynomial $F_i$.
\end{prop}
\begin{remark*}
Here we are using the notation $[n]_i=n(n-1)\dots (n-i+1)$ for
$n\geq 1$, and $[n]_0=1$, for $n\geq 0$.
\end{remark*}
\begin{proof}
From
$$ \sum_{0\leq i\leq s}\textstyle F_i(u)\frac{d^i Q_n^*}{du^i}(u)=Q_n^*(u)\Gamma_n ,$$
by comparing the monomials of degree $n$ we get $\sum_{0\leq i\leq s
} [n]_i A_i^i=\Gamma_n.$ \qed
\end{proof}
\begin{remark}
Observe that in particular, Proposition \ref{Pro2} implies that the eigenvalue
$\Gamma_n$ is a polynomial function of $n$ of degree less or equal
to $\deg(D)$.
\end{remark}
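\begin{remark*}
For a second order operator $D=F_2(u)\frac{d^2}{du^2}+F_1(u)\frac{d}{du}+F_0$ in $\mathcal D$, formula \eqref{lambda} reads $\Gamma_n=n(n-1)A_2^2+nA_1^1+A_0^0$. As a consistency check, which we include here, for the operator $D$ of Theorem \ref{D1salto} we have $F_2(u)=u(1-u)I$, $F_1(u)=C-uU$ and $F_0=-V$, hence $A_2^2=-I$, $A_1^1=-U$, $A_0^0=-V$ and
$$\Gamma_n(D)=-n(n-1)I-nU-V=-n(U+n-1)-V,$$
which is the expression used in the proof of Theorem \ref{comnutan} below.
\end{remark*}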
\section{The operator $E$}
\subsection{$D$ and $E$ commute}\label{seccDEconmutan} In this
subsection we use the results described in Section
\ref{seccionalgebra} to give an elegant proof of the fact that the
operators $D$ and $E$ commute. Of course we can also verify this by
explicit computation.
\begin{thm}\label{comnutan}
The differential operators $D$ and $E$, introduced respectively in
Theorems \ref{D1salto} and \ref{E1salto}, commute.
\end{thm}
\begin{proof}
From Theorem \ref{E1salto} the operator $E$ is symmetric with
respect to the weight $W$. Thus $E$ belongs to the algebra
$\mathcal{D}(W)$ defined in \eqref{algebraDW} (See Theorem
\ref{Vn}). To see that $D$ and $E$ commute it is enough to verify
that the corresponding eigenvalues commute. (See Corollary
\ref{autovconmutan}).
Let $\{Q_n\}$ be the monic sequence of orthogonal polynomials. Then
for any $D\in \mathcal{D}(W)$, we have $DQ_n^*=Q_n^*\Gamma_n(D)$,
where the eigenvalue $\Gamma_n(D)$ is given explicitly in terms of
the coefficients of the differential operator $D$ (see Proposition
\ref{Pro2}).
For the operators $D$ and $E$ introduced respectively in Theorems
\ref{D1salto} and \ref{E1salto}, these eigenvalues are
\begin{align*}
\Gamma_n(D)&= - n(U+n-1)-V \\
\Gamma_n(E)&= -n(n-1)Q_1+nP_1-(\alpha+2\ell+3k)V,
\end{align*}
where the matrices $U,V,Q_1$ and $P_1$ are given in Theorems
\ref{D1salto} and \ref{E1salto}. Explicitly we have
\begin{align*}
\Gamma_n(D)&= -\textstyle{\sum_{i=0}^\ell
\bigl(
n(n+\alpha+\beta+\ell+i+1)+i(i+\alpha+\beta-k+1)\bigr)E_{ii}}\\
&\quad +\textstyle{\sum_{i=0}^{\ell-1}
(\ell-i)(\beta+i-k+1)E_{i,i+1}}\displaybreak[0]\\
\Gamma_n(E)&= -\textstyle{\sum_{i=0}^\ell \bigl(
n(\alpha-\ell+3i)(n+\alpha+\beta+\ell+i+1)}\\
&\textstyle{\qquad \qquad\qquad +(\alpha+2\ell+3k)i(i+\alpha+\beta-k+1)\bigr)E_{ii}}\\
&\quad +\textstyle{\sum_{i=0}^{\ell-1} (\ell-i)(\beta+i-k+1)(\alpha
+ 2\ell+3k+3n)E_{i,i+1}}.
\end{align*}
Now it is easy to verify that
\begin{equation}\label{conmutan1}
\Gamma_n(E)= (\alpha+2\ell+3k+3n) \Gamma_n(D)+
3n(\ell+k+n)(n+\alpha+\beta+\ell+1)I. \end{equation}
Thus the matrix
$\Gamma_n(E)$ commutes with $\Gamma_n(D)$ and by Corollary
\ref{autovconmutan} we have that $D$ and $E$ commute. \qed
\end{proof}
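\begin{remark*}
Let us indicate the verification of \eqref{conmutan1}, which we sketch here for completeness. On the off-diagonal entries both sides carry the common factor $(\ell-i)(\beta+i-k+1)(\alpha+2\ell+3k+3n)$, so they agree. On the diagonal, after expanding both sides and cancelling, the difference reduces to
$$3ni\bigl((n+\alpha+\beta+\ell+i+1)-(i+\alpha+\beta+\ell+n+1)\bigr)=0.$$
\end{remark*}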
\subsection{The eigenfunctions of $E$.}
In Subsection \ref{orthogsubseccion} we give a sequence $\{P_w\}_w$
of matrix valued polynomials, which are orthogonal with respect to
$W$ and eigenfunctions of the differential operator $D$. The rows
$P_w^j$ of $P_w$ are orthogonal polynomials of degree $w$ and they
satisfy $DP_w^j=\lambda_j(w) P_w^j$.
Since $D$ and $E$ commute, it follows
that $E$ preserves the eigenspaces of $D$. Therefore if an
eigenvalue $\lambda=\lambda_j(w)$ has multiplicity one, then the
vector valued polynomial $P_w^j$ is also an eigenfunction of the
differential operator $E$. In the next theorem, we prove that this
is true, even if the multiplicity of an eigenvalue is bigger than
one.
\begin{thm}\label{orthopolyE}
The sequence $\{P_w\}_w$ of orthogonal polynomials associated to
the pair $\{W,D\}$ satisfies
$$EP_w^*(u)=P_w^*(u) \Lambda_w(E),$$
where $\Lambda_w(E)=\displaystyle\sum_{0\leq j\leq \ell} \mu_j(w) E_{jj}$,
and
$$\mu_j(w)=-w(w+\alpha+\beta+\ell+j+1)(\alpha-\ell+3j)-j(j+\alpha+\beta-k+1)(\alpha+2\ell+3k).$$
\end{thm}
\begin{proof}
Let $\{Q_w^*\}_{w\geq0}$ be the sequence of monic orthogonal
polynomials. Since $E$ is symmetric with respect to the weight $W$,
Theorem \ref{Vn} says that $EQ_w^*=Q_w^*\Gamma_w(E)$ for some matrix
$\Gamma_w(E)$. If $Q_w^*=P_w^*A_w^*$ then we have that
$DP_w^*A_w^*=P_w^*A_w^*\Gamma_w(D)$ and
$EP_w^*A_w^*=P_w^*A_w^*\Gamma_w(E)$. Therefore
\begin{align*}
\Lambda_w(D)&=A_w^*\Gamma_w(D)(A_w^*)^{-1},\\
\Lambda_w(E)&=A_w^*\Gamma_w(E)(A_w^*)^{-1}.
\end{align*}
Thus from \eqref{conmutan1} we obtain that
$$\Lambda_w(E)=(\alpha+2\ell+3k+3w)
\Lambda_w(D)+ 3w(\ell+k+w)(w+\alpha+\beta+\ell+1)I.$$
Observe that the fact that $\Lambda_w(D)$ is a diagonal matrix
implies that $\Lambda_w(E)$ is diagonal. Moreover the eigenvalue
$\mu_j(w)=(\Lambda_w(E))_{jj}$ is given by
\begin{align*}
\mu_j(w)&=(\alpha+2\ell+3k+3w)(-w(w+\alpha+\beta+\ell+j+1)\\
&\quad -j(j+\alpha+\beta-k+1))+3w(\ell+k+w)(w+\alpha+\beta+\ell+1)\\
&=-w(w+\alpha+\beta+\ell+j+1)(\alpha-\ell+3j)\\
&\quad-j(j+\alpha+\beta-k+1)(\alpha+2\ell+3k).
\end{align*}
This concludes the proof of the theorem. \qed
\end{proof}
\subsection{The operator algebra generated by $D$ and $E$}\label{algebra}
In this subsection we study the algebra generated by the
differential operators $D$ and $E$.
Let $\CC[x,y]$ be the algebra of all polynomials in the variables
$x$ and $y$ with complex coefficients.
\begin{thm}\label{alggenDE}
The algebra of differential operators generated by $D$ and $E$ is
isomorphic to the quotient algebra $\CC[x,y]/ \langle Q \rangle$,
where $\langle Q \rangle$ denotes the ideal generated by the
polynomial
$$Q(x,y)=\prod_{j=0}^\ell \left(y-(\alpha-\ell+3j)x+3j(\ell-j+k)(j+\alpha+\beta-k+1) \right).$$
\end{thm}
\begin{proof}
The algebra of differential operators generated by $D$ and $E$ is
isomorphic to the quotient algebra $\CC[x,y]/I$ where $I=\{
p\in\CC[x,y]:p(D,E)=0\}$.
Since $\Lambda_w$ is a representation which separates points of
$\mathcal{D}(W)$ (Proposition \ref{propDW}), we have that $p(D,E)=0$
if and only if
$$\Lambda_w(p(D,E))=p(\Lambda_w(D),\Lambda_w(E))=0,\text{ for all }w.$$
Moreover, since the matrices $\Lambda_w(D)$ and $\Lambda_w(E)$ are
diagonal matrices, we have that $p(\Lambda_w(D),\Lambda_w(E))=0$ if
and only if $p((\Lambda_w(D))_{jj},(\Lambda_w(E))_{jj})=0$ for all
$0\leq j\leq\ell$. Thus the ideal $I$ is
$$I=\{p\in\CC[x,y]:p(\lambda_j(w),\mu_j(w))=0 \text{ for all } w\in\NN_0 \text{ and } j=0,1,\dots, \ell\}.$$
Let $p_j(x,y)$ be the polynomial
$$p_j(x,y)=y-(\alpha-\ell+3j)x+3j(\ell-j+k)(j+\alpha+\beta-k+1).$$
It is easy to verify that $p_j(\lambda_j(w),\mu_j(w))=0,$ for all
$w\geq0$. Therefore $Q(x,y)=\prod_{j=0}^\ell p_j(x,y)$ belongs to
the ideal $I$.
On the other hand we have that any $f\in I$ vanishes at all
points of the form $(x,y)$ with
$y=(\alpha-\ell+3j)x-3j(\ell-j+k)(j+\alpha+\beta-k+1)$, for each
$j=0,\dots ,\ell$. In fact, if we let
$$a_j=\alpha-\ell+3j \qquad b_j=-3j(\ell-j+k)(j+\alpha+\beta-k+1)\quad (j=0,1,\dots ,\ell)$$
then we observe that
the polynomial $f(x,a_jx+b_j)$ has infinitely many roots, because $f(\lambda_j(w),\mu_j(w))=0$
and $\mu_j(w)=a_j\lambda_j(w)+b_j$.\\
Any polynomial in $\CC[x,y]$ is also a polynomial in $x$ and $y-ax-b$. Then it is clear that if
$p(x,y)$ vanishes on the line $y=ax+b$ then $p$ is divisible by $y-ax-b$.
\\
Thus we have that if $f$ belongs to the ideal $I$ then $f\in
\cap_{j=0}^\ell \langle p_j \rangle= \langle \textstyle \prod_j
p_j\rangle.$ Therefore we have that the ideal $I$ is generated by
the polynomial $Q(x,y)$, which concludes the proof of the
Theorem.\qed
\end{proof}
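\begin{remark*}
For instance (an example we spell out only as an illustration), for $\ell=1$ the theorem asserts that the algebra generated by $D$ and $E$ is isomorphic to $\CC[x,y]/\langle Q\rangle$ with
$$Q(x,y)=\bigl(y-(\alpha-1)x\bigr)\bigl(y-(\alpha+2)x+3k(\alpha+\beta-k+2)\bigr),$$
the affine algebra of two distinct intersecting lines in $\CC^2$; in particular this algebra is not an integral domain.
\end{remark*}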
\section{Introduction}
There exist different approaches to define minimal surfaces, for example as solutions of Plateau's problem. In 1970 Lawson proved a correspondence between isometric surfaces in spaceforms: For example, each $H$-surface (mean curvature $H\equiv1$) in $\mathbb{R}^3$ corresponds isometrically to a minimal surface in $\mathbb{S}^3$ such that their Gau\ss{} maps coincide and their tangent vectors are rotated by $\pi/2$ (with respect to a proper interpretation of the tangent spaces). Two such surfaces are called conjugate cousins~(\cite{lawson1970}, \cite{brauckmann1993}, \cite{karcher1989}). Daniel proved an equivalent correspondence between surfaces in homogeneous $3$-manifolds in~\cite{daniel2007}. A special case of his theorem yields the so-called sister surfaces: Each simply connected surface with constant mean curvature (cmc) $H$ in $\Sigma^2(\kappa) \times \mathbb{R}$ corresponds to an isometric minimal surface in the homogeneous $3$-manifold \mbox{$E(\kappa+4H^2,H)$}. Hence, instead of proving the existence of a cmc surface in a product space directly, one solves a Plateau problem in a non-trivial Riemannian fibration with base curvature $\kappa+4H^2$ and considers its sister.
In the case of periodic cmc surfaces in product spaces the conjugate surface construction may be outlined as follows: Consider a geodesic polygon in the auxiliary $3$-manifold $E(\kappa+4H^2,H)$, which consists of vertical and horizontal edges. Solve the Plateau problem for the given curve. In the case of unbounded domains we have to take a limit of the sequence of minimal surfaces. Afterwards solve the period problem(s), i.e. choose the parameters of the geodesic polygon such that the resulting cmc sister surface has the desired properties. With those parameters we obtain a minimal disk whose sister surface is a fundamental piece of the cmc surface. We use Schwarz reflection to establish a complete smooth surface in a product space. In the last step we prove geometric properties of the surface, such as (Alexandrov-)embeddedness, the behaviour of the ends, the genus, etc.
One challenge in the construction in homogeneous manifolds is that the normals coincide only up to their vertical projection in the sense of a Riemannian fibration. But there are more problems that occur in the construction. In order to prove the existence of periodic surfaces via conjugate construction, the Jordan curve that bounds the Plateau solution of disk-type has to consist of geodesics. The Daniel correspondence implies that a geodesic in the boundary of a minimal surface corresponds to a curvature line in the boundary of the isometric cmc surface, therefore Schwarz reflection applies and continues the surface smoothly, see~\cite{GK2010} and~\cite{MT2011}. In the case of conjugate minimal surfaces in $\mathbb{R}^3$ the total curvature of the curvature line in one surface is equal to the total turn of the normal along the corresponding geodesic in the conjugate surface. But in the case of cmc surfaces the total curvature depends also on the length of the curve. In order to solve period problems in horizontal planes, one has to control the total turn of the normal along horizontal curvature lines. For a cmc surface with $H\equiv1/2$ in $\mathbb{H}^2\times\mathbb{R}$ this is possible by a horocycle foliation of $\mathbb{H}^2$.
In this paper we prove the existence of new mc $1/2$ surfaces in $\H^2\times\mathbb{R}$, which are $k$-noids (with a handle). This work is based on the PhD thesis of the author,~\cite{plehnert2012}. It is organised as follows: we start with an introduction to the geometry of homogeneous $3$-manifolds and prove a formula for the vertical distance of a horizontal lift of a closed curve in Riemannian fibrations.
In Section~\ref{S:Plateau} we cite some properties of Plateau solutions in $3$-manifolds and prove a maximum principle for cmc graphs. Together with a result of Gro\ss{}e-Brauckmann and Kusner, of which we sketch the proof, this implies a Rad\'{o} theorem for $E(\kappa,\tau)$. In the subsequent section we state some facts about certain sister surfaces and have a closer look at the related boundary curves, the sister curves. We prove the absence of boundary branch points under specified conditions. Section~\ref{S:refsurfaces} outlines some properties of known minimal surfaces in $\operatorname{Nil}_3$, which we use as barriers in our construction.
In the last section we prove the existence of a family of mc $1/2$ surfaces in $\H^2\times\mathbb{R}$ with $k$ ends, genus $1$ and $k$-fold dihedral symmetry, $k\geq3$, which are Alexandrov embedded. We solve an improper Plateau problem and two period problems in the construction.
Within the last years the theory of minimal and constant mean curvature surfaces in homogeneous $3$-manifolds has been developed actively by many mathematicians. Abresch and Rosenberg introduced a generalized quadratic differential for immersed surfaces in product spaces and showed its holomorphicity. Furthermore they classified those surfaces with vanishing differential,~\cite{AR2005}. Fernandez and Mira constructed a hyperbolic Gau\ss{} map to study mean curvature one half surfaces in $\H^2\times\mathbb{R}$,~\cite{FM2007b}. Recently Cartier and Hauswirth published a work in which they study constant mean curvature $1/2$ surfaces in $\H^2 \times\mathbb{R}$ with vertical ends~\cite{CH2012}. In~\cite{MT2011} Manzano and Torralbo constructed constant mean curvature surfaces which arise via conjugate construction from compact minimal surfaces in the Berger spheres. The idea of this work comes from the paper~\cite{GK2010} of Kusner and Gro\ss{}e-Brauckmann, which is still in progress. They discuss the conjugate construction for minimal and cmc surfaces in product spaces $\Sigma^2(\kappa)\times\mathbb{R}$. The solution of Plateau's problem in mean convex subsets of homogeneous $3$-manifolds is treated. As an example of conjugate Plateau construction they sketch the existence of surfaces with constant mean curvature analogous to the minimal Jorge-Meeks-$k$-noids in $\mathbb{R}^3$, see Section~\ref{S:knoid}. We follow the same construction idea, using a sequence of compact minimal sections for our example, and give a different proof of the existence in Section~\ref{SS:Plateau}.
\section{Homogeneous $3$-manifolds}\label{S:riem}
We construct cmc surfaces in $\H^2\times \mathbb{R}$ which arise from minimal surfaces in $\operatorname{Nil}_3$. Both Riemannian manifolds are homogeneous $3$-manifolds with $4$-dimensional isometry group, see~\cite{thurston1997} for a complete classification of homogeneous $3$-manifolds.
Simply connected homogeneous $3$-manifolds with an at least $4$-dimensional isometry group besides $\H^3$ can be represented as a Riemannian fibration over a two dimensional space form $\Sigma(\kappa)$. Their fibres are geodesics and there exists a Killing field $\xi$ tangent to the fibres, called the vertical vector field. The translations along the fibres are isometries, hence the Killing field $\xi$ generates a subgroup $G$ of $\operatorname{Iso}(E)$. The integral curves of $\xi$ define a principal bundle with connection $1$-form $\omega(X)=\langle X,\xi\rangle$. The curvature form is $\Omega\coloneqq \operatorname{D} \omega=\operatorname{d}\omega+1/2[\omega,\omega]=\operatorname{d}\omega$. The following equation holds for two arbitrary vector fields $X,Y$
\begin{align*}
\Omega(X,Y)&=\operatorname{d}\omega(X,Y)= X \omega(Y) - Y \omega(X) - \omega\left([X,Y]\right)\\
&=\langle\nabla_X Y,\xi\rangle+\langle Y,\nabla_X\xi\rangle-\langle\nabla_Y X,\xi\rangle-\langle X,\nabla_Y\xi\rangle -\left(\langle\nabla_X Y,\xi\rangle-\langle\nabla_Y X,\xi\rangle\right)\\
&=\langle\nabla_X\xi,Y\rangle-\langle\nabla_Y\xi,X\rangle\\
&\overset{(*)}{=} 2\langle\nabla_X\xi,Y\rangle,
\end{align*}
where $(*)$ follows from the fact that $\xi$ is a Killing field. Moreover, the fact that $\xi$ is Killing implies $\xi\Omega(X,Y)=0$, i.e. $\Omega$ is constant along the fibers. Moreover, we claim: For $X^h=X-X^v$ we have $\Omega(X^h,Y^h)=\Omega(X,Y)$. Therefore $\Omega$ induces a $2$-form $\underline{\Omega}=(\pi^{-1})^*\Omega$ on the base manifold $\Sigma$. To see the claim we compute
\begin{align*}
\Omega(X^h,Y^h)&= 2\langle\nabla_{X^h}\xi,Y^h\rangle\\
&=2\langle\nabla_{(X-X^v)}\xi,Y-Y^v\rangle\\
&=2(\langle\nabla_{X}\xi,Y\rangle\underbrace{-\langle\nabla_X\xi,Y^v\rangle}_{=\langle\nabla_{Y^v}\xi,X\rangle}-\langle\nabla_{X^v}\xi,Y\rangle+\langle\nabla_{X^v}\xi,Y^v\rangle)\\
&=2\langle\nabla_X\xi,Y\rangle,
\end{align*}
because $\nabla_U \xi=0$ for any vertical vector field $U$.
Since $\Sigma$ is oriented, there exists a $\pi/2$-rotation $J$ on $\operatorname{T}_y\Sigma$, which induces a $\pi/2$-rotation $R$ on the horizontal space $(\operatorname{T}_p E)^h$.
\begin{definition}\label{d:bundle}
Let $\pi\colon E\to \Sigma$ be a Riemannian fibration with geodesic fibres. Its \emph{bundle curvature}~$\tau$ is a map $\tau\colon\Sigma\to\mathbb{R}$ given by
\[
\tau(y)\coloneqq-\frac12\Omega(X,RX)=\frac12\langle[X,RX],\xi\rangle,\]
hence $[X,RX]^v = 2\tau(y) \xi$, where $X$ is an arbitrary horizontal unit vector field along $\pi^{-1}(y)$.
\end{definition}
We see that $\Omega$ measures the non-integrability of the horizontal distribution. Since $\Omega$ is constant along the fibres, the map $\tau$ is well-defined. The induced $2$-form $\underline{\Omega}$ factorizes the natural volume form $\operatorname{vol}_\Sigma$ of $\Sigma$
\[-\frac12\underline{\Omega}=\tau\operatorname{vol}_\Sigma,\] since we have $\operatorname{vol}_\Sigma(x,Jx)=1$ and $\underline{\Omega}(x,Jx)=\Omega(\tilde{x},R\tilde{x})$ for any unit vector $x$ on $\Sigma$.
If the Riemannian fibration is homogeneous, the bundle curvature $\tau$ is constant. It is common to write $E(\kappa,\tau)$ for those simply connected spaces. The isometry group of $E(\kappa,\tau)$ depends on the signs of $\kappa$ and $\tau$, and is equivalent to the isometry group of one of the following Riemannian manifolds:
\begin{center}\begin{tabular}{l | l l l}
curv. & $\kappa<0$ & $\kappa=0$ & $\kappa>0$ \\ \hline
$\tau=0$ & $\H^2\times\mathbb{R}$ & $\mathbb{R}^3$ & $\S^2\times\mathbb{R}$ \\
$\tau\ne0$ & $\widetilde{\operatorname{SL}}_2(\mathbb{R})$ & $\operatorname{Nil}_3(\mathbb{R})$ & (Berger-)$\S^3$
\\
\end{tabular}
\end{center}
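As an illustration we compute $\tau$ in the model of $\operatorname{Nil}_3(\mathbb{R})$ used in Section~\ref{S:conormalhoriheli}, namely $\left(\mathbb{R}^3,\operatorname{d} x_1^2+\operatorname{d} x_2^2+(\operatorname{d} x_3-x_1\operatorname{d} x_2)^2\right)$; this standard computation serves only as a consistency check. The fields
\[
E_1=\partial_{x_1},\qquad E_2=\partial_{x_2}+x_1\partial_{x_3},\qquad \xi=\partial_{x_3}
\]
form an orthonormal frame with $E_1$, $E_2$ horizontal; choosing the orientation with $RE_1=E_2$ we get
\[
[E_1,RE_1]=\left[\partial_{x_1},\partial_{x_2}+x_1\partial_{x_3}\right]=\partial_{x_3}=\xi,
\]
hence $\tau=\tfrac12\langle[E_1,RE_1],\xi\rangle=\tfrac12$, i.e. this model is $E(0,1/2)$.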
Another interpretation of the bundle curvature is the vertical distance of a horizontal lift of a closed curve:
\begin{lemma}[Vertical distances]\label{l:vertdistances}
Let $\gamma$ be a closed Jordan curve in the base manifold $\Sigma(\kappa)$ of a Riemannian fibration $E(\kappa,\tau)$ with geodesic fibres and constant bundle curvature $\tau$. With $\Delta$ defined by $\partial\Delta=\gamma$, we have
\[
d(\tilde{\gamma}(0),\tilde{\gamma}(l))=2\tau\operatorname{area}(\Delta),
\]
where $\tilde{\gamma}$ is the horizontal lift with $\pi(\tilde{\gamma}(0))=\pi(\tilde{\gamma}(l))$, $\operatorname{area}$ denotes the oriented area and $d(p,q)$ denotes the signed vertical distance, which is positive if $\overline{pq}$ points in the fibre-direction $\xi$.
\end{lemma}
\begin{proof}
We consider an arclength parametrization $\gamma\colon[0,l]\to\Sigma(\kappa)$ and its horizontal lift $\tilde{\gamma}$. Then $\tilde{\gamma}(0)$ and $\tilde{\gamma}(l)$ are contained in one fibre, i.e. $\pi(\tilde{\gamma}(0))=\pi(\tilde{\gamma}(l))$. Hence, there exists a vertical arclength parametrized curve $c$ with $c(0)=\tilde{\gamma}(l)$ and $c(v)=\tilde{\gamma}(0)$. The union $c\cup\tilde{\gamma}\eqqcolon\tilde{\Gamma}$ is a closed curve in $E(\kappa,\tau)$ and $c'$ is parallel to $\xi$. If $c'=\pm\xi$, then
\[
\pm v
=\int\limits_0^v \langle c',\xi\rangle=\int\limits_0^v \langle c',\xi\rangle+\int\limits_0^l\langle \tilde{\gamma}',\xi\rangle,
\]
since $\tilde{\gamma}'$ is horizontal. By definition of the connection $1$-form $\omega$, the sum of the integrals is equal to $\int\limits_{\tilde{\Gamma}}\omega$.
We apply Stokes's Theorem to get
\[
\int\limits_{\tilde{\Gamma}}\omega=\int\limits_{\tilde{\Delta}}\operatorname{d}\omega=\int\limits_{\tilde{\Delta}}\Omega=\int\limits_{\pi(\tilde{\Delta})}(\pi^{-1})^*\Omega,
\]
where $\tilde{\Delta}$ is any lift of $\Delta$, such that $\pi\colon\tilde{\Delta}\to\Delta$ is one-to-one and $\partial\tilde{\Delta}=\tilde{\Gamma}$. Since $(\pi^{-1})^*\Omega$ is a $2$-form on $\Sigma$, it factorizes the natural volume form $\operatorname{vol}_\Sigma$
\[\int\limits_{\pi(\tilde{\Delta})}(\pi^{-1})^*\Omega=-2\int\limits_\Delta\tau \operatorname{vol}_\Sigma=-2\tau\operatorname{area}(\Delta).\]
We conclude that
\[
2\tau\operatorname{area}(\Delta)=
\begin{cases}
-v, & \text{if } c'=\xi, \\
v, & \text{if } c'=-\xi,
\end{cases}
\]
and $d(\tilde{\gamma}(0),\tilde{\gamma}(l))=d(c(v),c(0))=2\tau\operatorname{area}(\Delta)$.
\end{proof}
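For instance, in $\operatorname{Nil}_3(\mathbb{R})=E(0,1/2)$ the horizontal lift of a circle of radius $r$ in the base $\mathbb{R}^2$ fails to close up by the vertical distance
\[
\abs{d(\tilde{\gamma}(0),\tilde{\gamma}(l))}=2\cdot\tfrac12\cdot\pi r^2=\pi r^2,
\]
the sign depending on the orientation of the circle. We will use this effect in the contour construction of Section~\ref{S:genus1}.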
\section{Solution of the Plateau problem}\label{S:Plateau}
From the 18th century until 1930 it was an open question whether a rectifiable closed curve $\Gamma$ in $\mathbb{R}^3$ bounds an area minimizing disc. Douglas~\cite{douglas1931} and Rad\'{o}~\cite{rado1930} proved independently that there always exists an area minimizing disc spanned by $\Gamma$. In 1948 Morrey generalised the theorem to minimal discs in homogeneously regular Riemannian manifolds without boundary~(\cite{morrey1966}). By~\cite{osserman1970} and~\cite{gulliver1973}, the least area disc is a minimal immersion in the interior. Under additional assumptions we may exclude boundary branch points, see Section~\ref{SS:reflection} later on.
In the 1980s Meeks and Yau~(\cite{MY1982}) showed that in compact manifolds with mean convex boundary the Plateau solution $M$ is even embedded. A Riemannian manifold $N$ with boundary is \emph{mean convex} if the boundary $\partial N$ is piecewise smooth, each smooth subsurface of $\partial N$ has non-negative mean curvature with respect to the inward normal, and there exists a Riemannian manifold $N'$ such that $N$ is isometric to a submanifold of $N'$ and each smooth subsurface $S$ of $\partial N$ extends to a smooth embedded surface $S'$ in $N'$ such that $S'\cap N=S$ and any two such surfaces meet transversally at the non-smooth points of $\partial N$. We call each surface $S$ a \emph{barrier}.
One interesting question in the context of Plateau's problem is to ask how many minimal surfaces of disc type are bounded by a given closed Jordan curve. In general the answer is unknown. Rad\'{o} proved in~\cite{rado1930}: If the Jordan curve $\Gamma$ is a graph over the boundary of a convex domain $\Delta\subset\mathbb{R}^2$, then $\Gamma$ bounds at most one minimal surface of disc type, and this minimal surface is a graph over $\Delta$. We consider Plateau's problem in mean convex domains in $E(\kappa,\tau)$. In a Riemannian fibration we have a natural notion of graphs, namely sections of the bundle $\pi\colon E(\kappa,\tau)\to\Sigma(\kappa)$. The following two propositions imply Rad\'{o}'s theorem for $E(\kappa,\tau)$.
In~\cite{GK2010} it was proven that if the boundary of a minimal surface projects injectively to the boundary of a convex disc, then the surface is a section:
\begin{proposition}
Let $\Delta\subset\Sigma(\kappa)$ be a convex domain in the base of the Riemannian fibration $E(\kappa,\tau)$. Suppose $\overline{M}\subset\overline{\pi^{-1}(\Delta)}$ is a compact minimal disc. If the boundary projection $\pi\colon\partial M\to\partial\Delta$ is injective, then $M$ is a section over $\Delta$.
\end{proposition}
For the sake of completeness we sketch the proof.
\begin{proof}
Let $\Omega\coloneqq \pi^{-1}(\Delta)$ and $\Phi_t\colon \Omega\to\Omega$ be the flow of the vertical Killing field $\xi$, where $t\in\mathbb{R}$. With $M_t\coloneqq \Phi_t(M)$ we have $M_0=M$. The surface $M$ is a section if $M_t\cap M_s=\emptyset$ for $t\ne s$. It suffices to show $M\cap M_t=\emptyset$ for all $t\ne 0$.
Assume the contrary; then $T\coloneqq \inf\{t>0\colon M\cap M_t=\emptyset\}>0$. This implies the existence of $p\in \overline{M}\cap\overline{M_T}$. By the maximum principle $\pi(p)\notin\Delta\setminus\partial\Delta$. But if $\pi(p)\in\partial\Delta$, it follows that $p$ is an interior point of at least one of the surfaces, since the boundary projection $\pi\colon\partial M\to\partial\Delta$ is injective. This surface is then tangent from one side to a vertical plane, and by the maximum principle it agrees with a subset of the vertical plane. This contradicts the injectivity of the boundary projection.
The case $T\coloneqq \sup\{t<0\colon M\cap M_t=\emptyset\}<0$ is analogous.
\end{proof}
The maximum principle implies uniqueness of constant mean curvature sections with the same boundary data:
\begin{proposition}\label{p:uniquesection}
Let $\pi\colon E\to\Sigma$ be a Riemannian fibration with geodesic fibres. Suppose $M$ is a section over $\Delta\subset\Sigma$ with mean curvature $H$ and prescribed boundary values such that $\pi\colon\partial M\to\partial\Delta$ is injective. Then $M$ is unique.
\end{proposition}
\begin{proof}
Assume that we have two sections $M$ and $\hat{M}$ over $\Delta$ with the same boundary values, given by $u$ and $\hat{u}$. Moreover, assume $w\coloneqq u-\hat{u}\not\equiv 0$; then $\abs{w}$ attains its maximum at an interior point $p\in\Delta\setminus\partial\Delta$, since $w\vert_{\partial\Delta}\equiv 0$. By exchanging $u$ and $\hat{u}$ we may assume $w\leq w(p)$ and $w(p)>0$.
The sections fulfil the non-parametric mean curvature equation. Therefore, their parametrizations $u$ and $\hat{u}$ are solutions of the following differential equation:
\[
Q(u)\coloneqq\partial_x\left(\frac{\lambda U}{W}\right)+\partial_y\left(\frac{\lambda V}{W}\right)-2\lambda^2 H=0,
\]
where $U=u_x+\lambda\tau y,\, V=u_y-\lambda\tau x,\, W=\sqrt{1+U^2+V^2}$ and $\lambda=\frac{4}{4+\kappa(x^2+y^2)}$.
This equation is non-linear. We set $a^i(p)\coloneqq\frac{\lambda p_i}{\sqrt{1+p_1^2+p_2^2}}$ and $R\coloneqq U\partial_x+V\partial_y$. Considering the difference we get:
\begin{align*}
0&=Q(u)-Q(\hat{u})=\sum\limits_i \partial_i a^i(R)-\partial_i a^i(\hat{R})\\
&=\sum\limits_i\partial_ia^i(tR+(1-t)\hat{R})\vert_{t=0}^{t=1}\\
&=\int\limits_0^1\frac{\operatorname{d}}{\operatorname{d} s}\left[\sum\limits_i\partial_ia^i(sR+(1-s)\hat{R})\right]_{s=t}\operatorname{d} t\\
&=\sum\limits_{i,j}\int\limits_0^1\partial_i
\left[
\frac{\partial a^i}{\partial p_j}(tR+(1-t)\hat{R})(R_j-\hat{R}_j)
\right]\operatorname{d} t\\
&=\sum\limits_{i,j}\partial_i\left[\underbrace{\left(\int\limits_0^1\frac{\partial a^i}{\partial p_j}(tR+(1-t)\hat{R})\operatorname{d} t\right)}_{\eqqcolon a^{ij}}(R_j-\hat{R}_j)\right]\\
&=\sum\limits_{i,j}\partial_i [a^{ij}\partial_j(u-\hat{u})].
\end{align*}
Since
\[
\partial_ja^i(p)=\frac{\lambda\delta_{ij}}{\sqrt{1+p_1^2+p_2^2}}-\frac{\lambda p_i p_j}{\sqrt{1+p_1^2+p_2^2}^3}=\partial_ia^j(p),
\]
then $(a^{ij})_{ij}$ is symmetric; moreover $\abs{(\partial_i a^{ij})_j}$ is bounded. Therefore, with $w= u-\hat{u}$, we have
\[
L(w)\coloneqq \sum\limits_{i,j}\partial_i (a^{ij}\partial_jw)=Q(u)-Q(\hat{u})=0,
\]
where $L$ is a linear second order partial differential operator.
With $T(t,x,y)\coloneqq tR(x,y)+(1-t)\hat{R}(x,y)$ we get
$
\partial_j a^i(T)=\frac{\lambda\delta_{ij}}{\sqrt{1+\abs{T}^2}}- \frac{\lambda T_i T_j}{\sqrt{1+\abs{T}^2}^3}
$.
For any compact subset $K\subset\Delta$ the norm $\abs{T}$ has a maximum on $[0,1]\times K$. Hence, there exists $\sigma(K,u,\hat{u})>0$, such that we may estimate, by means of the Schwarz inequality,
\[
\sum\limits_{i,j}a^{ij}(T)\xi_i\xi_j=\frac{(1+\abs{T}^2)\abs{\xi}^2-\langle T,\xi\rangle^2}{\sqrt{1+\abs{T}^2}^3}\geq\frac{\abs{\xi}^2}{\sqrt{1+\abs{T}^2}^3}>\sigma\abs{\xi}^2.
\]
We conclude that $L$ is uniformly elliptic and therefore, the maximum principle for elliptic partial differential equations~(\cite{GT2001}) applies.
Hence, $w\equiv w(p)$ which contradicts $w\vert_{\partial\Delta}\equiv 0$.
\end{proof}
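To illustrate the non-parametric equation from the preceding proof (a simple consistency check, not needed for the argument): in $E(0,1/2)=\operatorname{Nil}_3(\mathbb{R})$ we have $\lambda=1$, and for $H=0$ the function $u(x,y)=xy/2$ satisfies
\[
U=u_x+\tfrac{y}{2}=y,\qquad V=u_y-\tfrac{x}{2}=0,\qquad
Q(u)=\partial_x\left(\frac{y}{\sqrt{1+y^2}}\right)=0,
\]
so its graph is a minimal section over all of $\mathbb{R}^2$.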
\begin{remark}
The uniqueness of a section is also true in a more general case: The projection of $\partial M$ has to be injective except for at most finitely many points of $\partial \Delta$. This means, we allow vertical segments in the boundary. The proof needs a more general maximum principle by Nitsche; for $\mathbb{R}^3$ see~\cite{nitsche1975}.
\end{remark}
\section{Sister surfaces}\label{C:sistersurfaces}
Lawson made a great contribution to the study of constant mean curvature surfaces in 1970 when he showed the isometric correspondence between minimal and cmc surfaces in different space forms, for example between minimal surfaces in $\mathbb{S}^3$ and $H$-surfaces in $\mathbb{R}^3$, see~\cite{lawson1970}. By means of the reflection principle in $\mathbb{S}^3$ it was then possible to construct new cmc surfaces in the Euclidean space. In recent years interest has grown in surfaces in other ambient manifolds. In 2007 Daniel published a generalized Lawson correspondence for homogeneous manifolds~(\cite{daniel2007}); we use one special case of his correspondence:
\begin{theorem}[{\cite[Theorem 5.2]{daniel2007}}] \label{t:correspondence} There exists an isometric correspondence between an mc $H$-surface $\tilde{M}$ in $\Sigma(\kappa)\times\mathbb{R}=E(\kappa,0)$ and a minimal surface $M$ in $E(\kappa+4H^2,H)$. Their shape operators are related by \begin{equation}\label{e:shapeoperators}\tilde{S}=JS+H\operatorname{id},\end{equation} where $J$ denotes the $\pi/2$ rotation on the tangent bundle of a surface. Moreover, the normal and tangential projections of the vertical vector fields $\xi$ and $\tilde{\xi}$ are related by
\begin{equation}
\langle\tilde{\xi},\tilde{\nu}\rangle=\langle\xi,\nu\rangle,\qquad J\operatorname{d} f^{-1}(T)=\operatorname{d} \tilde{f}^{-1}(\tilde{T}),
\end{equation}
where $f$ and $\tilde{f}$ denote the parametrizations of $M$ and $\tilde{M}$ respectively, $\nu$ and $\tilde{\nu}$ their unit normals, and $T$, $\tilde{T}$ the projections of the vertical vector fields on $\operatorname{T} M$ and $\operatorname{T}\tilde{M}$.
\end{theorem}
We call the isometric surfaces $M$ and $\tilde{M}$ \emph{sister surfaces}, or \emph{sisters} in short.
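A quick consistency check of Equation~\eqref{e:shapeoperators}, recorded here for convenience: for any symmetric operator $S$ on a $2$-dimensional vector space we have $\operatorname{tr}(JS)=0$, hence
\[
\operatorname{tr}\tilde{S}=\operatorname{tr}(JS)+2H=2H,
\]
so the sister of a minimal surface indeed has constant mean curvature $H$.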
{\it Examples}:
\begin{enumerate}
\item For $H=0$ the surfaces $M$ and $\tilde{M}$ are conjugate minimal surfaces in $\Sigma(\kappa)\times\mathbb{R}$.
\item For $H\in(0,1/2)$ and $\kappa=-1$ we have $4H^2-1<0$; therefore an mc $H$-surface in $\H^2\times\mathbb{R}$ corresponds to a minimal surface in $\widetilde{\operatorname{PSL}}_2(\mathbb{R})$.
\item For $H=1/2$ and $\kappa=-1$ an mc $1/2$-surface in $\H^2\times\mathbb{R}$ corresponds to a minimal surface in $E(0,1/2)=\operatorname{Nil}_3(\mathbb{R})$.
\item Furthermore, for $H>1/2$ an mc $H$-surface in $\H^2\times\mathbb{R}$ results from a minimal surface in the Berger spheres $E(4H^2-1,H)$, since $4H^2-1>0$.
\end{enumerate}
\subsection{Reflection principles}\label{SS:reflection}
We want to apply Schwarz reflection to construct complete periodic cmc surfaces in $E(\kappa,\tau)$. It is well-known that Schwarz reflection extends minimal surfaces in space forms (see~\cite{lawson1970}) with respect to the space form symmetries. In $E(\kappa,\tau)$ the isometry group has at least dimension $4$, and there are symmetries, for which Schwarz reflection applies. We reflect across a geodesic $c\subset E(\kappa,\tau)$ or a totally geodesic plane $V\subset E(\kappa,\tau)$, i.e. the geometric interpretation is to send a point $p$ to its opposite point on a geodesic through $p$ that meets $c$ or $V$ orthogonally.
If $c$ is a horizontal or vertical geodesic of $E(\kappa,\tau)$, then a geodesic reflection across $c$ is an isometry. Moreover, in the product spaces $E(\kappa,0)=\Sigma(\kappa)\times\mathbb{R}$ a geodesic reflection across vertical or horizontal planes is an isometry. For those isometries we formulate \emph{Schwarz reflection principles}:
\begin{quote}
Suppose that a minimal surface is smooth up to the boundary, and the boundary contains a curve which is a horizontal or vertical geodesic of $E(\kappa,\tau)$. Then the geodesic is an asymptotic direction and the reflection preserves the principal curvatures, therefore it extends the surface smoothly.
Moreover, a cmc surface in a product space $E(\kappa,0)$, which is smooth up to the boundary, extends smoothly if the boundary contains a curve in a vertical or horizontal plane, provided the surface conormal is perpendicular to the plane, since the curve is then a curvature direction. A totally geodesic plane is called \emph{mirror plane}, and a curve in which the surface meets a mirror plane orthogonally is called \emph{mirror curve}.
\end{quote}
We construct surfaces by solving Plateau problems. The solution is an area minimizing map from a disc to $E(\kappa,\tau)$, continuous up to the boundary. By~\cite{osserman1970} and~\cite{gulliver1973} the solution does not have branch points in the interior, hence it is an immersion.
Schwarz reflection extends the surface smoothly, but after reflection branch points may occur. Under certain assumptions, we may exclude boundary branch points, i.e. the surface extends as a smooth immersion:
\begin{proposition}\label{P:boundarybranch}
Let $M\subset E(\kappa,\tau)$ be a minimal surface of disc-type, continuous up to the boundary $\Gamma=\partial M$. Suppose $\Gamma$ is a Jordan curve consisting of horizontal and vertical geodesics such that for each edge of $\Gamma$ there exists a vertical plane or a horizontal umbrella $S'$ as barrier for $\Gamma$, i.e. $S'\cap\overline{M}\subset\Gamma$. Moreover, suppose that at each vertex $v$ the angle is of the form $\pi/n_v$ with $n_v\ge 2$, and that there exists a union $\Gamma_v$ of $n_v$ copies of $\Gamma$, obtained by successive $\pi$-rotations about the edges, such that there is a barrier $S'$ for $\Gamma_v$ in $v$. Then $M$ extends smoothly without branch points by Schwarz reflection across $\Gamma$.
\end{proposition}
The proof relies on the Hopf boundary lemma:
\begin{proof}
Let us consider an almost conformal harmonic parametrization $f$ of $M$ and a point $p\in\Gamma$ in the interior of an edge with barrier $S'$. Since $S'\cap\overline{M}\subset\Gamma$ the Hopf boundary lemma implies $\operatorname{d} f_p\ne0$, hence $f$ is an immersion.
Now assume a vertex $v\in\Gamma$ is a branch point, and consider the union $\Gamma_v$ of $n_v$ copies of $\Gamma$. By assumption there is a barrier $S'$ for $\Gamma_v$, so we are in the situation of the first case. Hence, we conclude that $M$ extends smoothly without branch points by Schwarz reflection across $\Gamma$.
\end{proof}
\subsection{Sister curves}\label{S:sistercurves}
We want to analyse the geometry of periodic surfaces. Therefore, we take a closer look at the boundary curves of a fundamental piece.
Let $c=f\circ\gamma$ be a curve parametrized by arc length in a hypersurface $M=f(\Omega)\subset\overline{M}$ with (surface) normal $\nu$. The \emph{normal curvature} $k$ and the \emph{normal torsion} $t$ along $c$ are defined by
\[
k\coloneqq \nu\cdot\overline{\nabla}_{c'}c'=-\overline{\nabla}_{c'}\nu\cdot c'=\langle S\gamma',\gamma'\rangle ,\qquad
t\coloneqq-\overline{\nabla}_{c'}\nu\cdot Jc'=\langle S\gamma',J\gamma'\rangle.
\]
Let $\tilde{M}\subset\Sigma(\kappa)\times\mathbb{R}$ denote an mc $H$-surface and $M\subset E(\kappa+4H^2,H)$ its minimal sister. Furthermore, let $\gamma$ be a curve in $\Omega$. We call $\tilde{c}\coloneqq \tilde{f}(\gamma)$ and $c\coloneqq f(\gamma)$ \emph{sister curves}.
\begin{lemma}\label{l:curvtor}
For a pair of sister curves the normal curvature and torsion are related as follows:
$$\tilde{k}=-t+H\quad\text{and}\quad\tilde{t}=k.$$
\end{lemma}
\begin{proof}
We apply Equation~\eqref{e:shapeoperators} to the definitions:
\begin{align*}
\tilde{k}&=\langle \tilde{S}\gamma',\gamma'\rangle=\langle (JS+H\operatorname{id})\gamma',\gamma'\rangle=-t+H,\\
\tilde{t}&=\langle \tilde{S}\gamma',J\gamma'\rangle=\langle (JS+H\operatorname{id})\gamma',J\gamma'\rangle=k.\qedhere
\end{align*}
\end{proof}
From this a relation between mirror curves and their sister curves follows, see~\cite{GK2010} and~\cite{MT2011} (one direction is sketched below):
\begin{enumerate}
\item A curve $\tilde{c}\subset\tilde{M}\subset\Sigma(\kappa)\times\mathbb{R}$ is a mirror curve in a vertical plane if and only if its sister curve $c\subset M\subset E(\kappa+4H^2,H)$ is a horizontal geodesic.
\item Similarly, $\tilde{c}$ is a horizontal mirror curve if and only if $c$ is a vertical geodesic.
\end{enumerate}
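To sketch one direction of (1): if $c$ is a horizontal geodesic, then both its normal curvature and its geodesic curvature vanish. By Lemma~\ref{l:curvtor} the sister curve $\tilde{c}$ satisfies
\[
\tilde{t}=k=0,
\]
and, the sister isometry preserving geodesic curvature, $\tilde{c}$ is a curvature line with vanishing geodesic curvature, i.e. a mirror curve in a totally geodesic plane; that this plane is vertical follows from the relation between the vertical vector fields in Theorem~\ref{t:correspondence}. The complete arguments are given in the cited references.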
In our construction we consider the \emph{fundamental piece} of a periodic mc $H$-surface. The complete surface is then generated by reflections. The fundamental piece is simply connected and bounded by mirror curves $\tilde{c}_i$ in vertical and horizontal planes. A fundamental piece bounded by $n$ arc-length parametrized mirror curves $\tilde{c}_i$ defines the following geometric quantities:
\begin{itemize}
\item The \emph{length} $\tilde{l}_i$ of the mirror curve $\tilde{c}_i$, also denoted by $l(\tilde{c}_i)$ or $\abs{\tilde{c}_i}$.
\item The \emph{vertex angle} $\tilde{\phi}_i$ of two edges $\tilde{c}_i$ and $\tilde{c}_{i+1}$, which satisfies $$\cos\tilde{\phi}_i=-\tilde{c_i}'(\tilde{l}_i)\cdot\tilde{c}_{i+1}'(0).$$
\end{itemize}
The minimal sister surface is bounded by horizontal and vertical geodesics. Since the surfaces are isometric, we have
\begin{equation}\label{e:equality}
\tilde{l}_i=l_i,\qquad\tilde{\phi}_i=\phi_i.
\end{equation}
\begin{itemize}
\item The total \emph{turn} angle of the normal $\tilde{\nu}$ $$\operatorname{turn}_i=\operatorname{turn}_{\tilde{c}_i}(\tilde{\nu})\coloneqq\int_{\tilde{c}_i}\tilde{k},$$ which measures the total turn of the normal relative to a parallel field.
\end{itemize}
Accordingly, we want to measure the rotational angle of the normal $\nu$ along a curve $c$ in $E(\kappa+4H^2,H)$. We detect the \emph{twist} of the normal with respect to an appropriate vector field $X$
\[\operatorname{twist}_c(\nu,X)\coloneqq\int\limits_c \langle\nabla_{c'}\nu,c'\times\nu\rangle-\langle\nabla_{c'}X,c'\times X\rangle.\]
\begin{definition}\label{d:twist}
\begin{enumerate}
\item\label{d:verttw} Let $c\subset M$ be a vertical geodesic in $E(\kappa+4H^2,H)$, then the twist is defined by the total rotation speed of $\nu$ with respect to a basic vector field $\tilde{e}$, i.e. the horizontal lift of any vector field $e$ on $\Sigma(\kappa+4H^2)$:
\[\operatorname{twist}_v\coloneqq\operatorname{twist}_c(\nu,\tilde{e})=\int\limits_c \langle\nabla_{c'}\nu,c'\times\nu\rangle-\langle\nabla_{c'}\tilde{e},c'\times \tilde{e}\rangle.\]
\item Let $c\subset M$ be a horizontal geodesic in $E(\kappa+4H^2,H)$, then the twist is defined by the total rotation speed of $\nu$ with respect to the vertical vector field $\xi$:
\[\operatorname{twist}_h\coloneqq\operatorname{twist}_c(\nu,\xi)=\int\limits_c \langle\nabla_{c'}\nu,c'\times\nu\rangle-\langle\nabla_{c'}\xi,c'\times \xi\rangle.\]
\end{enumerate}
\end{definition}
With this definition $\operatorname{twist}_v$ measures the angle $\measuredangle\left(\operatorname{d}\pi_c(\nu(0)),\operatorname{d}\pi_{c}(\nu(l))\right)$ in the projection. We drop the index when it is clear whether the geodesic is vertical or horizontal.
\begin{lemma}\label{l:torsionangle}
\begin{enumerate}
\item Let $c\subset M$ be a vertical geodesic in $E(\kappa+4H^2,H)$, whose sister is a horizontal mirror curve $\tilde{c}\subset\tilde{M}$ in $E(\kappa,0)$. Then $$\operatorname{twist}_v=\int\limits_c t+H l(c)\quad\mbox{and}\quad\tilde{k}=2H-\operatorname{twist}_v'.$$
\item Let $c\subset M$ be a horizontal geodesic in $E(\kappa+4H^2,H)$, whose sister is a vertical mirror curve $\tilde{c}\subset\tilde{M}$ in $E(\kappa,0)$. Then $$\operatorname{twist}_h=-\operatorname{turn}.$$
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Without loss of generality $c'=\xi$. Let $(c',J c',\nu)$ be positively oriented. Let $J$ and $R$ denote $\pi/2$-rotations in the tangent bundle $\operatorname{T} M$ and the horizontal plane $\operatorname{d} \pi^{-1}(\operatorname{T} \Sigma(\kappa+4H^2))$, respectively. Then $Jc'$ and $\nu$ are horizontal with $-R\nu=Jc'$.
We denote by $E$ a unit basic vector field. We have
\begin{align*}
\operatorname{twist}_v &=\int\limits_c\left(\langle\nabla_{c'}\nu,c'\times\nu\rangle-\langle\nabla_{c'}E,c'\times E\rangle\right)\\
&=\int\limits_c\left(\langle\nabla_{c'}\nu,R\nu\rangle-\langle\nabla_{c'}E,R E\rangle\right)\\
&=\int\limits_c\left(-\langle\nabla_{c'}\nu,J c'\rangle-\langle\nabla_{\xi}E,R E\rangle\right)\\
&=\int\limits_c\left(t+H\langle RE,R E\rangle\right)\\
&=\int\limits_c t+Hl(c).
\end{align*}
Lemma~\ref{l:curvtor} implies that $\tilde{k}=2H-\operatorname{twist}_v'$.
\item The rotational angle of the tangent plane $\operatorname{T} _{c(t)}M$ is measured by the rotational speed with respect to $\xi$:
\[
\operatorname{twist}_h'=\langle\nabla_{c'}\nu,\underbrace{c'\times\nu}_{=-Jc'}\rangle-\langle\underbrace{\nabla_{c'}\xi}_{=-HRc'},\underbrace{c'\times\xi}_{=-Rc'}\rangle=t-H=-\tilde{k}.
\]
By integrating along $c$ we get \[\operatorname{twist}_h=-\operatorname{turn}.\]
\end{enumerate}
\end{proof}
\begin{example}
We compute the torsion and the twist of a vertical geodesic $c$ in a vertical plane in $E(\kappa,\tau)$. The geodesic $c$ is a fibre of the Riemannian fibration. Without loss of generality $c'=\xi$, then we get
\begin{align*}
t&=-\nabla_{c'}\nu\cdot Jc'\\
&=\tau R\nu\cdot Jc'=-\tau.
\end{align*}
Namely, the torsion of a vertical geodesic is the negative of the bundle curvature.
Moreover, for the twist we get, as expected
\[
\operatorname{twist}=\int\limits_c t+\tau l(c)=0.
\]
Hence, with respect to parallel fields, the normal does not rotate. Equivalently the normal is constant in the projection.
\end{example}
We shall apply Lemma~\ref{l:torsionangle} to obtain detailed information about vertical geodesics in $\operatorname{Nil}$ and their horizontal sister curves in $\H^2\times\mathbb{R}$. This strategy is due to Laurent Mazet, to whom the author is very grateful.
For a curve $\tilde{c}\subset\H^2$, consider the unique horocycle foliation $\mathcal{F}_{\tilde{c}}$ given by the horocycle that is tangent to $\tilde{c}$ in $\tilde{c}(0)$ and has curvature $1$ with respect to the normal $n$ of $\tilde{c}$. Let $\theta$ be the angle defined by $\tilde{c}^{\prime}=\cos\theta e_1+\sin\theta e_2$, where the orthonormal frame $(e_1, e_2)$ is given by the tangent and minus the normal of the horocycles. By the definition of the considered foliation we have $\theta(0)=0$ and $n=\sin\theta e_1-\cos\theta e_2.$
\begin{figure}[h]
\begin{center}
\psfrag{0}{$y=0$}
\psfrag{1}{$e_1$}
\psfrag{2}{$e_2$}
\psfrag{n}{$n$}
\psfrag{c}{$\tilde{c}^{\prime}$}
\psfrag{t}{$\theta$}
\includegraphics[width=7cm]{foliation2.eps}
\end{center}
\caption{Horocycle foliation $\mathcal{F}_{\tilde{c}}$ given by $\tilde{c}$.}\label{f:foliation}
\end{figure}
\begin{lemma}\label{l:curv}
The curvature of $\tilde{c}$ is given by $$\tilde{k}=\cos\theta-\theta'.$$
\end{lemma}
\begin{proof}
A simple computation gives:
\begin{align*}
\nabla_{\tilde{c}^{\prime}}\tilde{c}^{\prime}=&\nabla_{\cos\theta e_1+\sin\theta e_2}(\cos\theta e_1+\sin\theta e_2)\\
=&-\theta'\sin\theta e_1+\cos\theta\nabla_{(\cos\theta e_1+\sin\theta e_2)}e_1+\theta'\cos\theta e_2+\sin\theta\nabla_{(\cos\theta e_1+\sin\theta e_2)}e_2\\
=&-\theta'\sin\theta e_1+\cos^2\theta\underbrace{\nabla_{e_1}e_1}_{=-e_2}+\cos\theta\sin\theta \underbrace{\nabla_{e_2}e_1}_{=0}\\
&+\theta'\cos\theta e_2+\sin\theta\cos\theta\underbrace{\nabla_{e_1}e_2}_{=e_1}+\sin^2\theta \underbrace{\nabla_{e_2}e_2}_{=0}\\
=&-\theta'\sin\theta e_1-\cos^2\theta e_2+\theta'\cos\theta e_2+\sin\theta\cos\theta e_1\\
=&(-\theta'+\cos\theta)\sin\theta e_1-(-\theta'+\cos\theta)(\cos\theta e_2)\\
=&(\cos\theta-\theta')n.\qedhere
\end{align*}
\end{proof}
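As a sanity check (not needed in the sequel): if $\tilde{c}$ runs along a leaf of the foliation, then $\theta\equiv 0$ and Lemma~\ref{l:curv} gives
\[
\tilde{k}=\cos 0-0=1,
\]
the curvature of a horocycle, as it must be.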
We want to control the curve $\tilde{c}$ by means of its sister $c$ in $M\subset E(0,1/2)=\operatorname{Nil}$. Let $\alpha(t)=\operatorname{twist}_v$ measure the twist in $c(t)$ with respect to a basic vector field chosen such that $\alpha(0)=0$, i.e. $\tilde{e}(c(0))=\nu(c(0))$.
\begin{proposition}\label{p:anglecompare}
Let $\tilde{c}\subset\H^2\times\mathbb{R}$ be the horizontal sister curve of a vertical geodesic $c$ in $E(0,1/2)=\operatorname{Nil}$. The angle $\theta$ given by the horocycle foliation $\mathcal{F}_{\tilde{c}}$ and the rotational speed $\alpha'$ of the minimal surface normal along $c$ are related as follows: \[\theta'=\alpha'+\cos\theta-1,\quad\text{ and }\quad \theta\leq\alpha,\]
where $\alpha(t)$ measures the angle between $\nu$ and any parallel field chosen such that $\alpha(0)=0$.
\end{proposition}
\begin{proof}
We have seen in Lemma~\ref{l:torsionangle} that $\tilde{k}=1-\alpha'$. Together with Lemma~\ref{l:curv} we get $$\theta'=\alpha'+\cos\theta-1,$$ hence $\theta'\leq\alpha'$. In particular, along the curves $c$ and $\tilde{c}$ we have $$\int_{\tilde{c}}\theta^{\prime}\leq\int_c\alpha^{\prime}\Rightarrow \theta(t)\leq\alpha(t).\qedhere$$
\end{proof}
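To illustrate the differential equation of Proposition~\ref{p:anglecompare} in a model case (a hypothetical simplification, only for intuition): if the rotational speed were constant, $\alpha'\equiv\omega\in(0,2)$, then
\[
\theta'=\omega+\cos\theta-1
\]
admits the constant solution $\theta\equiv\arccos(1-\omega)$, and the solution with $\theta(0)=0$ increases monotonically towards it; in particular $\theta$ then stays bounded away from $\pi$.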
\section{Reference surfaces}\label{S:refsurfaces}
\subsection{Horizontal helicoids in $\operatorname{Nil}_3$}\label{S:conormalhoriheli}
In the construction of the $k$-noid with genus $1$ from Section~\ref{S:genus1}, we use as a barrier the horizontal helicoid $H_\alpha(u,v)$ in $\operatorname{Nil}=\left(\mathbb{R}^3,\operatorname{d} x_1^2+\operatorname{d} x_2^2+(\operatorname{d} x_3-x_1\operatorname{d} x_2)^2\right)$ constructed by Daniel and Hauswirth~\cite[Section 7]{DH2009}. For $\alpha>0$ the coordinates of the helicoid $H_\alpha$ are given in terms of the solution $\psi$ of the ordinary differential equation $\psi'^2=\alpha^2+\cos^2\psi$, $\psi(0)=0$:
\begin{align*}
x_1&=\frac{\sinh(\alpha v)}{\alpha(\psi'(u)-\alpha)}\cos\psi(u)\\
x_2&=-G(u)\\
x_3&=\frac{-\sinh(\alpha v)}{\alpha(\psi'(u)-\alpha)}\sin\psi(u),
\end{align*}
where $G$ is defined by $G'(u)=1/(\psi'(u)-\alpha)$, $G(0)=0$. In~\cite{DH2009} it was shown that the function $\psi$ is a decreasing odd bijection. There exists a unique $U\coloneqq U(\alpha)>0$ with $\psi_\alpha(U)=-\pi/2$, $\psi_\alpha(-U)=\pi/2$. To visualise the surface, we look at three curves in the helicoid:
\begin{align*}
H_\alpha(-U,v)&=\left(0,G(U),\frac{\sinh(\alpha v)}{\alpha(\alpha-\psi'(-U))}\right)\\
H_\alpha(0,v)&=\left(\frac{\sinh(\alpha v)}{\alpha(\psi'(0)-\alpha)},0,0\right)\\
H_\alpha(U,v)&=\left(0,-G(U),\frac{\sinh(\alpha v)}{\alpha(\psi'(U)-\alpha)}\right).
\end{align*}
The rulings $H_\alpha (\pm U,v)$ are vertical and define the width $a\coloneqq G(-U)-G(U)$ of the helicoid. The width is well-defined for the whole helicoid, since $\psi$ is periodic: $\psi(u+2U)=\psi(u)-\pi$.
\begin{figure}[h]
\begin{center}
\psfrag{1}{$x_1$}
\psfrag{2}{$x_2$}
\psfrag{3}{$x_3$}
\psfrag{+}{$-G(U)$}
\psfrag{-}{$-G(-U)$}
\includegraphics[width=5.6cm]{horiheli.eps}\end{center}
\caption{Sketch of a fundamental piece of the horizontal helicoid from Daniel and Hauswirth in $\operatorname{Nil}$, $v\leq0$.}\label{f:horiheli}
\end{figure}
To ensure that we can consider a helicoid $H_\alpha$ for a given width $a$, we need the following lemma:
\begin{lemma}\label{l:a0alphainf}
For $a>0$, there exists $\alpha>0$ such that $$-2G(U(\alpha))=a,$$ where $U(\alpha)$ is defined by $\psi_\alpha(U)=-\pi/2$. Furthermore, for $a\to 0$ we have $\alpha\to\infty$.
\end{lemma}
\begin{proof}
The idea of the proof is to show that $G$ is a bijection on $\mathbb{R}$ first. The second step is to show that $U$ is a continuous map to $\mathbb{R}_+$.
Step 1: The function $G$ is odd, so $G(-U)-G(U)=-2G(U)$. For $a>0$ we show that there exists exactly one $U>0$ such that $-2G(U)=a$: From $G'=1/(\psi'-\alpha)<0$ we know that $G$ is a decreasing function on $\mathbb{R}$. If we assume that $G$ is bounded, i.e. $G(u)\to g\in\mathbb{R}$ for $u\to\infty$, then $G'(u)\to 0$ for $u\to\infty$. But this implies $\psi'(u)-\alpha\to-\infty$ for $u\to\infty$, which is a contradiction because $\psi'^2=\alpha^2+\cos^2\psi\leq\alpha^2+1$ is bounded. Therefore, $G$ is a decreasing bijection on $\mathbb{R}$.
Step 2: We show for $U>0$ the existence of $\alpha>0$ such that the solution $\psi_\alpha$ of \[\psi'^2_{\alpha}=\alpha^2+\cos^2\psi_\alpha,\quad\psi_\alpha(0)=0\] satisfies $\psi_\alpha(U)=-\pi/2$.
By applying separation of variables to the ODE $\psi'_\alpha=-\sqrt{\alpha^2+\cos^2\psi},\,\psi(0)=0$, the solution is given by the inverse of the elliptic integral of the first kind
\[\int\limits_0^\psi -\frac{1}{\sqrt{\alpha^2+\cos^2\theta}}\operatorname{d}\theta.\]
We are interested in $U(\alpha)$ given by $\psi_\alpha(U)=-\pi/2$:
\begin{align*}
U(\alpha)&=\int\limits_0^{-\pi/2} -\frac{1}{\sqrt{\alpha^2+\cos^2\theta}}\operatorname{d}\theta\\
&=\int\limits_0^{\frac\pi 2} \frac{1}{\sqrt{\alpha^2+\cos^2\theta}}\operatorname{d}\theta\\
&=\frac{1}{\sqrt{\alpha^2+1}}\int\limits_0^{\frac\pi 2} \frac{1}{\sqrt{1-\frac{1}{\alpha^2+1}\sin^2\theta}}\operatorname{d}\theta\\
&=\frac{K\left(1/\sqrt{\alpha^2+1}\right)}{\sqrt{\alpha^2+1}},
\end{align*}
where $K(k)=\int\limits_0^{\pi/2}\frac{\operatorname{d}\theta}{\sqrt{1-k^2\sin^2\theta}}=\int\limits_0^1\frac{\operatorname{d} t}{\sqrt{(1-t^2)(1-k^2t^2)}}$ denotes the complete elliptic integral of the first kind, defined for $k\in[0,1)$, with the special values $K(0)=\pi/2$ and $\lim_{k\to 1}K(k)=\infty$. For $\alpha\to0$ we have $K\left(1/\sqrt{\alpha^2+1}\right)\to\infty$ and for $\alpha\to\infty$ we have $K\left(1/\sqrt{\alpha^2+1}\right)\to\pi/2$.
Therefore, $U$ is continuous because $K$ is. Moreover, for $\alpha\to0$ we have $U(\alpha)\to\infty$ and for $\alpha\to\infty$ we have $U(\alpha)\to0$.
Together with Step 1, this concludes the first part of the lemma. For the second part, notice that $a\to0$ implies $U\to0$, since $G$ is a decreasing function with $G(0)=0$. Furthermore, $U(\alpha)\to0$ implies $\alpha\to\infty$, since $K$ is bounded below by $\pi/2$.
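For illustration we record some values and asymptotics of $U$ (rounded, and easily derived from the formula above): $K(1/\sqrt{2})\approx 1.8541$, hence $U(1)\approx 1.311$; moreover $U(\alpha)\sim\pi/(2\alpha)$ as $\alpha\to\infty$, and $U(\alpha)\sim\ln(4/\alpha)\to\infty$ as $\alpha\to 0$, using the classical asymptotics $K(k)\sim\ln\left(4/\sqrt{1-k^2}\right)$ for $k\to1$.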
\end{proof}
We want to express the height $b$ of the conormal $\eta$ of the helicoid $H_\alpha$ along the vertical rulings depending on the width $a$ and the angle $\phi$ in the horizontal plane $\operatorname{span}\{\partial x_1,\partial x_2+x_1\partial x_3\}$ given by
\[\cos\phi=\langle\partial x_1,\eta\rangle.\]
The conormal along the vertical ruling \[H_\alpha(-U,v)=\left(0,G(U),\frac{\sinh(\alpha v)}{\alpha(\alpha-\psi'(-U))}\right)\] is given by
\[\frac{\partial H_\alpha}{\partial u}(-U,v)=\left(\frac{-\sinh(\alpha v)\psi'(-U)}{\alpha(\psi'(-U)-\alpha)},-G'(-U),0\right),\quad
\eta=\frac{\partial_u H_\alpha}{\norm{\partial_u H_\alpha}}.\]
\nu=\frac{\partial_u H_\alpha}{\norm{\partial_u H_\alpha}}.\] The conormal along $H_\alpha(-U,v)$ is horizontal, since $x_1=0$. We may express $\phi$ in terms of $(\alpha,U=U(\alpha))$. By~\cite{DH2009} we have \[\psi'(-U)=G'(-U)\cos^2\psi(-U)-\alpha=-\alpha\quad \text{and}\quad G'(-U)=\frac{1}{\psi'(-U)-\alpha}=\frac{-1}{2\alpha}.\] Therefore \[\eta=\frac{2\alpha^2}{\sqrt{\alpha^2(\sinh^2(\alpha v)+1)}}\partial_u H_\alpha(-U,v)=\frac{2\alpha}{\cosh(\alpha v)}\partial_u H_\alpha(-U,v)\] and
\[\cos\phi=\frac{2\alpha^2\sinh(\alpha v)}{-2\alpha^2\cosh(\alpha v)}=\tanh(-\alpha v),\,v\leq0\Leftrightarrow v=\frac{-1}{2\alpha}\ln\left(\frac{1+\cos\phi}{1-\cos\phi}\right).\]
Using this we get the height $b$ of the conormal $\eta$, in terms of $\phi$ and $(\alpha,U(\alpha))$ for fixed $\alpha>0$, as follows:
\[b=\frac{\sinh\left(\frac{-1}{2}\ln\left(\frac{1+\cos\phi}{1-\cos\phi}\right)\right)}{2\alpha^2}\leq0.\]
One readily sees that $\phi\to0$ implies $b\to-\infty$, and $b\to0$ when $\phi\to\pi/2$.
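In fact, writing $q=\frac{1+\cos\phi}{1-\cos\phi}$, a short computation (recorded here for convenience) gives
\[
\sinh\left(-\tfrac12\ln q\right)=-\frac{q-1}{2\sqrt{q}}=-\frac{\cos\phi}{\sin\phi},
\qquad\text{hence}\qquad
b=-\frac{\cot\phi}{2\alpha^2},
\]
which makes both limiting behaviours evident.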
\begin{remark}
For each $\alpha>0$ the fundamental piece of the helicoid $H_\alpha$ is a section of the Riemannian fibration $\pi\colon\operatorname{Nil}_3\to\mathbb{R}^2$ defined on
$\mathbb{R}\times\left(-G(-U),-G(U)\right)\subset\mathbb{R}^2$.
\end{remark}
\subsection{Constant mean curvature $k$-noids in $\Sigma(\kappa)\times\mathbb{R}$}\label{S:knoid}
In~\cite[Section 5]{GK2010} Gro\ss{}e-Brauckmann and Kusner sketched the construction of a one-parameter family of surfaces with constant mean curvature $H\geq0$ in $\Sigma(\kappa)\times\mathbb{R}$ with $\kappa\leq0$, which have $k$ ends, dihedral symmetry and genus zero.
Their idea was to consider a sequence of compact Plateau solutions $M_{(r,s)}$, which represent sections in $E(\kappa+4H^2,H)$. Each minimal disc $M_{(r,s)}$ is bounded by horizontal and vertical geodesics, see Figure~\ref{f:knoid}. Let $\Gamma_{(r,s)}$ denote the boundary. The minimal surface $M_{(r,s)}$ is a section of the trivial line bundle \[\pi\colon\Omega_r\subset E(\kappa+4H^2,H)\to\Delta_r,\] where $\Omega_r\coloneqq\pi^{-1}(\Delta_r)$ is a mean convex domain, which is defined as the preimage of a triangle $\Delta_r\subset\Sigma(\kappa+4H^2)$. The triangle $\Delta_r$ is given by a hinge of lengths $a$ and $r$, enclosing an angle $\pi/k$. The parameter $a$ determines the length of the horizontal edge in the boundary of $M$, it corresponds to the necksize in the cmc sister.
\begin{figure}[h]\begin{center}
\psfrag{a}{$a$}
\psfrag{r}{$r$}
\psfrag{pr}{$\pi$}
\psfrag{s}{$s$}
\psfrag{phi}{$\pi/k$}
\includegraphics[width=4.5cm]{knoid.eps}
\caption{The boundary of the minimal disc in $E(\kappa+4H^2,H)$.}\label{f:knoid}\end{center}\end{figure}
In order to show that the sequence of compact minimal surfaces $M_{(r,s)}$ has a limit $M=M(a,k)$ with infinite boundary $\Gamma$, such that $M$ is a section projecting to $\Delta\coloneqq\lim_{r\to\infty}\Delta_r$ and $M$ extends without branch points by Schwarz reflection about the edges of $\Gamma$, one has to show that there exist barriers. The proof is analogous to the one of Theorem~\ref{t:plateauinfinite} below. The complete cmc surface is obtained by considering the sister and using Schwarz reflection.
\section{Constant mean curvature $k$-noids with genus $1$}\label{S:genus1}
We construct surfaces with mc $1/2$ in $\H^2\times\mathbb{R}$ with $k$ ends and genus $1$. Each surface has $k$ vertical symmetry planes and one horizontal symmetry plane, where $k\geq 3$. The idea is to solve a Plateau problem of disc type in $\operatorname{Nil}_3(\mathbb{R})$, where the disc is bounded by geodesics. Its sister disc in $\H^2\times\mathbb{R}$ generates an mc $1/2$ surface by reflections about horizontal and vertical planes. The problem is to define the geodesic contour such that the sister has the desired properties.
\subsection{Boundary construction}
In $\H^2\times\mathbb{R}$ the desired boundary is connected and consists of four mirror curves in three symmetry planes; the two vertical symmetry planes form an angle $\pi/k$, see Figure~\ref{f:genus1contour}.
\begin{figure}[h]
\begin{center}
\psfrag{c1}{$c_1$}
\psfrag{c2}{$c_2$}
\psfrag{a}{$a$}
\psfrag{n}{$n$}
\psfrag{pr}{$\pi$}
\psfrag{phi}{$\phi$}
\psfrag{pi}{$\pi/k$}
\psfrag{ct1}{$\tilde{c}_1$}
\psfrag{ct2}{$\tilde{c}_2$}
\psfrag{p}{$\hat{p}_1$}
\includegraphics[width=0.5\textwidth]{genus1cmccontour2.eps}\hspace{1cm}
\includegraphics[width=0.4\textwidth]{genus1mincontournew.eps}\end{center}
\caption{The desired boundary of the $1/2$-surface in $\H^2\times\mathbb{R}$ and its minimal sister surface in $\operatorname{Nil}$.}\label{f:genus1contour}
\end{figure}
The sister surface is bounded by a geodesic contour $\Gamma\coloneqq\Gamma(a,b,\phi)$: The horizontal mirror curves correspond to vertical geodesics and the vertical mirror curves correspond to horizontal geodesics, their projections enclose an angle $\phi\in\left(0,\pi\right)$.
The length $a>0$ of the finite horizontal geodesic $c_1$ determines the asymptotic parameter of the ends of the $k$-noid. The length $b>0$ of the vertical finite edge $c_2$ defines the size of the hole of the $k$-noid: For $b\to 0$ the $k$-noid is close to the non-degenerate $k$-noid from above, cf. Section~\ref{S:knoid}. The angle $\phi$ measures the curvature of $\tilde{c}_2$. The parameters will be determined by solving the period problems.
To construct a minimal surface that is bounded by $\Gamma$, we truncate the infinite contour $\Gamma$ and get closed Jordan curves $\Gamma_n$, $n>0$. We solve the Plateau problem for the closed Jordan curves and obtain a sequence of compact minimal surfaces. Afterwards we show there exists a minimal surface as a limit.
To define the closed Jordan curves $\Gamma_n$ we consider a geodesic triangle $\Delta_n$ in the base manifold $\mathbb{R}^2$ of the Riemannian fibration of $\operatorname{Nil}_3(\mathbb{R})$: Two edges of lengths $a$ and $n$ form an angle $\phi$ and intersect in a point $\hat{p}_1$. We lift $\hat{p}_1$ and the corresponding edge of length $n$ horizontally and label the vertices with $p_1$ and $p_6$. Then we add a vertical arc of length $b$ at $p_1$ in fibre-direction $\xi$ and label its endpoint with $p_2$. We lift the edge of length $a$ of the base triangle in $\mathbb{R}^2$ horizontally, such that it starts in $p_2$; the other vertex is labelled by $p_3$. We add another vertical edge in fibre-direction $\xi$: it starts in $p_3$, has length $n^2$ and its end vertex is called $p_4$. By lifting the remaining edge of the base triangle horizontally, such that it starts in $p_4$ and inserting another vertical edge with endpoints $p_5$ and $p_6$ we complete the special Jordan curve $\Gamma_n$.
The vertical distances do not sum up: $d(p_1,p_2)+d(p_3,p_4)\ne d(p_5,p_6)$, but we claim that $\overline{p_6 p_5}$ is in fibre direction. We consider an arc-length parametrization $\gamma$ of $\partial\Delta_n\subset\mathbb{R}^2$ which runs counter-clockwise. By Lemma~\ref{l:vertdistances} the vertical distance of its horizontal lift is $2\tau\operatorname{area}(\Delta_n)=\operatorname{area}(\Delta_n)$, since $\tau=1/2$ in $\operatorname{Nil}_3(\mathbb{R})$. By construction we get
\[ d(p_6,p_5)=b+n^2-\operatorname{area}(\Delta_n).\]
Since $\operatorname{area}(\Delta_n)$ grows linearly, there exists $N\in\mathbb{N}$ such that for all $n\geq N$: $d(p_6,p_5)>0$ and $d(p_6,p_5)\to\infty$ for $n\to\infty$.
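Explicitly, since the base is the Euclidean plane and $\Delta_n$ is a hinge triangle with sides $a$, $n$ and enclosed angle $\phi$, we have $\operatorname{area}(\Delta_n)=\tfrac12 an\sin\phi$, so
\[
d(p_6,p_5)=b+n^2-\tfrac12 an\sin\phi,
\]
which is the quantitative form of the statement above.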
The polygon $\Gamma_n$ has six right angles; its projection $\Delta_n$ is convex for every $n$ and has one fixed angle $\phi\leq\pi$ independent of $n$. We define a mean convex set $\Omega_n\coloneqq\pi^{-1}(\Delta_n)\subset \operatorname{Nil}_3(\mathbb{R})$.
For $n\to\infty$ we have $\Gamma_n\to\Gamma$ in the sense that $\Gamma_n\cap K_x=\Gamma\cap K_x$ for any compact neighbourhood $K_x\subset\operatorname{Nil}_3$ of a point $x\in\Gamma$ and $n$ large enough.
\subsection{Plateau solutions}\label{SS:Plateau}
To control the Plateau solution for $\Gamma$, we solve the Plateau problem for $\Gamma_n$ first:
\begin{lemma}\label{l:plateaufinite}
The special Jordan curve $\Gamma_n\subset \operatorname{Nil}_3(\mathbb{R})$ bounds a unique Plateau solution $M_n$. It is a section over $\Delta_n$ and extends smoothly without branch points by Schwarz reflection about the edges of $\Gamma_n$.
\end{lemma}
\begin{proof}
Since $\Delta_n$ is convex and $\partial\Delta_n$ consists of geodesics, the preimage $\pi^{-1}(\Delta_n)$ is a mean convex set: the preimage of each geodesic of $\partial\Delta_n$ is minimal. The intersection $\Omega_n$ of $\pi^{-1}(\Delta_n)$ with the two horizontal halfspaces defined by the horizontal umbrellas in $p_1$ and $p_4$ (i.e. the exponential images of the horizontal spaces $(\operatorname{T}_{p_1}\operatorname{Nil}_3)^h$ and $(\operatorname{T}_{p_4}\operatorname{Nil}_3)^h$ respectively) as its boundaries is compact. Moreover, by construction we have $\Gamma_n\subset\partial\Omega_n$. Therefore, the solution of the Plateau problem exists and is embedded by~\cite{MY1982}. Moreover, by Section~\ref{S:Plateau} the solution $M_n$ is the unique section of the trivial line bundle $\pi\colon\Omega_n\to\Delta_n$. Proposition~\ref{P:boundarybranch} implies that it extends as a smooth immersion across $\Gamma_n$ by Schwarz reflection, since at each vertex the angle is $\pi/2$.
\end{proof}
We are actually interested in an infinite Plateau solution; we construct it as a limit. Define $\Delta\coloneqq\bigcup\limits_{n \in \mathbb{N}}\Delta_n$.
\begin{theorem}\label{t:plateauinfinite}
There exists a unique minimal surface $M(a,b,\phi) \subset \operatorname{Nil}_3(\mathbb{R})$ with $\partial M=\Gamma$, which is a section on $\Delta$ and extends without branch points by Schwarz reflection about its edges for all $a,b>0$ and $\phi\in(0,\pi)$.
\end{theorem}
\begin{proof}
Consider the sequence of minimal sections $M_n=(x,u_n(x))$ defined on $\Delta_n$ by Lemma~\ref{l:plateaufinite}. By the maximum principle $u_n$ is a monotone increasing sequence on $\Delta_k$ for $n\geq k$. We claim that the sequence is uniformly bounded on each compact subset $K\subset\Delta$. By~\cite{RST2010} we then obtain a gradient estimate in any $x\in\Delta'\subset K$, depending on the distance of $x$ to the boundary and on the upper bound; this implies compactness.
To prove the claim we consider $k\in\mathbb{N}$ such that $K\subset\Delta_k$ and a horizontal helicoid $H_k\coloneqq H_\alpha$, where $\alpha$ depends on the parameters $(a,b,\phi)$ of the sequence $M_n$. We orient $H_k$ such that one of its vertical rulings coincides with the vertical geodesic of $\Gamma_n$ of length $n^2$. Moreover, if we consider the Plateau solution $M_k$ on $\Delta_k$ and start with the helicoid at height $k^2$ such that $\pi(H_k)$ is bounded by $\pi(\overline{p_4p_5})$ from one side, then we can move the helicoid downwards up to a height $h_k$, since $M_k$ is a section and $H_k$ remains a barrier from above. Now consider the sequence of minimal sections $M_n$; by the maximum principle there is no point of contact. Hence, $M_n$ is uniformly bounded on each compact domain $K$.
By diagonalization we obtain a subsequence, call it $u_n$ again, converging to some minimal section $u$ on $\Delta$, the convergence is uniform on every compact subset of $\Delta$. The surface $M=(x,u(x))$ is a minimal surface of disc-type continuous up to the boundary $\Gamma$. By Proposition~\ref{P:boundarybranch} $M$ is an immersion that extends without branch points by Schwarz reflection. The surface is unique by Proposition~\ref{p:uniquesection}.
\end{proof}
For $b=0$ the proof still holds, which proves the existence of the minimal surface described in Section~\ref{S:knoid}.
\begin{remark}\label{R:monotone}
For $(a,\phi)$ fixed, the maximum principle implies that the sequence $(M(a,b,\phi))_b$ is monotone in $b$, in the sense that $M_1\coloneqq M(a,b_1,\phi)$ is a barrier from above for $M_2\coloneqq M(a,b_2,\phi)$ for $b_1<b_2$. To see this we orient $M_1$ and $M_2$ such that their infinite vertical geodesics in the boundaries coincide in the end, their interiors project to disjoint domains in the base and their infinite horizontal geodesics in the boundaries are at heights $\epsilon>0$ and $-b_2$, respectively. In the projection the two finite horizontal geodesics form an angle $\alpha=\pi-\phi>0$. If we increase this angle, i.e. we rotate one surface towards the other, it is clear that there is no inner point of contact. Furthermore, the surfaces cannot intersect for any angle $\tilde{\alpha}>0$, since this would lead to two graphs with the same boundary. If we rotate the surface further such that they are graphs over the same domain, there is still no intersection, since the normals rotate monotonically along the vertical geodesic. The same argument shows that we can translate $M_1$ down without an intersection, which proves the claim.
\end{remark}
\subsection{Period problems}
To construct an mc $1/2$ surface with genus $1$ and certain symmetries we have to solve two period problems. The first period is given by the vertical distance of the two horizontal mirror curves. The second period is angular and is determined by the geometry of the finite horizontal curve $\tilde{c}_2$.
To solve the first period problem for the mc $1/2$ surface $\tilde{M}\subset \H^2\times\mathbb{R}$, i.e. to construct a minimal surface $M\subset \operatorname{Nil}_3(\mathbb{R})$ such that the two horizontal components of its sister surface lie in the same horizontal plane, we consider the mirror curve in the vertical plane with finite length $a$ in $\partial\tilde{M}$, and call it $\tilde{c}_1$. The period is given by $p=\int \langle\tilde{c}'_1,\tilde{\xi}\rangle_{\H^2\times\mathbb{R}}$, where $\tilde{\xi}$ is the vertical vector field of $\H^2\times\mathbb{R}$. Since we have a first order description for the vertical parts of vector fields, we consider the vector field $\tilde{T}$ on $\tilde{M}$ given by $\tilde{\xi}-\langle\tilde{\xi},\tilde{\nu}\rangle\tilde{\nu}$, where $\tilde{\nu}$ denotes the normal of $\tilde{M}$. It is the tangential projection of the vertical vector field and rotates by $\pi/2$ in the tangent plane under conjugation:
\[
\operatorname{d} f^{-1}(T)=J\operatorname{d}\tilde{f}^{-1}(\tilde{T}),
\]
where $f$ and $\tilde{f}$ denote the corresponding parametrizations of $M$ and $\tilde{M}$. Therefore, we have an analogous formulation of the period on $M$:
\begin{align*}
p(\tilde{M})&=\int\limits_{\tilde{c}_1} \langle\tilde{c}'_1,\tilde{\xi}\rangle_{\H^2\times\mathbb{R}}=\int \langle\operatorname{d}\tilde{f}(\gamma'),\tilde{T}\rangle_{\H^2\times\mathbb{R}}\\
&=\int \langle\operatorname{d}\tilde{f}(\gamma'),\operatorname{d}\tilde{f}(J^{-1}\operatorname{d} f^{-1}(T))\rangle_{\H^2\times\mathbb{R}}=\int \langle J\gamma',\operatorname{d} f^{-1}(T)\rangle\\
&=\int\limits_{c_1}\langle\eta,\xi\rangle_{\operatorname{Nil}_3(\mathbb{R})}=p(M).
\end{align*}
As seen above, we have a Plateau solution $M(a,b,\phi)$ for all $a,b>0$ and $\phi\in(0,\pi)$. The first period problem is solvable for each $a>0$ and $\phi\in\left(0,\pi/2\right)$:
\begin{proposition}\label{l:period1}
For each $a>0$ and $\phi\in\left(0,\pi/2\right)$ there exists $b(a,\phi)>0$ such that $p(M(a,b(a,\phi),\phi))=0$.
\end{proposition}
\begin{proof}
Let $a>0$ and $\phi\in\left(0,\pi/2\right)$ be fixed. By Theorem~\ref{t:plateauinfinite} we have a unique Plateau solution $M_b\coloneqq M(a,b,\phi)$ for each $b>0$. We claim that the function $p(b)\coloneqq p(M_b)$ is continuous in $b$. To see this, take two converging sequences $(b_l)$ and $(b_k)$ with the same limit $b_0$. If $\lim p(b_l)\ne \lim p(b_k)$, this would contradict the uniqueness of the minimal section, since the corresponding minimal surfaces $M_{l}\coloneqq M(a,b_l,\phi)$ also converge by the boundedness of the sequences.
We will now show that there exists $b_t\in\mathbb{R}$ such that $p(b)<0$ for all $b>b_t$, and that $\lim_{b\to 0}p(b)>0$. By the intermediate value theorem this proves the proposition.
\begin{itemize}
\item To define $b_t$, we consider the horizontal helicoid $M_H$ constructed by Daniel and Hauswirth~\cite{DH2009} from Section~\ref{S:conormalhoriheli}, whose horizontal axis coincides with $c_1$. By Lemma~\ref{l:a0alphainf} there exists a helicoid $M_H$ for each pitch $a>0$ such that the incident vertical arcs of $\Gamma$ are contained in its vertical rulings. Since $\phi<\pi/2$ the helicoid $M_H$ and $M_b$ are both minimal sections over a bounded convex domain $\pi(M_H)\cap\pi(M_b)$, whose boundary consists of three geodesic arcs.
We claim that there exists $b_t>0$ such that the surfaces intersect in $\Gamma$ only. In Section~\ref{S:conormalhoriheli} we showed that the conormal $\eta_H$ of the helicoid along the vertical ruling is horizontal and that its opening angle depends continuously on the height, given by
\[h=\frac{\sinh\left(\frac{-1}{2}\ln\left(\frac{1+\cos\phi}{1-\cos\phi}\right)\right)}{2\alpha^2},\] where $\alpha$ depends on the pitch $a$, see Lemma~\ref{l:a0alphainf}. We consider the vertical plane $V$, given by its tangent plane, that is spanned by the conormal $\eta_H$ and $\xi$ at height $h$. Since each conormal is horizontal and turns monotonically, the intersection $V\cap M_H$ meets the vertical ruling in exactly one point, given by the height $h$. Moreover, $M_H$ is a section in the interior and therefore $V\cap M_H$ is bounded from below. Hence for $a>0$ and $\phi\in(0,\pi/2)$ there exists $b_t\geq \abs{h}$ such that the helicoid is a barrier lying above $M_b$ for every $b> b_t$.
Consequently we can estimate the vertical component of the conormal $\eta$ of the Plateau solution $M_b$ by the helicoid conormal $\eta_H$ along the interior of the curve $c_1$: \[\langle \eta,\xi\rangle<\langle\eta_H,\xi\rangle\,\quad \mbox{ and hence }\quad p(M_b)=\int\langle \eta,\xi\rangle<\int\langle\eta_H,\xi\rangle=0.\]
\item For the limit $b\to 0$, on the other hand, we consider a minimal $k$-noid $N$ in $\operatorname{Nil}_3(\mathbb{R})$ as in Section~\ref{S:knoid} with the parameters $a$ and $\phi$. It has a positive period $p_N$, since it is bounded from below by a horizontal umbrella.
For $b\to 0$ we have $M_b\to N$ away from the singularity. Furthermore, the sequence of conormals $\eta_b$ converges uniformly to $\eta_N$ on compact sets $K\subset c_1$. Therefore, the period $p(b)\vert_K$ converges uniformly to $p_N\vert_K >0$ for each compact $K\subset c_1$ and $b\to 0$. Hence, on $c_1$ we have $p(b)> 0$ for $b\to 0$.
\end{itemize}
\end{proof}
\begin{remark}
For $\phi=\pi/2$ the proof does not work since the helicoid $M_H$ is not a barrier for $b<\infty$. Therefore, we cannot construct a genus $1$ catenoid in $\H^2\times\mathbb{R}$ with this method.
\end{remark}
\begin{lemma}\label{l:bcont}
The map $b\colon \mathbb{R}_+\times\left(0,\pi/2\right)\to\mathbb{R}_+$, $(a,\phi)\mapsto b(a,\phi)$, defined by Proposition~\ref{l:period1} is continuous.
\end{lemma}
\begin{proof}
We assume that $b$ is discontinuous at $(a,\phi)$, i.e. there exist two sequences $(a_l,\phi_l)$ and $(\overline{a}_l,\overline{\phi}_l)$ with limit $(a_0,\phi_0)$ but, w.l.o.g., $b_0=\lim b(a_l,\phi_l)>\lim b(\overline{a}_l,\overline{\phi}_l)=\overline{b}_0$. In the proof of Proposition~\ref{l:period1} we have seen that the sequences of the corresponding minimal surfaces contain converging subsequences with limits $M=M(a_0,b_0,\phi_0)$ and $\overline{M}=M(a_0,\overline{b}_0,\phi_0)$, respectively. Both minimal surfaces $M$ and $\overline{M}$ have zero period. But for $b_0>\overline{b}_0$ the minimal surface $M$ bounds a mean convex domain from above with $\partial \overline{M}$ in its boundary. Therefore, $M$ is an upper barrier for $\overline{M}$ with $\langle\eta,\xi\rangle>\langle\overline{\eta},\xi\rangle$, contradicting the fact that both minimal surfaces have zero period.\end{proof}
We solved the first period problem in $b$ depending on $(a,\phi)$: for each angle $\phi\in(0,\pi/2)$ and horizontal geodesic of length $a$, there exists a length $b>0$ of the vertical geodesic such that the two horizontal mirror curves in the sister surface lie in the same mirror plane.
To solve the second period problem we need to restate the solution of the first period problem: For an angle $\phi$ and a vertical geodesic $c_2$ of length $b$ there exists a horizontal geodesic of length $a$, such that the period is zero:
\begin{proposition}\label{p:convergelimit}
For each $b>0$ and $\phi\in\left(0,\pi/2\right)$ there exists $a(b,\phi)>0$, such that the first period of the minimal surface $M(a(b,\phi),b,\phi)$ is zero.
\end{proposition}
\begin{proof}
By Lemma~\ref{l:bcont} the map $b\colon \mathbb{R}_+\times\left(0,\pi/2\right)\to\mathbb{R}_+$ is continuous. For each $\phi$ we claim that $b(a,\phi)\to 0$ for $a\to 0$. Indeed, by the proof of Proposition~\ref{l:period1} there exists $-h\in\mathbb{R}$ as an upper bound of $b(a,\phi)$ given by $a$ and $\phi$: \[-h=\frac{\sinh\left(\frac{1}{2}\ln\left(\frac{1+\cos\phi}{1-\cos\phi}\right)\right)}{2\alpha(a)^2}\geq b(a,\phi),\quad\text{since otherwise}\quad p>0.\] By Lemma~\ref{l:a0alphainf} we know that $\alpha(a)\to\infty$ for $a\to 0$. Therefore the height $h$, and with it $b(a,\phi)$, converges to zero.
It remains to show that the map $b$ is unbounded. Fix $\phi>0$ and assume the contrary: the function $b_\phi(a)\coloneqq b(a,\phi)\leq \hat{b}$ is bounded. Since $b$ is continuous, we have $p(M_{\tilde{b}})\ne0$ for $\tilde{b}>\hat{b}$. By Remark~\ref{R:monotone} the minimal surface $M(a,b(a,\phi),\phi)$ is a barrier from above for $M(a,\tilde{b},\phi)$, therefore $p(M_{\tilde{b}})<0$. The continuity of $p$ in $b$ implies $p(M_{\tilde{b}})<0$ for all $\tilde{b}>\hat{b}$ and all $a>0$.
Consider the map $p_1\colon a\mapsto p(M(a,\tilde{b},\phi))$. We claim that $p_1$ is continuous. To see this, we take two converging sequences $(a_l)$ and $(a_k)$ with the same limit $a_0$. Since the corresponding minimal surfaces also converge, this implies $\lim p(a_l)= \lim p(a_k)$. Moreover, for $a$ large enough the minimal surface has a positive period; therefore there exists $\hat{a}>0$ with $p_1(\hat{a})=0$, which is a contradiction.
Hence, $b_\phi(a)$ is unbounded with $b_\phi(a)\to0$ for $a\to0$. So we conclude: For all $b>0$ there exists $a(b,\phi)>0$ (not necessarily unique) such that $M(a(b,\phi),b,\phi)$ has zero period.
\end{proof}
\begin{remark}
For conjugate minimal surfaces in $\mathbb{R}^3$ we have a unique $a(b,\phi)$, since $a\mapsto b(a,\phi)$ is injective because of scaling.
\end{remark}
The second period problem concerns the horizontal mirror curve $\tilde{c}_2$ in the cmc sister $\tilde{M}$, and there are two difficulties we have to solve: first, we have to ensure that the two vertical mirror planes perpendicular to $\tilde{c}_2$ intersect; second, their angle of intersection has to be $\pi/k$. We have to choose the pair $(b,\phi)$ of the minimal surface such that the sister surface fulfils the desired properties. The second period problem is independent of $a$, since it is determined by the curvature and the length of $\tilde{c}_2$; therefore we do not need a degree argument as in~\cite{KPS1988}. Hence it makes sense to solve the second period problem by considering $\tilde{c}_2$ only.
Before we solve the second period problem, we want to analyse the finite horizontal mirror curve $\tilde{c}_2$ in $\H^2\times\mathbb{R}$, parametrized by arc length. We choose the downward-pointing surface normal $\nu$ and $(c'_2,\eta,\nu)$ positively oriented, where $c_2$ is the sister curve; then $\langle c'_2,\xi\rangle=1$. We consider the twist $\phi(t)$ of $c_2$; recall the definition from Section~\ref{S:sistercurves}. The curvature of $\tilde{c}_2$ is $\tilde{k}=1-\phi'(t)<1$, since $\phi'>0$ by the graph property of the minimal surface. Proposition~\ref{p:anglecompare} implies the embeddedness, since $\theta\leq\phi<\pi/2$; moreover, $\theta\geq0$ since $\theta<0$ would imply $\tilde{k}>1$. With $\gamma_0$ and $\gamma_b$ we denote the unique geodesics given by $\gamma_i(0)=\tilde{c}_2(i)$ and $\gamma'_i(0)=-\tilde{\nu}(i)$, $i=0,b$, see Figure~\ref{f:intersect}.
\begin{figure}
\begin{center}
\psfrag{c}{$\tilde{c}_2$}
\psfrag{0}{$\gamma_b$}
\psfrag{1}{$\gamma_0$}
\psfrag{a}{$\alpha$}
\includegraphics[width=4.5cm]{intersect.eps}\end{center}
\caption{Sketch of the defined geodesics $\gamma_i$, $i=0,b$.}\label{f:intersect}
\end{figure}
As said before, in a first step we have to ensure that the geodesics intersect:
\begin{proposition}\label{p:intersect}
For each $\phi\in\left(0,\pi/2\right)$ there exists $b_0\coloneqq b_0(\phi)\in(0,\phi)$ such that the vertical mirror planes of $\tilde{M}(a(b,\phi),b,\phi)$ intersect for all $b\in(0,b_0(\phi))$ and define an intersection angle $\alpha>0$.
\end{proposition}
\begin{proof}
As in Proposition~\ref{p:anglecompare} we consider the foliation by horocycles given by $\tilde{\nu}(0)$ and the related angle $\theta\leq\phi<\pi/2$. For the calculation we consider the upper halfplane and orient $\tilde{c}_2$ such that $\tilde{c}_2(0)=(0,1)$ and $\tilde{\nu}(0)=(0,1)$; then $\gamma_0$ is contained in the $y$-axis. We want to parametrize the unique geodesic $\gamma_b\subset\H^2$ which starts in the endpoint of $\tilde{c}_2$ ($\tilde{c}_2(b)=(c_x,c_y)$) and whose tangent is parallel to $(\sin(\theta), -\cos(\theta))$. For $\theta\ne k\pi$, $k\in \mathbb{Z}$, $\gamma_b$ is a Euclidean half-circle with radius $r$ and midpoint on the $x$-axis. We solve the linear equations
\[
\gamma_b(\pi-\theta)=(c_x,c_y)=(x+r\cos(\pi-\theta),r\sin(\pi-\theta)).\] The geodesic is in Euclidean coordinates parametrized by \[\gamma_b(t)=\left(c_x+c_y \left(\frac{\cos(\pi-t) -\cos \theta}{\sin\theta}\right),c_y\frac{\sin(\pi- t)}{\sin\theta}\right).\]
The geodesics intersect if the $x$-coordinate of $\gamma_b(\pi)$ is positive: \begin{equation}\label{E:inequal}
c_x+c_y\frac{1-\cos\theta}{\sin\theta}>0.
\end{equation}
Since $d_{\H^2}(\tilde{c}_2(0),\tilde{c}_2(b))\leq b$ we have $c_x\geq -b$ and $c_y>e^{-b}$. So we conclude Equation~\eqref{E:inequal} is true if \[\frac{1-\cos\theta}{\sin\theta}>b e^b.\]
The angles $\theta$ and $\phi$ are related by $\theta'=\phi'+\cos\theta-1$ (see Proposition~\ref{p:anglecompare}) and $\cos\theta\geq0$ implies $\int(\cos\theta-1)\geq-b$. Therefore we know that $\theta\geq\phi-b$. Furthermore, the function $\theta\mapsto (1-\cos\theta)/\sin\theta$ increases monotonically, so we conclude that the geodesics intersect if
\[be^b<\frac{1-\cos(\phi-b)}{\sin(\phi-b)}.\]
This is equivalent to \begin{equation}\label{E:implicitb}
f_\phi(b)\coloneqq \frac{1-\cos(\phi-b)}{\sin(\phi-b)}-b e^{b}>0.\end{equation} Its differential \[f'_\phi(b)=\frac{\cos(\phi-b)-1}{\sin^2(\phi-b)}-e^b(1+b)\] is less than zero for all $b<\phi$. Hence, $f_\phi$ is decreasing. Let us consider the limits at the boundaries:
\begin{align*}
\lim_{b\to0}f_\phi(b)&=\frac{1-\cos\phi}{\sin\phi}>0,\qquad \text{for }\phi\in(0,\pi/2)\\
\lim_{b\to\phi}f_\phi(b)&=-\phi e^\phi<0.
\end{align*}
We conclude, there exists exactly one $b_0(\phi)\in(0,\phi)$ with $f_\phi(b_0(\phi))=0$. Moreover, for all $b<b_0(\phi)$ we have $f_\phi(b)>0$, therefore the geodesics intersect for all $b<b_0(\phi)$.
The vertical mirror planes are given by $\gamma_i\times\mathbb{R}\subset\H^2\times\mathbb{R}$ for $i=0,b$.
\end{proof}
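For reference, the half-angle identity $(1-\cos\theta)/\sin\theta=\tan(\theta/2)$ puts the function above into a compact form,
\[f_\phi(b)=\tan\Bigl(\frac{\phi-b}{2}\Bigr)-b\,e^{b},\qquad f'_\phi(b)=-\frac{1}{2\cos^2\bigl(\frac{\phi-b}{2}\bigr)}-e^{b}(1+b),\]
which makes both the monotonicity in $b$ and the signs of the boundary limits evident.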
\begin{remark}
For $b=\phi$ the total curvature of $\tilde{c}_2$ is $b-\phi=0$ and therefore the two geodesics $\gamma_0$ and $\gamma_b$ do not intersect in any $p\in\H^2$, but in $\partial\H^2$. By Proposition~\ref{p:convergelimit} there exists $a>0$ that solves the first period problem; hence after reflection the construction yields a complete singly periodic mc $1/2$ surface $\tilde{M}(a(\phi,\phi),\phi,\phi)$ in $\H^2\times\mathbb{R}$ with infinitely many ends.
\end{remark}
Now we are able to solve the second period problem that is given by the angle $\alpha$. We want the surface to close after $2k$ reflections about vertical mirror planes, so $\alpha$ has to be $\pi/k$.
\begin{proposition}\label{P:2periods}
For each $k\geq3$ there exists $\epsilon=\epsilon(k)>0$, such that $\phi=\pi/k+\epsilon<\pi/2$ and there exists $0<b<b_0(\phi)$, such that the surface $\tilde{M}(a,b,\phi)$ has angular period $\alpha=\pi/k$.
\end{proposition}
\begin{proof}
The angular period is given by the intersection angle of the vertical mirror planes, i.e. the intersection angle of the two geodesics. For $b<b_{0}(\phi)$ the geodesics $\gamma_0$ and $\gamma_b$ intersect by Proposition~\ref{p:intersect}. Hence, we can apply the Gau\ss{}-Bonnet Theorem to the compact disc $V\subset\H^2$ defined by $\partial V=\tilde{c}_2\cup\gamma_0\cup\gamma_b$:
\[\int\limits_V K+\int\limits_{\partial V} k_g+\sum \alpha_i=2\pi\chi(V),\]
where $\alpha_i,\,i=1,2,3$ are the exterior angles with $\alpha_i=\pi/2$ for $i=1,2$ and $\alpha_3=\pi-\alpha$. Furthermore, $k_g$ is the geodesic curvature with respect to the inner normal of $\partial V$ and since $\tilde{c}_2$ is a mirror curve $k_g=-\tilde{k}$ with respect to the surface normal.
Using this we get
\begin{equation}\label{E:gaussbonnet}\int\limits_V K-\int\limits_{\tilde{c}_2} \tilde{k}+2\pi-\alpha=2\pi.\end{equation}
From Lemma~\ref{l:torsionangle} we know $\tilde{k}=1-\phi'$. Integrating this shows Equation~\eqref{E:gaussbonnet} is equivalent to $\phi-b-\operatorname{area}(V)=\alpha$.
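In detail, since $K\equiv-1$ on $\H^2$ and the normal twists by $\phi$ in total along $c_2$ (which has length $b$), we have
\[\int\limits_V K=-\operatorname{area}(V),\qquad \int\limits_{\tilde{c}_2}\tilde{k}=\int\limits_0^b\bigl(1-\phi'(t)\bigr)\,\operatorname{d}\! t=b-\phi,\]
so Equation~\eqref{E:gaussbonnet} reads $-\operatorname{area}(V)-(b-\phi)-\alpha=0$, i.e. $\alpha=\phi-b-\operatorname{area}(V)$.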
We claim that the lengths $l(\gamma_i)$ are bounded from above for all $b\leq b_{0}(\phi)$. Recall from the proof of Proposition~\ref{p:intersect} that $\phi\geq\theta\geq\phi-b$. In particular, we have a lower bound for $\theta$, the angle between the tangent of $\tilde{c}_2$ and the horocycle fibration. Assume that the lengths $l(\gamma_i)(b)\to\infty$ for $b\to 0$; this implies $\theta\to0$, a contradiction. Since the lengths are bounded, the area tends to zero for $b\to 0$.
The angle $\alpha$ depends continuously on $b<b_{0}(\phi)$: \begin{equation}\label{E:alpha}\alpha(b)=\phi-b-\operatorname{area}(V(b))\end{equation} and decreases. The idea is to show that for any $k\geq3$ there exists $\phi_k\in(0,\pi/2)$ such that
\[\lim_{b\to 0}\alpha(b)>\pi/k\quad \text{and}\quad \lim_{b\to b_0(\phi_k)}\alpha(b)<\pi/k.\]
On the one hand $\lim_{b\to 0}\alpha(b) =\phi_k$, hence we have to choose $\phi_k>\pi/k$. Therefore, for any $\epsilon_k>0$, $\phi_k\coloneqq\pi/k+\epsilon_k$ satisfies the first condition.
On the other hand we have \[ \lim_{b\to b_0(\phi_k)}\alpha(b)=\phi_k-b_0(\phi_k)-\operatorname{area}(V(b_0(\phi_k))),\] hence we have to choose $\epsilon_k>0$ such that \[\phi_k<\frac\pi k+b_0(\phi_k)+\operatorname{area}(V(b_0(\phi_k)))\quad\Leftrightarrow\quad \epsilon_k<b_0(\phi_k)+\operatorname{area}(V(b_0(\phi_k))).\] We claim that the function $b_0(\phi)$ increases monotonically. Recall Equation~\eqref{E:implicitb}: $b_0$ was defined implicitly by $f_\phi(b)= \frac{1-\cos(\phi-b)}{\sin(\phi-b)}-b e^{b}=0$. Implicit differentiation, $b'_0=-\partial_\phi f/\partial_b f$ with $\partial_\phi f=\frac{1-\cos(\phi-b_0)}{\sin^2(\phi-b_0)}>0$ and $\partial_b f=-\frac{1-\cos(\phi-b_0)}{\sin^2(\phi-b_0)}-e^{b_0}(1+b_0)<0$, gives \[b'_0(\phi)=\frac{\frac{1-\cos(\phi-b_0(\phi))}{\sin^2(\phi-b_0(\phi))}}{e^{b_0(\phi)}(1+b_0(\phi))+\frac{1-\cos(\phi-b_0(\phi))}{\sin^2(\phi-b_0(\phi))}}>0.\] Therefore with $\epsilon_k=b_0(\pi/k)$ we get:
\[\lim_{b\to b_0(\phi_k)}\alpha(b)=\pi/k+\underbrace{b_0(\pi/k)-b_0(\phi_k)}_{<0}-\operatorname{area}(V(b_0(\phi_k)))<\pi/k.\] It remains to show that $\phi_k<\pi/2$ for all $k\geq3$. Since $b_0$ increases this is true if $\pi/3+b_0(\pi/3)<\pi/2$, i.e. $b_0(\pi/3)<\pi/6$. But this follows directly from the fact that $f_{\pi/3}(\pi/6)<0$.
By the intermediate value theorem there exists $b^*\in(0,b_{0}(\pi/k+\epsilon_k))$ such that $\alpha(b^*)=\pi/k$.
\end{proof}
The proposition proves the existence of one pair $(b_k,\phi_k)$ for each $k\geq3$ such that the angular period is $\pi/k$. It is natural to ask if there is a family of cmc surfaces with this angular period and $k$ ends. The answer is yes:
\begin{proposition}\label{P:neighb}
For each $k\geq3$ there exists an interval $U_k\subset(0,\pi/2)$, such that for all $\phi\in U_k$ there exists $b(\phi)>0$, such that each cmc surface $\tilde{M}(a,b(\phi),\phi)$ has angular period $\alpha=\pi/k$.
\end{proposition}
\begin{proof}
In Proposition~\ref{P:2periods} the existence of a pair $(b(\phi_k),\phi_k)$ was proven such that the surface $\tilde{M}(a,b(\phi_k),\phi_k)$ has the desired property. By Equation~\eqref{E:alpha} we know that the pair $(b(\phi_k),\phi_k)$ is a zero of the following continuously differentiable function
\[
G(b,\phi)=\alpha(b,\phi)-\pi/k=\phi-b-\operatorname{area}(V(b,\phi))-\pi/k.\]
The angle $\alpha(b,\phi)$ is given by the two geodesics $\gamma_0$ and $\gamma_b$. Recall from the proof of Proposition~\ref{P:2periods} that $\partial_b\alpha(b,\phi)<0$. By the implicit function theorem there exists an open neighborhood $U_0$ of $\phi_k$, an open neighborhood $V$ of $b(\phi_k)$, and a unique continuously differentiable function $g\colon U_0\to V$ with $g(\phi_k)=b(\phi_k)$ such that $G(g(\phi),\phi)=0$ for all $\phi\in U_0$.
To define $U_k$, notice that $g(\phi_k)<b_0(\phi_k)$; therefore the subset $U_k\coloneqq \{\phi\in U_0\colon g(\phi)<b_0(\phi)\}\cap(0,\pi/2)$ is not empty. Hence, for all $\phi\in U_k$ there exists $b=g(\phi)<b_0(\phi)$ such that $\alpha=\pi/k$.
\end{proof}
\begin{remark}
We want to analyse the limiting cases:
\begin{itemize}
\item $\phi\to\inf U_k\geq\pi/k$: $\phi\to\pi/k$ implies \mbox{$b+\operatorname{area}(V(b,\phi))\to 0$}, which in turn implies $b\to0$. If the solution of the first period problem satisfies $a(b,\phi)>0$ for $b\to0$, we obtain the $k$-noid from Section~\ref{S:knoid}, which has a positive period. Therefore, if $\inf U_k=\pi/k$, the sequence of $k$-noids converges to a union of $k$ horocylinders away from the singularity for $\phi\to \pi/k$.
\item $\phi\to\sup U_k\leq \pi/2$: Since $\partial_\phi\operatorname{area}(V(b,\phi))\leq0$, $\phi\to\sup U_k$ implies that $b$ increases. For the cmc surfaces this means that the length of the finite horizontal symmetry curve grows.
\end{itemize}
\end{remark}
\subsection{Main Theorem}
After solving the two period problems we can now prove the existence of the mc $1/2$ surface with genus $1$:
\begin{theorem}
For $k\geq 3$, there exists a family of surfaces $M$ with constant mean curvature $1/2$ in $\H^2\times\mathbb{R}$ such that:
\noindent
$\bullet\, M$ is a proper immersion of a torus minus $k$ points,
\noindent
$\bullet\, M$ is Alexandrov embedded,
\noindent
$\bullet\, M$ has $k$ vertical mirror planes enclosing a $\pi/k$-angle,
\noindent
$\bullet\, M$ has one horizontal mirror plane.
\end{theorem}
\begin{proof}
For $k\geq3$ we consider $(b(\phi),\phi)$ for $\phi\in U_k\subset(0,\pi/2)$ given by Proposition~\ref{P:neighb}. The minimal surface $M_\phi=M(a(b(\phi),\phi),b(\phi),\phi)$ defined by Proposition~\ref{p:convergelimit} solves the first period problem. By~\cite{daniel2007} the fundamental piece $M_\phi$ has a sister surface $\tilde{M}$ with constant mean curvature $1/2$ in $\H^2\times\mathbb{R}$, which is a graph. By construction and from the solution of the period problems, $\tilde{M}$ has one horizontal and two vertical mirror planes; the two vertical mirror planes enclose an angle $\pi/k$. It consists of four mirror curves: two horizontal (one bounded and one unbounded) and two vertical (one bounded and one unbounded). After Schwarz reflection about one of the vertical mirror planes followed by reflection about the horizontal mirror plane we have completed one end: it is built up of four fundamental pieces $\tilde{M}$. We use the Euler characteristic
\[
\chi=V-E+F=2-2g
\]
to determine the genus $g$ of the complete mc $1/2$ surface $M$ with $k$ ends, which is generated by Schwarz reflection. We have $\chi=4k-8k+4k=0$ and therefore $g=1$.
\begin{figure}
\begin{center}
\psfrag{c1}{$c_1$}
\psfrag{c2}{$c_2$}
\psfrag{a}{$a$}
\psfrag{n}{$n$}
\psfrag{pr}{$\pi$}
\psfrag{phi}{$\phi$}
\psfrag{pi}{$\pi/k$}
\psfrag{ct1}{$\tilde{c}_1$}
\psfrag{ct2}{$\tilde{c}_2$}
\psfrag{ct0}{$\tilde{c}_0$}
\psfrag{ct3}{$\tilde{c}_3$}
\psfrag{H}{$\H^2\times\{0\}$}
\includegraphics[width=0.7\textwidth]{genus1cmccontournew2.eps}
\end{center}
\end{figure}
We claim that $M$ is Alexandrov-embedded if we choose, as before, the downward-pointing normal $\nu$. We show that the fundamental piece $\widetilde{M}$ is embedded and stays in the subset of $\H^2\times\mathbb{R}$ that is bounded by mirror planes; the mirror planes are given by the symmetry curves of infinite length. We discuss the embeddedness of each boundary arc; since $\widetilde{M}$ is transverse to the fibres, this implies that the fundamental piece is embedded. Therefore, the complete surface $M$ is Alexandrov-embedded.
Let us recall the notation, $c_1$ denotes the finite horizontal geodesic in $\partial M_\phi$ and $\tilde{c}_1$ its sister curve in $\partial\widetilde{M}$.
The sister curve $\tilde{c}_1$ is a mirror curve in a vertical plane. The curve is a graph and therefore embedded. The same holds for the other horizontal curve and its sister curve in a vertical plane, which we call $\tilde{c}_3$.
Along the horizontal geodesic $c_3$ we have $\langle\eta,\xi\rangle>0$ if we choose $(c_3',\eta,\nu)$ positively oriented. By the first order description of the sister surfaces we know that the projection of the vertical vector field $\xi$ on the tangent plane rotates by $\pi/2$ under conjugation. Therefore, we have $\langle\tilde{c}_3',\tilde{\xi}\rangle<0$. By construction, we know that $\langle\eta,\xi\rangle\to 1$ in the end along $c_3$. Hence, for the corresponding mirror curve in the sister surface we have $\langle\tilde{c}'_3,\tilde{\xi}\rangle\to -1$. In other words, the mirror curve comes from $\infty$ in the end.
With the arguments of Hauswirth, Rosenberg and Spruck in the proof of Theorem 1.2 in~\cite{HRS2008} we know that each divergent sequence $(p_n)$ in $M$ with $\langle\tilde{\nu}(p_n),\tilde{\xi}\rangle\to0$ has a limit in the boundary in the projection: $\pi(p_n)\to\partial\H^2$. The idea of the proof is to show that if the sequence has a horizontal normal in the limit and a limiting point in the projection, then the whole surface converges to a horocylinder and stays on one side. Hence, by the half-space theorem it is a horocylinder, which is a contradiction.
As in the setup, let $c_2$ denote the finite vertical geodesic; recall that its sister curve is embedded. Let $c_0$ denote the remaining vertical geodesic in the boundary $\partial M_\phi$, parametrized in the $\xi$-direction. Let $\alpha$ denote the twist of the horizontal normal $\nu$ along $c_0$. We have $\alpha'>0$, i.e. the normal rotates monotonically, because $M_\infty$ is a section. We consider the horocycle fibration of $\H^2$ given by $\tilde{\nu}(0)$ as in Proposition~\ref{p:anglecompare}. By the proposition the angle $\theta$ that the sister curve encloses with the fibration in $\H^2$ is smaller than $\alpha=\pi-\phi<\pi$. Hence, $\tilde{c}_0$ is embedded. In summary this proves the first part: $\widetilde{M}$ is embedded.
In the second step we show that $\widetilde{M}$ is bounded by symmetry planes. First, we have to check that the surface stays in a horizontal halfspace of $\H^2\times\mathbb{R}$. We define the horizontal mirror plane of $M$ to be $\H^2\times\{0\}$. By the maximum principle $\widetilde{M}$ has no (global) minimum below $\H^2\times\{0\}$.
Assume there is a curve $\tilde{c}$ in $\widetilde{M}$ with $\tilde{c}(0)\in\partial \widetilde{M}$ whose third component goes to minus infinity. Then $\langle\tilde{c}'(t),\tilde{\xi}\rangle<0$ for $t>T$, for some $T\in\mathbb{R}$. This implies $\langle\eta(t),\xi\rangle>0$ for the conormal along the sister curve $c\subset M_\phi$ in the end. Let us recall that $M_\phi$ was constructed as a limit of compact minimal sections $M_n$. Hence there exists an $N\in\mathbb{N}$ such that $\operatorname{T}_p M_N$ is horizontal at some point $p$. The intersection $V$ of $M_N$ and the horizontal umbrella in $p$ consists of $2m$ curves, $m\geq2$. But this implies there exists a loop in $V$, which contradicts the uniqueness of the minimal section. Therefore $\widetilde{M}\subset \H^2\times\mathbb{R}^+_0$.
To finish the proof we have to show that the surface lies on one side of the vertical symmetry plane: we have seen that the projection of the vertical symmetry curve $\tilde{c}_3$ converges to some point in the boundary $\partial\H^2$. Along $\tilde{c}_0$ the surface lies, at least locally, to the side of the normal, since $\tilde{k}=1-\alpha'<1$ by the graph property of the minimal surface along $c_0$. Therefore, since it is also a graph, it is defined on a domain $\Omega\subset\H^2$ given by $\pi(\tilde{c}_3)\cup\tilde{c}_0$.
\end{proof}
\subsection{Conclusion and outlook}
We constructed an mc $1/2$ surface in $\H^2\times\mathbb{R}$ with $k$ ends, genus $1$ and $k$-fold dihedral symmetry, $k\geq3$, which is Alexandrov embedded. We had to solve two period problems in the construction. The first period guarantees that the surface has exactly one horizontal symmetry. For the second period we had to control a horizontal mirror curve to get the dihedral symmetry. In the case of $H\ne0$ the total curvature of a horizontal mirror curve depends not only on the twist of the normal, but also on its length. An interesting problem is to construct non-embedded examples that correspond to those presented here. Gro\ss{}e-Brauckmann proved in~\cite{brauckmann1993} that each Delaunay surface is the associated mc $1$ surface of a helicoid in $\S^3$. The family of Delaunay surfaces consists of embedded (unduloid) and non-embedded (nodoid) examples. In order to construct non-Alexandrov-embedded examples one has to consider the minimal surface from Proposition~\ref{p:convergelimit} and choose the positively oriented surface, i.e. the upward-pointing surface normal $\nu$. If $(c'_i,\eta,\nu),\,i=0,2$, is positively oriented, then $\langle c'_i,\xi\rangle=-1$. Therefore the twist $\alpha$ is decreasing, which implies $\tilde{\kappa}=1-\alpha'>1$ for $\tilde{c}_{0/2}$. With this setup one has to solve the second period problem.
As far as the author knows, it is an open problem to show the convergence of the ends of the cmc surface. It would be nice to prove that $M$ has catenoidal ends, in the sense in which they are described in~\cite{DH2009}. To show this one may consider the minimal surface in $\operatorname{Nil}_3(\mathbb{R})$ and wrap it between two copies of horizontal helicoids from~\cite{DH2009} which differ by a vertical translation of height $h$:
\begin{align*}
H^1_1(u_1,v_1)&=\frac{\sinh(\alpha v_1)}{\alpha(\psi'(u_1)-\alpha)}\cos\psi(u_1)\\
H^2_1(u_1,v_1)&=-G(u_1)\\
H^3_1(u_1,v_1)&=\frac{-\sinh(\alpha v_1)}{\alpha(\psi'(u_1)-\alpha)}\sin\psi(u_1), \text{ and}\\
H^1_2(u_2,v_2)&=\frac{\sinh(\alpha v_2)}{\alpha(\psi'(u_2)-\alpha)}\cos\psi(u_2)\\
H^2_2(u_2,v_2)&=-G(u_2)\\
H^3_2(u_2,v_2)&=\frac{-\sinh(\alpha v_2)}{\alpha(\psi'(u_2)-\alpha)}\sin\psi(u_2)+h.
\end{align*}
Here $\alpha$ is chosen such that the width of the helicoid is $a\sin\phi$, which is possible by Lemma~\ref{l:a0alphainf}.
The idea is to prove exponential convergence of the two helicoids, i.e. there exists $C$, $\lambda\in\mathbb{R}$ independent of $x\in H_1$ such that
\begin{equation}\label{E:expconv}
d(x,H_2)<C e^{-\lambda\abs{x}},\quad \text{for }\abs{x}\to\infty.
\end{equation}
Since the Plateau solution $M(a,b_0)$ is bounded by $H_1$ and $H_2$ for an appropriate choice of $h$, this proves that $M_\phi$ has a helicoidal end, and since the convergence is exponential, this carries over to the sister surface.
\bibliographystyle{amsalpha}
|
1,108,101,566,679 | arxiv | \section{Introduction}
An experienced optician can detect low-order aberrations by looking at
the defocused image of a point source, and it is trivial to obtain
defocused images with modern telescopes equipped with CCD detectors.
Yet, measurements of low-order aberrations including focus are still
made by indirect techniques, or using special equipment such as
Shack-Hartmann (S-H) sensors. Astronomers spend significant time in
acquiring ``focus sequences'' of stellar images, then fitting the
image half-width vs. focus curve with a parabola to find the
best-focus position.
The appeal of estimating aberrations directly from defocused images is
evident. No special equipment is needed apart from a regular
imager. The aberrations in the true science beam are measured,
including all optics of the instrument but excluding additional optics
of a wave-front sensor. The amount of defocus is easily adjustable,
providing flexibility.
It has long been recognized that optical aberrations cannot be
retrieved from a focused image of a point source without
ambiguity. However, combining {\em two} images with a known difference
of aberration provides a solution to this problem, even for non-point
sources. The method of {\it phase diversity} which exploits this idea
has been used since the beginning of the 80-s \citep{Thelen99}. Phase
diversity works well when the image is sampled to the diffraction
limit, e.g. in adaptive optics \citep{Hartung}. This is not the case
for conventional astronomical imagery with a pixel size matched to the
seeing. Yet another method for extracting aberrations from
well-sampled focused images by means of a trained neural network was
suggested by Sandler and later tried by \citet{LH92}. The authors note
that their method is extremely computationally intensive and has some
subtleties. To our knowledge, this method is not in use nowadays.
The relation of the intensity distribution in a defocused image to the
local wavefront curvature is described by the so-called irradiance
transport equation \citep{Roddier90}. This relation is basic to {\it
curvature sensing} as used in adaptive optics \citep{Roddier99}. A
commercial software package for telescope aberration analysis based on
the same principle has been developed by Northcott\footnote{
Northcott, M.J., The {\it ef} wavefront reduction package. 1996,
Laplacian Optics Inc.} and is used at some observatories. This
method, however, is not very practical because it requires two images
with relatively large and equal defocus of opposite sign.
The need for two images for curvature sensing has been questioned by
\citet{Hickson94} who shows that even in the context of adaptive
optics a single extra-focal image is sufficient and provides a better
signal-to-noise ratio with a CCD detector and faint guide stars,
despite scintillation noise. One image is sufficient for
unambiguous aberration retrieval as long as the rays originating
from different parts of the aperture do not cross each other, i.e.
for a sufficiently large defocus that avoids caustics. The minimum
defocus is proportional to the amplitude of higher-order aberrations.
\citet{Ragazzoni} have used this technique in their experiment.
The intensity transport equation is not valid for a small defocus,
where physical optics must be used instead. However, this is not an
obstacle for sensing low-order aberrations, as long as they are small
enough, so that a relation between aberration and image intensity
remains linear. \citet{Bharmal} develop such near-focus sensing
technique for low-order adaptive optics, providing in their paper
several valuable insights into this problem. However, their method
still requires two images, intra- and extra-focal.
Here we present a quantitative method of measuring optical aberrations
from a single defocused image. Such images often resemble donuts
(because of the shadow at the center caused by the central obscuration
in a Cassegrain telescope), so we call this technique ``donut''. This
work is primarily motivated by the need for a simple wave-front
sensing method for the SOAR telescope in Chile
\citep{Sebring98,Krab04}. All numerical examples in the article were
computed for a telescope diameter $D=4.1$~m with a central obscuration
0.24, appropriate for SOAR. The proposed technique is primarily
intended for active optics, it is too slow for real-time
correction of turbulence.
The donut method is different from standard curvature sensing. We use
physical optics and directly fit a model of the aberrated image to the
measured ``donut''. The initial approximation is obtained from the
second moments of the intensity distribution as described in
Sect.~\ref{sec:mom}. Then an iterative fitting algorithm presented in
Sect.~\ref{sec:fit}, with further details in the Appendix, is used to
refine the model including higher order aberrations. In
Sect.~\ref{sec:perf} we evaluate the errors of aberrations measured by
this method and compare it to a low-order Shack-Hartmann sensor while
examples of actual performance are given in Sect.~\ref{sec:exa}.
Finally we present our conclusions in Sect.~\ref{sec:concl}.
\section{Image formation}
\label{sec:image}
To begin the presentation of our algorithm we recall the textbook
theory of image formation, e.g. \citep{BW}. Let ${\bf a}$ be the
2-dimensional angular coordinate in the image plane (in radians) and
${\bf x}$ -- the coordinate in the plane of telescope pupil. The shape
of the wave-front is $W({\bf x})$ and the phase of the light wave is
$\phi({\bf x}) = (2 \pi/\lambda) W({\bf x})$ for the wavelength
$\lambda$. Then the intensity distribution in the image plane $I({\bf
a})$ is computed as
\begin{equation}
I({\bf a}) = I_0 \left|
\int P ({\bf x}) e ^{ i \phi({\bf x}) - 2 \pi i {\bf x} {\bf
a}/\lambda } \; {\rm d}^2 {\bf
x} \right| ^2 ,
\label{eq:I}
\end{equation}
where $ P({\bf x})$ is the pupil transmission function and the
normalization constant $I_0$ is of no importance here.
\begin{figure}
\plotone{f1.eps}
\caption{Computational grids and scales.
\label{fig:grids}}
\end{figure}
\begin{figure}
\plotone{f2.eps}
\caption{Mosaic of 8 defocused images with Zernike aberrations from 5
to 12 (left to right and top to bottom) of $0.3$~$\mu$m
amplitude. Seeing $1''$, defocus 3.3~$\mu$m. Each image is $7.48''$
wide, 32x32 pixels, $D=4.1$~m.
\label{fig:mosaic}}
\end{figure}
In our implementation of the algorithm, the computation of
(\ref{eq:I}) is carried out using the Fast Fourier Transform (FFT) on
a square numerical grid of $K\times K$ points (Fig.~\ref{fig:grids}).
The linear size $L$ of the pupil-plane grid should be large enough for
a telescope diameter $D$, $L \geq D$; critical sampling of
diffraction-limited images requires $L \geq 2D$. Then the sampling
in the image space is $\lambda/L$ (smaller than the diffraction limit
$\lambda/D$) and the size of the field of view is $K \lambda/L$. We
select a part of the image centered on the star that fits into this
field. In the case of large telescopes the sampling is fine,
hence we are forced to select a large grid size $K$ to have enough
field, at the cost of slower calculation. For computational
efficiency $K$ has to be an integer power of 2. The choice $K=256$ is
good for a 4-m telescope.
The CCD pixels are normally much larger than $\lambda/D$, hence the
resulting image has to be binned by some factor $m$. The number of
``coarse'' CCD pixels is then $N_{CCD} = K/m$. Considering that $K$ is
a power of two, both $m$ and $N_{CCD}$ also have to be integer powers
of two. Typically, $N_{CCD}=32$ and $m=8$. The CCD pixel size is then
$p = m \lambda/ L$.
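To make the bookkeeping concrete, the following sketch (our own illustration in Python with \texttt{numpy}; all names are ours) evaluates Eq.~(\ref{eq:I}) on the fine $K\times K$ grid and bins the result into CCD pixels of size $p=m\lambda/L$:
\begin{verbatim}
import numpy as np

def annular_pupil(K, L, D, eps=0.24):
    # Annular aperture of diameter D with central obscuration
    # ratio eps, sampled on a grid of physical size L (K points).
    x = (np.arange(K) - K / 2) * (L / K)
    r = np.hypot(*np.meshgrid(x, x))
    return ((r <= D / 2) & (r >= eps * D / 2)).astype(float)

def donut_image(phase, pupil, m=8):
    # Eq. (1): |FFT of P exp(i phi)|^2; fine pixel = lambda/L,
    # binned m x m so that the CCD pixel is p = m lambda / L.
    K = pupil.shape[0]
    amp = pupil * np.exp(1j * phase)
    img = np.abs(np.fft.fftshift(np.fft.fft2(amp))) ** 2
    n = K // m                      # N_CCD coarse pixels
    return img.reshape(n, m, n, m).sum(axis=(1, 3))
\end{verbatim}
With $K=256$, $m=8$ and the defocus term of Eq.~(\ref{eq:W}) set to a large $a_4$, this returns a $32\times32$ pixel donut of the kind discussed above.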
The wavefront is represented as a sum of Zernike aberrations up to
some number $N_z$,
\begin{equation}
W({\bf x}) = \sum_{j=2}^{N_z} a_j Z_j({\bf x}).
\label{eq:W}
\end{equation}
Zernike polynomials in the form of \citet{Noll} are used. Their
amplitudes (coefficients $a_j$) are equal to the rms wavefront
variance over the pupil. The piston term $(j=1)$ is
omitted. Defocused images (donuts) are obtained by setting the focus
coefficient $a_4$ to some large positive or negative value.
A monochromatic image computed from (\ref{eq:I}) contains sharp
details of the size $\lambda/D$ caused by diffraction. These details
are usually not seen, being smoothed by coarse detector pixels and
seeing. In this case the monochromatic image model also represents
broad-band images, and we can even use a value of $\lambda$ in the simulation
which is larger than the actual wavelength of observation to, in effect,
increase the size of the modeled field.
The blur caused by the time-averaged seeing is modeled as a
convolution with a Gaussian kernel. The FWHM of the seeing disk
$\epsilon$ is proportional to the Gaussian parameter $\sigma$,
$\epsilon = 2 \sqrt {2 \ln 2} \sigma \approx 2.35\sigma$. The
convolution is computed in frequency space by multiplying the FFT
of the image, $\tilde{I}({\bf f})$, by a filter
\begin{equation}
\tilde{I}_s({\bf f}) = \exp (- 2 \pi^2 \sigma^2 |{\bf f}|^2 )
\label{eq:seeing}
\end{equation}
and doing the inverse FFT. This double FFT is costly in computing time
if done on the full $K \times K$ grid. When detector pixels are
smaller than $\epsilon$, as is the case of astronomical imagers, a
much faster calculation on a grid of (binned) detector pixels is
justified. Seeing, together with a set of Zernike coefficients, forms
a vector of parameters that define the donut model. We put the seeing
in the first element of this vector $\epsilon = a_1$, replacing the
useless piston term. An example of donut images corresponding to the
first few Zernike aberrations is shown in Fig.~\ref{fig:mosaic}.
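The seeing blur of Eq.~(\ref{eq:seeing}), applied on the binned grid as advocated above, is equally short (again a sketch under the same assumptions):
\begin{verbatim}
def apply_seeing(img, fwhm, p):
    # Eq. (3): multiply the image FFT by a Gaussian MTF.
    # fwhm is the seeing disk in the same angular units as the
    # pixel size p; sigma = fwhm / (2 sqrt(2 ln 2)).
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    f = np.fft.fftfreq(img.shape[0], d=p)
    fx, fy = np.meshgrid(f, f)
    mtf = np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(img) * mtf).real
\end{verbatim}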
\section{Second moments}
\label{sec:mom}
First-order moments (centroids) of telescopic images are widely used
for guiding. Here we show that the second moments are equally useful for
estimating the second-order aberrations, defocus and astigmatism.
Let $I_{ij}$ be the image of a point source presented as an array of
detector pixels $i,j$. The coordinates $x$ and $y$ are measured in
pixels. The zero-order moment $I_0$, first moments $x_c$ and $y_c$
(in pixels) and the second moments $M_x$, $M_y$, and $M_{xy}$ (in
square pixels) are:
\begin{eqnarray}
I_0 & = & \sum I_{ij} \nonumber \\
x_c & = & I_0^{-1} \; \sum x_{ij} I_{ij} \nonumber \\
y_c & = & I_0^{-1} \; \sum y_{ij} I_{ij} \nonumber \\
M_x & = & I_0^{-1} \; \sum (x_{ij}-x_c)^2 I_{ij} \nonumber \\
M_y & = & I_0^{-1} \; \sum (y_{ij}-y_c)^2 I_{ij} \nonumber \\
M_{xy} & = & I_0^{-1} \; \sum (x_{ij}-x_c) (y_{ij}-y_c) I_{ij}
\label{eq:mom}
\end{eqnarray}
Evident combinations of the second moments relate them to defocus and
astigmatism. Indeed, the defocus should be proportional to the size
of the donut which, in turn, is the average of its size in $x$ and
$y$. The $45^\circ$ astigmatism $a_5$ causes image elongation in the
diagonal direction and should be proportional to $M_{xy}$, whereas
$a_6$ should be proportional to the difference of the image size in
$x$ and $y$. Thus, we introduce the coefficients $A_4$, $A_5$, and
$A_6$ and express them in angular units (e.g. arcseconds) with the
help of the angular size of detector pixel $p$:
\begin{eqnarray}
A_4 & = & p \sqrt{(M_x + M_y)/2} \nonumber \\
A_5 & = & p M_{xy} (M_x M_y)^{-1/4} \nonumber \\
A_6 & = & 0.5 p (M_x - M_y) (M_x M_y)^{-1/4} .
\label{eq:A}
\end{eqnarray}
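In code, Eqs.~(\ref{eq:mom})--(\ref{eq:A}) reduce to a few array sums; the sketch below (ours, and assuming a background-subtracted image) returns the three coefficients in angular units:
\begin{verbatim}
def moment_aberrations(img, p):
    # Second-moment estimators of Eqs. (4)-(5); p is the
    # angular pixel size, img the sky-subtracted stellar image.
    y, x = np.indices(img.shape, dtype=float)
    I0 = img.sum()
    xc = (x * img).sum() / I0
    yc = (y * img).sum() / I0
    Mx = ((x - xc)**2 * img).sum() / I0
    My = ((y - yc)**2 * img).sum() / I0
    Mxy = ((x - xc) * (y - yc) * img).sum() / I0
    A4 = p * np.sqrt((Mx + My) / 2.0)
    A5 = p * Mxy / (Mx * My)**0.25
    A6 = 0.5 * p * (Mx - My) / (Mx * My)**0.25
    return A4, A5, A6
\end{verbatim}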
\begin{figure}[h]
\plotone{f3a.eps}
\plotone{f3b.eps}
\caption{Focus aberration $a_4$ (top) and
astigmatism $a_5$ (bottom) measured by moments, as a function of
true coefficients. For the astigmatism, the defocus of 3~$\mu$m is
set. Pixel size 0\farcs5, seeing 0\farcs3, 0\farcs5 and 1\arcsec.
\label{fig:focus}}
\end{figure}
Next we must find the relationship between those coefficients and the
Zernike amplitudes. In the case of defocus, this is relatively
straightforward. The second moment of a uniform disk of radius $\rho$
is readily calculated to be $M_x = M_y = \rho^2/4$. On the other hand,
the angular radius of the defocused image $\rho$ is found as the first
derivative of the wavefront at the edge of the pupil (in the
geometrical-optics approximation),
\begin{equation}
\rho = a_4 \frac{8 \sqrt{3}}{D} ,
\label{eq:rho}
\end{equation}
where $a_4$ is the Zernike coefficient of the wavefront.
This leads to $A_4 = a_4 (4 \sqrt{3})/D$. There is a similar linear
relation between $A_5$ and $a_5$ with a different coefficient. We did
not derive this analytically, but rather found the coefficient by means
of numerical simulation, $A_5 = 0.23 a_5/D$ and $A_6 = 0.23 a_6/D$.
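For completeness, the disk moment used above is a one-line polar integral:
\[
M_x = \frac{1}{\pi\rho^2}\int_0^{2\pi}\!\!\int_0^{\rho}(r\cos\theta)^2\,r\,{\rm d}r\,{\rm d}\theta
    = \frac{1}{\pi\rho^2}\cdot\pi\cdot\frac{\rho^4}{4}=\frac{\rho^2}{4},
\]
so that $A_4 = p\sqrt{(M_x+M_y)/2}=\rho/2$ in angular units, and Eq.~(\ref{eq:rho}) then gives $A_4 = a_4 (4\sqrt{3})/D$.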
Our simulations show that $A_5$ and $A_6$ are indeed very good
measures of the astigmatism (Fig.~\ref{fig:focus}). To the first
order, they do not depend on defocus (provided it is larger than the
astigmatism itself) and on other higher-order aberrations. On the
other hand, the linear relation between $A_4$ and $a_4$ holds only
when the defocus dominates the seeing blur and pixel size, and there
is always some bias.
Second moments provide an easy and fast way to evaluate the defocus
and astigmatism. To recover the sign of these aberrations, however,
we need to know if the donut is intra- or extra-focal. The moments
are used as a first step in fitting models to a donut image.
Second moments are finite in the geometrical-optics approximation but they
diverge in physical optics because the intensity of a diffraction spot
does not decrease rapidly enough. Practically, only a finite number of
image pixels is considered, hence the divergence of second moments is
not an issue.
The computation of $A_4$ may be used as a more efficient means of
focusing the telescope than the traditional focus sequence.
Figure~\ref{fig:focus} shows that a dependence of the image size on
the true focus has zero slope near $a_4 = 0$, hence the method of
focus sequences (series of images near best focus) has the lowest
sensitivity to focus and the highest sensitivity to seeing variations.
By taking one image sufficiently far from focus and extrapolating
back, we obtain a better sensitivity and less vulnerability to
seeing. However, a small bias due to seeing still remains. This can be
eliminated by taking two images with large defocus bracketing the
expected true focus. Let $A_4^+$ and $A_4^-$ be the focus parameters
(without sign) derived from these two images that correspond to the
focus encoder settings $F^+$ and $F^-$, respectively. Evidently,
\begin{eqnarray}
A_4^+ & = & \alpha (F^+ - F_0) + \delta \nonumber \\
A_4^- & = & \alpha (F_0 - F^-) + \delta ,
\label{eq:foc}
\end{eqnarray}
where $F_0$ is the encoder setting for perfect focus, $\alpha$ is the
proportionality coefficient specific for each telescope, and
$\delta$ is the small bias due to seeing, which we assume to be the
same for both exposures. It is possible to determine two unknowns
$F_0$ and $\delta$ from this system, so the true focus encoder setting
is
\begin{eqnarray}
F_0 = (F^+ + F^-)/2 + (A_4^- - A_4^+)/(2 \alpha) .
\label{eq:F0}
\end{eqnarray}
The reason this method is not in common use at observatories is likely related
to the need to determine the value of $\alpha$ for each telescope/detector
combination and the need to have a reliable focus encoder. However, the method
should be faster and more accurate than traditional focus
sequences. Hopefully, it will become a standard tool in astronomical imaging.
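A sketch of Eq.~(\ref{eq:F0}), with hypothetical variable names of our own:
\begin{verbatim}
def best_focus(F_plus, F_minus, A4_plus, A4_minus, alpha):
    # Eq. (8): seeing-independent best focus from two exposures
    # bracketing the expected focus; alpha is the telescope-
    # specific calibration coefficient.
    return 0.5 * (F_plus + F_minus) \
        + (A4_minus - A4_plus) / (2.0 * alpha)
\end{verbatim}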
\section{Iterative model fitting}
\label{sec:fit}
\begin{figure}[h]
\epsscale{1.0}
\plotone{f4.eps}
\caption{Block-diagram of the fitting algorithm.
\label{fig:fit}}
\end{figure}
The relation between the phase aberrations and resulting image is
doubly non-linear. The first non-linear transformation occurs in the
conversion from the phase distribution $\phi$ to the complex light
amplitude $e^{i \phi}$. The second non-linear transformation is the
calculation of the intensity distribution as a square modulus of the
FFT. Thus, it is not possible to fit a model in a straightforward
way, but rather iterative methods have to be employed. At each
iteration, small differences between the model and the image are
translated into small corrections to the model.
An insight into the fitting process is provided by the theory of
curvature sensing \citep{Roddier90}. A defocused image can be
considered as being an approximate image of the pupil where each
aberration produces a signal proportional to the local curvature
(Laplacian). Thus, in the limit of small aberrations, the intensity
distribution in the donut can be represented as the sum of Laplacians
of the Zernike modes with suitable coefficients and scaling. This
provides the required linearization for deriving the correction at
each iteration step. In other words, a combination of a large known
aberration (defocus) with small high-order aberrations leads to an
approximate linearization of the image-formation process with respect
to high-order terms.
The method of modeling the donut is as follows (Fig.~\ref{fig:fit}).
The first estimate of the Zernike coefficients up to $a_6$ is derived
by the method of moments (we initially set $a_1 = 0\farcs5$). At the
second step, the gradients of the model with respect to each of the
parameters are computed as differences between the model image and
images obtained by varying each Zernike coefficient by a small amount.
These differences are computed for each pixel of the image and
combined in the {\it interaction matrix} $H$ of the size $N_p \times
N_z$, where $N_p$ is the total number of pixels in the image and $N_z$
is the number of fitted Zernike terms. This matrix relates small
variations of the parameters (seeing and Zernike coefficients) to the
variations of the signal -- intensities in each pixel. The seeing is
considered as an additional unknown parameter and fitted jointly with
the aberration coefficients.
The matrix $H$ is inverted, so the differences between the model and
the actual image can be converted into the corrections to the Zernike
coefficients. The new set of coefficients is the new model which,
hopefully, is a better approximation of the donut. The process of
image formation being non-linear, we have to repeat this linearized
fitting again and again iteratively until the model converges. The
algorithm is similar to the closed-loop wavefront control algorithm
used in adaptive optics: at each iteration we obtain a better
approximation to the donut. Further details are given in the Appendix.
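Schematically, the loop of Fig.~\ref{fig:fit} may look as follows (a simplified sketch, not the actual SOAR code; the model evaluator, the finite-difference step and the convergence test are our own choices):
\begin{verbatim}
def fit_donut(image, model, params, step=0.02, n_iter=10):
    # params[0] is the seeing, params[1:] the Zernike
    # coefficients; model(params) returns a synthetic donut.
    p = np.asarray(params, dtype=float)
    data = image.ravel()
    for _ in range(n_iter):
        m0 = model(p).ravel()
        # interaction matrix H (N_p x N_z) by finite differences
        H = np.empty((data.size, p.size))
        for j in range(p.size):
            q = p.copy()
            q[j] += step
            H[:, j] = (model(q).ravel() - m0) / step
        # invert H (SVD-based least squares) for the corrections
        dp, *_ = np.linalg.lstsq(H, data - m0, rcond=1e-3)
        p += dp
        if np.abs(dp).max() < 1e-4 * np.abs(p).max():
            break                   # model has converged
    return p
\end{verbatim}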
The number of ``resolution elements'' across the pupil is of the order
$2 \rho /\epsilon$. Thus, if aberrations of high-order are to be
measured, a larger donut radius $\rho$ is needed. On the other hand,
curvature sensors are known to suffer from severe aliasing, where
un-modeled high-order aberrations distort the low-order
coefficients. Hence, a defocus of $2 \rho/\epsilon \sim n$ is
recommended for sensing aberrations up to the radial order $n$. These
considerations are further developed in the next Section.
\section{Performance of the donut algorithm}
\label{sec:perf}
\subsection{Aliasing}
\begin{figure}[hb]
\plotone{f5.eps}
\caption{Aliasing coefficients of Zernike astigmatisms $a_5$ (full
line) and $a_6$ (dash). Seeing $1''$, pixel scale $0\farcs23$,
defocus $2 \rho = 3.1''$, modeling up to $N_z = 11$.
\label{fig:alias}}
\end{figure}
Suppose we want to measure Zernike coefficients up to 11 (spherical
aberration) by fitting a model to the donut. To what extent is the result
distorted by the presence of high-order aberrations? Let $a_k \neq 0 $
be the amplitude of un-modeled high-order aberration ($k > N_z$) which
produces an error $\Delta a_j$ of the $j$-th coefficient. The ratio
$\Delta a_j /a_k$ is called the {\it aliasing coefficient}.
Figure~\ref{fig:alias} plots these coefficients for astigmatism
($j=5,6$). The $a_5$ term is aliased mostly with $a_{13}$ and
$a_{23}$ assuming seeing of $1''$. The condition $2 \rho/\epsilon \sim n$ is approximately
satisfied in this example. However, if the seeing improves to $0.5''$,
the aliasing coefficient with $a_{13}$ increases from $-0.35$ to $+2$.
Clearly, aliasing can be a problem for a donut sensor, as it is for
any curvature sensor. The evident solution, though, is to increase
the order of the fitted model until all aliased terms are explicitly taken
care of. Another way to reduce the aliasing is to decrease the defocus
to the minimum value required to measure a selected set of
low-order aberrations.
For comparison, we studied the aliasing of astigmatism measured by a
2x2 S-H sensor. We find that, if the full telescope aperture is used,
the aliasing coefficient of $a_5$ with $a_{13}$ is $+1.4$, and that
the aliasing coefficient is even larger for some higher terms. The
aliasing of an S-H sensor can be reduced by reducing the portion of
the aperture used for a 2x2 sensor or by increasing the order of the
sensor. It is clear, however, that aliasing in a low-order S-H sensor
is of the same order as for the donut method, with less options
available for decreasing it.
\subsection{Detector noise}
\begin{figure}
\plotone{f6.eps}
\caption{The rms noise of the astigmatism coefficient $a_5$ for
various diameters of the donut and different CCD pixel scales
(indicated on the plot) under $1''$ seeing. Readout noise 10
electrons, $N_{ph}= 1000$.
\label{fig:ast2}}
\end{figure}
\begin{figure}
\plotone{f7.eps}
\caption{The rms noise of the astigmatism coefficient $a_5$ vs. total
number of photons $N_{ph}$ for donut method ($2 \rho = 2\farcs5$, pixel
size $p=0\farcs75$, readout noise $R=10$) and for a 2x2 S-H sensor
($p =0\farcs75$, $R=10$). In both cases seeing is 1\arcsec.
\label{fig:noisecomp}}
\end{figure}
In some instances it is important to measure optical aberrations with
relatively faint stars. The readout and photon noise may then become
an issue because the light in a donut is spread over many CCD pixels.
The problem can be studied by simulating a series of noisy images and
evaluating the scatter of the measured Zernike coefficients. However,
a much faster analytical evaluation of the errors is available through
the covariance matrix, Eq.~\ref{eq:Cz}. We have verified that this
method gives an answer which is consistent with the results of direct
Monte-Carlo simulation.
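If the covariance formula of the Appendix has the standard weighted-least-squares form $C_z=(H^{T}C^{-1}H)^{-1}$, with $C$ the diagonal matrix of per-pixel photon and readout variances (our assumption here; the exact expression is Eq.~(\ref{eq:Cz})), the analytic error estimate amounts to:
\begin{verbatim}
def zernike_noise(H, model_img, ron=10.0):
    # 1-sigma parameter errors for independent pixels with
    # variance = photon counts + readout noise squared, assuming
    # C_z = (H^T C^-1 H)^-1 (our reading of Eq. A for C_z).
    var = model_img.ravel() + ron**2
    A = H.T @ (H / var[:, None])    # H^T C^-1 H
    return np.sqrt(np.diag(np.linalg.inv(A)))
\end{verbatim}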
For a given total number of detected photons $N_{ph}$ and a given
readout noise $R$, the errors of measured Zernike coefficients depend
on the size of the donut, the size of detector pixel, seeing and
aberrations. In the following we assume that all aberrations except
defocus are corrected, as would be appropriate in an active-optics
application; if this is not true, the results would be different.
An example of optimization for measuring $a_5$ under $1''$ seeing is
shown in Fig.~\ref{fig:ast2} for a faint star, when the noise is
mostly dominated by the detector readout noise. The optimum pixel
scale in these conditions is about $1''$ and the optimum donut
diameter is about $2.5''$. However, large deviations from these
optimum values cause only a minor increase of the noise. The optimum
parameters depend on the Zernike number, on seeing and on the flux
$N_{ph}$. A reasonable choice of parameters can be made to ensure a
near-optimum measurement of several Zernike coefficients for a range
of seeing conditions.
In the case of faint stars when the detector noise $R$ dominates, the
errors of the Zernike coefficients must be proportional to $R/N_{ph}$. The
calculations show this to be approximately true up to $N_{ph} \sim
10\;000$ (for our choice of $R=10$). At larger flux, the errors improve
only as $1/\sqrt{N_{ph} }$. However, the photon-noise errors in the
bright-star regime are so small that they become irrelevant compared
to other errors.
The intensity modulation in the donut increases with increasing number
$j$ (at constant amplitude $a_j$), because it is roughly proportional
to the curvature. Equating the modulation with noise, we expect that
noise propagation decreases with $j$. This is indeed the case. One
notable exception, however, is the spherical aberration which can have
an error much larger than other terms of the same order. We trace this
anomalous behavior to the cross-talk between $a_{11}$ and seeing.
Indeed, processing of real data shows that the estimates of $a_{11}$
and $\epsilon$ are often less repeatable, compared to other terms.
We compared the sensitivity of the donut method for measuring
astigmatism with that of a 2x2 S-H sensor and found that their
performance in the low-flux regime can be very similar
(Fig.~\ref{fig:noisecomp}). The noise was computed by the same
method for both measurement techniques i.e. by relating errors of
pixel intensities directly to the errors of Zernike coefficients.
This should give the lowest possible error. In practice, aberrations
are normally derived in a S-H sensor from centroids of the spots,
hence with somewhat larger errors. Naturally, the noise depends on
the parameters such as defocus, seeing, and pixel size, hence in some
situations S-H sensors can perform better than donut. S-H is to be preferred
for measurement of atmospheric tip-tilt errors. The formal
sensitivity of donut to tip and tilt is only slightly inferior
to that of S-H, but at short exposures the centroids of the donut
images will be severely displaced by higher-order aberrations and will
not provide good measurements of tilts.
\subsection{Convergence and reliability}
The iterative fitting has been tested on different simulated donut
images and always produced the expected result. However, processing
real data is sometimes trickier. The interaction matrix $H$
depends on the aberrations, it changes between different images and
even during the fitting of one image. When a large number of Zernike
terms is considered, it is common to encounter low singular values in
$H$. This means that some combinations of parameters are not well
constrained by the data, hence the noise will be amplified. Leaving
such combinations un-fitted by rejecting small singular values does
not solve the problem because we may obtain a good model of the donut
image with a parameter set which is very different from the true
parameters. This happens when significant high-order aberrations are
present, but the defocus is not large enough, i.e. in the caustic
regime.
One way to get around this problem is to determine high-order
aberrations separately (e.g. by fitting a bright-star image with a
large defocus) and then to include them in the model for low-order
fits. Including such pre-defined parameters (we call them static
aberrations) improves the convergence and the fit quality. Low-order
fits are more stable and give reproducible results. However, the
coefficients of low-order aberrations derived in this way depend on
the adopted static aberrations: a different result is obtained from
the same data when a different vector of static aberrations is
supplied initially.
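In pseudo-code, one iteration of such a constrained fit might look as follows (a schematic sketch: \texttt{model\_image} and \texttt{interaction\_matrix} stand for the full diffraction model and its Jacobian described earlier, and the rejection threshold is an arbitrary assumption):
\begin{verbatim}
import numpy as np

def fit_step(image, params, static, model_image, interaction_matrix,
             svd_reject=1e-3):
    """One Gauss-Newton step of the donut fit.

    params : low-order Zernike coefficients (plus seeing) being fitted
    static : pre-determined high-order 'static' aberrations, held fixed
    """
    resid = (image - model_image(params, static)).ravel()
    H = interaction_matrix(params, static)   # d(model)/d(params)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    # Reject poorly constrained parameter combinations (small singular
    # values) instead of letting them amplify the noise
    s_inv = np.where(s > svd_reject * s.max(), 1.0 / s, 0.0)
    return params + Vt.T @ (s_inv * (U.T @ resid))
\end{verbatim}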
\subsection{Other error sources}
In real life, optical aberrations in the beam change with time because
of the instability of telescope optics, the changing refractive index
of the air in the dome, and seeing. Averaging donut images over a
sufficiently long time $T$ (typically 10-30s) reduces the contribution
of variable aberrations only by a factor of $\sqrt{\tau/T}$, where
$\tau$ is the time constant of the variation. Consider, for example,
a 4-m telescope with 5~m/s wind and $1''$ seeing. The rms amplitude
of the random astigmatism caused by the seeing is 270~nm, according to
the formulae of \citet{Noll}, and its time constant is 0.25~s. Thus,
in a 10-s exposure we expect a random error of astigmatism of the
order of 40~nm, or larger if the wind is slow and/or some aberrations
are produced by air inside the dome. The statistical noise can be
reduced by taking longer exposures but may still remain a dominating
error source.
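For reference, the quoted numbers follow directly from the $\sqrt{\tau/T}$ averaging factor (a one-line check of the example above):
\begin{verbatim}
import math

a_rms, tau, T = 270.0, 0.25, 10.0   # nm, s, s (example above)
print(a_rms * math.sqrt(tau / T))   # ~42.7 nm, i.e. "order of 40 nm"
\end{verbatim}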
If the donut image is blurred in one direction by imperfect guiding or
telescope shake during the exposure, this departure from the ideal
model will result in spurious aberrations, mostly astigmatisms of 2-nd
and 4-th order. Simulations for the case of SOAR show that a blur of
$1''$ causes errors of $a_6$ and $a_{12}$ of only 20~nm, a smaller
blur has negligible effect. Hence the blur is never a problem at
modern telescopes with good tracking.
\section{Examples}
\label{sec:exa}
\subsection{Internal consistency}
\begin{figure}[ht]
\plotone{f8a.eps}
\plotone{f8b.eps}
\caption{Intra-focal (top) and extra-focal (bottom) astigmatic images
taken at SOAR on March 6/7 2005 (on the right) and their corresponding
models (on the left). Pixel size 0\farcs154, field of view 9\farcs85.
The exposure numbers are 113 (top) and 115 (bottom).
\label{fig:113}}
\end{figure}
\begin{table*}
\caption{Some Zernike coefficients ($\mu$m rms) measured on SOAR images
with artificial astigmatism.}
\label{tab:result}
\begin{tabular}{l ccc ccc}
\hline
Image & Seeing, \arcsec & $a_4$ & $a_5$ & $a_6$ & $a_7$ & $a_8$ \\
\hline
113 & 0.936 & $-$3.704 & $-$1.061 & 1.205 & 0.042 & 0.126 \\
114 & 0.978 & $-$3.537 & $-$1.165 & 1.264 & $-$0.006 & 0.130 \\
115 & 1.211 & 3.271 & $-$1.225 & 1.239 & $-$0.055 & 0.033 \\
116 & 1.090 & 3.028 & $-$1.487 & 0.852 & 0.077 & $-$0.242 \\
137a & 0.871 & $-$4.668 & $-$1.446 & 0.133 & 0.570 & 0.783 \\
137b & 0.851 & $-$4.590 & $-$1.431 & 0.135 & 0.555 & 0.762 \\
137c & 0.858 & $-$4.645 & $-$1.426 & 0.080 & 0.623 & 0.783 \\
137d & 0.884 & $-$4.853 & $-$1.504 & 0.185 & 0.630 & 0.770 \\
\hline
\end{tabular}
\end{table*}
Several series of defocused images were taken at the SOAR telescope in
March 2005 and processed with the donut algorithm. One example shown
in Fig.~\ref{fig:113} was acquired with a pixel scale of $0\farcs154$ and
25-s exposure time using a conveniently bright star. An astigmatism
was introduced intentionally by de-tuning the actively controlled
primary mirror. Extra- and intra-focal images were fitted
independently of each other with $N_z = 28$ terms. At each focus
setting, two images were acquired. The defocus of 3~$\mu$m produces
donut images of 4\farcs2 diameter. The results
(Table~\ref{tab:result}) show a good coherence of the measurements,
irrespective of which side of the focus they were taken. The residuals
between model and image are from 5\% to 9\%. The presence of
uncorrected (but well-modeled) high-order aberrations is evident in
Fig.~\ref{fig:113}.
Yet another test was done by fitting defocused images of different
stars in the same exposure. The flux in image 137a is 30 times
higher than in image 137d, yet the Zernike coefficients derived from
these images agree well (Table~\ref{tab:result}). Here, the fit has
been limited to 11 terms (with static aberrations up to $a_{28}$),
because full fitting of 28 terms did not give reproducible results.
This instability is apparently caused by significant high-order
aberrations, as seen in Fig.~\ref{fig:113}.
An estimate of the internal accuracy of the donut method was obtained
by processing several consecutive images. The rms scatter of the
coefficients for 2-nd and 3-rd order aberrations ranges typically from
0.05 to 0.15~$\mu$m for 60-s exposures.
\subsection{Comparison with a Shack-Hartmann WFS}
\begin{figure}[ht]
\plotone{f9a.eps}
\plotone{f9b.eps}
\caption{Comparison between donut and CWFS at SOAR. (a) Astigmatism
changes caused by the telescope motion in elevation as measured by
the CWFS (horizontal axis) and donut (vertical axis). The data was
taken on April 13/14 2006. (b) Two astigmatism coefficients measured
with donut as the mirror shape is de-tuned with an amplitude of
$\pm$1~$\mu$m and step 0.25~$\mu$m (April 15/16, 2006).
\label{fig:CWFS}}
\end{figure}
\begin{figure}
\plotone{f10.eps}
\caption{Variation of the coma coefficient $a_7$ across the field in one of
the detectors of the Mosaic imager on the Blanco telescope.
\label{fig:coma}}
\end{figure}
The donut method has been compared with the SOAR high-order
Shack-Hartmann control WFS (CWFS) that is part of the active-optics
system used for tuning the primary mirror. The response of the
primary mirror actuators was calibrated independently by the
manufacturer and is $\sim 1.6$ times larger than the aberrations
measured by the CWFS.
The donut data were taken with the SOAR imager and binned to the
pixel scale of $0\farcs154$. Three 60-s exposures for each setting
were processed independently, providing an estimate of the measurement
errors. The CWFS data are single measurements with 10~s exposure,
more vulnerable to the insufficiently averaged atmospheric and dome
turbulence than donuts. The measurements with donut and CWFS are
sequential as these devices occupy different foci of SOAR. The
Zernike coefficients obtained with donut were rotated to the CWFS
system by the known angle between these instruments. Both instruments
give Zernike coefficients on the same scale -- rms microns of
wavefront aberration.
Figure~\ref{fig:CWFS}a shows a comparison between the two sensors
as the telescope was tipped in elevation and brought back. The
systematic trend of the $0^\circ$ astigmatism with elevation is
evidenced by both methods, with some offset and scale factor apparent
from the linear regression. The scatter of points around this
regression is typical for such tests and compatible with the internal
consistency of each method.
For another test, the shape of the SOAR primary was distorted by
``dialing in'' astigmatism coefficients in $0^\circ$ and $45^\circ$
with a full amplitude $\pm 1$~$\mu$m and a step 0.25~$\mu$m (these
numbers refer to the primary mirror aberrations as determined by the
manufacturer). The mirror was initially flattened with the CWFS. The
result (Fig.~\ref{fig:CWFS}b) shows that the donut method measures
these aberrations with a coefficient of $\sim 1.6$ (same as the CWFS)
and an offset presumably arising from the fixed difference of optical
aberrations between the foci of the CWFS and the imager.
\subsection{Mosaic imager at the Blanco telescope}
The classical 4-m Blanco telescope at Cerro Tololo is equipped with
the wide-field CCD mosaic at its prime focus. The pixel scale is
$0\farcs27$. We processed donut images extracted from the standard
focusing sequences (exposure time 10~s per focus setting, maximum
defocus 1.5 to 2~$\mu$m). Although these data were not intended for
the aberration analysis, fitting them with donut models was quite
successful, with typical rms intensity residuals of 6\% for 28
Zernike terms. The fitting takes 20--30~s on a 1~GHz PC with $K=256$
grid.
Comparing the coefficients of low-order aberrations determined from
the first and the last images in each sequence, we find a typical
difference of 0.1~$\mu$m or less, i.e. similar to the SOAR data
presented in Table~\ref{tab:result}. The most likely reason for these
differences is a real slow variation of the aberrations between
exposures in the focusing sequence.
We processed the first image of the focusing sequence extracted from
11 different stars in one of the detector segments. These images are
simultaneous and the scatter of the measured coefficients in this test
was much smaller, from 0.025 to 0.073~$\mu$m. Part of this scatter is
caused by real variations of the aberrations across the
field. Figure~\ref{fig:coma} shows a clear trend in the coma
coefficient $a_7$ as a function of the $y$-coordinate of the star.
This example shows how a quantitative analysis of optical aberrations
can be done simply, as a by-product of standard observations.
It is possible to measure aberrations across the field of a prime-focus
camera with a Hartmann mask, but the donut technique makes this task
much easier. The rms accuracy can reach 25~nm, or $\lambda/25$.
\section{Conclusions}
\label{sec:concl}
We have shown that focus and astigmatism can be evaluated
quantitatively from the second moments of defocused images. One useful
application of this analysis will be a fast and accurate focusing
procedure for classical imaging, suggested here as a replacement of
traditional focusing sequences. Furthermore, donut images can be
fitted directly to a set of Zernike coefficients (complemented with an
additional parameter, seeing), offering a practical way to measure
aberrations and to tune the optics of ground-based telescopes.
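Schematically, such moment-based estimates can be computed as follows (an illustrative sketch only; turning these proxies into calibrated defocus and astigmatism coefficients requires the relations derived earlier in the paper, which are not reproduced here):
\begin{verbatim}
import numpy as np

def second_moments(img):
    """Intensity-weighted second moments of a defocused image."""
    y, x = np.indices(img.shape)
    w = img / img.sum()
    xc, yc = (w * x).sum(), (w * y).sum()
    mxx = (w * (x - xc) ** 2).sum()
    myy = (w * (y - yc) ** 2).sum()
    mxy = (w * (x - xc) * (y - yc)).sum()
    return mxx, myy, mxy

def focus_astig_proxies(img):
    mxx, myy, mxy = second_moments(img)
    size = mxx + myy        # grows with |defocus| (and seeing)
    astig_0 = mxx - myy     # 0-degree astigmatism proxy
    astig_45 = 2.0 * mxy    # 45-degree astigmatism proxy
    return size, astig_0, astig_45
\end{verbatim}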
The donut method proposed here is different from the standard
curvature sensing in several aspects. First, only one defocused image
is needed. Second, no simplifying assumption of linearity is made,
hence the defocus may be quite small while measuring aberrations of
significant amplitude -- comparable to the defocus itself. Third, we do
not use the intensity transport equation \citep{Roddier90} but rather
compute the image model by a full Fraunhofer diffraction integral
using an FFT. Finite detector pixel size and additional blur caused by
the seeing are explicitly taken into account. These two effects
usually wash out any traces of diffraction, so the calculated
monochromatic image is a good model of a wide-band image as well.
The down-side of the full diffraction image modeling is a slow
calculation time (a few seconds for a 4-m telescope) and a restriction
of the modeled field of view. The donut method will work best for
small defocus values and for measuring low-order aberrations. On the
other hand, classical curvature sensing would be probably a better
choice for high-resolution sensing, where a wave-front map (rather
than Zernike coefficients) is sought.
We plan to apply the donut technique to the closed-loop control of the
SOAR active optics and to optical characterization of other
telescopes at CTIO. The method seems to be simple and mature enough to
be offered to other interested users. So far, it is implemented in the
IDL language.
\acknowledgments
We thank D.~Maturana, S.~Pizarro and H.E.M. Schwarz for taking
defocused images at SOAR, B.~Gregory for processing the images and his
valuable comments, A.~Rest for the help in extracting the Mosaic data.
The comments of P.~Hickson on the draft version of this paper are
gratefully acknowledged.
\section{Introduction}
Geometrically frustrated lattices have crucial impacts on the emergence of exotic electronic states
in strongly correlated systems~\cite{poiblanc1,Aoki}; examples are the triangular layered
cobaltates $\mathrm{Na_xCoO_2}$~\cite{co}, the anisotropic triangular
lattice $\mathrm{Cs_2CuCl_4}$~\cite{Cs} and the three-dimensional
pyrochlore material $\mathrm{KOs_2O_6}$~\cite{kos}.
In particular, a resonating valence bond (RVB) spin liquid or a valence
bond crystal may exist in frustrated quantum magnets. There is hope that
an unconventional superconducting state may emerge upon doping of
frustrated magnets, as has been pointed out in recent
theoretical studies~\cite{YZhou02,Ogata03,Cs-Chung,did,Gan06,huang}.
The recently synthesized two-dimensional frustrated
material~\cite{ss3} $\mathrm{SrCu_2(BO_3)_2}$ is an important
compound, topologically equivalent to the
Shastry-Sutherland~\cite{ss2,ss3} lattice.
The spin-$\frac{1}{2}$ $\mathrm{Cu^{2+}}$ ions lie in two-dimensional
$\mathrm{CuBO_3}$ layers decoupled from each other by planes of
$\mathrm{Sr^{2+}}$ ions; the antiferromagnetic exchange couplings between
the $\mathrm{Cu^{2+}}$ ions are described by the Heisenberg Hamiltonian of the
$\mathrm{SS}$ lattice, which motivates us to investigate its doping
properties. This lattice was studied many years ago as a two-dimensional
exactly solvable~\cite{ss1} spin model; a schematic
Shastry-Sutherland lattice is illustrated in Fig.~\ref{1}.
Let $J$ and $J'$ be the exchange couplings along the square lattice and diagonal links, respectively.
The product state of valence-bond singlets on disjoint diagonal links
is the exact ground state for $J'/J > 1.477$~\cite{ss8,ss81,ss82}.
Experiments showed that $J'/J=1.574$ is the optimal
value~\cite{jpbj} for the insulator $\mathrm{SrCu_2(BO_3)_2}$.
\begin{figure}
\includegraphics[width=7cm]{Fig1.eps}
\caption{Schematic structure of Shastry-Sutherland lattice. It includes
four sublattices $1..4$. The hopping
integral and spin-spin coupling are $t$ and $J$ on the n.n.
links (solid lines) and $t'$ and $J'$ on the diagonal links (dashed
lines). We use $a,b$ to distinguish two diagonal links with different orientations.}\label{1}
\end{figure}
There are many previous investigations of the doping effect on the Shastry-Sutherland
lattice, and various techniques have been used~\cite{ss5,ss4,ss6,ssh}.
By using slave-boson mean-field theory~\cite{ss5}, the competing orders of the staggered flux state and
the $d$-wave superconducting state were investigated in a specific parameter regime. Similar results have been obtained
in a recent variational Monte Carlo study~\cite{ss6}.
Based on the analysis of the $t$-$J$-$V$ model via the bond-operator formulation,
a number of superconducting states including $s$-wave, $(s+id)$-wave, and plaquette
$d$-wave were found as ground states of the doped Shastry-Sutherland lattice~\cite{ssh}.
On the other hand, exact diagonalization approaches~\cite{ss4} have been employed to study the
ground state of finite systems, and no superconducting order was found to be favored upon doping.
In this paper, we apply the plain vanilla version of RVB
theory~\cite{Zhang88,vannila} to study the emergence of unconventional superconductivity.
We define $\eta=t'/t$ as the frustration amplitude,
where $t'$ and $t$ are hopping integrals on diagonal links and
square lattice links, respectively, and use $t$-$t'$-$J$-$J'$ model to
study the doping effect on the Shastry-Sutherland lattice.
The competition among various superconducting states will be examined for
both hole-doping and electron-doping cases.
The phase diagram is depicted as a function of $\eta$
and the doping concentration $\delta$. In particular, four distinct
ground states show up, which we classify
in terms of the relative phases of the mean-field pairing amplitudes.
In the limiting case $|\eta| \ll 1$, it is well known that the pairing symmetry belongs
to the $d_{x^2-y^2}$-wave, with superconducting order parameters on the square lattice links $\Delta_x=-\Delta_y$ and
vanishing pairing parameters on the diagonal links, $\Delta_a=\Delta_b=0$. Another candidate is the
$s$-$s$-wave pairing symmetry with $\Delta_x=\Delta_y$ and
$\Delta_a=\Delta_b$, where the relative phase shift between these two distinct types of links is equal to
$\pi$. The third candidate is the
staggered flux state, which can be stable only for negative $\eta$ and
small doping. In this state, the complex particle-hole mean-field parameter is
modulated alternately by a staggered magnetic flux $\pm\phi$.
The last candidate is a normal metal with vanishing mean-field parameters.
Our calculation shows that from weak to intermediate frustration,
the $d$-wave state persists over a large region of electron and hole doping.
Around the symmetric point $|\eta|=1$, the symmetry of the ground state
is sensitive to the doping level, since the energies of three distinct states,
$d$-wave, $s$-$s$-wave pairing and staggered flux, are almost identical.
For larger frustration $|\eta|>1$, the ground state has an $s$-$s$-wave symmetry for
both hole and electron doping.
The rest of the paper is organized as follows. In Sec. II, we
propose the formalism of renormalized mean-field theory to study the
$t$-$t'$-$J$-$J'$ model Hamiltonian on the Shastry-Sutherland lattice.
In Sec. III, we present our numerical results of renormalized mean-field theory as functions of
frustration and doping level, and mean-field phase diagram as well.
Finally a summary is given in Sec. IV.
\section{Formalism}
A primitive unit cell of the Shastry-Sutherland lattice includes four
inequivalent sites; we consider a $t$-$t'$-$J$-$J'$ model on this lattice.
The Hamiltonian can be written as
\begin{eqnarray}
H = &-&\sum_{\langle ij \rangle \sigma} t_{ij} \hat{P}(c_{i\sigma
}^{\dagger }c_{j\sigma }+ h.c.)\hat{P}+ \sum_{\langle ij
\rangle}J_{ij}
\vec{S}_{i}\cdot \vec{S}_{j} \nonumber \\
& - & \mu\sum_{i} n_{i},
\end{eqnarray}
where $c_{i\sigma }^{\dagger }$ creates a hole with spin
$\sigma$ at site $i$, $\vec{S}_{i}$ is a spin operator, $\mu$ is
the chemical potential, $\langle ij \rangle$ denotes a square lattice or diagonal link
on the lattice, $t_{ij}$ and $J_{ij}$ stand for the
hopping integrals and antiferromagnetic exchange couplings,
respectively, $t_{ij}=t$ and $J_{ij}=J$ on the square lattice links, while
$t_{ij}=t'$ and $J_{ij}=J'$ on the diagonal links, as shown in
Fig.~1. We use $t$ as the energy unit and set $t/J=3$. We choose
$J'/J=(t'/t)^{2}$ to be consistent
with the superexchange relation of $J= 4t^2/U$ in the large Hubbard
$U$ limit. The projection operator~\cite{Zhang88,vannila}
$\hat{P}=\prod\limits_{i}(1- n_{i\uparrow }n_{i\downarrow })$
removes all the doubly occupied states.
We define the particle-particle condensate mean field and the particle-hole condensate
mean field as
\begin{eqnarray}
{\Delta}_{ij} &=& \langle
c^{\dag}_{i\uparrow}c^{\dag}_{j\downarrow}-
c^{\dag}_{i\downarrow}c^{\dag}_{j\uparrow}\rangle_0 \nonumber
\\
\xi_{ij} &=& \langle c^{\dag}_{i\uparrow}c_{j\uparrow}+
c^{\dag}_{i\downarrow}c_{j\downarrow}\rangle_{0},
\end{eqnarray}
where $\langle \cdot \rangle_0$ denotes the expectation value with respect to
states free of the no-double-occupancy constraint. Although the
number of independent parameters on the Shastry-Sutherland lattice is twelve, our
calculation shows that it can be reduced to eight owing to symmetry.
The effect of the projection operator is taken into account by a set
of renormalized factors~\cite{gutzwiller,vallhardt}, which are
determined by statistical countings. Within the Gutzwiller
approximation, the energy of physical state $|\psi\rangle$ can be
reduced to that of state $|\psi_0\rangle$ which is free of double
occupancy constraint, i.e.,
$\langle\psi|H|\psi\rangle=\langle\psi_0|H'|\psi_0\rangle=\langle\psi_0|g_tH_t+g_sH_s|\psi_0\rangle$.
In the homogeneous case the renormalization factors are $g_{t}=2\delta
/(1+\delta)$ and $g_{s}=4/(1+\delta)^2$, where $\delta$ denotes
the doping density. Thus, we have the effective Hamiltonian,
\begin{eqnarray}
H_{eff} & = & \sum_{\langle ij \rangle \sigma} -g_t t_{ij}
(c_{i\sigma }^{\dagger }c_{j\sigma } + h.c) +\sum_{\langle ij
\rangle} g_s J_{ij}
\vec{S}_{i}\cdot \vec{S}_{j} \nonumber \\
& - & \mu\sum_{i} n_{i},
\end{eqnarray}
and the resulting mean-field Hamiltonian can be expressed as
\begin{eqnarray}\label{effh}
H_{MF}&=&\sum_{ \langle ij \rangle \sigma}-\frac{3}{8} g_s J_{ij}
[\xi _{ij} c_{i\sigma }^{\dagger }c_{j\sigma } + \Delta _{ij} c_{i
\sigma}^{\dagger }c_{j \bar{\sigma}} + h.c. ]\nonumber\\
&\,&-g_t t_{ij} (c_{i\sigma }^\dag c_{j\sigma } + h.c) + const,
\end{eqnarray}
with $const=\frac{3}{8}g_s \sum_{\langle ij\rangle}J_{ij}[ |\xi _{ij}|^2 + | \Delta _{ij} |^2]$. We diagonalize the mean-field
Hamiltonian (\ref{effh}) in momentum space; all the local
order parameters and the chemical potential $\mu$ are
self-consistently obtained for each set of frustration parameter $\eta$
and doping density $\delta$. With this procedure, the lowest-energy state
can be determined.
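The structure of this self-consistency loop can be sketched as follows (a schematic outline in which \texttt{build\_Hk} and \texttt{expectations} are placeholders for the momentum-space Bogoliubov--de~Gennes matrix of Eq.~(\ref{effh}) on the four-sublattice cell and for the model-specific expectation values, respectively):
\begin{verbatim}
import numpy as np

def gutzwiller_factors(delta):
    """Renormalization factors of the homogeneous Gutzwiller scheme."""
    return 2.0 * delta / (1.0 + delta), 4.0 / (1.0 + delta) ** 2

def self_consistent_loop(build_Hk, expectations, kgrid, delta,
                         params0, mu0, tol=1e-6, mix=0.5, max_iter=500):
    """Iterate the mean fields (xi_ij, Delta_ij) and chemical potential."""
    g_t, g_s = gutzwiller_factors(delta)
    params, mu = np.asarray(params0, dtype=complex), mu0
    for _ in range(max_iter):
        acc, filling = np.zeros_like(params), 0.0
        for k in kgrid:
            E, V = np.linalg.eigh(build_Hk(k, params, mu, g_t, g_s))
            p_k, n_k = expectations(E, V)  # <c+ c>, <c+ c+> averages
            acc, filling = acc + p_k, filling + n_k
        acc, filling = acc / len(kgrid), filling / len(kgrid)
        mu += 0.1 * ((1.0 - delta) - filling)  # steer toward target filling
        if np.max(np.abs(acc - params)) < tol:
            break
        params = (1.0 - mix) * params + mix * acc
    return params, mu
\end{verbatim}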
\section{Numerical Results of Phase Diagram and Mean-field Theory}
In this section, we present our numerical results of renormalized mean-field theory
on the Shastry-Sutherland lattice. The mean-field order parameters depend on both
frustration parameter $\eta$ and doping level $\delta$.
In our calculations, we choose several typical frustration amplitudes
to analyze the pairing symmetry at different doping
levels. A larger frustration parameter $|\eta|$ corresponds
to stronger interactions on the diagonal bonds,
and the symmetric point $|\eta|=1$ has the strongest frustration.
We will start from the phase diagram, then provide detailed discussion of mean-field order
parameters as functions of frustration parameter $\eta$ and doping level $\delta$.
\begin{figure}
\includegraphics[width=8cm]{Fig2.eps}
\caption{Phase diagram of $t$-$t'$-$J$-$J'$ model on the Shastry-Sutherland
lattice as functions of doping density $\delta$ and
frustration amplitude $\eta$. The thick solid line denotes a first order
phase boundary while the dashed line corresponds to a second order
transition.}\label{2}
\end{figure}
As shown in Fig.~\ref{2}, there exist four distinct phases in the phase diagram.
It is obvious that the ground state has a $d_{x^2-y^2}$-wave (or $d$-wave) symmetry in the limit
$|\eta| \ll 1$ at finite doping. Our results show that the $d_{x^2-y^2}$-wave state is stable in the wide
parameter region $-\sqrt{0.9}<\eta<\sqrt{0.96}$ at finite doping.
Previous studies have shown the robustness of $d$-wave pairing against
weak frustration on both triangular and checkerboard lattices~\cite{Ogata03,did,Gan06,huang}.
It seems that such robustness is universal for weakly frustrated systems.
At large $|\eta|>1$, the ground state has an $s$-$s$-wave pairing symmetry
with $\Delta_x=\Delta_y$ and $\Delta_a=\Delta_b$, while the relative phase between $\Delta_x$
and $\Delta_a$ is $\pi$.
We note that the recently discovered Fe-based superconductors comprise two families, the 1111 systems ReOFeAs with rare-earth ions
Re~\cite{Kamihara} and the 122 systems AeFe$_2$As$_2$ with alkaline-earth
elements Ae~\cite{Rotter}, and an $s$-$s$-wave pairing symmetry was proposed as a popular
candidate for their superconducting pairing symmetry~\cite{Mazin}.
In between the above two regions, there are two non-superconducting states
in a small parameter region around $|\eta|=1$.
The region around $\eta=-1$ corresponds to the staggered flux state at low doping,
while the normal metal state prevails for $\eta \geq 1.2$ at finite doping ($\delta > 0.10$).
It is interesting that there is an abrupt change of the superconducting order parameters between the
$d$-wave and $s$-$s$-wave states around $\eta=-1$, where the phase transition is first order.
Around $\eta=1$, the phase transition from the $s$-$s$-wave to the $d$-wave state is a
weakly first-order transition in which the parameters change almost continuously at the boundary.
Moreover, the phase transition between the staggered-flux
state and the $d$-wave state is also first order. The other phase boundaries correspond to second-order transitions.
\begin{figure}
\includegraphics[width=9cm]{Fig3.eps}
\caption{The magnitudes of the mean-field order
parameters $\Delta$ and $\xi$ as functions of
$\delta$ for (a) $\eta=\sqrt{0.8}$ and (b) $\eta=1$. }\label{3}
\end{figure}
As we pointed out already, in the limit of weak frustration $|\eta| \ll 1$ upon doping,
the model Hamiltonian reduces to
the well-known $t$-$J$ model, in which the $d$-wave superconducting state is the ground state.
Our calculations are performed for various frustration parameters as well as doping levels.
In a wide parameter region, the $d$-wave state appears to be robust as the ground state.
In particular, our calculations show that the $d$-wave state has the lowest energy
for positive $\eta$ less than $\sqrt{0.96}$.
In Fig.~3, we present the amplitudes of the mean-field parameters as functions of the hole density $\delta$
for $\eta=\sqrt{0.8}$ and $\eta=1$, respectively.
As shown in Fig.~3(a) for $\eta=\sqrt{0.8}$, a typical $d$-wave state is obtained and the parameter $\xi_d$ shows
little doping dependence.
In the parameter region $\sqrt{0.96}<\eta\leq1$, the $d$-wave and $s$-$s$-wave superconducting
states compete strongly. The mean-field order parameters of the ground state are discontinuous
as functions of the hole density $\delta$. For better illustration, we take the symmetric point $\eta=1$.
As displayed in Fig.~3(b), the ground state has $s$-$s$-wave symmetry at small doping, while the
$d$-wave state prevails at larger doping levels. The critical doping level is $\delta_c \simeq 0.035$.
\begin{figure}
\includegraphics[width=9.5cm]{Fig4.eps}
\caption{Panels (a) and (b) describe the mean-field order parameters as functions
of $\delta$ for $d$-wave and $s$-$s$-wave state for $\eta=1$, respectively.
Panels (c) and (d) correspond to the evolutions of chemical potential and energy per site as a
function of $\delta$ for two competing states. }\label{4}
\end{figure}
To reveal the competition between $s$-$s$-wave and $d$-wave
states more clearly, we compare the mean-field order parameters,
chemical potential as well as energy per site for these two states
in Fig. 4 at the symmetric point $\eta=1$.
Fig.~\ref{4}(a) shows the mean-field parameters of the $d$-wave state,
and Fig.~\ref{4}(b) those of the $s$-$s$-wave state, in which
$|\Delta_d|$ is larger than $|\Delta_{x,y}|$, where the subscript
$d$ denotes the diagonal bonds. For the $s$-$s$-wave state, all pairing
parameters go non-monotonically to zero, after which the metallic
state emerges smoothly. We plot the parameters of the $s$-$s$-wave state
from $\delta=0.005$, since at half filling there is no
self-consistent $s$-$s$-wave solution. Fig.~\ref{4}(c)
shows the crossing of the chemical potentials of the two competing
states at the transition point $\delta\simeq0.035$.
In Fig.~\ref{4}(d), the corresponding crossing of the energy per site for
the two states is visible as well.
It is rather clear that a zero-temperature first-order quantum
phase transition occurs at the transition point $\delta\simeq0.035$.
\begin{figure}
\includegraphics[width=9cm]{Fig5.eps}
\caption{Amplitudes of the mean-field order parameters as functions of
$\delta$ for $\eta=-\sqrt{0.95}$. Panels (a) and (c) correspond to
$d$-wave and staggered flux state, respectively. Panel (b) describes
the accumulated phase of $\xi$ for staggered flux state.}\label{5}
\end{figure}
From Fig.~\ref{2}, one can see that near $\eta=-1$ a staggered-flux state may appear
in a small parameter region. For instance, we plot the mean-field parameters of the $d$-wave state and the
staggered-flux state for $\eta=-\sqrt{0.95}$ in Figs.~\ref{5}(a) and (c), respectively. The calculation shows that in the staggered
flux state $\xi_{ij}$ is complex. The phase of $\xi_d$ is
$\pi$, and the accumulated phase of $\xi_{ij}$ around a plaquette is
independent of $\eta$ and decreases linearly with increasing doping,
as shown in Fig.~\ref{5}(b). At half filling, it is hard to obtain a
self-consistent solution. For doping $\delta$ less than $0.04$,
the staggered flux state is stable, while at higher doping levels the $d$-wave
state has lower energy. For $\eta<-1$, the $s$-$s$-wave state emerges
upon the introduction of mobile charges and is energetically favored over the
staggered flux state.
\begin{figure}
\includegraphics[width=9cm]{Fig6.eps}
\caption{Amplitudes of the mean-field order parameters as functions of
$\delta$ for (a) $\eta=\sqrt{1.005}$ and (b) $\eta=\sqrt{1.08}$.
The inset of (a) zooms in the low doping region.}\label{6}
\end{figure}
For large frustration amplitudes, the interactions on the diagonal bonds
play a dominant role in determining the superconducting pairing symmetry.
When $\eta$ takes a value slightly larger than $1$,
the superconducting pairing symmetry changes from $s$-$s$-wave to $d$-wave upon doping,
and the mean-field parameters vary rather smoothly; this transition is weakly first order.
Fig.~\ref{6}(a) shows that, for $\eta=\sqrt{1.005}$, $\Delta_d$ goes non-monotonically to zero and the
ground state evolves from the $s$-$s$-wave state to the $d$-wave state with
increasing doping.
A precise calculation of the pairing parameters shows that around the
critical point $\Delta_{x,y}\neq0$. This is illustrated in the
inset of Fig.~\ref{6}(a) and indicates that the transition is
weakly first order. Fig.~\ref{6}(b) presents the
mean-field parameters as functions of $\delta$ for
$\eta=\sqrt{1.08}$. We find that a larger $\eta$ corresponds to a
smaller amplitude of $\Delta_{x,y}$ in the $d$-wave state. For
$\eta\geq1.4$, the amplitude of the $d$-wave state vanishes and a metallic
state follows the $s$-$s$-wave state.
\begin{figure}
\includegraphics[width=9cm]{Fig7.eps}
\caption{Amplitudes of the mean-field order parameters as functions of
$\delta$ for (a) $\eta=-\sqrt{2}$ and (b) $\eta=\sqrt{2}$.}\label{7}
\end{figure}
For large frustration amplitudes, the $s$-$s$-wave state is the ground
state for both positive and negative $\eta$. Fig.~\ref{7}(b) takes
$\eta=\sqrt{2}$ as an illustration. Upon increasing the doping to a
considerably high level, both $|\Delta_{x,y}|$ and $|\Delta_d|$
approach zero. However, they do not reach zero simultaneously, and
$\Delta_{x,y}$ decreases more rapidly, implying that the
superconducting order parameter may survive only on the diagonal bonds in some cases.
As positive $\eta$ becomes larger, no metallic state appears, since the
pairing parameters of the $s$-$s$-wave state retain finite amplitude even at
high doping levels.
For large negative frustration amplitudes, the $s$-$s$-wave
state is the ground state at all doping levels. The amplitude of $\Delta_d$
is larger than in the corresponding positive case, since
the negative $t'$ frustrates the hopping and thus enhances the pairing
amplitude; this indicates that superconductivity favors electron
doping, as shown in Fig.~\ref{7}(a). It should be pointed out
that in the strongly frustrated case our mean-field theory cannot
reproduce the exact dimer ground state at half filling.
\section{Summary}
We have employed the renormalized mean-field theory to study the
$t$-$t'$-$J$-$J'$ model on the geometrically frustrated Shastry-Sutherland
lattice for both hole and electron doping cases. Our
calculation shows that the ground state of the doped system
depends on the frustration amplitude $\eta$ and doping
level $\delta$. For weak frustration $|\eta| \ll 1$, the $d$-wave state is
stable in a large parameter region, in agreement with the case of the $t$-$J$ model on the
square lattice.
For strong frustration $|\eta| > 1$, the $s$-$s$-wave state
dominates in a wide parameter region. This feature
has also been found in the doped triangular and checkerboard antiferromagnets.
When approaching the most frustrated point
$\eta=1$, the $d$-wave state competes with the $s$-$s$-wave state; the
phase transitions are first order, with the parameters changing suddenly
at the critical point. Near $\eta=-1$, the staggered flux state dominates at low doping.
When the frustration amplitude is not very large, the
ground state changes with increasing doping from the $s$-$s$-wave state to the $d$-wave state via
a weakly first-order transition.
Moreover, we have found an enhancement of the superconducting order parameter for
negative $\eta$, because the negative $t'$ introduces
frustration in the kinetic energy and thereby enhances the pairing amplitude.
Our theoretical predictions might be examined in future experiments on doped
SrCu$_2$(BO$_3$)$_2$.
\section{Acknowledgments}
H.X.H. would like to thank Profs. F.C. Zhang, Y.Q. Li and Y. Jiang for
helpful discussions. This work was supported by the National Natural
Science Foundation of China (Grants No. 10747145 and No. 10874032) and
the State Key Programs of China (Grant No. 2009CB929204).
Y.C. acknowledges the support from Shanghai Municipal Education Committee.
Phonon-mediated particle detectors~\cite{enss:2008a} (often called ``bolometers'') nowadays have important applications in neutrino physics,~\cite{giuliani:2012a,nucciotti:2014a} dark-matter searches~\cite{pirro:2017a} and rare nuclear decay investigations.~\cite{belli:2019a} They also provide outstanding $\alpha$, $\beta$, $\gamma$, X-ray and neutron spectroscopy.~\cite{enss:2008a,pirro:2017a,belli:2019a,bekker:2016a}
Neutrinoless double-beta ($0\nu2\beta$) decay~\cite{dolinski:2019a} is a hypothetical rare nuclear transition of an even-even nucleus to an isobar with two more protons, with the emission of just two electrons. Its observation would provide a unique insight into neutrino physics.~\cite{vergados:2016a} Bolometers based on Li$_2$MoO$_4$~crystals are promising detectors for a next-generation $0\nu2\beta$~decay experiment.~\cite{bekker:2016a,armengaud:2017a,armengaud:2020a} They embed the favorable candidate $^{100}$Mo, maximising the detection efficiency. The $0\nu2\beta$~decay signature is a peak in the sum energy spectrum of the two emitted electrons, expected at 3.034~MeV for $^{100}$Mo. In a bolometer, the energy deposited by a particle in the crystal is converted into phonons, which are then detected by a suitable sensor.
The greatest challenge in $0\nu2\beta$~decay search is the control of the radioactive background, due to the long expected lifetime of the process ($> 10^{25}-10^{26}$~y).~\cite{gando:2016a,agostini:2020a,adams:2019a} The experiments are located underground under heavy shielding. To reduce the current background level of bolometric experiments, it is mandatory to reject $\alpha$ or $\beta$ events --- defined ``surface $\alpha$'s or $\beta$'s'' in the following for brevity --- induced by radioactive impurities located either close to the surface of the crystal itself or to that of the surrounding structure.\cite{artusa:2014a,alduino:2017a} Surface $\alpha$'s can be rejected in scintillating materials --- such as Li$_2$MoO$_4$~--- by simultaneously detecting scintillation and phonons for the same event~\cite{pirro:2006a,artusa:2014a,poda:2017a} and exploiting the generally lower light yield of $\alpha$'s with respect to $\beta$'s,~\cite{tretyak:2010a} but the rejection of surface $\beta$'s requires dedicated techniques capable of tagging surface events in bolometers.~\cite{foggetta:2005a,marnieros:2008a,nones:2010a,nones:2012a,agnese:2013a}
In this letter, we report an effective method to identify both surface $\alpha$'s and $\beta$'s in Li$_2$MoO$_4$~bolometers. The discrimination is achieved by coating a Li$_2$MoO$_4$~crystal side with a metallic film acting as a pulse-shape modifier for events that release energy close to the coated face. When an ionizing event occurs in a dielectric crystal kept at mK temperature, the deposited energy is readily converted to athermal phonons with typical energies of the order of tens of meV, to be compared with the few $\mu$eV thermal-bath energy.
The energy down-conversion of these athermal phonons occurs mainly by anharmonic decay and progressively slows down, as the phonon lifetime scales as the fifth power of the energy.~\cite{orbach:1964a,bron:1982a} If a sensor sensitive mainly to thermal phonons is used (as in this work), the rise time of the signal is in the $\sim 10$~ms range, which corresponds to the typical thermalization time of the deposited energy. However, thermalization can speed up via a metallic film covering a crystal side. If the particle is absorbed close to the film, a significant fraction of its energy is trapped in the metal in the form of hot electrons, excited by the absorption of the particle-generated athermal phonons. The energy is quickly thermalised in the electron system, so that phonons of much lower energies are re-injected in the crystal from the film. Signals from events occurring close to the film will therefore present a shorter rise time and a modified time evolution. We show here that surface events can be tagged according to this approach.
All the detectors in this work share a common basic structure, which is similar to that used in the $0\nu2\beta$~experiments CUORE,~\cite{adams:2019a} LUMINEU,~\cite{armengaud:2017a} CUPID-Mo,~\cite{armengaud:2020a,armengaud:2020b} CUPID-0~\cite{azzolini:2019a} and in the dark-matter experiment EDELWEISS~\cite{armengaud:2017b} as far as the phonon readout is concerned. The surface sensitivity was studied above ground with prototypes of reduced size with respect to the final $0\nu2\beta$~bolometers. The energy absorber of the bolometers described here is a single Li$_2$MoO$_4$~crystal~\cite{grigorieva:2017a} with a size of $20 \times 20 \times 10$~mm$^3$ and a mass of $\sim 12$~g. All the tests involve just a single $20 \times 20$~mm$^2$ coated side.
\begin{figure}[t]
\includegraphics[scale=0.26]{photo-set-up.pdf}
\caption{\label{fig:detector-assembly} Scheme (left) and photograph (right) of the detector assembly with a bare Li$_2$MoO$_4$~crystal. A Ge thermistor and a Si:P heater (used to stabilize the bolometric response) are glued on the upper face of the crystal, which is held by polytetrafluoroethylene (PTFE) elements, not shown in the scheme. A uranium source --- visible in transparency in the photograph --- is placed below the crystal. A bolometric light detector --- removed to take the photograph --- faces the upper side of the crystal. The reflecting foil forms an optical cavity that aids light collection.}
\end{figure}
The phonon sensor is a neutron transmutation doped Ge thermistor~\cite{haller:1984a} (NTD) with a size of $3 \times 3 \times 1$~mm$^3$. Its resistivity increases exponentially as the temperature decreases.~\cite{efros:1975a} The NTD is glued on the crystal by means of a two-component epoxy. The glue provides a slow transmission interface, making the NTD sensitive mainly to thermal phonons.
We used uranium radioactive sources to test the detector surface sensitivity. They were obtained by drying up a drop of uranium acid solution on a copper foil. These sources provide two main $\alpha$ lines at $\sim 4.2$ and $\sim 4.7$~MeV from $^{238}$U and $^{234}$U respectively, affected by a significant straggling due to partial $\alpha$ absorption in the source residues and/or in the copper substrate. $^{238}$U disintegration is followed by two consecutive $\beta$ emissions, from $^{234}$Th (with a half-life of 24.1~d and an end-point of 0.27~MeV) and from $^{234m}$Pa (with a half-life of 1.2~min and an end-point of 2.27~MeV). The $^{238}$U $\alpha$ rate and the $^{234m}$Pa $\beta$ rate are extremely close.
\begin{figure*}[t]
\includegraphics[scale=0.48]{Pd-film.pdf}
\caption{\label{fig:Pd-film} Particle identification obtained by a Li$_2$MoO$_4$~detector with a 10-nm-thick Pd coating (see inset on the left) exposed to a uranium source. Left: The pulse-shape parameter $m/S_m$ is plotted as a function of the heat energy (estimated by $S_m$) deposited in the Li$_2$MoO$_4$~crystal, with a $\gamma$-based calibration. Surface events appear as a population with lower $m/S_m$ values. They are selected by visual inspection and highlighted in red. The neutron-capture line from the reaction $^6$Li(n,t)$\alpha$ lays in the interior-event band. The $\alpha$ particles are mis-calibrated by about $20$~\% due to both pulse-shape effects and intrinsic different responses for $\alpha$'s and $\beta$/$\gamma$'s. Right: The Li$_2$MoO$_4$~scintillation light yield (LY) is plotted for the same pulses on the same energy scale. The LY is expressed by the ratio of the energy deposited in the light detector by scintillation photons (in keV) to that deposited in the Li$_2$MoO$_4$~crystal as heat (in MeV), for the same event. The same surface events highlighted in the left panel are shown in red. Surface $\beta$'s lay in the high-LY band, while $\alpha$'s and the neutron capture events are well separated in the low-LY band.
}
\end{figure*}
The detector assembly is shown in Fig.~\ref{fig:detector-assembly}. A first test was conducted with a Li$_2$MoO$_4$~crystal without coating to establish the bare detector performance. All subsequent tests have adopted the configuration shown in Fig.~\ref{fig:detector-assembly}, where the metal-coated side, which is optically polished before the film deposition, faces the radioactive source. A bolometric light detector based on a Ge wafer~\cite{armengaud:2017a,armengaud:2020a} is used to separate $\alpha$ from $\beta$/$\gamma$/muon events by detecting the scintillation light.~\cite{pirro:2006a,artusa:2014a,poda:2017a}
The detectors were cooled down in a dilution refrigerator located in IJCLab, Orsay, France.~\cite{mancuso:2014a} All the data discussed here have been collected with the detector thermalized to a copper plate at $\sim 22$~mK (Fig.~\ref{fig:detector-assembly}). A current of the order of $\sim 5$~nA is injected in the NTD, rising the detector temperature to about $\sim 25$~mK, at which the NTD resistance is about 0.5~M$\Omega$. The voltage signal amplitude across the NTD is $\sim 60$~$\mu$V/MeV for the bare crystal, corresponding to an NTD temperature change of $\sim 0.5$~mK/MeV. The pulse rise time (from 10\% to 90\% of maximum amplitude) is typically in the 3--10~ms range and the pulse decay time (from 90\% to 30\% of maximum amplitude) is tens of ms. The signals are read out by a DC-coupled low-noise voltage-sensitive amplifier.~\cite{arnaboldi:2002a} In all the tests, the Li$_2$MoO$_4$~detector is energy-calibrated using $\gamma$~peaks of the environmental background, and the light detector using the cosmic muons crossing the Ge wafer~\cite{novati:2019a}. In the Li$_2$MoO$_4$~heat channel, we obtained routinely good energy resolutions of 5--10~keV FWHM for environmental $\gamma$ peaks in the 0.2--1~MeV region.
The first test to attain surface sensitivity was performed with a 10-$\mu$m-thick Al-film coating. The details of the achieved results are reported elsewhere.~\cite{bandac:2020a,khalife:2020a} We remind here that an excellent separation of surface $\alpha$ particles was demonstrated thanks to pulse-shape discrimination.
The best separation between surface $\alpha$'s and any type of interior events was obtained via a specially developed pulse-shape parameter --- extensively used here --- that we will designate as $m/S_m$.~\cite{bandac:2020a} To construct it, the signals are passed through a digital optimal filter,~\cite{gatti:1986a} whose transfer function is built using the noise power spectrum and the pulse shape of an interior event. This filter provides the best estimator of the signal amplitude $S_m$ (i.e. energy). An individual pulse $S(t)$ is plotted point by point against an average pulse $A(t)$ --- formed from a large sample of interior events and normalized to~1 --- obtaining approximately a straight line. The related slope parameter $m$ is an estimator of the pulse amplitude as well. The ratio $m/S_m$ turns out to be very sensitive to the pulse shape. Interior events have $m/S_m \sim 1$, as expected. On the contrary, $m/S_m$ deviates from~1 for surface $\alpha$ events.
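In pseudo-code, the construction of $m/S_m$ might be summarized as follows (a minimal sketch; the optimal-filter normalization and the regression conventions are our assumptions rather than the exact analysis pipeline):
\begin{verbatim}
import numpy as np

def optimal_filter_amplitude(pulse, template_fft, noise_psd):
    """Amplitude estimator S_m from a frequency-domain optimal filter."""
    p_fft = np.fft.rfft(pulse)
    w = np.conj(template_fft) / noise_psd
    return (w * p_fft).sum().real / (w * template_fft).sum().real

def m_over_Sm(pulse, average_pulse, template_fft, noise_psd):
    """Pulse-shape parameter: slope of S(t) vs A(t), over S_m."""
    S_m = optimal_filter_amplitude(pulse, template_fft, noise_psd)
    A = average_pulse                       # normalized to unit amplitude
    m = (A * pulse).sum() / (A * A).sum()   # point-by-point slope
    return m / S_m  # ~1 for interior events; deviates for surface ones
\end{verbatim}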
For the Al-coated Li$_2$MoO$_4$~crystal, the separation between the interior and the surface $\alpha$ events from a uranium source is better than $10 \sigma$ in terms of $m/S_m$ distributions.~\cite{bandac:2020a} Unfortunately, only a slight hint of separation of the surface $\beta$ events emitted by the same source was observed,~\cite{bandac:2020a,khalife:2021a} ruling out Al coating as a viable method for a complete surface-event tagging.
Aluminum was chosen as it is superconductive at the bolometer operation temperature, with a critical temperature $T_C (\mathrm{Al}) \sim 1.2$~K.~\cite{Cochran:1958a} This leads to a negligible contribution to the heat capacity of the full bolometer, as the electron specific heat of superconductors vanishes exponentially with the temperature. We remark that the heat capacity of a bolometer must be as low as possible to achieve high signal amplitudes. In fact, no deterioration of the detector sensitivity was observed with respect to the bare Li$_2$MoO$_4$~crystal. However, the behaviour of superconductors can spoil surface particle tagging. The prompt absorption of athermal phonons by the film breaks Cooper pairs and forms quasi-particles.
Theoretically, quasi-particle lifetime diverges as the temperature of the superconductor decreases,~\cite{kaplan:1976a,barends:2008a} although it is often experimentally found to saturate at low temperatures.~\cite{devisser:2014a,fyhrie:2020a} In aluminum, at very low temperatures such as ours ($T/T_C < 0.02$), we expect the quasi-particle lifetime to be as large as several ms,~\cite{schnagel:2000a,baselmans:2009a,fyhrie:2020a,devisser:2014a} similar to the thermalization time of interior events.
This mechanism competes with the faster thermalization that should be provided by the film.
Driven by these considerations, we tested a Li$_2$MoO$_4$~bolometer with a normal-metal coating. At low temperatures, the electron specific heat of normal metals is proportional to the temperature and tends to dominate over the crystal heat capacity, which scales as $T^3$ according to the Debye law. The thickness of normal-metal films must be substantially smaller than the aluminum ones. We chose palladium as a coating material as it can provide continuous thin films down to 2~nm thickness and no challenging radioactive isotopes are present in its natural composition. A thickness of 10~nm was chosen as a good compromise between heat capacity reduction and phonon absorption probability. The particle-identification results are encouraging, as shown in Fig.~\ref{fig:Pd-film}: both surface $\alpha$'s and $\beta$'s are well separated from the interior events.
Unfortunately, the heat capacity of the Pd film~\cite{mizutani:2001a} competes with that of the Li$_2$MoO$_4$~crystal~\cite{musikhin:2015a}, seriously affecting the sensitivity of the detector, which was only $\sim 23$~$\mu$V/MeV, about one third of that achieved with the bare crystal. Therefore, this option is not viable for a full coating of the crystal.
To overcome the heat-capacity problem, we developed a detector coated with an Al-Pd bi-layer (100~nm and 10~nm thick respectively, with Al on the top), which is superconducting by proximity effect below $T_C$(Al-Pd)~$= 0.65$~K. The superconductive gap induced in Pd by the Al film reduces substantially the Pd specific heat with respect to the normal state. This gap is however low enough to ensure the fast thermalization of the energy deposited by surface events. In fact, the surface-event discrimination capability was fully maintained (see Fig.~\ref{fig:Al-Pd}, left). The detector sensitivity was measured to be 43~$\mu$V/MeV, almost doubled with respect to the pure Pd film.
\begin{figure*}[t]
\includegraphics[scale=0.49]{Al-Pd.pdf}
\caption{\label{fig:Al-Pd} Particle identification obtained by a Li$_2$MoO$_4$~detector with an Al-Pd coating exposed to a uranium source. The $\alpha$ events are removed by a light-yield cut. Left: in a plot of the pulse-shape parameter $m/S_m$ versus energy, the surface events (in red) lay below a black curve defining a 3$\sigma$ acceptance region for the interior events (in blue). The analysis is carried out in the [0-3000]~keV energy interval, which is divided in several sub-intervals. For each of them, a double Gaussian fit of the $m/S_m$ distribution is performed to separate the two populations. An example is provided in the inset. The black point is located 3$\sigma$ at the left of the mean of the Gaussian of the interior events. The black curve fits the black points by a power-law function. Right: Energy spectra (with and without source) of the surface events selected according to the procedure illustrated on the left. The live times of the two measurements are normalized. The fit of the source data accounts for the two simulated $\beta$ contributions of the uranium source and that of the background.
}
\end{figure*}
\begin{figure}[t]
\includegraphics[scale=0.49]{alpha-source.pdf}
\caption{\label{fig:alpha-source} Energy spectrum collected by a Li$_2$MoO$_4$~detector with an Al-Pd coating exposed to a uranium source after selection of the $\alpha$ events by a light-yield cut with $\sim 100$\% efficiency. The spectrum is calibrated using the $\alpha$-line positions. The measurement is the same that provided the source data shown in Fig.~\ref{fig:Al-Pd}. The straggling can be reproduced by assuming five source components in copper. Each component is a 6~mm diameter disk with a given thickness. The active nuclei are assumed to be uniformly distributed in each disk. The exact source structure is unknown, but our goal here is to set up a phenomenological model capable of explaining the observed straggling. $^{238}$U and $^{234}$U are not in secular equilibrium, as already observed in these types of liquid sources.}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=0.46]{grid.pdf}
\caption{\label{fig:grid} Particle identification obtained by a Li$_2$MoO$_4$~detector with an Al-Pd grid coating exposed to a uranium source. The event selection is performed as in Fig.~\ref{fig:Al-Pd}, left. In the top inset, the grid-coated crystal is shown. In the bottom inset, pulses from a surface (red) and an interior (blue) event are shown, corresponding to a deposited energy of about 1~MeV.}
\end{figure}
We performed two runs with the bi-layer detector. In the first, a uranium source was present, while the second was a background measurement in the same configuration. The trigger rate was $\sim 0.2$~Hz with the source. The contribution of the source is at the level of $\sim 0.03$~Hz. First, we developed a method to separate the surface $\beta$ component. The events below the black curve in the left panel of Fig.~\ref{fig:Al-Pd} --- collected in a source run --- are selected as surface events, while those above represent more than 99\% of the interior event population. The same analysis was performed for the background run. By means of Geant4-based~\cite{agostinelli:2003a} Monte-Carlo simulations (using the G4RadioactiveDecay and Decay0~\cite{ponkratenko:2000a} event generators), we were then able to confirm that the surface $\beta$ events isolated at low energies actually come from the radioactive source. We built a model to predict the $\beta$ spectrum shape considering the observed $\alpha$ straggling (Fig.~\ref{fig:alpha-source}) and the $\beta$ interactions in the detector. We then fitted the experimental $\beta$ spectrum using the predicted shape and taking the background into account (right panel in Fig.~\ref{fig:Al-Pd}). The total number of $^{234m}$Pa decay events returned by the fit --- the only free parameter --- is 3526(81). To build the source model, we set a uranium-source depth profile capable of reproducing the observed $\alpha$ spectrum and the related straggling, as shown in Fig.~\ref{fig:alpha-source}. From the model and the experimental number of $\alpha$ counts it was possible to predict independently the expected total number of $^{234m}$Pa events, which turned out to be 3455(273), in excellent agreement with that deduced from the selection of the source $\beta$ events. The efficiency in selecting surface $\beta$ events can be estimated as 102(8)\%.
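A minimal sketch of the bin-by-bin double-Gaussian separation used to define the acceptance curve of Fig.~\ref{fig:Al-Pd} might look like this (the initial-guess values are arbitrary assumptions; in practice the fits are supervised):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

def acceptance_point(m_over_sm, nbins=60):
    """3-sigma lower edge of the interior-event Gaussian in one bin."""
    h, edges = np.histogram(m_over_sm, bins=nbins)
    x = 0.5 * (edges[1:] + edges[:-1])
    p0 = [h.max(), 1.0, 0.01, 0.3 * h.max(), 0.95, 0.02]  # interior ~ 1
    popt, _ = curve_fit(two_gauss, x, h, p0=p0)
    m1, s1 = popt[1], abs(popt[2])   # interior-event component
    return m1 - 3.0 * s1             # cut position for this energy bin
\end{verbatim}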
The $\beta$-particle range in Li$_2$MoO$_4$~is of the order of 2~mm at 1~MeV and 4~mm at 2~MeV. Therefore, we can separate events that deposit a significant amount of energy up to $\sim 4$~mm from the film, well beyond its thickness. We performed then a last test by replacing the continuous Al-Pd film with an Al-Pd grid. The width of the grid lines was 70~$\mu$m and the spacing between each line was 700~$\mu$m (see inset in Fig.~\ref{fig:grid}). The purpose of using a grid is manifold: (1) further reduction of the heat capacity of the coating; (2) possibility to extract scintillation light through the coating; (3) availability of geometrical parameters to possibly tune the discrimination depth. The grid was tested with another uranium source, prepared with the same method as the first one, but about twice less intense.
The detector with grid coating can separate surface $\beta$ events (see Fig.~\ref{fig:grid}). The $\beta$ selection efficiency was found to be 93(10)\%, in good agreement with the continuous-film results. In addition, we measured a discrimination power of about 4.5$\sigma$ for surface $\alpha$ events using the $m/S_m$ parameter. In terms of detector performance, we observed an almost full recovery of the detector sensitivity, which was $\sim 51$~$\mu$V/MeV for $\beta$/$\gamma$ events. Therefore, the grid method is currently our protocol for surface event discrimination.
In conclusion, we have shown that both $\alpha$ and $\beta$ particles absorbed close to a metal-coated surface of a Li$_2$MoO$_4$~bolometer can be rejected with high efficiency by pulse-shape discrimination. The prospects of this approach for $0\nu2\beta$~searches are promising. In fact, the current background model of the future $0\nu2\beta$~experiment CUPID~\cite{CUPID:2019a} predicts a background level of 0.1~counts/(tonne~y~keV). Next-to-next generation experiments aim at a reduction by an additional factor of 10. Since surface $\beta$ events contribute significantly to the current background level, a necessary condition to achieve the desired reduction is to reject them with an efficiency of up to 90\%. This is achievable with the technique described here.
This work is supported by the European Commission (Project CROSS, Grant ERC-2016-ADG, ID 742345). The ITEP group was supported by the Russian Scientific Foundation (grant No. 18-12-00003). F.A.D., V.I.T. and M.M.T. were supported in part by the National Research Foundation of Ukraine (grant No. 2020.02/0011). The PhD fellowship of H.K. has been partially funded by the P2IO LabEx (ANR-10-LABX-0038) managed by the Agence Nationale de la Recherche (France). The dilution refrigerator used for the tests and installed at IJCLab (Orsay, France) was donated by the Dipartimento di Scienza e Alta Tecnologia of the Insubria University (Como, Italy).
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\nocite{*}
\section{Introduction} \label{sec:intro}
The study of absorption lines in the spectra of high redshift Quasi-Stellar Objects (QSO) is a fundamental tool for Cosmology \citep{meiksin09, mcquinn16}. Along the lines of sight to these powerful light beacons, every parcel of the intervening gas selectively absorbs wavelengths of light, providing information about the spatial distributions, motions, temperature, chemical enrichment, and ionization histories of gaseous structures from redshift seven and beyond until the present.
In particular, thanks to QSO absorption lines, it is possible to address issues like: What were the physical conditions of the primordial Universe? What fraction of the matter was in a diffuse medium and what fraction and how early condensed in clouds? Where are most of the baryons at the various redshifts? When and how did the formation of galaxies and large scale structure start? How early and in what amount have metals been produced? When and how (after the Dark Ages following recombination) did the Universe get re--ionized? What was the typical radiation field, how homogeneous, and what was producing it? Which constraints on cosmological parameters and types of dark matter (e.g. neutrinos) are derived from the large scale structure traced by the inter-galactic medium (IGM)? Does the standard Big Bang nucleosynthesis model make the correct predictions about the primordial element abundances and the temperature evolution of the CMB? Do fundamental constants of Physics (e.g. the fine structure constant, $\alpha$, or the proton-to-electron mass ratio, $\mu$) vary with cosmic time? Does General Relativity correctly describe the expansion of our Universe?
In order to efficiently pursue these and other similar lines of investigation, it is essential to have the brightest possible light beacons in the background.
Historically, observations in the Southern hemisphere have been hampered by the lack of luminous targets with respect to the North, due to a lesser investment of telescope time to search for bright QSOs in the South. As an example, the {\it Quasar Deep Spectrum} observations carried out with the UVES spectrograph \citep{dodorico16} have targeted the QSO HE0940-1050 (z$_{\rm em} = 3.09$, V=16.9) and required 64.4 hours to reach a Signal-to-Noise-Ratio (SNR), per resolution element ($R = 45,000$), of 120-500 and 320-500 in the \OVI{}/\Lya{} region and in the \CIV{} region, respectively. HE0940-1050 is still the best target at this redshift in the South, but it is not comparable to the (lensed) beacons B1422+231 (z$_{\rm em} = 3.62$, V=15.8) or APM 08279+5255 (z$_{\rm em} = 3.91$, V=15.2), which have been available for observers in the Northern Hemisphere.
It is particularly urgent now to fill this gap in view of the upcoming new instrumentation in the Southern hemisphere, like ESPRESSO at VLT, and the planning of new experiments (e.g. the Sandage test with the high-resolution spectrograph HIRES at the E-ELT \citep{cristiani07, liske08}, or the test of the stability of the fine-structure constant and other fundamental couplings \citep{leite16}).
Moreover, finding bright radio-loud QSOs at high-$z$ is particularly important to study the 21cm forest in absorption with future breakthrough facilities, like the Square Kilometer Array (SKA) in the Southern hemisphere, as proposed by \citet{Carilli02}. In addition, UV/optically bright QSOs at $z>3$ with lines of sight free from Lyman Limit Systems (LLS) up to the \HeII{} forest are particularly rare but extremely valuable to study the \HeII{} Reionization \citep{syphers14, worseck16, worseck19}.
By comparing QSO surface densities, it is statistically evident that relatively high-$z$ objects of bright apparent magnitudes must also be present in the Southern hemisphere: of the 22 known QSOs with $z>2.8$ and $V<17$, only 5 are at $\delta < 0^\circ$, and all the 3 with $V<16$ are in the North.
Such an imbalance exists because historically many surveys (e.g. the SDSS) have focused their efforts mainly in the Northern hemisphere. The present time however is ripe for a dramatic change of scenario thanks to new surveys available over the whole sky, or concentrating on the Southern hemisphere, such as Gaia DR2, Skymapper, 2MASS, and WISE \citep{RefGaiaMain, RefGaiaDR2, RefSkymapper, Ref2MASS, RefWise}.
In this paper we describe the first results of a program aiming at filling this gap in the Southern hemisphere, finding the brightest QSOs at $z>2.5$ that will be observed at high resolution with the present and future breakthrough facilities.
\begin{deluxetable*}{|l|c|c|c|c|c|}
\tablecaption{Cumulative surface density of QSOs at different redshifts and $i$-band magnitude limits expected from the luminosity function of \citet{kulkarni18}. The minimum and maximum range of surface density in a given redshift interval for QSOs brighter than a given $i$-band magnitude limit is provided. The expected bright QSO number counts are based on the best fit of individual luminosity functions by \citet{kulkarni18} (their Table 2), as well as on the global fit with a continuous evolution in the range $0<z<7$ also by \citet{kulkarni18} (models 1, 2, and 3). All the surface densities are expressed in unit of $10^{-4} deg^{-2}$.\label{tab:surfdens}}
\tablehead{
\colhead{$i \le$} &
\colhead{$\Sigma_{QSO}(2.5<z<3.0)$} &
\colhead{$\Sigma_{QSO}(3.0<z<3.5)$} &
\colhead{$\Sigma_{QSO}(3.5<z<4.0)$} &
\colhead{$\Sigma_{QSO}(4.0<z<4.5)$} &
\colhead{$\Sigma_{QSO}(4.5<z<5.0)$}
}
\startdata
15.5 & 0.10-0.29 & 0.00-0.14 & 0.00-0.09 & 0.00-0.01 & 0.00-0.00 \\
16.0 & 0.51-2.03 & 0.05-0.56 & 0.00-0.45 & 0.00-0.03 & 0.00-0.02 \\
16.5 & 2.65-12.1 & 0.42-2.42 & 0.04-1.64 & 0.00-0.11 & 0.00-0.04 \\
17.0 & 11.7-42.6 & 2.03-10.8 & 0.26-6.26 & 0.05-0.50 & 0.01-0.19 \\
17.5 & 51.5-132.2 & 10.8-44.9 & 1.90-22.5 & 0.42-1.91 & 0.15-0.63 \\
18.0 & 214.3-412.3 & 51.1-182.0 & 10.5-80.3 & 2.36-8.41 & 0.77-2.91 \\
\enddata
\end{deluxetable*}
\begin{deluxetable*}{|c|c|c|c|c|c|}
\tablecaption{Observed cumulative surface density of QSOs with $|b|> 25^\circ$ at different redshifts and $i$-band magnitudes (with $i \ge 15$). In the Northern Hemisphere only the QSOs in the SDSS footprint have been considered, while in the South the known QSOs before the present survey in the Skymapper footprint from both \citet{DR14Q} and \citet{Veron10} have been considered. All numbers are scaled to $10^4$ sq.deg. to allow a direct comparison with Tab.~\ref{tab:surfdens}.\label{tab:ObsQSOs}}
\tablehead{
\colhead{$i \le$} &
\colhead{$\Sigma_{QSO}(2.5<z<3.0)$} &
\colhead{$\Sigma_{QSO}(3.0<z<3.5)$} &
\colhead{$\Sigma_{QSO}(3.5<z<4.0)$} &
\colhead{$\Sigma_{QSO}(4.0<z<4.5)$} &
\colhead{$\Sigma_{QSO}(4.5<z<5.0)$}
}
\startdata
& North - South & North - South & North - South & North - South & North - South \\
15.5 & 0.0 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 \\
16.0 & 4.3 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 \\
16.5 & 7.5 -- 1.6 & 3.2 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 \\
17.0 & 25.6 -- 11.3 & 9.6 -- 3.2 & 0.0 -- 0.0 & 0.0 -- 0.0 & 0.0 -- 0.0 \\
17.5 & 85.4 -- 33.0 & 42.7 -- 13.7 & 9.6 -- 2.4 & 2.1 -- 0.8 & 0.0 -- 0.8 \\
18.0 & 324.4 -- 86.9 & 140.8 -- 41.8 & 30.9 -- 9.7 & 8.5 -- 0.8 & 4.3 -- 3.2 \\
\enddata
\end{deluxetable*}
\begin{figure*}
\plotone{map.png}
\caption{Maps of the locations of the sources in the {\it main sample} (\S\ref{sec:mainsample}, gray regions, a darker color indicates a higher density of sources). The locations of the QSOs in the SDSS DR14Q \citep{DR14Q} are represented by blue shaded regions. The new QSOs identified in this work are shown with filled circles, whose color indicates the redshift. Upper panel: equatorial coordinates; lower panel: Galactic coordinates.}
\label{Fig:skymap}
\end{figure*}
\section{The need and design of a new survey}
The typical range of apparent magnitudes of interest, having in mind a follow-up with high resolution spectroscopy, is $i \lesssim 17$ at $z \sim 2.5$ and $i \lesssim 18$ at $z \sim 4$.
In order to estimate the expected surface densities of QSOs, we have adopted the parameterization of the luminosity function by \citet{kulkarni18}. We extract random values of
redshift and $M_{1450}$ absolute magnitude following the best-fit luminosity functions in different redshift bins from Table 2 of \citet{kulkarni18}. Then we associate each simulated QSO with a template from the Polletta empirical library of AGNs \citep{polletta} and convert the absolute magnitude into observed magnitudes in the adopted photometric system (i.e. Skymapper $u$, $v$, $g$, $r$, $i$, $z$; Gaia $BP$, $G$, $RP$; 2MASS $J$, $H$, $K$; WISE W1, W2, W3, W4; see the next sections for a detailed description). We assume here null Galactic dust extinction since, as we discuss in the following, we select targets at high Galactic latitudes.
Working at bright absolute magnitudes, the predicted counts are likely to be affected by small-number statistics. To mitigate this effect, we simulate a sky area of $10^5$ sq.~deg., i.e. $\magcir 10$ times larger than the area available to the present survey, and repeat the simulation 10 times. This choice reduces the shot noise in the simulated number counts.
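To make the procedure concrete, the following Python sketch illustrates one way to rejection-sample absolute magnitudes from a double power-law QLF; the parameter values shown are placeholders for illustration, not the actual best-fit values of \citet{kulkarni18}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def qlf(M, phi_star, M_star, alpha, beta):
    # Double power-law luminosity function Phi(M) [mag^-1 Mpc^-3].
    dM = M - M_star
    return phi_star / (10**(0.4 * (alpha + 1) * dM) +
                       10**(0.4 * (beta + 1) * dM))

def draw_M1450(n, M_min=-30.0, M_max=-26.0, **pars):
    # Rejection sampling of absolute magnitudes from the QLF.
    grid = np.linspace(M_min, M_max, 1000)
    f_max = qlf(grid, **pars).max()
    out = np.empty(0)
    while out.size < n:
        M = rng.uniform(M_min, M_max, n)
        keep = rng.uniform(0.0, f_max, n) < qlf(M, **pars)
        out = np.concatenate([out, M[keep]])
    return out[:n]

# placeholder parameters for a single redshift bin
M1450 = draw_M1450(10000, phi_star=1e-7, M_star=-26.0,
                   alpha=-3.5, beta=-1.7)
\end{verbatim}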
According to \citet{kulkarni18}, the best fit values of the QSO luminosity function (QLF) in their Table 2 can be affected by systematic errors due to the adopted survey selection functions. This is particularly true at $z=2$--4, where discontinuities and scatter in the QLF parameters appear over short redshift intervals. To avoid such discontinuities, we have computed the surface density also adopting the QLF resulting from a global continuous fit over the redshift range $0<z<7$ by \citet{kulkarni18}, with a complex parameterization of the redshift evolution of the slopes, $\Phi^*$, and $M^*$ parameters. We also used their Models 1, 2, and 3 parameterizations to derive our estimates of the bright QSO number counts at $z>2.5$.
In Table \ref{tab:surfdens} we summarize the expected cumulative surface densities of QSOs in different bins of i-band magnitude and redshift. We provide the minimum and maximum expected values for the integral number counts based on the best fit values and on the models 1, 2, and 3 by \citet{kulkarni18}.
We do not consider here the effect of strong lensing, which can increase the luminosity of high-$z$ QSOs if their lines of sight are well aligned with the deep potential wells produced by galaxy over-densities or by single massive galaxies. The adopted luminosity functions, indeed, could be already affected by strong lensing in the bright end, especially at high-$z$ \citep{fan19,pacucci19}.
In the following, we will use our predictions in Table \ref{tab:surfdens} as a reference for the expected bright QSO number counts. We expect that they should not be strongly affected by incompleteness, at least at very bright absolute magnitudes.
In Table \ref{tab:ObsQSOs} we compare the expected number of QSOs at galactic latitudes $| b | > 25^\circ $ with the number of presently known QSOs in the Northern and Southern hemispheres, respectively. As already known, a significant discrepancy is present between the surface densities in the Northern and Southern hemispheres, in particular at $2.5\le z\le 4$. This is largely due to the strong efforts, mainly by the Sloan Digital Sky Survey \citep{sdss}, devoted to the search for bright QSOs in the North. It is thus clear that a survey of bright high-$z$ quasars is still missing in the Southern Hemisphere.
Comparing the observed surface densities of bright QSOs at $z\ge 2.5$ in the North from Table \ref{tab:ObsQSOs} with the predicted ones in Table \ref{tab:surfdens}, it is clear that some of the models by \citet{kulkarni18} underestimate the true number counts. In particular, their models 2 and 3 predict the lower boundaries in Table \ref{tab:surfdens}. At completion, our survey will probably allow us to provide an assessment of the bright side of the quasar luminosity function at $z\ge 2.5$.
\section{A new selection of bright QSO candidates}
\subsection{The {\it main sample}}
\label{sec:mainsample}
In order to select new bright QSO candidates at redshift $\gtrsim 2.5$ in the Southern hemisphere (declination $<$~0$^\circ$) we have taken advantage of the following databases:
\begin{itemize}
\item The Skymapper survey (DR1.1, \citealt{RefSkymapper});
\item The Gaia DR2 data release (DR2, \citealt{RefGaiaMain}, \citealt{RefGaiaDR2});
\item The WISE survey \citep{RefWise};
\end{itemize}
We considered all sources in the Skymapper survey with the following constraints:
\begin{enumerate}
\item Galactic latitude $|b| > $~25$^\circ$;
\item Magnitude in the $i$ band fainter than 14 and brighter than 18;
\item Flags in the $i$ and $z$ bands equal to zero (i.e. availability of reliable $i$ and $z$ magnitudes);
\item Availability of the Gaia magnitude in the $G$ band;
\item Distance to the closest WISE source $<$~0.5'';
\item Distance to the closest Gaia (DR2) source $<$~0.5'';
\item Signal--to--noise ratio of the matching WISE source in each of the first three bands $>$~3 (i.e. availability of reliable magnitudes in these bands).
\end{enumerate}
We limit our analysis to the magnitude range $i=$~14--18 in order to keep our samples as small as possible. Besides, sources brighter than $i=14$ would hardly be high-$z$ QSOs, and sources fainter than $i=18$ are not interesting for our purposes. The constraint 3), in particular the request of a reliable $i$ magnitude, limits the effectiveness of this selection to redshifts $z \lesssim 5.3$. For the selection of higher redshift QSOs this constraint has to be relaxed. We discarded the regions of the Large and Small Magellanic Clouds to avoid crowding and extinction. The initial sample (hereafter {\it main sample}) contains \nmain{} objects spanning approximately 12,400 square degrees. When available, we also collected the data in the following bands:
\begin{itemize}
\item $u$, $v$, $g$, and $r$ from Skymapper;
\item $BP$ and $RP$ magnitudes from Gaia;
\item J, H, and K from the 2MASS \citep[][requiring a matching distance $<$~1.5'']{Ref2MASS}.
\end{itemize}
The angular distance matching radii for the above mentioned catalogs (and other reference catalogs, see below) have been determined by empirically checking the distributions of the angular separations (see Fig.~\ref{Fig:match_radius}), with the aim of recovering the great majority of the true matches while minimizing the number of spurious associations.
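As an illustration, the cuts defining the {\it main sample} can be expressed as a single boolean mask over the cross-matched table. The following Python sketch assumes a pre-merged catalog with hypothetical column names (\texttt{glat}, \texttt{i\_psf}, etc.), which are ours and not those of the actual catalogs.
\begin{verbatim}
import pandas as pd

# Cross-matched Skymapper + Gaia + WISE table;
# column names are placeholders for illustration only.
cat = pd.read_csv("skymapper_gaia_wise.csv")

mask = (
    (cat["glat"].abs() > 25.0)                        # 1) |b| > 25 deg
    & cat["i_psf"].between(14.0, 18.0)                # 2) 14 < i < 18
    & (cat["i_flags"] == 0) & (cat["z_flags"] == 0)   # 3) clean i and z
    & cat["gaia_g"].notna()                           # 4) Gaia G available
    & (cat["wise_sep"] < 0.5)                         # 5) WISE match < 0.5"
    & (cat["gaia_sep"] < 0.5)                         # 6) Gaia match < 0.5"
    & (cat[["w1_snr", "w2_snr", "w3_snr"]] > 3).all(axis=1)  # 7) WISE S/N
)
main_sample = cat[mask]
\end{verbatim}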
\begin{figure}
\epsscale{1.2}
\plotone{matchradius.png}
\label{Fig:match_radius}
\caption{Histograms of the angular distances between the Skymapper sources in the {\it main sample} and the matched sources in the considered catalogs. Each histogram has been normalized by its maximum in order to show all of them in the same plot.}
\end{figure}
\subsection{Source classification in the {\it main sample}}
\label{sec:classification}
In order to identify stars in the main sample we used the following criteria:
\begin{itemize}
\item parallax (as measured by Gaia) significantly different from zero ($>3 \sigma$);
\item Gaia proper motion along RA or DEC significantly different from zero ($>3 \sigma$).
\end{itemize}
\nstars{} objects out of \nmain{} (83.2\%) meet at least one of the two above criteria and in the following they will be considered as {\it bona fide} stars.
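A minimal sketch of this astrometric cut, operating on the same hypothetical cross-matched table used above (Gaia-like column names are ours), is:
\begin{verbatim}
import numpy as np

def significant(value, error, nsigma=3.0):
    # True where the measurement differs from zero at > nsigma.
    return np.abs(value) > nsigma * error

is_star = (
    significant(cat["parallax"], cat["parallax_error"])
    | significant(cat["pmra"], cat["pmra_error"])
    | significant(cat["pmdec"], cat["pmdec_error"])
)
\end{verbatim}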
In order to identify known QSOs and extragalactic objects in the {\it main sample} we matched it against the following catalogs:
\begin{itemize}
\item The SDSS DR14Q quasar catalog \citep[526,356 sources,][]{DR14Q} finding \nsdss{} matching entries within 0.5'';
\item The 13th edition of the Veron--Cetty catalog \citep[167,566 sources,][]{Veron10}, finding \nveron{} matching entries within 2.5'' (only sources with a reliable spectroscopic redshift estimate have been considered);
\item The 2dFGRS catalog \citep{2dfgrs}, finding \ntwodfgrs{} entries within 2'' (only sources with absorption spectra have been considered).
\end{itemize}
In this way we identified \nconfirmed{} spectroscopically confirmed QSO/AGN in the redshift range $0.005 < z < 5.06$, and \ngal{} sources with absorption spectra and no significant proper motion or parallax measurement, i.e., non-active galaxies (mainly at $z \lesssim 0.5$). In total, \nknown{} sources ($84.0\%$) in the {\it main sample} have a reliable object-type identification. The remaining \nunk{} sources build up the {\it unknown} sample.
\subsection{The {\it QSO candidate} sample}
\label{sec:cca}
In order to select new QSO candidates we need a method to identify the QSO characterizing properties among the \nunk{} sources in the {\it unknown sample}. However, we have no access to their spectra, hence we must search for those properties in the available magnitudes. Historically, this task has been accomplished by means of color selections (e.g. \citealt{2002-Richards-QSOSelection-SDSS}, \citealt{2013-Assef-SelectionWISE}, \citealt{2017-Tie-SelectionDES}), i.e. by cuts based on empirically identified linear combinations of magnitudes (the so-called {\it colors}).
Here we follow a similar approach, but we identify the cuts in an automatic fashion using a machine learning procedure based on the Canonical Correlation Analysis\footnote{\url{https://en.wikipedia.org/wiki/Canonical_correlation}} \citep[CCA,][]{CCA}, rather than using color-color plots to isolate the interesting sources. Our aim is to train the CCA using the object-type classification (\S\ref{sec:classification}) as one of the canonical variables. To this purpose we consider all the sources in the {\it main sample} with a clear object-type identification and attach a numerical label as follows:
\begin{itemize}
\item Label = -1: for the non-active galaxies;
\item Label = 0: for the stars;
\item Label = 2: for the spectroscopically confirmed QSOs with $z<2.5$;
\item Label = 3: for the spectroscopically confirmed QSOs with $z>2.5$.
\end{itemize}
This subset represents our {\it training sample}, and we use the numerical label as the first canonical variable\footnote{Canonical variables are obtained from input variables by means of a linear transformation. Since the numerical label is 1-dimensional it is by definition proportional to a canonical variable.}. The actual values of the numerical labels are rather arbitrary (up to constant scale factors and offsets); we simply found a better separation when the label for the stars sits in the middle between inactive galaxies and QSO sources. Then we arranged the magnitudes discussed in \S\ref{sec:classification} in a matrix with as many rows as the number of sources, and as many columns as the available magnitude estimates, and applied the CCA procedure between this matrix and the numerical label discussed above. The output of the CCA procedure is a linear transformation matrix which can be multiplied by the magnitude matrix to obtain a new, 1-dimensional coordinate (hereafter named \texttt{CCA}{}), representing the canonical variable associated with the magnitude estimates. The CCA procedure ensures that the \texttt{CCA}{} coordinate has the highest possible correlation with the numerical label, compatible with the data available in the training set. The \texttt{CCA}{} coordinate for the sources in the {\it main sample} is shown in Fig.~\ref{Fig:cca1} (upper panel) as a function of the $i$ magnitude. Stars align across a rather narrow horizontal stripe at \texttt{CCA}{}~$\sim$~0, while the non--active galaxies occupy the lower part of the plot. Confirmed low-$z$ ($z < 2.5$) QSOs are spread throughout the whole \texttt{CCA}{}--$i$ plane, but QSOs with $z>2.5$ cluster in the upper right corner, hence we expect new (i.e. not yet identified) QSOs at $z>2.5$ to be located in the same region.
Then, we used the same transformation matrix used above to estimate the \texttt{CCA}{} coordinate for the sources in the {\it unknown} sample, obtaining a \texttt{CCA}{} value representative of the source object types (as was the case for sources in the training set). We started our analysis by considering the sources with a reliable magnitude in all the above mentioned bands ($u$, $v$, $g$, $r$, $i$ and $z$ from Skymapper, $G$, $BP$ and $RP$ from Gaia, W1, W2 and W3 from WISE, J, H and K from 2MASS), then we proceeded to analyze the sources with incomplete photometric sets (i.e. with some magnitude missing among the 15 listed above). Among the various configurations of bands we first treated the cases with the highest number of sources (allowing us to determine a more robust correlation), then the others with progressively fewer sources. In each iteration we performed a linear fit against the original numerical label in order to renormalize the \texttt{CCA}{} coordinates and always span the same dynamical range for each configuration of photometric bands. In this way we have been able to compute the \texttt{CCA}{} coordinates consistently for all the sources in the {\it main sample}.
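A minimal realization of the training step, based on the CCA implementation of scikit-learn (the array names \texttt{mags}, \texttt{label} and \texttt{mags\_unknown} are ours), could look like:
\begin{verbatim}
import numpy as np
from sklearn.cross_decomposition import CCA

# mags:  (n_sources, n_bands) magnitude matrix of the training set
# label: numerical object-type labels (-1, 0, 2, 3), one per source
cca = CCA(n_components=1)
cca.fit(mags, label.reshape(-1, 1))

cca_train   = cca.transform(mags).ravel()          # training sources
cca_unknown = cca.transform(mags_unknown).ravel()  # unknown sample
\end{verbatim}
In practice one such fit is performed per configuration of available bands, and the resulting coordinates are linearly rescaled against the labels so that all configurations share a common dynamical range, as described above.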
\begin{figure*}[hbt!]
\epsscale{1}
\plotone{cca-selection1.png}\\
\plotone{cca-selection2.png}\\
\plotone{cca-selection3.png}\\
\caption{The \texttt{CCA}{}--$i$ mag. plane for the subsamples considered in this work. Upper panel: sources in the {\it main} sample for which a reliable type identification is available (\S\ref{sec:cca}). Stars are identified by gray ``+'' symbols, inactive galaxies by black cross symbols, low-$z$ ($<2.5$) QSOs with purple ``+'' symbols, high-$z$ ($>2.5$) QSOs with filled circles. The redshift for the confirmed QSOs with $z_{\rm spec} >2.5$ are shown with the color code shown in the colorbox in the upper left corner. The inset on the left shows the histogram of the \texttt{CCA}{} coordinate for the stars (gray), galaxies (black), low-$z$ QSOs (purple) and high-$z$ QSOs (blue). Middle panel: sources in the {\it main sample} without an object type identification (gray symbols, \S\ref{sec:cca}). The same sources after excluding extended (\S\ref{sec:extended}) and low-$z$ objects (\S\ref{sec:z_cca}) are highlighted in black, and represents potential high-$z$ QSO candidates. Lower panel: the {\it final} sample of high-$z$ QSO candidates, with the redshift $z_{\rm cca}$ estimated using the procedure described in \S\ref{sec:z_cca}.}
\label{Fig:cca1}
\end{figure*}
Fig.~\ref{Fig:cca1} (middle panel, gray symbols) shows the location of the \nunk{} sources ($16 \%$ of the {\it main sample}) that remain after the removal of the objects with a known object-type identification (stars, galaxies, QSOs). The $z>2.5$ QSOs we are looking for are expected to lie in the region at $\texttt{CCA} \gtrsim 1$, but they are still confused in an overwhelming cloud of extended, inactive galaxies and low-$z$ AGN (shown in the upper panel of Fig.~\ref{Fig:cca1} with black ``x'' and purple ``+'' symbols, respectively). It is therefore necessary to further distill our $z>2.5$ candidates by selecting against extended objects and low-$z$ sources.
\subsubsection{Excluding extended objects}
\label{sec:extended}
Since we are looking for bright high-$z$ QSOs we expect them to have a point-like appearance. In order to test whether an object in the {\it unknown} sample is spatially extended we have taken advantage of the comparison between the PSF and Petrosian magnitudes reported in the Skymapper catalog. The latter are supposed to be similar to the former only for point-like sources, while capturing more flux with respect to the PSF magnitudes for extended sources. In order to quantify such a difference we divided the whole {\it main sample} into bins of 0.1 mag and adopted the median of the differences between the PSF and the Petrosian magnitude as a reference value within each bin. Then, for each source, we interpolated the reference values corresponding to its PSF mags and computed the quantity:
\begin{equation}
\label{eq:ext_z}
\sigma_{x,{\rm extd}} =
\frac{x_{\rm psf} - x_{\rm petro} - \langle x_{\rm psf} - x_{\rm petro} \rangle_{\rm ref}}{\sqrt{\sigma^2_{x_{\rm psf}} + \sigma^2_{x_{\rm petro}}}}
\end{equation}
where $x$ denotes the band under consideration, and $\sigma_{x}$ is the associated magnitude uncertainty\footnote{Note that in this equation $z$ refers to the magnitude in the $z$ band, not to the redshift.}. We computed this quantity in both the $i$ and the $z$ bands and considered the average $\sigma_{\rm extd} = (\sigma_{i,{\rm extd}} + \sigma_{z,{\rm extd}}) / 2$ as an estimate of the significance of the object being extended. The histogram of the values of $\sigma_{\rm extd}$ for the objects in the {\it main sample} with an available object-type identification is shown in Fig.~\ref{Fig:etended_z}. Almost all confirmed QSOs with $z>2.5$ have $\sigma_{\rm extd} < 3$, hence we considered this value as a threshold to distinguish point--like sources from extended sources. In this way we discarded 135,238 {\it bona fide} extended sources from the \nunk{} objects of the {\it unknown} sample ($81 \%$).
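A possible implementation of this statistic for a single band (array names are ours) is the following sketch:
\begin{verbatim}
import numpy as np

def sigma_extd_band(psf, petro, psf_err, petro_err, width=0.1):
    # Median PSF-Petrosian offset in bins of PSF magnitude, then the
    # normalized residual of each source against that reference.
    diff = psf - petro
    edges = np.arange(psf.min(), psf.max() + width, width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(psf, edges) - 1, 0, len(centers) - 1)
    med = np.array([np.median(diff[idx == k]) if np.any(idx == k)
                    else np.nan for k in range(len(centers))])
    ok = ~np.isnan(med)
    ref = np.interp(psf, centers[ok], med[ok])
    return (diff - ref) / np.hypot(psf_err, petro_err)

sigma_extd = 0.5 * (sigma_extd_band(i_psf, i_petro, i_psf_err, i_petro_err)
                    + sigma_extd_band(z_psf, z_petro, z_psf_err, z_petro_err))
point_like = sigma_extd < 3.0   # sources kept as point-like
\end{verbatim}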
\begin{figure}[hbt!]
\epsscale{1.1}
\plotone{extended_iz.png}
\caption{Histogram of the $\sigma_{\rm extd}$ quantity (Eq.~\ref{eq:ext_z}) for all the sources in the {\it main} sample. The threshold at $\sigma_{\rm extd}=3.0$ is shown with a vertical dashed line. Sources above this threshold are assumed to be spatially extended and discarded.}
\label{Fig:etended_z}
\end{figure}
\subsubsection{Excluding probable low-$z$ ($z<2.5$) sources}
\label{sec:z_cca}
To estimate the redshift of the sources in the {\it main} sample we used again a CCA transformation, this time using the spectroscopic redshifts of the subsample of confirmed point-like QSOs (\S\ref{sec:extended}) as a training set, and following the same procedure used to calculate the \texttt{CCA}{} coordinate. The comparison between $z_{\rm cca}$ and $z_{\rm spec}$ for the confirmed QSOs with $\sigma_{\rm extd} < 3$ is shown in Fig.~\ref{Fig:zcca} (upper panel). The scatter in the $z_{\rm cca}$ estimates is $\sim$~0.36.
\begin{figure}[hbt!]
\includegraphics[width=0.41\textwidth]{Zcca_cmp1.png}
\includegraphics[width=0.41\textwidth]{Zcca_contam.png}
\caption{Upper panel: the $z_{\rm cca}$--$z_{\rm spec}$ correlation (scatter: $\sim$~0.36). Lower panel: low-$z$ ($<2.5$) {\it ``contamination''} as a function of the adopted $z_{\rm cca}$ threshold and of the high-$z$ ($>2.5$) {\it ``completeness''} (dashed, dot--dashed and dot--dot--dashed lines). We chose a $z_{\rm cca}$ threshold of 2.27 corresponding to a high-$z$ completeness of 95\% and an expected low-$z$ contamination in our {\it final} sample of $\sim$~44\%.}
\label{Fig:zcca}
\end{figure}
Then we estimated the CCA redshift (hereafter $z_{\rm cca}$) using the resulting transformation matrix for the whole {\it unknown} sample. To distinguish a low-$z$ ($z<2.5$) from a high-$z$ ($z>2.5$) source we calculated the following quantities:
\begin{itemize}
\item {\it Low-$z$ ``contamination''}: ratio of the number of low-$z$ sources over the number of sources with $z_{\rm cca}$ above a given threshold;
\item {\it High-$z$ ``completeness''}: ratio of high-$z$ sources with $z_{\rm cca}$ above a given threshold, over the total number of high-$z$ confirmed QSOs.
\end{itemize}
A plot of these quantities, for all the possible values of the $z_{\rm cca}$ threshold, is shown in Fig.~\ref{Fig:zcca} (lower panel, black solid line): a high-$z$ {\it ``completeness''} of 95\% can be reached with a threshold at $z_{\rm cca} = 2.27$, corresponding to a low-$z$ {\it ``contamination''} of 44\% (dot-dashed blue line). Increasing the high-$z$ {\it ``completeness''} to 99\% (blue dot-dot-dashed) would yield a much higher contamination, while decreasing it to 90\% (green dashed line) would yield only a small improvement in contamination. Hence, we chose $z_{\rm cca} = 2.27$ as the discriminating threshold to select against low-$z$ QSO candidates.
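The threshold choice can be reproduced by scanning candidate thresholds on the confirmed QSO sample; a sketch with our own variable names:
\begin{verbatim}
import numpy as np

def contamination_completeness(z_cca, z_spec, thr, z_cut=2.5):
    # Both arrays refer to the spectroscopically confirmed QSOs.
    sel = z_cca > thr
    n_sel = max(sel.sum(), 1)
    contamination = np.sum(sel & (z_spec < z_cut)) / n_sel
    completeness = (np.sum(sel & (z_spec >= z_cut))
                    / np.sum(z_spec >= z_cut))
    return contamination, completeness

# highest threshold that still keeps >= 95% high-z completeness
for thr in np.arange(1.5, 3.0, 0.01):
    c, q = contamination_completeness(z_cca, z_spec, thr)
    if q < 0.95:
        break
    best = (thr, c, q)
\end{verbatim}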
\subsubsection{The final QSO candidate sample}
\label{sec:candidate_sample}
By discarding the extended and low-$z$ sources from the \nunk{} objects of the {\it unknown} sample, we are left with 11,598 potential QSO candidates (black cross symbols in the middle panel of Fig.~\ref{Fig:cca1}). Besides excluding a significant fraction of sources in the {\it unknown} sample (93\%), the above procedure allowed us to obtain a better separation of the remaining sources in the \texttt{CCA}{}--$i$ mag plane, resulting in an increased contrast between the peaks above and below \texttt{CCA}{} $\sim$~1 in the histogram on the left of the middle panel in Fig.~\ref{Fig:cca1}, and suggesting that a threshold on the \texttt{CCA}{} value might allow us to exclude the non--QSO sources. As discussed above, the group at \texttt{CCA}{}~$>$~1 is likely associated with high-$z$ QSOs, while the group at \texttt{CCA}{} $<$~1 is associated with stars and inactive galaxies.
Therefore we discarded all the sources with \texttt{CCA}{} $<$~1 to obtain a {\it final} sample of \nfinal{} high-$z$ QSO candidates. The lower panel of Fig.~\ref{Fig:cca1} shows the location of such candidates in the \texttt{CCA}{}--$i$ mag. plane, and their expected redshift (color-coded, as calculated in \S\ref{sec:z_cca}).
As a consistency check, we note that all the known QSOs with $z>2.5$ (blue line in the histogram, both in upper and lower panel of Fig.~\ref{Fig:cca1}) lie above the adopted \texttt{CCA}{} threshold, as expected.
The number of DR14Q and Veron sources with $z>2.5$ in our main sample is 68. Considering that the Skymapper footprint is $\sim$8.3 times larger than the previously surveyed area, we extrapolate $\sim 564$ new QSOs with $z>2.5$ in our {\it QSO candidate} sample. Given the size of the QSO candidate sample (\nfinal{} sources), we expect a lower limit of $\sim$~40\% on the success rate for high-$z$ ($>2.5$) QSO identification. Indeed, the fraction of new high-$z$ ($>2.5$) QSOs spectroscopically confirmed among the candidates we could observe (\S\ref{sec:observations}) is $\sim$~80\%.
At this stage we can also compute the fraction of DR14Q and Veron QSOs with $z>2.5$ and $i < 18$ satisfying all the conditions to be selected by our procedure, $93\%$, and use it as an indication of the completeness of our QSO sample. The $7\%$ of known QSOs that were lost are sources with a predicted CCA redshift below our threshold of $z_{\rm cca} = 2.27$ (\S\ref{sec:z_cca}).
\section{Spectroscopic confirmations}
\label{sec:observations}
In order to validate the above-described selection criteria (and test variants), we have carried out extensive spectroscopic observations at Las Campanas Observatory and at the ESO-NTT telescope at La Silla. A first pilot study was carried out at the Magellan telescopes in 2018 using LDSS-3 (Clay Telescope) and IMACS (Baade Telescope). Observations were obtained on various nights during bright time and in variable weather conditions. With LDSS-3 the VPH-all grism was used with the 1''~central slit and no blocking filter, covering a wavelength range between 4000 and 10000~{\AA} at a low resolution of $R \sim 800$.
With IMACS, we used the \#300 grism with a blaze angle of 17.5$^\circ$, covering a wavelength range between 4000 and 10000~{\AA} with a dispersion of 1.34~\AA/pixel. Based on these first results, the selection technique has been adjusted in order to include candidates at higher redshift.
In February 2019 we were awarded 2 nights at the du Pont telescope to validate the optimized criteria, and we observed several new candidates with the Wide Field CCD (WFCCD) blue grism that covers a wavelength range between 3700 - 8000 {\AA} providing a 2 \AA /pixel dispersion.
The NTT spectroscopic campaign has been carried out during the ESO observing period P103 under the proposal 0103.A-0746 (PI. A. Grazian). Three nights of spectroscopy have been executed during 27-30 April 2019. The EFOSC2 instrument was used, equipped with the grism \# 13 (wavelength range $\lambda\sim 3700-9300$ {\AA}). Since our main targets are relatively sparse in the sky, we carried out long-slit spectroscopy with exposure times between 3 and 7 minutes per object.
Finally, in June 2019 we performed a few exposures at TNG (La Palma) using the Low Resolution Spectrograph (Dolores) with the LR-B grism (resolution $\sim$~600), a 1" slit aperture and an exposure time of 10 minutes per object, in order to validate our selection criteria against low-$z$ AGNs (\S\ref{sec:z_cca}).
In total we observed \nOBSfin{} sources from our {\it final} QSO candidate sample (\S\ref{sec:candidate_sample}) of \nfinal{} sources. Among these sources, \nFINconf{} turned out to be genuine high-$z$ QSOs with $z>2.5$, 12 are low-$z$ QSOs with $z<2.5$ and 1 is a star. The details of the candidate observations are summarized in Tab.~\ref{tab:ObservCandidates}, while Fig.~\ref{Fig:i_vs_z} shows the redshift-$i$ magnitude plane of the newly discovered QSOs (red circles) and the known QSOs before this work (black cross symbols). So far, we achieved a success rate in identifying new high-$z$ QSOs sources of $\sim$~80\%. On the other hand, we observed only a small fraction of the total QSO candidate sample (\nOBSfin/\nfinal{} sources, $\sim$~5\%), hence the success rate may be biased by the choice of the most promising candidates for the observations.
In the early phases of the project we experimented with different versions of the selection algorithm and tested its limits and characteristics with the pilot spectroscopic runs and in part of the NTT run.
As a consequence, we also observed sources that do not belong to the {\it final} sample.
For completeness, we report the observation details for these \nADD{} additional sources in Tab.~\ref{tab:ObsNonCandidate}. Among them we found: 2 QSOs at $z>2.5$, 50 low-$z$ QSOs, and 15 non-QSO sources. The two QSOs at $z>2.5$ were not selected in the {\it main} sample because one has an $i$ magnitude fainter than the threshold of $i=18$, while the other has a Skymapper astrometric position differing by more than $0.5''$ from Gaia DR2, probably due to image defects in the Skymapper data, as we verified on Skymapper cutouts.
Further observing runs at the DuPont and NTT telescopes have been approved in order to expand our spectroscopically observed sample. All the details and results of the spectroscopic runs will be described in a future paper.
\begin{figure}
\epsscale{1.25}
\plotone{imag_vs_z.png}
\caption{The redshift-$i$ magnitude plane of the QSOs in the area of the present survey. Black crosses: QSOs known before the present observations; Red filled circles: new spectroscopic redshifts obtained in the present survey.}
\label{Fig:i_vs_z}
\end{figure}
\section{Conclusions}
The aim of the present project was to identify new, bright ($i<$~18) QSOs at relatively high redshift ($z>2.5$) in the Southern Hemisphere with a high success rate. At this stage completeness represented a secondary requirement.
Efficiently finding relatively high-redshift QSOs is a {\it needle in a haystack} task. Our approach has been to take advantage of the large high-quality photometric and astrometric databases provided by Skymapper, WISE, 2MASS and Gaia, in order to remove sources identified with high reliability as contaminants (stars, low-$z$ QSOs and galaxies). Then, with the help of a Canonical Correlation Analysis \citep{CCA} we have selected among the remaining {\it unknown} objects a sample of \nfinal{} $z>2.5$ QSO candidates, whose completeness is also expected to be high ($\magcir 90 \%$ for objects up to $z \sim 5$), estimated on the basis of the number of known QSOs that the method would select.
Indeed, the first campaigns of spectroscopic confirmation have been characterized by a high success rate ($\sim$~81\%), and already at this preliminary stage the number of known bright QSOs in the Southern hemisphere has been significantly increased, as shown in Fig.~\ref{Fig:i_vs_z}. The \nFINconf{} new QSOs (Tab.~\ref{tab:ObservCandidates}) plus 2 more (Tab.~\ref{tab:ObsNonCandidate}) with $z>2.5$ and $i<18$ are now available to the astronomical community for high-resolution spectroscopic follow-up and for the studies of cosmology and fundamental physics described in the introduction.
We are continuing our campaigns of spectroscopic confirmations and at the same time we are exploring other statistical techniques in addition to the CCA analysis to further improve the properties of the selection and extend its range of applicability.
\acknowledgments
We thank Luca Pasquini and Carlos Martins for enlightening discussions.
This work is based on data products from observations made with ESO Telescopes at La Silla Paranal Observatory under ESO programme ID 103.A-0746(A).
The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support of the SkyMapper node of the ASVO have been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth's Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS).
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
We thank Societ\`a Astronomica Italiana (SAIt), Ennio Poretti, Gloria Andreuzzi, Marco Pedani, Vittoria Altomonte and Andrea Cama for the observation support at TNG. Part of the observations discussed in this work are based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
\vspace{5mm}
\facilities{Skymapper, Wise, 2MASS, Gaia, Magellan:Baade (IMACS), Magellan:Clay (LDSS-3), du Pont (WFCCD), TNG (Dolores)}
\vspace{-0.25cm}
\paragraph{Motivation.}
Pedigrees are useful for disease association~\cite{Thornton2007},
linkage analysis~\cite{Abecasis2002}, and estimating recombination
rates~\cite{Coop2008}. Most of these calculations involve the
pedigree likelihood which is formulated using probabilities for
Mendelian inheritance given a graph of the relationships. Since the
known algorithms for computing the likelihood are exponential, there
have been many attempts to speed up the exact likelihood
calculation~\cite{Fishelson2005,Abecasis2002,Sobel1996,Geiger2009,Browning2002,McPeek2002inference,Kirkpatrick2011xx}.
Due to the running-time issue, other statistical methods have been
introduced which perform genome-wide association studies that use a faster
correction for the relationship
structure~\cite{Bourgain2003,Thornton2007,Thornton2010}.
Pedigree reconstruction, introduced by
Thompson~\cite{thompson1985}, is very similar to methods
used for phylogenetic tree reconstruction. The aim is to search the
space of pedigree graphs for the graph that maximizes the likelihood,
which is the probability of the observed data being inherited on the
given pedigree graph. However, the pedigree reconstruction problem
differs from the phylogenetic reconstruction problem in several
important ways: 1) the pedigree graph is a directed acyclic graph
whereas the phylogeny is a tree, 2) while the phylogenetic
likelihood is efficiently computed, the only known algorithms for the
pedigree likelihood are exponential, either in the number of people or
the number of sites~\cite{Lauritzen2003}, and 3) the phylogenetic
likelihood is identifiable~\cite{Thatte2010}, while we demonstrate
that the pedigree likelihood is non-identifiable for the pedigree
graph.
Whether the pedigree likelihood is identifiable for the pedigree graph
is crucial to forensics where relationship testing is performed using
the likelihood on unlinked sites~\cite{Pinto2010}. The scenario is
that an unknown person, $a$, leaves their DNA at the crime scene, and
it is a close match to a sample, $b$, in a database. The relationship
between $a$ and $b$ is predicted, and any relatives of $b$ who fit
the relationship type are under suspicion. Our results indicate
that the number of people who should fall under suspicion might be
larger than previously thought. For example, paternity and
full-sibling testing are both common and very accurate. However,
half-sibling relationships are non-identifiable from avuncular
relationships and from grand-parental relationships with unlinked
sites. As we will see later, for both unlinked and linked sites,
different types of cousin relationships are also non-identifiable,
even with the addition of genetic material from a third related
person. Due to these non-identifiable relationships, a known
relationship between a third person, $c$, and $b$ is not enough
information for conviction without also checking whether there is a
perfect match between the DNA of $c$ and $a$ and whether there is
additional information.
The likelihood is
also used to correct existing pedigrees where relationships are
mis-specified~\cite{McPeek2000,Sun2002,Stankovich2005}. Much of their
success comes from changing relationships that result in zero or
very low likelihoods. Again, the accuracy of these methods will be
affected by the non-identifiable likelihood.
For similar reasons, the accuracy of pedigree relationship prediction~\cite{Stankovich2005} and reconstruction methods~\cite{thompson1985,Kirkpatrick2011b} is greatly
influenced by the likelihood being non-identifiable, since these
methods rely on the likelihood or approximations of it to guide relationship prediction.
The kinship coefficient is known to be non-identifiable for the
pedigree graph~\cite{Thompson1975}. The kinship coefficient is an
expectation over the condensed identity states which describe the
distinguishable allelic relationships between a pair of individuals.
Pinto et al.~\cite{Pinto2010} showed that there are cousin-type pairs
of pedigrees having the same kinship coefficient. However, these
results apply only to \emph{unlinked} sites, a
special case of the \emph{linked} sites.
This work considers the identifiability of pedigrees on \emph{linked} sites.
Thompson~\cite{Thompson1975} provided an early discussion of
this topic. Donnelly~\cite{Donnelly1983}
discovered that cousin-type relationships are non-identifiable if two
pedigrees have the same total number of edges separating the two
genotyped cousins from the common ancestor.
In this paper, we make use of a method by Kirkpatrick and
Kirkpatrick~\cite{Kirkpatrick2011xx} to collapse the original hidden
states of the likelihood HMM into the combinatorially largest
partition which is still an HMM. Using this tool-box, we are able to
show that two pedigrees are non-identifiable if and only if they have
an isomorphism between their collapsed state spaces. We relate this
isomorphism to known results on the non-identifiability of the kinship
coefficient. We introduce a method of removing edges from a pedigree
to obtain a minimal pedigree having the same likelihood. We then show
that two pedigrees that have different minimal sizes must be
identifiable. We connect this notion of removing edges to the
pruning introduced by McPeek~\cite{McPeek2002inference} which is
clearly implementable in polynomial time, and we also introduce a result
stating that pedigrees with discrete non-overlapping generations such
as those obtained from the diploid Wright-Fisher (dWF) model are always
identifiable.
We give several examples of the kinship coefficient and pedigree
likelihood being non-identifiable. We give the only known
non-identifiability example where there are more than two individuals
with data. Finally, we discuss a Bayesian
method for integrating over this uncertainty.
\vspace{-0.25cm}
\section{Background}
\vspace{-0.25cm}
A \emph{pedigree graph} is a directed acyclic graph $P=(I(P),E(P))$
where the nodes are individuals and edges are parent-child
relationships directed from parent to child. All individuals in
$I(P)$ must have either zero or two incoming edges. If an individual
has zero incoming edges, then that individual is a \emph{founder}.
The set of founders for pedigree graph $P$ is $F(P)$.
A \emph{pedigree} is a tuple $\mathcal{P} = (P,s,\chi,\ell)$ where $P$ is
the pedigree graph, the function $s:I(P) \to \{m,f\}$ gives the genders, the set $\chi
\subseteq I(P)$ is the set of individuals of interest, and $\ell:\chi \to
\mathbb{N}$ gives the \emph{names} of the individuals of interest. If
$i \in I(P)$ has two incoming edges, from $p_0(i)$ and $p_1(i)$, then one parent
must be labeled $s(p_{j}(i)) = m$ and the other $s(p_{1-j}(i)) = f$ for some $j \in \{0,1\}$.
The likelihood, $Pr[G~|~\cal{P},\theta]$, is a function of the
genotypes $G$, the recombination rates $\theta$, and the pedigree
$\mathcal{P}$. However, we will abuse notation by referring
to a pedigree by its pedigree graph and writing
$Pr[G~|P,\theta]$. In these instances, the set $\chi$ will be clear
from the context.
Two pedigrees $\mathcal{P}$ and $\mathcal{Q}$ are said to be
\emph{identifiable} if and only if $Pr[G~|~\mathcal{P},\theta] \ne
Pr[G~|~\mathcal{Q},\theta]$ for some values of $G$ and $\theta$. If
$\mathcal{P}$ and $\mathcal{Q}$ are not identifiable, we call them
\emph{non-identifiable}.
Two pedigree graphs, $P$ and $Q$ are \emph{isomorphic} if there exists
a mapping $\phi:I(P) \to I(Q)$ such that $(u,v) \in E(P)$ if and only
if $(\phi(u), \phi(v)) \in E(Q)$. This is an isomorphism of the
pedigree graph rather than of the pedigree, because the genders are
not necessarily preserved by the map $\phi$. From now on, we will
assume that $P$ and $Q$ are not isomorphic.
Two isomorphic pedigrees might have different gender labels,
and they would be identifiable when considering sex-chromosome data. We
restrict our discussion to autosomal data, where these two pedigrees
would be non-identifiable.
\paragraph{The Hidden Markov Model.}
Rather than writing out the cumbersome likelihood equation, we will
define the likelihood by specifying the HMM. For each pedigree
$\mathcal{P}=(P,s,\chi,\ell)$, there is an HMM, and everything in this
section is defined relative to a specific pedigree $\mathcal{P}$. To
specify the HMM, we need to specify the hidden states, the emission
probability, and the transition probabilities. We will begin with the
hidden states.
An inheritance vector $x \in \{0,1\}^n$ has length $n = |E(P)|$. Each
bit, $x_e$, in this vector indicates which grand-parental allele,
maternal or paternal, was inherited along edge $e \in E(P)$.
An \emph{inheritance graph} $R_x$
contains two nodes for each individual in $i \in I(P)$, called $i_0$
and $i_1$, and edges $(p_{j}(i)_{x_e}, i_j)$ for each $(p_{j}(i),i)
\in E(P)$. The sets $\chi_0$ and $\chi_1$ are the paternal and
maternal alleles, respectively, of the individuals of interest. We
will refer to the collective set $\chi_0 \cup \chi_1$ as the
\emph{alleles of interest}. Each node in $R_x$ represents an allele.
The inheritance graph is a forest with each root being a
founder allele. The inheritance vectors are the \emph{hidden states} of
the HMM. Let $\mathcal{H}_P$ be the hypercube of dimension $|E(P)|$;
its vertices represent all the inheritance vectors.
This inheritance graph represents identity-by-descent (IBD) in that
any pair of individuals of interest $i,i' \in \chi$ are IBD if there
exists an inheritance vector $x$ such that one pair of $(i_0,i'_0)$,
$(i_0,i'_1)$, $(i_1,i'_0)$ or $(i_1,i'_1)$ are connected. The
\emph{identity states} are the classes of inheritance vectors inducing
the same partition of the alleles of interest into connected components, namely
$D_x = \{y \in \mathcal{H}_{P} | CC(R_y) = CC(R_x)\}.$
The \emph{transition probabilities} are a function of the per-site recombination rates $\theta = (\theta_1,..,\theta_{T-1})$ for $T$ sites.
Let $X_t$ be the random variable for the hidden state at site $t$.
The probability of recombining from hidden state $x$ to state $y$ at site $t$ is
\vspace{-0.25cm}
\begin{eqnarray}
\label{Xtransition}
Pr[X_{t+1} = y~|~X_{t} = x, \theta] = \theta_t^{H(x,y)}(1-\theta_t)^{n-H(x,y)}
\vspace{-0.25cm}
\end{eqnarray}
where $H(x,y) = |x \oplus y |_1$ is the Hamming distance between the two bit vectors, $\oplus$ indicates the XOR operation, and $|.|_1$ is the $L_1$-norm. In some instances, we may make the $\theta$ implicit, because it is clear from context.
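For concreteness, with inheritance vectors encoded as $n$-bit integers, Eq.~\ref{Xtransition} can be evaluated by the following Python sketch (function and variable names are ours):
\begin{verbatim}
def transition_prob(x, y, theta, n):
    # Transition probability between inheritance vectors x and y at a
    # site with recombination fraction theta; n = number of edges.
    h = bin(x ^ y).count("1")          # Hamming distance H(x, y)
    return theta**h * (1.0 - theta)**(n - h)

# example: n = 4 edges, a single recombination event
p = transition_prob(0b0101, 0b0100, theta=0.01, n=4)  # 0.01 * 0.99**3
\end{verbatim}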
The \emph{emission probability} depends on the data, which is the
genotype random variable $G$. Each individual of interest $i \in
\chi$ has two rows in the genotype matrix which encode, for each
column $t$, the alleles that appear in that individual's genome. For
example, $\{g_{it}^0,g_{it}^1\}$ from the $0$th and $1$st rows for
individual $i$ at site $t$ is the (unordered) set of alleles that
appear in that individual's genome. The data for all the individuals
at site $t$ is an $n$-tuple $g_t = (\{g_{it}^0,g_{it}^1\} | \forall i)$ and $g =
(g_1,...,g_T)$ is the data at all $T$ sites. The pedigree HMM
deconvolves these unordered alleles by considering all possible
orderings of the genotypes when assigning them to the hidden alleles.
Specifically, let $CC(R_x)$ be the connected components of $R_x$.
Then the emission probability at site $t$ is
\vspace{-0.25cm}
\[
Pr[G_t = g_t~|~X_t = x, P] \propto \sum_{\tilde{g}_t} \prod_{c \in CC(R_x)} \mathbf{1}\{n(c,\tilde{g}_t) = 1\} Pr[h(c,\tilde{g}_t)]
\vspace{-0.25cm}
\]
where $\tilde{g}_t$ is the ordered alleles $(g_{it}^0,g_{it}^1)$ that
appear in $g_t$, $n(c,\tilde{g}_t)$ is the number of alleles assigned to
$c$ by $\tilde{g}_t$, and $h(c,\tilde{g}_t)$ is the allele of $\tilde{g}_t$
that appears in $c$. Notice that by definition of the identity
states, $\{D_x | \forall x\}$, $Pr[G_t~|~X_t=x_1] = Pr[G_t~|~X_t=x_2]$ for
all $x_1,x_2 \in D_x$.
This completes the definition of the HMM and the likelihood. Now, our
task is to find pairs of pedigree graphs $(P,Q)$ such that
$Pr[G~|~P,\theta] = Pr[G~|~Q,\theta]$ for all $G$ and $\theta$. We
can do this by considering multiple equivalent HMMs and finding the
``optimal'' HMM that describes the likelihood of interest. Given two
optimal HMMs, we can easily compare their likelihoods for different
values of $G$ and $\theta$.
\paragraph{The Maximum Ensemble Partition.}
In this paper, we will use a method similar to that discussed by
Browning and Browning~\cite{Browning2002} and improved by Kirkpatrick
and Kirkpatrick~\cite{Kirkpatrick2011xx}. This method relies on an
algebraic formulation of the hidden states of the Hidden Markov Model
(HMM) that is used to compute the pedigree likelihood. Specifically,
we can collapse the original hidden states into the combinatorially
largest partition which is still an HMM. From the collapsed state
space (termed the maximum ensemble partition), we can easily see that
certain pairs of pedigrees have isomorphic HMMs and thus identical
likelihoods.
For pedigree $\mathcal{P} = (P,s,\chi,\ell)$, consider a new HMM with
hidden states $Y_t$ in a state space that is defined by a partition,
$m(P) := \{W_1,...,W_k\}$, of $\mathcal{H}_P$, meaning that for all
$i,j$, $W_i \cap W_j = \emptyset$ and $\cup_{i=1}^k W_i =
\mathcal{H}_P$. For the HMM for $Y_t$ to have the same
likelihood as the HMM for $X_t$ the \emph{Markov property} and the
\emph{emission property}, defined next, must be satisfied.
Let the transition probabilities of $Y_t$ be the expectation of $X_t$ as follows, for all $i,j$, and for $x \in W_i$
\vspace{-0.25cm}
\begin{eqnarray}
\label{Ytranstion}
Pr[Y_{t+1}=W_j~|~Y_t = W_i] &=& Pr[X_{t+1} \in W_j~|~X_t= x] \\
&=& \sum_{y \in W_j} Pr[X_{t+1}=y~|~X_t=x].
\vspace{-0.25cm}
\end{eqnarray}
Conditioning on $\theta$ is implicit on both sides of the equation.
The \emph{Markov property} is required for $Y_t$ to be Markovian:
\vspace{-0.25cm}
\[
\sum_{y \in W_j} Pr[X_{t+1}=y~|~X_t=x_1] = \sum_{y \in W_j} Pr[X_{t+1}=y~|~X_t=x_2]
\vspace{-0.25cm}
\]
for all $x_1, x_2 \in W_i$ for all $i$ and for all $W_j$. For more
details, see~\cite{Browning2002,Kirkpatrick2011xx}.
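The Markov property is the classical condition of strong lumpability and can be checked directly on a transition matrix; the following sketch (with our own naming, states indexed $0,\dots,2^n-1$) tests it for a candidate partition:
\begin{verbatim}
import numpy as np

def satisfies_markov_property(T, partition, tol=1e-12):
    # T: (2**n, 2**n) row-stochastic transition matrix of X_t;
    # partition: list of index arrays W_1, ..., W_k covering all states.
    # The property holds iff, for every block W_j, all states within
    # any block W_i have the same total probability of moving into W_j.
    for Wj in partition:
        into_Wj = T[:, Wj].sum(axis=1)  # Pr[X_{t+1} in W_j | X_t = x]
        for Wi in partition:
            if not np.allclose(into_Wj[Wi], into_Wj[Wi][0], atol=tol):
                return False
    return True
\end{verbatim}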
The \emph{emission property} states that the emission probabilities of
$X_t$ impose a constraint on $Y_t$. This constraint is that the
partition $\{W_1,...,W_k\}$ must be a sub-partition of the partition
induced on the hidden states by the emission probabilities: \\
$E_x(P) = \left\{y \in \mathcal{H}_P ~|~ Pr[G_t=g_t~|~X_t=x] = Pr[G_t=g_t~|~X_t=y] ~\forall g_t \right\}.$\\
We call the set $\{E_x(P) | \forall x\}$ the \emph{emission partition} since it partitions the state-space $\mathcal{H}_P$.
It has been shown in~\cite{Kirkpatrick2011xx} that the partition
$\{W_1,...,W_k\}$ which satisfies the Markov property and the emission
property and which maximizes the sizes of the sets in the
partition---i.e.~$\textrm{max}_{i \in \{1,...,k\}} |W_i|$---can be found in
time $O(nk2^n)$, where $n$ is the number of edges and $k$ is a
function of the known symmetries of the pedigree graph, with $k \le 2^n$.
We call this partition the \emph{maximum ensemble partition}.
It turns out that the maximum ensemble partition is unique, making the
derived HMM the unique ``optimal'' representation for the likelihood.
We will exploit this fact to find non-identifiable pairs of pedigrees.
\vspace{-0.25cm}
\section{Methods}
\vspace{-0.25cm} We will define a general criterion under which a pair
of non-isomorphic pedigree graphs have identical likelihoods for all
input data and recombination rates, as well as a
uni-directional, polynomially-checkable criterion whereby we can determine
whether some pairs of pedigrees are identifiable. In the following
section, we will apply these results to investigate when pedigrees are
identifiable, to give several examples where the pedigrees are
non-identifiable, and to suggest a Bayesian solution.
Consider two non-isomorphic pedigree graphs $P$ and $Q$ and their maximum ensemble partitions $m(P)$ and $m(Q)$, respectively. We say that $\psi$ is a \emph{proper isomorphism} if $\psi$ is a bijection from $m(P)$ onto $m(Q)$ such that the following hold:
\begin{description}
\vspace{-0.25cm}
\item[Transition Equality] $Pr[Y_{t+1}^P~|~Y_t^P,\theta] = Pr[\psi(Y_{t+1}^P)~|~\psi(Y_{t}^P),\theta] ~~\forall t$
\item[Emission Equality] $Pr[G_t~|~Y_t^P,P] = Pr[G_t~|~\psi(Y_t^P), Q] ~~\forall t$
\vspace{-0.25cm}
\end{description}
where $Y_{t}^P$ is the random variable for the hidden state for pedigree $P$.
\begin{theorem}
\label{thm:equivalence}
There exists isomorphism $\psi:m(P) \to m(Q)$ satisfying the transition and emission equalities if and only if the likelihoods for $\mathcal{P}$ and $\mathcal{Q}$ are non-identifiable, $Pr[G~|~\theta, P] = Pr[G~|~\theta, Q]$, for all $G$ and $\theta = (\theta_1,...,\theta_{T-1})$ where $T$ is the number of sites and $T \ge 2$.
\end{theorem}
\begin{proof}
($\Rightarrow$) Given a proper isomorphism $\psi:m(P) \to m(Q)$ that satisfies the transition and emission equalities, the likelihoods are necessarily the same, by definition of the Hidden Markov Model.
($\Leftarrow$) Given that the two pedigrees are non-identifiable, we will construct $\psi$. Consider pedigrees $P$ and $Q$. They both have unique maximum ensemble partitions $m(P)$ and $m(Q)$~\cite{Kirkpatrick2011xx}. By the definition of $Pr[G~|~\theta, Q]$, this distribution can be represented by an HMM, called $\mathcal{M}(Q)$, over state-space $m(Q)$. By the equality $Pr[G~|~\theta, P] = Pr[G~|~\theta, Q]$, we know that there is an HMM for $P$, $\mathcal{M}(P)$, with the same transition matrix and emission probabilities as $\mathcal{M}(Q)$. Since $\mathcal{M}(Q)$ has maximum ensemble state-space $m(Q)$, then by uniqueness, there is no other state-space that is as small. By the equality of the two distributions, we know that $\mathcal{M}(P)$ also has maximum ensemble state-space $m(P)$. But since $m(P)$ is the unique maximum ensemble state-space for $\mathcal{M}(P)$, there must be an isomorphism $\psi:m(P) \to m(Q)$ satisfying the transition and emission equalities. \qed
\end{proof}
To apply this method, we need to obtain $m(P)$ and $m(Q)$ and the appropriate proper isomorphism $\psi$. To obtain $m(P)$ and $m(Q)$ we rely on the maximum ensemble algorithm~\cite{Kirkpatrick2011xx}. The proper isomorphism is obtained by examining the transition probabilities of the respective HMMs.
\begin{corollary}
\label{cor:unlinked}
For unlinked sites, $\theta_t = 0.5$ for all $1 \le t \le T-1$, any pedigree graphs $P$ and $Q$ with maximum ensemble partitions of equal size, $|m(P)| = |m(Q)|$, and identical identity states are non-identifiable. (proven in Appendix)
\end{corollary}
We are now in a position to relate non-identifiability on pedigree
HMMs to non-identifiability of an important calculation that relies
on independent sites---the kinship coefficient. The \emph{kinship
coefficient} for a pair of individuals of interest is defined as the
probability of IBD when randomly choosing one allele from each
individual of interest. Let the two individuals of interest be $\chi
= \{a,b\}$. We write the kinship coefficient for $\chi$ as
$\Phi_{I}(P)_{\chi} = \sum_{x} \frac{\eta(x,\chi)}{4} \frac{1}{2^n}$
where $\eta(x,\chi)$ is the number of pairs of alleles of interest
$\chi_0 \cup \chi_1$ sharing the same connected component in $R_x$ and
$\chi_0 \cup \chi_1 = \{\{a_0,b_0\}, \{a_0,b_1\}, \{a_1,b_0\},
\{a_1,b_1\}\}$.
\begin{corollary}
\label{cor:kinship}
For unlinked sites, $\theta_t = 0.5$ for all $1 \le t \le T-1$, given two
non-identifiable pedigree graphs $P$ and $Q$ with two
individuals of interest $\chi=\{a,b\}$, the kinship coefficients are
identical. (proven in Appendix)
\end{corollary}
This last corollary is a uni-directional implication. There are some pairs of pedigrees $P$ and $Q$ for which the kinship coefficient is equal but for which the likelihood is identifiable, see Fig~\ref{fig:example1}.
The final set of results we introduce addresses the question of when pedigrees are identifiable. Since some algorithms use the likelihood to choose the best pedigree graph or relationship type, these results give some guarantees for when those algorithms will make correct decisions.
We wish to show that, under some definition of ``necessary'' edges for some individuals of interest, pedigrees $P$ and $Q$ with different numbers of necessary edges have no proper isomorphism and are, therefore, identifiable. We will relate our definition of a necessary edge to the literature.
We will also establish an even more restricted class of pedigrees in which every pair of pedigrees is identifiable. This is the class of all dWF pedigrees.
For an edge, $e$, in the pedigree, let $\sigma$ be the indicator vector with bits $\sigma_f = 0$ for all $f \ne e$ and $\sigma_e = 1$.
For pedigree $P$ having states $\{W_1,...,W_k\}$, we will define an edge $e \in E(P)$ to be \emph{superfluous} if and only if the following two properties hold
\begin{description}
\vspace{-0.25cm}
\item[1)] $Pr[X_{t+1} = y | X_t = x] = Pr[X_{t+1} = \sigma \oplus y | X_t = \sigma \oplus x]$, for every $y \in W_j$ and $x \in W_i$ and for every $i$ and $j$, and
\item[2)] $Pr[G_{t}| X_t = x] = Pr[G_{t} | X_t = \sigma \oplus x]$ for all $x \in \mathcal{H}_P$.
\vspace{-0.25cm}
\end{description}
Conversely, an edge $e$ is \emph{necessary} if it is not superfluous. For an example, see the edge adjacent to the grand-father in $P'$ of Fig~\ref{fig:example3}.
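As a minimal sketch of this definition, assume inheritance vectors are encoded as integers over $n$ bits, with a transition matrix $T$ over $\mathcal{H}_P$ and a per-state array \texttt{emis[x]} of emission probabilities; the hypothetical helper below checks properties 1) and 2) directly. Note that this direct check enumerates all of $\mathcal{H}_P$ and is therefore exponential in $n$.
\begin{verbatim}
import numpy as np

def edge_is_superfluous(T, emis, e, n_edges):
    flip = 1 << e                       # XOR with sigma flips bit e
    states = range(2 ** n_edges)
    trans_ok = all(np.isclose(T[x, y], T[x ^ flip, y ^ flip])
                   for x in states for y in states)
    # emis[x] is the vector Pr[G_t = g | X_t = x] over genotypes g.
    emis_ok = all(np.allclose(emis[x], emis[x ^ flip]) for x in states)
    return trans_ok and emis_ok
\end{verbatim}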
\begin{lemma}
We say that an edge is \emph{removed} if its bit is set to a fixed value in all the inheritance vectors.
Any superfluous edge can be removed without changing the value of the likelihood. (proven in Appendix)
\end{lemma}
\begin{theorem}
\label{thm:cardinalityofedges}
If pedigrees $P$ and $Q$ have a different number of \emph{necessary} edges, then there is no proper isomorphism and the likelihoods for $P$ and $Q$ are identifiable. (proven in Appendix)
\end{theorem}
In order to connect our definition of superfluous edges to the literature we will reiterate McPeek's formulation of \emph{superfluous} individuals~\cite{McPeek2002inference}. An \emph{individual $i \in I(P)$ is superfluous} if for every pair $\{a,b\} \in \chi$ at least one of the following holds:
\vspace{-0.25cm}
\begin{enumerate}
\item $i \notin A(a) \cup A(b)$ where $A(a)$ is the ancestors of $a$
\item $A(i) \cap \{a,b\} = \emptyset$ and there exists some $c \in I(P) \setminus \{a,b\}$ and $d \in I(P)$ such that for every $e \in \{i\} \cup A(i)$, for every $l \ge 1$, and every directed path $q = (q_0,...,q_l)$ of length $l$ with $q_0 = e$ and $q_l \in \{a,b\}$, we have $c = q_m$ and $d = q_{m+1}$ for some $0 \le m \le l-1$.
\vspace{-0.25cm}
\end{enumerate}
This last condition states that every directed path from $i$ or an ancestor of $i$ to $\{a,b\}$ must pass through directed edge $(c,d)$.
The reason for the definition of superfluous individuals is that it is polynomial-time checkable. If one were to directly check the definition of superfluous edges, one would find it necessary to compute the emission partition and the maximum ensemble state-space, which requires exponential time. Despite this, from the definition of superfluous edges, it is easy to see the operational consequence: edges can be removed from the pedigree. Superfluous edges and superfluous individuals are related as follows.
\begin{lemma}
An individual is \emph{superfluous} if and only if all the edges adjacent to that individual are \emph{superfluous}. (proven in Appendix)
\end{lemma}
Theorem~\ref{thm:cardinalityofedges} tells us that when two pedigrees have a different number of necessary edges they are certainly identifiable. While this criterion is useful if we are interested in a particular pedigree, it does not allow us to draw broad conclusions about a class of pedigrees. Ideally, if we want to integrate over the space of pedigrees, we would want to integrate only over identifiable pedigrees for efficiency of computation.
The class of diploid Wright-Fisher (dWF) pedigrees consists of haploid Wright-Fisher genealogies which are two-colorable, with a color for each gender. These pedigrees have discrete non-overlapping generations, and all the individuals of interest are `leaves' of the genealogy.
\begin{theorem}
\label{thm:dwf}
Let $P$ and $Q$ be two non-isomorphic dWF pedigrees that contain only necessary edges and have individuals of interest $\chi$ labeling the `leaves', i.e.~the individuals with no children.
Then pedigrees $P$ and $Q$ are identifiable. (proven in Appendix)
\end{theorem}
\vspace{-0.25cm}
\section{Examples}
\vspace{-0.25cm}
We will consider several examples. The first is a trio of pedigrees that are non-identifiable with data from unlinked sites. This fact is well known due to their identical kinship coefficients. However, these three pedigrees are identifiable with data from linked sites. The second example is an extension of the well-known non-identifiable cousin-type relationships. In this example, we extend the relationship from two to three individuals of interest and show that the relationships remain non-identifiable. To the best of our knowledge, this is the first example of non-identifiable pedigrees on more than two individuals of interest.
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.7]{example1.pdf}
\end{center}
\caption{{\bf Half-siblings, grand-parent-grand-child, and avuncular relationships are identifiable.} Individuals are drawn as boxes, if male, and circles, if female. The individuals of interest are $\chi=\{a,b\}$. Alleles are drawn as disks with a line between the allele and the parent it was inherited from. For each edge, numbered $e \in \{1,...,5\}$, the binary value $x_e$ in the inheritance vector indicates which parental allele was chosen for that hidden state, where zero indicates the paternal allele, drawn as the leftmost of the two alleles. The numbers labeling the edges indicate in which order the bits appear in their respective vectors. These three relationships have identical kinship coefficients. The likelihoods of these relationships are identifiable given data on linked sites.
}
\label{fig:example1}
\end{figure}
\paragraph{Half-Sibling, Avuncular, and Grandparent-Grandchild Relationships.}
The first example we will consider is the well-known trio of pedigrees where the kinship coefficient is identical: half-sibling, avuncular, and grandparent-grandchild relationships. There are two individuals of interest, $a$ and $b$ for whom we have data. These three relationships are drawn in Fig~\ref{fig:example1}.
The maximum ensemble partitions for these three pedigrees are $\{ W_1^P =\{00,11\}, W_2^P = \{01,10\}\}$ for the half-siblings, $\{W_1^R = \{00,01\},$ $W_2^R = \{10,11\}\}$ for the grand-parent-grand-child, and, for the avuncular relationship:
\vspace{-0.25cm}
\begin{eqnarray*}
W_1^Q = \{00000, 01010, 00101, 01111, 10000, 11010, 10101, 11111\} \\
W_2^Q = \{00001, 01011, 00100, 01110, 10010, 11000, 10111, 11101\} \\
W_3^Q = \{00010, 00111, 01000, 01101, 10001, 10100, 11011, 11110\} \\
W_4^Q = \{00011, 00110, 01001, 01100, 10011, 10110, 11001, 11100\}
\vspace{-0.25cm}
\end{eqnarray*}
To get the transition probabilities, we need to sum Equation~\ref{Xtransition} as in Equation~\ref{Ytranstion}. Since for the first two pedigrees, $P$ and $R$, there are only two states, we need only compute the transition probability for one state (the others are obtained by observing that the transition probabilities sum to one). For pedigree $P$, we have
\vspace{-0.25cm}
\[ Pr[Y_{t+1}^P = W_1^P~|~Y_t^P = W_1^P] = (1-\theta_t)^2 + \theta_t^2 = 2\theta_t^2 - 2\theta_t + 1.
\vspace{-0.25cm}
\]
For pedigree $R$,
\vspace{-0.25cm}
\[ Pr[Y_{t+1}^R = W_1^R~|~Y_t^R = W_1^R] = (1-\theta_t)^2 + \theta_t(1-\theta_t) = 1-\theta_t.
\vspace{-0.25cm}\]
It is evident that there is no proper isomorphism that has transition equality for pedigrees $P$ and $R$. For pairs $P,Q$ and $R,Q$ there is no proper isomorphism, because all three pedigrees contain only necessary edges and both $|m(P)| \ne |m(Q)|$ and $|m(R)| \ne |m(Q)|$.
So, we conclude that these pedigrees are identifiable as long as the number of sites satisfies $T \ge 2$ and $\theta_t < 0.5$ for all $1 \le t \le T-1$.
Despite the well-known fact that these three pedigrees have identical kinship coefficients, these pedigrees \emph{are} identifiable when the data is from multiple linked sites. To the best of our knowledge, this paper is the first to prove this simple fact.
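A small numerical check of this conclusion, written with our own variable names, confirms that the two stay-probabilities agree only at $\theta_t \in \{0, 0.5\}$, so no transition-preserving bijection exists for linked sites.
\begin{verbatim}
import numpy as np

theta = np.linspace(0.01, 0.49, 49)     # linked sites only
stay_P = 2 * theta**2 - 2 * theta + 1   # half-siblings
stay_R = 1 - theta                      # grandparent-grandchild
# 2t^2 - 2t + 1 = 1 - t only at t = 0 or t = 0.5, so the two HMMs
# have different transition matrices whenever 0 < theta < 0.5.
assert np.all(np.abs(stay_P - stay_R) > 0)
\end{verbatim}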
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.6]{example3.pdf}
\end{center}
\caption{{\bf Half-cousins and grand-half-avuncular relationships are \emph{non-}identifiable even when there is a third individual of interest.} Pedigree $P$ is derived from pedigree $P'$ by removing the superfluous edge. The two pedigree graphs, $P$ and $Q$ are not isomorphic, yet the likelihoods are non-identifiable, meaning that no amount of data on the individual $a,b$, and $c$ will distinguish these likelihoods.}
\label{fig:example3}
\end{figure}
\paragraph{Half-Cousins and Full-Cousins Relationships.}
To the best of our knowledge, Donnelly~\cite{Donnelly1983} was the first to remark that pairs of pedigrees, either of the half-cousin or of the full-cousin type and having equal numbers of edges, are non-identifiable.
Figure 6 of~\cite{Donnelly1983}
illustrates this situation. Suppose we have two pedigrees $P_{d_a,d_b}$ and $P_{d'_a,d'_b}$ each having two individuals of interest, $\chi=\{a,b\}$ at the leaves, and the most recent common ancestors of $\chi$ have the same relationship type in both pedigrees, either half or full relationships.
Let $d_a$ and $d_b$ be the number of edges or meioses that separate individuals $a$ and $b$ from their common ancestor(s) in pedigree $P_{d_a,d_b}$. Then as long as $d_a + d_b = d'_a + d'_b$, the two pedigrees are non-identifiable.
Donnelly remarked that this means that no amount of autosomal genetic information can distinguish these two pedigrees, ``unless of course information is available on a third person related to both of the individuals in question.'' Figure~\ref{fig:example3} shows that for some third individuals these relationships remain non-identifiable. To the best of our knowledge, this is the first example of a pair of non-identifiable pedigrees each having three individuals of interest.
By Theorem~\ref{thm:equivalence} and Corollary~\ref{cor:kinship} we can show that \emph{both} the pedigree likelihood and the kinship coefficient are non-identifiable for half-cousin-type relationships, see Figure~\ref{fig:example3}. The isomorphism is omitted for space reasons. We believe that a similar result can be obtained for the full-cousin-type relationship. However, the number of edges is large enough that the calculation is difficult due to the exponential-time algorithm.
These examples mean that the likelihood alone is not a practical tool for testing relationships, for inferring pedigrees, or for correcting pedigrees that have relationship errors since the pedigrees under consideration might be non-identifiable.
\vspace{-0.25cm}
\section{A Potential Solution}
\vspace{-0.25cm}
This paper has focused on the likelihood $Pr[G|P,\theta]$,
since it is currently the object being used for relationship testing and pedigree reconstruction. However, a common alternative to the likelihood is the posterior distribution obtained via Bayes rule
\vspace{-0.25cm}
\[
Pr[P|G, \theta] = \frac{Pr[G|P,\theta]Pr[P]}{Pr[G|\theta]} = \frac{Pr[G|P,\theta]Pr[P]}{\sum_Q Pr[G|Q,\theta]Pr[Q]}.
\vspace{-0.25cm}
\]
The utility of this expression is that the posterior $Pr[P|G, \theta]$ will distinguish between non-identifiable pedigrees provided that the prior has the property that $Pr[P] \ne Pr[Q]$ when $P$ and $Q$ are non-identifiable. Indeed, the uniform distribution over dWF pedigrees is such a prior. Taking care with the zero-probability pedigrees which do not occur under the dWF model, we suggest a refinement. Let $W$ be the set of all dWF pedigrees, and let $\bar{W}$ be the pedigrees which are not dWF. Then, let $Pr[P] = 1/(|W|+1)$ for $P\in W$, and for an arbitrary ordering $Q_1,...,Q_{|\bar{W}|}$ with $Q_i \in \bar{W}$, let $Pr[Q_i] = (1/2^i)/ (Z(|W|+1))$ where $Z = \sum_{i=1}^{|\bar{W}|} 1/2^i$. Since the number of non-dWF pedigrees is countably infinite, we can approximate $Z$ by its limit $Z=1$.
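The following is a sketch of how this prior could be evaluated, assuming the size of the dWF class is known and that non-dWF pedigrees are indexed in some arbitrary fixed order; the function name and interface are ours.
\begin{verbatim}
def pedigree_prior(dwf_count, nondwf_index=None):
    # Uniform mass 1/(|W|+1) on each dWF pedigree; the remaining
    # 1/(|W|+1) is spread geometrically over non-dWF pedigrees,
    # using the limit Z = 1 of the geometric series.
    W1 = dwf_count + 1
    if nondwf_index is None:            # the pedigree is dWF
        return 1.0 / W1
    return (0.5 ** nondwf_index) / W1   # i-th non-dWF pedigree
\end{verbatim}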
Now that we have a prior, the challenge of using the posterior is that the partition function, the denominator $Pr[G|\theta]$, is most certainly intractable. This is because there are an exponential number of pedigrees and the likelihood algorithm has exponential run-time for each pedigree.
The intractability of the partition function points to the use of sampling methods; in particular, the Metropolis-Hastings Markov chain Monte Carlo (MCMC) approach might be well suited to this problem.
Indeed, MCMC facilitates computing the proposed prior, because we can simply take the $Q_i$ in the order that they are encountered by the Markov chain.
If we obtain a sample pedigree $P^\tau$, we can draw a new pedigree $P^{\tau+1}$ by proposing a pedigree $Q$ according to a proposal distribution $q[Q|P^\tau]$ and then accepting it, $P^{\tau+1} = Q$, with probability
\vspace{-0.25cm}
\[
\textrm{min} \left\{1, \frac{Pr[G|Q,\theta]Pr[Q]}{Pr[G|P^\tau,\theta]Pr[P^\tau]} \frac{q[P^\tau|Q]}{q[Q|P^\tau]}\right\}
\vspace{-0.25cm}\]
otherwise $P^{\tau+1} = P^\tau$ remains unchanged. The sequence $P^1, P^2,...,P^\tau$ is guaranteed to converge to the stationary distribution $Pr[P^\tau|G,\theta]$. After convergence at time-step $\tau$, take $\delta$ pedigree samples
$\{P^{\tau},P^{k+\tau},...,P^{\delta k+\tau}\}$ where $k$ is the number of steps between samples. Those samples can yield information about the posterior distribution, such as the confidence for each edge. One could also take the most probable pedigree that was sampled, and treat that as the estimated pedigree.
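A bare-bones sketch of this sampler, in which the likelihood, the prior, the proposal sampler and the proposal density are all assumed to be supplied, is:
\begin{verbatim}
import random

def metropolis_hastings(G, theta, likelihood, prior, propose, q,
                        P0, steps):
    # likelihood(G, P, theta), prior(P), propose(P) and the proposal
    # density q(Pnew, Pold) are user-supplied callables.
    P, chain = P0, [P0]
    for _ in range(steps):
        Q = propose(P)
        num = likelihood(G, Q, theta) * prior(Q) * q(P, Q)
        den = likelihood(G, P, theta) * prior(P) * q(Q, P)
        if den == 0 or random.random() < min(1.0, num / den):
            P = Q                       # accept the proposal
        chain.append(P)                 # otherwise keep P unchanged
    return chain
\end{verbatim}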
The complexity here comes down to three issues: first, the likelihood calculation, which is exponential; second, the prior on the pedigrees, which might be tailored to a specific set of pedigrees having positive probabilities, i.e.~those containing particular ``known'' edges; and third, calculating the proposal distribution, which should be tractable and propose only positive-probability pedigrees. The latter is critical, because MCMC methods will not converge if they repeatedly propose zero-probability events. This can probably be overcome by using moves inspired by the phylogenetic prune and re-graft method. As yet, all these details are an open problem.
As an alternative to integrating over the whole space of pedigrees, if we have a single pedigree of which we are fairly confident, we could use this method to integrate over `nearby' pedigree graphs to get a measure of our confidence in our chosen pedigree. We could use Theorem~\ref{thm:cardinalityofedges} as a guide to integrate only over a set of pedigrees all having the same number of necessary edges while giving a zero prior to all other pedigrees. Such an approach might even be computationally feasible due to the polynomial-time checkable definition of necessary edges. This would allow us to incorporate into our calculations the uncertainty we have about our chosen pedigree relative to its non-identifiable `neighbors'.
\vspace{-0.25cm}
\section{Discussion}
\vspace{-0.25cm}
This paper reviews the pedigrees that were known to be non-identifiable, namely the half-cousin-type and full-cousin-type relationships. It also introduces a troubling new pair of non-identifiable pedigrees that are also half-cousin-type pedigrees but which contain three individuals of interest. This is the first discussion of non-identifiable pedigrees with genetic data available for more than two individuals, demonstrating that non-identifiability is not restricted to pedigrees having two individuals with data.
We introduce a general criterion that can be used to detect non-identifiable pedigrees. We show how non-identifiable likelihoods relate to non-identifiable kinship coefficients. An example is given showing that the kinship coefficient can be identical while the likelihood is sufficient to distinguish the pedigrees. Finally, we show that a broad class of pedigree pairs, namely those with different numbers of necessary edges, are identifiable, and the necessary edges can be identified in polynomial time. We also introduce a class of pedigrees, i.e.~diploid Wright-Fisher genealogies, which are provably identifiable.
In order to effectively deal with non-identifiable pedigrees, we can use Bayes rule to obtain the posterior as a function of the likelihood and the prior. Some mild conditions on the prior mean that the posterior will distinguish among the potential pedigrees. The class of dWF pedigrees provides such a prior. Furthermore, we could use Theorem~\ref{thm:cardinalityofedges} as a guide to integrate over the uncertainty we have about a pedigree structure.
\section{Introduction}
The current standard model of cosmology was recently confirmed analysing the precise cosmic microwave background (CMB)
radiation data measured by the Planck satellite~\citep{Planck18}.
The 6-parameter cosmological model that best fits the observational data, from CMB radiation and other cosmological observables, is the flat-$\Lambda$CDM model, where the 3-dimensional space is Euclidean (i.e., zero spatial curvature, $k = 0$), and the universe is filled with cold dark matter (CDM) and dark energy, in the form of a cosmological constant $\Lambda$, in addition to the standard baryonic and electromagnetic ingredients~\citep{Planck18,Planck16}.
One distinctive feature of the Planck measurements is the high angular resolution, which has allowed the measurement of the CMB temperature fluctuations at scales $\sim 4'$ (i.e., $\ell \sim 2500$).
This made it possible to reconstruct, by an inverse procedure, the lensing potential map responsible for the weak lensing action of matter on the CMB photons' paths from the last scattering surface to us~\citep{Planck15-XV,Jia15,GAM,Bianchini}.
In fact, the CMB photons detected by Planck are already weakly lensed by the matter they encounter along their paths, and the prediction of this phenomenon in the standard model is a tiny smoothing effect on the CMB acoustic peaks at very small angular scales (i.e., $\ell \stackrel{>}{_{\sim}} 1000$).
Since the observed CMB photons cannot be delensed, the solution adopted by the Planck collaboration for the CMB data analyses is to include the weak lensing effect in the model that best fits the CMB angular power spectrum (APS). In practice, this solution assumes that the amplitude of the weak lensing effect on CMB photons is, in the
flat-$\Lambda$CDM model, $A_{L} = 1$.
The precise measurements of the Planck CMB temperature-temperature (TT) APS at small angles have stimulated
accurate statistical analyses about the suitability of the lensing amplitude parameter $A_{L} = 1$,
as a way to validate the $\Lambda$CDM concordance model.
As a result, some works have reported discordance with this amplitude.
In fact, previous analyses indicate a clear preference for a higher lensing amplitude, i.e., $A_{L} > 1$
\citep{Calabrese,Bianchini,DES22,Ballardini,Valentino20b,Valentino20a,Planck16}\footnote{See table 9 on page 32 of
https://arxiv.org/pdf/1605.02985.pdf}.
More recently, the Planck collaboration reported:
$A_{L} = 1.180 \,\pm\, 0.065$ (68\%, Planck TT,TE,EE+lowE) and
$A_{L} = 1.243 \,\pm\, 0.096$ (68\%, Planck TT+lowE), where the lensing amplitude $A_L$ is varied in the
parameter analysis (see section 6.2 in~\cite{Planck18}).
These results motivate us to perform a detailed
examination of the CMB acoustic peaks sensitive to the lensing phenomenon, i.e., $\ell > 1000$,
looking for a possible excess of lensing power in the TT APS data, a signal that can be accounted for with a
lensing amplitude $A_{L} > 1$, that is, higher than the value $A_{L} = 1$ adopted by Planck's flat-$\Lambda$CDM
best-fit model.
The outline of this work is the following:
In section~\ref{sec2} we describe the Planck data employed in this work.
In section~\ref{sec3} we perform statistical analyses of the spectrum difference (Planck CMB APS minus Planck best-fit
$\Lambda$CDM APS) to investigate:
(i) whether it corresponds to white noise or not;
in the negative case, (ii) whether it has a signature and amplitude that can be explained by an extra lensing amplitude $A_{lens} > 0$,
interpreted as a possible excess of lensing signature not accounted for by the Planck best-fit $\Lambda$CDM APS;
(iii) whether this spectrum difference can be explained by modifying some other cosmological parameters in the APS of
the $\Lambda$CDM model.
In section~\ref{sec4} we discuss our results, and also present our conclusions and final remarks.
\section{Data}\label{sec2}
The most accurate current measurements of the CMB temperature fluctuations are part of the third public data release
of the Planck collaboration~\cite{Planck18}, precise data that allow us to re-examine many interesting features already reported
with diverse CMB data sets~\citep{WHR1,B-HR,Samal,BR12,Aluri12,BOP,Polastri,Aluri16,Aluri17,Vafaei,Goyal,Saha21,Chiocchetta,Saha22}.
Further, CMB products combined with data sets from other cosmological tracers are being used to study models alternative to the flat-$\Lambda$CDM model (see, e.g.~\cite{WHR2,WHR3,Bessa}).
In particular, the large set of data products released by the Planck team offers the possibility of performing exhaustive analyses
to learn more about the features of matter clustering, investigating the small angular scales where the weak lensing phenomenon
has left its signature imprinted. In fact, the Planck CMB data are especially accurate at small angular scales due to their high angular
resolution~\citep{Marques20a,Marques20b,Dong,Avila,Tanseri}.
Thus, we shall analyse in detail the small angular scales of the CMB TT APS, which is part
of the third data release of the Planck collaboration~\cite{Planck18}. Our main goal is to study whether the difference between the measured Planck APS and the best-fit $\Lambda$CDM APS (based on the 6-parameter flat-$\Lambda$CDM model)
from the data-fit analyses done by the Planck team is just statistical noise, or whether it indeed has some signature indicative of an incorrect modeling of the weak lensing amplitude (i.e., $A_L \ne 1$), or of other cosmological parameters.
Clearly, the null hypothesis that deserves investigation establishes that the difference between the observed Planck APS
and the $\Lambda$CDM APS is just white noise.
\vspace{0.3cm}
For our analyses, some features are important to consider.
Firstly, the possible signal would be significantly detected only at the angular scales with the smallest
measurement errors, that is, in the interval $\ell \sim 1000 - 2000$;
at the same time, one knows that these scales correspond to the regime where the lensing phenomenon has a large impact on
the CMB APS.
Secondly, for these analyses we use the original unbinned data from the Planck repository pages; our analyses shall therefore consider
several binning schemes besides the $\Delta \ell = 30$ case considered by the Planck team.
In our scrutiny, the released Planck CMB TT APS, the Planck APS~\footnote{COM$_{-}$PowerSpect$_{-}$CMB-TT-full$_{-}$R3.01.txt},
covers the range of multipoles $\ell = 2-2508$, where the multipoles of interest here, $\ell \ge 30$, were derived from the
cross-half-mission likelihood Planck, Plik (for details see ref.~\cite{Planck18-V}).
The Planck APS was derived from the Commander component-separation algorithm applied to the combination of Planck 2018 temperature
data between 30 and 857 GHz, including 86\% of the sky (Planck 2018 results VI); the associated $1\sigma$ errors include beam uncertainties.
The analysis was done by optimally combining the spectra in the frequency range $100-217$ GHz, and correcting them for unresolved
foregrounds using the best-fit foreground solution (regions were chosen so as to avoid the areas dominated by noise).
For the best-fit $\Lambda$CDM CMB APS, to be subtracted from the Planck APS for the aim of our analyses, we use the 6-parameter
flat-$\Lambda$CDM minimal model, also released by the Planck team and hereafter termed the $\Lambda$CDM
APS~\footnote{COM$_{-}$PowerSpect$_{-}$CMB-base-plikHM-TTTEEE-lowl-lowE-lensing-minimum-theory$_{-}$R3.01.txt}.
This $\Lambda$CDM APS uses the Planck TT, TE, EE+lowE+lensing data.
Both power spectra, the Planck APS and $\Lambda$CDM APS, were publicly released by the Planck collaboration\footnote{
\url{https://archives.esac.esa.int/doi/html/data/astronomy/planck/Cosmology.html}
}.
\section{Methodology and data analyses}\label{sec3}
This section describes the methodological approach we shall follow to study the Planck data APS, specifically concerning the
possibility that the lensing amplitude parameter that best fits the APS data is larger than 1, $A_L > 1$, whereas the value
used by the Planck collaboration to obtain the flat-$\Lambda$CDM model is $A_L = 1$.
\subsection{Methodology to investigate the spectrum difference}\label{sec3.1}
We want to investigate whether there is an excess of lensing power in the TT APS measured by the Planck Collaboration,
$C^{\mbox{\footnotesize Planck}}_{\ell}$, with 1$\sigma$ standard deviation $\sigma^{\mbox{\footnotesize Planck}}_{\ell}$,
with respect to the flat-$\Lambda$CDM APS, $C^{\Lambda\mbox{\footnotesize CDM}}_{\ell}$,
obtained through a best-fit procedure of the 6-parameter flat-$\Lambda$CDM cosmological model to the TT Planck APS data~\citep{Planck18}.
For this, we first define the spectrum difference:
\begin{equation}\label{Dobs}
\delta^{obs}_{\ell} \equiv \frac{C^{\mbox{\footnotesize Planck}}_{\ell} - C^{\Lambda\mbox{\footnotesize CDM}}_{\ell}}
{C^{\Lambda\mbox{\footnotesize CDM}}_{\ell}} \,.
\end{equation}
Notice that the $\Lambda$CDM APS, $C^{\Lambda\mbox{\footnotesize CDM}}_{\ell}$, takes into account the CMB lensing phenomenon with $A_L = 1$, which is the value assumed by the Planck Collaboration in the flat-$\Lambda$CDM model.
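As a minimal sketch, the spectrum difference can be computed directly from the two released files; the column layout assumed below should be checked against the file headers. Note that the ratio in equation (\ref{Dobs}) is the same whether one uses $C_\ell$ or $\mathcal{D}_\ell = \ell(\ell+1)C_\ell/2\pi$, since the prefactor cancels.
\begin{verbatim}
import numpy as np

planck = np.loadtxt("COM_PowerSpect_CMB-TT-full_R3.01.txt")
lcdm = np.loadtxt("COM_PowerSpect_CMB-base-plikHM-TTTEEE-lowl-"
                  "lowE-lensing-minimum-theory_R3.01.txt")
ell = planck[:, 0].astype(int)        # assumed columns: ell, D_ell, error
D_planck, err = planck[:, 1], planck[:, 2]
D_lcdm = np.interp(ell, lcdm[:, 0], lcdm[:, 1])
delta_obs = (D_planck - D_lcdm) / D_lcdm   # equation (1)
err_delta = err / D_lcdm                   # standard error propagation
\end{verbatim}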
We define the symbol $A_{lens} \ge 0$
as the parameter that quantifies the excess of lensing amplitude needed to explain the residual $\delta^{obs}_{\ell}$.
We construct synthetic APS, $C^{syn,L}_{\ell}(A_{lens})$,
with the Planck 2018 flat-$\Lambda$CDM cosmological parameters but for arbitrary lensing amplitude
$A_{lens} \ge 0$.
We also define
\begin{equation}\label{Dsyn}
\delta^{\mbox{\,\footnotesize exc}}_{\ell}(A_{lens}) \,\equiv\, \frac{ C^{\mbox{\,\footnotesize syn},L}_{\ell}(A_{lens})
- C^{\mbox{\,\footnotesize syn},uL}_{\ell} } {C^{\mbox{\,\footnotesize syn},uL}_{\ell}} \,,
\end{equation}
where
\begin{eqnarray}
C^{\Lambda\mbox{\footnotesize CDM}}_{\ell} &=& C^{\mbox{\,\footnotesize syn},L}_{\ell}(A_{lens} = 1) \, , \\
C^{\mbox{\,\footnotesize syn},uL}_{\ell} &=& C^{\mbox{\,\footnotesize syn},L}_{\ell}(A_{lens}=0) \,,
\end{eqnarray}
where the superscripts $L$ and $uL$ mean {\em lensed} and {\em unlensed} APS, respectively.
The quantity $\delta^{\mbox{\,\footnotesize exc}}_{\ell}(A_{lens})$ measures the possible excess of lensing power, represented by the relative difference of synthetic lensed and unlensed APS (a difference
that depends on the parameter $A_{lens}$; clearly, the case
$A_{lens} = 0$ implies
that $\delta^{\mbox{\,\footnotesize exc}}_{\ell} = 0$).
Then, a good statistical agreement between the excess quantity $\delta^{\mbox{\,\footnotesize exc}}_{\ell}$ and the observed difference
$\delta^{\mbox{\,\footnotesize obs}}_{\ell}$, measured with the reduced $\chi^2$ best-fit for some $A_{lens} > 0$ value,
will provide an explanation for $\delta^{\mbox{\,\footnotesize obs}}_{\ell}$ as being an excess of lensing power in the Planck APS not accounted for
by the $\Lambda$CDM lensing amplitude $A_{L} = 1$.
Of course, other interpretations for $\delta^{\mbox{\,\footnotesize obs}}_{\ell}$ would be possible, for instance that it is just statistical noise, and for this reason it is interesting to examine them too. In order to evaluate whether the spectrum difference $\delta^{\mbox{\,\footnotesize obs}}_{\ell}$ corresponds to white noise, we shall apply the Ljung-Box test \citep{ljungbox}.
Our analyses include the simulation of APS, from now on termed synthetic APS, which considers the modeling of
the weak lensing phenomenon on CMB photons for cases with $A_{lens} = 0$ (unlensed) and $A_{lens} \ne 0$ (lensed).
The details for the production of these synthetic APS are the following:
\begin{itemize}
\item The Boltzmann code \texttt{CLASS} \citep{class} was modified to allow the code to consider
the lensing amplitude $A_{lens}$ as a parameter.
\item
We use this modified code with the cosmological parameters corresponding to the observed Planck TT APS, as detailed in
table~\ref{table1}, to produce synthetic lensed and unlensed $\Lambda$CDM TT APS, $C^{\mbox{\,\footnotesize syn},L}_{\ell}$
and $C^{\mbox{\,\footnotesize syn},uL}_{\ell}$, respectively (see the sketch after this list).
\begin{table*}
\caption {The cosmological parameters used to generate the synthetic angular power spectra.} \label{table1}
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
parameters & $\Omega_b h^2$ & $\Omega_c h^2$ & $100 \theta_{MC}$ & $n_s$ & $\ln(10^{10}A_s)$ & $\tau$\\
\hline \hline
$\Lambda$CDM & 0.022383 & 0.12011 & 1.040909 & 0.96605 & 3.0448 & 0.0543 \\
\hline
\end {tabular}
\end{center}
\end{table*}
\item We then vary the value of the lensing amplitude $A_{lens}$ and construct the quantity $\delta^{\mbox{\,\footnotesize exc}}_{\ell}(A_{lens})$.
\end{itemize}
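Although our spectra come from the modified \texttt{CLASS} code described above, an equivalent sketch can be written with \texttt{CAMB}, whose Python interface natively exposes a lensing-amplitude parameter \texttt{Alens}; the parameter values follow table~\ref{table1}.
\begin{verbatim}
import numpy as np
import camb

def synthetic_tt(A_lens, lmax=2508):
    # Lensed TT spectrum with the lensing power rescaled by A_lens;
    # A_lens = 0 is the unlensed case, A_lens = 1 the LCDM case.
    pars = camb.set_params(ombh2=0.022383, omch2=0.12011,
                           cosmomc_theta=1.040909e-2,
                           ns=0.96605, As=np.exp(3.0448) * 1e-10,
                           tau=0.0543, Alens=A_lens, lmax=lmax,
                           lens_potential_accuracy=1)
    cl = camb.get_results(pars).get_cmb_power_spectra(
        pars, CMB_unit="muK")["total"]
    return cl[:, 0]                     # TT column, D_ell convention

def delta_exc(A_lens, lmax=2508):
    # Equation (2); in practice the unlensed run should be cached.
    lensed = synthetic_tt(A_lens, lmax)[2:]    # skip ell = 0, 1
    unlensed = synthetic_tt(0.0, lmax)[2:]
    return (lensed - unlensed) / unlensed
\end{verbatim}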
Moreover, the relative error for $\delta^{obs}_{\ell}$, $error(\delta)_{\ell}$, is calculated by considering the error
of the observed Planck TT APS,
$\sigma^{\mbox{\footnotesize Planck}}_{\ell}$,
and using the standard approach for propagation of uncertainties from equation (\ref{Dobs}).
Our analysis shall consider several binning schemes besides the one adopted by the Planck team.
Thus, after calculating the spectrum difference
for each $A_{lens}$ considered, and the corresponding errors, we have to choose
the bin length $\Delta \ell$, which is the size of the bins into which we divide our $\delta^{obs}_{\ell}$ array.
Each bin will be represented by its mean value with error $\sigma_{\ell} \equiv error(\delta)_{\ell} / \sqrt{\,\Delta \ell}$.
In the last step, after binning the spectrum difference and computing the errors, we compute the $\chi^2$
\begin{equation}
\chi^2(A_{lens}) = \sum\frac{[\delta^{\mbox{\,\footnotesize obs}}_{\ell} - \delta^{\mbox{\,\footnotesize exc}}_{\ell}(A_{lens})]^2}{\sigma^2_{\ell}} \,,
\end{equation}
where $\delta^{obs}_{\ell}$ and $\delta^{exc}_{\ell}(A_{lens})$ are defined in equations (\ref{Dobs}) and (\ref{Dsyn}), respectively.
The sum above is performed over the binned spectrum-difference data $\{ \delta_{\ell}^{\mbox{\,\footnotesize obs / exc}} \}$.
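A sketch of this binning and $\chi^2$ computation, reusing the arrays from the previous sketches; within each bin we average the per-multipole errors before applying the $\sqrt{\Delta\ell}$ reduction, and we divide by the number of bins as an approximation to the reduced $\chi^2$ (both are assumptions on our part).
\begin{verbatim}
import numpy as np

def binned_chi2(delta_obs, delta_exc, err_delta, dl):
    n = (len(delta_obs) // dl) * dl     # drop the incomplete tail bin
    d_obs = delta_obs[:n].reshape(-1, dl).mean(axis=1)
    d_exc = delta_exc[:n].reshape(-1, dl).mean(axis=1)
    sigma = err_delta[:n].reshape(-1, dl).mean(axis=1) / np.sqrt(dl)
    chi2 = np.sum((d_obs - d_exc) ** 2 / sigma ** 2)
    return chi2 / len(d_obs)            # reduced chi^2 per bin
\end{verbatim}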
Finally, the space of parameters is extended to include the
neutrino mass $\sum m_\nu$ and the spatial curvature $\Omega_k$ to explore the possibility that any lensing excess signal can be mimicked by the effects of some of these parameters.
\subsection{Null hypothesis analyses}\label{sec3.2}
In this subsection we shall test whether the set of values $\{ \delta^{obs}_{\ell} \}$ is statistical noise or not,
i.e., we test the randomness of the spectrum difference.
To perform this test we consider the null hypothesis:
\\
\\
\noindent
$H_0$: The spectrum difference, $\{ \delta^{obs}_{\ell} \}$, corresponds to residual statistical (or white) noise.
\\
\\
If $H_0$ is true, then $A_L = 1$ completely accounts for the lensing signal in the observed Planck APS.
In order to examine this hypothesis we shall use the Ljung-Box (LB) test~\citep{ljungbox}, which is a modification
of the Box-Pierce Portmanteau {\bf Q} statistic~\citep[][]{boxpierce}.
The LB test is used to look for correlation in a data series,
determining whether or not there is a remaining signature in the residuals after a forecast model has been fitted to the data.
Basically, the LB test is a useful tool to evaluate the autocorrelation between the data in analysis, and to quantify its
statistical significance.
As a first step, it is necessary to compute the autocorrelation in a given data set
$\{ \delta^{obs}_{i} \}$ in an interval $M = [\ell_{min},\ell_{max}]$ with $N$ data points
\begin{eqnarray} \label{auto}
\rho_k =\frac{\sum^{N-k}_{i=1} \left( \delta^{obs}_{i}- \overline{\delta^{obs}_{i}}\right)\left(\delta^{obs}_{i+k}- \overline{\delta^{obs}_{i}}\right)}{\sum^N_{i=1} \left(\delta^{obs}_{i}- \overline{\delta^{obs}_{i}}\right)^2} \,,
\end{eqnarray}
where $\overline{\delta^{obs}_{i}}$ is the average of all $N$ points in the $M$ interval, $k$ is commonly called the \emph{lag}, and $\rho_k$ is called the lag-$k$ autocorrelation.
Since $\rho_k$ measures the correlation between multipoles separated by $k$, the autocorrelation $\rho_k$ can be used, in principle, to detect non-randomness in the data. However, it is preferable to use tests that consider multiple (sometimes called global or total) correlations across the whole data interval and for several lags jointly, like the LB test~\citep{ljungbox}.
The null hypothesis $H_0$ for this test establishes that the first $h$ lag autocorrelations are jointly zero, i.e.
\begin{eqnarray}
H_0: \rho_1 = \rho_2 = \cdots =\rho_k= \cdots =\rho_h = 0 \,,
\end{eqnarray}
where $h$ is the maximum lag considered in the test. In other words, $H_0$ being true implies that all the analyzed data are uncorrelated and correspond to a white noise signal. The LB statistic is defined by
\begin{eqnarray} \label{q}
Q_h = N(N+2)\sum^h_{k=1} \frac{\rho^2_k}{N-k} \,,
\end{eqnarray}
where $\rho_k$ is the estimated correlation using equation (\ref{auto}).
Thus, the LB test does not consider just a particular lag $k$ but a set of $h$ estimated correlations.
Since $Q_h$ asymptotically follows a $\chi^2$ distribution, to determine the statistical significance of the test
it is compared to a $\chi^2$ distribution with $h'=h-q$ degrees of freedom under the condition \citep{ljungbox}
\begin{eqnarray}
Q_h > \chi^2_{1-\alpha,h'} \,,
\end{eqnarray}
where $q$ is the number of parameters used to fit the observed Planck APS and $\alpha$ is the significance level.
Then, small $p$-values ($p<\alpha$) will imply that
significant correlation exists between the data in the set $\{ \delta^{obs}_{i} \}$ and thus, $H_0$ is rejected in the $M$ interval.
The choice of the $h$ parameter in equation (\ref{q}) requires a more detailed discussion.
Several studies have been performed to define its optimal value.
For instance, empirically, \citet{Ljung1986} suggests $h=5$, \citet{Tsay2010} suggests $h\sim \ln N$, \citet{Hyndman2018}
$h = \min(10,N/5)$, and \citet{Shumway2011} $h=20$. Furthermore, \citet{Hyndman2014} employed simulations to show that
for very large values of $h$ the LB test could lead to unreliable results.
Recently, \citet{hassani2020} also
used simulations to evaluate the optimal value for the number of lags $h$ involved in the LB test.
Their results have shown that for data sets on the order of thousands of points, the optimal values are $h=50$ for $\alpha=0.05$ and $h=25$ for $\alpha = 0.01$.
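In practice the LB statistic need not be coded by hand; for instance, \texttt{statsmodels} provides it, with the fitted parameters removed from the degrees of freedom through \texttt{model\_df}. A sketch, with $h=25$ and $q=7$ as in our analyses:
\begin{verbatim}
from statsmodels.stats.diagnostic import acorr_ljungbox

def lb_pvalue(delta, ell, lmin, lmax, h=25, q=7):
    # Ljung-Box p-value of the residuals in the window [lmin, lmax];
    # model_df = q subtracts the q fitted parameters from the dof.
    window = delta[(ell >= lmin) & (ell <= lmax)]
    out = acorr_ljungbox(window, lags=[h], model_df=q)
    return float(out["lb_pvalue"].iloc[0])
\end{verbatim}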
To perform the LB test we first compute $\{ \delta^{obs}_{\ell} \}$ in different $M$ intervals and apply the test to each interval.
Our results are summarized in table~\ref{LBIntervals}, where $q=7$, $\alpha =0.01$ and, as recommended by~\citet{hassani2020}, $h =25$ were used.
We rerun the test for different values of $h$ mentioned in the previous paragraph and confirm that the results in the table~\ref{LBIntervals} are robust.
The first interval analyzed is $2\leq \ell \leq 2500$, i.e., the complete range of multipoles measured by Planck.
In this interval we found a rejection of the null hypothesis $H_0$ with $p = 0.0$. Then, we applied the test to three different intervals,
$2 \leq \ell \leq 100$, $2\leq \ell \leq 800$ and $2 \leq \ell \leq 1200$, where no rejection of $H_0$ was found.
These results indicate that the CMB power spectrum is well fitted by the cosmological parameters found by the Planck team, and thus the residual $\{ \delta^{obs}_{\ell} \}$ is white noise up to, roughly, $\ell \sim 1200$.
We also considered three more intervals,
$1100\leq \ell \leq 2500$, $1600\leq \ell \leq 2500$ and $2000 \leq \ell \leq 2500$.
The results in these intervals indicate a rejection of the $H_0$ hypothesis at more than $99\%$ CL, i.e.,
$\{ \delta^{obs}_{\ell} \}$ is not white noise and some signal could be hidden in these multipole intervals.
One possibility is that the rejection of $H_0$ is due to the less accurate measurements, reflected in the large error bars, observed at the highest multipoles (especially for
$\ell \gtrsim 2200$).
In order to evaluate this possibility we consider one more interval.
\begin{table}
\caption {Statistical LB analyses to investigate the rejection or acceptance of the null hypothesis $H_0$. The last column gives the percentage of the 100 noisy realizations of the spectrum difference (see text) that repeat the decision of the third column.}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Multipole intervals & $p$-value & Reject $H_0$ & \% repetition \\
\hline \hline
$\ell = [2, 2500]$ & $0$ & Yes & $100 \%$ \\
\hline
$\ell = [2, 100]$ & $6.0346 \times 10^{-1}$ & No & $95 \%$ \\
\hline
$\ell = [2, 800]$ & $1.1739\times 10^{-1}$ & No & $56 \%$ \\
\hline
$\ell = [2, 1200]$ & $1.709739\times 10^{-2}$ & No & $20 \%$ \\
\hline
$\ell = [1100, 2500]$ & $1.043610\times 10^{-14}$ & Yes & $100 \%$ \\
\hline
$\ell = [1600, 2500]$& $3.70592\times 10^{-8} $ & Yes & $94 \%$ \\
\hline
$\ell = [2000, 2500]$ & $1.567727\times 10^{-3}$ & Yes & $59 \%$ \\
\hline
$\ell = [1100, 2200]$ & $2.6334\times 10^{-13}$ & Yes & $100 \%$ \\
\hline
\end{tabular}
\end{center}\label{LBIntervals}
\end{table}
In fact, we also performed the statistical LB analyses in the interval $1100 \leq \ell \leq 2200$, and according to the very small $p$-values obtained, we also confirm the rejection of the null hypothesis $H_0$ in this interval at $> 99\%$ confidence level (CL), as we can see in table \ref{LBIntervals}.
These results confirm a significant correlation among the $\{ \delta^{obs}_{\ell} \}$ and support the hypothesis of the
presence of some structure left in the spectrum difference, even if one does not consider the data with the largest errors
for $\ell > 2200$.
Roughly, the null hypothesis is rejected for $\ell \gtrsim 1000-1200$, precisely at the angular scales where theoretical estimates
indicate that lensing signal starts to be more relevant \citep{Lewis-Challinor}.
So far, we have applied the LB test to the central values of the spectrum difference, i.e., without considering the errors $error(\delta)_{\ell}$. Now, let us investigate the robustness of our results by introducing the measurement errors in the LB test. In order to continue the analyses we construct artificial sets $\{{\widetilde\delta}^{obs}_{\ell}\}$, defined as $\{ {\widetilde\delta}^{obs}_{\ell}\} \equiv \{ \delta^{obs}_{\ell} + R_{\ell} \}$ and generated as follows: for each $\ell$ we draw a random number $R_{\ell}$ from a Gaussian distribution with zero mean and standard deviation $error(\delta)_{\ell}$, which is added to $\{ \delta^{obs}_{\ell}\}$ to obtain $\{{\widetilde\delta}^{obs}_{\ell}\}$.
These sets $\{{\widetilde\delta}^{obs}_{\ell}\}$ are hypothetical points representing simulated spectrum differences within the 1$\sigma$ errors.
We then apply the LB test, in different $M$ intervals, to $100$ simulated spectrum differences $\{{\widetilde\delta}^{obs}_{\ell}\}$ and compute the percentage of repetition of rejection, or non-rejection, of the null hypothesis.
The results are shown in the last column in table \ref{LBIntervals}.
For example, in the first interval $2 \leq \ell \leq 2500$, $100\%$ of the repetitions reject $H_0$, while in the second interval
$95\%$ of the repetitions do not reject $H_0$. The percentage of repetitions of non-rejection of $H_0$ decreases for the next two intervals, $2 \leq \ell \leq 800$ and $2 \leq \ell \leq 1200$, which can be an indication that around $\ell \sim 1000$ some correlation between the data $\{\delta^{obs}_{\ell}\}$ starts to appear. Nevertheless, in all the other cases, the percentage of repetitions of rejection, or non-rejection, of $H_0$ is sufficiently high to reinforce the results shown in the third column of table~\ref{LBIntervals}.
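The repetition percentages of table~\ref{LBIntervals} can be reproduced along the following lines, reusing the \texttt{lb\_pvalue} helper sketched above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def rejection_fraction(delta, err_delta, ell, lmin, lmax,
                       n_sim=100, alpha=0.01):
    # Fraction of noisy realizations delta + R, with R ~ N(0, err),
    # whose LB p-value rejects H0 at significance alpha.
    hits = 0
    for _ in range(n_sim):
        noisy = delta + rng.normal(0.0, err_delta)
        hits += lb_pvalue(noisy, ell, lmin, lmax) < alpha
    return hits / n_sim
\end{verbatim}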
As a conclusion of this subsection, we must point out that there exists strong evidence to reject the null hypothesis
in the interval $1100 \leq \ell \leq 2500$, where lensing effects on CMB photons are more relevant~\citep{Lewis-Challinor}.
Nevertheless, to be conservative, we leave out the last multipoles, i.e., $\ell > 2200$, due to their large error bars,
and will focus our next analyses on the interval $1100 \leq \ell \leq 2200$.
\subsection{Measuring the excess of lensing power}\label{sec3.3}
The first aim of this work was to investigate whether the spectrum difference $\delta^{obs}_{\ell}$, defined in
equation~(\ref{Dobs}), corresponds to statistical noise or not.
Results of the previous subsection showed that for several intervals with $\ell \gtrsim 1000-1200$
there seems to be a rejection of the null hypothesis $H_0$.
Now, our second aim is to explore the possibility that the $\delta^{obs}_{\ell}$ data could be well
reproduced through the $\chi^2$ best-fit analyses, described in subsection~\ref{sec3.1},
by a synthetic APS according to equation~(\ref{Dsyn}),
$\delta^{\mbox{\,\footnotesize exc}}_{\ell}(A_{lens})$, for some value $A_{lens} > 0$.
As illustrative examples for several $A_{lens}$ cases and diverse binning schemes $\Delta \ell$,
we show some plots of these best-fit analyses in figures~\ref{fig1},
\ref{fig2}, and~\ref{fig3}.
The fact that one can find a good fit, i.e., $\chi^2 \simeq 1$ as seen in table~\ref{table3}, of the spectrum difference $\delta^{obs}_{\ell}$
by a synthetic APS with $A_{lens} > 0$ suggests that the lensing amplitude parameter in the Planck APS should
indeed be larger than 1, i.e., $1 + A_{lens} > 1$, meaning that this parameter was underestimated
in the analyses of the APS, $C^{\mbox{\footnotesize Planck}}_{\ell}$, done by the Planck collaboration.
Assuming that the spectrum difference $\delta^{obs}_{\ell}$ carries the signature of the lensing phenomenon,
as suggested by the plots shown in figures~\ref{fig1}--\ref{fig3},
one can find the parameter value $A_{lens}$ of the synthetic APS that best fits these data.
In table~\ref{table3} we display the $\chi^2$ values obtained for different $A_{lens}$
values and diverse $\Delta \ell$ bin lengths.
One notices that in several cases $\chi^2 \simeq 1$ is fully possible, including the $A_{lens} = 0$ case, which
is also considered in our analyses.
This suggests that a more detailed likelihood analysis is in order.
We start by considering the $\chi^2$ procedure, for the synthetic APS that best fits the spectrum difference $\delta^{obs}_{\ell}$,
for a continuous set of $A_{lens}$ values.
As a result, one obtains a $\chi^2$ value for each $A_{lens}$ considered; that is, $\chi^2$ becomes a function of the $A_{lens}$
parameter, $\chi^2 = \chi^2(A_{lens})$, information that can be plotted as curves of $\chi^2$ versus $A_{lens}$, as observed
in figure~\ref{fig4}, where different bin lengths $\Delta \ell$ are also considered.
In table~\ref{table4} we show the numerical values of the intersections between the curves $\chi^2(A_{lens})$
appearing in figure~\ref{fig4}, obtained for different binnings, and the straight line
$\chi^2 = 1$.
According to these results, we conclude that values $A_{lens} \ne 0$ exist for which the spectrum difference
$\delta^{obs}_{\ell}$ is well fitted by $\delta^{exc}_{\ell}(A_{lens})$, for several $\Delta \ell$ choices.
Furthermore, we perform statistical likelihood analyses to find the best-fit $A_{lens}$ values, as shown in table~\ref{table5}, also for several bin lengths.
Observing figure~\ref{fig4}, one can argue that several binnings provide a best fit with reduced $\chi^2 = 1$
for some $A_{lens} = A_{lens}(\Delta \ell)$ values, meaning that a solution with $A_{lens} \ne 0$ exists but is not unique.
Complementing these $\chi^2$ studies, the maximum likelihood analyses shown in figure~\ref{fig5} let us conclude
that $A_{lens} \in [0.10,0.29]$ at 68\%~CL. See also figure~\ref{fig6} for a detailed study of the illustrative case $\Delta \ell = 17$, where it can be seen that $A_{lens} > 0$ at 2.6$\sigma$.
The same results are found for other binning schemes.
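The $\chi^2(A_{lens})$ curves and their intersections with $\chi^2 = 1$ can be scanned as follows, reusing the \texttt{delta\_exc} and \texttt{binned\_chi2} sketches above; the grid is illustrative and the arrays are assumed to be aligned on the same multipole grid starting at $\ell = 2$.
\begin{verbatim}
import numpy as np

mask = (ell >= 1100) & (ell <= 2200)    # window used in the text
A_grid = np.linspace(0.0, 0.5, 101)
chi2_grid = np.array([binned_chi2(delta_obs[mask], delta_exc(A)[mask],
                                  err_delta[mask], dl=17)
                      for A in A_grid])
A_best = A_grid[np.argmin(chi2_grid)]
crossings = A_grid[np.where(np.diff(np.sign(chi2_grid - 1.0)) != 0)[0]]
\end{verbatim}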
\begin{figure}
\centering
\includegraphics[height=3cm, width=8.5cm]{0.2bin63.png}
\includegraphics[height=3cm, width=8.5cm]{al20_bin63.png}
\includegraphics[height=3cm, width=8.5cm]{0.22bin63.png}
\includegraphics[height=3cm, width=8.5cm]{al22_bin63.png}
\caption{Illustrative examples: plots of the spectrum difference $\delta^{obs}_{\ell}$ together
with $\delta^{exc}_{\ell}(A_{lens})$, for the cases $A_{lens} = 0.20$ (first and second panels) and $A_{lens} = 0.22$
(third and fourth panels), for $\ell \in [1100,2000]$ and $\ell \in [1100,2200]$ as indicated in the plots.
In all these plots we consider $\Delta \ell = 63$.
Although the best-fit curves in the first and third panels seem equal,
they are indeed slightly different.
}\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=3cm, width=8.5cm]{0.2test.png}
\includegraphics[height=3cm, width=8.5cm]{al20_bin32.png}
\includegraphics[height=3cm, width=8.5cm]{0.2.png}
\includegraphics[height=3cm, width=8.5cm]{al20_bin40.png}
\caption{Illustrative examples: plots of the spectrum difference $\delta^{obs}_{\ell}$ together with
$\delta^{exc}_{\ell}(A_{lens})$ for $\ell \in [1100,2000]$ and $\ell \in [1100,2200]$ as indicated in the plots.
In all these plots we consider $A_{lens} = 0.20$.
They were obtained for the bin lengths $\Delta \ell = 32$, for the first and the second panels and
$\Delta \ell = 40$ for the third and the fourth panels.
}
\label{fig2}
\end{figure}
\newpage
\begin{figure}
\centering
\includegraphics[height=6cm, width=8.5cm]{al20_delta63.png}
\includegraphics[height=6cm, width=8.5cm]{al22_delta63.png}
\caption{Illustrative analyses comparing the spectrum difference $\delta^{obs}_{\ell}$ and
$\delta^{exc}_{\ell}(A_{lens})$ for $\ell = [1100, 2200]$ and $\ell = [1100, 2000]$ with $\Delta \ell = 63$
for $A_{lens} = 0.20$ (upper two plots) and $A_{lens} = 0.22$ (lower two plots).}
\label{fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=6cm,width=8.5cm]{bin.png}
\caption{$\chi^2$ as a function of $A_{lens}$ for different bin sizes. This plot shows
that, independently of the bin size $\Delta \ell$, the $\chi^2$ function exhibits solutions of $\chi^2 = 1$ with
$A_{lens} \ne 0$, which justifies our search for the value of $A_{lens}$ that best fits the observed data
$\delta^{obs}_{\ell}$.
}
\label{fig4}
\end{figure}
\begin{table}
\caption{Illustrative examples of $\chi^2$ calculations for different values of $A_{lens}$
and several binning choices $\Delta \ell$. Some of these cases can be seen in figures~\ref{fig1},~\ref{fig2},
and~\ref{fig3}.} \label{table3}
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
$A_{lens}$ & $\Delta \ell = 17$ & $\Delta \ell = 32$ & $\Delta \ell = 41$ & $\Delta \ell = 51$ & $\Delta \ell = 63$ \\
\hline \hline
0.00 &1.0363 & 1.2809 & 1.02590 & 0.95802 & 1.63446 \\
\hline
0.10 &0.9414 & 1.0747 & 0.7610 & 0.6416 & 1.0798 \\
\hline
0.20 & 0.9329 & 1.0305 & 0.7094 & 0.5851 & 0.8395 \\
\hline
0.24 & 0.9525 & 1.0561 & 0.7459 & 0.6319 & 0.8274 \\
\hline
0.34 & 1.0565 & 1.2238 & 0.9734 & 0.9150 & 0.9980 \\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{0.5cm}
\subsection{Cosmological parameters dependency}
In the previous section we explored the possibility that the residual structure in the CMB lensing signal can be explained by an excess of lensing amplitude, $A_{lens} > 0$. However, there is also the possibility that such residual structure could be mimicked by variations in other cosmological parameters. The scrutiny of this dependency is important to confirm whether the signal found in the previous section is indeed due to
the lensing effect or whether it also receives contributions from other cosmological parameters.
This analysis is the main objective of this subsection.
As is well known, the APS, $C^{\Lambda\mbox{\footnotesize CDM}}_{\ell}$, is sensitive to the cosmological parameters \citep{Peebles1968,Doroshkevich1978,Wilson1981}. Such sensitivity is inherited by the quantity $\delta^{obs}_{\ell}$ defined in equation (\ref{Dobs}). So far, the model considered was the simplest $\Lambda$CDM with the six parameters listed in table~\ref{table1} plus the lensing amplitude $A_L=1$. Now, the space of parameters is extended to include the neutrino mass $\sum m_\nu$ and the spatial curvature $\Omega_k$. Massive neutrinos slow down the growth of matter perturbations and suppress the clustering process \citep{Bond1980,Lesgourgues2006}, resulting, therefore, in an anticorrelation with the gravitational lensing amplitude. Moreover, the spatial curvature is also correlated with lensing, and thus the CMB lensing signal is sensitive to the neutrino mass and the spatial curvature of the universe.
In order to investigate the space of parameters
($A_{lens}$, $\sum m_\nu$, $\Omega_k$) in the light of the spectrum difference $\delta^{obs}_{\ell}$, the other cosmological parameters were fixed using the best-fit values found from Planck 2018 data~\cite{Planck18} (see table \ref{table1}), and the parameters $A_{lens}$, $\sum m_\nu$, and $\Omega_k$ were allowed to vary. Then we use MCMC techniques to find constraints on these three free parameters, considering flat priors and marginalizing to compute the one-dimensional posterior distributions. Results are presented in
figure~\ref{fig7} and table~\ref{table7}.
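Our posterior exploration follows standard MCMC practice; a sketch with \texttt{emcee} is given below, where \texttt{delta\_exc\_3par} is a hypothetical wrapper returning the binned model residual from a Boltzmann code for a given ($A_{lens}$, $\sum m_\nu$, $\Omega_k$), \texttt{d\_obs\_binned} and \texttt{sigma\_binned} are the binned data arrays, and the flat-prior ranges are illustrative.
\begin{verbatim}
import numpy as np
import emcee

def log_prob(p, d_obs, sigma):
    A_lens, mnu, omk = p
    if not (0.0 <= A_lens <= 1.0 and 0.0 <= mnu <= 1.0
            and -0.1 <= omk <= 0.1):    # flat priors (assumed ranges)
        return -np.inf
    model = delta_exc_3par(A_lens, mnu, omk)  # hypothetical wrapper
    return -0.5 * np.sum((d_obs - model) ** 2 / sigma ** 2)

nwalkers, ndim = 32, 3
p0 = np.array([0.2, 0.1, 0.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(d_obs_binned, sigma_binned))
sampler.run_mcmc(p0, 5000, progress=True)
samples = sampler.get_chain(discard=1000, flat=True)
\end{verbatim}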
In figure~\ref{fig7} the one-dimensional posterior distributions for $\Delta \ell = 17$ and $\Delta \ell =63$ are shown. We can say, in general, that our results are consistent at $68\%$ CL with an almost flat spatial curvature, with $\sum m_\nu < 0.47$ eV, and with a $\sim 20\%$ lensing amplitude excess quantified by the $A_{lens}$ parameter. This $A_{lens}$ excess is still present at around three standard deviations (see table \ref{table7}), even though these analyses considered a larger number of parameters.
All these results are robust when changing $\Delta \ell$, indicating that the neutrino mass and the spatial curvature have only a weak impact on the $\delta^{obs}_{\ell}$ signal.
Therefore their effects are not sufficient to explain the signature in the spectrum
difference $\delta^{obs}_{\ell}$, which is more likely explained by a lensing amplitude $A_{lens} \simeq 0.20$, implying that the lensing amplitude in the Planck CMB APS should be $\sim 20\%$ larger than the value expected in the flat-$\Lambda$CDM model.
\begin{table}
\caption{Numerical values of the intersections between the horizontal line
representing the reduced $\chi^2 = 1$ and the curves of the reduced $\chi^2$ as a function of $A_{lens}$,
for various bin-length cases $\Delta \ell$.
The first and second points in the table below are the values of $A_{lens}$ where $\chi^2 = 1$.
As observed in figure~\ref{fig4}, there is no intersection for the case $\Delta \ell = 32$.
} \label{table4}
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
bin length & first point & second point \\
\hline \hline
$\Delta \ell = 17$ & $0.028675$ & $0.29479$ \\
\hline
$\Delta \ell =32$ & \hspace{0.4cm}-- & \hspace{0.4cm}-- \\
\hline
$\Delta \ell =41$ & $0.02495$ & $0.3305$ \\
\hline
$\Delta \ell =51$ & $0.0023$ & $0.3477$ \\
\hline
$\Delta \ell =63$ & $0.16053$ & $0.29425$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Values of $A_{lens}$ from the likelihood analyses. The last column indicates the confidence level (CL) for having $A_{lens} > 0$ in each binning case.}~\label{table5}
\begin{center}
\begin{tabular}{|l|c|c|c|l|l|l|l|}
\hline
$\!$bin length$\!$ & $\!\!A_{lens}$ at max. likelihood$\!$ & $\!\!$1$\sigma$ interval$\!$ & $\!\!$CL for $A_{lens}\!>\!0$$\!\!$ \\
\hline \hline
$\Delta \ell = 17$ & $0.16036$ & $[0.0993, 0.2213]$ & 2.6$\sigma$\\
\hline
$\Delta \ell =32$ & $0.177477$ & $[0.1164, 0.2384]$ & 2.8$\sigma$\\
\hline
$\Delta \ell =41$ & $0.17567$ & $[0.1136, 0.2376]$ & 2.8$\sigma$\\
\hline
$\Delta \ell =51$ & $0.172072$ & $[0.1090, 0.2350]$ & 2.7$\sigma$\\
\hline
$\Delta \ell =63$ & $0.22702$ & $[0.1630, 0.2910]$ & 3.5$\sigma$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[height=7cm, width=8.5cm]{confidence.png}
\caption{Maximum likelihood analyses, complementing the analyses done in figure~\protect\ref{fig4}, showing at 68\% CL the $A_{lens}$ values that best fit (in the reduced $\chi^2$ sense) the spectrum difference $\delta^{obs}_{\ell}$, that is, $A_{lens} \in [0.10,0.29]$, for several bin sizes.}
\label{fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=7cm, width=8.5cm]{confidence1.png}
\caption{Illustrative example of the $\sigma$ intervals for the $\Delta \ell = 17$ case.
As observed, for this binning case the value $A_{lens} = 0$ (equivalently, $A_{L} = 1$)
is excluded at $2.6\sigma$ (i.e., at 99\% CL).}
\label{fig6}
\end{figure}
\section{Discussions and Final Remarks}\label{sec4}
We now discuss the exhaustive analyses done with the precise Planck APS data and the results obtained investigating the statistical features of the spectrum difference $\delta^{obs}_{\ell}$.
Recent analyses have reported a preference of the lensing amplitude parameter $A_L$ for a value larger than 1, with high statistical significance~\citep{Planck16,Planck18,Bianchini,DES22},
and we want to study whether this result could stem from a statistical artifact or whether it has a physical origin.
Our analyses can be considered complementary, and independent from those done by the Planck collaboration~\cite{Planck18}.
In the previous section we performed a detailed examination of the unbinned CMB APS.
We proved that the spectrum difference $\delta^{obs}_{\ell}$ is not statistical noise;
additionally, we found several multipole intervals, starting at $\ell \gtrsim 1000$, where this result holds
(see table~\ref{LBIntervals}), supporting the crucial conclusion that the spectrum difference is not statistical noise at small scales.
This result suggests that some signal is present in the spectrum difference, and the signature shown in
these data must be an indication of the phenomenon that caused it.
However, degeneracy can also occur, that is, the signature present in the data could be reproduced by more than
one source; for this reason we also explore diverse hypotheses to explain the signature and the amplitude of
the spectrum difference data.
Bearing in mind that, at the scales $\ell \gtrsim 1000$, the acoustic peaks are extremely sensitive to the lensing
phenomenon, the first hypothesis examined was that the signature in the spectrum difference corresponds to an underestimated lensing amplitude in the Planck best-fitting procedure.
As shown in the analyses of section~\ref{sec3.3}, this hypothesis is indeed verified: we find a lensing amplitude deficit of around $20\%$ with respect to the Planck APS fitted assuming the flat-$\Lambda$CDM model with $A_L = 1$.
As a matter of fact, we find that there is an excess of signal in $\delta^{obs}_{\ell}$ (see equation~(\ref{Dobs}))
that is well explained by a $\Lambda$CDM APS with a non-null lensing amplitude parameter $A_{lens} > 0$, with values in the interval
$[0.10,0.29]$ at 68\%~CL.
Moreover, we found several binning schemes that best fit the spectrum difference $\delta^{obs}_{\ell}$,
with reduced $\chi^2 = 1$, with lensing amplitudes $A_{lens} \simeq 0.2$ (see
tables~\ref{table4} and~\ref{table5}).
According to our likelihood analyses, the synthetic APS produced by these best-fit procedures of $\delta^{obs}_{\ell}$,
with the lensing amplitude $A_{lens}$ as a parameter, show that $A_{lens} > 0$, or equivalently $A_L > 1$, with a statistical significance of $\sim 3 \sigma$.
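As an illustration, the following minimal Python sketch reproduces this type of reduced-$\chi^2$ scan and likelihood analysis. The input file names, and the assumption that the lensing correction enters $\delta^{obs}_{\ell}$ linearly through a unit-amplitude template, are ours and do not correspond to the actual analysis pipeline.
\begin{verbatim}
import numpy as np

# Hypothetical inputs (names are ours): binned spectrum difference,
# its 1-sigma uncertainties, and a unit-amplitude lensing template.
delta_obs = np.loadtxt("delta_obs_binned.dat")
sigma = np.loadtxt("delta_obs_errors.dat")
template = np.loadtxt("lensing_template.dat")   # model for A_lens = 1

A_grid = np.linspace(0.0, 0.6, 601)
dof = delta_obs.size - 1

# Reduced chi^2 of the model A_lens * template against delta_obs.
red_chi2 = np.array([np.sum(((delta_obs - A * template) / sigma) ** 2)
                     for A in A_grid]) / dof

# Gaussian likelihood (up to normalisation) and its 68% interval.
like = np.exp(-0.5 * red_chi2 * dof)
like /= np.trapz(like, A_grid)
cdf = np.cumsum(like) * (A_grid[1] - A_grid[0])
lo, hi = np.interp([0.16, 0.84], cdf, A_grid)
print("A_lens =", A_grid[np.argmax(like)], "68% CL:", (lo, hi))
\end{verbatim}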
Additionally, we have also investigated a possible dependency of the spectrum difference $\delta^{obs}_{\ell}$,
in signature and intensity, on some cosmological parameters associated with the lensing amplitude, the neutrino mass, and the spatial curvature. Although the neutrino mass and the spatial curvature can have an impact on the lensing signal, their impact on the $\delta^{obs}_{\ell}$ residual is weak and not enough to reproduce the excess in $A_{lens}$, neither in signature nor in intensity.
Our likelihood analyses for the examination of the cosmological parameters
$A_{lens}$, $\sum m_{\nu}$, and $\Omega_k$ are displayed in figure~\ref{fig7}.
\begin{figure}
\centering
\includegraphics[height=16cm, width=8.5cm]{SeveralParameters.png}
\caption{One-dimensional posterior distributions for $A_{lens}$, $\sum m_\nu$ and $\Omega_k$ for $\Delta \ell=17$ and $\Delta \ell=63$, as illustrative examples.
The $\delta^{obs}_{\ell}$ signal is consistent with $\sum m_\nu < 0.47$ eV, an almost flat spatial curvature, and a $\sim 20\%$ excess in $A_{lens}$. These results show the consistency of the different values that these parameters can assume to explain the spectrum difference $\delta^{obs}_{\ell}$.}
\label{fig7}
\end{figure}
\begin{table}
\caption{Values of $A_{lens}$, $\sum m_\nu$ and $\Omega_k$ at 1$\sigma$ for $\Delta \ell = 17$ and $\Delta \ell = 63$.
The last column indicates the confidence level (CL) for having $A_{lens} > 0$ in each binning case.}~\label{table7}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
$\!$bin length$\!$ & $\!A_{lens}$ $\!$ & \!$\sum m_\nu $\! & $\!\Omega_K\!$ & \!\!CL for $A_{lens}\!>\!0$\\
\hline \hline
$\Delta \ell = 17$ & $0.18^{+0.063}_{-0.063}$ & $<0.47$ eV & $0.0029^{+0.009}_{-0.009}$ & 3$\sigma$\\
\hline
$\Delta \ell =63$ & $0.22^{+0.065}_{-0.065}$ & $< 0.45$ eV & $0.0013^{+0.0071}_{-0.0071}$ & 2.9$\sigma$\\
\hline
\end{tabular}
\end{center}
\end{table}
\section*{Acknowledgements}
We acknowledge the use of data from the Planck/ESA mission, downloaded from the Planck Legacy Archive.
RM and WHR thank CNPq, TWAS and FAPES (PRONEM No 503/2020) for the fellowships and financial support under which this work was
carried out.
AB acknowledges a CNPq fellowship.
We acknowledge that our work made use of the CHE cluster, managed and
funded by the COSMO/CBPF/MCTI, with financial support from FINEP and FAPERJ, and operating
at Javier Magnin Computing Center/CBPF.
\bibliographystyle{mnras}
This paper presents an application and slight extension of the recent work on \emph{barriers} in constrained nonlinear systems, see \cite{DeDona_siam,EsterhuizenPHDThesis,Ester_Lev_arxiv}. Given a pendulum on a cart with the rigid bar replaced by a massless cable, we aim at designing a control law which guarantees that the cable always remains taut. The study of this system may be useful to the investigation of safely controlling overhead cranes where slackness of the cable would result in free-fall of the working mass, which would therefore be uncontrolled, and thus potentially harmful for the system and its environment. Such a system whose dynamics may switch conditionally to an event which is, itself, a function of the state and input, is generally called a \emph{hybrid system} (see e.g. \cite{Mitchell2003,VanDerShaft2000,Lygeros2007}). The reader may also refer to \cite{KissPHDThesis,KLM-pp,KLM-jss} for studies on modeling and trajectory planning of \emph{weight handling equipment}. A similar problem appears in \cite{NicotraNalGor_IFACE2014} where the authors study \emph{tethered unmanned aerial vehicles} in the different perspective of designing a stabilizing feedback controller.
For a constrained nonlinear control system, the \emph{admissible set} is the set of all initial conditions for which there exists a control such that the constraints are satisfied for all time. Under mild assumptions, this set is closed and its boundary consists of two complementary parts. One of them, called the \emph{barrier}, enjoys the so-called \emph{semi-permeability} property \cite{Isaacs} and its construction is done via a minimum-like principle \cite{DeDona_siam,EsterhuizenPHDThesis,Ester_Lev_arxiv}. Our approach to solving the above mentioned problem of the pendulum on a cart is to find this system's admissible set and to guarantee the cable tautness as follows: if the state remains in the admissible set's interior, the control can be arbitrary in some state-dependent constraint set for almost all time and, if the state reaches the barrier, a special control, which we indeed exhibit, needs to be employed in order to keep the cable taut. This admissible set may be interpreted as a \emph{safe set}, or more precisely as a \emph{potentially safe set}.
Note also that we emphasize on systems with \emph{mixed} constraints, i.e. constraints that are functions, in a coupled way, of the control and the state \cite{Clarke_DiPinho,Hestenes,Ester_Lev_arxiv}, the reason being that tautness of the cable, which is expressed by the fact that the tension in the cable remains nonnegative, can be shown to be equivalent to imposing a mixed constraint. Such constraints are by far more complicated than pure state constraints since they are control dependent, with controls that may be discontinuous with respect to time, thus possibly creating jumps on the constraint set.
Admissible sets are strongly related to invariant sets \cite{Chutinan03c,Teel:HybridBook2012} and viability kernels \cite{Aubin,Lygeros99,Tomlin2000,VanDerShaft2000,Kaynama:2012:CVK:2185632.2185644,Mitchell2003,Mitchel2005,Lhommeau:capture:11}. Our approach contrasts with these works by the fact that in place of computing flows or Lyapunov functions or solutions of Hamilton-Jacobi equations over the whole domain, \emph{we reduce the computations to the boundary of the set under study}. The same kind of comparison also holds with barrier Lyapunov functions \cite{Tee2009a}, or barrier certificates \cite{Prajna06}.
The originality of the results of this paper is threefold:
\begin{itemize}
\item the interpretation of the cable tautness / slackness as a mixed constraint may be found in \cite{NicotraNalGor_IFACE2014} but, as already said, with a different stabilization objective.
In this paper, we are interested in the analysis and computation of the associated admissible set, namely the largest state domain where one can find an open-loop control such that the cable remains taut, which is new to the authors' knowledge;
\item the computation of this admissible set by focusing on its boundary strongly contrasts, in spirit, with the various theoretical constructions found in the literature \cite{Aubin,Lygeros99,Tomlin2000,VanDerShaft2000,Kaynama:2012:CVK:2185632.2185644,Mitchell2003,Mitchel2005,Lhommeau:capture:11,NicotraNalGor_IFACE2014} where numerical integration is used to compute flows, each step being simple but the number of steps and iterations exponentially increasing with the dimension of the problem;
\item the necessary conditions used here have been obtained in \cite{Ester_Lev_arxiv} at the exception of the terminal condition called \emph{ultimate tangentiality condition}. This new terminal condition, introduced to overcome a double problem of singularity and nonsmoothness, is essential for the computation of the barrier: the latter equations cannot be integrated without suitable terminal conditions and the ultimate tangentiality condition of \cite{Ester_Lev_arxiv} turned out to be too coarse to obtain a solution.
\end{itemize}
The paper is organised as follows. In Section \ref{sec:Barriers_In_Constrained_Sys} we summarise the main results from \cite{DeDona_siam}, \cite{EsterhuizenPHDThesis} and \cite{Ester_Lev_arxiv} which we present without proofs. In Section \ref{sec:Barrier_for_Pend} we construct the system's barrier. Section \ref{sec:Discussion} provides a discussion of the physical interpretations of the results, and the paper ends with Section \ref{sec:Conclusions} that summarises the conclusions and points out future research.
\section{Barriers in Nonlinear Control System with Mixed Constraints}\label{sec:Barriers_In_Constrained_Sys}
\subsection{Constrained Nonlinear Systems with Mixed Constraints}
The contents of this section is borrowed from \cite{EsterhuizenPHDThesis} and \cite{Ester_Lev_arxiv}, where more details may be found. However, Proposition~\ref{ult-tan-1d-pr} and Theorem~\ref{BarrierTheorem1} of this paper slightly extend the ones of these references. We consider the following nonlinear system with mixed constraints:
\begin{align}
\label{eq:state_space}
& \dot{x} = f(x,u), \\
\label{eq:initial_condition}
& x(t_0) = x_0, \\
\label{eq:input_constraint}
& u \in {\mathcal U}, \\
\label{eq:state_const}
& g_i\big(x(t), u(t)\big) \leq 0 \quad \mathit{a.e.~} t \in [t_0, \infty) \quad i=1,...,p
\end{align}
where $x(t)\in {\mathbb R}^{n}$.
The set ${\mathcal U}$ is the set of Lebesgue measurable functions from $[t_0, \infty)$ to $U$, a given compact convex subset of $\mathbb{R}^{m}$; thus $u$ is a measurable function such that $ u(t) \in U$ for almost all $t\in [t_0, \infty)$.
We denote by $x^{(u,x_0,t_0)}(t)$ the solution of the differential equation~\eqref{eq:state_space} at $t$ with input \eqref{eq:input_constraint} and initial condition~\eqref{eq:initial_condition}. Sometimes the initial time or initial condition need not be specified, in which cases we will use the notation $x^{(u,x_0)}(t)$ or $x^u(t)$ respectively.
The constraints \eqref{eq:state_const}, called \emph{mixed constraints} \cite{Clarke_DiPinho,Hestenes}, explicitly depend both on the state and the control. We denote by $g(x,u)$ the vector-valued function whose $i$-th component is $g_i(x,u)$. By $g(x,u)\prec 0$ (resp. $g(x,u)\preceq 0 $) we mean $g_i(x,u) < 0$ (resp. $g_i(x,u) \leq 0$) for all $i$. By $g(x,u)\circeq 0$, we mean $g_i(x,u) = 0$ for at least one $i$. As said before, even if $g$ is smooth, the mapping $t\mapsto g(x(t),u(t))$ is only measurable and the associated mixed constraints are thus assumed to be satisfied almost everywhere.
\subsection{The Admissible Set}
We define the following sets:
\begin{gather} \displaystyle
G \triangleq \bigcup_{u\in U} \{x\in{\mathbb R}^n : g(x,u)\preceq 0\} \label{def:G}
\\ \displaystyle
G_0 \triangleq \{x \in G : \min_{u\in U} \max_{i\in\{1,...,p\}}g_i(x,u)=0 \}\label{def:G0}
\\ \displaystyle
G_{-} \triangleq \bigcup_{u\in U} \{x\in{\mathbb R}^n : g(x,u) \prec 0\} \label{def:G_-}
\end{gather}
We further assume:
\begin{description}
\item[(A2.1)] $f$ is an at least $C^{2}$ vector field of ${\mathbb R}^{n}$ for every $u$ in an open subset $U_1$ of $\Rset^{m}$ containing $U$, whose dependence with respect to $u$ is also at least $C^{2}$.
\item[(A2.2)] There exists a constant $0 < C < +\infty$ such that the following inequality holds true:
$$\sup_{u\in U}\vert x^T f(x,u) \vert \leq C(1+ \Vert x \Vert^{2} ), \quad \mbox{\textrm{for all}}~x$$
where the notation $x^T f(x,u)$ indicates the inner product of the two vectors $x$ and $f(x,u)$.
\item[(A2.3)] The set $f(x,U)$, called the \emph{vectogram} in \cite{Isaacs}, is convex for all $x\in {\mathbb R}^{n}$.
\item[(A2.4)] $g$ is at least $C^{2}$ from ${\mathbb R}^{n}\times U_1$ to ${\mathbb R}^p$ and convex with respect to $u$ for all $x\in {\mathbb R}^{n}$.
\end{description}
We also introduce the following state-dependent control set:
\begin{equation}
U(x) \triangleq \{u \in U : g(x,u) \preceq 0 \}\quad \forall x \in G.\label{def:U(x)}
\end{equation}
The convexity of $U$ and (A2.4) imply that $U(x)$ is convex for all $x\in G$ and, since $g$ is continuous, the multivalued mapping $x\mapsto U(x)$ is closed with range in the compact set $U$, and therefore upper semi-continuous (u.s.c.) (see e.g. \cite{berge,F}).
We assume that, for every $x\in G$, the set $U(x)$ is locally expressible as
\begin{equation}\label{def:UStraight}
U(x) \triangleq \{u\in{\mathbb R}^m : \gamma_i(x,u)\leq 0, i = 1,\dots,r\}
\end{equation}
the functions $\gamma_{i}$ being of class $C^2$, linearly independent, and convex with respect to $u$ for all $x\in G$.
For a pair $(x,u)\in {\mathbb R}^n \times U$, we denote by ${\mathbb I}(x,u)$ the set of indices, possibly empty, corresponding to the ``active'' mixed constraints:
\begin{equation}\label{IIdef}
{\mathbb I}(x,u) \triangleq \{ i\in \{1,\ldots,r\} : \gamma_{i}(x,u) = 0\}.
\end{equation}
The number $\#({\mathbb I}(x,u))$ of elements of ${\mathbb I}(x,u)$ thus represents the number of ``active'' constraints among the $r$ independent constraints at $(x,u)$. We further assume:
\begin{description}
\item[(A2.5)] For almost all $z$ in a neighborhood of $G_0$ and all $u\in U(z)$ such that $0<\#({\mathbb I}(z,u))$, the (row) vectors
$\frac{\partial\gamma_{i}}{\partial u}(z,u)$, $i \in {\mathbb I}(z,u)$, are linearly independent.
\end{description}
\begin{definition}[Admissible Set]\emph{\cite{Ester_Lev_arxiv}}
\label{def:admiss_states}
We say that a point $x_0$ is \emph{admissible} if, and only if, there exists, at least, one input function $u\in {\mathcal U}$, such that~\eqref{eq:state_space}--\eqref{eq:state_const} are
satisfied:
\begin{equation}\label{eq:Admiss_states}
{\mathcal A} \triangleq \{x_0 \in G: \exists u\in {\mathcal U},~ g\big(x^{(u,x_{0})}(t), u(t)\big) \preceq 0, \mathit{a.e.~} t\}.
\end{equation}
\end{definition}
As in \cite{PBGM} a \emph{Lebesgue} point, for a given control $u\in {\mathcal U}$ is a time $t\in [t_0,\infty)$ where $u$ is continuous, the interval $[t_0,\infty)$ being possibly deprived of a bounded subset of zero Lebesgue measure which does not contain $t$.
If $u_{1}\in {\mathcal U}$ and $u_{2}\in {\mathcal U}$, and if $\tau\geq t_{0}$ is given, the concatenated input $v$, defined by $v(t)= \left\{ \begin{array}{ll} u_{1}(t)&\mbox{\textrm if~} t\in [t_{0}, \tau[\\u_{2}(t)&\mbox{\textrm if~} t \geq \tau\end{array}\right.$ satisfies $v\in {\mathcal U}$. The concatenation operator relative to $\tau$ is denoted by $\Join_{\tau}$, i.e. $v=u_{1}\Join_{\tau} u_{2}$.
Since system (\ref{eq:state_space}) is time-invariant, the initial time $t_0$ may be taken as 0. When clear from the context, ``$\forall t$'' or ``for \emph{a.e} $t$'' will mean
``$\forall t \in [0, \infty)$'' or ``for \emph{a.e.} $t\in [0, \infty)$'', where \emph{a.e.} is understood with respect to the Lebesgue measure.
\begin{proposition}\label{close-cor}\emph{\cite{Ester_Lev_arxiv}}
Under assumptions (A2.1) - (A2.4) the set ${\mathcal A}$ is closed.
\end{proposition}
\begin{remark}\label{rem:A22}
Assumption (A2.2), which implies an at most linear growth of $f$ with respect to $x$, is introduced to guarantee the uniform boundedness and uniform convergence of a sequence of integral curves that appear in the proof of Proposition~\ref{close-cor}. However, this condition is far from being necessary and many systems
do not satisfy it while still having bounded trajectories under admissible controls. Any other condition on $f$ ensuring uniform boundedness (see e.g. \cite{F}) would give similar compactness results.
\\
Assumption (A2.5) is used in the proof of Theorem \ref{BarrierTheorem1} (see Subsection \ref{barrierTh:subsec}). It replaces the stringent independence condition (A4) of \cite{Ester_Lev_arxiv} which is not satisfied in many examples, including the pendulum one of Section \ref{sec:Barrier_for_Pend}.
\end{remark}
We denote by $\partial \mathcal{A}$ the boundary of ${\mathcal A}$ and ${\mathcal A}^{\mathsf C}$ its complement. We indeed have $\partial {\mathcal A} \subset {\mathcal A}$.
\begin{definition}\label{def:barrier}
The set $\left [\DA\right]_{\mathcal{-}} = \partial \mathcal{A}\cap G_-$ is called the \emph{barrier} of the set ${\mathcal A}$.
\end{definition}
It is characterised by the two next Propositions, proved in \cite{Ester_Lev_arxiv}, and the (new) Proposition~\ref{ult-tan-1d-pr} of Subsection~\ref{ult-tan:subsec}.
\begin{proposition}\label{boundary:prop}\emph{\cite{Ester_Lev_arxiv}}
Assume (A2.1) to (A2.4) hold. The barrier $\left [\DA\right]_{\mathcal{-}}$ is made of points $\bar{x}\in G_-$ for which there exists $\bar{u}\in{\mathcal U}$ and an integral curve $x^{(\bar{u},\bar{x})}$ entirely contained in $\left [\DA\right]_{\mathcal{-}}$ either until it intersects $G_0$, i.e. at a point $z = x^{(\bar{u},\bar{x})}(\bar{t})$, for some $\bar{t}$, such that $\min_{u\in U} \max_{i=1,\ldots,p} g_{i}(z,u) = 0$, or that never intersects $G_0$.
\end{proposition}
\begin{proposition}\label{bar-sem-cor}\emph{\cite{Ester_Lev_arxiv}}
Assume (A2.1) to (A2.4) hold. Then from any point on the boundary $\left [\DA\right]_{\mathcal{-}}$, there cannot exist a trajectory penetrating the interior of ${\mathcal A}$ before leaving $G_{-}$.
\end{proposition}
\subsection{Barrier End Point Condition}\label{ult-tan:subsec}
We introduce the notation
\begin{equation}\label{gt:def}
\tilde{g}(x) \triangleq \min_{u\in U} \max_{i = 1,\dots, p} g_i(x,u),
\end{equation}
i.e. $G_0 = \{ x : \tilde{g}(x) = 0 \}$. In \cite{EsterhuizenPHDThesis} and \cite{Ester_Lev_arxiv} it was shown that $\tilde{g}$ is locally Lipschitz (this is a version of Danskin's Theorem, see e.g. \cite{danskin}), and therefore differentiable almost everywhere.
The intersection between $\left [\DA\right]_{\mathcal{-}}$ and $G_0$, if it exists, must satisfy the condition given in the next proposition.
\begin{proposition}\label{ult-tan-1d-pr}
Assume (A2.1) to (A2.4) hold. Consider $\bar{x} \in \left [\DA\right]_{\mathcal{-}}$ and $\bar{u}\in {\mathcal U}$ as in Proposition~\ref{boundary:prop}, i.e. such that the integral curve $x^{(\bar{u},\bar{x})}(t) \in \left [\DA\right]_{\mathcal{-}}$ for all $t$ in some time interval until it reaches $G_{0}$ at some finite time $\bar{t}\geq 0$. Then, the point $z= x^{(\bar{u},\bar{x})}(\bar{t})\in \mathsf{cl}(\left [\DA\right]_{\mathcal{-}})\cap G_{0}$, satisfies
\begin{equation}\label{ineq:ult_tan}
\min_{u\in U(x^{(\bar{u},\bar{x})}(\bar{t}_-))} D\tilde{g}(x^{(\bar{u},\bar{x})}(\bar{t}_-)) f(x^{(\bar{u},\bar{x})}(\bar{t}), u) \geq 0,
\end{equation}
where $D\tilde{g}$ is the gradient of $\tilde{g}$, $h(x(\bar{t}_-))$ indicating the left limit of $h(x(\tau))$, when $\tau \nearrow \bar{t}$ (i.e. with $\tau < \bar{t}$), of an arbitrary function or multivalued mapping $h$, not necessarily continuous.
Moreover, if the point $z$ is a differentiability point of $\tilde{g}$, condition \eqref{ineq:ult_tan} reads
\begin{equation}\label{eq:ult_tan_smooth}
\min_{u\in U(z)} D\tilde{g}(z) f(z, u) = 0
\end{equation}
\end{proposition}
\begin{proof}
\sloppy Let $x_0\in \left [\DA\right]_{\mathcal{-}}$, then there exists a control $\bar{u}\in {\mathcal U}$ such that $\tilde{g}(x^{(\bar{u},x_0)}(t)) < 0$ until $x^{(\bar{u},x_0)}$ intersects $G_0$ at some $\tilde{t}$ that we assume finite. Consider an open set ${\mathcal O} \subset {\mathbb R}^n$ such that $x_0 +\varepsilon h \in {\mathcal A}^{\mathsf C}$, the complement of ${\mathcal A}$, for all $h \in {\mathcal O}$ and $\Vert h\Vert \leq H$, with $H$ arbitrarily small, and all $\varepsilon$ sufficiently small.
Introduce a needle perturbation of $\bar{u}$, labelled $u_{\kappa,\varepsilon}$ as in Appendix \ref{Appendix:var},
where $v \in U(x^{(\bar{u},x_0)}(\tau))$ for all $t\in [\tau-l\varepsilon, \tau[$, at some Lebesgue point $\tau$ of $\bar{u}$ before $x^{(\bar{u},x_0)}$ intersects $G_0$.
Because $x_0 + \varepsilon h \in {\mathcal A}^{\mathsf C}$, $\exists t_{\varepsilon,\kappa,h}<\infty$ at which $x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}(t_{\varepsilon,\kappa,h})$ crosses $G_0$, see Proposition \ref{bar-sem-cor}. As a result of the uniform convergence of $x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}$ to $x^{(\bar{u},x_0)}$, there exists a $\bar{t} \geq \tilde{t}$, s.t. $x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}(t_{\varepsilon,\kappa,h})\rightarrow x^{(\bar{u},x_0)}(\bar{t})$ as $\varepsilon \rightarrow 0$ and, according to the continuity of $\tilde{g}$, we have
$$
\lim_{\varepsilon \rightarrow 0} \tilde{g}(x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}(t_{\varepsilon,\kappa,h})) = 0 = \tilde{g}(x^{(\bar{u},x_0)}(\bar{t})).
$$
Because $\tilde{g}(x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}(t_{\varepsilon,\kappa,h})) = 0$ and $\tilde{g}(x^{(\bar{u},x_0)}(t_{\varepsilon,\kappa,h})) \leq 0$ (recall that $\tilde{g}(x^{(\bar{u},x_0)}(t_{\varepsilon,\kappa,h})) \leq g(x^{(\bar{u},x_0)}(t_{\varepsilon,\kappa,h}), \bar{u}(t_{\varepsilon,\kappa,h})) \leq 0$ since the pair $(x^{(\bar{u},x_0)}(t),\bar{u}(t))$ satisfies the constraints for all $t$), we have that
$$
\tilde{g}(x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}(t_{\varepsilon,\kappa,h})) - \tilde{g}(x^{(\bar{u},x_0)}(t_{\varepsilon,\kappa,h})) \geq 0.
$$
Recall from Appendix \ref{Appendix:var} that
\begin{equation}
\begin{aligned}
x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}(t_{\varepsilon,\kappa,h}) = &x^{(\bar{u},x_0)}(t_{\varepsilon,\kappa,h}) \\ &\,\,+ \varepsilon w(t_{\varepsilon,\kappa,h},\kappa,h) + O(\varepsilon^2)
\end{aligned}
\end{equation}
where $w(t,\kappa,h)$ satisfies \eqref{needle-eq}.
Since $\tilde{g}$ is almost everywhere differentiable, we have:
\begin{equation}\label{genDerOfGTilde}
\begin{aligned}
&\frac{\tilde{g}(x^{(u_{\kappa,\varepsilon},x_0+\varepsilon h)}(t_{\varepsilon,\kappa,h})) - \tilde{g}(x^{(\bar{u},x_0)}(t_{\varepsilon,\kappa,h}))}{\varepsilon} \\ &=D \tilde{g}(x^{(\bar{u},x_0)}(t_{\varepsilon,\kappa,h})) . w(t_{\varepsilon,\kappa,h},\kappa,h)) +O(\varepsilon) \geq 0
\end{aligned}
\end{equation}
for every $v \in U(x^{(\bar{u},x_0)}(\tau))$ and almost every $\varepsilon$ and $h$.
Note that $\tilde{g}(x^{(\bar{u},x_0)}(t)) \prec 0$ for all $t\in ]\bar{t}-\eta,\bar{t}[$, which implies, according to \eqref{gt:def} and (A2.4), that there exist open sets $V(t)$ such that $\mathsf{cl}(V(t)) \subset U(x^{(\bar{u},x_0)}(t))$ and $g(x^{(\bar{u},x_0)}(t),\bar{u}(t)) \preceq g(x^{(\bar{u},x_0)}(t),v) \prec 0$ for all $v\in V(t)$ and all $t\in ]\bar{t}-\eta,\bar{t}[$. Moreover, the multivalued mapping $t\mapsto \mathsf{cl}(V(t))$ may be chosen lower semi-continuous on $]\bar{t}-\eta,\bar{t}[$ (see e.g. \cite{Repov_etal_98} and the survey \cite{Repov_arxiv_14}).
Therefore we can select a continuous selection $v_{\tau} \in \mathsf{cl}(V(\tau)) \subset U(x^{({\bar{u},\bar{x}})}(\tau))$ for $\tau \in ]\bar{t} - \eta, \bar{t}[$ (see again \cite{Repov_arxiv_14}) such that $\lim_{\tau\nearrow \bar{t}} v_{\tau} = v$. Taking the limit as $\tau \nearrow \bar{t}$ \eqref{genDerOfGTilde} becomes:
\[
\min_{v\in U(x^{(\bar{u},\bar{x})}(\bar{t}_-))} D\tilde{g}(x^{(\bar{u},\bar{x})}(\bar{t}_-)) f(x^{(\bar{u},\bar{x})}(\bar{t}), v) \geq 0
\]
hence the result. The last part of the proposition, in the differentiable case, may be found in \cite{Ester_Lev_arxiv}.
\end{proof}
\subsection{The Barrier Theorem}\label{barrierTh:subsec}
The barrier's construction is done according to the following:
\begin{thm}\label{BarrierTheorem1}
\sloppy Assume (A2.1) to (A2.5) hold. Consider an integral curve $x^{\bar{u}}$ on $\left [\DA\right]_{\mathcal{-}} \cap \mathsf{cl}(\mathsf{int}({\mathcal A}))$ and assume that the control function $\bar{u}$ is piecewise continuous. Then $\bar{u}$ and $x^{\bar{u}}$ satisfy the following necessary conditions.
There exists a non-zero absolutely continuous adjoint $\lambda^{\bar{u}}$ and piecewise continuous multipliers $\mu_i^{\bar{u}} \geq 0$, $i=1,\dots,p$, such that:
\begin{equation}\label{costateEquation1}
\begin{aligned}
\dot{\lambda}^{\bar{u}}(t) =-& \left(\frac{\partial f}{\partial x}(x^{\bar{u}}(t),\bar{u}(t)) \right)^T \lambda^{\bar{u}}(t)
\\&\quad \quad - \sum_{i=1}^{p}\mu_i^{\bar{u}}(t)\frac{\partial g_i}{\partial x}(x^{\bar{u}}(t),\bar{u}(t))
\end{aligned}
\end{equation}
with the ``complementary slackness condition''
\begin{equation}\label{eq:complement1}
\mu_i^{\bar{u}}(t)g_i(x^{\bar{u}}(t),\bar{u}(t)) = 0, \quad i=1,\ldots, p.
\end{equation}
Moreover, at almost every $t$, the Hamiltonian, denoted by $\mathcal{H}(x^{\bar{u}}(t),u,\lambda^{\bar{u}}(t)) = \left(\lambda^{\bar{u}}(t) \right)^Tf(x^{\bar{u}}(t),u)$, is minimised over the set $U(x^{\bar{u}}(t))$ and constant:
\begin{equation}\label{HamiltonianMinimised1}
\begin{aligned}
&\min_{u\in U(x^{\bar{u}}(t))} \lambda^{\bar{u}}(t)^T f(x^{\bar{u}}(t),u)\\ &=\min_{u\in U} \left[ \left(\lambda^{\bar{u}}(t) \right)^T f(x^{\bar{u}}(t),u) + \sum_{i=1}^{p} \mu_{i}^{\bar{u}}(t)g_{i}(x^{\bar{u}}(t),u)\right] \\
&= \lambda^{\bar{u}}(t)^T f(x^{\bar{u}}(t),\bar{u}(t))
\end{aligned}
\end{equation}
with the following boundary conditions:
\begin{itemize}
\item If the barrier ends on $G_0$ at a non differentiability point, then at this point the adjoint satisfies
\begin{equation}\label{eq:finalConditions1}
\lambda^{\bar{u}}(\bar{t}) = \left( D\tilde{g}(z_-) \right)^T
\end{equation}
where $z$ and $\bar{t}$ are such that $z= x^{\bar{u}}(\bar{t})\in G_0$ and \eqref{ineq:ult_tan}, namely
$\min_{u\in U(z_-)} D\tilde{g}(z_-) f(z, u) \geq 0$,
where $D\tilde{g}(z_-)$ indicates the left limit of $D\tilde{g}(x^{\bar{u}}(\tau))$, when $\tau \nearrow \bar{t}$.
\item If the barrier ends on $G_0$ at a differentiability point, then at this point the adjoint satisfies
\begin{equation}\label{eq:finalConditions2}
\lambda^{\bar{u}}(\bar{t}) = \left( D\tilde{g}(z) \right)^T
\end{equation}
where $z$ and $\bar{t}$ are such that $z= x^{\bar{u}}(\bar{t})\in G_0$ and \eqref{eq:ult_tan_smooth}, namely
$\min_{u\in U(z)} D\tilde{g}(z) f(z, u) = 0$
\end{itemize}
\end{thm}
\begin{proof} Easy adaptation of the proof of Theorem 5.1 of \cite{Ester_Lev_arxiv}. See also Theorem 3.4.1 of \cite{EsterhuizenPHDThesis}.
\end{proof}
The reader may find a thorough discussion of this result, its limitations and related open problems in \cite{EsterhuizenPHDThesis}.
\section{The Barrier for the Pendulum on a Cart with a Non-Rigid Cable}\label{sec:Barrier_for_Pend}
\subsection{Derivation of Constrained System}\label{subsec:Derivation_Of_Const_Sys}
We consider the system as in Figure \ref{Fig:PendulumOnCart}: a mass $m$ (kg) is attached to the end of a massless cable that may go slack, which is suspended from a cart of mass $M$ (kg) that may move unconstrained along a horizontal line. The control, $u$, is the force (N) applied to the cart, satisfying $|u| \leq 1$. The angle (rd) between the cable and the vertical is $\theta$, $l$ is the length (m) of the cable and $g$ the acceleration due to gravity. The cart's position is $x$, and the coordinates of the mass $m$ are $(y,z)$. As long as the cable is taut, $l$ is constant, $y = x + l\sin \theta$ and $z = l\cos \theta$.
\begin{figure}
\begin{center}
\includegraphics[height=4cm]{Pendulum.eps}
\caption{Pendulum on a cart with non-rigid cable.}
\label{Fig:PendulumOnCart}
\end{center}
\end{figure}
One way to guarantee tautness in the cable is to impose the condition that the cable's tension, $T$, is always nonnegative. Under this assumption the dynamics of the system, obtained via the Euler-Lagrange method, are given by:
\begin{align}
\label{eq:pend1}
& \dot{\theta}_1 = \theta_2 , \\
\label{eq:pend2}
& \dot{\theta}_2 = \frac{-u\cos\theta_1 + (M + m)g\sin\theta_1 - ml\theta^2_2\cos\theta_1\sin\theta_1}{l\left(M + m\sin^2\theta_1\right)} \\
\label{eq:pend3}
&\dot{x}_1 = x_2 \\
\label{eq:pend4}
&\dot{x}_2 = \frac{u + ml\theta^2_2\sin\theta_1 - mg\cos\theta_1\sin\theta_1}{M + m\sin^2\theta_1}
\end{align}
where $x_1 = x$, and $\theta_1 = \theta$. To lighten our notation, we introduce ${\mathbf q} \triangleq (\theta_1,\theta_2,x_1,x_2)$ and ${\boldsymbol \theta} \triangleq (\theta_1,\theta_2)$.
Remark that the dynamics \eqref{eq:pend1}, \eqref{eq:pend2} of ${\boldsymbol \theta}$, where no simplification or approximation of any kind has been made, do not depend explicitly on the cart's position and velocity $(x_1,x_2)$ and that the dynamics of $(x_1,x_2)$ and ${\boldsymbol \theta}$ are only coupled via the force $u$.
We now show that imposing the condition that the tension in the cable remains nonnegative is equivalent to imposing a \emph{mixed} constraint on the system.
By considering the balance of forces on the mass $m$, its projection on the vertical axis is indeed given by $m\ddot{z}= -mg -T\cos\theta_1$ (see for example \cite{JLbook}). Thus $T= - \frac{m \left(\ddot{z}+g\right)}{\cos\theta_{1}} \geq 0$, which is equivalent to:
\begin{equation}\label{eq:balance_of_forces}
-\frac{\ddot{z} + g}{\cos \theta_1} \geq 0.
\end{equation}
Noting that $\ddot{z} = -l\left(\theta_2^2 \cos \theta_1 + \dot{\theta}_2 \sin \theta_1 \right)$, we substitute equation \eqref{eq:pend2} in the latter expression and multiply \eqref{eq:balance_of_forces} by $l\left(M + m\sin^2 \theta_1 \right)$. The inequality then simplifies to the mixed constraint:
\begin{equation}\label{ineq:PendulumMixedConstraint}
u\sin\theta_1 + Mg\cos\theta_1 - Ml\theta_2^2 \leq 0.
\end{equation}
Thus, the problem is to obtain the barrier for the system described by \eqref{eq:pend1}-\eqref{eq:pend4} subject to the constraint on the control, $|u| \leq 1$, and the mixed constraint \eqref{ineq:PendulumMixedConstraint}. We do not consider a constraint on the cart's track length for clarity's sake.
Note that zero tension in the cable results in free-fall but, depending on the cart's trajectory, the cable may remain taut or become slack.
Note that the smoothness and convexity assumptions (A2.1), (A2.3), (A2.4), are indeed satisfied by the right-hand side of \eqref{eq:pend1}--\eqref{eq:pend4} and constraint \eqref{ineq:PendulumMixedConstraint}, and that (A2.2) is only satisfied for bounded $\theta_2$, which is not a real limitation in view of Remark~\ref{rem:A22}.
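To make the model concrete, the dynamics \eqref{eq:pend1}--\eqref{eq:pend4} and the mixed constraint \eqref{ineq:PendulumMixedConstraint} may be coded as follows. This is a minimal Python sketch; the parameter values are illustrative only and the function names are ours.
\begin{verbatim}
import numpy as np

M, m, l, g = 0.1, 0.1, 1.0, 10.0        # illustrative constants

def f(q, u):
    """Right-hand side of the dynamics; q = (th1, th2, x1, x2)."""
    th1, th2, x1, x2 = q
    s, c = np.sin(th1), np.cos(th1)
    d = M + m * s ** 2
    dth2 = (-u * c + (M + m) * g * s
            - m * l * th2 ** 2 * c * s) / (l * d)
    dx2 = (u + m * l * th2 ** 2 * s - m * g * c * s) / d
    return np.array([th2, dth2, x2, dx2])

def h(q, u):
    """Mixed constraint: the cable stays taut while h(q, u) <= 0."""
    th1, th2 = q[0], q[1]
    return u * np.sin(th1) + M * g * np.cos(th1) - M * l * th2 ** 2
\end{verbatim}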
\subsection{Constructing the Barrier}
Recalling the notations ${\mathbf q} = (\theta_1,\theta_2,x_1,x_2)$ and ${\boldsymbol \theta} = (\theta_1,\theta_2)$, we label the mixed constraint ${\mathbf h} (\textbf{q},u) =u\sin\theta_1 + Mg\cos\theta_1 - Ml\theta_2^2$ and so
$$\tilde{{\mathbf h}}(\textbf{q}) = \min_{|u|\leq 1} \mathbf{h}(\textbf{q},u) = -|\sin\theta_1| + Mg\cos\theta_1 - Ml\theta_2^2.$$ Let us assume that barrier trajectories reach the set $G_0= \{{\mathbf q} : \tilde{{\mathbf h}}({\mathbf q}) = 0 \}$, whose projection onto the plane $(\theta_1,\theta_2)$ is shown in Figure \ref{Fig:HTilde_Pendulum}.
Note that the equation $\tilde{{\mathbf h}}({\mathbf q}) = 0$ only has a solution for $\theta_1\in[-\arctan(Mg) + 2k\pi,\arctan(Mg) + 2k\pi]$, $k\in {\mathbb N}$, and that $\tilde{\mathbf{h}}$ is not differentiable if $\theta_1 = 2k\pi$. Also, according to \eqref{def:U(x)}, we have
\begin{equation}\label{eq:U(x)}
U(\textbf{q})
=\begin{cases}
[-1, \frac{Ml\theta_2^2 - Mg\cos\theta_1}{\sin\theta_1}]\cap[-1,1] & \mathrm{if}\,\, \sin\theta_1 > 0\\
[\frac{Ml\theta_2^2 - Mg\cos\theta_1}{\sin\theta_1}, 1]\cap[-1,1] & \mathrm{if}\,\, \sin\theta_1 < 0\\
[-1,1] & \mathrm{if}\,\, \sin\theta_1 = 0.
\end{cases}
\end{equation}
It is easily verified that the independence condition of assumption (A2.5) is met everywhere in a neighborhood of $G_0$, except on $G_0$ itself, where $U({\mathbf q})=\{\pm1\}$ and $\#{\mathbb I}({\mathbf q},u)= 2$. Therefore (A2.5) is satisfied.
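The state-dependent control set \eqref{eq:U(x)} can be evaluated numerically as follows (a sketch building on the previous listing; outside $G$ the returned interval may be empty):
\begin{verbatim}
def U_of_q(q, tol=1e-12):
    """Interval of admissible controls at q."""
    th1, th2 = q[0], q[1]
    s = np.sin(th1)
    if abs(s) <= tol:                        # sin(theta1) = 0
        return (-1.0, 1.0)
    bound = (M * l * th2 ** 2 - M * g * np.cos(th1)) / s
    if s > 0:
        return (-1.0, min(bound, 1.0))       # empty if bound < -1
    return (max(bound, -1.0), 1.0)           # empty if bound > 1
\end{verbatim}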
\begin{figure}[thpb]
\begin{center}
\includegraphics[height=12cm]{Non_Diff_Point2.eps}
\caption{The set $\{\boldsymbol{\theta} : -|\sin\theta_1| + Mg\cos\theta_1 - Ml\theta_2^2 = 0 \}$ for $M = 0.1$, $m = 0.1$, $l = 1$ and $g = 10$, along with some important points. The top figure presents a closer look at the point $(0,\sqrt{\frac{g}{l}})$ and specifies the set $U(\textbf{q})$ in various parts of a neighbourhood of the point.}
\label{Fig:HTilde_Pendulum}
\end{center}
\end{figure}
\subsubsection{Barrier End Points}\label{subsec:Pts_Of_Tan}
\sloppy We first look at points on $G_0$ where $\tilde{{\mathbf h}}$ is differentiable and without loss of generality we will only carry out the analysis for $\theta_1\in[-\arctan(Mg), 0[$.
Invoking equation \eqref{eq:ult_tan_smooth} as well as the final condition \eqref{eq:finalConditions2}, we obtain:
\begin{equation}\label{eq:Ult_Tan_pend}
\begin{array}{l}
\min_{u\in\{ 1 \}} \left(\cos\theta_1 - Mg\sin\theta_1\right)\theta_2 \\
+ 2Ml\theta_2\left(\frac{u\cos\theta_1 - (M+m)g\sin\theta_1 + ml\theta_2^2\cos\theta_1\sin\theta_1}{l\left(M + m\sin^2(\theta_1)\right)}\right) = 0
\end{array}
\end{equation}
where $U(\textbf{q}) = \{1 \}$ since, for $\textbf{q}\in G_0$, $\lim_{(\theta_{1}, \theta_{2}) \rightarrow (-\arctan Mg,0)}\frac{Ml\theta_2^2 - Mg\cos\theta_1}{\sin\theta_1} = 1$. From here we easily identify $(\theta_1^{\bar{u}}(\bar{t}),\theta_2^{\bar{u}}(\bar{t})) = (-\arctan(Mg),0)$, with $x_1^{\bar{u}}(\bar{t})$ and $x_2^{\bar{u}}(\bar{t})$ free, as end points, along with the final adjoint given by \eqref{eq:ult_tan_smooth}:
\begin{equation}
\begin{array}{l}
\displaystyle \lambda^{\bar{u}}(\bar{t}) = \\ \hspace{0.5cm}\displaystyle (\cos\theta_1^{\bar{u}}(\bar{t}) - Mg\sin\theta_1^{\bar{u}}(\bar{t}),-2Ml\theta_2^{\bar{u}}(\bar{t}),0,0)^T.\label{eq:final_costate_smooth}
\end{array}
\end{equation}
Let us show that \eqref{eq:Ult_Tan_pend} does not have another solution for any $\theta_1\in[-\arctan(Mg),0[$. Indeed, $\theta_2 \neq 0$ and we must investigate:
\begin{equation}
\begin{array}{l}
\left(\cos\theta_1 - Mg\sin\theta_1\right) \\
+ 2Ml\left(\frac{\cos\theta_1 - (M+m)g\sin\theta_1 + ml\theta_2^2\cos\theta_1\sin\theta_1}{l\left(M + m\sin^2(\theta_1)\right)}\right) = 0.
\end{array}
\end{equation}
We now substitute $\theta_2^2$ using $\tilde{\mathbf{h}}(\textbf{q}) = 0$, multiply by $l\left(M + m\sin^2(\theta_1)\right)$ and use the identity $m\cos^3\theta_1 - m\cos\theta_1 = -m\cos\theta_1\sin^2\theta_1$ to arrive, after some algebra, at the expression:
\begin{equation}
\begin{array}{l}
-M\cos\theta_1 - Mmg\sin\theta_1\cos^2\theta_1\\
+ M^2g\sin\theta_1 + Mmg\sin\theta_1 - \sin^2\theta_1(m\cos\theta_1) = 0.
\end{array}
\end{equation}
After grouping terms we get:
\[
\left( Mg\sin\theta_1 -\cos\theta_1 \right) \left(M + m\sin^2\theta_1\right) = 0.
\]
Since $\left(M + m\sin^2\theta_1\right)>0$ we get $\theta_1 = \arctan(\frac{1}{Mg}) \notin[-\arctan(Mg),0[$, and so there is no other solution for $\theta_1\in [-\arctan(Mg),0[$.
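This computation can also be double-checked symbolically; the short sympy script below verifies the factorisation above and is given only as a sanity check.
\begin{verbatim}
import sympy as sp

th, M, m, g = sp.symbols('theta M m g', positive=True)
s, c = sp.sin(th), sp.cos(th)
lhs = -M*c - M*m*g*s*c**2 + M**2*g*s + M*m*g*s - s**2*(m*c)
rhs = (M*g*s - c) * (M + m*s**2)
print(sp.simplify(lhs - rhs))   # prints 0
\end{verbatim}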
Along the same lines we deduce that all the points $(\theta_1,\theta_2) = (\pm\arctan(Mg) + 2k\pi,0)$, $k\in {\mathbb N}$, with $x_1$ and $x_2$ free, are the only end points on $G_0$ where $\tilde{{\mathbf h}}$ is differentiable.
We now turn our attention to the point $(\theta_1,\theta_2) = (0,\sqrt{\frac{g}{l}})$ (with $x_1$ and $x_2$ arbitrary) where $\tilde{\mathbf{h}}$ is not differentiable; the analysis will carry over to the points $(\theta_1,\theta_2) = (2k\pi,\pm\sqrt{\frac{g}{l}})$, $k\in {\mathbb N}$, in a similar way. We introduce the following sets:
\[
\mathcal{B} \triangleq \left\{\boldsymbol{\theta} : \theta_1 < 0, \frac{Ml\theta_2^2 - Mg\cos\theta_1}{\sin\theta_1} \geq -1, \tilde{\mathbf{h}}(\textbf{q}) \leq 0 \right\}
\]
\[
\mathcal{C} = \mathcal{B}^{\mathsf{C}} \setminus \{\boldsymbol{\theta} : \tilde{\mathbf{h}}(\textbf{q}) > 0 \}
\]
as in Figure \ref{Fig:HTilde_Pendulum}. From Theorem \ref{BarrierTheorem1}, if a barrier trajectory intersects the point $(\theta_1,\theta_2) = (0,\sqrt{\frac{g}{l}})$, with $x_1$ and $x_2$ arbitrary, at time $\bar{t}$ (without confusion we use the same label for this time instant as was used previously for the analysis of the point $(\theta_1,\theta_2) = (-\arctan(Mg),0)$) then condition \eqref{ineq:ult_tan} holds. If this barrier trajectory approached the point from the set $\mathcal{C}$ then it can be verified that we would get
\[
\min_{u\in U(\textbf{q}^{\bar{u}}(\bar{t}_-))} D\tilde{\mathbf{h}}(\textbf{q}^{\bar{u}}(\bar{t}_-)) f(\textbf{q}^{\bar{u}}(\bar{t}), u)< 0,
\]
where $D\tilde{{\mathbf h}}$ denotes the gradient of $\tilde{{\mathbf h}}$ with respect to the vector ${\mathbf q}$ and with $f$ the right-hand side of \eqref{eq:pend1}--\eqref{eq:pend4},
which would violate condition \eqref{ineq:ult_tan}. Moreover, this trajectory can clearly not approach the point from the set $\{\boldsymbol{\theta} : \tilde{\mathbf{h}}({\mathbf q}) > 0 \}$. The only possibility left is that it approaches the point from the set labelled $\mathcal{B}$, and the final adjoint is given by \eqref{eq:finalConditions1} with condition \eqref{ineq:ult_tan}:
\begin{equation}\label{eq:costate_non_diff}
\lambda^{\bar{u}}(\bar{t}_-)^T = D\tilde{{\mathbf h}}({\mathbf q}^{\bar{u}}(\bar{t}_-)) = (1, -2Ml\sqrt\frac{g}{l},0,0).
\end{equation}
\subsubsection{Deriving the Control Associated with the Barrier}
The adjoint equations are given by \eqref{costateEquation1}, from which it can be verified that $\dot{\lambda}_3 = 0$ and $\dot{\lambda}_4 = -\lambda_3$.
From equations \eqref{eq:final_costate_smooth} and \eqref{eq:costate_non_diff}, we deduce that $\lambda_3(t) = \lambda_4(t) \equiv 0$. The Hamiltonian is here given by $$\begin{aligned} &\mathcal{H}(\textbf{q},u,\lambda) = \lambda_1 \theta_2 \\ &\quad + \lambda_2\frac{-u\cos\theta_1 + (M + m)g\sin\theta_1 - ml\theta^2_2\cos\theta_1\sin\theta_1}{l\left(M + m\sin^2\theta_1\right)}\end{aligned}$$ and from
\eqref{HamiltonianMinimised1} we have that $\mu$ satisfies: $\frac{\partial H}{\partial u} + \mu\frac{\partial \mathbf{h}}{\partial u} = 0$
which gives:
\[
\mu(t) = \left\{\begin{array}{ll} \frac{\lambda_2(t)\cot\theta_1(t)}{l\left(M + m\sin^2(\theta_1)\right)} & ~\mathrm{if}~\mathbf{h}(\textbf{q}(t),\bar{u}(t)) = 0\\
0 & ~\mathrm{if}~ \mathbf{h}(\textbf{q}(t),\bar{u}(t)) < 0.
\end{array}\right.
\]
The Hamiltonian minimisation yields:
\[
\begin{array}{lll}
\mathrm{if}~\lambda_2(t)\cos\theta_1(t) > 0 & \\
\,\,\,\,\bar{u}(t)\hspace{-2pt}=\hspace{-3pt}\left\{
\begin{array}{lll}
\hspace{-2pt}1 & \mathrm{if} & \sin\theta_1(t)\leq 0\\
\hspace{-2pt}\min \left( \frac{Ml\theta_2^2(t) - Mg\cos\theta_1(t)}{\sin\theta_1(t)} , 1\right)& \mathrm{if} & \sin\theta_1(t) > 0
\end{array}\right. &\\
\mathrm{if}~\lambda_2(t)\cos\theta_1(t) < 0 & \\
\,\,\,\,\bar{u}(t)\hspace{-2pt}=\hspace{-3pt}\left\{
\begin{array}{lll}
\hspace{-2pt}-1 & \hspace{-2pt}\mathrm{if} & \sin\theta_1(t) \geq 0\\
\hspace{-2pt}\max\left(\frac{Ml\theta_2^2(t) - Mg\cos\theta_1(t)}{\sin\theta_1(t)},-1 \right)& \hspace{-2pt}\mathrm{if} & \sin\theta_1(t) < 0
\end{array}\right. &\\
\mathrm{if}~\lambda_2(t)\cos\theta_1(t) = 0 & \\
\,\,\,\,\bar{u}(t)=\mathrm{arbitrary}.&
\end{array}
\]
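This case analysis translates directly into code. The sketch below (reusing the constants and conventions of the earlier listings) evaluates $\bar{u}$ from the current angles and the adjoint component $\lambda_2$; in the degenerate case $\lambda_2\cos\theta_1 = 0$ any admissible value may be substituted for the one returned here.
\begin{verbatim}
def u_bar(th1, th2, lam2, tol=1e-12):
    """Control along barrier trajectories (Hamiltonian minimisation)."""
    s, c = np.sin(th1), np.cos(th1)
    sw = lam2 * c
    if abs(sw) <= tol:
        return 0.0                     # arbitrary admissible value
    bound = None
    if abs(s) > tol:
        bound = (M * l * th2 ** 2 - M * g * c) / s
    if sw > 0:
        return 1.0 if s <= tol else min(bound, 1.0)
    return -1.0 if s >= -tol else max(bound, -1.0)
\end{verbatim}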
\subsubsection{Backward Integration of System Equations}\label{subsection:Backward_Integration}
As previously remarked, the right hand sides of equations \eqref{eq:pend1} and \eqref{eq:pend2}, as well as the mixed constraint, equation \eqref{ineq:PendulumMixedConstraint}, and the control associated with the barrier, $\bar{u}$, are independent of $x_1$ and $x_2$. Moreover, as shown in subsection \ref{subsec:Pts_Of_Tan}, the values of $x_1$ and $x_2$ are arbitrary at points of ultimate tangentiality. These facts allow us to simplify the analysis by ignoring $x_1$ and $x_2$ from this point forward, only focusing on ${\boldsymbol \theta} =(\theta_1,\theta_2)$.
Integrating equations \eqref{eq:pend1}, \eqref{eq:pend2} and \eqref{costateEquation1} backwards using $\bar{u}$, the expression of \eqref{costateEquation1} being omitted for clarity's sake, from the identified points of ultimate tangentiality, $(\pm\arctan(Mg) + 2k\pi,0)$ and $(2k\pi,\pm\sqrt{\frac{g}{l}})$, $k\in {\mathbb N}$, utilising the appropriate final adjoint for each point, we obtain the trajectories as in Figure \ref{Fig:BigMassPendulum} and Figure \ref{Fig:SmallCartMass}.
Figure \ref{Fig:BigMassPendulum} shows the admissible set obtained for a set of constants where the mass of the cart is large relative to the pendulum mass. On the contrary, when the cart mass is relatively small, the barrier trajectories intersect and, by Theorem \ref{thm:stopping_points} of Appendix \ref{Appendix:stop}, we deduce that these intersection points are stopping points, see Figure \ref{Fig:SmallCartMass}. Figure \ref{Fig:SmallCartMassZoomedIn} provides a closer look at the control function along the barrier trajectories obtained for the constants in Figure~\ref{Fig:SmallCartMass}.
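A possible implementation of this backward integration is sketched below, building on the earlier listings. The adjoint right-hand side, whose explicit expression is omitted here as in the text, is represented by a hypothetical helper \texttt{adjoint\_rhs} that the reader would derive from \eqref{costateEquation1}; the terminal state encodes the point of ultimate tangentiality $(-\arctan(Mg),0)$ together with its final adjoint.
\begin{verbatim}
from scipy.integrate import solve_ivp

def barrier_rhs(t, y):
    th1, th2, lam1, lam2 = y
    u = u_bar(th1, th2, lam2)
    dth = f(np.array([th1, th2, 0.0, 0.0]), u)[:2]
    dlam = adjoint_rhs(th1, th2, lam1, lam2, u)  # hypothetical helper
    return [dth[0], dth[1], dlam[0], dlam[1]]

th_end = -np.arctan(M * g)          # ultimate tangentiality point
y_end = [th_end, 0.0,
         np.cos(th_end) - M * g * np.sin(th_end), 0.0]
sol = solve_ivp(barrier_rhs, (0.0, -5.0), y_end, max_step=1e-3)
\end{verbatim}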
\begin{figure}[thpb]
\begin{center}
\includegraphics[height=7.9cm]{BigCartMass.eps}
\caption{The admissible set for the pendulum on a cart with a non-rigid cable, with the constraint that the tension in the cable is always nonnegative, \eqref{ineq:PendulumMixedConstraint}. The constants in this case are: $g = 10\,\mathrm{m/s}^2$, $l = 1$ metre, $M = 0.5$ kg, $m = 0.1$ kg. Note that the admissible set is made up of disjoint parts.}
\label{Fig:BigMassPendulum}
\end{center}
\end{figure}
\begin{figure}[thpb]
\begin{center}
\includegraphics[width=8.3cm]{SmallCartMass.eps}
\caption{The admissible set for the pendulum on a cart with a non-rigid cable, with the constraint that the tension in the cable is always nonnegative, \eqref{ineq:PendulumMixedConstraint}. The constants in this case are: $g = 10\,\mathrm{m/s}^2$, $l = 1$ metre, $M = 0.1$ kg, $m = 0.1$ kg. Note that the obtained barrier trajectories intersect at \emph{stopping points}.}
\label{Fig:SmallCartMass}
\end{center}
\end{figure}
\begin{figure}[thpb]
\begin{center}
\includegraphics[width=0.8\columnwidth]{SmallCartMassZoomedIn.eps}
\caption{A closer look at the control associated with the barrier trajectories from Figure \ref{Fig:SmallCartMass}. The ``$\times$'' correspond to $\theta_1 = \pm\frac{\pi}{2}$, where the control switches. The ``+'' correspond to the points where the controls associated with the barrier trajectories arriving at $(0,\pm\sqrt{\frac{g}{l}})$ switch to $\frac{Ml\theta_2^2 - Mg\cos\theta_1}{\sin\theta_1}$.}
\label{Fig:SmallCartMassZoomedIn}
\end{center}
\end{figure}
\section{Discussion}\label{sec:Discussion}
If $\boldsymbol{\theta}$ is such that $\tilde{\mathbf{h}}(\textbf{q}) \geq 0$ then the angular velocity of the mass is too small to provide positive tension in the cable, and the mass enters free-fall. If $\boldsymbol{\theta}\in {\mathcal A}^{\mathsf C}$ then no admissible control function can prevent a loss of tautness in the future.
At the points of ultimate tangentiality $(\pm\arctan(Mg) + 2k\pi, 0)$, $k\in {\mathbb N}$, the angular velocity of the mass is zero as well as the tension in the cable and the mass is in free-fall. However, employing the only admissible control at this point (i.e. $u = \pm 1$ depending on the point) results in the state immediately entering the interior of the admissible set and the tension can be made positive again.
At the singular points $(2k\pi,\pm \sqrt\frac{g}{l})$, $k\in {\mathbb N}$, the control acts perpendicularly to the cable and so does not have any effect on the tension. The barrier trajectory that passes through these points is quite interesting because along the entire part of the trajectory for which $\bar{u} = \frac{Ml\theta_2^2 - Mg\cos\theta_1}{\sin\theta_1}$ the mass is in free-fall but the cable remains taut. When the barrier arrives at the points $(2k\pi,\pm \sqrt\frac{g}{l})$ it is again possible to employ a control such that the state enters the interior of the admissible set and the tension becomes positive again.
If the constants are such as those specified in Figure \ref{Fig:BigMassPendulum} the admissible set consists of a periodic sequence of two connected components, one of them being bounded and the other not. This has the interpretation that if the system is initiated in the bounded one then it is impossible to increase the angular velocity beyond a certain bound without leaving the admissible set. On the contrary, if the system is initiated in the unbounded one, one can ``spin'' the mass through the full range of angles, i.e. through all $\theta_1 \in {\mathbb R}$, always maintaining a taut cable. Let us stress that no control allows the state to pass from one component to the other without entering ${\mathcal A}^{\mathsf C}$.
The admissible set obtained in Figure \ref{Fig:SmallCartMass} is now connected thus allowing more manoeuvrability.
\section{Concluding Remarks and Future Research}\label{sec:Conclusions}
We can model the pendulum on a cart with a non-rigid cable as a hybrid automaton, see, e.g., \cite{VanDerShaft2000}. In this framework a hybrid system is specified by a graph with the nodes corresponding to ``locations'', where at each location the continuous state evolves according to a particular differential equation. At ``event times'' there occur ``transitions'' between locations along the graph's edges.
The pendulum system may be modelled as a hybrid automaton with two locations: at the first location the continuous state evolves according to \eqref{eq:pend1} - \eqref{eq:pend4} and at the second location the state evolves according to the free-fall dynamics of the mass with slack cable. The admissible set can then be interpreted as a \emph{potentially safe set}, in other words if the state remains in this set it is guaranteed that there exists a control such that the system does not transition out of the initial location. We contrast this with the usual notion of \emph{safety sets} in hybrid systems where it is generally required that the state remains in this set \emph{for all} possible control functions, see e.g. \cite{Mitchell2003} and \cite{Lygeros2007}, hence the term \emph{potentially} safe.
The results may find application in the study of other similar problems in engineering, such as the control of weight-handling equipment, UAVs and to obtaining potentially safe sets in hybrid systems. Indeed, the same approach should be applicable to higher dimensional problems such as the pendulum in 3 dimensions with non-rigid cable. This application will be the subject of future research.
Future research could also focus on the development of a richer theory of potentially safe sets for hybrid automata with any finite number of locations, similar to the ideas in \cite{Mitchell2003}.
\bibliographystyle{plain}
\noindent
Let $(\Omega, \mathcal A, P)$ be a probability space and $K$ be an integer such that $K \geq 2$; we consider random variables $X_1 , \cdots , X_K$ with values in Euclidean vector spaces $\mathcal{X}_1 , \cdots , \mathcal{X}_K$ respectively. We suppose that for any $k\in\{1,\cdots,K\}$, one has $\mathbb{E}(X_k)=0$ and $\mathbb E(\left\|X_k\right\|_k^2)<\infty$, where $\left\| \cdot \right\|_k$ denotes the norm induced by the inner product $\left\langle \cdot\ ,\ \cdot\right\rangle_{k}$ of $\mathcal{X}_k$. The multiple-set linear canonical analysis (MSLCA) of $X_1,\cdots,X_K$ is a statistical method that makes it possible to analyse the relationships among these variables. It was introduced many years ago (e.g., \cite{gifi}) and has been extensively studied (e.g., \cite{gardner}, \cite{hwang}, \cite{nkiet}, \cite{takane}, \cite{tenenhaus}). Formally, considering the random variable $X=(X_1,\cdots, X_K)$ with values in the space $\mathcal{X}:=\mathcal{X}_1\times \mathcal{X}_2\times\cdots\times\mathcal{X}_K$, MSLCA is the search of a sequence $\left(\alpha^{(j)}\right)_{ 1\leq j \leq q}$ of vectors of $\mathcal{X}$, where $q =$ dim$(\mathcal{X})$, satisfying:
\begin{equation}\label{acg}
\alpha^{(j)} =\text{arg}\ \underset{\alpha \in \mathcal C_j}{\text{max}}\ \mathbb E\left(< \ \alpha\ ,\ X\ >^2_{\mathcal{X}}\right) ,
\end{equation}
\noindent
where
$\mathcal C_1 = \ \left\{ \alpha \in \mathcal{X} / \ \sum_{k=1}^K \ var\left(< \ \alpha_k\ ,\ X_k\ >_{k}\right) = 1 \right\}$ and, for $ j \geq 2$:
\[
\mathcal C_j = \ \left\{ \alpha \in \mathcal C_1 / \ \sum_{k=1}^K \ cov\left(< \ \alpha^{(r)}_k\ ,\ X_k\ >_{k} , < \ \alpha_k\ ,\ X_k\ >_{k}\right) = 0 , \ \forall \ r \in \left\{ 1 , \cdots , j - 1 \right\}\right\}.
\]
A solution of the above maximization problem is obtained from the spectral
analysis of a given operator $T$. For $(k,\ell) \in \left\{ 1 , \cdots , K \right\}^2 $, considering the covariance operators
$V_{k\ell} = \mathbb E(X_\ell\otimes X_k) = V_{\ell k}^*$ and \ $V_{k} := V_{kk}$,
where \ $\otimes$ \ denotes the tensor product such that \ $x\otimes y$\ is the linear map : $h \mapsto < x, h > y$, letting $ \tau_k$ be the canonical projection
$\tau_k: \alpha =(\alpha_1,\cdots,\alpha_K)\in \mathcal{X} \mapsto \alpha_k \in \mathcal{X}_k$ and assuming that each $V_k$ is invertible, we have $T= \Phi^{-1/2}\Psi\Phi^{-1/2}$, where
$
\Phi = \sum_{k=1}^K \tau_k^*V_{k}\tau_k$ and $\Psi = \sum_{k=1}^{K} \sum_{\underset{\ell\neq k}{\ell=1}}^{K} \tau_k^*V_{k\ell}\tau_\ell$.
If $\left\{ \beta^{(1)}, \cdots, \beta^{(q)}\right\}$ is an orthonormal basis of $\mathcal{X}$ such that $\beta^{(j)}$ is an eigenvector of $T$ associated with the $j$-th largest eigenvalue $\rho_j$, then we obtain a solution of \eqref{acg} by taking
$\alpha^{(j)}= \Phi^{-1/2}\beta^{(j)}$. Classical estimation of MSLCA is based on empirical covariance operators that are known to be very sensitive to outliers. This makes the method a non-robust one and highlights the interest of providing a robust estimation of MSLCA, as was done for other multivariate statistical methods such as principal component analysis, discriminant analysis and canonical correlation analysis (\cite{crouxdehon}, \cite{crouxfilm}, \cite{crouxhaes}, \cite{taskinen}). A natural way of doing that is to replace the covariance operator of $X$ by a robust estimator. Among such robust estimators, the S-estimator has been extensively studied (\cite{davies}, \cite{lopuhaa}, \cite{lopuhaa2}) and is known to have good robustness and efficiency properties. In this paper, we propose a robust version of MSLCA based on the S-estimator of the covariance operator. This estimation procedure is introduced in Section 2. The related influence functions are derived in Section 3. Section 4 is devoted to the asymptotic properties of the introduced estimator, and a robust test for mutual non-correlation is proposed in Section 5.
\section{Estimation of MSLCA based on S-estimator}
\noindent We assume that the following condition holds:
\bigskip
\noindent $(\mathscr{A}_1):$ $X$ has an elliptical contoured distribution with density
$
f_X(x) = (\textrm{det}(V))^{-1/2}h(<x , V^{-1}x >_{\mathcal{X}})
$, where $h : [0, +\infty[ \rightarrow [0, +\infty[$ is a function having a strictly negative derivative $h'$.
\bigskip
\noindent Let $\{X^{(1)},\cdots,X^{(n)}\}$ be an i.i.d. sample of $X$, we consider a fixed real $b_0$ and a function $\xi : \mathbb{R} \rightarrow \mathbb{R}$. We denote by $\mathcal{P}\left(\mathcal{X}\right)$ the set of positive definite symmetric operators from $\mathcal{X}$ to itself. The S-estimators $\widetilde{\mu}_{n}$ and $\widetilde{V}_{n}$ of the mean and the covariance operator of $X$ respectively are given by the pair $(\widetilde{\mu}_{n},\widetilde{V}_{n})$ that minimizes the determinant det$\left(G\right)$ over all $(\mu,G) \in\mathcal{X}\times\mathcal{P}\left(\mathcal{X}\right)$ that satisfy
$
\frac{1}{n} \sum_{i=1}^{n} \xi\left(\left\|G^{-1/2}(X^{(i)} - \mu)\right\|_{\mathcal{X}}\right)\leq b_0 .
$
\noindent
It is well known that these estimators are robust and have high breakdown points (see, e.g., \cite{davies}). From them, we can introduce an estimator of MSLCA which is expected to be robust as well. Indeed, putting $
\widetilde{\Phi}_n = \sum_{k=1}^{K} \tau^*_k\widetilde{V}_{k.n}\tau_k$ and $\widetilde{\Psi}_n = \sum_{k=1}^{K} \sum_{\underset{\ell\neq k}{\ell=1}}^{K} \tau^*_k\widetilde{V}_{k\ell.n}\tau_\ell$,
where $\widetilde{V}_{k.n} = \tau_k\widetilde{V}_{n}\tau^*_k$ and $\widetilde{V}_{k\ell.n} = \tau_k\widetilde{V}_{n}\tau^*_\ell$, we estimate $T$ by
$
\widetilde{T}_n = \widetilde{\Phi}^{-1/2}_n \widetilde{\Psi}_n\widetilde{\Phi}^{-1/2}_n
$.
Considering the eigenvalues $\widetilde{\rho}_{1.n} \geq \widetilde{\rho}_{2.n} \geq \cdots\geq \widetilde{\rho}_{q.n}$ of $\widetilde{T}_n$ and $\left\{ \widetilde{\beta}^{(1)}_n, \cdots, \widetilde{\beta}^{(q)}_n \right\}$ an orthonormal basis of $\mathcal{X}$ such that $\widetilde{\beta}^{(j)}_n$ is an eigenvector of $\widetilde{T}_n$ associated with $\widetilde{\rho}_{j.n}$, we estimate $\rho_{j}$ by $\widetilde{\rho}_{j.n}$, $\beta^{(j)}$ by $\widetilde{\beta}^{(j)}_n$ and $\alpha^{(j)}$ by $\widetilde{\alpha}^{(j)}_n = \widetilde{\Phi}^{-1/2}_n \widetilde{\beta}^{(j)}_n$.
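A plug-in computation of this robust MSLCA is sketched below in Python. It assumes that a robust (e.g. S-) estimate of the covariance operator of $X$ is already available as a $q\times q$ matrix whose blocks are ordered according to the dimensions of $\mathcal{X}_1,\cdots,\mathcal{X}_K$; the function name and interface are ours.
\begin{verbatim}
import numpy as np

def robust_mslca(V, dims):
    """Eigenpairs of T = Phi^{-1/2} Psi Phi^{-1/2} from covariance V."""
    q = sum(dims)
    idx = np.cumsum([0] + list(dims))
    Phi = np.zeros((q, q))
    for k in range(len(dims)):
        sl = slice(idx[k], idx[k + 1])
        Phi[sl, sl] = V[sl, sl]        # block-diagonal part of V
    Psi = V - Phi                      # off-diagonal blocks of V
    w, P = np.linalg.eigh(Phi)         # Phi is positive definite
    Phi_inv_sqrt = P @ np.diag(w ** -0.5) @ P.T
    T = Phi_inv_sqrt @ Psi @ Phi_inv_sqrt
    rho, B = np.linalg.eigh(T)
    order = np.argsort(rho)[::-1]      # decreasing eigenvalues
    return rho[order], Phi_inv_sqrt @ B[:, order]  # alphas as columns
\end{verbatim}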
\section{Influence functions}
\noindent For studying the effect of a small amount of contamination at a given point on MSLCA it is important, as usual in the robustness literature (see \cite{hampel}), to use the influence function. More precisely, we have to derive expressions of the influence functions related to the functionals that give $T$, $\rho_j$ and $\alpha^{(j)}$ (for $1\leq j \leq q$) at the distribution $\mathbb{P}_X$ of $X$. Recall that the influence function of a functional $S$ at $\mathbb{P}$ is defined as
\[
\textrm{IF}\left(x;S,\mathbb{P}\right)=\lim_{\varepsilon\downarrow 0}\frac{S\left((1-\varepsilon)\mathbb{P}+\varepsilon\delta_x\right)-S(\mathbb{P})}{\varepsilon},
\]
where $\delta_x$ is the Dirac measure putting all its mass in $x$.
In order to derive the influence functions related to the above estimator of MSLCA, we have to specify the functional that corresponds to it. We impose the following properties on the loss function $\xi$:
\bigskip
\noindent $(\mathscr{A}_2)$: $\xi$ is symmetric, has a continuous derivative $\psi $ and is such that $\xi(0)=0$;
\bigskip
\noindent $(\mathscr{A}_3)$: there exists $c_0 > 0$ such that \ $\xi$ is strictly increasing on \ $[0 ,\ c_0]$ \ and constant on $[c_0 ,\ +\infty[$;
\bigskip
\noindent $(\mathscr{A}_4)$: the function $t \mapsto \psi(t)t^{-1}$ is continuous and bounded.
\bigskip
\noindent For example, the function $\xi (t)=\frac{c^2}{6}\left(1-\left(1-\frac{t^2}{c^2}\right)^3\right)\,\textrm{\textbf{1}}_{[-c,c]}(t)+\frac{c^2}{6}\,\textrm{\textbf{1}}_{\mathbb{R}\backslash [-c,c]}(t)$, where $c>0$, satisfies the above conditions. Its derivative is Tukey's biweight function $\psi (t)=t\left(1-\frac{t^2}{c^2}\right)^2\,\textrm{\textbf{1}}_{[-c,c]}(t)$.
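In code, this loss function and its derivative read, for instance (a minimal Python sketch):
\begin{verbatim}
import numpy as np

def xi(t, c):
    """Tukey biweight loss; equals c^2/6 for |t| > c."""
    t = np.minimum(np.abs(np.asarray(t, dtype=float)), c)
    return (c ** 2 / 6.0) * (1.0 - (1.0 - t ** 2 / c ** 2) ** 3)

def psi(t, c):
    """Its derivative: t (1 - t^2/c^2)^2 on [-c, c], 0 outside."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= c,
                    t * (1.0 - t ** 2 / c ** 2) ** 2, 0.0)
\end{verbatim}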
The functional $\mathbb V_s$ related to the aforementioned S-estimator of V is defined in \cite{lopuhaa2} (see also \cite{lopuhaa}); it is such that $\mathbb V_s(\mathbb{P})$ is the solution to the problem of minimizing the determinant det$\left(G\right)$ over all $\mu \in \mathcal{X}$ and $ G \in \mathcal{P}( \mathcal{X})$ that satisfy
\begin{eqnarray*}
\int_{\mathcal{X}} \xi\left(\left\|G^{-1/2}(x - \mu)\right\|_{\mathcal{X}}\right)\,d\mathbb{P}(x)&\leq& b_0 .
\end{eqnarray*}
\noindent
It is known that, at an elliptical distribution, $\mathbb V_s(\mathbb P_X) = V$ (see \cite{lopuhaa}, p.222). Therefore, considering the functional $\mathbb T_s$ defined as $\mathbb T_s(\mathbb P) = f\left(\mathbb V_s(\mathbb P)\right)^{-1/2} g\left(\mathbb V_s(\mathbb P)\right) f\left(\mathbb V_s(\mathbb P)\right)^{-1/2}$,
where $f\left(A\right) = \displaystyle\sum_{k=1}^K \tau_k^*\tau_k A\tau_k^*\tau_k$ and $ g\left(A\right) = \displaystyle \sum_{k=1}^{K} \sum_{\underset{\ell\neq k}{\ell=1}}^{K} \tau_k^*\tau_kA\tau_\ell^*\tau_\ell $, we have $\mathbb T_s(\mathbb{P}_X)=T$. Putting $T_s = \mathbb T_s(\mathbb P_X)$,
\begin{equation}\label{l}
\lambda(x,V)=\sum_{k=1}^K \sum_{\underset{\ell\neq k}{\ell=1}}^K \left\{-\frac{1}{2}\tau_k^*\left(x_k\otimes x_k\right) V_{k\ell}\tau_\ell - \frac{1}{2} \tau_\ell^* V_{\ell k}\left(x_k\otimes x_k\right)\tau_k+\tau_k^* \left(x_\ell\otimes x_k\right)\tau_\ell\right\},
\end{equation}
\[
\gamma_1 = \frac{2\pi^{q/2}}{\Gamma(q/2)(q + 2)} \int_{0}^{+\infty}\left( \psi'(r)r^2 + (q + 1)\psi(r)r\right) r^{q - 1}h(r^2)dr
\]
and
\[
\gamma_2 = \frac{2\pi^{q/2}}{\Gamma(q/2)} \int_{0}^{+\infty}\psi(r)r^{q} h(r^2)dr ,
\]
$\Gamma$ \ being the usual gamma function.
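\smallskip

\noindent For a concrete choice of the density generator $h$ appearing in assumption $(\mathscr{A}_1)$, the constants $\gamma_1$ and $\gamma_2$ can be evaluated by one-dimensional numerical integration. The following Python sketch assumes, purely for illustration, the Gaussian generator $h(t)=(2\pi)^{-q/2}e^{-t/2}$ with $q=4$, approximates $\psi'$ by finite differences, and reuses the functions \texttt{xi} and \texttt{psi} of the previous sketch; all numerical values are illustrative.
\begin{verbatim}
import numpy as np
from math import gamma, pi
from scipy.integrate import quad

q, c = 4, 1.548                                      # illustrative values
h = lambda t: (2 * pi)**(-q / 2) * np.exp(-t / 2)    # Gaussian generator
dpsi = lambda r, e=1e-6: (psi(r + e, c) - psi(r - e, c)) / (2 * e)
surf = 2 * pi**(q / 2) / gamma(q / 2)                # 2 pi^{q/2}/Gamma(q/2)

gamma1 = (surf / (q + 2)) * quad(
    lambda r: (dpsi(r) * r**2 + (q + 1) * psi(r, c) * r)
              * r**(q - 1) * h(r**2), 0, np.inf)[0]
gamma2 = surf * quad(lambda r: psi(r, c) * r**q * h(r**2), 0, np.inf)[0]
\end{verbatim}
\noindent With these constants, we have: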
\bigskip
\noindent
\textbf{Theorem 3.1.}\textsl{ We suppose that the assumptions $(\mathscr{A}_1)$ to $(\mathscr{A}_4)$ hold. Then}
\begin{eqnarray*}
\textrm{IF}(x; T_s, \mathbb{P}_X) &=& \frac{q}{\gamma_1} \psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\,\,\left\|V^{-1/2}x\right\|_{\mathcal{X}}^{-1}\,\lambda(x,V).
\end{eqnarray*}
\noindent From the properties of $\psi$, it is easily seen that $\textrm{IF}(x; T_s, \mathbb{P}_X) $ equals $0$ if $\Vert V^{-1/2}x\Vert_\mathcal{X}>c_0$. Otherwise, we have $\Vert x\Vert_\mathcal{X}\leq c_0\,\Vert V^{1/2}\Vert_\infty$, where
$\left\|\cdot\right\|_{\infty}$ denotes the usual operator norm defined by $\left\|A\right\|_{\infty}=\sup_{x\neq 0 }\left(\left\|Ax\right\| /\left\|x\right\| \right)$. Then, it is easy to check the inequality
\[
\underset{x \in \mathcal{X}}{\sup}\left\|\textrm{IF}(x; T_s,\mathbb{P}_X)\right\|_{\infty} \leq \sup_{t \in \mathbb R^*_{+}}\left( \frac{\psi(t)}{t}\right) \frac{Kq}{\left|\gamma_1\right|}\left(K - 1\right)\left( \left\|V\right\|_{\infty} + 1 \right)c_0^2\left\|V^{1/2}\right\|^2_{\infty}
\]
that shows that the influence function is bounded and, therefore, that the estimation procedure is robust. Now, we give the influence functions related to the canonical coefficients and the canonical directions obtained from the robust MSLCA introduced above. For $j\in\{1,\cdots,q\}$, denoting by $\mathbb{R}_{s\cdot j}$ (resp. $\mathbb{B}_{s\cdot j}$; resp. $\mathbb{A}_{s\cdot j}$ ) the functional such that $\mathbb{R}_{s\cdot j}(\mathbb{P})$ is the $j$-th largest eigenvalue of $\mathbb{T}_{s}(\mathbb{P})$ (resp. the associated eigenvector; resp. $\mathbb{A}_{s\cdot j}(\mathbb{P})= f\left(\mathbb{V}_{s}(\mathbb{P} )\right)^{-1/2}\mathbb{B}_{s\cdot j}(\mathbb{P})$ ), we put $\rho_{s\cdot j}=\mathbb{R}_{s\cdot j}(\mathbb{P}_X)$, $\beta^{(j)}_{s}=\mathbb{B}_{s\cdot j}(\mathbb{P}_X)$ and $\alpha^{(j)}_{s}=\mathbb{A}_{s\cdot j}(\mathbb{P}_X)$.
Putting
\begin{eqnarray}\label{h}
H\left(\xi , \psi , V , x\right) &= & \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right)\mathbb{I} \nonumber\\
& &+ \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{1}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\right)\mathbb{I}
\end{eqnarray}
and
\begin{eqnarray}\label{lj}
\lambda_j( x , V)
&=&\sum_{k=1}^{K} \sum_{\underset{\ell\neq k}{\ell=1}}^{K}\sum_{\underset{m\neq j}{m=1}}^{q} \frac{1}{\rho_j - \rho_m}\bigg(<\beta_k^{(m)} , x_k >_k< x_\ell , \beta_\ell^{(j)} >_\ell \nonumber\\
& &-\frac{1}{2} < \beta_k^{(m)} , x_k >_k < x_k , V_{k\ell} \beta_\ell^{(j)} >_k \nonumber\\
& &- \frac{1}{2} < x_k , V_{k\ell }\beta_\ell^{(m)} >_k < x_k , \beta_k^{(j)} >_k \bigg)\beta^{(m)}\\
& &-\frac{1}{2}\left( \sum_{k=1}^{K}\left[ \tau_k^*\left(x_k\otimes x_k\right)\tau_k + < \beta_k^{(j)}, x_k>_k^2\mathbb{I}\right] - 2\mathbb{I}\right)\beta^{(j)} \nonumber,
\end{eqnarray}
where $\mathbb{I}$ denotes the identity operator of $\mathcal{X}$, we have:
\bigskip
\noindent
\textbf{Theorem 3.2.} \textsl{We suppose that the assumptions $(\mathscr{A}_1)$ to $(\mathscr{A}_4)$ hold. Then, for
any $j \in \left\{1, \cdots , q\right\},$ we have:}
\\\\
$\left(i\right)$ \ $\textrm{IF}(x; \rho_{s.j}, \mathbb P_X) = \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}} \sum_{k=1}^{K} \sum_{\underset{\ell\neq k}{\ell=1}}^{K} < \beta_k^{(j)} , x_k >_{k} < x_\ell - V_{\ell k}x_k , \beta_\ell^{(j)} >_{\ell}$.
\bigskip
\noindent
$\left(ii\right)$ \textsl{If $\rho_1>\cdots>\rho_q$, then } $\textrm{IF}( x ; \alpha_s^{(j)} ,\mathbb P_X) = \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\,\lambda_j( x , V) - H\left(\xi , \psi , V , x\right)\beta^{(j)} $.
\section{Asymptotic distributions}
\noindent
We first establish asymptotic normality for $\widetilde{T}_{n}$. Putting
\[
\beta_3 = \frac{2\pi^{q/2}}{\Gamma(q/2)} \int_{0}^{+\infty} \frac{4}{q + 2}\psi(r)r^{q + 2} h'(r^2)dr ,
\]
\noindent
we have:
\bigskip
\noindent
\textbf{Theorem 4.1. }\textsl{We suppose that the assumptions $(\mathscr{A}_1)$ to $(\mathscr{A}_4)$ hold and that $\mathbb{E}\left(\Vert X\Vert_\mathcal{X}^4\right)<+\infty$. Then, $ \sqrt{n}\left(\widetilde{T}_n - T\right)$ converges in distribution, as
$n \rightarrow +\infty$, to a random variable $U_s$ having a normal distribution in $\mathcal L(\mathcal{X}),$ with mean $0$ and
covariance operator equal to that of the random operator}
\begin{eqnarray}\label{zs}
Z_s &=& -2q\beta^{-1}_3\psi\left(\left\|V^{-1/2}X\right\|_{\mathcal{X}}\right)\,\left\|V^{-1/2}X\right\|_{\mathcal{X}}^{-1}\,\, \lambda(X,V).
\end{eqnarray}
\noindent
This result makes it possible to consider a robust test for mutual non-correlation, that is, the test of the hypothesis $\mathscr{H}_0 : \forall (k, \ell) \in \left\{1, ... , K \right\}^2,$ $k \neq \ell,$ $V_{k\ell} = 0$
against the alternative
$\mathscr{H}_1 : \exists (k, \ell) \in \left\{1, ... , K\right\}^2$, $k \neq \ell$, $V_{k\ell} \neq 0$. We take as test statistic the random variable $
\widetilde{S}_{n} = \displaystyle \sum_{k=2}^{K} \sum_{\ell=1}^{k - 1}\textrm{tr}\left( \pi_{k\ell}\left(\widetilde{T}_{n}\right)\pi_{k\ell}\left(\widetilde{T}_{n}\right)^*\right)$, where $\pi_{k\ell}$ is the operator $A\mapsto \tau_k A\tau_\ell^\ast$.
Then, putting
\begin{eqnarray}
\kappa_0 &=& \frac{-2\beta^{-1}_3}{(q + 1)}\mathbb E\bigg(\psi\left(\left\|X\right\|_{\mathcal X}\right)\left\|X\right\|_{\mathcal X}^3\bigg),
\end{eqnarray}
\noindent
we have:
\bigskip
\noindent
\textbf{Theorem 4.2. }\textsl{We suppose that the assumptions $(\mathscr{A}_1)$ to $(\mathscr{A}_4)$ hold and that $\mathbb{E}\left(\Vert X\Vert_\mathcal{X}^4\right)<+\infty$. Then, under
$\mathscr{H}_0$, the sequence $(\kappa_0)^{-1}n\widetilde{S}_{n}$ converges in distribution, as $n \rightarrow +\infty,$ to \ $\chi_d^2,$ \ where $d = \sum_{k=1}^{K}\sum_{\ell=1}^{k-1} p_kp_\ell$ with \ $p_k = \textrm{dim}(\mathcal X_k).$}
\bigskip
\noindent In practice, $\kappa_0$ is replaced by a consistent estimator; for example by $\widehat{\kappa}_0= \frac{-2\beta^{-1}_3}{n(q + 1)}\sum_{i=1}^n \psi\left(\left\|X^{(i)}\right\|_{\mathcal X}\right)\left\|X^{(i)}\right\|_{\mathcal X}^3.$
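\smallskip

\noindent Putting the pieces together, the resulting test can be sketched in Python as follows, where \texttt{T\_n} is a matrix representation of $\widetilde{T}_n$ partitioned according to the block dimensions $p_1, \dots, p_K$, \texttt{kappa\_hat} stands for $\widehat{\kappa}_0$ and \texttt{n} for the sample size; this is only a schematic finite-dimensional illustration, and all names are placeholders.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def noncorrelation_test(T_n, p, kappa_hat, n):
    # T_n: (q, q) matrix estimate of T; p: block dims [p_1,...,p_K];
    # kappa_hat: consistent estimate of kappa_0; n: sample size.
    idx = np.cumsum([0] + list(p))
    S_n = 0.0
    for k in range(1, len(p)):
        for l in range(k):
            B = T_n[idx[k]:idx[k+1], idx[l]:idx[l+1]]  # pi_{kl}(T_n)
            S_n += np.trace(B @ B.T)
    d = sum(p[k] * p[l] for k in range(1, len(p)) for l in range(k))
    stat = n * S_n / kappa_hat
    return stat, chi2.sf(stat, df=d)  # statistic, asymptotic p-value
\end{verbatim}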
\section{Proof of Theorem 3.1}
\noindent
It is shown in Lopuha\"{a} (1989) (see Corollary 5.2, p. 1672) that under the spherical distribution $\mathbb P^0_X,$ one has
\begin{eqnarray*}
\textrm{IF}(x; V_s, \mathbb P^0_X) &=& \frac{2}{\gamma_2}\left(\xi(\left\|x\right\|_{\mathcal{X}}) - b_0\right)\mathbb I + \frac{q}{\gamma_1}\psi(\left\|x\right\|_{\mathcal{X}})\left\|x\right\|_{\mathcal{X}}\left( \frac{x\otimes x}{\left\|x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\mathbb I\right) .
\end{eqnarray*}
\noindent
Then, the affine equivariance property implies that under the elliptical model given in assumption $(\mathscr{A}_1)$ we have:
\begin{eqnarray}\label{ifvs}
\textrm{IF}(x; V_s, \mathbb P_X) &=& V^{1/2}\left\{ \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right)\mathbb I\right. \nonumber\\
& &+\left. \frac{q}{\gamma_1}\psi(\left\|V^{-1/2}x\right\|_{\mathcal{X}})\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{(V^{-1/2}x)\otimes (V^{-1/2}x)}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\mathbb I\right)\right\}V^{1/2} \nonumber\\
&=& \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right)V \nonumber\\
& &+ \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{x\otimes x}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}V\right).
\end{eqnarray}
\noindent
Putting $V_s = \mathbb V_{s} \left(\mathbb P_X\right) = V,$ we have $f\left(\mathbb V_{s} \left(\mathbb P_X\right)\right) = f\left(V\right) = \mathbb I.$
Thus
\begin{eqnarray*}
\mathbb T_s \left(\mathbb P_{\varepsilon,x}\right) - \mathbb T_s \left(\mathbb P_X\right) &=& f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2}g\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2} - g\left(V_s\right)\\
&=& \left(f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2} - \mathbb I\right)g\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2}\\
&&+\left(g(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right) - V_s)\right)f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2}\\
&&+g\left(V_s\right)\left(f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2} - \mathbb I\right),
\end{eqnarray*}
\noindent
where $\mathbb P_{\varepsilon,x} = (1 - \varepsilon)\mathbb P_X + \varepsilon \delta_x$ with $\varepsilon \in [0, 1]$. Then, using the equality
\begin{eqnarray}\label{deco}
A^{-1/2} - \mathbb I &=& -A^{-1}(A - \mathbb I) \left(A^{-1/2} + \mathbb{I}\right)^{-1}
\end{eqnarray}
\noindent
we obtain:
\begin{eqnarray*}
\mathbb T_s \left(\mathbb P_{\varepsilon,x}\right) - \mathbb T_s \left(\mathbb P_X\right) &=& -f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1}f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right) - \mathbb V_{s} \left(\mathbb P_X\right)\right)\left(f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2} + \mathbb I\right)^{-1}\\
&&\times g\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2}\\
&&+\left(g(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right) - V_s)\right)f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2}\\
&&-g\left(V_s\right)f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1}f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right) - \mathbb V_{s} \left(\mathbb P_X\right)\right)\left(f\left(\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right)\right)^{-1/2} + \mathbb I\right)^{-1}.
\end{eqnarray*}
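\noindent Before going further, note that the identity (\ref{deco}) can be checked numerically in finite dimensions; the following short sketch is a pure sanity check on a random symmetric positive definite matrix and is not part of the proof.
\begin{verbatim}
import numpy as np
from scipy.linalg import inv, sqrtm

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)   # random symmetric positive definite A
I = np.eye(5)
Ais = inv(sqrtm(A))           # A^{-1/2}
assert np.allclose(Ais - I, -inv(A) @ (A - I) @ inv(Ais + I))
\end{verbatim}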
\noindent
Then, from
\begin{eqnarray*}
\textrm{IF}( x ; V_s ,\mathbb P_X)&=& \lim_{\varepsilon\rightarrow 0} \frac{\mathbb V_{s} \left(\mathbb P_{\varepsilon,x}\right) - \mathbb V_{s}\left(\mathbb P_X\right) }{\varepsilon}
\end{eqnarray*}
\noindent
and the continuity of the maps $A \mapsto A^{-1}$ and $A \mapsto A^{-1/2} ,$ we
deduce that
\begin{eqnarray}\label{ifts}
\textrm{IF}(x; T_s, \mathbb P_X)&=& - \frac{1}{2}f\left(\textrm{IF}(x; V_s, \mathbb P_X)\right)g\left(V_{s}\right)\nonumber\\
&&+g\left(\textrm{IF}(x; V_s, \mathbb P_X)\right)\nonumber\\
&&- \frac{1}{2}g\left(V_{s}\right)f\left(\textrm{IF}(x; V_s, \mathbb P_X)\right).
\end{eqnarray}
\noindent
Since $f\left(V\right) = \mathbb I$ and, under the elliptical model, $g\left(V_{s}\right) = g\left(V\right)$, inserting (\ref{ifvs}) in (\ref{ifts}) gives the equality
\begin{eqnarray*}
\textrm{IF}(x; T_s, \mathbb P_X)&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\bigg\{ - \frac{1}{2}f\left(x\otimes x\right)g\left(V_s\right) \\
& &- \frac{1}{2}g\left(V_s\right)f\left(x\otimes x\right) + g\left(x\otimes x\right)\bigg\}\\
&&+ \left\{ \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right) - \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right\}\\
&& \times \left\{ - \frac{1}{2}f\left(V\right)g\left(V_s\right) - \frac{1}{2}g\left(V_s\right)f\left(V\right) + g\left(V\right)\right\}.
\end{eqnarray*}
Using properties of the tensor product, it is easy to check that
\[
- \frac{1}{2}f\left(x\otimes x\right)g\left(V\right) - \frac{1}{2}g\left(V\right)f\left(x\otimes x\right) + g\left(x\otimes x\right)=\lambda(x,V).
\]
Hence
\begin{eqnarray*}
\textrm{IF}(x; T_s, \mathbb P_X)&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\,\lambda(x,V)\\
&&+ \bigg\{ \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right) \\
& &- \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\bigg\}\left(g\left(V\right) - g\left(V_s\right)\right)\\
&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\,\lambda(x,V).
\end{eqnarray*}
\section{Proof of Theorem 3.2}
$(i)$. \ From Lemma 3 in Croux and Dehon (2002), we obtain
\begin{eqnarray*}
\textrm{IF}(x; \rho_{s.j}, \mathbb P_X) &=& < \beta^{(j)} , \textrm{IF}(x; T_s, \mathbb P_X) \beta^{(j)}>_{\mathcal{X}}\\
&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}} < \beta^{(j)} , \lambda(x,V) \beta^{(j)} >_{\mathcal{X}}\\
&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}} \sum_{k=1}^{K} \sum_{\underset{\ell\neq k}{\ell=1}}^{K} < \beta_k^{(j)} , x_k >_{k} < x_\ell - V_{\ell k}x_k , \beta_\ell^{(j)} >_{\ell}.
\end{eqnarray*}
\noindent
$(ii)$. \ Since $f\left(V_s\right) = f\left(V\right) = \mathbb I ,$ we obtain by applying the second part of Lemma 3 in Croux and Dehon (2002):
\begin{eqnarray}
\textrm{IF}(x; \beta_s^{(j)}, \mathbb P_X) &=& \sum_{\underset{m\neq j}{m=1}}^{q} \frac{1}{\rho_{j} - \rho_{m} }< \beta^{(m)} , \textrm{IF}(x; T_s, \mathbb P_X) \beta^{(j)}>_{\mathcal{X}}\beta^{(m)} \nonumber\\
&& - \frac{1}{2}< \beta^{(j)} , \textrm{IF}(x; f(V_s), \mathbb P_X) \beta^{(j)}>_{\mathcal{X}}\beta^{(j)}.
\end{eqnarray}
\noindent
Using (\ref{ifvs}), the equalities $\textrm{IF}(x; f(V_s), \mathbb P_X) = f\left(\textrm{IF}(x; V_s, \mathbb P_X)\right)$, $f(V)=\mathbb{I}$ and
Theorem 3.1, we obtain:
\begin{eqnarray*}
\textrm{IF}(x; \beta_s^{(j)}, \mathbb P_X) &=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}} \sum_{\underset{m\neq j}{m=1}}^{q} \frac{1}{\rho_{j} - \rho_{m} }< \beta^{(m)} , \lambda(x,V) \beta^{(j)}>_{\mathcal{X}}\beta^{(m)} \nonumber\\
&& - \frac{1}{2} \bigg\{\frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right)\\
& &+ \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}< \beta^{(j)} ,(f(x\otimes x) - \mathbb I )\beta^{(j)}>_{\mathcal{X}}\\
&& + \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{1}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\right)\bigg\}\beta^{(j)}\\
&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\eta_j(x,V) \nonumber\\
&& - \frac{1}{2}\bigg\{ \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right)\\
& & + \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{1}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\right)\bigg\}\beta^{(j)},
\end{eqnarray*}
where
\begin{eqnarray*}
\eta_j(x,V)&=&\sum_{\underset{m\neq j}{m=1}}^{q} \frac{1}{\rho_{j} - \rho_{m} }< \beta^{(m)} , \lambda(x,V) \beta^{(j)}>_{\mathcal{X}}\beta^{(m)} \\
& &- \frac{1}{2}< \beta^{(j)} ,(f(x\otimes x) - \mathbb I )\beta^{(j)}>_{\mathcal{X}}\beta^{(j)}.
\end{eqnarray*}
\noindent
On the other hand, since
\[
\alpha_s^{(j)}(\mathbb P_{X}) = f\left(\mathbb V_{s}(\mathbb P_X)\right)^{-1/2}\beta_s^{(j)}(\mathbb P_X) = f\left(V\right)^{-1/2}\beta_s^{(j)}(\mathbb P_X) = \beta_s^{(j)}(\mathbb P_X),
\]
it follows
\begin{eqnarray*}
\alpha_s^{(j)}(\mathbb P_{\varepsilon,x}) - \alpha_s^{(j)}(\mathbb P_{X}) &=& f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right)^{-1/2}\beta_s^{(j)}(\mathbb P_{\varepsilon,x}) - \beta_s^{(j)}(\mathbb P_X)\\
&=& f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right)^{-1/2}\left(\beta_s^{(j)}(\mathbb P_{\varepsilon,x}) - \beta_s^{(j)}(\mathbb P_X)\right)\\
&& + \left(f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right)^{-1/2} - \mathbb I\right)\beta_s^{(j)}(\mathbb P_X).
\end{eqnarray*}
\noindent
Then using (\ref{deco}), we obtain:
\begin{eqnarray*}
\alpha_s^{(j)}(\mathbb P_{\varepsilon,x}) - \alpha_s^{(j)}(\mathbb P_{X}) &=& f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right)^{-1/2}\beta_s^{(j)}(\mathbb P_{\varepsilon,x}) - \beta_s^{(j)}(\mathbb P_X)\\
&=& f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right)^{-1/2}\left(\beta_s^{(j)}(\mathbb P_{\varepsilon,x}) - \beta_s^{(j)}(\mathbb P_X)\right)\\
&& - f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right)^{-1}f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x}) - \mathbb V_{s}(\mathbb P_{X})\right)\\
&&\times\left(f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right)^{-1/2} + \mathbb I\right)^{-1}\beta_s^{(j)}(\mathbb P_X).
\end{eqnarray*}
\noindent
From the continuity of the maps $A \mapsto A^{-1}, A \mapsto A^{-1/2},$ and the equalities
$ \lim_{\varepsilon\rightarrow 0}f\left(\mathbb V_{s}(\mathbb P_{\varepsilon,x})\right) = f\left(\mathbb V_{s}(\mathbb P_{X})\right)=f\left(V\right) = \mathbb I ,$ we deduce that
\begin{eqnarray*}
\textrm{IF}( x ; \alpha_s^{(j)} ,\mathbb P_X)&=&\textrm{IF}(x; \beta_s^{(j)}, \mathbb P_X) - \frac{1}{2}f\left(\textrm{IF}(x; V_s, \mathbb P_X)\right)\beta_s^{(j)}\\
&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\left\{\eta_j(x,V) - \frac{1}{2}\left(f\left(x\otimes x\right)-\mathbb{I}\right)\beta^{(j)}\right\}\nonumber\\
&& + \frac{1}{2}\bigg\{ \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\left(f\left(x\otimes x\right)-\mathbb{I}\right)\beta^{(j)} \\
& & - f\left(\textrm{IF}(x; V_s, \mathbb P_X)\right)\beta_s^{(j)} \bigg\}\nonumber\\
&& - \frac{1}{2}\bigg\{ \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right) \\
& &+ \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{1}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\right)\bigg\}\beta^{(j)}.
\end{eqnarray*}
\noindent
It is easy to check that $\eta_j(x,V) - \frac{1}{2}\left(f\left(x\otimes x\right)-\mathbb{I}\right)\beta^{(j)}=\lambda_j(x,V)$. Then, since $\beta_s^{(j)} = \beta^{(j)}$, we deduce from (\ref{ifvs}) that:
\begin{eqnarray*}
\textrm{IF}( x ; \alpha_s^{(j)} ,\mathbb P_X)&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\lambda_j(x,V) \nonumber\\
&&+ \frac{q}{2\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\left(f\left(x\otimes x\right)-\mathbb{I}\right)\left(\beta^{(j)} - \beta_s^{(j)}\right)\\
&& - \frac{1}{2}\bigg\{ \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right)\\
& & + \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{1}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\right)\bigg\}\left( \beta^{(j)} + \beta_s^{(j)}\right)\\
&=& \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|^{-1}_{\mathcal{X}}\lambda_j(x,V) \nonumber\\
&& - \bigg\{ \frac{2}{\gamma_2}\left(\xi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right) - b_0\right) \\
& &+ \frac{q}{\gamma_1}\psi\left(\left\|V^{-1/2}x\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}x\right\|_{\mathcal{X}}\left( \frac{1}{\left\|V^{-1/2}x\right\|^2_{\mathcal{X}}} - \frac{1}{q}\right)\bigg\}\beta^{(j)}.
\end{eqnarray*}
\section{Proof of Theorem 4.1}
\subsection{A preliminary lemma}
\noindent
The following lemma gives the asymptotic distribution of the random variable
\begin{eqnarray*}
\widetilde{H}_{n} &=& \sqrt{n}\left(\widetilde{V}_n - V\right).
\end{eqnarray*}
\noindent
\textbf{Lemma 1.} \textit{We suppose that the assumptions $(\mathscr{A}_1)$ to $(\mathscr{A}_4)$ hold and that $\mathbb{E}\left(\Vert X\Vert_\mathcal{X}^4\right)<+\infty$. Then, $\widetilde{H}_{n}$ converges
in distribution, as $n \rightarrow +\infty$, to a random variable having a normal distribution in $\mathcal L(\mathcal{X})$ with mean $0$ and covariance operator equal to that of }
\begin{eqnarray*}
\mathcal{Z} &=& -2q\beta^{-1}_3\psi\left(\left\|V^{-1/2}X\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}X\right\|^{-1}_{\mathcal{X}} X\otimes X\\
&& - 2 \left( \frac{\xi(\left\|V^{-1/2}X\right\|_{\mathcal{X}}) - b_0}{q\beta_1} - \frac{\psi(\left\|V^{-1/2}X\right\|_{\mathcal{X}})\left\| V^{-1/2}X\right\|_{\mathcal{X}}}{\beta_3}\right) V.
\end{eqnarray*}
\\\\
\noindent
\textit{Proof.} Let $\theta$ and $\phi$ be the functions from $\mathbb{R}_+$ to $\mathbb{R}$ defined as $\theta (t)= 2q\beta^{-1}_3\psi(t) t$ and $\phi(t) = 2q^{-1}\beta^{-1}_1\left(\xi(t) - b_0\right) - 2\beta^{-1}_3\psi(t)t $. Using the affine equivariance property, we deduce from the proof of Corollary 2 in Lopuha\"{a} (1997) (see p. 235)
that:
\begin{eqnarray*}
\widetilde{H}_{n} &=& V^{1/2}\bigg( - \frac{1}{ \sqrt{n}} \sum_{i=1}^{n}\bigg\{ \frac{\theta(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}})}{\left\|V^{-1/2}X^{(i)}\right\|^2_{\mathcal{X}}} \left(V^{-1/2}X^{(i)}\right)\otimes\left(V^{-1/2}X^{(i)}\right)\\
&& + 2 \left( \frac{\xi(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}) - b_0}{q\beta_1} - \frac{\psi(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}})\left\| V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}}{\beta_3}\right) \mathbb{I} \bigg\} + o_P\left(1\right) \bigg)V^{1/2} \\
&=& - \frac{1}{ \sqrt{n}} \sum_{i=1}^{n}\bigg\{ \frac{\theta(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}})}{\left\|V^{-1/2}X^{(i)}\right\|^2_{\mathcal{X}}} X^{(i)}\otimes X^{(i)}\\
&& + 2 \left( \frac{\xi(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}) - b_0}{q\beta_1} - \frac{\psi(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}})\left\| V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}}{\beta_3}\right) V \bigg\} + o_P\left(1\right)\\
&=& - \widehat{W}_n + o_P\left(1\right)
\end{eqnarray*}
\noindent
where $\widehat{W}_n = n^{-1/2}\sum_{i=1}^{n} \mathcal Z_i ,$ with
\begin{eqnarray*}
\mathcal Z_i &=& \frac{\theta(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}})}{\left\|V^{-1/2}X^{(i)}\right\|^2_{\mathcal{X}}} X^{(i)}\otimes X^{(i)} + \phi\left(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}\right)\,V.
\end{eqnarray*}
\noindent
Slutsky's theorem allows us to
conclude that $\widetilde{H}_{n}$ has the same limiting distribution as $-\widehat{W}_n ,$ which can be obtained by using the central limit theorem. For doing that, we first have to check that $\mathbb{E}(\mathcal Z_i) = 0$ and $\mathbb{E}\left(\Vert \mathcal Z_i\Vert^2\right)<+\infty$, where $\Vert\cdot\Vert$ is the norm on operators induced by the inner product $<A,B>=\textrm{tr}(AB^\ast)$. For proving this last property we consider the inequality
\begin{eqnarray*}
\mathbb{E}\left(\Vert \mathcal Z_i\Vert^2\right)&\leq&2\mathbb{E}\bigg[\bigg(\frac{\theta(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}})}{\left\|V^{-1/2}X^{(i)}\right\|^2_{\mathcal{X}}}\bigg)^2\Vert X^{(i)}\otimes X^{(i)}\Vert^2\bigg]\\
& &+2\Vert V\Vert^2\,\mathbb{E}\bigg( \phi\left(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}\right)^2\bigg)\\
&=&2\mathbb{E}\bigg[\bigg(\frac{\theta(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}})}{\left\|V^{-1/2}X^{(i)}\right\|^2_{\mathcal{X}}}\bigg)^2\Vert X^{(i)}\Vert^4_\mathcal{X}\bigg]\\
& &+2\Vert V\Vert^2\,\mathbb{E}\bigg( \phi\left(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}\right)^2\bigg).
\end{eqnarray*}
Assumption $(\mathscr{A}_4)$ implies that there exists $C>0$ such that $\sup_{t\in\mathbb{R}_+}\left(\theta(t)/t^2\right)\leq C$. On the other hand, from the proof of Corollary 2 in Lopuha\"{a} (1997) (see p. 236) the functions $t \mapsto \xi(t)$ and $t \mapsto \psi(t)t$ are \ bounded; then, $\phi$ is also bounded and
$\mathbb{E}\left( \phi\left(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}\right)^2\right)< +\infty$ . Hence
\begin{eqnarray*}
\mathbb{E}\left(\Vert \mathcal Z_i\Vert^2\right)&\leq&2C^2\mathbb{E}\bigg(\Vert X^{(i)}\Vert^4_\mathcal{X}\bigg)+2\Vert V\Vert^2\,\mathbb{E}\bigg( \phi\left(\left\|V^{-1/2}X^{(i)}\right\|_{\mathcal{X}}\right)^2\bigg)
\end{eqnarray*}
and since $\mathbb{E}\bigg(\Vert X^{(i)}\Vert^4_\mathcal{X}\bigg)<+\infty$, we deduce that $\mathbb{E}\left(\Vert \mathcal Z_i\Vert^2\right)<+\infty$. Putting
$Y^{(i)}=V^{-1/2}X^{(i)},$ we have
\begin{eqnarray}\label{ezi}
\mathbb{E}\left(\mathcal Z_i\right) &=& V^{1/2}\mathbb{E}\left( \frac{\theta\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)}{\left\|Y^{(i)}\right\|^2_{\mathcal{X}}} Y^{(i)}\otimes Y^{(i)} + \phi\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)\, \mathbb{I} \right)V^{1/2} \ \ \
\end{eqnarray}
\noindent
and since $Y^{(i)}$ has a spherical distribution, from equation (4) in Lopuha\"{a} (1997) (see p. 222) we obtain
$
\mathbb{E}\left(\xi\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right) - b_0\right) = 0$.
\noindent
Further$,$ we have
\begin{eqnarray*}
\mathbb{E}\left(\theta\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)\right) &=& 2q\beta^{-1}_3\mathbb{E}\left(\psi\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)\left\|Y^{(i)}\right\|_{\mathcal{X}}\right).
\end{eqnarray*}
\noindent
Therefore (\ref{ezi}) becomes:
\begin{eqnarray}\label{ezi2}
\mathbb{E}\left(\mathcal Z_i\right) &=& V^{1/2}\left(\mathbb{E}\left( \frac{2q\beta^{-1}_3\psi\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)}{\left\|Y^{(i)}\right\|_{\mathcal{X}}} Y^{(i)}\otimes Y^{(i)}\right) - \frac{1}{q}\mathbb{E}\left(\theta\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)\right) \mathbb{I} \right)V^{1/2}.
\end{eqnarray}
\noindent
From Lemma 1 in Lopuha\"{a} (1997) (see p. 221) we have:
\begin{eqnarray*}
\mathbb{E}\left( \frac{2q\beta^{-1}_3\psi\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)}{\left\|Y^{(i)}\right\|_{\mathcal{X}}} Y^{(i)}\otimes Y^{(i)}\right) &=& \frac{1}{q}\mathbb{E}\left( \frac{2q\beta^{-1}_3\psi\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)}{\left\|Y^{(i)}\right\|_{\mathcal{X}}} \left\|Y^{(i)}\right\|^2_{\mathcal{X}}\right)\mathbb{I}\\
&=& \frac{1}{q}\mathbb{E}\left(2q\beta^{-1}_3\psi\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right) \left\|Y^{(i)}\right\|_{\mathcal{X}}\right)\mathbb{I}\\
&=& \frac{1}{q}\mathbb{E}\left(\theta\left(\left\|Y^{(i)}\right\|_{\mathcal{X}}\right)\right)\mathbb{I}.
\end{eqnarray*}
\noindent
Then, (\ref{ezi2}) implies $\mathbb{E}\left(\mathcal Z_i\right)= 0$. Now, using the central limit theorem we conclude that $-\widehat{W}_n$
converges in distribution, as $n \rightarrow + \infty,$ to a normal distribution
$N(0, \Lambda)$ in $\mathcal L(\mathcal X),$ \ where
$\Lambda$ is the covariance operator of
\begin{eqnarray*}
\mathcal Z &=& - \frac{\theta(\left\|V^{-1/2}X\right\|_{\mathcal{X}})}{\left\|V^{-1/2}X\right\|^2_{\mathcal{X}}} X\otimes X\\
&& - 2 \left( \frac{\xi(\left\|V^{-1/2}X\right\|_{\mathcal{X}}) - b_0}{q\beta_1} - \frac{\psi(\left\|V^{-1/2}X\right\|_{\mathcal{X}})\left\| V^{-1/2}X\right\|_{\mathcal{X}}}{\beta_3}\right) V \\
&=& -2q\beta^{-1}_3\psi\left(\left\|V^{-1/2}X\right\|_{\mathcal{X}}\right)\left\|V^{-1/2}X\right\|^{-1}_{\mathcal{X}}X\otimes X\\
&& - 2 \left( \frac{\xi(\left\|V^{-1/2}X\right\|_{\mathcal{X}}) - b_0}{q\beta_1} - \frac{\psi(\left\|V^{-1/2}X\right\|_{\mathcal{X}})\left\| V^{-1/2}X\right\|_{\mathcal{X}}}{\beta_3}\right) V.
\end{eqnarray*}
\noindent
\subsection{Proof of the theorem}
\noindent
Arguing as in the proof of Theorem 3.2 in Nkiet (2017) (see p. 203), we have the equality $ \sqrt{n}\left(\widetilde{T}_n - T\right)= \widehat{\varphi}_n\left(\widetilde{H}_{n} \right)$, where $\widehat{\varphi}_n$ is the random operator from \ $\mathcal L(\mathcal{X})$ to itself
defined by:
\begin{eqnarray*}
\widehat{\varphi}_{n}(A)&=& -f(\widetilde{V}_{n})^{-1}f(A)\left(f(\widetilde{V}_{n})^{-1/2} + \mathbb{I}\right)^{-1}g(\widetilde{V}_{n})f(\widetilde{V}_{n})^{-1/2} + g(A)f(\widetilde{V}_{n})^{-1/2}\\
& &-g(V)f(\widetilde{V}_{n})^{-1}f(A)\left(f(\widetilde{V}_{n})^{-1/2} + \mathbb{I}\right)^{-1}.
\end{eqnarray*}
Considering the linear map $\varphi$ from $\mathcal L(\mathcal{X})$ to itself defined as $\varphi(A)=- \frac{1}{2}f(A)g(V) + g(A) - \frac{1}{2}g(V)f(A) $
and denoting by \ $\left\|\cdot\right\|_{\infty}$ \ and \ $\left\|\cdot\right\|_{\infty \infty}$ the norm of \ $\mathcal L(\mathcal{X})$ \ and \ $\mathcal L(\mathcal L(\mathcal{X})),$ respectively defined by
$\left\|A\right\|_{\infty}=\sup_{x \in \mathcal{X}-\{0\}}\left\|Ax\right\|_\mathcal{X}/\left\|x\right\|_\mathcal{X}$ and $\left\|Q\right\|_{\infty \infty}=\sup_{B \in \mathcal L(\mathcal{X})-\{0\}}\left\|Q(B)\right\|_{\infty}/\left\|B\right\|_{\infty} ,$ we have:
\begin{eqnarray}\label{in1}
\left\|\widehat{\varphi}_n(\widetilde{H}_{n} )-\varphi(\widetilde{H}_{n} )\right\|_{\infty}&\leq&\left\|\widehat{\varphi}_n-\varphi\right\|_{\infty\infty}\left\|\widetilde{H}_{n} \right\|_{\infty}
\end{eqnarray}
\noindent
and
\begin{eqnarray}\label{in2}
\left\|\widehat{\varphi}_n-\varphi\right\|_{\infty\infty}&\leq& \Bigg(\left\|f\right\|_{\infty \infty}\left\|\left(f(\widetilde{V}_{ n})^{-1/2} + \mathbb{I}\right)^{-1}g(\widetilde{V}_{n})f(\widetilde{V}_{n})^{-1/2}\right\|_{\infty}\nonumber\\
&&+ \left\|f\right\|_{\infty \infty}\left\|g(V)\right\|_{\infty} + \left\|g\right\|_{\infty \infty}\nonumber\\
&&+ \left\|f\right\|_{\infty \infty}\left\|g(V)\right\|_{\infty}\left\|\left(f(\widetilde{V}_{n})^{-1/2} + \mathbb{I}\right)^{-1}\right\|_{\infty} \Bigg)\left\|f(\widetilde{V}_{n})^{-1/2} - \mathbb{I}\right\|_{\infty}\nonumber\\
&&+ \Bigg(\left\|f\right\|_{\infty \infty}\left\|g(\widetilde{V}_n)f(\widetilde{V}_{n})^{-1/2}\right\|_{\infty}\nonumber\\
&&+ \left\|f\right\|_{\infty \infty}\left\|g(V)\right\|_{\infty}\Bigg)\left\|\left(f(\widetilde{V}_{n})^{-1/2} + \mathbb{I}\right)^{-1} - \frac{1}{2}\mathbb{I} \right\|_{\infty} \nonumber\\
&&+ \frac{1}{2}\left\|f\right\|_{\infty \infty}\left\|g\right\|_{\infty \infty}\left\|f(\widetilde{V}_{n})^{-1/2}\right\|_{\infty}\left\|\widetilde{V}_{n} - V \right\|_{\infty} \ \ \
\end{eqnarray}
Lemma 1 implies that $\widetilde{V}_n$ converges in probability to
$V,$ as $n \rightarrow +\infty$.
Then, using the continuity of the maps $f$, $g$, $ A\mapsto A^{-1}$ and $A\mapsto A^{-1/2}$, we
deduce that $f(\widetilde{V}_n)$ (resp. $f(\widetilde{V}_n)^{-1}$; resp. $f(\widetilde{V}_n)^{-1/2}$; resp. $g(\widetilde{V}_n)$) converges
in probability, as $n \rightarrow +\infty,$ to \ $\mathbb{I}$ (resp. \ $\mathbb{I};$ resp. \ $\mathbb{I};$ resp. \ $g(V)$).
Consequently
from (\ref{in1}) and (\ref{in2}) we deduce that $\widehat{\varphi}_n(\widetilde{H}_{n} )-\varphi(\widetilde{H}_{n} )$ converges
in probability to $0$ as $n\rightarrow+\infty$.
Slutsky's theorem allows us to conclude that
$\widehat{\varphi}_n(\widetilde{H}_{n} )$ and $\varphi(\widetilde{H}_{n} )$ both converge to the same distribution, that is the distribution of $\varphi\left(M\right)$, where $M$ denotes the limit in distribution of $\widetilde{H}_{n}$ given in Lemma 1. Since $\varphi$ is linear this distribution is the normal
distribution with mean equal to $0$ and covariance operator equal to that of
the random variable:
\begin{eqnarray*}
Z_s &=& \varphi\left(\mathcal Z\right) = \frac{-2q\beta^{-1}_3\psi\left(\left\|V^{-1/2}X\right\|_{\mathcal{X}}\right)}{\left\|V^{-1/2}X\right\|_{\mathcal{X}}} \varphi\left(X\otimes X\right) - \phi\left(\left\|V^{-1/2}X\right\|_{\mathcal{X}}\right)\,\varphi\left(V\right).
\end{eqnarray*}
\noindent
Besides
\begin{eqnarray*}
\varphi\left(X\otimes X\right)&=& \sum_{k=1}^K \sum_{\underset{\ell\neq k}{\ell=1}}^K \bigg\{- \frac{1}{2}\bigg(\tau_k^*\left(X_k\otimes X_k\right) V_{k\ell}\tau_\ell \ + \tau_\ell^*V_{\ell k}\left(X_k\otimes X_k\right)\tau_k\bigg)\\
& & + \tau_k^* \left(X_\ell\otimes X_k\right)\tau_\ell\bigg\} ,
\end{eqnarray*}
and from $f(V) = \mathbb{I},$ it follows:
$
\varphi(V)=g(V) - g(V_s) = g(V) - g(V)= 0$.
Thus
\begin{eqnarray*}
Z_s &=& \frac{-2q\beta^{-1}_3\psi\left(\left\|V^{-1/2}X\right\|_{\mathcal{X}}\right)}{\left\|V^{-1/2}X\right\|_{\mathcal{X}}} \sum_{k=1}^K \sum_{\underset{\ell\neq k}{\ell=1}}^K \bigg\{- \frac{1}{2}\bigg(\tau_k^*\left(X_k\otimes X_k\right) V_{k\ell}\tau_\ell + \tau_\ell^*V_{\ell k}\left(X_k\otimes X_k\right)\tau_k\bigg)\\
&& + \tau_k^* \left(X_\ell\otimes X_k\right)\tau_\ell\bigg\} .
\end{eqnarray*}
\section{Proof of Theorem 4.2}
\noindent
Under $\mathscr{H}_0$ we have $T = 0$ and, therefore, $\sqrt{n}\widetilde{T}_{n} = \sqrt{n}\left(\widetilde{T}_{n} - T\right)$. Consequently, from Theorem 4.1 we deduce that $\sqrt{n}\widetilde{T}_{n}$ converges in distribution, as $n \rightarrow +\infty$, to a random variable $U$ which has a normal distribution in $\mathcal L(\mathcal X)$ with mean 0 and covariance operator equal to that of $Z_s$. Since the map \ $A \mapsto\sum_{k=2}^{K}\sum_{\ell=1}^{k-1}\textrm{tr}\left( \pi_{k\ell}\left(A\right)\pi_{k\ell}\left(A\right)^*\right)$ is continuous, we deduce that $n\widetilde{S}_{n}$ converges in distribution, as $n \rightarrow +\infty$,
to
\begin{eqnarray*}
\mathcal Q &=& \sum_{k=2}^{K}\sum_{\ell=1}^{k-1}\textrm{tr}\left(\pi_{k\ell}\left( U\right)\pi_{k\ell}\left( U \right)^*\right).
\end{eqnarray*}
\noindent
On the other hand, Theorem 4.1 in Nkiet (2017) shows that $\mathcal Q= \mathbb{W}^T\mathbb{W}$ where $\mathbb{W}$ is a random variable having a centered normal distribution in $\mathbb R^d$ with covariance matrix $\Theta$ defined in Nkiet (2017) with
\begin{eqnarray*}
\gamma_{ijpt}^{k\ell,ru} &=& < \mathbb{E}\bigg(\pi_{k\ell}\left( U \right)\widetilde{\otimes}\pi_{ru}\left( U \right)\bigg)\left(e_j^{(\ell)}\otimes e_i^{(k)}\right) , e_t^{(u)}\otimes e_p^{(r)} >\\
&=& < \mathbb{E}\bigg(\pi_{k\ell}\left( Z_s \right)\widetilde{\otimes}\pi_{ru}\left( Z_s \right)\bigg)\left(e_j^{(\ell)}\otimes e_i^{(k)}\right) , e_t^{(u)}\otimes e_p^{(r)} >
\end{eqnarray*}
where $\widetilde{\otimes}$ denotes the tensor product related to the inner product of operators
$< A , B > = \textrm{tr} (AB^*)$ and \ $\left\{e^{(k)}_i\right\}_{1\leq i \leq p_k}$ is an orthonormal basis of $\mathcal X_k$. Since
under $\mathscr{ H}_0$ we have $V = \mathbb{I}$, $Z_s$ becomes
\begin{eqnarray*}
Z_s &=& \displaystyle\frac{-2q\beta^{-1}_3\psi\left(\left\|X\right\|_{\mathcal X}\right)}{\left\|X\right\|_{\mathcal X}}\displaystyle \sum_{k=1}^K \sum_{\underset{\ell\neq k}{\ell=1}}^K \tau_k^* \left(X_\ell\otimes X_k\right)\tau_\ell.
\end{eqnarray*}
\noindent
Thus
\begin{eqnarray*}
\pi_{k\ell}\left( Z_s \right) &=& \frac{-2q\beta^{-1}_3\psi\left(\left\|X\right\|_{\mathcal X}\right)}{\left\|X\right\|_{\mathcal X}}\left(X_\ell\otimes X_k\right)
\end{eqnarray*}
and
\begin{eqnarray*}
\gamma_{ijpt}^{k\ell,ru} &=& -2q\beta^{-1}_3\mathbb{E}\bigg(\displaystyle\frac{\psi\left(\left\|X\right\|_{\mathcal X}\right)}{\left\|X\right\|_{\mathcal X}}< \left(\left( X_\ell\otimes X_k\right)\widetilde{\otimes}\left( X_u\otimes X_r\right)\right)\left(e_j^{(\ell)}\otimes e_i^{(k)}\right) , e_t^{(u)}\otimes e_p^{(r)}>\bigg)\\
&=& -2q\beta^{-1}_3\mathbb{E}\bigg(\displaystyle\frac{\psi\left(\left\|X\right\|_{\mathcal X}\right)}{\left\|X\right\|_{\mathcal X}}< X_\ell\otimes X_k , e_j^{(\ell)}\otimes e_i^{(k)}> < X_u\otimes X_r , e_t^{(u)}\otimes e_p^{(r)}>\bigg)\\
&=& -2q\beta^{-1}_3\mathbb{E}\bigg(\displaystyle\frac{\psi\left(\left\|X\right\|_{\mathcal X}\right)}{\left\|X\right\|_{\mathcal X}}< X_k , e_i^{(k)}>_k < X_r , e_p^{(r)}>_r < X_\ell , e_j^{(\ell)}>_\ell < X_u , e_t^{(u)}>_u \bigg)\\
&=& -2q\beta^{-1}_3\mathbb{E}\bigg(z\left(\left\|X\right\|^2_{\mathcal X}\right)< X_k , e_i^{(k)}>_k < X_r , e_p^{(r)}>_r < X_\ell , e_j^{(\ell)}>_\ell < X_u , e_t^{(u)}>_u \bigg),
\end{eqnarray*}
\noindent
where \ $z: t \mapsto \displaystyle\frac{\psi\left(\displaystyle\sqrt{t}\right)}{\displaystyle\sqrt{t}}$.
Therefore$,$ if $(k, \ell) = (r, u)$ and $(i, j) = (p, t)$ \ with \ $\ell \neq k$ and $u \neq r,$ then
\begin{eqnarray*}
\gamma_{ijpt}^{k\ell,ru} &=& -2q\beta^{-1}_3\mathbb{E}\bigg(z\left(\left\|X\right\|^2_{\mathcal X}\right)< X_k , e_i^{(k)}>_k^2 < X_\ell , e_j^{(\ell)}>_\ell^2\bigg)
\end{eqnarray*}
\noindent
and from Lemma 1 in Lopuha\"{a} (1997) we deduce that
\begin{eqnarray*}
\gamma_{ijpt}^{k\ell,ru} &=& \displaystyle\frac{-2q\beta^{-1}_3}{q(q + 1)}\mathbb{E}\bigg(z\left(\left\|X\right\|^2_{\mathcal X}\right)\left\|X\right\|_{\mathcal X}^4\bigg)=\frac{-2\beta^{-1}_3}{(q + 1)}\mathbb{E}\bigg(\psi\left(\left\|X\right\|_{\mathcal X}\right)\left\|X\right\|_{\mathcal X}^3\bigg).
\end{eqnarray*}
\noindent
Otherwise, if one of the conditions $(k,\ell) = (r, u)$ and $(i, j) = (p, t)$ with \ $\ell \neq k$ and $u \neq r$
does not hold then $\gamma_{ijpt}^{k\ell,ru} = 0$ . We deduce that
\begin{eqnarray*}
\Theta &=& \displaystyle\frac{-2\beta^{-1}_3}{(q + 1)}\mathbb{E}\bigg(\psi\left(\left\|X\right\|_{\mathcal X}\right)\left\|X\right\|_{\mathcal X}^3\bigg)I_d
\end{eqnarray*}
\noindent
where $I_d$ is the $d\times d$ identity matrix. Thus$,$ $\mathcal Q= \displaystyle\frac{-2\beta^{-1}_3}{(q + 1)}\mathbb{E}\bigg(\psi\left(\left\|X\right\|_{\mathcal X}\right)\left\|X\right\|_{\mathcal X}^3\bigg)\mathcal Q'$ \ where $\mathcal Q'$ is a random variable with distribution equal to $\chi_d^2.$
In this paper, we have proposed a visual approach for the recognition of dynamic 3D hand gestures through the use of convolutional neural network models. The pipeline that we propose acquires data (from file or in real-time) from a Leap Motion sensor, performs a representation in a 3D virtual space, and extracts one or more 2D views from it. These images, which condense the temporal information in the form of fingertip traces with varying color intensity, are then fed to a CNN model, first in the training phase, then in real-time for the inference phase. The two models trained on the LMDHG dataset achieve accuracies above 91\% and 92\% respectively, while the model trained on the new dataset proposed in this paper reaches an accuracy above 98\%.
Future work will have the primary objective of enriching the new dataset, both in terms of the number of images, possibly by joining it with the LMDHG dataset after making the appropriate modifications and re-labeling, and in terms of the number of recognized gestures.
In addition, the performance of the real-time pipeline will be validated with a benchmark extended to the largest possible number of users.
\section{Experiments}\label{Experiments}
In this section, we present the experimental results obtained by processing the LMDHG dataset, represented in the form of images from our 3D visualizer. The main results concern the training of three distinct models through (i) images depicting a single view of the hands from above (see sub-section \ref{EvaluationSingleView}); (ii) images obtained by stitching two views together (from the top and from the right) to provide further information to the classifier (see sub-section \ref{EvaluationDoubleView}); and (iii) a new dataset that we publicly release at this URL\footnote{\url{https://imaticloud.ge.imati.cnr.it/index.php/s/YNRymAvZkndzpU1}} containing about 2000 new gestures performed more homogeneously, with less noise and with fewer mislabeling occurrences than in the LMDHG dataset (see sub-section \ref{EvaluationNewDataset}). Indeed, we deem this dataset richer and more suitable for the initial stages of training of CNN models, when few samples are available and it is important that the signal-to-noise ratio of the information used for training is high.
\subsection{Training of the models}\label{TrainingDescription}
The training took place using Jupyter Notebook and the popular deep learning library, Fast.ai \cite{fastai}, based on PyTorch. The hardware used was a GPU node of the new high-performance EOS cluster located within the University of Pavia. This node has a dual Intel Xeon Gold 6130 processor (16 cores, 32 threads each) with 128 GB RAM and 2 Nvidia V100 GPUs with 32 GB RAM.
The training was performed on 1920x1080 resolution images rendered by our 3D visualizer, properly classified in directories according to the original LMDHG dataset and divided into training and validation sets, again following the indications of the original paper \cite{boulahia2017dynamic}.
As previously mentioned, the model chosen for training is a pre-trained version of a ResNet-50 architecture. Fast.ai's convenient APIs allow downloading pre-trained architectures and weights in a simple and automatic way. Fast.ai also automatically modifies the architecture so that the number of neurons in the output layer corresponds to the number of classes of the current problem, initializing the new layer with random weights.
The training was performed using the progressive resizing technique, i.e. performing several rounds of training using the images of the dataset at increasing resolutions, to speed up the early training phases, to obtain immediate feedback on the potential of the approach, and to make the model robust to images at different resolutions (i.e. the model generalizes better on the problem). The specific section in \cite{cellular_super_resolution} explains the concept of progressive resizing very well. For our particular problem, we have chosen the resolutions of 192, 384, 576, 960, 1536 and 1920 px (i.e. 1, 2, 3, 5, 8 and 10/10 of the original 1920x1080 px resolution).
Each training round at a given image resolution is divided into two phases (\textit{a} = frozen, \textit{b} = unfrozen), each consisting of 10 training epochs. In phase \textit{a}, the weights of all the layers of the neural network except those of the new output layer are frozen and therefore are not trained (they are used only in the forward pass). In phase \textit{b}, performed with a lower learning rate (LR), typically of one or two orders of magnitude less\footnote{Fast.ai's \textit{Learner} class has a convenient \textit{lr\_find()} method that allows to find the best learning rate with which to train a model in its current state.}, all layers, even the convolutional ones, are trained to improve the network globally.
As the neural network optimizer, we chose Ranger, as it combines two of the best state-of-the-art optimizers, RAdam \cite{RAdam} (Rectified Adam) and Lookahead \cite{Lookahead}, in a single optimizer. Ranger corrects some inefficiencies of Adam \cite{Adam}, such as the need for an initial warm-up phase, and adds new features regarding the exploration of the loss landscape, keeping two sets of weights, one updated faster and one updated more slowly, and interpolating between them to improve the convergence speed of the gradient descent algorithm.
Once all the training rounds were completed, the model with the best accuracy was selected for the validation phase. When checkpoints from different training rounds reached the same accuracy, the model generated by the earliest round (i.e. trained with the lowest image resolution) was selected and reloaded for validation. This has a substantial advantage in the inference phase, since smaller images are classified faster.
All the code and jupyter notebooks described in this section are available at the following URL\footnote{https://github.com/aviogit/dynamic-hand-gesture-classification}.
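For reference, the core of the training procedure can be summarized by the following schematic Python sketch in the style of the fastai v2 API; the dataset path, the random split and the learning rates are illustrative simplifications (in particular, the actual experiments use the sequence-based split described in the next sub-section), and the notebooks in the repository above remain the authoritative version.
\begin{verbatim}
from fastai.vision.all import *

path = Path('datasets/lmdhg-images')           # hypothetical location
sizes = [192, 384, 576, 960, 1536, 1920]       # progressive resizing
learn = None
for size in sizes:
    dls = ImageDataLoaders.from_folder(path, valid_pct=0.3, seed=42,
                                       item_tfms=Resize(size))
    if learn is None:                          # pre-trained ResNet-50
        learn = cnn_learner(dls, resnet50, metrics=accuracy,
                            opt_func=ranger)
    else:
        learn.dls = dls                        # swap in larger images
    learn.freeze()                             # phase (a): head only
    learn.fit_one_cycle(10, 1e-3)
    learn.unfreeze()                           # phase (b): all layers
    learn.fit_one_cycle(10, 1e-5)              # lower learning rate
    learn.save(f'resnet50-{size}px')
\end{verbatim}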
\subsection{Evaluation on the LMDHG gesture dataset - single view}\label{EvaluationSingleView}
To allow a further comparison with the method provided by Boulahia et al. \cite{boulahia2017dynamic}, we split the dataset according to their experiments, i.e. by using sequences from 1 to 35 of the dataset to train the model (779 samples representing $\sim70\%$ of the dataset) and sequences from 36 to 50 to test it (355 samples representing $\sim30\%$ of the dataset).
With this partition, our approach reaches an accuracy of 91.83\%, outperforming the 84.78\% achieved by Boulahia et al.
From the confusion matrix illustrated in \figurename\ \ref{fig:ConfMatrixOneView}, we can notice that most of the classes are well recognized with an accuracy over 93\%.
Misclassifications occur when the paired actions are quite similar. For example,
the gestures \textit{Point to} and \textit{Rotate}, which are recognized with an accuracy of 80\% and 73\% respectively, are confused with the noisiest class \textit{Rest}; \textit{Point to with two hands}, recognized with an accuracy of 73\%, is confused with the close class \textit{Point to}; while \textit{Shake with two hands}, recognized with an accuracy of 80\%, is reasonably confused with the two close classes \textit{Shake} and \textit{Shake down}.
\begin{figure}[htbp]
\centering
\includegraphics[height=0.45\textheight]{Figures/ConfMatrixOneView.png}
\caption{Confusion matrix obtained using a single view.}
\label{fig:ConfMatrixOneView}
\end{figure}
For a comprehensive evaluation, in \figurename\ \ref{fig:TopLossesOneView} we show the top losses for our model. The top losses plot shows the incorrectly classified images on which our classifier errs with the highest loss. In addition to the most misclassified classes deduced also from the confusion matrix analysis (i.e. \textit{Point to}, \textit{Rotate} and \textit{Shake with two hands}), from the analysis of the top losses plot, we can pinpoint a few mislabeled samples.
For example, in \figurename\ \ref{fig:TopLossesOneView} it can be seen that the third sample (prediction: \textit{Rest}, label: \textit{Rotate}) does not actually represent a \textit{Rotate} at all. The same is valid for \textit{Draw Line/Zoom}, \textit{Scroll/Rest} and \textit{Point to/Rest} samples, and having so few samples in the dataset, these incorrectly labeled samples lower the final accuracy of the model and prevent it from converging towards the global optimum.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figures/TopLossesOneView.png}
\caption{Top losses plot obtained using a single view.}
\label{fig:TopLossesOneView}
\end{figure}
\newpage
\subsection{Evaluation on the LMDHG gesture dataset - double view}\label{EvaluationDoubleView}
To reduce these misclassifications, we trained a new model by increasing the amount of information available in each individual image: in this case, in addition to the top view, we stitch a view from the right into the final image. This approach allows the classifier (exactly like a human being) to disambiguate between gestures that have a strong informative content on the spatial dimension implicit in the top view (such as the \textit{Scroll} gesture, for example). Some example images are shown in \figurename\ \ref{fig:gesturePatternDoubleView}.
\begin{figure}[htbp]
\centering
\subfigure[Catching\label{DoubleCatching}]{
\includegraphics[width=0.35\linewidth,angle =90]{Figures/DoubleCatching.png}}\quad\quad
\subfigure[Rotating\label{DoubleRotating}]{
\includegraphics[width=0.35\linewidth,angle =90]{Figures/DoubleRotating.png}}
\subfigure[Scroll\label{DoubleScroll}]{
\includegraphics[width=0.35\linewidth,angle =90]{Figures/DoubleScroll.png}}\quad\quad
\subfigure[Shaking\label{DoubleShaking}]{
\includegraphics[width=0.35\linewidth,angle =90]{Figures/DoubleShaking.png}}
\subfigure[Draw line\label{DoubleLine}]{
\includegraphics[width=0.35\linewidth,angle =90]{Figures/DoubleLine.png}}\quad\quad
\subfigure[Zoom\label{DoubleZoom}]{
\includegraphics[width=0.35\linewidth,angle =90]{Figures/DoubleZoom.png}}
\caption{Examples of 2D hand gesture patterns obtained using a double view.}
\label{fig:gesturePatternDoubleView}
\end{figure}
Using this pattern representation, the accuracy of our method reaches 92.11\%.
This model performs better than the one trained only with top view images, but the improvement is not as significant as we expected. The main reason is that the LMDHG dataset is challenging both in terms of noise and mislabeled samples, and because of the varying semantic interpretations of the gestures, which were collected from different users.
\figurename\ \ref{fig:pointPattern} shows different examples of the \textit{Point to} gesture performed by several persons and used to feed the neural network. As can be seen, it is objectively difficult, even for a human being, to distinguish shared characteristics among all the images that univocally indicate that they all belong to the \textit{Point to} class.
\begin{figure}[htbp]
\subfigure[\label{Point1}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point1.png}}\quad
\subfigure[\label{Point2}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point2.png}}\quad
\subfigure[\label{Point3}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point3.png}}
\subfigure[\label{Point4}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point4.png}}\quad
\subfigure[\label{Point5}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point5.png}}\quad
\subfigure[\label{Point6}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point6.png}}
\subfigure[\label{Point7}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point7.png}}\quad
\subfigure[\label{Point8}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point8.png}}\quad
\subfigure[\label{Point9}]{
\includegraphics[width=0.3\linewidth]{Figures/Point/point9.png}}
\caption{Examples of \textit{point} patterns present in the training set.}
\label{fig:pointPattern}
\end{figure}
\subsection{Evaluation on our new dataset - single view}\label{EvaluationNewDataset}
With the aim of reducing this type of occurrence, we decided to create a new, more balanced dataset\footnote{The images obtained from our new dataset and the ones for the LMDHG dataset are available at this URL: https://github.com/aviogit/dynamic-hand-gesture-classification-datasets}, with more samples per class and with gestures performed in a more homogeneous and less noisy way. The dataset has around 2000 gesture images, sampled every 5 seconds, and each class has around 100 samples. The \textit{Rest} class now contains only images of hands that are mostly still. Two further classes have been added: the \textit{Blank} class, which contains only traces of gestures that are distant in time (or no gesture at all), and the \textit{Noise} class, which represents all the gestures not belonging to any other class. The dataset is provided both in the form of images and ROS \textit{bags}. The latter can be replayed (in a very similar way to a ``digital tape'' of the acquisition) through ROS' \textit{rosbag play} command, and this will re-publish all the messages captured during the acquisition (skeleton + depth images), allowing one to rerun the pipeline, possibly changing the processing parameters (e.g. displaying the gestures in a different way or changing the sampling window to improve the real-time acquisition).
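As an example of how the bags can be consumed programmatically, instead of being replayed with \textit{rosbag play}, the following sketch reads the recorded messages back with the standard ROS Python API; the bag file name and the topic name are hypothetical and should be replaced by the ones actually stored in the bag (they can be listed with \textit{rosbag info}).
\begin{verbatim}
import rosbag

# '/leap_motion/skeleton' is a hypothetical topic name: inspect the
# bag with `rosbag info` to find the topics actually recorded.
with rosbag.Bag('gesture_session.bag') as bag:
    for topic, msg, t in bag.read_messages(topics=['/leap_motion/skeleton']):
        # msg holds the hand skeleton at time t; it can be fed to the
        # 3D visualizer or to an offline re-processing pipeline here.
        print(t.to_sec(), topic)
\end{verbatim}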
Using this new dataset, we then trained a new model, using a 70\%/30\% random split (1348 images for the training set, 577 images for the validation set). The overall accuracy of the model is 98.78\%. We report in \figurename\ \ref{fig:ConfMatrixDoubleView} the confusion matrix obtained from this model.
\begin{figure}[htbp]
\centering
\includegraphics[height=0.45\textheight]{Figures/ConfMatrixOurDS.png}
\caption{Confusion matrix obtained using the new dataset.}
\label{fig:ConfMatrixDoubleView}
\end{figure}
\newpage
\subsection{Real-time application}
The real-time acquisition, visualization and classification pipeline has already been used extensively to acquire the new dataset proposed in this paper and for qualitative user tests, again with a sampling window set to 5 seconds. On a PC with an Nvidia GTX 770 GPU, the ResNet-50 model takes a few hundred milliseconds to perform inference on an image produced by the 3D visualizer, thus making the real-time approach usable on practically any machine. However, these tests do not yet have sufficient statistical significance and must therefore be extended to several participants before they can be published. This will be the subject of future work.
\section{Introduction}
Gesture recognition is an interesting and active research area whose applications are numerous and various, including, for instance, robotics, training systems, virtual prototyping, video surveillance, physical rehabilitation, and computer games.
This wide interest is due to the fact that hands and fingers are used to communicate and to interact with the physical world \cite{ahmad2019hand}; therefore, by analyzing human gestures it is possible to improve the understanding of non-verbal human interaction. This understanding lays the basis for the creation of more natural human-computer interaction, which is fundamental for the creation of immersive virtual environments with a high sense of presence.
Despite this popularity and interest, until a few years ago finger movements were difficult to acquire and characterize, especially without the use of sophisticated tracking devices, which usually turn out to be quite unnatural.
Indeed, there exist many methods trying to solve hand gesture recognition by using wearable devices \cite{dipietro2008survey,bourke2007evaluation,kevin2004trajectory}.
With the recent technology improvements, finger tracks can be digitally obtained relying only on RGB cameras, possibly enhanced with depth information.
In this manner, it is possible to abstract human hands by adopting two main representations: 3D model-based and appearance-based \cite{rautaray2015vision}. Generally, 3D model-based representations are deduced by exploiting depth information, but there are methods trying to reconstruct 3D hand representations using only RGB data.
Hand gestures can be classified as \textit{static}, i.e. if no change occurs over time, or \textit{dynamic}, i.e. if several hand poses contribute to the final semantics of the gesture within an arbitrary time interval. So far, several works address static gestures, focusing on pre-defined gesture vocabularies, such as the recognition of the sign languages of different countries \cite{kumar20203d,ravi2019multi,kaluri2018optimized,kumar2018independent,mapari2016american,kuznetsova2013real}.
Even if dynamic gestures are not universal but vary in different countries and cultures, they are more natural and intuitive than the static ones.
Since tracking without gloves or controllers is more natural and efficient for users, in this paper we aim at defining a method for dynamic gesture recognition based on a 3D hand representation reconstructed from the Leap Motion sensor tracking.
Our method relies on deep learning techniques applied to the images obtained by plotting the positions of the hands acquired over time on a specific 2D plane, condensing the temporal information uniquely as traces left by the fingertips that fade towards a value of transparency (the alpha value) equal to zero as time passes. Compared to also drawing the traces of the other edges that make up the hand, we have found that this approach maximizes the information that can be condensed into a single image while keeping it understandable for humans.
For the training, a publicly available dataset presenting 1134 gestures \cite{boulahia2017dynamic,LMDHG_dataset} has been used. The first stage of the evaluation of the deep neural network has been carried out on a subset of 30\% of the available gestures, maintaining the split as presented in the original paper \cite{boulahia2017dynamic} and reaching an overall accuracy of 91.83\%. We also propose our own dataset with about 2000 new dynamic gesture samples, created following considerations on the balance, number of samples and noise of the original dataset. We will show that by using our approach and our dataset, it is possible to exceed 98\% accuracy in the recognition of dynamic hand gestures acquired through the Leap Motion sensor. Finally, we will briefly discuss the real-time setup and how it has already been successfully used to acquire the new dataset proposed in this paper and to perform some preliminary user tests.
\clearpage
The rest of the paper is organized as follows.
Section \ref{RelatedWorks} reviews the most pertinent related works. Section \ref{Method} and \ref{Experiments} detail the proposed method and show the results of the experimentation carried out respectively.
Finally, section \ref{Conclusions} ends the paper providing conclusions and future steps.
\section{Overview of the proposed approach}\label{Method}
Based on the assumption that natural human-computer interaction should be able to recognize not only predefined postures but also dynamic gestures, here we propose a method for the automatic recognition of gestures using images obtained from LM data. Our method uses state-of-the-art deep learning techniques, both in terms of the CNN architectures and the training and gradient descent methods employed.
In the following sub-sections, first we describe the problem of dynamic gesture recognition from images (Section \ref{problemFormulation}), then we illustrate the pipeline to create the required images and how we feed them to the neural network model (Section \ref{pipeline}). Finally, we introduce the LMDHG dataset adopted and the rationale that led us to use it (Section \ref{DatasetDescription}).
\subsection{Problem formulation}\label{problemFormulation}
Let $g_{i}$ be a dynamic gesture and $S = \{C_{h}\}_{h=1}^{N}$ a set of gesture classes, where $N$ identifies the number of classified gestures.
The variation of $g_{i}$ over time can be defined as:
\begin{equation}
G_{i} = \Big\{\mathcal{G}_{i}^{\tau} \Big\}_{\tau = 1}^{T_{i}},
\end{equation}
where $\tau \in [1, T_{i}]$ defines a certain instant in a temporal window of size $T_{i}$ and $\mathcal{G}_{i}^{\tau}$ represents the frame of $g_{i}$ at the time $\tau$.
Note that a gesture can be performed over a variable temporal window (depending on the gesture itself or on the user aptitude).
The dynamic hand gesture classification problem can be defined as finding the class $C_{h}$ where $g_{i}$ \textit{most likely} belongs to, i.e. finding the pair $(g_{i}, C_{h})$ whose probability distribution $\mathbb{P}(g_{i}, C_{h})$ has the maximum value $\forall h$.
Let $\Phi$ be a mapping that transforms the space and the temporal information associated with a gesture $g_{i}$ resulting into a single image defined as:
\begin{equation}
I_{i} = \Phi(G_{i}).
\end{equation}
With this representation, there exists a single $I_{i}$ for each gesture $g_{i}$, regardless of the temporal window size $T_{i}$.
This new representation encodes in a more compact manner the different instants $\tau$ of each gesture and represents the new data to be recognized and classified.
Then, the classification task can be redefined in finding whether an image $I_{i}$ belongs to a certain gesture class $C_{h}$, i.e. finding the pair $(I_{i}, C_{h})$ whose probability distribution $\mathbb{P}(I_{i}, C_{h})$ has the maximum value $\forall h$.
\subsection{Hand gesture recognition pipeline}\label{pipeline}
We propose a view-based approach able to describe the performed movement over time, whose pipeline is illustrated in \figurename \ref{fig:pipeline}.
As input, a user performs different gestures recorded as depth images by using a Leap Motion sensor (blue box). A 3D gesture visualization containing temporal information is created by using the joint positions of the 3D skeleton of the hands (magenta box) as obtained from the Leap Motion sensor. From the 3D environment, we create a 2D image by projecting the obtained 3D points on a view plane (green box). The created image is fed to the pre-trained convolutional neural network (yellow box), to whose output neurons (14, one for each gesture class to be recognized) a softmax function is applied, generating a probability distribution over the predicted classes (purple box). Finally, the gesture is labeled with the class that obtains the maximum probability value (orange box). In the following, the two main steps are described.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Figures/pipeline.png}
\caption{Pipeline of the proposed hand gesture recognition}
\label{fig:pipeline}
\end{figure}
\subsubsection{The 3D visualizer}
We used the VisPy\footnote{http://vispy.org} library to visualize the 3D data of the hands in a programmable 3D environment. The visualizer is able to acquire the skeleton data both from the files belonging to the LMDHG dataset (through the Pandas\footnote{https://pandas.pydata.org} library), and in real time using the Leap Motion SDK wrapped through the popular framework ROS (Robot Operating System) \cite{ros} which provides a convenient publish/subscribe environment as well as numerous other utility packages.
A 3D hand skeleton is created by exploiting the tracking data about each finger of the hand, the palm center, the wrist and the elbow positions. If at a certain time the whole or a part of a finger is not visible, the Leap Motion APIs allow the estimation of the finger positions relying on the previous observations and on the anatomical model of the hand.
Once the 3D joint positions are acquired, spatial and temporal information of each gesture movement are encoded by creating a 3D joint gesture image, where 3D points and edges are depicted in the virtual space for each finger. Here, the color intensity of the joints representing the fingertips changes at different time instants; specifically, recent positions ($\tau \sim T_{i}$) have more intense colors, while earlier positions ($\tau \sim 0$) have more transparent colors.
Finally, we create a 2D image by projecting the 3D points obtained at the last instant of the gesture on a view plane. In particular, we project the 3D fingertips of the hands on a plane corresponding to the top view, which represents hands in a ``natural'' way, as a human usually sees them.
\figurename \ref{fig:gesturePattern} shows examples of the 2D hand gesture patterns obtained for six different gestures.
Although this view does not contain all the information available in the 3D representation of the hands, we have found that it is sufficient for a CNN to classify the set of dynamic gestures under study very accurately.
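To make the construction concrete, the following is a minimal sketch of the trace rendering and top-view projection described above (illustrative only: the actual visualizer is built on VisPy, and the array layout, fading rule and figure size shown here are our own simplifying assumptions):
\begin{verbatim}
# frames: array of shape (T, n_joints, 3) holding the 3D fingertip
# positions of one gesture over time.
import numpy as np
import matplotlib.pyplot as plt

def render_top_view(frames, out_path="gesture.png"):
    T = len(frames)
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)
    for tau, joints in enumerate(frames):
        # Older samples fade out: alpha tends to 0 as tau moves
        # away from the last instant T.
        alpha = (tau + 1) / T
        # Top view: project the 3D points (x, y, z) on the x-z plane.
        ax.scatter(joints[:, 0], joints[:, 2], s=4,
                   c="tab:blue", alpha=alpha)
    ax.set_axis_off()
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)
\end{verbatim}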
\begin{figure}[t]
\subfigure[Catching\label{catching}]{
\includegraphics[width=0.5\linewidth]{Figures/catching.png}}\quad
\subfigure[Rotating\label{rotating}]{
\includegraphics[width=0.5\linewidth]{Figures/rotating.png}}
\subfigure[Scroll\label{scroll}]{
\includegraphics[width=0.5\linewidth]{Figures/scroll.png}}\quad
\subfigure[Shaking\label{shaking}]{
\includegraphics[width=0.5\linewidth]{Figures/shaking.png}}
\subfigure[Draw line\label{Line}]{
\includegraphics[width=0.5\linewidth]{Figures/Line.png}}\quad
\subfigure[Zoom\label{Zoom}]{
\includegraphics[width=0.5\linewidth]{Figures/Zoom.png}}
\caption{Examples of 2D hand gesture patterns}
\label{fig:gesturePattern}
\end{figure}
\subsubsection{Classification method}
The proposed method leverages a pre-trained ResNet-50 \cite{resnet}, a state-of-the-art 2D CNN that has been modified and fine-tuned to classify the images produced by our 3D visualizer. We decided to use a ResNet-50 because this kind of architecture is pre-trained on ImageNet \cite{imagenet} and it is one of the fastest at making inference on new images, having one of the lowest FLOPS count among all the architectures available today \cite{cnn_benchmarking}. Unfortunately, given the modest size of the original LMDHG dataset, it would not have been possible to train from scratch a 3D CNN model capable of classifying all the available information coming from the LM sensor.
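As a reference for the reader, the following sketch shows how such an adaptation can be set up in PyTorch: the ImageNet-pre-trained ResNet-50 keeps its convolutional backbone, while the final fully connected layer is replaced by a 14-way head, one output per gesture class (the optimizer and learning rate here are illustrative assumptions, not the exact values used in our experiments):
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)        # ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 14)  # 14 gesture classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # applies log-softmax internally

def train_step(images, labels):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}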
\subsection{The LMDHG gestures dataset}\label{DatasetDescription}
Most of the reviewed gesture datasets are composed of gestures executed with a single hand, performed and recorded perfectly, with no noise or missing parts, and always segmented with the same duration. These hypotheses ensure a good class separation, improving the classification results, but they are far from reality. For instance, it is not unusual to record hand trembles during the gestures, which introduce a significant amount of noise.
To improve the evaluation of different methods over a more realistic dataset, Boulahia et al. \cite{boulahia2017dynamic} defined a dataset of unsegmented sequences of hand gestures performed both with one and two hands. At the end of each gesture, the involved participants were asked to perform a ``rest'' gesture, i.e., keeping the hands in the last movement position for a few seconds, thus providing a kind of \textit{null gesture} that can be used to recognize the end of a certain movement.
We chose their dataset as a starting point to test our method because it was the most realistic dataset created using the Leap Motion sensor that we were able to identify. It is our opinion that the original LMDHG paper provides three major contributions: i) the evaluation of the authors' method against the DHG dataset \cite{de2016skeleton}, ii) its evaluation against the properly segmented version of the LMDHG dataset, iii) its evaluation against the non-segmented version of the LMDHG dataset (i.e., without providing their classifier with the ground truth on where a gesture ends and the next one starts). For this paper, we decided to apply our method in order to replicate and improve only point ii), namely, on the properly segmented LMDHG dataset.
This dataset contains 608 ``active'' plus 526 ``inactive'' (i.e., classified as the \textit{Rest} gesture) gesture samples, corresponding to a total of 1134 gestures. These gesture instances fall into 14 classes, \textit{Point to}, \textit{Catch}, \textit{Shake down}, \textit{Shake}, \textit{Scroll}, \textit{Draw Line}, \textit{Slice}, \textit{Rotate}, \textit{Draw C}, \textit{Shake with two hands}, \textit{Catch with two hands}, \textit{Point to with two hands}, \textit{Zoom} and \textit{Rest}, of which the last 5 gestures are performed with two hands.
Unfortunately, the samples are distributed unevenly among the gesture classes. Indeed, most of the classes have roughly 50 samples, except \textit{Point to with hand raised}, which presents only 24 samples, and \textit{Rest}, which, as previously said, presents 526 samples.
\section{Related works}\label{RelatedWorks}
The ability of recognizing hand gestures, or more in general understanding the interaction between humans and the surrounding environment, has aroused interest in numerous fields and has consequently been tackled in several studies.
So far, several commercial sensors for capturing full hand and finger action are available on the market; generally, they can be divided into \textit{wearable} (such as data gloves) and \textit{external} devices (such as video cameras). Wearable sensors can address different purposes: for instance, VR Glove by Manus\footnote{https://manus-vr.com/}, Cyber Glove System\footnote{http://www.cyberglovesystems.com/} and Noitom Hi5 VR\footnote{https://hi5vrglove.com/} are designed mainly for VR training, while the Myo Gesture Control Armband is especially used in medical applications \cite{bachmann2018review}.
This kind of technology is very accurate and has a fast reaction speed.
However, using gloves requires a calibration phase every time a different user starts, and it does not always allow natural hand gestures and intuitive interaction, because the device itself can constrain finger motion \cite{abraham2018hand,gunawardane2017comparison,lawson2016future,sharp2015accurate}.
Therefore, research on hand motion tracking has begun investigating vision-based techniques relying on external devices with the purpose of allowing a natural and direct interaction \cite{ahmad2019hand}.
In the following sections, we review methods registering hands from RGB cameras (both monocular or stereo) and RGB-D cameras and interacting through markerless visual observations.
\subsection{Methods based on RGB sensors}
The use of simple RGB cameras for the hand tracking, and consequently for their gesture recognition, is a challenging problem in computer vision.
So far, works using markerless RGB images mainly aim at the simple tracking of the motion, such as the body movement \cite{khokhlova20183d,mehta2017vnect,bobick2001recognition} or the hand skeleton \cite{GANHands2018,romero2010hands,stenger2006model}, while motion recognition and interpretation still have large room for improvement.
The method proposed in \cite{GANHands2018} presents an approach for real-time hand tracking from monocular RGB images that allows the reconstruction of the 3D hand skeleton even if occlusions occur. In principle, this methodology could be used as input for future gesture recognition. However, while it outperforms the RGB-based methods, it does not outperform the RGB-D ones, and it presents some difficulties when the background has a similar appearance to the hand and when multiple hands are close in the input image.
Focusing on hand gesture recognition, Barros et al. \cite{barros2014real} propose a deep neural model to recognize dynamic gestures with minimal image pre-processing and real time recognition. Despite the encouraging results obtained by the authors, the recognized gestures are significantly different from each other, so the classes are well divided, which usually greatly simplifies the recognition of the gestures.
Recently, \cite{santos2019dynamic} proposed a system for 3D dynamic hand gesture recognition based on a deep learning architecture that applies a Convolutional Neural Network (CNN) to the Discrete Fourier Transform of artificial images.
The main limitation of this approach is represented by the acquisition setup, i.e. it must be used in an environment where the cameras are static or where the relative movement between the background and the person is minimal.
\subsection{Methods based on Depth sensors}
To avoid many issues related to the use of simple RGB images, depth cameras are widely used for hand tracking and gesture recognition purposes. Generally, the most commonly used depth cameras are the Microsoft Kinect\footnote{https://developer.microsoft.com/en-us/windows/kinect/} and the Leap Motion (LM) sensor\footnote{https://developer.leapmotion.com}.\\
The Kinect sensor includes a QVGA (320x240) depth camera and a VGA (640x480) video camera, both of which produce image streams at 30 frames per second (fps). The sensor is limited by near and far thresholds for depth estimation and it is able to track the full body \cite{suarez2012hand}.
The LM is a compact sensor that exploits two CMOS cameras capturing images with a frame rate from 50 up to 200 fps \cite{ameur2016comprehensive}. It is very suitable for hand gesture recognition because it is explicitly targeted at hand and finger tracking.
Another type of sensor that is adopted sometimes is the Time-of-Flight camera, which measures distance between the camera and the subject for each point of the image by using an artificial light signal provided by a laser or an LED. This type of sensor has a low resolution (176x144) and it is generally paired with a higher resolution RGB camera \cite{van2011combining}.
Using one of the above mentioned sensors, there are several works that address the recognition of \textit{static} hand gestures.
Mapari and Kharat \cite{mapari2016american} proposed a method to recognize the American Sign Language (ASL). Using the data extracted from the LMC, they compute 48 features (18 positional values, 15 distance values and 15 angle values) for 4672 collected signs (146 users for 32 signs), feeding an artificial neural network, namely a Multilayer Perceptron (MLP).
Filho et al. \cite{stinghen2016gesture} use the normalized positions of the five finger tips and the four angles between adjacent fingers as features for different classifiers (K-Nearest Neighbors, Support Vector Machines and Decision Trees). They compare the effectiveness of the proposed classifiers over a dataset of 1200 samples (6 users for 10 gestures), discovering that Decision Trees perform best.
Still among the methods to recognize static postures, Kumar et al. \cite{kumar2018independent} apply an Independent Bayesian Classification Combination (IBCC) approach.
Their idea is to combine hand features extracted by the LM (3D fingertip positions and 3D palm center) with face features acquired by the Kinect sensor (71 facial 3D points) in order to improve the meaning associated with a certain movement. One challenge in performing this combination lies in the fusion of the features; indeed, pre-processing techniques are necessary to synchronize the frames, since the two devices are not directly comparable.
A more challenging task, which increases the engagement through a more natural and intuitive interaction, is the recognition of \textit{dynamic} gestures. In this case, it is crucial to preserve the spatial and temporal information associated with the user's movement.
Ameur et al. \cite{ameur2016comprehensive} present an approach for dynamic hand gesture recognition, extracting spatial features from the 3D data provided by a Leap Motion sensor and feeding a Support Vector Machine (SVM) classifier based on the one-against-one approach.
With the aim of exploiting also the temporal information, Gatto et al. \cite{gatto2017orthogonal} propose a representation for hand gestures exploiting the Hankel matrix to combine gesture images generating a sub-space that preserves the time information. Then, gestures are recognized supposing that if the distance between two sub-spaces is small enough, then these sub-spaces are similar to each other.
Mathe et al. \cite{mathe2018arm} create artificial images that encode the movement in the 3D space of the skeletal joints tracked by a Kinect sensor. Then, a deep learning architecture that uses a CNN is applied on the Discrete Fourier Transformation of the artificial images. With this work, the authors demonstrate that it is possible to recognize hand gestures without the need of a feature extraction phase.
Boulahia et al. \cite{boulahia2017dynamic} extract features from the hand trajectories, which describe \textit{local information}, for instance the starting and ending 3D coordinates of the 3D pattern resulting from trajectory assembling, and \textit{global information}, such as the convex hull based feature. Temporal information is considered by extracting features on overlapping sub-sequences resulting from a temporal split of the global gesture sequence. In this way, the authors collect a vector of 356 elements used to feed an SVM classifier.
In general, the use of complex and sophisticated techniques to extract ad-hoc features and manage temporal information requires more human intervention and does not scale well when the dictionary of gestures to be classified has to be expanded. Furthermore, the extraction of hundreds of features at different time scales may even take more CPU time than a single forward pass on a standard CNN already optimized against modern GPU architectures, thus not guaranteeing real-time performance in classification.
\section{Introduction}
Bismuth Germanate ($\mbox{Bi}_4\mbox{Ge}_3\mbox{O}_{12}$) crystals,
commonly known as BGO, have been extensively used for
electromagnetic(EM) calorimetry in high energy
physics experiments\cite{l3,topaz,cmd2}.
Advantages of BGO are its excellent $e/\gamma$ energy
resolution (0.3 -- 1\,\%/$\sqrt{E\,\mathrm{(GeV)}}$),
high density (7.1 g/cm$^3$), short radiation length (1.12 cm),
large refractive index (2.15), suitable scintillating properties (fast decay
time of about 300~ns and peak scintillation at about 480~nm)
and non-hygroscopic nature. It is therefore
one of the best candidates for EM calorimetry in
collider experiments,
especially where space imposes a serious constraint.
Bismuth Silicate ($\mbox{Bi}_4\mbox{Si}_3\mbox{O}_{12}$) crystals,
known as BSO, on the other hand, although known to the
particle physics community for some time\cite{koba}, are yet to
find a major deployment in a particle detector experiment. BSO has
very similar properties to BGO: high density (6.8 g/cm$^3$),
short radiation length (1.2 cm),
large refractive index (2.06), decay time of about 100~ns, peak
scintillation at about 450~nm and non-hygroscopic nature.
Although it sells at about the same price as BGO at the moment, it has
the advantage of being cheap if {\em commercially} produced,
since the expensive
raw material germanium in BGO is replaced by silicon, which is
much cheaper. The light output of pure BSO crystal, however, is
only about one-fourth of that of pure BGO, and hence energy resolution
of a calorimeter made up of BSO will be consequently worse than that
of a BGO calorimeter with similar geometry.
Both, pure BGO and pure BSO are known to be radiation hard at megarad
level~\cite{koba,yano,sahu}, even up to 100 MRad. This fact,
reinforced with the qualities cited in the above two paragraphs
makes these crystals potential materials for making
high resolution EM calorimeters
at small angles(below 10$^\circ$) in B-factories.
The radiation level at such small angles
is rather high in B-factories due to the intense flux of photons
and electrons generated by Bhabha events and spent-electron
background events~\cite{tom}.
Such a calorimeter has been proposed, for example, for the BELLE
detector\cite{BELLE} at KEK B-factory to cover very small angles around
the beam pipe\cite{efc}.
With such calorimeters one desires, driven by several physics
motivations, to detect not only
EM showers, but also to tag minimum-ionizing
particles (MIP), such as high energy charged pions, muons, kaons
and protons. The light output of these crystals at these
small angles is typically read out by photodiodes (PD). The use of
photomultiplier tubes (PMT) at such small angles is severely
restricted due to issues like lack of space and high non-uniform
magnetic field. PDs have much lower gain and much worse
signal-to-noise ratio than PMTs, and therefore detecting MIPs
becomes a challenge with a BGO(BSO)+PD system, since
MIPs produce a lot less light than $e^\pm$ and $\gamma$.
In this paper we report our successful effort in
making a low-noise amplifier system, with which we
can detect MIPs with BGO(or BSO)+PD system.
Although the light output of BSO is only $1/4$ of that of BGO,
we are still able to see the MIPs with this system.
We describe the design and performance of the preamplifiers
we developed for this purpose in the next section.
The setup for MIP detection with the BGO and BSO crystals
using a high energy pion beam,
and the observation of the nuclear counter effect, are described in the third section.
In the fourth section we analyze the data and study the
effect of the reflector around
the crystal on light collection. Results are summarized
in the last section.
\section{Amplifier}
A customized charge preamplifier was developed
for amplification of signal from photodiodes.
Circuit diagram of the preamp is shown in Fig.~\ref{fig:circuit}.
We adapted the design from a preamplifier
used in AMY experiment\cite{AMY} and experimented with different JFETs.
We settled on two brands of preamp : one using 2SK291, and the other using
2SK715 JFET as the main amplifying element. \\
\noindent
{\bf Calculation of gain~:} An input square pulse sequence of width 200
$\mu$s and amplitude 60 mV was delivered to the TEST
key of the preamp (see Fig.~\ref{fig:circuit}). The output
of the preamp was digitized with a CAMAC ADC LeCroy 2249A.
ADC counts are plotted in Fig.~\ref{fig:amp}(a). The peak at the
left corresponds to the pedestal, whereas the peak on the right
gives the integrated charge. From the ratio of this peak
(calibrated as 65 mV) to the charge input(10.5 fC),
the gain of the preamp was calculated to be 6.2 V/pC. \\
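For the reader's convenience, the arithmetic behind the quoted gain can be reproduced with a few lines (a back-of-the-envelope check using only the numbers given above):
\begin{verbatim}
V_out = 65e-3     # calibrated output peak, V
Q_in  = 10.5e-15  # injected test charge, C
gain  = V_out / Q_in    # V/C
print(gain * 1e-12)     # ~6.2 V/pC
\end{verbatim}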
\noindent
{\bf Test with Radioactive Source :}
We tested two kinds of
photodiodes from Hamamatsu Photonics\cite{hama}: S5106 and S2662-03.
Active area and capacitance of S5106(S2662-03) are 5$\times$5 mm$^2$
(7.5$\times$20 mm$^2$) and 40 pF(100 pF), respectively.
The preamp was now coupled to the PD at the point shown in
Fig.~\ref{fig:circuit}.
An $^{241}$Am radioactive source was mounted on the PD, and the whole
setup was placed in a light-tight box.
The source has a $\gamma$--ray peak at 60 keV, which may sometimes be
absorbed completely without any energy leakage by the 300 $\mu$m thick
depletion layer of the photodiode.
Signals generated by these $\gamma$--rays in the PD were
amplified in the preamp, self-triggered, and integrated by a
CAMAC ADC LeCroy 2249A with a gate width of 200 ns.
The 60 keV peak is easily seen with the system. The corresponding
pulse-height spectra for different combinations of photodiodes
and JFET are shown in Fig.~\ref{fig:amp}(b), (c) and (d).
These figures are for different PDs and different JFETs, as
indicated in the respective plots.
Fits to the 60 keV peak and pedestal for all these
three combinations of PD and JFET are given in Table~1. \\
\noindent
{\bf Calculation of ENC :} While working with photodiodes,
one often wants to know the noise or resolution of the system
in terms of electron-hole pairs produced in the PD.
{\em Equivalent Noise Charge}, or ENC, represents such a
measure. In Fig.~\ref{fig:amp}(b), for example, the width of
the peak is 3.29\% of the mean (Table~\ref{tab:tab1}), and hence the noise is
3.29\% of the signal produced. Since the energy required
to produce an electron-hole pair in silicon is about
3.6 eV, a 60 keV photon produces 16,667 electrons in the PD.
So the noise translates to 16,667 $\times$ 3.29\% = 548 electrons,
which is the ENC for this system. ENC for the other two systems
are 970 and 906, as posted in Figs.~\ref{fig:amp}(c) and (d), respectively.
It is clear that the configuration of \underline{PD S5106} and preamp with
\underline{JFET 2SK715} renders the least ENC, and hence corresponds to
least noise. We therefore \underline{chose this system} to measure
the scintillation of BGO and BSO crystals. \\
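The same arithmetic in compact form (all values are taken from the text; 3.6 eV per electron-hole pair in silicon):
\begin{verbatim}
E_gamma    = 60e3    # photon energy, eV
w_si       = 3.6     # energy per e-h pair in Si, eV
width_frac = 0.0329  # fitted peak width relative to the mean
n_eh = E_gamma / w_si     # ~16,667 electrons per 60 keV photon
enc  = n_eh * width_frac  # ~548 electrons
\end{verbatim}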
\noindent
{\bf Estimation of S/N for MIP :} A MIP deposits an energy of
about 100 MeV in the length of our BGO crystal.
About 300 eV is needed for one scintillation in pure BGO.
Assuming about 20\% light collection efficiency and 100\%
quantum efficiency of PD, we end up with about
66,000 electron-hole pairs created in the PD for a MIP.
Since the ENC is 548 electrons, we can expect a signal-to-noise
ratio (S/N) of about 120~:~1 with our system. For BSO, since the
light output is about one-fourth of BGO, the S/N would be
about 30~:~1.
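The estimate can be summarized as follows (all inputs are the assumptions stated above):
\begin{verbatim}
E_mip = 100e6   # energy deposited by a MIP in the crystal, eV
w_bgo = 300.0   # energy per scintillation photon in pure BGO, eV
eff   = 0.20    # assumed light collection efficiency
enc   = 548.0   # equivalent noise charge, electrons

signal = E_mip / w_bgo * eff   # ~66,000 electron-hole pairs
print(signal / enc)            # ~120 for BGO; about 30 for BSO
\end{verbatim}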
\section{BGO and BSO Crystals on MIPs}
We experimented on three samples with the same cross-sectional area of
1$\times$1 cm$^2$:
(A) 10 cm long BGO crystal from the Institute of Inorganic
Chemistry, Novosibirsk, Russia\cite{novo},
(B) 12 cm long BGO crystal from the Institute of Single Crystal,
Ukraine\cite{ukra}, and
(C) 12 cm long BSO crystal from Futec Furnace Co, Japan\cite{fute}.
Photodiodes (S5106) were glued to
one end of crystals with an optical glue called Eccobond\cite{eccobond}.
The crystals were then wrapped hermetically,
first with 150 $\mu$m teflon tapes
for better light collection and then with black tapes,
for protection against light leak from outside.
The samples were exposed to a 3.5 GeV $\pi^-$ beam at the $\pi 2$
beam line at the KEK-PS. A schematic diagram of the set-up
is given in Fig.~\ref{fig:layout}.
The $e/\pi$ separation in the beam was achieved with a
CO$_2$ \v{C}erenkov counter.
The trigger was provided by the coincidence of three scintillation
counters along the beam.
The pions would then enter the volume of the crystal (and sometimes
pass through the PD, too), and deposit some energy to produce the
scintillation light, which would then be collected by the
photodiode. Signal from the PD was amplified by the
preamp, and was
digitized by a CAMAC ADC LeCroy 2249W module with
a 4~$\mu$s gate initiated by the trigger. The data was logged
by a Unix workstation-based DAQ system.
The crystal glued with PD and preamp were placed in a
light-tight box made up of thick aluminum, which was electrically
grounded, and therefore served as an excellent Faraday cage.
The pedestal was
logged concurrently by triggering the DAQ with a clock of the same
gate width, asynchronous with the beam gate.
\section{Results and Analysis}
\noindent
{\bf MIP Detection~: } ADC spectra for
the three samples A, B and C for the set-up
described above are shown in Figs.~\ref{fig:mipsignal}(a),
(b) and (c), respectively.
The first peak in each spectrum corresponds to the pedestal,
the second one to the MIP, and the third one to the sum of the MIP and
the Nuclear Counter Effect(NCE)\footnote{The Nuclear Counter Effect(NCE)
is the extra amount of charge produced in the photodiode
by a charged particle directly hitting it, on the top of the
charge produced by the scintillation light. A MIP, for example, produces
about 25000 electron-hole pairs in a photodiode of thickness
300 $\mu$m. Nuclear counter effect worsens the resolution
of an EM calorimeter, where some of the secondary $e^\pm$
might hit the PD. This effect is avoided by using enough radiation
lengths of crystal along the direction of the shower and/or using
{\em avalanche photodiodes}.
For MIP detection, however, it does not pose a problem as long as
the resulting signal due to NCE is comparable or
less than the MIP scintillation
signal, which is the case in this experiment.}.
The third peak thus
corresponds to the event where the pion deposited energy along the
length of the crystal, and then hit the photodiode. The difference
in the second and the third peak corresponds to the amount of energy
deposited in the photodiode itself when a minimum ionizing pion traverses it.
This conjecture was confirmed by a simple calculation of energy loss and
a GEANT\cite{Geant} simulation indicated by the dashed line in
Fig.~\ref{fig:mipsignal}(a). The simulation is not normalized to
the real data, to retain the clarity of the comparison.
In Fig.~\ref{fig:mipsignal}(d) we show the ADC logged when
the crystal is removed from the setup, {\em i.e.}, when the
beam directly hits the PD. Difference in ADC counts
between the two peaks corresponds to the signal generated
by the NCE. It may be noted that this
difference is the same as the difference between the second and third
peaks in Figs.~\ref{fig:mipsignal}(a),(b) and (c), which
confirms that the third peak in these figures corresponds to
the nuclear counter effect indeed.
It is also apparent from Fig.~\ref{fig:mipsignal} that
sample A has about 40\% more light
output than B after correcting for the length, which is not
surprising since they are from different manufacturers, and
BGO light output is known to be quite sensitive to production
method and trace impurities. The BSO sample C
has about 25\% light output compared to BGO sample A, as
already observed in Ref.~\cite{koba}, and one is still
able to observe the MIP peak. \\
\noindent
{\bf Effect of reflector~: }We also studied the advantage
of the teflon reflector around the crystal.
BGO has a high refractive index of 2.15, and therefore is supposed to
retain most of the scintillation light by total internal
reflection. We did the following experiment in order to
study the effect of putting on a teflon reflector
around the crystals. First, we stripped the reflector off the
sample B except for the very end opposite to the
photodiode (we will refer to this setup as ``BGO with a reflector-cap'').
Then the sample was subjected to the 3.5 GeV $\pi^-$ beam.
The pulse height spectrum is given in
Fig.~\ref{mipsignal_reflector}(a)
as the solid line. Then we removed the
reflector completely, leaving the crystal bare.
It was then subjected to the beam,
and the observed pulse-height spectrum is shown in
Fig.~\ref{mipsignal_reflector}(a) as the dotted line.
We can easily distinguish the MIP and nuclear counter peaks in these
two superimposed plots. By comparing the positions of the two MIP peaks
after pedestal subtraction, it can be readily seen that
a reflector cap increases the light collection in BGO by about 30\%
compared to a bare crystal.
We then took the sample A, completely wrapped with the reflector,
and subjected it to the beam. The ADC spectrum is plotted
in Fig.~\ref{mipsignal_reflector}(b) as the solid line. Then we
stripped the reflector completely off the sample, and repeated
the experiment. The corresponding ADC spectrum is plotted as the
dotted line in Fig.~\ref{mipsignal_reflector}(b).
Again, the peaks due to the MIP and NCE
are clearly visible. By comparing the two MIP peaks after
pedestal subtraction, it can be seen that the light collection
with full reflector wrap improves by about 85\% compared to the
bare crystal.
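For clarity, the improvement figures quoted above are extracted by comparing pedestal-subtracted MIP peak positions; schematically (the ADC values below are hypothetical placeholders, not the measured ones):
\begin{verbatim}
def improvement(peak_wrapped, peak_bare, pedestal):
    # Relative gain in light collection from the reflector.
    return (peak_wrapped - pedestal) / (peak_bare - pedestal) - 1.0

# Hypothetical ADC counts reproducing the ~85% figure:
print(improvement(peak_wrapped=250.0, peak_bare=165.0, pedestal=65.0))
\end{verbatim}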
It is interesting to note that (Fig.~\ref{mipsignal_reflector})
the height of the NCE
peak with respect to the MIP peak is smaller for sample B
compared to that for sample A. The reason may be ascribed
to the larger length of sample B, where more scintillation light is
collected by the PD, thereby pulling down the ratio.
\section{Summary}
\begin{itemize}
\item BGO and BSO crystals coupled with photodiodes are proven to be
capable of detecting minimum-ionizing particles
with a large S/N ratio.
The preamplifier used is perfectly adequate for the purpose.
The signal of MIPs is well separated from electronic noise and
NCE signal.
\item The detection of MIPs with BSO coupled with a PD is reported
for the first time.
\item A clear effect of NCE in a calorimetric
environment is reported for the first time.
\item Effect of reflector wrap around the crystal in regards to the
light collection is studied.
\end{itemize}
\section*{Acknowledgements}
The MIP detection experiment was done under the auspices of National Lab for
High Energy Physics (KEK) as the experiment T-388 of the KEK-PS.
We would like to thank Dr.~M.~Kobayashi for providing valuable
information on BSO, and Dr.~Y.~Sugimoto for valuable suggestions on
preamps.
The Aerogel and CsI subgroup members of the BELLE collaboration
have been extremely helpful in this project.
This experiment was supported in part by the grant NSC~85-2112-M-002-034
of the Republic of China.
\newpage
\section{Introduction}
Let $\mathbb{R}^{k}$ be real $k$-dimensional space; if $w\in\mathbb{R}^{k}$, then $|w|_{E}$ denotes the Euclidean norm of $w.$
Let $\Omega\subset\mathbb{R}^N$, $N\geq 2$, be a bounded domain with boundary
$\partial \Omega$ of class $C^{\infty}$. Let $g\in C^{1}(\mathbb{R}^{k},\mathbb{R}^{k}),$ $h\in C(\partial\Omega,\mathbb{R}^{k}),$
and the matrix
$$A(x)=\left[
\begin{array}{cccc}
a_{1}(x)&0&\cdots&0\\
0&a_{2}(x)&\cdots&0\\
\vdots & \vdots &\ddots &\vdots \\
0&0&\cdots&a_{k}(x)
\end{array}\right]
$$\\
verifies the following conditions:
\begin{enumerate}
\item[(A1)] The functions $a_{i}\colon\Omega\to\mathbb{R}$ satisfy $a_{i}(x)\ge 0$ for all $i=1,\cdots,k$ and all $x\in\Omega$,
with strict inequality on a set of positive measure.
\item[(A2)] $A(x)$ is a positive semidefinite matrix on $\mathbb{R}^{{k}\times{k}}$ for almost every $x\in\Omega$, and
$A(x)$ is positive definite on a set of positive measure, with $a_{i}\in L^{p}(\Omega)$, $i=1,\cdots,k$,
\end{enumerate}
We will study the solvability of
\begin{equation}\label{e1}
\begin{gathered}
-\Delta u + A(x)u = 0 \quad\text{in } \Omega,\\
\frac{\partial u}{\partial \nu}+g(u)=h(x) \quad\text{on }
\partial\Omega,
\end{gathered}
\end{equation}
The interest in this problem lies in the resonance case at the boundary with a bounded nonlinearity; we will assume that $g$ is a bounded function,
i.e., there is a constant $R>0$ such that
\begin{equation}\label{e2}
\begin{gathered}
|g(w)|_{E}\leq R \quad\ \forall ~ w\in \mathbb{R}^{k}
\end{gathered}
\end{equation}
Our assumptions allow $g$ to be not only bounded, but also to vanish at infinity, i.e.,
\begin{equation}\label{e3}
\begin{gathered}
\displaystyle\lim_{{|w|_{E}}\to\infty}g(w)=0\in \mathbb{R}^{k}
\end{gathered}
\end{equation}
Condition (\ref{e3}) is not required by our assumptions, but allowing for it is the main point of this paper.\\
In the case of the scalar equation ($k=1$), if $g$ does not satisfy condition (\ref{e3})
but satisfies the Landesman-Lazer condition
$$g_{-}<\bar{h}<g^{+},$$ where
$\displaystyle\lim_{w\to-\infty}g(w)=g_{-},$ $\bar{h}=\frac{1}{|\partial\Omega|}\int_{\partial\Omega}h\,dx$
and $\displaystyle\lim_{w\to\infty}g(w)=g^{+},$
and if $A(x)=0\in\mathbb{R}^{k\times k}$,
then it is well known that (\ref{e1}) has a solution.
The first results for the scalar case with the nonlinearity in the equation were obtained by Landesman and Lazer \cite{LLC} in 1970.
Their work led to great interest and activity on boundary value problems at resonance, which continues to this day.
A particularly interesting extension of Landesman and Lazer's work to systems was done by Nirenberg \cite{LN}, \cite{LN1};
the case of a system with the nonlinearity in the equation was treated by Ortega and Ward \cite{OW}, and the scalar case without the Landesman-Lazer condition
was treated by Iannacci and Nkashama \cite{IK1} and by Ortega and S\'{a}nchez \cite{OS}, who handled more completely the case of periodic solutions of systems
of ordinary differential equations with a bounded nonlinearity $g$ satisfying Nirenberg's condition. They studied periodic solutions of
$$u''+cu'+g(u)=p(t)$$
for $u\in\mathbb{R}^{k}.$\\
The case $c=0$ was treated by Mawhin \cite{M2}. When the nonlinear terms vanish at infinity, as in (\ref{e3}), the Landesman-Lazer
conditions fail. We would like to know what can be done in this case, and what conditions on a bounded nonlinearity that
vanishes at infinity might replace those of Landesman-Lazer type. Several authors have considered the case when the nonlinearity
$g\colon\partial\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is a scalar function
satisfying Carath\'{e}odory conditions,
i.e.,
\begin{description}
\item[i] $g(.,u)$ is measurable on $\partial\Omega$, for each $u\in\mathbb{R}$,
\item[ii] $g(x,.)$ is continuous on $\mathbb{R}$ for a.e. $x\in \partial\Omega$,
\item[iii] for any constant $r>0$, there exists a function $\gamma_{r}\in L^{2}(\partial\Omega)$ such that
\begin{equation}\label{00}
\begin{gathered}
|g(x,u)|\leq\gamma_{r}(x),
\end{gathered}
\end{equation}
for a.e. $x\in \partial\Omega$ and all $u\in\mathbb{R}$ with $|u|\leq r$,
\end{description}
This case was treated by Fadlallah \cite{AF},
and others have considered the case when the nonlinearity does not decay to zero very rapidly. For example, in the case of a nonlinearity in the equation,
if $g=g(t)$ is a scalar function, the condition
\begin{equation}\label{01}
\begin{gathered}
\lim_{|t|\to\infty}tg(t)>0
\end{gathered}
\end{equation}
and related ones were assumed in \cite{Amb}, \cite{Ambr}, \cite{Au}, \cite{FK}, \cite{PH}, \cite{MK1}, \cite{MK2}, \cite{MN}, \cite{Ru}. These
papers all considered the scalar problem, and also considered the Dirichlet (Neumann) problem at resonance (non-resonance) at higher
eigenvalues (Steklov eigenproblems). The work in some of these papers makes use of Leray-Schauder degree arguments, while the others use
critical point theory; in both settings the growth restrictions like (\ref{01}) and the Lipschitz conditions have been removed (see \cite{MK2}, \cite{Ru}).
In this paper we study systems of elliptic boundary value problems with nonlinear boundary conditions of Neumann type, where the nonlinearities at the boundary
vanish at infinity. We do not require the problem to be in variational form.
\subsection{Assumptions}
\begin{description}
\item[G1] $g\in C^{1}(\mathbb{R}^{k},\mathbb{R}^{k})$ and $g$ is bounded with $g(w)\neq 0$ for $|w|_{E}$ large. Let $S^{k-1}$
be the unit sphere in $\mathbb{R}^{k}$
\item[G2] We will assume that $S^{k-1}\cap\partial\Omega\neq\emptyset$, and let $\mathbb{S}=S^{k-1}\cap\partial\Omega$.
\item[G3] For each $z\in\mathbb{S}$ the limit $\displaystyle\lim_{r\to\infty}\frac{g(rz)}{|g(rz)|_{E}}=\varphi(z)$ exists, and the limit is uniform for
$z\in \mathbb{S}.$
\item It follows that $\varphi\in C(\mathbb{S},\mathbb{S})$ and the topological degree of $\varphi$ is defined.
\item[G4] $deg(\varphi)\neq 0$
\end{description}
\subsection{Notations}
\begin{itemize}
\item Let $\<.,.\>_{L^{2}}$ denote the inner product in $L^{2}:=L^{2}(\Omega,\mathbb{R}^{k})$ where $L^{2}$ is Lebesgue space
\item Let $\<.,.\>_{E}$ denote the standard inner product in $\mathbb{R}^{k}$
for $u,v\in H^{1}=H^{1}(\Omega,\mathbb{R}^{k})$ where $H^{1}$ the Sobolev space
\end{itemize}
We note that it follows from the assumptions G1-G4 that on large balls
$$B(R):=\{y:|y|_{E}\leq R\},$$ we have $deg(g,B(R),0)\neq 0$; see \cite{L},\cite{Ma}. \\
We modify Lemma 1 and Theorem 1 of \cite{OW} to fit our problem.
\begin{lemma}\label{le1}
Assume that G1 and G3 hold and let $C>0$ be a given constant. Then there is $R>0$ such that
$$\int_{\partial\Omega}g(u(x))\,dx\neq 0,$$
for each function $u\in C(\partial\Omega,\mathbb{R}^{k})$ (we can write $u=\bar{u}+\tilde{u}$, where $\bar{u}=\frac{1}{|\partial\Omega|}\int_{\partial\Omega}u(x)\,dx$, $\int_{\partial\Omega}\tilde{u}(x)\,dx=0$
and $\bar{u}\bot\tilde{u}$) with $|\bar{u}|_{E}\geq R$ and $||u-\bar{u}||_{L^{\infty}(\partial\Omega)}\leq C$.
\end{lemma}
\begin{proof}
We argue by way of contradiction. Assume that for some $C>0$ there exists a sequence of functions
$\{u_{n}\}_{n=1}^{\infty}\subset C(\bar{\Omega},\mathbb{R}^{k})$ with
$$|\bar{u}_{n}|_{E}\to\infty,~~||u_{n}-\bar{u}_{n}||_{L^{\infty}(\partial\Omega)}\leq C$$ and
\begin{equation}\label{02}
\begin{gathered}
\int_{\partial\Omega}g(u_{n}(x))\,dx=0
\end{gathered}
\end{equation}
Passing to a subsequence, one can assume that $\bar{z}_{n}=\frac{\bar{u}_n}{|\bar{u}_{n}|_{E}}$ converges to some
point $z\in\mathbb{S}.$ The uniform bound on $u_{n}-\bar{u}_{n}$ implies that $\frac{u_{n}}{|u_{n}|_{E}}$ also converges to $z$, and this convergence
is uniform with respect to $x\in\bar{\Omega}.$ It follows from assumption G3 that
$$\displaystyle\lim_{n\to\infty}\frac{g(u_{n}(x))}{|g(u_{n}(x))|_{E}}=\varphi(z)$$
uniformly in $\bar{\Omega}.$ Since $\varphi(z)$ is in the unit sphere, one can find an integer $n_{0}$ such that if
$n\geq n_{0}$ and $x\in\bar{\Omega}$, then
$$\<\frac{g(u_{n}(x))}{|g(u_{n}(x))|_{E}},\varphi(z)\>_{E}\geq \frac{1}{4}.$$
Define
$$\gamma_{n}(x)=|g(u_{n}(x))|_{E}.$$ By G1 clearly $\gamma_{n}>0$ everywhere.
For $n\geq n_{0}$
$$\<\int_{\partial\Omega}g(u_{n}(x))\,dx,\varphi(z)\>_{E}=\int_{\partial\Omega}\<g(u_{n}(x)),\varphi(z)\>_{E}\,dx$$
$$=\int_{\partial\Omega}\gamma_{n}(x)\<\frac{g(u_{n}(x))}{\gamma_{n}(x)},\varphi(z)\>_{E}\,dx\geq\frac{1}{4}\int_{\partial\Omega}\gamma_{n}(x)\,dx>0$$
Therefore $\displaystyle\int_{\partial\Omega}g(u_{n}(x))\,dx\neq 0$ for all $n\geq n_{0}$,
which contradicts (\ref{02}).\\
This completes the proof of the lemma.
\end{proof}
\section{Main Result}
Let \begin{equation}\label{N1}
\begin{gathered}
Qu=Nu
\end{gathered}
\end{equation}
be a semilinear elliptic boundary value problem. Suppose $N$ is continuous and
bounded (i.e., $|Nu|_{E}\leq C$ for all $u$).
If $Q$ has a compact inverse $Q^{-1}$, then by Leray-Schauder theory (\ref{N1}) has a solution. On the other hand, if
$Q$ is not invertible, the existence of a solution depends on the behavior of $N$ and its interaction with the
null space of $Q$; see \cite{Ma}.
\begin{theorem}\label{th}
Assume that $g\in C^{1}(\mathbb{R}^{k},\mathbb{R}^{k})$ satisfies G1, G3, and G4. If $h\in C(\partial\Omega,\mathbb{R}^{k})$
satisfies $\bar{h}=0$ then (\ref{e1}) has at least one solution.
\end{theorem}
\begin{proof}
Define
$$Dom(L):=\{u\in H^{1}(\Omega): -\Delta u + A(x)u = 0\} $$
Define an operator $L$ on $L^{2}=L^{2}(\Omega,\mathbb{R}^{k})$ by
$$Lu=\frac{\partial u}{\partial\nu},~~~\forall~~u\in Dom(L).$$
We use the embedding theorem (see \cite{Bz}), $H^{1}(\Omega)\hookrightarrow L^{2}(\Omega)$,
and the trace theorem $H^{1}(\Omega)\to L^{2}(\partial\Omega)$; thus $L\colon Dom(L)\subset L^{2}(\partial\Omega)\to L^{2}(\partial\Omega)$. Then the
equation
$$\<Lu,v\>=\<h,v\>_{L^{2}(\partial\Omega)}~~~\forall~~v\in H^{1}(\Omega)$$
holds if and only if
$$Lu=h.$$
The latter equation is solvable if and only if
$$Ph:=\frac{1}{|\partial\Omega|}\int_{\partial\Omega}h=0$$
Now if $h\in L^{\infty}(\partial\Omega,\mathbb{R}^{k})$ and $Ph=0$ then each solution $u\in H^{1}(\Omega)$ is
H\"{o}lder continuous, so $u\in C^{\gamma}(\bar{\Omega},\mathbb{R}^{k})$ for some $\gamma\in (0,1).$
Since we know that there is constant $r_{1}$ such that
$$||u||_{\gamma}\leq r_{1}\left(||u||_{L^{2}(\partial\Omega)}+||h||_{L^{\infty}(\partial\Omega)}\right)$$
When $Ph=0$ there is a unique solution $Kh=\tilde{u}\in H^{1}(\Omega)$ with $P\tilde{u}=0$
to
$$Lu=h,$$
and if $h\in C(\partial\Omega)=C(\partial\Omega,\mathbb{R}^{k})$ then
$$||Kh||_{\gamma}\leq r_{1}\left(||Kh||_{L^{2}(\partial\Omega)}+||h||_{L^{\infty}(\partial\Omega)}\right)\leq r_{2}||h||_{C(\partial\Omega)}$$
and $K$ maps $C(\partial\Omega)$ into itself compactly, i.e., it maps bounded sets to relatively compact sets.\\
Let $Q$ be the restriction of $L$ to $L^{-1}(C(\partial\Omega))=KC(\partial\Omega)+\mathbb{R}^{k}.$
We define $N:C(\partial\Omega)\to C(\partial\Omega)$ by
$$N(w)(x):=h(x)-g(w(x))~~\forall w\in C(\partial\Omega),$$
which is continuous. Now (\ref{e1}) can be written as
$$Qu=Nu$$
and $\ker Q=\Im P,$ $\Im Q=\ker P.$ The linear map $Q$ is a Fredholm map (see \cite{MN}) and $N$ is $Q-$compact (see \cite{Ma}).
Now we define the homotopy equation as follows:
for $\lambda \in [0,1]$, consider
\begin{equation}\label{H}
\begin{gathered}
Qu=\lambda Nu.
\end{gathered}
\end{equation}
We now establish the a priori estimates, i.e., we show that the possible solutions of (\ref{H}) are uniformly bounded in $C(\partial\Omega)$,
independently of $\lambda\in [0,1]$.
Write $u=\bar{u}+\tilde{u}$, where $\bar{u}=Pu$. Then
$$||\tilde{u}||_{\gamma}=||\lambda KNu||_{\gamma}\leq r_{2}||Nu||_{C(\partial\Omega)}\leq R_{1}$$
where $R_{1}$ is a constant (recall that $g$ is a bounded function).
It remains to show that $\bar{u}\in\mathbb{R}^{k}$ is bounded, independently of $\lambda\in[0,1]$.
By way of contradiction, assume this is not the case (i.e., $\bar{u}$ is unbounded). Then there are sequences $\{\lambda_{n}\}\subset[0,1]$
and $\{u_{n}\}\subset Dom (Q)$ with
$||\tilde{u}_{n}||_{\gamma}\leq R_{1},$
$$Qu_{n}=\lambda_{n}Nu_n~~{\rm and } ~~|\bar{u}_{n}|_{E}\to\infty$$
Since $\bar{h}=Ph=0$ and $\Im Q=\ker P$, we get that
$$PNu_n=PN(\tilde{u}_n+\bar{u}_n)=-\frac{1}{|\partial\Omega|}\int_{\partial\Omega}g(\tilde{u}_{n}(x)+\bar{u}_{n})\,dx=0.$$
Now $u_{n}=\tilde{u}_n+\bar{u}_n$, so $||{u}_n-\bar{u}_n||_{L^{\infty}(\partial\Omega)}=||\tilde{u}_{n}||_{L^{\infty}(\partial\Omega)}\leq R_{1}$
and $|\bar{u}_{n}|_{E}\to\infty$. It follows from Lemma~\ref{le1} that for all sufficiently large $n$
$$\int_{\partial\Omega}g(u_{n}(x))\,dx\neq 0$$
We have reached a contradiction, and hence all possible solutions of (\ref{H}) are uniformly bounded in $C(\partial\Omega)$, independently of $\lambda\in[0,1]$.
\\
Let $\bar{B}(0,r)$ denote the closed ball of radius $r$ in $C(\partial\Omega,\mathbb{R}^{k})$.
Now one can apply the Leray-Schauder degree theorem (see \cite{L},\cite{Ma}); the only thing left to show is that
$$deg(PN,\bar{B}(0,r)\cap\ker Q,0)\neq0$$ for large $r>0.$
Now
$deg(PN,\bar{B}(0,r)\cap\ker Q,0)=deg(g,\bar{B}_{r},0),$ where $\bar{B}_{r}$ is the ball in $\mathbb{R}^{k}$ of radius $r$. Since $g(x)\neq 0$ for $|x|_{E}$
large and $deg(\varphi)\neq0$, we have that
$deg(g,\bar{B}_{r},0)\neq 0$ for large $r$.
Therefore $deg(PN,\bar{B}(0,r)\cap\ker Q,0)\neq 0.$
By the Leray-Schauder degree theorem, equation (\ref{H}) has a solution for $\lambda=1.$ Therefore equation (\ref{e1}) has at least one solution.
This proves the theorem.
\end{proof}
We conclude with an example.
\begin{example}
Let $\Omega\subset\mathbb{R}^N$, $N\geq 2$, be a bounded domain with boundary
$\partial \Omega$ of class $C^{\infty}$. Consider
\begin{equation}\label{ex}
\begin{gathered}
-\Delta u + A(x)u = 0 \quad\text{in } \Omega,\\
\frac{\partial u}{\partial \nu}+\frac{u}{1+|u|_{E}^{2}}=h(x) \quad\text{on } \partial\Omega
\end{gathered}
\end{equation}
where $A(x)$ is a positive semidefinite matrix on $\mathbb{R}^{{2}\times{2}}$,
$u=(u_1,u_2)\in \mathbb{R}^{2}$, and $h$ is a continuous $\mathbb{R}^{2}$-valued function on $\partial\Omega$ with
$\int_{\partial\Omega}h(x)\,dx=0$.
Here $g(u)=\frac{u}{1+|u|_{E}^{2}}$, and
$$\displaystyle\lim_{|u|_{E}\to\infty}g(u)=\displaystyle\lim_{|u|_{E}\to\infty}\frac{u}{1+|u|_{E}^{2}}=0,$$
so $g$ vanishes at infinity; clearly $g\in C^{1}(\mathbb{R}^{2},\mathbb{R}^{2})$ and is bounded, with $g(u)\neq0$ for $|u|_{E}$ large.
Therefore $g$ satisfies G1.\\
$$\frac{g(ru_1,ru_2)}{|g(ru_1,ru_2)|}=\frac{g(ru)}{|g(ru)|}=\frac{\frac{ru}{1+|ru|_{E}^{2}}}{\left|\frac{ru}{1+|ru|_{E}^{2}}\right|}=
\frac{u}{|u|_{E}}=u$$
for all $u\in\mathbb{S}$ and $r>0$; therefore G3 holds.\\
Moreover, $\varphi(u)=u$, so that $deg(\varphi)\neq 0$; therefore G4 holds. By Theorem~\ref{th}, problem (\ref{ex}) has at least one solution.
\end{example}
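A quick numerical check of the limits used in this example may be instructive (this is only an illustration, not part of the proof):
\begin{verbatim}
import numpy as np

def g(u):                      # g(u) = u / (1 + |u|^2)
    return u / (1.0 + np.dot(u, u))

u = np.array([0.6, 0.8])       # a point of the unit sphere
for r in (1e1, 1e3, 1e6):
    v = g(r * u)
    # |g(ru)| -> 0 while g(ru)/|g(ru)| -> u
    print(np.linalg.norm(v), v / np.linalg.norm(v))
\end{verbatim}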
\section{Preliminaries}\label{sec1}
This work will employ techniques and terminology from Banach lattice theory. For
terminology which is not explained below, we refer the
reader to~\cite{AB}.
In this work the word ``operator'' will be synonymous
with ``linear operator.'' An operator $T\colon X\to Y$ between two
Banach lattices is {\it positive\/} if $x\ge0$ in $X$ implies $Tx\ge0$ in $Y$.
A positive operator $S\colon X\to X$ on a Banach lattice $X$ is said to
{\bf dominate} another operator $T\colon X\to X$ (in symbols, $S\succ T$) if
\[
|Tx|\le S|x|
\]
for each $x\in X$. If $S$ dominates $T$, we shall also say that $T$ {\it is
dominated\/} by $S$. Every operator dominated by a positive operator is
automatically continuous.
We recall next the notion of a compact-friendly operator that was introduced
in~\cite{AAB1} and that will play an important role in this work.
\begin{definition}\label{def:Com-fr}
A positive operator $B\colon X\to X$ on a Banach lattice
is said to be {\bf compact-friendly} if there exist three non-zero
operators $R, K, A\colon X\to X$ with $R$ and $ K$ positive and $K$
compact satisfying
$$
RB=BR,\quad R\succ A\quad\hbox{and}\quad K\succ A\,.
$$
\end{definition}
Regarding the invariant subspace problem for operators on Banach lattices the
compact-friendly operators seem to be the analogues of Lomonosov operators.
Recall that an operator $T\colon X\to X$ on a Banach space is a
{\bf Lomonosov operator} if there exist non-zero operators $S,K\colon X\to X$
such that {\it $S$ is not a multiple of the identity, $K$ is compact,
$ST=TS$, and $SK=KS$.\/}
The invariant subspace theorems for positive operators obtained in~\cite{AAB1}
(see also~\cite{AAB2}) can be viewed as the Banach lattice analogues of the
following famous invariant subspace theorem of V.~I. Lomonosov.
\begin{theorem}[Lomonosov~\cite{LO}]\label{Lomon}
Every Lomonosov operator $T$ has a non-trivial clo\-sed invariant subspace.
Moreover, if $T$ itself commutes with a non-zero compact operator,
then there exists a non-trivial closed hyperinvariant subspace.
\end{theorem}
Besides compact-friendly operators, we shall work here also with
multiplication operators on spaces of continuous and measurable functions.
If $\Omega$ is a compact Hausdorff space and $\phi\in C(\Omega)$, then
a {\bf multiplication operator} $M_\varphi$ on $C(\Omega)$ is defined by
$M_\varphi f=\phi f$ for each $f\in C(\Omega)$. The function $\phi$ is called
the {\it multiplier\/}.
Similarly, if $X$ is a Banach function space on a measure space
$(\Omega,\Sigma,\mu)$ and $\phi$ is a measurable function,
then a multiplication operator $M_\varphi$ on $X$ is defined by
$M_\varphi f=\phi f$ for each $f\in X$.
Observe that a multiplication operator $M_\phi$ maps $X$ into itself
if and only if the multiplier $\phi$ is an (essentially) bounded function.
So, for the rest of this paper, whenever we deal with a multiplication
operator $M_\phi$ on a Banach function space we assume that the multiplier
$\phi\in L_\infty(\mu)$.
It should be noticed that a multiplication operator is positive if and only
if its multiplier is a non-negative function.
Obviously each multiplication operator $M_\varphi$ has non-trivial
invariant subspaces and, as was observed in~\cite{AAB3},
each multiplication operator is a Lomonosov operator.
Moreover, as we
will prove in the next section (see Theorem~\ref{strips} and
Corollary~\ref{c:strips}), each multiplication operator $M_\varphi$ has
hyperinvariant subspaces of a very simple geometrical form, namely, the
disjoint bands.
Our next definition describes the kind of multipliers that will
be important in our work.
\begin{definition}\label{d:flat}
A continuous function $\phi\colon\Omega\to\hbox{\Bbb R}$ on a topological space has a
{\bf flat} if there exists a non-empty open set $V$ such that $\phi$ is
constant on $V$.
Similarly, a measurable function $\phi\colon \Omega\to\hbox{\Bbb R}$ on a
measure space $(\Omega,\Sigma,\mu)$ is said to have a {\bf flat} if
$\phi$ is constant on some $A\in\Sigma$ with $\mu(A)>0$.
\end{definition}
It was shown in~\cite{AAB3} that a positive multiplication operator commutes
with a non-zero finite rank operator if and only if the multiplier has a flat.
It was then asked whether the flatness condition characterizes also the
compact-friendly multiplication operators. The objective of this work is to
answer this question affirmatively. Namely, the main result of this paper can
be stated as follows.
\begin{theorem}\label{th:MAIN}
A positive multiplication operator $M_\varphi$ on a $C(\Omega)$-space or on a
$L_p(\Omega,\Sigma,\mu)$-space $(1\le p\le \infty)$ is compact-friendly if and
only if the multiplier $\varphi $ has a flat.
\end{theorem}
\section{The commutant of a multiplication operator}\label{sec2}
In this section $X$ will denote a Banach function space on a fixed measure
space $(\Omega,\Sigma,\mu)$. Let $M_\phi\colon X\to X$ be the
multiplication operator with a multiplier $\varphi \in L_\infty(\mu)$.
Not much is known about the commutant of $M_\varphi$. The following
discussion will provide some important insights
into the structure of the commutant. We precede this discussion by
fixing some notation. If $f\colon\Omega\to\hbox{\Bbb R}$ is a function, then its
support, ${\rm Supp}(f)$, is defined by
\[
{\rm Supp}(f)=\{\omega\in\Omega\colon\ f(\omega)\neq0\}\,.
\]
If $A,B\in\Sigma$, then relations $A\subseteq B$ a.e. and $A=B$ a.e.
are understood as usual $\mu$-a.e. For example, $A\subseteq B$ a.e. means
that $\mu(\{\omega\in A\colon\ \omega\notin B\})=0$.
\begin{definition}\label{d:suppt}
Let $T\colon X\to X$ be a continuous operator and let $E\subseteq \Omega $
be a measurable subset of positive measure. We shall say that $T$
{\bf leaves $E$ invariant}, if
\[
x\in X \ \hbox{ and }\ {\rm Supp}(x)\subseteq E\ \hbox{ implies }\
{\rm Supp}(Tx)\subseteq E \ a.e.
\]
\end{definition}
This definition is, of course, motivated by a simple observation
that an operator $T$ leaves a
(measurable) set $E$ invariant if and only if $T$ leaves invariant the band
$$
B_E=\{f\in X\colon\ f=0\,\hbox{ on }\,\Omega\setminus E\}
$$
generated by $E$ in $X$. It is obvious that {\it if an operator
$T\colon X\to X$ leaves invariant the sets $E$ and $F$, then it also
leaves invariant the sets $E\cap F$ and $E\cup F$.\/}
Now let us introduce some more notation.
For each $\alpha\in \hbox{\Bbb R}$, let
$$
E_\alpha = \bigl\{\omega\in\Omega\colon\ \phi(\omega) \ge
\alpha\bigr\}\quad\hbox{and}\quad
E^\alpha =\bigl\{\omega\in\Omega\colon\ \phi(\omega) \le
\alpha\bigr\}\,.
$$
If we need to emphasize that the level set $E_\alpha$ is produced by the
function $\phi$, then we shall write $E_\alpha(\phi)$ instead of $E_\alpha$.
For $\alpha \le\beta$, we also write
$$
E_\alpha^\beta =E_\alpha\cap E^\beta=\bigl\{\omega\in\Omega\colon\ \alpha \le
\phi(\omega) \le \beta\bigr\}\,.
$$
And now we come to a simple but important result asserting that all the bands
in $X$ generated by the level sets introduced above are left invariant
by each operator commuting with $M_\phi$.
\begin{theorem}\label{strips}
Every operator in the commutant of $M_\phi$ leaves invariant all the
sets $E_\alpha$, $E^\alpha$ and $E_\alpha^\beta$.
\end{theorem}
\begin{proof}
Let $R\colon X\to X$ be a bounded operator commuting with $M_\varphi$.
We begin by considering the sets $E^\alpha$.
Assume that $\varphi \ge 0$. First we will verify that $R$
leaves invariant the set $E^\alpha$ with $\alpha=1$, i.e., the set
$$
E^1=E^1(\varphi)=\{\omega\in\Omega\colon\ \varphi(\omega)\le1\}.
$$
To do this, assume by way of contradiction that $R$ does not leave $E^1$
invariant. This means that there exists some function $x\in X$ with
${\rm Supp}(x)\subseteq E^1$ and such that the measurable set
$A=\{\omega\in\Omega\colon\ Rx(\omega)\neq0\ \& \ \varphi(\omega)>1\}$
has positive measure. Pick some $\gamma >1$ such that
$B=\{\omega\in\Omega\colon\ Rx(\omega)\neq0\ \& \
\varphi(\omega)>\gamma\}$ has positive measure.
The commutativity property $RM_\varphi=M_\varphi R$ easily implies
$$
R(\varphi^nx)=\varphi^nRx\eqno(\star)
$$
for each $n$. Let $\|\cdot\|$ denote the norm on $X$. We shall reach a
contradiction by computing the norm of the function in
$(\star)$ in two different ways. On one hand, the hypothesis
${\rm Supp}(x)\subseteq E^1$ and the fact that $0\le \varphi(\omega) \le 1$ on
$E^1$ imply that $|\varphi^nx|\le|x|$, and so
$$
\|R(\varphi^nx)\|\le\|R\|\cdot\|\varphi^nx\|\le\|R\|\cdot\|x\|<\infty\,.
$$
On the other hand, for the element $y=|(Rx)\chi_B| \in X$ we have
$$
0<\gamma^n y\le|\varphi^n (Rx) \chi_B|\le|\varphi^nRx|\,,
$$
whence
$$ 0< \gamma^n\|y\|\le \|\varphi^nRx\|=
\|R(\varphi^nx)\|\le\|R\|\cdot\|x\|<\infty
$$
for each $n$, contradicting the fact that $\gamma>1$. Hence, $R$ leaves
$E^1(\varphi)$ invariant.
Let us verify now that $R$ leaves invariant each $E^\alpha$ with
$\alpha > 0$. Consider $\psi=\alpha^{-1} \varphi$. Obviously
the multiplication operator $M_\psi$ also commutes with $R$ and $E^1(\psi)=
E^\alpha(\varphi)$. By the previous part $R$ leaves $E^\alpha$ invariant.
Since $E^0 =\cap_{\alpha> 0}E^\alpha$ we see that $R$ leaves
$E^0$ invariant as well. Since $\varphi \ge 0$ the set
$E^\alpha =\emptyset$ whenever $\alpha < 0$.
Thus, for $\varphi \ge 0$ we have proved that
$R$ leaves any set $E^\alpha$ invariant. The assumption
made at the beginning of the proof that the multiplier $\varphi$
is nonnegative can be easily disposed of. Indeed, pick any
$t>0$ such that the function
$\psi=\varphi +t {\bf 1}$ is positive.
Obviously $M_\psi$ commutes with $R$ (since
$M_\varphi$ does) and $E^\alpha(\varphi)= E^{\alpha+t}(\psi)$.
By the preceding part $R$ leaves $E^{\alpha+t}(\psi)$, that is
$E^\alpha(\varphi)$, invariant.
Finally notice that $E_\alpha(\varphi)=E^{-\alpha}(-\varphi)$.
This shows that the case of the sets $E_\alpha$ follows immediately from
the case of the sets $E^{\alpha}$ considered above.
\end{proof}
\begin{corollary}\label{c:strips}
If $\phi\in L_\infty(\mu)$ is a non-constant function, then
the multiplication operator $M_\phi$ has a non-trivial hyperinvariant
band. If the $($essential$)$ range of the multiplier $\varphi $ is
an infinite set, then $M_\varphi$ has infinitely many disjoint
hyperinvariant bands.
\end{corollary}
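For a concrete illustration, take $X=L_p[0,1]$ with Lebesgue measure and
$\varphi(\omega)=\omega$. By Theorem~\ref{strips}, for each
$0<\alpha<\beta<1$ the band generated by the strip $E_\alpha^\beta$ is left
invariant by every operator commuting with $M_\varphi$, i.e., it is a
non-trivial hyperinvariant band, and strips corresponding to pairwise
disjoint intervals $[\alpha,\beta]$ produce pairwise disjoint
hyperinvariant bands.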
Consider also the
following three additional types of the level sets associated with
the multiplier $\varphi$:
$$
\{\omega\in\Omega\colon\ \alpha \le \varphi(\omega) < \beta\}, \
\{\omega\in\Omega\colon\ \alpha < \varphi(\omega) < \beta\} \ \hbox{ and }\
\{\omega\in\Omega\colon\ \alpha < \varphi(\omega) \le \beta\}.
$$
It is easy to see that if $R$ is order continuous (and commutes with $M_\varphi$)
then $R$ leaves also each of these sets invariant. In particular this is so
if the norm on $X$ is order continuous. However, quite surprisingly, it may
happen that without this extra assumption the operator $R$ may fail to leave
these latter sets invariant.
Even when $X$ has order continuous norm (and so $R$ leaves
invariant so many mutually disjoint bands) it is not true in general that
$R$ leaves invariant any band.
Furthermore, as we shall see in the next example
$R$ may even fail to be a disjointness preserving operator. (Recall that an
operator $R$ on a vector lattice is said to preserve disjointness if
$R$ carries disjoint vectors to disjoint vectors.)
\begin{itemize}
\item A positive operator $R\colon L_\infty \to L_\infty $ commuting with
$M_\varphi$ need not be disjointness preserving even if $\varphi$ has no flat.
\end{itemize}
\noindent To see this take $\mu$ to be the usual 2-dimensional Lebesgue
measure on $[0,1]\times[0,1]$ and $\varphi(x,y)=y$. Let
$Rf(x,y)=\int_0^1 f(t,y)\;dt$. Then it is easy to see that $R$
commutes with $M_\varphi$ and that the multiplier $\varphi$ has no flat, but $R$ is not
disjointness preserving. [If $\varphi$ has a flat, then the existence
of $R$ as required is obvious].
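For the reader's convenience, here is a sketch of the computations behind
these claims (the indicator functions below are just one convenient choice):
$$
(RM_\varphi f)(x,y)=\int_0^1 y\,f(t,y)\;dt=y\int_0^1 f(t,y)\;dt
=(M_\varphi Rf)(x,y)\,,
$$
so $R$ commutes with $M_\varphi$, and every level set
$\varphi^{-1}(\{\gamma\})=[0,1]\times\{\gamma\}$ has measure zero, so
$\varphi$ has no flat. On the other hand, the disjoint functions
$f=\chi_{[0,\frac12]\times[0,1]}$ and $g=\chi_{[\frac12,1]\times[0,1]}$
satisfy $Rf=Rg=\frac12{\bf 1}$, so $R$ is not disjointness preserving.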
\section{Multiplication operators on $C(\Omega)$-spaces}\label{sec4}
We start with a useful general criterion
for distinguishing between compact-friendly and non-compact-friendly
operators on a Banach lattice with order continuous norm.
\begin{proposition}\label{t:general}
Let $A\colon Y\to Y$ be an operator on a Banach lattice
dominated by a positive compact operator. Then for any norm bounded sequence
$\{e_n\}$ the following two statements are true.
\begin{enumerate}
\item
The sequence $\{Ae_n\}$ has an order bounded subsequence.
\item
If $Y$ has order continuous norm and $\{Ae_n\}$ is disjoint, then
$\|Ae_n\| \to 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Let $K\colon Y\to Y$ be a compact positive operator dominating $A$, i.e.,
$|Ax|\le K|x|$ holds for each $x\in Y$. Since $K$ is a compact operator and
$\{e_n\}$ is a norm bounded sequence, we can extract from $\{K(|e_n|)\}$
a convergent subsequence. Without loss of generality we can assume that
the sequence $\{K(|e_n|)\}$ itself converges in $Y$, that is, there exists
$y\in Y$ such that $K|e_n|\to y$. By passing
to another subsequence if necessary, we can also assume without loss of
generality that $\|K|e_n|-y\|<2^{-n}$ holds for each $n$. Letting
$e=\sum_{n=1}^\infty|K|e_n|-y|$ we see that $e\in Y^+$ and clearly
$|K|e_n|-y|\le e$, whence $K|e_n|\le e+|y|$ for each $n$.
It remains to note that
\[
|Ae_n|\le K|e_n|\le e+|y|
\]
for each $n$.
\vskip .12cm
(2) Assume that $\{Ae_n\}$ is a disjoint sequence and let $\{f_n\}$ be a
subsequence of $\{e_n\}$. By part (1), there exists a subsequence $\{g_n\}$
of $\{f_n\}$ (and hence of $\{e_n\}$) such that the pairwise disjoint
sequence $\{Ag_n\}$ is order bounded. Since $Y$ has order continuous norm,
it follows that $Ag_n\to 0$ in $Y$; see~\cite[Theorem~{12.13}, p.~{183}]{AB}.
Thus, we have shown that every subsequence of $\{Ae_n\}$ has a
subsequence convergent to zero, and consequently $Ae_n\to0$ in $Y$.
\end{proof}
The next theorem is a characterization of the compact-friendly
multiplication operators on $C(\Omega)$-spaces.
\begin{theorem} \label{t:tony}
A positive multiplication operator $M_\varphi$ on a $C(\Omega)$-space is
compact-friendly if and only if the multiplier $\varphi$ has a flat.
\end{theorem}
\begin{proof}
Let $0\le\phi\in C(\Omega)$. If $\phi$ is constant on a non-empty open subset
of $\Omega$, then $M_\phi$ commutes with a non-zero positive rank-one
operator (see~\cite[Theorem~{2.6}]{AAB3}), and so $M_\varphi$ is compact-friendly.
For the converse, assume that $M_\phi$ is compact-friendly, and
consequently there exist non-zero bounded operators $R,K,A\colon C(\Omega)\to
C(\Omega)$ with $R, K$ positive, $K$ compact and such that
$$
M_\phi R=RM_\phi,\ \ R\succ A\ \ \hbox{and}\ \ K\succ A\,.
$$
Taking adjoints, we see that
$$
M_\phi^* R^*=R^*M_\phi^*,\ \ R^*\succ A^*\ \ \hbox{and}\ \
K^*\succ A^*\,.
$$
The following three properties follow in a rather straightforward way.
1) For each $\omega\in \Omega$ the support of the measure
$R^*\delta_\omega$ is contained in the set
$W_\omega=\phi^{-1}(\phi(\omega))$,
where $\delta_\omega$ denotes the unit mass at $\omega$.
This claim is immediate from consideration of the identity
$$
M_\phi^* R^*\delta_\omega=R^*
M_\phi^*\delta_\omega=\phi(\omega)R^*\delta_\omega\,.
$$
2) Since $R\succ A$, it follows immediately from $1)$ that
for each $\omega\in \Omega$ the measure $A^*\delta_\omega$ is also
supported by $W_\omega$.
3) Pick $h\in C(\Omega)$ with $\|h\|=1$ and
$Ah\ne 0$. Next, choose a non-empty open set $U$ on which $|Ah(\omega)|\ge
\epsilon$ for some $\epsilon>0$. Then
for each $\omega\in U$ we have
$\|A^*\delta_\omega\|\ge \epsilon$.
Indeed, to see this, notice that
$$
\|A^*\delta_\omega\|\ge | \langle A^*\delta_\omega,h\rangle|=
|\langle \delta_\omega,Ah\rangle| =|Ah(\omega)|\ge \epsilon.$$
To complete the proof, assume by way of contradiction that
the set $W_\omega$ has an empty interior for each $\omega\in \Omega$.
Then the non-empty open set $U$, chosen in (3) must
meet infinitely many sets $W_\omega$. Pick a sequence $\{\omega_n\}$ in
$U$ with $\phi(\omega_m)\ne \phi(\omega_n)$ if $m\ne n$,
and let $e_n=|A^*\delta_{\omega_n}|$ for each $n$. Then
$\|e_n\|\ge\epsilon$ for each $n$. Furthermore,
since each $e_n$ is supported by the set $W_{\omega_n}$
and the sequence $\{W_{\omega_n}\}$ is pairwise disjoint, the sequence
$\{A^*\delta_{\omega_n}\}$ is also disjoint. However, by
Proposition~\ref{t:general} (which is applicable since the norm in $C(\Omega)^*$
is order continuous) we should have $\|A^*\delta_{\omega_n}\|\to 0$, a
contradiction. This completes the proof of the theorem.
\end{proof}
Since each $L_\infty(\mu)$ space can be represented as $C(\Omega)$
space on its Stone space, the previous theorem implies immediately
the following result.
\begin{theorem}\label{t:AM.const}
A multiplication operator $M_\phi$ on $L_\infty$, where
$\phi\in L_\infty(\mu)$, is compact-friendly if and only if its
multiplier $\phi$ has a flat.
\end{theorem}
\section{Compact-friendly multiplication operators on $L_p$-spaces}\label{sec3}
For the rest of our discussion, $(\Omega,\Sigma,\mu)$
will denote a fixed measure space, and $\|\cdot\|$ will denote the
standard norm on $L_p(\mu)$. The main result in this section
is the following $L_p$-version of Theorems~\ref{t:tony}
and~\ref{t:AM.const}.
\begin{theorem}\label{t:oc.const}
A multiplication operator $M_\phi$ on an arbitrary $L_p(\mu)$-space,
where $0\le \phi\in L_\infty(\mu)$ and $1\le p<\infty$,
is compact-friendly if and only if $\varphi$ has a flat.\footnote{We do
not know if this theorem is true for arbitrary Banach function
spaces.}
\end{theorem}
\begin{proof}
It was shown in~\cite{AAB3} that if $\phi$ has a flat, then $M_\phi$
commutes with a positive rank-one operator---and hence $M_\phi$ is
compact-friendly.
In the converse direction, assume that $M_\phi$ is compact-friendly and
that, contrary to our claim, $\phi$ is not constant on any set
of positive measure. Pick three non-zero bounded operators
$R,A,K\colon L_p(\mu)\to L_p(\mu)$ such that $R$ and $ K$ are
positive, $K$ is compact and
$$
RM_\phi=M_\phi R,\quad R\succ A\quad\hbox{and}\quad K\succ A\,.
$$
To obtain a contradiction, it will suffice (in view of
Proposition~\ref{t:general}) to construct a sequence $\{e_n\}$ in $L_p(\mu)$
satisfying the following properties:
\begin{itemize}
\item[(i)]
$\|e_n\| =1$ for each $n$,
\item[(ii)]
$\{Ae_n\}$ is a disjoint sequence, and
\item[(iii)]
$\|Ae_n\| \ge \delta$ for each $n$ and for some $\delta >0$.
\end{itemize}
The construction of such a sequence is quite involved and will be presented
in a series of lemmas below.
\end{proof}
The rest of this section will be devoted to construction of a sequence
$\{e_n\}$ that satisfies the properties (i), (ii) and (iii) stated at the
end of the proof of Theorem~\ref{t:oc.const}. We begin with some preliminary
comments.
\begin{enumerate}
\item
The assumption that $\phi$ does not have a flat means that for each
$\gamma\ge0$ the set $E_\gamma^\gamma=
\{\omega\in\Omega\colon\ \phi(\omega)=\gamma\} =\phi^{-1}(\{\gamma\})
$
has measure zero. In particular, this implies that for any
$\gamma \in (\alpha,\beta)$ the level sets $E_\alpha^\gamma$ and $
E_\gamma^\beta$ are essentially disjoint (in the sense that $E_\alpha^\gamma
\cap E_\gamma^\beta=E_\gamma^\gamma$ is a set of measure zero).
\item
By Theorem~\ref{strips} the operator $R$ leaves all the level sets of
$\phi$ invariant, and so does the operator
$A$ since it is dominated by $R$.
\item
Since $A\neq 0$ there exists some $x\in L_p(\mu)$ with $y=Ax\neq 0$. The
functions $x$ and $y$ will be fixed throughout the discussion in this section.
If we let $\alpha_0=0$ and $\beta_0=\|\phi\|_\infty$, then obviously
$E_{\alpha_0}^{\beta_0}=\Omega$ and so
\[
{\rm Supp}(x)\subseteq E_{\alpha_0}^{\beta_0}\,.
\]
\end{enumerate}
\begin{lemma}\label{l:norm}
There exists some $\gamma_0 \in (\alpha_0, \beta_0)$ such that
$$
\bigl\|y\chi_{E_{\alpha_0}^{\gamma_0}}\bigr\| =
\bigl\|y\chi_{E_{\gamma_0}^{\beta_0}}\bigr\| = c \|y\|\,,
$$
where $c=1/{\root p \of 2}$.
\end{lemma}
\begin{proof}
Consider the function $N\colon[\alpha_0,\beta_0] \to \hbox{\Bbb R}$ defined by
$$
N(\gamma) = \|y\chi_{E_{\alpha_0}^\gamma}\|\,.
$$
Clearly, $N(\alpha_0)=0, N(\beta_0) =\|y\|$, and the function $N$ is
continuous by virtue of the ``no flats'' assumption about $\varphi$.
Therefore, there exists some
$\gamma_0 \in (\alpha_0, \beta_0)$ such that $N(\gamma_0)=c \|y\|$.
Since $y\chi_{E_{\alpha_0}^{\gamma_0}}+ y\chi_{E_{\gamma_0}^{\beta_0}} =y$,
and since the sets $E_{\alpha_0}^{\gamma_0}, E_{\gamma_0}^{\beta_0}$ are
essentially disjoint, the $p$-additivity of the norm in $L_p(\mu)$ implies
that
$$
\|y\chi_{E_{\alpha_0}^{\gamma_0}}\|^p +\|y\chi_{E_{\gamma_0}^{\beta_0}}\|^p
=\|y\|^p.
$$
Consequently,
$$
\|y\chi_{E_{\gamma_0}^{\beta_0}}\|^p =\|y\|^p
-\|y\chi_{E_{\alpha_0}^{\gamma_0}}\|^p=\|y\|^p -c^p\|y\|^p =
\frac{1}{2}\|y\|^p= c^p\|y\|^p,
$$
that is, $\|y\chi_{E_{\gamma_0}^{\beta_0}}\| =c\|y\|$, as required.
\end{proof}
Using the sets $E_{\alpha_0}^{\gamma_0}$ and $E_{\gamma_0}^{\beta_0}$
we can represent $x$ as
$$
x=x\chi_{E_{\alpha_0}^{\gamma_0}}\oplus
x\chi_{E_{\gamma_0}^{\beta_0}}\,,
$$
and denote by $a_1$ the summand with smaller (or equal)
norm. The other summand will be denoted by $b_1$. So, if
$\|x\chi_{E_{\alpha_0}^{\gamma_0}}\|\le \|x\chi_{E_{\gamma_0}^{\beta_0}}\|$,
then we let $a_1= x\chi_{ E_{\alpha_0}^{\gamma_0}}$ and
$b_1= x\chi_{E_{\gamma_0}^{\beta_0}}$, and thus
$$
x= a_1\oplus b_1\,.
$$
Having chosen $a_1$ and $b_1$, we let
$$
u_1=y\chi_{E_{\alpha_0}^{\gamma_0}}\quad\hbox{and}\quad
v_1=y\chi_{E_{\gamma_0}^{\beta_0}}
$$
and also $\alpha_1=\alpha_0$ and $\beta_1=\gamma_0$.
(However, if
$\|x\chi_{E_{\gamma_0}^{\beta_0}}\| <\|x\chi_{E_{\alpha_0}^{\gamma_0}}\|$,
then $a_1= x\chi_{E_{\gamma_0}^{\beta_0}}$,
$b_1= x\chi_{E_{\alpha_0}^{\gamma_0}}$,
and we let $u_1=y\chi_{E_{\gamma_0}^{\beta_0}}$ and
$v_1=y\chi_{E_{\alpha_0}^{\gamma_0}}$, so that the functions $u_1$ and
$a_1$ are supported by the same set. In this case we accordingly
choose $\alpha_1=\gamma_0$ and $\beta_1=\beta_0$.)
In accordance with our construction the support sets
of $u_1$ and $v_1$ are the disjoint sets $E_{\alpha_0}^{\gamma_0}$ and
$E_{\gamma_0}^{\beta_0}$ respectively, which are left invariant by $A$.
The same disjoint sets are the support sets of the elements
$a_1$ and $b_1$. This implies (in view of the equality $x=a_1\oplus b_1$)
that
$y=Ax =Aa_1\oplus Ab_1$, and therefore
$$
Aa_1=u_1\quad\hbox{and}\quad Ab_1=v_1\,.
$$
In the next lemma, we present some simple estimates on the norms of
$a_1$ and $b_1$.
\begin{lemma}\label{l:norm.est}
For the functions $a_1$ and $b_1$ introduced above, we have:
\[
\textstyle
c\frac{\|y\|}{\|A\|}\le \|a_1\| \le c \|x\|\qquad\hbox{and}\qquad
\|b_1\| \le c_1 \|x\|\,,
\]
where
$\displaystyle c=1/{\root p \of 2}\,$
and
$\displaystyle c_1=\bigl[1-\bigl( \frac{c \|y\|}{\|A\|\cdot
\|x\|}\bigr)^p\bigr]^{1/p}>0$.
\end{lemma}
\begin{proof}
Since $a_1\oplus b_1=x$ and $\|a_1\| \le\|b_1\|$, the $p$-additivity of
the norm yields
$$
2\|a_1\|^p\le \|a_1\|^p +\|b_1\|^p =\|x\|^p\,,
$$
whence $\|a_1\|\le c\|x\|$.
From $u_1=Aa_1$ we have $\|u_1\|\le \|A\|\|a_1\|$.
So, taking into account that (in view of Lemma~\ref{l:norm})
$\|u_1\|=\|v_1\|=c\|y\|$, we see that
$$
\displaystyle
c\frac{\|y\|}{\|A\|}=\frac{\|u_1\|}{\|A\|}\le\|a_1\|\,.
$$
For the last inequality, note that
\begin{eqnarray*}
\|b_1\|^p &=&\|x\|^p -\|a_1\|^p \\
&\le&
\|x\|^p - \frac{c^p\|y\|^p}{\|A\|^p} \\
&=&
\|x\|^p \bigl[ 1- \frac{c^p\|y\|^p}{\|A\|^p \|x\|^p}\bigr] =c_1^p\|x\|^p\,,
\end{eqnarray*}
and the proof of the lemma is finished.
\end{proof}
The rest of the construction must be done inductively. For instance,
at the next step we will apply the above described procedure to the functions
$u_1, a_1$ satisfying $u_1=Aa_1$ and to the interval $[\alpha_1,\beta_1]$. That
is, we take
$u_1$ for $y$ and
$a_1$ for $x$ and we repeat the same procedure, keeping in mind that the
support set of either of these two functions lies in
$E_{\alpha_1}^{\beta_1}$.
Afterwards, we will have $u_1=u_2\oplus v_2$ with $\|u_2\| =\|v_2\|
=c\|u_1\|$ and with the support sets of these new functions
also invariant under $A$.
Next we will have $a_1=a_2\oplus b_2$ with $\|a_2\|\le \|b_2\|$
and
$Aa_2=u_2$, $Ab_2=v_2$ and with the corresponding estimates on the
norms of $a_2,b_2$. The precise details of this inductive construction
can be formulated as follows.
Assume that we have already constructed the functions $u_k$,$v_k$,$a_k$ and
$b_k$ and scalars $\alpha_{k-1}<\gamma_{k-1} <\beta_{k-1}$ satisfying the
following conditions:
\begin{eqnarray*}
\|u_k\|&=&\|v_k\|=c\|u_{k-1}\|=c^k\|y\|\\
u_{k-1}&=& u_k\oplus v_k \\
{\rm Supp}(u_k) &\subseteq& E_{\alpha_{k-1}}^{\gamma_{k-1}}\\
{\rm Supp}(v_k) & \subseteq& E_{\gamma_{k-1}}^{\beta_{k-1}}\\
a_k &=& a_{k-1}\chi_{E_{\alpha_{k-1}}^{\gamma_{k-1}}} \\
b_k &=& a_{k-1}\chi_{E_{\gamma_{k-1}}^{\beta_{k-1}} } \\
\hskip 4cm c^k{\frac{\|y\|}{\|A\|}}&\le&\|a_k\| \le c^k\|x\| \hskip 5.7cm (1) \\
\hskip 4cm \|a_k\|&\le&\|b_k\|\le c_1 c^{k-1}\|x\| \hskip 5cm (2)
\end{eqnarray*}
For this choice of $a_k$ and $b_k$ we let $\alpha_k=\alpha_{k-1}$
and $\beta_k=\gamma_{k-1}$.
Now we are ready to describe the induction step
to produce $u_{k+1}$, $v_{k+1}$, $a_{k+1}$ and $b_{k+1}$, and the scalars
$\alpha_{k+1}$ and $\beta_{k+1}$.
Namely, to the elements $u_k$, $a_k$, satisfying $u_k =Aa_k$, we apply the
very first step described in detail above. As a consequence, we find first
the scalar $\gamma_k \in (\alpha_k,\beta_k)$ such that
the functions $u_k\chi_{ E_{\alpha_k}^{\gamma_k}}$
and $u_k\chi_{ E_{\gamma_k}^{\beta_k}}$
have the same norm
$$
\|u_k\chi_{ E_{\alpha_k}^{\gamma_k}}\|
=\|u_k\chi_{ E_{\gamma_k}^{\beta_k}}\|
=c\|u_k\|.
$$
Next we consider the functions $a_k\chi_{ E_{\alpha_k}^{\gamma_k}}$
and $a_k \chi_{ E_{\gamma_k}^{\beta_k}}$ and denote by $a_{k+1}$
the one with the smaller norm---if both have the same norm, $a_{k+1}$
can be either one. The other function is denoted by $b_{k+1}$. Without
loss of generality, we can assume that
$a_{k+1}= a_k\chi_{ E_{\alpha_k}^{\gamma_k}}$. Subsequently, we let
$\alpha_{k+1} = \alpha_k$ and $\beta_{k+1} =\gamma_k$.
(Recall however, that if
$\|a_k\chi_{ E_{\gamma_k}^{\beta_k}}\| <
\|a_k\chi_{E_{\alpha_k}^{\gamma_k}}\|$, then
$a_{k+1}= a_k\chi_{ E_{\gamma_k}^{\beta_k}}$, and accordingly
$\alpha_{k+1} = \gamma_k$ and $\beta_{k+1} =\beta_k$.)
We are ready to verify now that the functions $a_{k+1}$ and $b_{k+1}$
satisfy the desired estimates.
\begin{lemma}\label{l:induct}
The functions $a_{k+1},b_{k+1}$ constructed above satisfy the following
inequalities:
$$
c^{k+1}\frac{\|y\|}{\|A\|} \le \|a_{k+1}\| \le
c^{k+1}\|x\|\qquad\hbox{and}\qquad
\|b_{k+1}\| \le c_1c^k\|x\|\,.
$$
\end{lemma}
\begin{proof}
By Lemma~\ref{l:norm.est} we have $\|a_{k+1}\|\le c\|a_k\|$.
This and the right
inequality in (1) imply that
$\|a_{k+1}\|\le c^{k+1}\|x\|$. The equalities $Aa_{k+1}= u_{k+1}$ and
$\|u_k\|=c^k\|y\|$ imply
$$
\|a_{k+1}\| \ge \frac{\|u_{k+1}\|}{\|A\|}=
c\frac{\|u_k\|}{\|A\|} =
c^{k+1} \frac{\|y\|}{\|A\|}\,.
$$
Finally, we use the identity $a_{k+1}\oplus b_{k+1}= a_k$
and again the above estimate $\|a_k\|\le c^k\|x\|$
to get:
\begin{eqnarray*}
\|b_{k+1}\|^p &=& \|a_k\|^p - \|a_{k+1}\|^p \le
(c^{k} \|x\|)^p -\|a_{k+1}\|^p \\
&\le&
c^{kp} \|x\|^p - \bigl(c^{k+1} \frac{\|y\|}{\|A\|}\bigr)^p\\
&=&
c^{kp} \|x\|^p\bigl[1 - \bigl(\frac{c\|y\|}{\|A\|\|x\|}\bigr)^p\bigr]=
c_1^p c^{kp} \|x\|^p\,.
\end{eqnarray*}
This implies $\|b_{k+1}\| \le c_1 c^k \|x\|$, as desired.
\end{proof}
Using the sequence $\{b_k\}$ and the estimates obtained so far,
we can finally produce a sequence $\{e_k\}$ satisfying the
properties required in the proof of Theorem~\ref{t:oc.const}.
\begin{lemma}\label{l:final}
If $e_n=\frac{b_n}{\|b_n\|}$, then the sequence $\{e_n\}$ satisfies the
following properties:
\begin{itemize}
\item[(i)]
$\|e_n\| =1$ for each $n$,
\item[(ii)]
$\{Ae_n\}$ is a disjoint sequence, and
\item[(iii)]
$\|Ae_n\| \ge \delta$ for each $n$ and for some $\delta >0$.
\end{itemize}
\end{lemma}
\begin{proof}
Since, by their definition, the vectors $b_n$ are pairwise disjoint
and have the sets $E_{\gamma_{n-1}}^{\beta_{n-1}}$ (which are disjoint
and invariant under our operator $A$) as their support sets, we see
that the vectors
$Ab_n$, $n=1,2,\ldots$, are also pairwise disjoint. Now
recalling that $Ab_n=v_n$ and using the right
inequality in (2)
we can easily estimate $\|Ae_n\|$:
\begin{eqnarray*}
\textstyle
\|Ae_n\|&=&\frac{\|Ab_n\|}{\|b_n\|} =\frac{\|v_n\|}{\|b_n\|}
=c^n\frac{\|y\|}{\|b_n\|}\\
\textstyle
&\ge&
\frac{c^n}{c_1c^{n-1}} \frac{\|y\|}{ \|x\|} =
\frac{c}{c_1}\cdot\frac{\|y\|}{\|x\|}\,.
\end{eqnarray*}
This completes the proof.
\end{proof}
\section{Introduction}
Since the discovery of the $J/\Psi$, many charmonium and charmonium-like states have been observed~\cite{pdg12}. Most of these states are confirmed as $c\bar c$ charmonium states, while some of them do not fit the predicted features of $c\bar c$ charmonium. In particular, in the past few years, some neutral ``X, Y'' and charged ``Z'' resonances which cannot be simply accommodated in the $c\bar c$ picture have been observed and explored~\cite{pdg12}. How to understand and identify these resonances is a big challenge.
Several years ago, the Belle Collaboration observed a significant enhancement with mass $M=4008\pm40^{+114}_{-28}$ MeV and width $\Gamma=226\pm44\pm87$ MeV when measuring the cross section for $e^+e^-\to \pi^+\pi^-J/\psi$~\cite{belle2}. From its production, $Y(4008)$ has $J^{PC}=1^{--}$. There is a large uncertainty on the measured mass. $Y(4008)$ was not confirmed by the BaBar Collaboration~\cite{babar}.
Based on some analyses, the possibility that $Y(4008)$ is the $\psi(3^3S_1)$ or a $D^\star \bar D^\star$ molecular state is studied in Ref.~\cite{Xiangliu08}. From the mass calculated with a heavy quark-antiquark potential, $Y(4008)$ is suggested to be the $\psi(3^3S_1)$~\cite{BK}. A study within a one-boson-exchange model~\cite{ding} does not support the interpretation of $Y(4008)$ as a $D^*\bar D^*$ molecule. In order to identify $Y(4008)$, it is interesting to study its strong decays in detail.
In fact, there is a $\psi(4040)$ which is commonly believed to be the $J^{PC}=1^{--}$ $\psi(3^3S_1)$~\cite{pdg12,TSE,swanson}. $\psi(4040)$ has mass and width
\begin{equation}
M=4039\pm1~\rm{MeV}, ~\Gamma=80\pm10 ~\rm{MeV}.
\end{equation}
The measured mass and total width of $\psi(4040)$ are consistent with theoretical predictions~\cite{TSE,swanson}.
Thus there are two states, $\psi(4040)$ and $Y(4008)$, which are both close to the $D^*\bar{D}^*$ threshold but have different total decay widths. Even though the strong decays of $\psi(4040)$ within the $^3P_0$ model have already been calculated in Ref.~\cite{TSE}, in order to exhibit the difference and allow a direct comparison, it is interesting to study the strong decays of $\psi(4040)$ and $Y(4008)$ in the $^3P_0$ model at the same time.
The paper is organized as follows. After the introduction, the $^3P_0$ model is briefly reviewed and possible strong decay channels and decay amplitudes of the $\psi(3^3S_1)$ state are presented in Sec.II. In Sec. III, the numerical results in the $^3P_0$ model are obtained. The last section is devoted to a simple discussion and summary.
\section{$^3P_0$ model and possible charmonium strong decays of $\psi(4040)$ and $Y(4008)$}
Up to now, many strong decay models have been developed to describe the transition of hadrons to open-flavor final states. The $^3P_0$ model~\cite{micu1969,yaouanc1,yaouanc2} was first proposed by Micu~\cite{micu1969}, and further developed by the Orsay Group~\cite{yaouanc1,yaouanc2}. In the model, the created quark-antiquark pair is assumed to carry the vacuum quantum numbers $J^{PC}=0^{++}$. Although the intrinsic mechanism and the relation to Quantum Chromodynamics are not very clear, the model is widely employed to study the OZI-allowed strong decays of a meson into two other mesons, as well as the two-body strong decays of baryons and other hadrons~\cite{capstick,PE,ackleh,TFPE,TNP,FE}.
\begin{figure}
\begin{center}
\includegraphics[height=4cm,angle=-180]{jjs}
\caption{The decay process of $A\Rightarrow B+C$ in the $^3P_0$ model~\cite{ZXX}.}
\end{center}
\end{figure}
A meson decay process $A\Rightarrow B+C$ is shown in Fig. 1. In the nonrelativistic limit, the transition operator is written as
\begin{eqnarray}\label{br1}
T=-3\gamma \sum_{m} \langle1m;1-m|00\rangle\int d\textbf{k}_{3}d\textbf{k}_{4}\delta^{3}( \textbf{k}_{3}+\textbf{k}_{4})\nonumber \\
\times y_{1m}(\frac{\textbf{k}_{3}-\textbf{k}_{4}}{2})\chi _{1, - m}^{34}\varphi^{34}_{0}\omega^{34}_{0} b^{\dagger}_{3i}( \textbf{k}_{3})d^{\dagger}_{4j}( \textbf{k}_{4})
\end{eqnarray}
where $i$ and $j$ denote the color indices for the $q\bar q$ pair. $\varphi^{34}_{0}=(u\bar{u}+d\bar{d}+s\bar{s})/\sqrt{3}$ and $\omega^{34}_{0}=\delta_{ij}$ are the flavor and color singlet wave functions of the $q\bar q$ pair, respectively. $\chi _{1, - m}^{34}$ is the spin triplet. $y_{1m}(\textbf{k})=|\textbf{k}|\times Y_{1m}(\Omega)$ is the solid harmonic polynomial corresponding to the p-wave quark pair. The dimensionless constant $\gamma$ indicates the strength of the quark pair creation from the vacuum. Therefore, the helicity amplitude of the process $A\Rightarrow B+C$ reads as
\begin{widetext}
\begin{eqnarray}\label{maM}
\mathcal{M}^{M_{J_A } M_{J_B } M_{J_C }} &=& \sqrt {8E_A E_B E_C } \gamma \sum_{M_{L_A } ,M_{S_A } ,M_{L_B } ,M_{S_B } ,M_{L_C } ,M_{S_C } ,m}\langle {1m;1 - m}|{00} \rangle \nonumber \\
&&\times \langle {L_A M_{L_A } S_A M_{S_A } }| {J_A M_{J_A } }\rangle \langle L_B M_{L_B } S_B M_{S_B }|J_B M_{J_B } \rangle\langle L_C M_{L_C } S_C M_{S_C }|J_C M_{J_C }\rangle \nonumber \\
&& \times\langle\varphi _B^{13} \varphi _C^{24}|\varphi _A^{12}\varphi _0^{34} \rangle
\langle \chi _{S_B M_{S_B }}^{13} \chi _{S_C M_{S_C } }^{24}|\chi _{S_A M_{S_A } }^{12} \chi _{1 - m}^{34}\rangle I_{M_{L_B } ,M_{L_C } }^{M_{L_A } ,m} (\vec{K})
\end{eqnarray}
where $E_A=m_A$, $E_B =\sqrt {M_B^2 + \vec{K}_B^2 }$ and $E_C =\sqrt {M_C^2+ \vec{K}_C^2 }$ are the total energies of mesons $A$, $B$ and $C$. $\langle\varphi _B^{13} \varphi _C^{24}|\varphi _A^{12}\varphi _0^{34} \rangle $ and $\langle \chi _{S_B M_{S_B }}^{13} \chi _{S_C M_{S_C } }^{24}|\chi _{S_A M_{S_A } }^{12} \chi _{1 - m}^{34}\rangle$ are the matrix elements of the flavor wave functions and spin wave functions, respectively.
$I_{M_{L_B } ,M_{L_C } }^{M_{L_A } ,m} (\vec{K})$ is a spatial integral:
\begin{eqnarray}\label{I}
I_{M_{L_B } ,M_{L_C } }^{M_{L_A } ,m} (\vec{K}) &=& \int d \vec{k}_1 d \vec{k}_2 d \vec{k}_3 d \vec{k}_4 \delta ^3 (\vec{k}_1 + \vec{k}_2)\delta ^3 (\vec{k}_3+ \vec{k}_4)\delta ^3 (\vec{k}_B- \vec{k}_1- \vec{k}_3 )\nonumber \\
&&\times \delta ^3 (\vec{k}_C- \vec{k}_2 -\vec{k}_4) \Psi _{n_B L_B M_{L_B } }^* (\vec{k}_1 ,\vec{k}_3)\Psi _{n_cL_C M_{L_c}}^* (\vec{k}_2 ,\vec{k}_4)\nonumber \\
&& \times \Psi _{n_A L_A M_{LA}} (k_1 ,k_2 )Y _{1m}\left(\frac{\vec{k_3}-\vec{k}_4}{2}\right).
\end{eqnarray}
Using the Jacob-Wick formula~\cite{JW}, the helicity amplitude can be transformed into the partial wave amplitude:
\begin{eqnarray}
\mathcal{M}^{JL} (A \to BC) &=& \frac{{\sqrt {2L + 1} }}{{2J_A + 1}}\sum_{M_{J_B } ,M_{J_C } } \langle {L0JM_{J_A } } |{J_A M_{J_A } }\rangle \nonumber \\
&&\times \left\langle {J_B M_{J_B } J_C M_{J_C } } \right|\left. {JM_{J_A } } \right\rangle \mathcal{M}^{M_{J_A } M_{J_B } M_{J_C } } (\vec{K})
\end{eqnarray}
\end{widetext}
where $\vec{J}=\vec{J_B}+\vec{J_C}$, $\vec{J_A}=\vec{J_B}+\vec{J_C}+\vec{L}$, $M_{J_A}=M_{J_B}+M_{J_C}$.
The decay width is thus obtained as
\begin{eqnarray}
\Gamma = \pi ^2 \frac{|\vec{K}|}{M_A^2}\sum_{JL} |{\mathcal{M}^{JL}}|^2
\end{eqnarray}
where $|\vec{K}|$ is the momentum of the daughter meson in the center-of-mass frame of the initial meson $A$
\begin{eqnarray}
|\vec{K}|= \frac{{\sqrt {[m_A^2 - (m_B - m_C )^2 ][m_A^2 - (m_B + m_C )^2 ]} }}{{2m_A }}.
\end{eqnarray}
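For orientation, in the equal-mass case $m_B=m_C=m_D$ this reduces to
$|\vec{K}|=\sqrt{m_A^2/4-m_D^2}$; taking $m_A=4039$ MeV and the neutral
$D$ mass $m_D=1864.84$ MeV from Table. II gives
$|\vec{K}|\simeq 775$ MeV for $\psi(4040)\to D^0\bar{D}^0$.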
With these formulas in hand, we proceed with the study of the strong decays of $\psi(4040)$ and $Y(4008)$. $\psi(4040)$ and $Y(4008)$ have the same quantum numbers $J^{PC}=1^{--}$. Once either of them is assigned as the $1^{--}$ $\psi(3^3S_1)$ state, all open-charm strong decay modes allowed by the OZI rule for a state above the $D^*\bar{D}^*$ threshold are given in Table. I. Accordingly, the decay amplitudes and the detailed decay channels are presented.
\begin{table}
\caption{The allowed open-charm strong decays of $1^{--}$ $\psi({3^{3}S_{1}})$ for $\psi(4040)$ and $Y(4008)$, where $\varepsilon=\gamma\sqrt{E_AE_BE_C}$.} \label{table-1}
\begin{tabular}{cccccccc}
\hline\hline
State & Decay mode & Decay amplitude & Decay channel\\
\hline
$ $ & $0^{-}+ 0^{-}$ & $\mathcal{M}^{00}=-\frac{\sqrt3}{18}\gamma\sqrt\varepsilon I_{00}$ & $D\bar{D},D_s\bar{D}_s$ \\
$\psi(3^{3}S_{1})$ & $0^{-}+ 1^{-}$ & $\mathcal{M}^{11}=-\frac{\sqrt6}{18}\gamma\sqrt\varepsilon I_{00}$ & $D\bar{D}^\star/\bar{D}D^\star$\\
$ $ & $1^{-}+1^{-}$ & $\mathcal{M}^{21}=-\frac{\sqrt5}{9}\gamma\sqrt\varepsilon I_{00}$ & $D^\star \bar{D}^\star$ \\
\hline\hline
\end{tabular}
\end{table}
For the flavor matrix element $\langle\varphi _B^{13} \varphi _C^{24}|\varphi _A^{12}\varphi _0^{34} \rangle$, there are several definitions which give different numbers. In our calculation, $\langle\varphi _B^{13} \varphi _C^{24}|\varphi _A^{12}\varphi _0^{34} \rangle=\frac{1}{\sqrt3}$ is chosen.
\section{Numerical results}
In order to get the numerical results within the $^3P_0$ model, several parameters are chosen as follows. The masses of constituent quarks are taken as $m_u=m_d=0.22~\rm{GeV}$, $m_s=0.419~\rm{GeV}$ and $m_c=1.6~\rm{GeV}$~\cite{XZZ}. The masses of relevant charmed mesons~\cite{pdg12} are listed in Table. II, where $(\pm)$ indicates the charged mesons and $(0)$ indicates the charge neutral mesons.
\begin{table*}
\begin{center}
\caption{The relevant masses and $R$ values of the charmed mesons used in our calculation}
\begin{tabular}{cccccc}
\hline
\hline
Meson &$D $ &$D^\ast$ &$D_s$&$D_s^\ast$ \\
\hline
Mass (MeV) & 1869.62 ($\pm$), 1864.84 (0) &2010.29 ($\pm$), 2006.97 (0) &1968.49 ($\pm$) &2112.3 ($\pm$) \\
R($\rm{GeV}^{-1}$)~\cite{YZJ,PRP}&1.52& 1.85& 1.41& 1.69\\
\hline\hline
\end{tabular}
\label{table21}
\end{center}
\end{table*}
There are two other important parameters in the $^3P_0$ model: the strength of quark pair creation $\gamma$ and the $R$ value in the simple harmonic oscillator (SHO) wave function. Because of color saturation, the color matrix element is a constant and can be absorbed into the dimensionless constant $\gamma$. In our calculation, $\gamma$ is chosen as $\gamma=6.3$~\cite{XZZ,YZJ,li,zhang} and the strength of $s\bar s$ pair creation as $\gamma_{ss}=\gamma/\sqrt3$~\cite{yaouanc2}. The chosen $\gamma=6.3$ differs by a factor $\sqrt{96\pi}$ from that in Ref.~\cite{TSE}. The $R$ value in the SHO wave function can be obtained from the Schr\"odinger equation within the potential model~\cite{SR}.
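For orientation, $\sqrt{96\pi}\simeq 17.4$, so our $\gamma=6.3$ corresponds to $6.3/\sqrt{96\pi}\simeq 0.36$ in the convention of Ref.~\cite{TSE}.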
In general, there are two ways to choose $R$: a constant around $2$ $\rm{GeV}^{-1}$~\cite{PE,TSE,chen} and an effective varying value~\cite{FE,li}. In this paper, an effective $R$ is chosen and the suitable region of $R$ is fixed by $\psi(4040)$. At that $R$, the strong decay widths and relevant ratios of $Y(4008)$ are investigated. Of course, the numerical results depend on $R$. To learn this dependence, the variation of our results with $R$ is also presented.
\subsection{$\psi$(4040)}
\begin{figure}
\begin{center}
\includegraphics[height=4.3cm]{fig1.eps}
\includegraphics[height=4.3cm]{fig2.eps}
\caption{(color online)(a) Possible partial strong decay widths of $\psi(4040)$ versus $R$; (b) The total strong decay width of $\psi(4040)$ versus $R$.}
\end{center}
\end{figure}
Since $\psi(4040)$ is commonly believed to be the $\psi(3^3S_1)$, the variation of its decay widths for the different modes with $R_A$ is shown in Fig. 2(a). The variation of the total decay width of $\psi(4040)$ with $R_A$ is presented in Fig. 2(b). Using the PDG value $\Gamma=0.08\pm0.01$ GeV~\cite{pdg12}, three horizontal lines are drawn in the figure to indicate the lower, central and upper values of the total width of $\psi(4040)$. $R_A$ (corresponding to the initial meson $A$) is therefore fixed by the three lines to the region from $2.65$ to $2.82$ GeV$^{-1}$ with the central value $2.74$ GeV$^{-1}$. At $R_A=2.74$ GeV$^{-1}$, the widths of all possible open-flavor strong decay channels are calculated and given in Table. III. As a comparison, the results of Ref.~\cite{TSE} are also listed. Obviously, the dominant decays of $\psi(4040)$ are the $D\bar{D}$, $D^*\bar{D}$ and $D^*\bar{D}^*$ channels.
\begin{table*}
\centering
\caption{Open-flavor strong decays of $\psi(4040)$ at universal $R_A=2.74$ GeV$^{-1}$ (in MeV)}
\label{table}
\begin{tabular*}{16cm}{@{\extracolsep{\fill}}ccccccc}
\hline
\hline
Decay Channels & $D\overline{D}$ & $D_s\overline{D}_s$ & $D\overline{D}^*/\overline{D}D^*$ & $D^*\overline{D}^*$ & $\Gamma(total)_{thy}$ & $\Gamma(total)_{expt}$ \\
\hline
Ref.~\cite{TSE} & 0.1 & 7.8 & 33 & 33 & 74 & 80$\pm$10 \\
Our results & 30.44 & 1.02 & 47.67 & 0.87 & fixed point & 80$\pm$10 \\
\hline
\hline
\end{tabular*}
\end{table*}
Unlike the decay widths, the ratios of the decay widths are less sensitive to the uncertainties of the $^3P_0$ model. Therefore, some relevant ratios are calculated and presented in Table. IV. The experimental data are those from PDG~\cite{pdg12}.
Except for $\frac{\Gamma(D^*(2007)^0\bar{D}^*(2007)^0)}{\Gamma(D^*(2007)^0\bar{D}^0+c.c.)}$ and $\frac{\Gamma(D^0\bar{D}^0)}{\Gamma(D^*(2007)\bar{D}^0+c.c.)}$ (measured in 1977~\cite{gold}), our results are consistent with experiments. Besides, the $\mathcal{BR}(\psi(4040)\Rightarrow D\bar{D})$ is $37.5^{+3.9}_{-3.1}\%$, which is also consistent with the BABAR data $(31.2\pm5.3)\%$~\cite{BaBar} within the experimental uncertainty. Ref.~\cite{HX} obtained $\mathcal{BR}(\psi(4040) \Rightarrow D\bar{D})=(25.3\pm4.5)\%$.
\begin{table*}[htbp]
\centering
\caption{Relevant ratios of $\psi(4040)$ at $R_A=2.74$ GeV$^{-1}$}
\begin{tabular}{cccccc}
\toprule
Ratios & $\frac{\Gamma(D\bar{D})}{\Gamma(D^*\bar{D}+c.c.)}$ & $\frac{\Gamma(D^*\bar{D}^*)}{\Gamma(D^*\bar{D}+c.c.)}$ & $\frac{\Gamma(D^*(2007)^0\bar{D}^*(2007)^0)}{\Gamma(D^*(2007)^0\bar{D}^0+c.c.)}$ & $\frac{\Gamma(D^*(2010)^+D^-)}{\Gamma(D^*(2007)^0\bar{D}^0+c.c.)}$ & $\frac{\Gamma(D^0\bar{D}^0)}{\Gamma(D^*(2007)\bar{D}^0+c.c.)}$ \\
\hline
Expt. & $0.24\pm0.05\pm0.12 $& $0.18\pm0.14\pm0.03$ & $32\pm12$ & $0.95\pm0.09\pm0.10$ & $0.05\pm0.03$ \\
our results & 0.641 & 0.018 & 0.022 & 0.927 & 0.625 \\
Ref.~\cite{TSE} & 0.003 & 1 & & & \\
\hline
\hline
\end{tabular}
\label{tab:addlabel}
\end{table*}
In our results, $\frac{\Gamma(D\bar{D})}{\Gamma(D^*\bar{D}+c.c.)}$ and $\frac{\Gamma(D^0\bar{D}^0)}{\Gamma(D^*(2007)\bar{D}^0+c.c.)}$ have a pole around $R_A=2.07$ GeV$^{-1}$, as pointed out in Refs.~\cite{SN,PE,RN}.
\subsection{$Y(4008)$}
As indicated in the first section, $Y(4008)$ is close to the $D^*\bar{D}^*$ threshold while having a large mass uncertainty. Therefore, more decay channels may open when $Y(4008)$ has a larger mass. To check the $\psi(3^3S_1)$ possibility, the value $R_A=2.74$ GeV$^{-1}$ fixed by $\psi(4040)$ is employed to study the open-flavor strong decays of $Y(4008)$. At $R_A=2.74$ GeV$^{-1}$, the widths of all possible open-flavor decay channels are presented in Table. V, where $3940$, $4008$ and $4162$ represent the lower, central and upper mass of $Y(4008)$, respectively.
\begin{table*}
\centering
\caption{Open-flavor strong decays of $Y(4008)$ at $R_A=2.74$ GeV$^{-1}$ (in MeV)}
\label{table}
\begin{tabular*}{16cm}{@{\extracolsep{\fill}}ccccccc}
\hline
\hline
Decay Channels & $D\overline{D}$ & $D_s\overline{D}_s$ & $D\overline{D}^*/\overline{D}D^*$ & $D^*\overline{D}^*$ & $\Gamma(total)_{thy}$ & $\Gamma(total)_{expt}$ \\
\hline
~~~~~~~~~~~3940 MeV & 17.9 & 7.7$\times10^{-6}$ & 8.61 & - & 26.53 & \\
$Y(4008)$ 4008 MeV & 26.44 & 0.59 & 32.18 & - & 59.21 & $226\pm44\pm87$\\
~~~~~~~~~~~4162 MeV & 43.44 & 3.07 & 120.36 & 59.02 & 225.89 & \\
\hline
\hline
\end{tabular*}
\end{table*}
From Table. V, it is easy to find that $Y(4008)$ with the lower or central mass does not open the $D^*\overline{D}^*$ channel. Therefore, the predicted total decay width of $Y(4008)$ is much smaller than the observed one. If $Y(4008)$ has the upper mass, the $D^*\overline{D}^*$ channel opens and the predicted total decay width is consistent with experiment. To learn the dependence of the total width of $Y(4008)$ on $R_A$, two figures corresponding to the central and upper mass are drawn in Fig. 3, where the horizontal lines indicate the experimental result.
\begin{figure}
\begin{center}
\includegraphics[height=4.3cm]{fig3.eps}
\includegraphics[height=4.3cm]{fig4.eps}
\caption{(color online) Total decay widths of $Y(4008)$ versus $R_A$ for: (a) the central mass $4008$ MeV; (b) the upper mass $4162$ MeV.}
\end{center}
\end{figure}
Similarly, relevant ratios of $Y(4008)$ are calculated and presented in Table. VI. Unfortunately, there is no such experimental data at present.
\begin{table*}[htbp]
\centering
\caption{Relevant ratios of $Y(4008)$ with upper mass at $R_A=2.74$ GeV$^{-1}$}
\begin{tabular}{cccccc}
\toprule
Ratios & $\frac{\Gamma(D\bar{D})}{\Gamma(D^*\bar{D}+c.c.)}$ & $\frac{\Gamma(D^*\bar{D}^*)}{\Gamma(D^*\bar{D}+c.c.)}$ & $\frac{\Gamma(D^*(2007)^0\bar{D}^*(2007)^0)}{\Gamma(D^*(2007)^0\bar{D}^0+c.c.)}$ & $\frac{\Gamma(D^*(2010)^+D^-)}{\Gamma(D^*(2007)^0\bar{D}^0+c.c.)}$ & $\frac{\Gamma(D^0\bar{D}^0)}{\Gamma(D^*(2007)\bar{D}^0+c.c.)}$ \\
\hline
our results & 0.361 & 0.492 & 0.505 & 0.964 & 0.357 \\
\hline
\hline
\end{tabular}
\label{tab:addlabel}
\end{table*}
\section{Summary and discussion}
In this work, the strong decays of the $1^{--}$ $\psi(3^3S_1)$ resonance are studied in the $^3P_0$ model. For $\psi(4040)$, commonly believed to be the $\psi(3^3S_1)$, the dominant strong decays are the $D\bar{D}$, $D^*\bar{D}$ and $D^*\bar{D}^*$ channels. Accordingly, the decay widths of these channels are calculated. Based on these decay widths, some relevant ratios are obtained. Most of the ratios are consistent with experiments within the experimental uncertainties. Our results for $\frac{\Gamma(D^*(2007)^0\bar{D}^*(2007)^0)}{\Gamma(D^*(2007)^0\bar{D}^0+c.c.)}$ and $\frac{\Gamma(D^0\bar{D}^0)}{\Gamma(D^*(2007)\bar{D}^0+c.c.)}$ are different from the experimental data, which were measured in 1977. Of course, the uncertainties inherent to the $^3P_0$ model are not studied in this paper, which may bring in some additional uncertainties.
$Y(4008)$ is close to the threshold of $D^*\bar{D}^*$ and has a large mass uncertainty. For this reason, the strong decays of $Y(4008)$ with different masses are studied. Below the threshold of $D^*\bar{D}^*$, it is hard to understand the large decay width of $Y(4008)$ if $Y(4008)$ is assumed to be the $\psi(3^3S_1)$. However, above the threshold of $D^*\bar{D}^*$, $Y(4008)$ is very possibly the $\psi(3^3S_1)$. In this case, more information is required to distinguish $\psi(4040)$ from $Y(4008)$ both in theory and in experiment.
To have a clear picture of the charmonium spectroscopy, the observed $X,~Y$ and $Z$ states have to be understood and identified. Unfortunately, a comprehensive understanding of these resonances is still lacking. Besides, $Y(4008)$ was observed only by the Belle collaboration, and only the total decay width was given. More experiments are required to confirm its existence or not. In particular, the mass uncertainty of $Y(4008)$ has to be reduced if it is confirmed in forthcoming experiments. Only when more decay channels and their branching-fraction ratios have been measured can we understand $Y(4008)$ and $\psi(4040)$.
\begin{acknowledgments}
This work is supported by the National Natural Science Foundation of China (11075102) and the Innovation Program of Shanghai Municipal Education Commission under grant No. 13ZZ066.
\end{acknowledgments}
\section{Introduction}
\label{Introduction}
The statistical treatment of experimental results
obtained in a Poisson process with background and a small signal
is difficult and controversial.
Two methods are accepted by the Particle Data Group~\cite{PDG98}:
the Bayesian Method and the Unified Approach,
which is a frequentist method
proposed recently by Feldman and Cousins~\cite{Feldman-Cousins-98}
that allows the unified calculation of confidence intervals and upper limits
\emph{with the correct coverage}
(see \cite{Cousins95}).
The Unified Approach
represents a major breakthrough for a satisfactory
statistical treatment of processes with small signals
with the frequentist method.
However,
as already noted by Feldman and Cousins~\cite{Feldman-Cousins-98},
when the number of observed events
in a Poisson process with mean $\mu$ is smaller than the expected background,
the upper limit for $\mu$ obtained with the Unified Approach
decreases rapidly when the background increases.
Hence,
by observing less events than the expected background
an experiment can establish a very stringent upper bound on $\mu$
even if it is not sensitive to such small values of $\mu$.
This problem has been further discussed in Ref.~\cite{Giunti98-poisson},
where an alternative frequentist method has been proposed.
This method yields confidence intervals and upper limits
with all the desirable properties
of those calculated with the Unified Approach
and in addition minimizes the effect
of the observation of less background events than expected.
In the following this method will be called
``Alternative Unified Approach".
The basic features
of the Unified Approach and the Alternative Unified Approach,
which are necessary for the understanding of the present paper,
are reviewed in Section~\ref{Poisson processes with background}.
The original formulation of the Unified Approach~\cite{Feldman-Cousins-98}
and of the
Alternative Unified Approach~\cite{Giunti98-poisson}
for a Poisson process with background
assumed a precise knowledge of the expected mean background.
The aim of this paper is the presentation of
the extension of these approaches
to the case
in which the background is known with a non-negligible error.
This is done in
Sections~\ref{Background with small error}
and \ref{Background with large error},
where the probability to observe a number $n$ of events
in a Poisson process consisting in signal events
with mean $\mu$ and background events with known mean
$b=\overline{b}\pm\sigma_b$
is derived
(in Section~\ref{Background with small error}
we consider the simpler case
$ \sigma_b \lesssim \overline{b}/3 $
and in
Section~\ref{Background with large error}
this constraint is removed),
and
in Section~\ref{Confidence intervals},
where the method for deriving the corresponding confidence intervals
in the Unified Approach and in the Alternative Unified Approach
is presented.
Conclusions are drawn in Section~\ref{Conclusions}.
\section{Poisson processes with background}
\label{Poisson processes with background}
The probability to observe a number $n$
of events in a Poisson process
consisting in signal events with mean $\mu$
and background events with known mean $b$
is
\begin{equation}
P(n|\mu;b)
=
\frac{1}{n!} \ (\mu+b)^n \, e^{-(\mu+b)}
\,.
\label{poisson}
\end{equation}
The classical frequentist method for obtaining the confidence interval
for the unknown parameter $\mu$
is based on Neyman's method to construct a \emph{confidence belt}
\cite{Neyman37}.
This confidence belt is the region in the $n$--$\mu$ plane
lying between the two curves $n_1(\mu;b,\alpha)$ and $n_2(\mu;b,\alpha)$
such that for each value of $\mu$
\begin{equation}
P(n\in[n_1(\mu;b,\alpha),n_2(\mu;b,\alpha)]|\mu;b)
\equiv
\sum_{n=n_1(\mu;b,\alpha)}^{n_2(\mu;b,\alpha)} P(n|\mu;b)
=
\alpha
\,,
\label{CL}
\end{equation}
where
$\alpha$ is the desired confidence level.
The two curves
$n_1(\mu;b,\alpha)$ and $n_2(\mu;b,\alpha)$
are required to be monotonic functions of $\mu$
and can be inverted to yield the corresponding curves
$\mu_1(n;b,\alpha)$ and $\mu_2(n;b,\alpha)$.
Then,
if a number $n_{\mathrm{obs}}$ of events is measured,
the confidence interval for $\mu$ is
$[\mu_2(n_{\mathrm{obs}};b,\alpha),\mu_1(n_{\mathrm{obs}};b,\alpha)]$.
This method guarantees by construction the \emph{correct coverage},
\textit{i.e.}
the fact that the resulting confidence interval
$[\mu_2(n_{\mathrm{obs}};b,\alpha),\mu_1(n_{\mathrm{obs}};b,\alpha)]$
is a member of a set of confidence intervals
obtained with an ensemble of experiments
that
contain the true value of $\mu$ with a probability $\alpha$
(in other words,
$100\alpha\%$
of the confidence intervals in the set contain the true value of $\mu$).
As noted by Cousins
in Ref.~\cite{Cousins95},
Neyman himself pointed out \cite{Neyman37} that
the usefulness of classical confidence intervals lies in the fact that
the experiments in the ensemble do not need to be identical,
but can be real, different experiments.
One can see this fact in a simple way by considering,
for example,
two different experiments that measure the same quantity $\mu$.
The $100\alpha\%$
classical confidence interval obtained from the results of each experiment
belongs to a set of confidence intervals which can be obtained
with an ensemble of identical experiments
and contain the true value of $\mu$ with probability $\alpha$.
It is clear that the sum of these two sets of confidence intervals
is still a set of confidence intervals that contain the true
value of $\mu$ with probability $\alpha$.
In the case of a Poisson process,
since $n$ is an integer,
the relation (\ref{CL})
can only be approximately satisfied and in practice the chosen
\emph{acceptance intervals}
$[n_1(\mu;b,\alpha),n_2(\mu;b,\alpha)]$
are the smallest intervals such that
\begin{equation}
P(n\in[n_1(\mu;b,\alpha),n_2(\mu;b,\alpha)]|\mu;b)
\geq
\alpha
\,.
\label{CLP}
\end{equation}
This choice introduces an overcoverage for some values of $\mu$
and the resulting confidence intervals
are \emph{conservative}.
As emphasized in Ref.~\cite{Feldman-Cousins-98},
conservativeness is an undesirable but unavoidable property
of the confidence intervals in the case of a Poisson process
(it is undesirable because it
implies a loss of power
in restricting the allowed range for the parameter $\mu$).
The construction of Neyman's confidence belt
\emph{is not unique},
because in general there are many different couples of curves
$n_1(\mu;b,\alpha)$ and $n_2(\mu;b,\alpha)$
that satisfy the relation (\ref{CL}).
Hence,
an additional criterion is needed in order to
define uniquely the acceptance intervals
$[n_1(\mu;b,\alpha),n_2(\mu;b,\alpha)]$.
The two common choices are
\begin{equation}
P(n<n_1(\mu;b,\alpha)|\mu;b)
=
P(n>n_2(\mu;b,\alpha)|\mu;b)
=
\frac{1-\alpha}{2}
\,,
\label{central}
\end{equation}
which leads to
\emph{central confidence intervals}
and
\begin{equation}
P(n<n_1(\mu;b,\alpha)|\mu;b)
=
1-\alpha
\,,
\label{upper}
\end{equation}
which leads to
\emph{upper confidence limits}.
Central confidence intervals are appropriate for the
statistical description of the results of experiments reporting a positive result,
\textit{i.e.} the measurement of a number of events
significantly larger than the expected background.
On the other hand,
upper confidence limits are appropriate for the
statistical description of the results of experiments reporting a negative result,
\textit{i.e.} the measurement of a number of events
compatible with the expected background.
However,
Feldman and Cousins~\cite{Feldman-Cousins-98}
noticed that switching from central confidence level
to upper confidence limits or vice-versa
on the basis of the experimental data
(``flip-flopping'')
leads to undercoverage for some values of $\mu$,
which is a serious flaw for a frequentist method.
Feldman and Cousins \cite{Feldman-Cousins-98}
proposed an ordering principle
for the construction of the acceptance intervals
that is
based on likelihood ratios
and produces an automatic transition
from central confidence intervals to upper limits
when the number of observed events in a Poisson process with background
is of the same order or less than the expected background,
guaranteeing the correct frequentist coverage for all values of $\mu$.
The acceptance interval for each value of $\mu$
is calculated assigning at each value of $n$ a rank
obtained from the relative size of the likelihood ratio
\begin{equation}
R_{\mathrm{UA}}(n|\mu;b)
=
\frac{ P(n|\mu;b) }{ P(n|\mu_{\mathrm{best}};b) }
\,,
\label{RUA}
\end{equation}
where $\mu_{\mathrm{best}}=\mu_{\mathrm{best}}(n;b)$
(for a fixed $b$)
is the non-negative value of $\mu$ that
maximizes the probability
$P(n|\mu;b)$:
\begin{equation}
\mu_{\mathrm{best}}(n;b)
=
\mathrm{max}[0,n-b]
\,.
\label{best}
\end{equation}
For each fixed value of $\mu$,
the rank of each value of $n$
is assigned in order of decreasing value of the ratio $R_{\mathrm{UA}}(n|\mu;b)$:
the value of $n$ with the largest $R_{\mathrm{UA}}(n|\mu;b)$ has rank one,
the value of $n$ among the remaining ones
with the largest $R_{\mathrm{UA}}(n|\mu;b)$ has rank two
and so on.
The acceptance interval for each value of $\mu$
is calculated by adding the values of $n$ in increasing order of rank
until the condition (\ref{CLP}) is satisfied.
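For illustration,
the ordering procedure just described
can be sketched in a few lines of code.
In the following minimal sketch
the function names and the truncation \verb|n_max| of the Poisson tail
are arbitrary choices,
and the endpoints of the accepted set are returned
since for this problem the accepted values of $n$ form an interval:
\begin{verbatim}
import math

def poisson(n, mu, b):
    # Poisson probability P(n|mu;b) with signal mean mu
    # and background mean b
    return (mu + b)**n * math.exp(-(mu + b)) / math.factorial(n)

def acceptance_interval(mu, b, alpha=0.90, n_max=200):
    # rank each n by R_UA(n|mu;b) = P(n|mu;b) / P(n|mu_best;b),
    # with mu_best(n;b) = max(0, n-b)
    ranked = sorted(range(n_max + 1),
                    key=lambda n: poisson(n, mu, b)
                                  / poisson(n, max(0.0, n - b), b),
                    reverse=True)
    accepted, prob = [], 0.0
    for n in ranked:  # add values of n in increasing order of rank
        accepted.append(n)
        prob += poisson(n, mu, b)
        if prob >= alpha:
            break
    return min(accepted), max(accepted)
\end{verbatim}
The same sketch produces the acceptance intervals
of the Alternative Unified Approach discussed below
if the denominator in the rank
is replaced by $P(n|\mu_{\mathrm{ref}};b)$.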
The automatic transition from two-sided confidence intervals
to upper confidence limits for $ n \lesssim b $
is guaranteed in the Unified Approach by the fact that
$\mu_{\mathrm{best}}$
is always non-negative.
Indeed,
since
$ \mu_{\mathrm{best}}(n{\leq}b;b) = 0 $,
the rank of
$n \leq b$ for $\mu=0$
is one,
implying that the interval
$ 0 \leq n \leq b $
for $\mu=0$
is guaranteed to lie in the confidence belt.
As already noticed by Feldman and Cousins \cite{Feldman-Cousins-98},
when $n \lesssim b$
the upper bound
$\mu_1(n;b,\alpha)$
decreases rather rapidly when $b$ increases
and stabilizes around a value close to 0.8
for large values of $b$.
Hence,
a stringent upper bound for $\mu$
obtained with the Unified Approach
by an experiment that has observed a number of events
significantly smaller than the expected background
is not due to the fact that the experiment is very sensitive to small values of $\mu$,
but to the fact that less background events than expected have been observed.
The Alternative Unified Approach proposed in
Ref.~\cite{Giunti98-poisson} allows
the construction of a classical confidence belt
which has all the desirable features of the one
in the Unified Approach
(\textit{i.e.}
an automatic transition with the correct coverage
from two-sided confidence intervals to
upper confidence limits when the observed number of events
is of the order or less than the expected background)
and in addition minimizes
the decrease of the upper confidence limit $\mu_1(n;b,\alpha)$ for a given $n$
as the mean expected background $b$ increases.
The Alternative Unified Approach is based on
an ordering principle
for the construction of a classical confidence belt
that is implemented as the
Feldman and Cousins ordering principle in the Unified Approach,
but for each value of $\mu$
the rank of each value of $n$
is calculated from the relative size of the likelihood ratio
\begin{equation}
R_{\mathrm{AUA}}(n|\mu;b)
=
\frac{ P(n|\mu;b) }{ P(n|\mu_{\mathrm{ref}};b) }
\,,
\label{RNO}
\end{equation}
where the reference value $\mu_{\mathrm{ref}}=\mu_{\mathrm{ref}}(n;b)$
is taken to be the bayesian expected value for $\mu$:
\begin{equation}
\mu_{\mathrm{ref}}(n;b)
=
\int_0^\infty \mu \, P(\mu|n;b) \, \mathrm{d}\mu
=
n + 1
-
\left( \displaystyle \sum_{k=0}^{n} \frac{k\,b^k}{k!} \right)
\left( \displaystyle \sum_{k=0}^{n} \frac{b^k}{k!} \right)^{-1}
\,.
\label{mu_ref}
\end{equation}
Here $P(\mu|n;b)$
is the bayesian probability distribution for $\mu$
calculated assuming a constant prior
for $\mu\geq0$
(see, for example, \cite{D'Agostini95}):
\begin{equation}
P(\mu|n;b)
=
\frac{1}{n!}
\
\big( \mu + b \big)^n
\
e^{-\mu}
\left( \displaystyle \sum_{k=0}^{n} \frac{b^k}{k!} \right)^{-1}
\,.
\label{bayes}
\end{equation}
The assumption
of a constant prior is arbitrary,
but it seems to be the most natural choice if $\mu$
is the parameter under investigation and there is no prior
knowledge on its value.
Notice also that the arbitrariness induced by the choice of the prior
is of ``second order'' with respect to the dominant arbitrariness
induced by the choice of the method for constructing
the confidence belt.
The obvious inequality
$
\sum_{k=0}^{n} k\,b^k/k!
\leq
n \sum_{k=0}^{n} b^k/k!
$
implies that
$ \mu_{\mathrm{ref}}(n;b) \geq 1 $.
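For orientation,
Eq.(\ref{mu_ref}) gives
$\mu_{\mathrm{ref}}(0;b)=1$
for any value of $b$,
and,
for example,
\begin{equation}
\mu_{\mathrm{ref}}(2;3)
=
3 - \left( 0 + 3 + 9 \right) \left( 1 + 3 + \frac{9}{2} \right)^{-1}
=
3 - \frac{12}{8.5}
\simeq
1.59
\,.
\end{equation}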
Therefore,
$\mu_{\mathrm{ref}}(n;b)$
represents a reference value for $\mu$ that not only is non-negative,
as desired in order to have an automatic
transition from
two-sided intervals to upper limits,
but is even bigger or equal than one.
This is a desirable characteristic in order to
obtain a weak decrease of the upper confidence limit
for a given $n$
when the expected background $b$ increases.
Indeed,
it has been shown in Ref.~\cite{Giunti98-poisson}
that for
$ n \lesssim b $
the upper bound
$\mu_1(n;b,\alpha)$
decreases rather weakly when $b$ increases
and stabilizes around a value close to 1.7
for large values of $b$.
This behaviour of $\mu_1(n;b,\alpha)$
is more suitable for the physical interpretation of
experimental results
than the behaviour of $\mu_1(n;b,\alpha)$
in the Unified Approach.
Furthermore,
as shown by the example in Ref.~\cite{Giunti98-poisson},
the upper limits $\mu_1(n;b,\alpha)$ obtained with the Alternative Unified Approach
for $ n \lesssim b $
are in reasonable agreement with those obtained with the Bayesian Approach.
Hence,
the Alternative Unified Approach
extends the approximate agreement between the Bayesian and frequentist methods
from $n \gg b$ to $ n \lesssim b $
(although the statistical interpretation
of the confidence intervals is different in the two methods).
\section{Background with small error}
\label{Background with small error}
Let us consider an experiment that measures a Poisson process with
an expected background
$ b = \overline{b} \pm \sigma_b $
and a normal probability distribution function for the
mean expected background $b$:
\begin{equation}
f(b;\overline{b},\sigma_b)
=
\frac{ 1 }{ \sqrt{2\pi} \ \sigma_b }
\
\exp\left[ - \frac{ (b-\overline{b})^2 }{ 2 \, \sigma_b^2 } \right]
\,.
\label{normal}
\end{equation}
The importance of $\sigma_b$
can be estimated by comparing it with $\sqrt{\overline{b}}$,
which represents the rms fluctuation of the number of
background events if $ b = \overline{b} $.
If $ \sigma_b \ll \sqrt{\overline{b}} $
the uncertainty of the value of the background is much
smaller than the typical
fluctuation of the number of observed events induced by the background
and can be safely neglected.
Here we consider the possibility that
$ \sigma_b $ is not much smaller than $ \sqrt{\overline{b}} $
and its contribution cannot be neglected.
For simplicity,
in this section
we assume that
$ \sigma_b \lesssim \overline{b}/3 $
and we consider
$b$ varying from $-\infty$ to $+\infty$,
neglecting the small error introduced by considering
negative values of $b$.
This approximation allows a simple analytic solution of all the integrals
involved in the calculation.
The general case with arbitrarily large $\sigma_b$
and $b$ restricted
in the interval $[0,+\infty)$
is treated in Section~\ref{Background with large error}.
If $\mu$ is the mean of true signal events,
the probability $P(n|\mu;\overline{b},\sigma_b)$
to observe $n$ events is given by
\begin{equation}
P(n|\mu;\overline{b},\sigma_b)
=
\int
P(n|\mu;b)
\
f(b;\overline{b},\sigma_b)
\
\mathrm{d}b
\,,
\label{pnm1}
\end{equation}
with the Poisson probability
$P(n|\mu;b)$
given in Eq.(\ref{poisson}).
With the change of variable
$ x = ( b - \overline{b} + \sigma_b^2 ) / \sigma_b $,
the probability
$P(n|\mu;\overline{b},\sigma_b)$
can be written as
\begin{equation}
P(n|\mu;\overline{b},\sigma_b)
=
\frac{1}{n!}
\
\bigg( \mu + \overline{b} - \sigma_b^2 \bigg)^n
\
\exp\left[ - ( \mu + \overline{b} ) + \frac{\sigma_b^2}{2} \right]
\
I_n(\mu,\overline{b},\sigma_b)
\,,
\label{pnm2}
\end{equation}
where
\begin{equation}
I_n(\mu,\overline{b},\sigma_b)
=
\sum_{k=0}^{n}
\left( \begin{array}{c} n \\ k \end{array} \right)
\left(
\frac
{ \sigma_b }
{ \mu + \overline{b} - \sigma_b^2 }
\right)^k
m_k
\,.
\label{In1}
\end{equation}
Here $m_k$
is the $k^{\mathrm{th}}$
central moment of the normal distribution with unit variance,
\begin{equation}
m_k
=
\frac{ 1 }{ \sqrt{2\pi} }
\int_{-\infty}^{+\infty}
x^k
\
e^{ - x^2 / 2 }
\
\mathrm{d}x
\,.
\label{mom1}
\end{equation}
Taking into account that
$
\int x \, e^{ - x^2 / 2 } \, \mathrm{d}x
=
- e^{ - x^2 / 2 }
$,
the integral in Eq.(\ref{mom1})
can be calculated by parts,
yielding
\begin{equation}
m_k
=
\frac{ k! }{ (k/2)! \ 2^{k/2} }
\label{mom2}
\end{equation}
for $k$ even and
$ m_k = 0 $
for $k$ odd.
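For example,
the first few non-vanishing moments are
$m_0=1$, $m_2=1$, $m_4=3$ and $m_6=15$.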
From Eqs.(\ref{In1}) and (\ref{mom2}),
we obtain
\begin{equation}
I_n(\mu,\overline{b},\sigma_b)
=
\sum_{k=0}^{n/2}
\frac
{ n! }
{ (n-2k)! \ k! \ 2^k }
\
\left(
\frac
{ \sigma_b }
{ \mu + \overline{b} - \sigma_b^2 }
\right)^{2k}
\,.
\label{In2}
\end{equation}
Equation (\ref{pnm2}) gives the formula for the
probability
$P(n|\mu;\overline{b},\sigma_b)$
to observe a number $n$
of events in a Poisson process
consisting of signal events with mean $\mu$
and background events with known mean $b=\overline{b}\pm\sigma_b$,
\textit{i.e.}
it replaces Eq.(\ref{poisson})
if the error $\sigma_b$ of the calculated mean background is not negligible.
The expression (\ref{In2}) for $I_n(\mu,\overline{b},\sigma_b)$
is valid only if
$ \sigma_b \lesssim \overline{b}/3 $,
but,
as we will see in the next section,
with an appropriate redefinition of $I_n(\mu,\overline{b},\sigma_b)$
the formula (\ref{pnm2}) for $P(n|\mu;\overline{b},\sigma_b)$
is valid for any value of
$\sigma_b$.
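For concreteness, Eqs.(\ref{pnm2}) and (\ref{In2}) can be evaluated and
cross-checked against a direct numerical integration of Eq.(\ref{pnm1}); the
following Python sketch (function names are ours) assumes the small-error
regime $\sigma_b \lesssim \overline{b}/3$:
\begin{verbatim}
import math
from scipy import integrate

def prob_n(n, mu, b_bar, sigma_b):
    # P(n|mu; b_bar, sigma_b) from Eqs. (pnm2) and (In2)
    a = mu + b_bar - sigma_b ** 2
    I_n = sum(math.factorial(n)
              / (math.factorial(n - 2 * k) * math.factorial(k) * 2 ** k)
              * (sigma_b / a) ** (2 * k)
              for k in range(n // 2 + 1))
    return (a ** n / math.factorial(n)
            * math.exp(-(mu + b_bar) + sigma_b ** 2 / 2) * I_n)

def prob_n_integral(n, mu, b_bar, sigma_b):
    # direct numerical integration of Eq. (pnm1), b on the real line
    def f(b):
        return ((mu + b) ** n / math.factorial(n)
                * math.exp(-(mu + b) - (b - b_bar) ** 2
                           / (2 * sigma_b ** 2))
                / (math.sqrt(2 * math.pi) * sigma_b))
    return integrate.quad(f, b_bar - 10 * sigma_b,
                          b_bar + 10 * sigma_b)[0]

print(prob_n(5, 2.0, 3.0, 1.0), prob_n_integral(5, 2.0, 3.0, 1.0))
\end{verbatim}
The two results agree to the accuracy of the quadrature.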
\section{Background with large error}
\label{Background with large error}
In this section we present the formalism that allows treatment of cases in which
$\sigma_b$
is arbitrarily large
and $b$ is restricted
to the interval $[0,+\infty)$.
The gaussian probability distribution function
of the mean expected background $b$
normalized in the interval $[0,+\infty)$ is
\begin{equation}
f(b;\overline{b},\sigma_b)
=
\frac{ N }{ \sqrt{2\pi} \ \sigma_b }
\
\exp\left[ - \frac{ (b-\overline{b})^2 }{ 2 \, \sigma_b^2 } \right]
\qquad
( b \geq 0 )
\,,
\label{normalized}
\end{equation}
with the normalization factor $N$ given by\footnote{The
error function is defined by
$
\mathrm{erf}(x)
\equiv
\frac{2}{\sqrt{\pi}}
\,
\int_0^x e^{-t^2} \, \mathrm{d}t
$.}
\begin{equation}
N^{-1}
=
\frac{ 1 }{ 2 }
\left[
1 + \mathrm{erf}\!\left( \frac{ \overline{b} }{ \sqrt{2} \, \sigma_b } \right)
\right]
\,.
\label{N}
\end{equation}
Apart from the error function that must be evaluated numerically,
the integral over $\mathrm{d}b$
in Eq.(\ref{pnm1}) can still be solved analytically.
Indeed,
Eqs.(\ref{pnm2}) and (\ref{In1}) are still valid,
with
\begin{equation}
m_k
=
\frac{ N }{ \sqrt{2\pi} }
\
\int_{x_{\mathrm{min}}}^{+\infty}
x^k \, e^{-x^2/2} \ \mathrm{d}x
\,,
\label{mk0}
\end{equation}
where
\begin{equation}
x_{\mathrm{min}}
=
- \frac{ \overline{b} - \sigma_b^2 }{ \sigma_b }
\,.
\label{xmin}
\end{equation}
The moments (\ref{mk0})
can be evaluated by integration by parts, yielding
\begin{equation}
m_{k}
=
\frac{N}{2}
\left[
1
+
\mathrm{erf}\!\left( - \frac{x_{\mathrm{min}}}{\sqrt{2}} \right)
\right]
\frac{ k! }{ (k/2)! \ 2^{k/2} }
+
\frac{ N }{ \sqrt{2\pi} }
\
e^{-x_{\mathrm{min}}^2/2}
\
\frac{ k! }{ (k/2)! }
\left[
\sum_{\ell=0}^{(k/2)-1}
\frac{ \left(\frac{k}{2}-\ell\right)! }{ (k-2\ell)! }
\
\frac{ x_{\mathrm{min}}^{k-2\ell-1} }{ 2^\ell }
\right]
\label{mk1}
\end{equation}
for $k$ even and
\begin{equation}
m_{k}
=
\frac{ N }{ \sqrt{2\pi} }
\
e^{-x_{\mathrm{min}}^2/2}
\
\left(\frac{k-1}{2}\right)!
\left[
\sum_{\ell=0}^{(k-1)/2}
\frac{ 2^\ell }{ (\frac{k-1}{2}-\ell)! }
\,
x_{\mathrm{min}}^{k-2\ell-1}
\right]
\,,
\label{mk2}
\end{equation}
for $k$ odd.
Therefore,
the probability
$P(n|\mu;\overline{b},\sigma_b)$
to observe a number $n$ of events
is given by the formula in Eq.(\ref{pnm2}) with
\begin{eqnarray}
&&
I_n(\mu,\overline{b},\sigma_b)
=
\frac{N}{2}
\left[
1
+
\mathrm{erf}\!\left( \frac{ \overline{b} - \sigma_b^2 }{ \sqrt{2} \, \sigma_b } \right)
\right]
\left[
\sum_{k=0}^{n/2}
\frac
{ n! }
{ (n-2k)! \ k! \ 2^k }
\left(
\frac
{ \sigma_b }
{ \mu + \overline{b} - \sigma_b^2 }
\right)^{2k}
\right]
\nonumber
\\
&&
\hspace{1cm}
+
\frac{ N }{ \sqrt{2\pi} }
\
\exp\left[ - \frac{ ( \overline{b} - \sigma_b^2 )^2 }{ 2 \ \sigma_b^2 } \right]
\nonumber
\\
&&
\hspace{2cm}
\times
\left\{
\sum_{k=0}^{(n-1)/2}
\frac{ n! \ k! }{ (n-2k-1)! \ (2k+1)! }
\left(
\frac
{ \overline{b} - \sigma_b^2 }
{ \mu + \overline{b} - \sigma_b^2 }
\right)^{2k+1}
\sum_{\ell=0}^{k}
\frac{ 2^\ell }{ (k-\ell)! }
\left(
\frac
{ \sigma_b }
{ \overline{b} - \sigma_b^2 }
\right)^{2\ell+1}
\right.
\nonumber
\\
&&
\hspace{2.5cm}
\left.
-
\sum_{k=0}^{n/2}
\frac{ n! }{ (n-2k)! \ k! }
\left(
\frac
{ \overline{b} - \sigma_b^2 }
{ \mu + \overline{b} - \sigma_b^2 }
\right)^{2k}
\sum_{\ell=0}^{k-1}
\frac{ (k-\ell)! }{ \big(2(k-\ell)\big)! \ 2^\ell }
\left(
\frac
{ \sigma_b }
{ \overline{b} - \sigma_b^2 }
\right)^{2\ell+1}
\right\}
\,.
\label{In3}
\end{eqnarray}
These quantities have a cumbersome expression,
but their numerical evaluation with a computer is
not much more difficult than
that of the corresponding quantities in Eq.(\ref{In2})
(however, the calculation of
$I_n(\mu,\overline{b},\sigma_b)$
is rather difficult if
$ \sigma_b^2 > \overline{b} $,
because the terms in Eq.(\ref{In3})
have alternating signs and the roundoff errors
introduced by subtracting large numbers become severe).
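When $\sigma_b^2 > \overline{b}$, a simple way to keep these roundoff errors
under control is to cross-check Eq.(\ref{In3}) against a direct numerical
integration of Eq.(\ref{pnm1}) with the truncated prior of
Eq.(\ref{normalized}), which is free of cancellations; a minimal sketch
(our function names):
\begin{verbatim}
import math
from scipy import integrate
from scipy.special import erf

def prob_n_truncated(n, mu, b_bar, sigma_b):
    # Eq. (pnm1) with the prior of Eq. (normalized), b >= 0
    N = 2.0 / (1.0 + erf(b_bar / (math.sqrt(2.0) * sigma_b)))  # Eq. (N)
    def f(b):
        return ((mu + b) ** n / math.factorial(n)
                * math.exp(-(mu + b) - (b - b_bar) ** 2
                           / (2 * sigma_b ** 2)))
    norm = N / (math.sqrt(2 * math.pi) * sigma_b)
    return norm * integrate.quad(f, 0.0, b_bar + 12 * sigma_b)[0]
\end{verbatim}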
\section{Confidence intervals}
\label{Confidence intervals}
The construction of the confidence belt for the probability (\ref{pnm2})
follows the same procedure described in
Section~\ref{Poisson processes with background}
but now the confidence interval for $\mu$
corresponding to
a number $n_{\mathrm{obs}}$ of observed events is
$
[
\mu_2(n_{\mathrm{obs}};\overline{b},\sigma_b,\alpha)
,
\mu_1(n_{\mathrm{obs}};\overline{b},\sigma_b,\alpha)
]
$,
\textit{i.e.}
it depends on $\overline{b}$ and $\sigma_b$.
The acceptance intervals can be constructed following the same
principles discussed in
Section~\ref{Poisson processes with background}
and
one can construct the confidence belt for central confidence intervals
or upper confidence limits,
or the confidence belt in the Unified Approach
or in the Alternative Unified Approach.
This section is devoted to the presentation of the formalism
for the implementation of
the Unified Approach
and
of the Alternative Unified Approach.
As an example, we will consider
$\overline{b}=3$
and $\sigma_b=0,1,1.8$.
The quantity $\mu_{\mathrm{best}}(n;\overline{b},\sigma_b)$
in the Unified Approach
is the value of $\mu$ that maximizes $P(n|\mu;\overline{b},\sigma_b)$
and
the acceptance interval for each value of $\mu$
is calculated assigning at each value of $n$ a rank
obtained from the relative size of the ratio
\begin{equation}
R_{\mathrm{UA}}(n|\mu;\overline{b},\sigma_b)
=
\frac
{ P(n|\mu;\overline{b},\sigma_b) }
{ P(n|\mu_{\mathrm{best}};\overline{b},\sigma_b) }
\,.
\label{RUA1}
\end{equation}
The value of
$\mu_{\mathrm{best}}(n;\overline{b},\sigma_b)$
can be easily calculated by hand
for $n=0,1,2$,
whereas
for higher values of $n$
it can be calculated numerically.
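In practice the maximization can be delegated to a bounded one-dimensional
optimizer; a sketch reusing the function prob_n defined above (for the
truncated case, prob_n_truncated should be substituted):
\begin{verbatim}
from scipy.optimize import minimize_scalar

def mu_best(n, b_bar, sigma_b, mu_max=50.0):
    # value of mu >= 0 maximizing P(n|mu; b_bar, sigma_b)
    res = minimize_scalar(lambda mu: -prob_n(n, mu, b_bar, sigma_b),
                          bounds=(0.0, mu_max), method='bounded')
    return res.x

def R_UA(n, mu, b_bar, sigma_b):
    # rank ratio of Eq. (RUA1)
    best = mu_best(n, b_bar, sigma_b)
    return prob_n(n, mu, b_bar, sigma_b) / prob_n(n, best, b_bar, sigma_b)
\end{verbatim}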
The resulting 90\% CL
confidence belts for
$\overline{b}=3$
and
$\sigma_b=0,1,1.8$
are plotted in Fig.~\ref{fig1}.
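Each belt in Fig.~\ref{fig1} can be reproduced by building, for a grid of
values of $\mu$, the acceptance interval in $n$ and then inverting it; a
sketch of the acceptance-interval construction, reusing prob_n and R_UA
from the sketches above:
\begin{verbatim}
def acceptance_interval(mu, b_bar, sigma_b, alpha=0.90, n_max=100):
    # add values of n in decreasing order of rank until coverage >= alpha
    ns = sorted(range(n_max),
                key=lambda n: R_UA(n, mu, b_bar, sigma_b),
                reverse=True)
    total, accepted = 0.0, []
    for n in ns:
        accepted.append(n)
        total += prob_n(n, mu, b_bar, sigma_b)
        if total >= alpha:
            break
    return min(accepted), max(accepted)
\end{verbatim}
Scanning $\mu$ and reading off, for each $n_{\mathrm{obs}}$, the smallest and
largest accepted values of $\mu$ gives the interval $[\mu_2,\mu_1]$.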
We have checked that the confidence belt
for $ \sigma_b \lesssim 0.2 $
practically coincides with the one for $\sigma_b=0$,
confirming the prediction that the contribution of
$\sigma_b$ is negligible if
$ \sigma_b \ll \sqrt{\overline{b}} $.
In Fig.~\ref{fig1},
the confidence belt for $\sigma_b=1$
has been obtained with the formulas presented in
Section \ref{Background with small error},
that are valid for $ \sigma_b \lesssim \overline{b}/3 $,
whereas the confidence belt for $\sigma_b=1.8$
has been obtained with the formulas presented in
Section \ref{Background with large error},
which are valid for any value of $\sigma_b$.
We have checked that the confidence belt for $\sigma_b=1$
calculated with the formulas presented in
Section \ref{Background with large error}
practically coincides with the one shown in Fig.~\ref{fig1}.
From Fig.~\ref{fig1}
one can see that
the width of the confidence belt increases with $\sigma_b$.
This is due to the fact that the integral in Eq.(\ref{pnm1})
has the effect of flattening the probability
$P(n|\mu;\overline{b},\sigma_b)$
as a function of $n$
for fixed $\mu$
with respect to
$P(n|\mu;\overline{b},\sigma_b=0)$
and
this flattening effect increases with the size of $\sigma_b$.
The shift of the borders of the confidence belt
as $\sigma_b$ increases is not always monotonic
because of the unavoidable overcoverage
caused by the fact that $n$ is an integer
(see Section~\ref{Poisson processes with background}).
The lowest value of $\mu$ for which $n=0$
lies outside the confidence belt in Fig.~\ref{fig1}
is smaller for $\sigma_b=1.8$
than for
$\sigma_b=0$ and $\sigma_b=1$.
This is caused by the fact that the ratio (\ref{RUA1}) for $n=0$
does not depend on $\sigma_b$.
Indeed, from Eqs.(\ref{pnm2}) and (\ref{In3}) we have
\begin{equation}
P(n=0|\mu;\overline{b},\sigma_b)
=
\frac{N}{2}
\left[
1
+
\mathrm{erf}\!\left( \frac{ \overline{b} - \sigma_b^2 }{ \sqrt{2} \, \sigma_b } \right)
\right]
\exp\left[ - ( \mu + \overline{b} ) + \frac{\sigma_b^2}{2} \right]
\,.
\label{p0}
\end{equation}
Therefore,
$\mu_{\mathrm{best}}(n=0;\overline{b},\sigma_b)=0$
and
\begin{equation}
R_{\mathrm{UA}}(n=0|\mu;\overline{b},\sigma_b)
=
e^{-\mu}
\,.
\label{RUAn0}
\end{equation}
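This relation is easy to verify numerically with the sketch above:
\begin{verbatim}
mu, b_bar = 1.7, 3.0
for sigma_b in (0.3, 0.6, 0.9):
    r = prob_n(0, mu, b_bar, sigma_b) / prob_n(0, 0.0, b_bar, sigma_b)
    print(sigma_b, r, math.exp(-mu))   # last two columns coincide
\end{verbatim}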
On the other hand,
the ratio
$R_{\mathrm{UA}}(n|\mu;\overline{b},\sigma_b)$
for $n>0$
increases with $\sigma_b$
because of the flattening of
$P(n|\mu;\overline{b},\sigma_b)$
as a function of $n$.
Hence,
the rank of $n=0$ for each value of $\mu$
decreases as $\sigma_b$ increases,
causing the peculiar behaviour of the upper bound
$\mu_1(n=0;\overline{b},\sigma_b,\alpha)$
as a function of $\sigma_b$
exemplified in Fig.~\ref{fig1}.
Since the possibility of setting a smaller upper bound
on $\mu$ for larger $\sigma_b$
as a consequence of the observation of $n=0$ events
is undesirable from the physical point of view,
we think that in this case the experimental result
should be interpreted very cautiously,
pending a better understanding of the background.
In the Alternative Unified Approach
the acceptance interval for each value of $\mu$
is calculated assigning at each value of $n$ a rank
obtained from the relative size of the ratio
\begin{equation}
R_{\mathrm{AUA}}(n|\mu;\overline{b},\sigma_b)
=
\frac
{ P(n|\mu;\overline{b},\sigma_b) }
{ P(n|\mu_{\mathrm{ref}};\overline{b},\sigma_b) }
\,,
\label{RNO1}
\end{equation}
where
the reference value
$\mu_{\mathrm{ref}}=\mu_{\mathrm{ref}}(n;\overline{b},\sigma_b)$
is the Bayesian expected value for $\mu$.
In order to calculate analytically the value of $\mu_{\mathrm{ref}}(n;\overline{b},\sigma_b)$,
it is convenient to write the probability (\ref{pnm2}) as
\begin{equation}
P(n|\mu;\overline{b},\sigma_b)
=
\exp\left[ - ( \mu + \overline{b} ) + \frac{\sigma_b^2}{2} \right]
\
\sum_{k=0}^{n}
\frac{ \mu^{n-k} }{ (n-k)! }
\
J_k(\overline{b},\sigma_b)
\,,
\label{pnm3}
\end{equation}
with
\begin{equation}
J_k(\overline{b},\sigma_b)
\simeq
\sum_{j=0}^{k/2}
\frac
{ ( \overline{b} - \sigma_b^2 )^{k-2j} \, \sigma_b^{2j} }
{ (k-2j)! \ j! \ 2^j }
\label{jeik1}
\end{equation}
for
$ \sigma_b \lesssim \overline{b}/3 $
and
\begin{eqnarray}
&&
J_k(\overline{b},\sigma_b)
=
\frac{N}{2}
\left[
1
+
\mathrm{erf}\!\left( \frac{ \overline{b} - \sigma_b^2 }{ \sqrt{2} \, \sigma_b } \right)
\right]
\left(
\sum_{j=0}^{k/2}
\frac
{ ( \overline{b} - \sigma_b^2 )^{k-2j} \ \sigma_b^{2j} }
{ (k-2j)! \ j! \ 2^j }
\right)
\nonumber
\\
&&
\hspace{1cm}
+
\frac{ N }{ \sqrt{2\pi} }
\
\exp\left[ - \frac{ ( \overline{b} - \sigma_b^2 )^2 }{ 2 \ \sigma_b^2 } \right]
\nonumber
\\
&&
\hspace{2cm}
\times
\left\{
\sum_{j=0}^{(k-1)/2}
\frac{ j! }{ (2j+1)! \ (k-2j-1)! }
\sum_{\ell=0}^{j}
\frac{ 2^\ell \ ( \overline{b} - \sigma_b^2 )^{k-2\ell-1} \ \sigma_b^{2\ell+1} }{ (j-\ell)! }
\right.
\nonumber
\\
&&
\hspace{2.5cm}
\left.
-
\sum_{j=0}^{k/2}
\frac{ 1 }{ (k-2j)! \ j! }
\sum_{\ell=0}^{j-1}
\frac{ (j-\ell)! \ ( \overline{b} - \sigma_b^2 )^{k-2\ell-1} \ \sigma_b^{2\ell+1} }{ \big(2(j-\ell)\big)! \ 2^\ell }
\right\}
\label{jeik2}
\end{eqnarray}
for arbitrarily large $\sigma_b$.
For the Bayesian probability distribution function for $\mu$
with a constant prior,
\begin{equation}
P(\mu|n;\overline{b},\sigma_b)
=
\frac
{ P(n|\mu;\overline{b},\sigma_b) }
{ \int_0^\infty P(n|\mu;\overline{b},\sigma_b) \ \mathrm{d}\mu }
\,,
\label{bayes1}
\end{equation}
one obtains
\begin{equation}
P(\mu|n;\overline{b},\sigma_b)
=
e^{-\mu}
\left(
\sum_{k=0}^{n}
\frac{ \mu^{n-k} }{ (n-k)! }
\
J_k(\overline{b},\sigma_b)
\right)
\left(
\sum_{k=0}^{n}
J_k(\overline{b},\sigma_b)
\right)^{-1}
\,.
\label{bayes2}
\end{equation}
Hence,
the reference value $\mu_{\mathrm{ref}}(n;\overline{b},\sigma_b)$,
which is the Bayesian expected value for $\mu$,
is given by
\begin{equation}
\mu_{\mathrm{ref}}(n;\overline{b},\sigma_b)
=
n + 1
-
\left(
\sum_{k=0}^{n}
k
\
J_k(\overline{b},\sigma_b)
\right)
\left(
\sum_{k=0}^{n}
J_k(\overline{b},\sigma_b)
\right)^{-1}
\,.
\label{mu_ref1}
\end{equation}
If $ \sigma_b \lesssim \overline{b}/3 $
the quantities
$J_k(\overline{b},\sigma_b)$
are given by Eq.(\ref{jeik1})
and one can see that they are all positive.
Hence,
the inequality
$
\sum_{k=0}^{n}
k
\
J_k(\overline{b},\sigma_b)
\leq
n
\sum_{k=0}^{n}
J_k(\overline{b},\sigma_b)
$
implies that
$\mu_{\mathrm{ref}}(n;\overline{b},\sigma_b)\geq1$
as in the case $\sigma_b=0$
(see Eq.(\ref{mu_ref})
and the following discussion).
On the other hand,
the general formula (\ref{jeik2})
allows
$J_k(\overline{b},\sigma_b)$
to be negative and
$\mu_{\mathrm{ref}}(n;\overline{b},\sigma_b)$
is not guaranteed to be larger than one
if $ \sigma_b \gtrsim \overline{b}/3 $.
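In the small-error regime, Eqs.(\ref{jeik1}) and (\ref{mu_ref1}) translate
directly into code (a sketch with our function names; here the
$J_k(\overline{b},\sigma_b)$ are positive and
$\mu_{\mathrm{ref}}\geq1$):
\begin{verbatim}
import math

def J_k(k, b_bar, sigma_b):
    # Eq. (jeik1), valid for sigma_b <~ b_bar/3
    c = b_bar - sigma_b ** 2
    return sum(c ** (k - 2 * j) * sigma_b ** (2 * j)
               / (math.factorial(k - 2 * j) * math.factorial(j) * 2 ** j)
               for j in range(k // 2 + 1))

def mu_ref(n, b_bar, sigma_b):
    # Bayesian expected value of mu, Eq. (mu_ref1)
    Js = [J_k(k, b_bar, sigma_b) for k in range(n + 1)]
    return n + 1 - sum(k * J for k, J in enumerate(Js)) / sum(Js)

print(mu_ref(4, 3.0, 1.0))   # always >= 1 in this regime
\end{verbatim}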
The 90\% CL
confidence belts in the Alternative Unified Approach for
$\overline{b}=3$
and
$\sigma_b=0,1,1.8$
are plotted in Fig.~\ref{fig2}.
One can see again that the width of the confidence belt
increases with $\sigma_b$.
The behaviour of the upper bound
$\mu_1(n=0;\overline{b},\sigma_b,\alpha)$
as a function of $\sigma_b$
is similar to the one obtained in the Unified Approach
and the same caveats apply
to the physical interpretation of the observation of $n=0$ events.
\section{Conclusions}
\label{Conclusions}
We have presented the formalism that allows the
error $\sigma_b$ of the calculated mean background $\overline{b}$ in the
statistical analysis of a Poisson process with the frequentist method to
be taken into account.
This error must be taken into account if it is not much smaller
than $\sqrt{\overline{b}}$,
which represents
the rms fluctuation of the number of
background events.
We have considered in particular
the Unified Approach~\cite{Feldman-Cousins-98}
and the Alternative Unified Approach~\cite{Giunti98-poisson},
that guarantee by construction a correct frequentist coverage.
We have shown that the width of the classical confidence belt
increases with $\sigma_b$,
leading to wider confidence intervals
for the mean $\mu$ of signal events.
The only exception to this behaviour is
the upper bound
$\mu_1(n=0;\overline{b},\sigma_b,\alpha)$,
which decreases as $\sigma_b$ increases,
for large values of $\sigma_b$,
in both approaches.
Hence,
the physical interpretation of the observation of $n=0$ events
when $\sigma_b$ is large
should be very cautious
and the effort towards a better understanding
of the background should receive high priority.
\acknowledgements
I would like to thank S. Yellin for suggesting the simplest way to perform the integral in
Eq.(\ref{pnm1}).
\section{Introduction}\label{sec:intro}
The advent of mobile devices and mobile networks has given rise to a new class of services known as location-based services (LBS). LBS systems enable service providers (SPs) to provide users with accurate services based on their geographical locations. Nowadays, an increasing number of users rely on LBS systems to query nearby Points of Interest (PoIs), including shopping centers, restaurants, banks and hospitals, as well as traffic information, navigation, {\em etc.} However, to query a service, a user must reveal her location to the service provider (SP). Hence, an untrusted SP can profile a user's movements by tracing her location and infer personal information, such as her workplace, health condition, commercial partners, {\em etc}. This raises a serious privacy issue.
To protect users' location privacy, privacy-preserving LBS schemes have been proposed in which either a semi-trusted third party (TTP) is required or the computation cost of a query is linear in the size of the queried area. However, in practice it is difficult to find a party that can act as a semi-trusted TTP in LBS schemes, and mobile devices have constrained computation power and limited storage space.
Considering the above problems, an oblivious location-based service query (OLBSQ) scheme is proposed to enhance the security of SPs' services and protect users' location privacy. In particular, our OLBSQ scheme provides mobile users with a lightweight query algorithm whose computation cost is constant.
\subsection{Related Work}
Because they can provide accurate services, LBS schemes are becoming increasingly popular. Nevertheless, location privacy has been the primary concern of LBS users. To protect users' location privacy, privacy-preserving LBS schemes have been proposed.
\subsubsection{Privacy-Preserving LBS with A Trusted Third Party}\hfill
\medskip
\noindent In these schemes, to protect mobile users' location privacy, a trusted third party called the {\em location anonymizer} is required to blur a user's exact location into a cloaked area. Meanwhile, the cloaked area must satisfy the user's privacy requirements. The most popular privacy requirement is $k$-anonymity, namely that a user's location is indistinguishable from those of $k-1$ other users.
Gruteser and Grunwald \cite{gg:lbs2003} proposed an anonymous LBS scheme in which the location anonymizer removes any identifiers, such as network addresses, and perturbs the position data. In \cite{gg:lbs2003}, the location anonymizer knows users' locations, and users need to periodically update their location information with the location anonymizer.
Proposed by Mokbel, Chow and Aref \cite{mca:lbs2006}, $Casper^{*}$ is a privacy-aware query processing method for LBS. In $Casper^{*}$ \cite{mca:lbs2006}, the location anonymizer blurs users' exact locations into cloaked spatial areas, and a privacy-aware query processor embedded in the database deals with queries based on the cloaked spatial areas. The privacy-aware query processor supports three types of queries: private queries over public data, public queries over private data, and private queries over private data.
Xu and Cai \cite{xc:lbs2007} addressed the location anonymity issue in continuous LBS schemes. In \cite{xc:lbs2007}, entropy was used to measure the anonymity degree of a cloaking area, considering both the number of users and their anonymity probability distribution in the cloaking area. When issuing a query, a mobile user sends her query and desired anonymity level to the location anonymizer, which then generates a session identity for the user and contacts the service provider to establish a service session. After a service session is established, the location anonymizer periodically identifies a cloaking area for the user according to her latest location, and reports the cloaking area to the service provider. Furthermore, a polynomial-time algorithm was proposed to find a cloaking area satisfying the anonymity requirement.
Kalnis {\em et al.} \cite{kgmp:bls2007} proposed a framework to prevent location-based identity inference of users. In \cite{kgmp:bls2007}, when receiving a query, the location anonymizer first removes the user's identity, and uses an anonymizing spatial region to hide the user's location. This framework optimizes the processing of both location anonymity and spatial queries.
Gedik and Liu \cite{gl:bls2008} introduced a scalable architecture to protect users' location privacy. The architecture consists of a personalised location anonymity model and a set of location perturbation algorithms. In \cite{gl:bls2008}, upon receiving a query from a user, the location anonymizer removes the identity of the user and perturbs her location by replacing a 2-dimensional point with a spatial cloaking range. Notably, users are allowed to specify the minimum level of anonymity and the maximum temporal and spatial tolerances.
Chen {\em et al.} \cite{chycdx:bls2018} proposed a new scheme to protect users' location privacy in which redundant point-of-interest (POI) records are used. When receiving a query from a user, the location anonymizer first generates a $k$-anonymity rectangle area for the user, and then sends the anonymous query to the service provider. Notably, a blind filter scheme was proposed to enable the location anonymizer to filter out the redundant POI records on behalf of users.
To leverage spatial diversity in LBS, He {\em et al.} \cite{hjd:lbs2018} first proposed ambient environment-dependent location privacy metrics and a stochastic model, and then developed an optimal stopping-based LBS scheme which enables users to leverage spatial diversity.
Grissa {\em et al.} \cite{gyh:loc2017} proposed two schemes to protect the location privacy of secondary users, where a TTP named the fusion centre (FC) is required to orchestrate the sensing operation. The first scheme is based on order-preserving encryption (OPE) and has lower communication overhead, while the second scheme is based on a secure comparison protocol and has lower architectural cost.
Schlegel {\em et al.} \cite{schw:grid2015} proposed a user-defined privacy LBS scheme called the dynamic grid system (DGS), which supports both privacy-preserving continuous $k$-nearest-neighbor ($k$-NN) and range queries. In \cite{schw:grid2015}, each user generates a grid structure according to her privacy requirement and embeds it into an encrypted query area.
When making a query, a user encrypts a secret key $K$ and the grid structure by using an identity-based encryption scheme, and sends the ciphertexts to the service provider. Subsequently, the user generates an encrypted identifier for each cell in the intended area using a deterministic encryption technique, and sends them to the TTP. To process a query, the service provider decrypts the ciphertext and obtains the secret key and the grid structure. The service provider uses the secret key and the deterministic encryption technique to generate encrypted identifiers for all cells in which POIs exist. Later, the service provider sends all the encrypted identifiers to the TTP. The TTP matches the encrypted identifiers from the user against those from the service provider, and sends the matching encrypted identifiers to the user. Finally, the user can decrypt the encrypted identifiers and learn the locations of the POIs.
Notably, the communication cost to generate a query is linear in the number of POIs in the vicinity and independent of the number of cells in the grid.
In the above schemes, a TTP is required to protect users' location privacy. However, in practice, it is difficult to find an entity which can play the role of the TTP.
\subsubsection{Privacy-Preserving LBS without A Trusted Third Party}\hfill
\medskip
Chow, Mokbel and Liu \cite{cml:lbs2006} proposed a peer-to-peer (P2P) spatial cloaking scheme which enables users to obtain services without the need of a TTP. Prior to making a query, a user needs to form a group with her peers via single-hop communication/multi-hop routing. The spatial cloaked area must cover all peers in the group. Furthermore, the user randomly selects one peer in the group as her agent and sends both her query and the cloaked spatial region to the agent. The agent forwards the query to the service provider and receives a list of answers including actual answers and false answers. Then, the agent sends the answers to the user. Finally, the user filters out the false answers and obtains the actual answers. The P2P spatial cloaking scheme supports two models: an on-demand model and a proactive model. Comparatively, the on-demand model is more efficient, but requires a longer response time.
Ghinita, Kalnis and Skiadopoulos \cite{gks:lbs2007} proposed a decentralised LBS scheme named $\mbox{PRIV}\acute{E}$, in which users organise themselves into a hierarchical overlay network and make service queries anonymously. Each user can decide the degree $k$ of anonymity, and the $\mbox{PRIV}\acute{E}$ algorithm can identify an appropriate set consisting of $k$ users in a distributed manner. To protect users' anonymity, the HILB-ASR algorithm was proposed to guarantee that the probability of identifying a real service requester is always bounded by $\frac{1}{k}$. This scheme is scalable and fault tolerant.
Paulet {\em et al.} \cite{pkyb:loc2014} proposed a privacy-preserving and content-protecting LBS scheme. This scheme was derived from the oblivious transfer (OT) scheme \cite{np:ot1999} and private information retrieval (PIR) \cite{gr:pir2005}. Each user first runs the OT protocol with the service provider to obtain the location identity and a secret key, and then executes the PIR protocol with the service provider to obtain the location data by using the secret key. The authors formalised the security model and analysed the security of the proposed scheme.
Schlegel {\em et al.} \cite{schw:loc2017} proposed an order-retrievable encryption (ORE) scheme with the following two properties: (1) it can generate an encrypted query location; (2) given two encrypted user locations, a server can determine which one is closer to an encrypted query location. Subsequently, based on the proposed ORE scheme, a privacy-preserving location sharing service scheme was presented. In \cite{schw:loc2017}, a user, called the group initiator, creates a group. The group initiator generates a shared key for the ORE scheme and a shared key for the AES scheme. Every user in the group periodically uploads her location information to a database server using the ORE and AES techniques. When receiving an encrypted query location, the server can search out the exact answer without knowing the location information. Finally, the user can use the shared AES key to decrypt the ciphertext and obtain the location information. In \cite{schw:loc2017}, a group of users need to share keys prior to sharing location information.
Hu {\em et al.} \cite{hwhhlc:bls2018} proposed an LBS scheme with query content privacy based on homomorphic encryption, OT and PIR. In \cite{hwhhlc:bls2018}, a user can obtain accurate services without releasing any query content information to the server. The homomorphic encryption is used to compute the Euclidean distance between the attribute vector submitted by a user and the attribute vectors in the database. The OT protocol is used to find the vectors that exactly match the queried attribute vector. Finally, the PIR protocol is applied to obtain the intended POI set. The security of the proposed scheme was analysed heuristically, without a formal reduction.
In these schemes \cite{cml:lbs2006,gks:lbs2007,pkyb:loc2014,schw:loc2017,hwhhlc:bls2018}, both the computation and communication costs to generate a query are linear in the size of the queried area. This is undesirable for devices with limited computation power and storage space, such as smartphones and tablets.
\subsection{Contributions}
To protect users' location privacy, we propose an OLBSQ scheme which provides the following important features: (1) a semi-trusted TTP is not required; (2) a user can query services from a service provider without revealing her exact location; (3) a service provider can only learn the size of a query made by a user; and (4) both the computation cost and the communication cost to generate a query are constant, rather than linear in the size of the queried area.
Our contributions include: (1) both the definition and the security model of the proposed OLBSQ scheme are formalised; (2) a concrete OLBSQ scheme is proposed; (3) the security of the proposed OLBSQ scheme is reduced to well-known complexity assumptions.
\subsection{Organization}
The remainder of this paper is organised as follows. Preliminaries used throughout this paper are introduced in Section \ref{sec:preli}. In Section \ref{sec:const}, we formally present our construction.
In Section \ref{sec:analy}, we prove the security of our scheme. Finally, Section \ref{sec:conc} concludes this paper.
\section{Preliminaries}\label{sec:preli}
In this section, all preliminaries used throughout this paper are introduced.
\subsection{Formal Definition}
\begin{figure}
\centering
\includegraphics[width=12cm,height=6cm]{frame_1.pdf}
\caption{The Framework of Our OLBSQ Scheme}\label{framework}
\end{figure}
Let $\mathfrak{L}$ be a location structure (e.g. a grid) and $O$ be a point in $\mathfrak{L}$. By $(O;S)$, we denote the area with start point $O$ and size $S$ in $\mathfrak{L}$. For example, if $\mathfrak{L}$ is a grid system, $(O=(i,j);S=l\times k)$ is the area consisting of the $l\times k$ contiguous cells whose bottom-left point is $O$. Let $\mathfrak{D}$ be the services included in $\mathfrak{L}$ and $\mathfrak{D}'$ be the encrypted services. $\hat{\mathfrak{D}}\in (O;S)$ stands for the services included in the area $(O;S)$. Fig. \ref{framework} describes the framework of our OLBSQ scheme. The service provider $\mathcal{SP}$ first generates a secret key $SK$ and some public parameters $PP$, and selects a location structure $\mathfrak{L}$. Suppose that $\mathcal{SP}$ has a set of services $\mathfrak{D}$; he encrypts each service in $\mathfrak{D}$ by using $SK$ and its location information, and obtains an encrypted set of services $\mathfrak{D}'$. To query the services included in an area, a user $\mathcal{U}$ selects a start point $O$ and the query size $S$, and then commits $O$ into a point $O'$. Furthermore, $\mathcal{U}$ generates a proof $\prod$ that the queried area starting from $O$ with size $S$ is included in $\mathfrak{L}$. $\mathcal{U}$ sends $(O',S,\prod)$ to $\mathcal{SP}$. If $\prod$ is correct, $\mathcal{SP}$ uses $SK$ to obliviously and incrementally compute a set of keys $\hat{\mathfrak{D}}'$ according to $O'$ and $S$, and sends $\hat{\mathfrak{D}}'$ to $\mathcal{U}$. Finally, $\mathcal{U}$ de-commits $\hat{\mathfrak{D}}'$ and obtains a set of decryption keys $\hat{\mathfrak{D}}$ which enable her to access the intended services.
\medskip
An OLBSQ scheme consists of the following two algorithms:
\begin{itemize}
\item{\sf Setup}$(1^{\ell},\mathfrak{L},\mathfrak{D})\rightarrow(SK,PP,\mathfrak{D}').$ Taking as input a security parameter $1^{\ell}$, a location structure $\mathfrak{L}$ and a set of services $\mathfrak{D}$, this algorithm outputs a secret key $SK$ for $\mathcal{SP}$, some public parameters $PP$ and the encrypted services $\mathfrak{D}'$.
\item{\sf Service-Transfer}$(\mathcal{U}(O,S,PP)\leftrightarrow \mathcal{SP}(PP,SK))\rightarrow(\hat{\mathfrak{D}},(O',S,\prod))$. This is an interactive algorithm executed between a user $\mathcal{U}$ and the service provider $\mathcal{SP}$. $\mathcal{U}$ takes as input the public parameters $PP$, the start point $O$ and the query size $S$, and outputs the intended services $\hat{\mathfrak{D}}\subset \mathfrak{D}$. $\mathcal{SP}$ takes as input the public parameters $PP$ and the secret key $SK$, and outputs the committed start point $O'$, the query size $S$ and a proof $\prod$ that the queried area with start point $O$ and size $S$ is in $\mathfrak{L}$.
\end{itemize}
\begin{definition}
We say that an oblivious location-based service query scheme is correct if and only if
\begin{equation*}
\Pr\left[ \begin{array}{c|l}
& {\sf Setup}(1^{\ell},\mathfrak{L},\mathfrak{D})\rightarrow(SK,PP,\mathfrak{D}');\\
\hat{\mathfrak{D}}\subset \mathfrak{D}~\wedge~ \hat{\mathfrak{D}}\in (O;S) &{\sf Service-Transfer}(\mathcal{U}(PP,O,S)\leftrightarrow \\
& \mathcal{SP}(PP,SK))\rightarrow(\hat{\mathfrak{D}},(O',S,\prod));\\
& \prod ~\mbox{is correct.}
\end{array}
\right]=1.
\end{equation*}
\end{definition}
\subsection{Security Model}
The security model of OLBSQ schemes is formalised by using the simulation-based model \cite{cdn:otac2009,cns:ot2007,j:sim2018,pw:sim2001}, in which a real world experiment and an ideal world experiment are defined. In the real world experiment, there are some parties who run the protocol: an adversary $\mathcal{A}$ who controls some of the parties, and an environment $\mathcal{E}$ who provides inputs to all honest parties and interacts arbitrarily with $\mathcal{A}$. The dishonest parties are controlled by $\mathcal{A}$. In the ideal world experiment, there are the same parties as in the real world experiment. Notably, these parties do not run the protocol. They submit their inputs to an ideal functionality $\mathcal{F}$ and receive outputs from $\mathcal{F}$. $\mathcal{F}$ specifies the behaviour that the desired protocol should implement in the real world. $\mathcal{E}$ provides inputs to and receives outputs from the honest parties. Let $\mathcal{S}$ be a simulator who controls the dishonest parties in the ideal world experiment as $\mathcal{A}$ does in the real world experiment. Furthermore, $\mathcal{E}$ interacts with $\mathcal{S}$ arbitrarily.
\begin{definition}
Let ${\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}$ be the probability with which $\mathcal{E}$ runs the protocol $\mathcal{P}$ with $\mathcal{A}$ and outputs 1 in the real world experiment. Let ${\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}$ be the probability with which $\mathcal{E}$ interacts with $\mathcal{S}$ and $\mathcal{F}$, and outputs 1 in the ideal world experiment. We say that the protocol $\mathcal{P}$ securely realizes the functionality $\mathcal{F}$ if
\begin{equation*}
\left| {\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}-{\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}\right|\leq \epsilon(\ell).
\end{equation*}
\end{definition}
\medskip
The ideal functionality of OLBSQ schemes is formalized in Fig. \ref{fig:fun}.
\begin{figure}
\centering
\fbox{
\begin{minipage}{14cm}
\begin{center} {\bf Functionality: $\mathcal{F}_{OLBSQ}$}\end{center}
\medskip
$\mathcal{F}_{OLBSQ}$ is executed among a service provider $\mathcal{SP}$, a user $\mathcal{U}$ and an adversary $\mathcal{S}$, and works as follows:
\begin{itemize}
\item Upon receiving a message $(sid, service\_provider,\mathfrak{L},\mathfrak{D})$ from $\mathcal{SP}$, store $(\mathfrak{L},\mathfrak{D})$.
\medskip
\item Upon receiving a message $(sid, user, O,S)$ from $\mathcal{U}$, check whether the message $(sid, service\_provider,\cdots)$ was previously stored. If no such message was stored, send nothing to $\mathcal{U}$; otherwise, send $(sid,service\_request)$ to $\mathcal{SP}$ and receive a response $(sid,b\in\{0,1\})$. Pass $(sid,b\in\{0,1\})$ to $\mathcal{S}$. If $b=0$, send $(sid,\perp)$ to $\mathcal{U}$. If $b=1$, send $(sid,\hat{\mathfrak{D}})$ to $\mathcal{U}$ where $\hat{\mathfrak{D}}\in (O;S)\subset \mathfrak{L}$.
\end{itemize}
\end{minipage}
}
\caption{The Functionality of Oblivious Location-Based Service Query Schemes}\label{fig:fun}
\end{figure}
\subsection{Bilinear Map and Complexity Assumptions}
Let $\mathbb{G}_{1}$, $\mathbb{G}_{2}$ and $\mathbb{G}_{\tau}$ be three cyclic groups with prime order $p$. A map $e:\mathbb{G}_{1}\times\mathbb{G}_{2}\rightarrow\mathbb{G}_{\tau}$ is a bilinear map if it satisfies the following properties:
\begin{enumerate}
\item{\sf Bilinearity.} For all $g\in\mathbb{G}_{1}$, $h\in\mathbb{G}_{2}$ and $x,y\in\mathbb{Z}_{p}$, $e(g^{x},h^{y})=e(g^{y},h^{x})=e(g,h)^{xy}$;
\medskip
\item{\sf Non-degeneracy.} $e(g,h)\neq 1_{\tau}$ for generators $g$ of $\mathbb{G}_{1}$ and $h$ of $\mathbb{G}_{2}$, where $1_{\tau}$ is the identity of $\mathbb{G}_{\tau}$;
\medskip
\item{\sf Efficiency.} For all $g\in\mathbb{G}_{1}$ and $h\in\mathbb{G}_{2}$, there is an efficient algorithm to compute $e(g,h)$.
\end{enumerate}
If $\mathbb{G}_{1}=\mathbb{G}_{2}$, $e$ is called a symmetric bilinear map.
Let $\mathcal{BG}(1^{\ell})\rightarrow(e,p,\mathbb{G},\mathbb{G}_{\tau})$ be a generator of symmetric bilinear groups which takes as input a security parameter $1^{\ell}$ and outputs a bilinear group $(e,p,\mathbb{G},\mathbb{G}_{\tau})$ with prime order $p$ and $e:\mathbb{G}\times\mathbb{G}\rightarrow\mathbb{G}_{\tau}$.
\begin{definition}{\sf ($q$-Strong Diffie-Hellman ($q$-SDH) Assumption \cite{bb:ss2007}).} Let $\mathcal{BG}(1^{\ell})\rightarrow(e,p,\mathbb{G},\mathbb{G}_{\tau})$ and $\zeta\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$. Suppose that $g$ is a generator of $\mathbb{G}$. Given $(g,g^{\zeta},g^{\zeta^{2}},\cdots,g^{\zeta^{q}})$, we say that the $q$-SDH assumption holds on the bilinear group $(e,p,\mathbb{G},\mathbb{G}_{\tau})$ if all probabilistic polynomial-time adversaries $\mathcal{A}$ can output $(c,g^{\frac{1}{\zeta+c}})$ only with a negligible advantage, namely
\begin{equation*}
Adv_{\mathcal{A}}^{\mbox{q-SDH}}=\left|\Pr\left[\mathcal{A}(g,g^{\zeta},g^{\zeta^{2}},\cdots,g^{\zeta^{q}})\rightarrow (c,g^{\frac{1}{\zeta+c}})\right]\right|\leq \epsilon(\ell)
\end{equation*}
where $c\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$ and $c\neq -\zeta$.
\end{definition}
\begin{definition}{\sf ($q$-Power Decisional Diffie-Hellman ($q$-PDDH) Assumption \cite{cns:ot2007}).} Let $\mathcal{BG}(1^{\ell})\rightarrow(e,p,$ $\mathbb{G},\mathbb{G}_{\tau})$, $g$ be a generator of $\mathbb{G}$ and $\zeta\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$. Given $(g,g^{\zeta},g^{\zeta^{2}},\cdots,g^{\zeta^{q}},H)$, we say that the $q$-PDDH assumption holds on $(e,p,\mathbb{G},\mathbb{G}_{\tau})$ if all probabilistic polynomial-time adversaries $\mathcal{A}$ can distinguish $T=(H^{\zeta},H^{\zeta^{2}},\cdots,H^{\zeta^{q}})$ from $T=(\tilde{H}_{1},\tilde{H}_{2},\cdots,\tilde{H}_{q})$ only with a negligible advantage, namely
\begin{equation*}
\begin{split}
Adv_{\mathcal{A}}^{\mbox{q-PDDH}}=& \Big|\Pr[\mathcal{A}(g,g^{\zeta},g^{\zeta^{2}},\cdots,g^{\zeta^{q}},H,H^{\zeta},H^{\zeta^{2}},\cdots,H^{\zeta^{q}})=1]-\\
& \Pr[\mathcal{A}(g,g^{\zeta},g^{\zeta^{2}},\cdots,g^{\zeta^{q}},H,\tilde{H}_{1},\tilde{H}_{2},\cdots,\tilde{H}_{q})=1]\Big|\leq \epsilon(\ell)
\end{split}
\end{equation*}
where $H,\tilde{H}_{1},\tilde{H}_{2},\cdots,\tilde{H}_{q}\stackrel{R}{\leftarrow}\mathbb{G}_{\tau}$.
\end{definition}
\section{Construction}\label{sec:const}
In this section, we describe the formal construction of our OLBSQ scheme.
\subsection{High-Level Overview}
To construct our scheme, we use the grid structure described in Fig. \ref{grid}. The location of each cell is determined by the coordinate of the point at its upper-right corner. All services included in a cell are encrypted under the same key. Firstly, the service provider divides the whole area into $m\times n$ cells, and then generates a secret key and some public parameters. The service provider encrypts each service in a cell by using his secret key and the coordinate of the cell. Finally, the service provider publishes the public parameters and the encrypted services.
When making a service query, a user selects a start point $O=(i,j)$ and the query size $S=l\times k$, where $l$ and $k$ are the numbers of cells in each row and each column, respectively. The user commits $O=(i,j)$ into a point $O'$, generates a proof $\prod_{U}$ that the queried area $(O';S)$ is included in $\mathfrak{L}$, and sends $(O', S,\prod_{U})$ to the service provider. After receiving $(O', S,\prod_{U})$, the service provider first checks the correctness of $\prod_{U}$, and then uses his secret key to obliviously and incrementally compute a set of keys according to $O'$ and $S$. Furthermore, the service provider generates a proof $\prod_{SP}$ that these keys were computed correctly, and sends the keys together with $\prod_{SP}$ to the user. The user verifies the proof $\prod_{SP}$, de-commits the keys and obtains the corresponding decryption keys. Finally, the user decrypts the ciphertexts and obtains the intended services. Notably, to retrieve a service, the user only needs to execute 3 exponentiations on $\mathbb{G}_{\tau}$.
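The grid bookkeeping behind a query is straightforward; the following Python
sketch (our helper, not part of the protocol) lists the cells a query
$(O=(i,j);S=l\times k)$ retrieves, together with the range conditions that
the proof $\prod_{U}$ certifies:
\begin{verbatim}
def queried_cells(i, j, l, k, m, n):
    # range conditions certified by the proof Pi_U
    assert 1 <= i <= m and 1 <= j <= n, "start point outside the grid"
    assert i + l <= m and j + k <= n, "queried area leaves the grid"
    # cells whose decryption keys the user recovers: (i+mu, j+nu)
    return [(i + mu, j + nu) for mu in range(1, l + 1)
                             for nu in range(1, k + 1)]
\end{verbatim}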
\begin{figure}
\centering
\includegraphics[width=10cm,height=6cm]{model}\caption{Grid Location Model of Our Scheme}\label{grid}
\end{figure}
\subsection{Our Construction}
Our OLBSQ scheme is presented in Fig. \ref{fig:setup} and Fig. \ref{fig:service_tr}.
\medskip
\noindent{\bf Setup.} The service provider $\mathcal{SP}$ first divides the whole area $\mathfrak{L}$ into $m\times n$ cells. $\mathcal{SP}$ generates a bilinear group by running $\mathcal{BG}(1^{\ell})\rightarrow(e,p,\mathbb{G},\mathbb{G}_{\tau})$, and then selects its secret key $SK=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},x,y,\mathfrak{h})$ where $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},x,y\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$ and $\mathfrak{h}\stackrel{R}{\leftarrow}\mathbb{G}$. To encrypt the service $M_{i,j}$ in a cell $C(i,j)$ using its coordinate $(i,j)$, $\mathcal{SP}$ computes $A_{i,j}=g_{1}^{i}h_{1}^{j}g_{2}^{x^{i}}h_{2}^{y^{j}}$ and $B_{i,j}=e(A_{i,j},\mathfrak{h})\cdot M_{i,j}$ for $i=1,2,\cdots,m$ and $j=1,2,\cdots,n$. To enable each user $\mathcal{U}$ to prove that a committed point is in the whole area, and $\mathcal{SP}$ to obliviously and incrementally generate decryption keys according to $\mathcal{U}$'s query, $\mathcal{SP}$ computes $H=e(\mathfrak{g},\mathfrak{h})$,
$W_{1}=g_{1}^{\alpha_{1}}$, $W_{2}=g_{2}^{\alpha_{2}}$, $W_{1}'=h_{1}^{\beta_{1}}$, $W_{2}'=h_{2}^{\beta_{2}}$, $ \Gamma_{1}^{i}=g_{1}^{\frac{1}{\alpha_{1}+i}}$, $\Gamma_{2}^{j}=h_{1}^{\frac{1}{\beta_{1}+j}}$, $(C_{i,1}=g_{2}^{x^{i}},C_{i,2}=g_{2}^{\frac{1}{\alpha_{2}+x^{i}}},C_{i,3}=e(\mathfrak{g},\mathfrak{h})^{x^{i}})$, $(D_{j,1}=h_{2}^{y^{j}},D_{j,2}=h_{2}^{\frac{1}{\beta_{2}+y^{j}}},D_{j,3}=e(\mathfrak{g},\mathfrak{h})^{y^{j}})$ for $i=1,2,\cdots,m$ and $j=1,2,\cdots,n$. Here, $(W_{1},W_{2},W'_{1},W'_{2},\Gamma_{1}^{i},\Gamma_{2}^{j},C_{i,2},D_{j,2})$ are used by $\mathcal{U}$ to prove that a committed start point $O=(i,j)$ is within $\mathfrak{L}$ for $i=1,2,\cdots,m$ and $j=1,2,\cdots,n$, while the other parameters are used by $\mathcal{SP}$ to compute decryption keys. Finally, the public parameters are $PP=\Big(e,p,\mathbb{G},\mathbb{G}_{\tau},\mathfrak{g},g_{1},g_{2},h_{1},h_{2},H,W_{1},$ $W_{2},W'_{1},W'_{2},\Gamma_{1}^{1},\cdots,\Gamma_{1}^{m},\Gamma_{2}^{1},\cdots,\Gamma_{2}^{n},(A_{1,1},B_{1,1}),\cdots,\\(A_{m,n},B_{m,n}),(C_{1,1},$ $C_{1,2},C_{1,3}),\cdots,(C_{m,1},$ $C_{m,2},C_{m,3}),(D_{1,1},D_{1,2},D_{1,3}),\cdots,(D_{n,1},D_{n,2},D_{n,3})\Big)$ and $\mathfrak{D}'=\left\{((A_{i,j}, B_{i,j})_{i=1}^{m})_{j=1}^{n}\right\}$.
\begin{figure}
\centering
\fbox{
\begin{minipage}{15.5cm}
\begin{center}{\bf Setup}$(1^{\ell}):$\end{center}
$\mathcal{SP}$ divides the whole area into $m\times n$ cells. Let $M_{i,j}\in\mathbb{G}_{\tau}$ be the service in the cell $C(i,j)$. $\mathcal{SP}$ runs $\mathcal{BG}(1^{\ell})\rightarrow(e,p,\mathbb{G},\mathbb{G}_{\tau})$. Let $g_{1},g_{2},h_{1},h_{2},\mathfrak{g},\mathfrak{h}$ be generators of $\mathbb{G}$. $\mathcal{SP}$ selects $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},x,y\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$, and computes $H=e(\mathfrak{g},\mathfrak{h})$,
$W_{1}=g_{1}^{\alpha_{1}}$, $W_{2}=g_{2}^{\alpha_{2}}$, $W_{1}'=h_{1}^{\beta_{1}}$, $W_{2}'=h_{2}^{\beta_{2}}$, $ \Gamma_{1}^{i}=g_{1}^{\frac{1}{\alpha_{1}+i}}$, $\Gamma_{2}^{j}=h_{1}^{\frac{1}{\beta_{1}+j}}$, $A_{i,j}=(g_{1}^{i}h_{1}^{j}g_{2}^{x^{i}}h_{2}^{y^{j}})$, $B_{i,j}=e(A_{i,j},\mathfrak{h})\cdot M_{i,j}$, $(C_{i,1}=g_{2}^{x^{i}},C_{i,2}=g_{2}^{\frac{1}{\alpha_{2}+x^{i}}},C_{i,3}=e(\mathfrak{g},\mathfrak{h})^{x^{i}})$, $(D_{j,1}=h_{2}^{y^{j}},D_{j,2}=h_{2}^{\frac{1}{\beta_{2}+y^{j}}},D_{j,3}=e(\mathfrak{g},\mathfrak{h})^{y^{j}})$ for $i=1,2,\cdots,m$ and $j=1,2,\cdots,n$. The secret key is $SK=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},x,y,\mathfrak{h})$ and the public parameters are
$PP=\Big(e,p,\mathbb{G},\mathbb{G}_{\tau},\mathfrak{g},g_{1},g_{2},h_{1},h_{2},H,W_{1},W_{2},W'_{1},W'_{2},\Gamma_{1}^{1},\cdots,\Gamma_{1}^{m},\Gamma_{2}^{1},\cdots,\Gamma_{2}^{n},(A_{1,1},B_{1,1}),\cdots,(A_{m,n},B_{m,n}),(C_{1,1},$ $C_{1,2},C_{1,3}),\cdots,(C_{m,1},C_{m,2},C_{m,3}),(D_{1,1},D_{1,2},D_{1,3}),\cdots,(D_{n,1},D_{n,2},D_{n,3})\Big)$ and $\mathfrak{D}'=\left\{((A_{i,j}, B_{i,j})_{i=1}^{m})_{j=1}^{n}\right\}$.
\end{minipage}
}\caption{Setup Algorithm}\label{fig:setup}
\end{figure}
\medskip
\begin{figure}
\centering
\fbox{
\begin{minipage}{15.5cm}
\begin{center}{\sf Service-Transfer}$(\mathcal{U}((i,j),PP)\leftrightarrow \mathcal{SP}(SK,PP)):$\end{center}
\begin{tabular}{lcl}
User: $\mathcal{U}$ & & Service Provider: $\mathcal{SP}$\\
Selects a start point $O=(i,j)$ & ~ $\xleftarrow[H]{\prod_{SP}^{1}}$ ~ & Generates a proof $\prod_{SP}^{1}:$\\
and the query size $S=l\times k$. & & ~$ \mbox{PoK}\left\{(\mathfrak{h}):H=e(\mathfrak{g},\mathfrak{h})\right\}$\\
\medskip\\
Selects $r_{1},r_{2},r_{3},r_{4},r_{5},r_{6},r_{7},r_{8},r_{9},r_{10}\stackrel{\$}{\leftarrow}\mathbb{Z}_{p}$, \\
and computes\\
$E_{1}=\mathfrak{g}^{-r_{1}}g_{1}^{i}$, $E_{2}=\mathfrak{g}^{-r_{2}}h_{1}^{j}$,
$F_{1}=\mathfrak{g}^{r_{3}}C_{i,1}$, $F_{2}=(C_{i,2})^{r_{4}}$,\\
$J_{1}=\mathfrak{g}^{r_{5}}D_{j,1}$, $J_{2}=(D_{j,2})^{r_{6}}$,
$I_{1}=(\Gamma_{1}^{i})^{r_{7}}$, $I_{2}=(\Gamma_{2}^{j})^{r_{8}}$, & &\\
$I_{3}=(\Gamma_{1}^{i+l})^{r_{9}}$, $I_{4}=(\Gamma_{2}^{j+k})^{r_{10}}$
and a proof $\prod_{U}:$\\
$ \mbox{PoK}\Big\{(i,j,r_{1},r_{2},r_{3},r_{4},r_{5},r_{6},r_{7},r_{8},r_{9},r_{10},C_{i,1},$\\
$C_{i,2},D_{j,1},D_{j,2},\Gamma_{1}^{i},\Gamma_{2}^{j},\Gamma_{1}^{i+l},\Gamma_{2}^{j+k}):$\\
$E_{1}=\mathfrak{g}^{-r_{1}}g_{1}^{i}~ \wedge~ E_{2}=\mathfrak{g}^{-r_{2}}h_{1}^{j}~ \wedge$\\
$e(I_{1},W_{1}^{-1})=e(g_{1},g_{1})^{-r_{7}}\cdot e(g_{1},I_{1})^{i}~\wedge$\\
$e(I_{2},(W'_{1})^{-1})=e(h_{1},h_{1})^{-r_{8}}\cdot e(h_{1},I_{2})^{j}~\wedge$\\
$e(I_{3},W_{1}^{-1})\cdot e(g_{1},I_{3})^{-l}=e(g_{1},g_{1})^{-r_{9}}\cdot e(g_{1},I_{3})^{i}~\wedge$\\
$e(I_{4},(W'_{1})^{-1})\cdot e(h_{1},I_{4})^{-k}=e(h_{1},h_{1})^{-r_{10}}\cdot e(h_{1},I_{4})^{j}~\wedge$\\
$ e(F_{1}W_{2},F_{2})=e(\mathfrak{g},F_{2})^{r_{3}}\cdot e(g_{2},g_{2})^{r_{4}}~\wedge$\\
$e(J_{1}W'_{2},J_{2})=e(\mathfrak{g},J_{2})^{r_{5}}\cdot e(h_{2},h_{2})^{r_{6}}~\wedge$\\
$e(E_{1}W_{1},I_{1})=e(\mathfrak{g},I_{1})^{r_{1}}\cdot e(g_{1},g_{1})^{r_{7}}~ \wedge$\\
$ e(E_{2}W'_{1},I_{2})=$ $ e(\mathfrak{g},I_{2})^{r_{2}}\cdot e(h_{1},h_{1})^{r_{8}} ~\wedge $\\
$e(E_{1}g_{1}^{l}W_{1},I_{3})=e(\mathfrak{g},I_{3})^{r_{1}}\cdot e(g_{1},g_{1})^{r_{9}} ~\wedge $\\
$e(E_{2}h_{1}^{k}W'_{1},I_{4})= e(\mathfrak{g},I_{4})^{r_{2}}\cdot e(h_{1},h_{1})^{r_{10}} \Big\}.$ &
~~$\xrightarrow[\Omega_{U}]{\prod_{U}}$~~&For $\mu=1,2,\cdots,l$ and $\nu=1,2,\cdots,k$, \\
Let $\Omega_{U}=(l,k,E_{1},E_{2},F_{1},F_{2},J_{1},J_{2},I_{1},I_{2},$
&& compute $(K_{\mu,\nu}=E_{1}g_{1}^{\mu}E_{2}h_{1}^{\nu}F_{1}^{x^{\mu}}J_{1}^{y^{\nu}},$ \\
\hspace{1.7cm}$I_{3},I_{4})$. & & $L_{\mu,\nu}=e(K_{\mu,\nu},\mathfrak{h})$ and a proof $\prod_{SP}^{2}:$\\
& & $ \mbox{PoK}\Big\{(x,y,\mathfrak{h}):\big((\frac{K_{\mu,\nu}}{E_{1}g_{1}^{\mu}E_{2}h_{1}^{\nu}}=F_{1}^{x^{\mu}}J_{1}^{y^{\nu}}~\wedge $\\
& & \hspace{1.5cm}$ \frac{e(C_{\mu,2},W_{2})}{e(g_{2},g_{2})}=e(C_{\mu,2},g_{2})^{-x^{\mu}}~\wedge$\\
& &
\hspace{1.5cm} $\frac{e(D_{\nu,2},W'_{2})}{e(h_{2},h_{2})}=e(D_{\nu,2},h_{2})^{-y^{\nu}}~\wedge$\\
Computes &~ $\xleftarrow[\Omega_{SP}]{\prod_{SP}^{2}}$ ~ & \hspace{1.5cm} $L_{\mu,\nu}=e(K_{\mu,\nu},\mathfrak{h}))_{\mu=1}^{l}\big)_{\nu=1}^{k}~\wedge$\\
$P_{\mu,\nu}=\frac{L_{\mu,\nu}}{H^{-(r_{1}+r_{2})}\cdot C_{\mu,3}^{r_{3}}\cdot D_{\nu,3}^{r_{5}}}$ and
& & \hspace{1.5cm} $ H=e(\mathfrak{g},\mathfrak{h})\Big\}$.\\
$M_{i+\mu,j+\nu}=\frac{B_{i+\mu,j+\nu}}{P_{\mu,\nu}}$,
& & Let $\Omega_{SP}=\Big((K_{\mu,\nu},L_{\mu,\nu})_{\mu=1}^{l})_{\nu=1}^{k},H\Big)$.\\
for $\mu=1,2,\cdots,l$ and\\
$\nu=1,2,\cdots,k$.
\end{tabular}
\end{minipage}}\caption{Service Transfer Algorithm}\label{fig:service_tr}
\end{figure}
\medskip
\noindent{\bf Service-Transfer.} To make a query, $\mathcal{U}$ first selects a start point $O=(i,j)$ and a query size $S=l\times k$. $\mathcal{SP}$ generates a proof $\prod_{SP}^{1}$ that he knows the value $\mathfrak{h}$ used to encrypt the services. If $\prod_{SP}^{1}$ is correct, $\mathcal{U}$ selects $r_{1},r_{2},r_{3},r_{4},r_{5},r_{6},r_{7},r_{8},r_{9},r_{10}\stackrel{\$}{\leftarrow}\mathbb{Z}_{p}$ and commits $(i,j,x^{i},y^{j},i+l,j+k)$ into $(E_{1},E_{2},F_{1},F_{2},J_{1},J_{2},I_{1},I_{2},I_{3},I_{4})$. Let $\Omega_{U}=(l,k,E_{1},E_{2},F_{1},F_{2},J_{1},J_{2},I_{1},I_{2},I_{3},I_{4})$. Furthermore, $\mathcal{U}$ generates a proof $\prod_{U}$ that the queried area $(O;S)$ is within $\mathfrak{L}$. $\mathcal{U}$ sends $\Omega_{U}$ and $\prod_{U}$ to $\mathcal{SP}$.
If $\prod_{U}$ is correct, $\mathcal{SP}$ obliviously and incrementally computes a set of keys $(K_{\mu,\nu},L_{\mu,\nu})$ using his secret key $(x,y,\mathfrak{h})$, and generates a proof $\prod_{SP}^{2}$ that $K_{\mu,\nu}$ and $L_{\mu,\nu}$ are generated correctly, where $\mu=1,2,\cdots,l$ and $\nu=1,2,\cdots,k$. Let $\Omega_{SP}=\left(((K_{\mu,\nu},L_{\mu,\nu})_{\mu=1}^{l})_{\nu=1}^{k},H\right)$.
$\mathcal{SP}$ sends $\Omega_{SP}$ and $\prod_{SP}^{2}$ to $\mathcal{U}$.
If $\prod_{SP}^{2}$ is correct, $\mathcal{U}$ uses $(r_{1},r_{2},r_{3},r_{5})$ to de-commit the keys $(K_{\mu,\nu},L_{\mu,\nu})$ and obtains $P_{\mu,\nu}=e(g_{1}^{i+\mu}h_{1}^{j+\nu}g_{2}^{x^{i+\mu}}h_{2}^{y^{j+\nu}},\mathfrak{h})$. Furthermore, $\mathcal{U}$ can obtain the services by computing $M_{i+\mu,j+\nu}=\frac{B_{i+\mu,j+\nu}}{P_{\mu,\nu}}$, where $\mu=1,2,\cdots,l$ and $\nu=1,2,\cdots,k$.
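The correctness of the de-commitment step can be checked mechanically. Since
the pairing is symmetric, every element of $\mathbb{G}$ can be represented by
its discrete logarithm with respect to one fixed generator, so that
$e(g^{a},g^{b})=e(g,g)^{ab}$ becomes multiplication modulo $p$; the following
toy generic-group Python sketch (illustrative only, not a secure
implementation; toy parameters are ours) verifies that
$P_{\mu,\nu}=e(A_{i+\mu,j+\nu},\mathfrak{h})$:
\begin{verbatim}
import random

p = (1 << 89) - 1                     # a Mersenne prime as toy group order
r = lambda: random.randrange(1, p)

g1, h1, g2, h2, G, Hh = (r() for _ in range(6))  # dlogs of generators
x, y = r(), r()                                  # part of SP's secret key
i, j, l, k = 5, 7, 2, 3                          # start point, query size
r1, r2, r3, r5 = (r() for _ in range(4))         # user's blinding exponents

E1 = (-r1 * G + i * g1) % p                      # frak-g^{-r1} g1^i
E2 = (-r2 * G + j * h1) % p
F1 = (r3 * G + pow(x, i, p) * g2) % p            # frak-g^{r3} C_{i,1}
J1 = (r5 * G + pow(y, j, p) * h2) % p

mu, nu = 1, 2                                    # one cell inside the area
K = (E1 + mu * g1 + E2 + nu * h1
     + pow(x, mu, p) * F1 + pow(y, nu, p) * J1) % p
L = K * Hh % p                                   # e(K, frak-h)

H  = G * Hh % p                                  # e(frak-g, frak-h)
C3 = H * pow(x, mu, p) % p                       # C_{mu,3}
D3 = H * pow(y, nu, p) % p                       # D_{nu,3}
P  = (L + (r1 + r2) * H - r3 * C3 - r5 * D3) % p # user's de-commitment

A = ((i + mu) * g1 + (j + nu) * h1               # A_{i+mu, j+nu}
     + pow(x, i + mu, p) * g2 + pow(y, j + nu, p) * h2) % p
assert P == A * Hh % p                           # P = e(A, frak-h)
print("de-commitment algebra checks out")
\end{verbatim}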
\subsection{Efficiency Analysis}
The computation cost and communication cost of our OLBSQ scheme are presented in Table \ref{tab:comp} and Table \ref{tab:comm}, respectively. By $\mathbb{E}$, $\mathbb{E}_{\tau}$, $\mathbb{P}$ and $\mathbb{H}$, we denote the time of executing one exponentiation on the group $\mathbb{G}$, one exponentiation on the group $\mathbb{G}_{\tau}$, one pairing and one hash function, respectively. $E_{\mathbb{G}}$, $E_{\mathbb{G}_{\tau}}$ and $E_{\mathbb{Z}_{p}}$ stand for the size of one element of the group $\mathbb{G}$, $\mathbb{G}_{\tau}$ and $\mathbb{Z}_{p}$, respectively.
\begin{table}
\caption{Computation Cost of Our OLBSQ Scheme}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{Algorithm}& \multirow{3}{*}{Setup} & \multicolumn{3}{|c|}{ Service Transfer}\\
\cline{3-5}
& & \multicolumn{2}{|c|}{$\mathcal{U}$} & \multirow{2}{*}{$\mathcal{SP}$}\\
\cline{3-4}
& & Query& Retrieve &\\
\hline
\multirow{2}{*}{Computation Cost} & $(4+3m+3n+4mn)\mathbb{E}$ & $16\mathbb{E}+17\mathbb{E}_{\tau}$ & $3kl\mathbb{E}+2(l+k+lk)\mathbb{E}_{\tau} $& $(11+3kl)\mathbb{E}+(33+2kl)\mathbb{E}_{\tau}$ \\
 & $+(m+n)\mathbb{E}_{\tau}+(1+mn)\mathbb{P}$ & $+15\mathbb{P}+13\mathbb{H}$ & $+2(l+k+lk)\mathbb{P}+2kl\mathbb{H}$& $+(27+4kl)\mathbb{P}+(13+2kl)\mathbb{H}$\\
\hline
\end{tabular}\label{tab:comp}
\end{table}
\begin{table}
\caption{Communication Cost of Our OLBSQ Scheme}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{Algorithm}& \multirow{2}{*}{Setup} & \multicolumn{2}{|c|}{Service Transfer}\\
\cline{3-4}
& & $\mathcal{U}\rightarrow\mathcal{SP}$ & $\mathcal{U}\leftarrow\mathcal{SP}$ \\
\hline
\multirow{2}{*}{Communication Cost} & $(10+3m+3n+mn)E_{\mathbb{G}}+$ & \multirow{2}{*}{$12E_{\mathbb{G}}+16E_{\mathbb{G}_{\tau}}+36E_{\mathbb{Z}_{p}}$} & $(1+3kl)E_{\mathbb{G}}+(2+2kl+l+k)E_{\mathbb{G}_{\tau}}$\\
& $(1+m+n+mn)E_{\mathbb{G}_{\tau}}$ & & $+(1+4kl)E_{\mathbb{Z}_{p}}$\\
\hline
\end{tabular}\label{tab:comm}
\end{table}
\section{Security Analysis}\label{sec:analy}
In this section, the security of our OLBSQ scheme described in Fig. \ref{fig:setup} and Fig. \ref{fig:service_tr} is proven.
\begin{theorem}
Our oblivious location-based service query scheme in Fig. \ref{fig:setup} and Fig. \ref{fig:service_tr} securely realizes the functionality $\mathcal{F}_{OLBSQ}$ in Fig. \ref{fig:fun} under the $q$-SDH and $q$-PDDH assumptions. \label{theo:1}
\end{theorem}
To prove Theorem \ref{theo:1}, we consider the cases where either the user or the service provider is corrupted. We show that there exists a simulator $\mathcal{S}$ that can interact with the ideal functionality $\mathcal{F}_{OLBSQ}$ (simply denoted as $\mathcal{F}$) and the environment $\mathcal{E}$ appropriately, such that ${\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}$ and ${\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}$ are indistinguishable.
In order to prove the indistinguishability between ${\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}$ and ${\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}$, a sequence of hybrid games {\bf Game}$_{0}$, {\bf Game}$_{1}$, $\cdots$, {\bf Game}$_{n'}$ are defined. For each {\bf Game}$_{i}$, we show that there exists a simulator $Sim_{i}$ that runs $\mathcal{A}$ as a subroutine and provides $\mathcal{E}$'s view, for $i=1,2,\cdots,n'$. {\bf Hybrid}$_{\mathcal{E},Sim_{i}}(\ell)$ stands for the probability that $\mathcal{E}$ outputs $1$ running in the world provided by $Sim_{i}$. $Sim_{0}$ runs $\mathcal{A}$ and other honest parties in the real-world experiment, so {\bf Hybrid}$_{\mathcal{E},Sim_{0}}$ $={\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}$. $Sim_{n'}$ runs $\mathcal{S}$ in the ideal-world experiment, so {\bf Hybrid}$_{\mathcal{E},Sim_{n'}}$ $={\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}$.
Therefore,
\begin{equation*}
\begin{array}{ll}
\left|{\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}-{\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}\right| & \leq \left|{\bf Hybrid}_{\mathcal{E},Sim_{0}}-{\bf Hybrid}_{\mathcal{E},Sim_{1}}\right|+\left|{\bf Hybrid}_{\mathcal{E},Sim_{1}}-{\bf Hybrid}_{\mathcal{E},Sim_{2}}\right|\\
&+\cdots+\left|{\bf Hybrid}_{\mathcal{E},Sim_{n'-1}}-{\bf Hybrid}_{\mathcal{E},Sim_{n'}}\right|.
\end{array}
\end{equation*}
\begin{lemma} {\bf (Users' Privacy)} For all environments $\mathcal{E}$ and all real-world adversaries $\mathcal{A}$ that control the service provider, there exists an ideal-world simulator $\mathcal{S}$ such that
\begin{equation*}
\left| {\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}-{\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}\right|\leq \frac{1}{2^{\ell}}.
\end{equation*}\label{l1}
\end{lemma}
\begin{proof} Given a real cheating service provider, we can construct a simulator $\mathcal{S}$ in the ideal world experiment such that no environment $\mathcal{E}$ can distinguish ${\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}$ and ${\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}$.
\medskip
\noindent{\bf Game}$_{0}$: $Sim_{0}$ runs $\mathcal{A}$ and the honest user as in the real-world experiment, hence
\begin{equation*}
{\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}={\bf Hybrid}_{\mathcal{E},Sim_{0}}.
\end{equation*}
\noindent{\bf Game}$_{1}$: $Sim_{1}$ runs the extractor for the proof of knowledge $\prod_{SP}^{1}: \mbox{PoK}\left\{(\mathfrak{h}):H=e(\mathfrak{g},\mathfrak{h})\right\}$ to extract the knowledge $\mathfrak{h}$ at the first service transfer query dictated by $\mathcal{A}$. If the extractor fails to extract $\mathfrak{h}$, $Sim_{1}$ returns $\perp$ to $\mathcal{E}$; otherwise, $Sim_{1}$ runs $\mathcal{A}$ interacting with $\mathcal{U}$. The difference between ${\bf Hybrid}_{\mathcal{E},Sim_{1}}$ and ${\bf Hybrid}_{\mathcal{E},Sim_{0}}$ is the knowledge error of the proof of knowledge $\prod_{SP}^{1}$. Hence,
\begin{equation*}
\left|{\bf Hybrid}_{\mathcal{E},Sim_{0}}-{\bf Hybrid}_{\mathcal{E},Sim_{1}}\right|\leq \frac{1}{2^{\ell}}.
\end{equation*}
\noindent{\bf Game}$_{2}$: $Sim_{2}$ runs exactly as $Sim_{1}$ in {\bf Game}$_{1}$, except that it can retrieve all messages held by $\mathcal{SP}$. $Sim_{2}$ runs $\mathcal{A}$ to obtain the encrypted services $\mathfrak{D}'=\left\{((A_{i,j},B_{i,j})_{i=1}^{m})_{j=1}^{n}\right\}$. $Sim_{2}$ can compute $M_{i,j}=\frac{B_{i,j}}{e(\mathfrak{h},A_{i,j})}$ and $\mathfrak{D}=\left\{M_{i,j}\right\}$ where $i=1,2,\cdots,m$ and $j=1,2,\cdots,n$. Hence,
\begin{equation*}
{\bf Hybrid}_{\mathcal{E},Sim_{1}}={\bf Hybrid}_{\mathcal{E},Sim_{2}}.
\end{equation*}
\noindent{\bf Game}$_{3}$: We construct a simulator $\mathcal{S}$ that plays the role of $\mathcal{A}$ in {\bf Game}$_{2}$. $\mathcal{S}$ simply relays the communications between $\mathcal{E}$ and $\mathcal{A}$. When receiving a message $(sid, service\_provider,\cdots)$, $\mathcal{S}$ returns $\mathfrak{D}$ to $\mathcal{E}$. When receiving a message $(sid,user,O,S)$, $\mathcal{S}$ first checks whether $(O;S)\in \mathfrak{L}$. If it is not, $\mathcal{S}$ returns $(sid,0)$ to $\mathcal{E}$; otherwise, $\mathcal{S}$ returns $(sid,1)$ to $\mathcal{E}$. Hence,
\begin{equation*}
{\bf Hybrid}_{\mathcal{E},Sim_{2}}={\bf Hybrid}_{\mathcal{E},Sim_{3}}={\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}.
\end{equation*}
Therefore,
\begin{equation*}
\begin{array}{c}
\left|{\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}-{\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}\right|
\leq \left|{\bf Hybrid}_{\mathcal{E},Sim_{0}}-{\bf Hybrid}_{\mathcal{E},Sim_{1}}\right|+\left|{\bf Hybrid}_{\mathcal{E},Sim_{1}}-{\bf Hybrid}_{\mathcal{E},Sim_{2}}\right|\\
+\left|{\bf Hybrid}_{\mathcal{E},Sim_{2}}-{\bf Hybrid}_{\mathcal{E},Sim_{3}}\right|\leq \frac{1}{2^{\ell}}.
\end{array}
\end{equation*}
\qed
\end{proof}
\begin{lemma} {\bf (Service Provider's Security)} For all environments $\mathcal{E}$ and all real-world adversaries $\mathcal{A}$ that control the user, there exists an ideal-world simulator $\mathcal{S}$ such that
\begin{equation*}
\left| {\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}-{\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}\right|\leq \frac{1}{p}+2Adv_{\mathcal{A}}^{\mbox{q-SDH}}+Adv_{\mathcal{A}}^{\mbox{q-PDDH}}.
\end{equation*}\label{l1}
\end{lemma}
\begin{proof} Given a real cheating user, we can construct a simulator $\mathcal{S}$ in the ideal-world experiment such that no environment $\mathcal{E}$ can distinguish ${\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}$ from ${\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}$.
\medskip
\noindent{\bf Game}$_{0}$: $Sim_{0}$ runs $\mathcal{A}$ and the honest service provider as in the real-world experiment, hence
\begin{equation*}
{\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}={\bf Hybrid}_{\mathcal{E},Sim_{0}}.
\end{equation*}
\noindent{\bf Game}$_{1}$: $Sim_{1}$ runs exactly as $Sim_{0}$ in {\bf Game}$_{0}$, except that $Sim_{1}$ extracts the knowledge $(i,j,r_{1},r_{2},$ $r_{3},r_{4},r_{5},r_{6},r_{7},r_{8},r_{9},r_{10},C_{i,1},C_{i,2},D_{j,1},D_{j,2},\Gamma_{1}^{i},\Gamma_{2}^{j},\Gamma_{1}^{i+l},\Gamma_{2}^{j+k})$ from the proof $\prod_{U}$: $Sim_{1}$ first generates a simulated proof of $\prod_{SP}^{1}: \mbox{PoK}\left\{(\mathfrak{h}): H=e(\mathfrak{g},\mathfrak{h})\right\}$, and then runs the extractor of the knowledge proof of $\prod_{U}$. Because the knowledge proof of $\prod_{U}$ is perfect zero-knowledge, we have
\begin{equation*}
\left|{\bf Hybrid}_{\mathcal{E},Sim_{0}}-{\bf Hybrid}_{\mathcal{E},Sim_{1}}\right|\leq \frac{1}{p}.
\end{equation*}
\noindent{\bf Game}$_{2}$: $Sim_{2}$ runs exactly as $Sim_{1}$ in {\bf Game}$_{1}$, except that $Sim_{2}$ aborts if (1) $i\notin\{1,2,\cdots,m\}$ or $i+l\notin\{1,2,\cdots,m\}$, or (2) $j\notin\{1,2,\cdots,n\}$ or $j+k\notin\{1,2,\cdots,n\}$.
\begin{clm}\label{clm:1}
If the $q$-SDH assumption holds on $(e,p,\mathbb{G},\mathbb{G}_{\tau})$, we have
\begin{equation*}
\left|{\bf Hybrid}_{\mathcal{E},Sim_{1}}-{\bf Hybrid}_{\mathcal{E},Sim_{2}}\right|\leq 2 Adv_{\mathcal{A}}^{q-SDH}
\end{equation*}
where $q=max\{m+1,n+1\}$.
\end{clm}
\medskip
\noindent{\bf Game}$_{3}:$ $Sim_{3}$ runs exactly as $Sim_{2}$ in {\bf Game}$_{2}$, except that $Sim_{3}$ outputs $(A_{\mu,\nu},L_{\mu,\nu})$ and the proof $\prod_{SP}^{2}$. $Sim_{3}$ computes $A_{\mu,\nu}=\mathfrak{g}^{-(r_{1}+r_{2})}(\mathfrak{g}^{x^{\mu}})^{r_{3}}(\mathfrak{g}^{y^{\nu}})^{r_{5}}g_{1}^{i+\mu}h_{1}^{j+\nu}g_{2}^{x^{i+\mu}}h_{2}^{y^{j+\nu}}$ and $L_{\mu,\nu}=H^{-(r_{1}+r_{2})}\cdot (H^{x^{\mu}})^{r_{3}}(H^{y^{\nu}})^{r_{5}}\cdot \frac{B_{i+\mu,j+\nu}}{M_{i+\mu,j+\nu}}$, and generates a simulated proof of $\prod_{SP}^{2}=\mbox{PoK}\Big\{(x^{\mu},y^{\nu},\mathfrak{h}):\big((\frac{K_{\mu,\nu}}{E_{1}g_{1}^{\mu}E_{2}h_{1}^{\nu}}=F_{1}^{x^{\mu}}H_{1}^{y^{\nu}}\wedge\frac{e(C_{\mu,2},W_{2})}{e(g_{2},g_{2})}=e(C_{\mu,2},g_{2})^{-x^{\mu}}\wedge~\frac{e(D_{\nu,2},W'_{2})}{e(h_{2},h_{2})}=e(D_{\nu,2},h_{2})^{-y^{\nu}}~\wedge~L_{\mu,\nu}=e(\mathfrak{h},K_{\mu,\nu}))_{\mu=0}^{l}\big)_{\nu=0}^{k}\wedge H=e(\mathfrak{g},\mathfrak{h})\Big\}$. Since the zero-knowledge proof is perfect, we have
\begin{equation*}
{\bf Hybrid}_{\mathcal{E},Sim_{2}}={\bf Hybrid}_{\mathcal{E},Sim_{3}}.
\end{equation*}
\noindent${\bf Game}_{4}:$ $Sim_{4}$ runs exactly as $Sim_{3}$ in {\bf Game}$_{3}$, except that the values $(B_{1,1},B_{1,2}, \cdots, B_{m,n})$ are replaced by random elements in $\mathbb{G}_{\tau}$. In this case, the proof $\prod_{SP}^{2}$ in {\bf Game}$_{3}$ is a simulated proof of a false statement.
\begin{clm}\label{clm:2}
If the $q$-PDDH assumption holds on $(e,p,\mathbb{G},\mathbb{G}_{\tau})$, we have that
\begin{equation*}
\left|{\bf Hybrid}_{\mathcal{E},Sim_{3}}-{\bf Hybrid}_{\mathcal{E},Sim_{4}}\right|\leq Adv_{\mathcal{A}}^{q-PDDH}
\end{equation*}
where $q=max\{m^{2},n^{2}\}$.
\end{clm}
\medskip
\noindent{\bf Game}$_{5}:$ We construct a simulator $\mathcal{S}$ that plays the role of $\mathcal{A}$ in {\bf Game}$_{4}$. $\mathcal{S}$ only forwards the communications between $\mathcal{E}$ and $\mathcal{A}$. When receiving a message $(sid,service\_provider,\mathcal{D})$, $\mathcal{S}$ stores $M_{i,j}\in\mathcal{D}$ for $i=1,2,\cdots,m$ and $j=1,2,\cdots,n$. Upon receiving a message $(sid,user,O,S)$, $\mathcal{S}$ runs the extractor of the proof $\prod_{U}$ to extract $(i,j,r_{1},r_{2},r_{3},r_{4},r_{5},r_{6},r_{7},r_{8},r_{9},r_{10},C_{i,1},C_{i,2},D_{j,1},D_{j,2},\Gamma_{1}^{i},\Gamma_{2}^{j},\Gamma_{1}^{i+l},$ $\Gamma_{2}^{j+k})$. If the extraction fails, $\mathcal{S}$ sends nothing to $\mathcal{U}$; otherwise, it sends $(sid,service\_request)$ to $\mathcal{SP}$. If $b=0$, $\mathcal{S}$ returns $(sid,\perp)$ to $\mathcal{U}$. If $b=1$, $\mathcal{S}$ computes $A_{\mu,\nu}=\mathfrak{g}^{-(r_{1}+r_{2})}(\mathfrak{g}^{x^{\mu}})^{r_{3}}(\mathfrak{g}^{y^{\nu}})^{r_{5}}g_{1}^{i+\mu}h_{1}^{j+\nu}g_{2}^{x^{i+\mu}}h_{2}^{y^{j+\nu}}$ and $L_{\mu,\nu}=H^{-(r_{1}+r_{2})}\cdot (H^{x^{\mu}})^{r_{3}}(H^{y^{\nu}})^{r_{5}}\cdot \frac{B_{i+\mu,j+\nu}}{M_{i+\mu,j+\nu}}$, and generates a simulated proof. Hence,
\begin{equation*}
{\bf Hybrid}_{\mathcal{E},Sim_{4}}={\bf Hybrid}_{\mathcal{E},Sim_{5}}={\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}.
\end{equation*}
Therefore,
\begin{equation*}
\begin{array}{ll}
\left|{\bf Real}_{\mathcal{P},\mathcal{E},\mathcal{A}}-{\bf Ideal}_{\mathcal{F},\mathcal{E},\mathcal{S}}\right| & \leq \left|{\bf Hybrid}_{\mathcal{E},Sim_{0}}-{\bf Hybrid}_{\mathcal{E},Sim_{1}}\right|+\left|{\bf Hybrid}_{\mathcal{E},Sim_{1}}-{\bf Hybrid}_{\mathcal{E},Sim_{2}}\right|\\
&+\left|{\bf Hybrid}_{\mathcal{E},Sim_{2}}-{\bf Hybrid}_{\mathcal{E},Sim_{3}}\right|+\left|{\bf Hybrid}_{\mathcal{E},Sim_{3}}-{\bf Hybrid}_{\mathcal{E},Sim_{4}}\right|\\
&+\left|{\bf Hybrid}_{\mathcal{E},Sim_{4}}-{\bf Hybrid}_{\mathcal{E},Sim_{5}}\right|\leq \frac{1}{p}+2Adv_{\mathcal{A}}^{\mbox{q-SDH}}+Adv_{\mathcal{A}}^{\mbox{q-PDDH}}.
\end{array}
\end{equation*}
\qed
\end{proof}
\noindent{\bf Proof of Claim \ref{clm:1}.} We prove this claim by constructing an algorithm $\mathcal{B}$ that can break the unforgeability under weak chosen-message attack of the Boneh-Boyen signature scheme. According to the proof given in \cite{bb:sig}, $\mathcal{B}$ can then solve the $q$-SDH problem.
Suppose that there exists an environment $\mathcal{E}$ that can distinguish {\bf Game}$_{1}$ and {\bf Game}$_{2}$; then $\mathcal{B}$ can forge a signature as follows. We consider the following two cases: {\em Case-I.} $\mathcal{B}$ outputs a forged signature for $i$ or $i+l$; {\em Case-II.} $\mathcal{B}$ outputs a forged signature for $j$ or $j+k$.
\medskip
\noindent{\em Case-I.} Given $(g,g^{\zeta},g^{\zeta^{2}},\cdots,g^{\zeta^{q}}, h,h^{\zeta})$, $\mathcal{B}$ sets $\mathfrak{h}=h$, $f(\zeta)=(\zeta+1)(\zeta+2)\cdots (\zeta+m)=\sum_{z=0}^{m}a_{z}\zeta^{z}$ and $g_{1}=g^{f(\zeta)}$, and defines $f_{i}(\zeta)=\frac{f(\zeta)}{\zeta+i}=\sum_{w=0}^{m-1}b_{w}\zeta^{w}$ where $a_{z},b_{w}\in\mathbb{Z}_{p}$ and $i=1,2,\cdots,m$. $\mathcal{B}$ selects $\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$ and computes $g_{2}=g_{1}^{\gamma_{1}}$, $h_{1}=g_{1}^{\gamma_{2}}$, $h_{2}=g_{1}^{\gamma_{3}}$ and $\mathfrak{g}=g_{1}^{\gamma_{4}}$. $\mathcal{B}$ selects $\alpha_{2},\beta_{1},\beta_{2},x,y\stackrel{\$}{\leftarrow}\mathbb{Z}_{p}$, computes $H=e(\mathfrak{g},\mathfrak{h})$ and implicitly sets $\alpha_{1}=\zeta$. $\mathcal{B}$ computes
\begin{equation*}
\begin{array}{c}
W_{1}=\prod_{z=0}^{m} (g^{\zeta^{z+1}})^{a_{z}}=(g^{\sum_{z=0}^{m}a_{z}\zeta^{z}})^{\zeta}=(g^{f(\zeta)})^{\zeta}=g_{1}^{\zeta},~ W_{2}=g_{2}^{\alpha_{2}}, ~ W_{1}'=h_{1}^{\beta_{1}}, ~ W_{2}'=h_{2}^{\beta_{2}}, \\ \Gamma_{1}^{i}=\prod_{w=0}^{m-1}(g^{\zeta^{w}})^{b_{w}}=g^{\sum_{w=0}^{m-1}b_{w}\zeta^{w}}= g^{f_{i}(\zeta)}=g^{\frac{f(\zeta)}{\zeta+i}}=g_{1}^{\frac{1}{\alpha_{1}+i}}, ~ \Gamma_{2}^{j}=h_{1}^{\frac{1}{\beta_{1}+j}}, ~A_{i,j}=(g_{1}^{i}h_{1}^{j}g_{2}^{x^{i}}h_{2}^{y^{j}}), \\
B_{i,j}=e(\mathfrak{h},A_{i,j})\cdot M_{i,j}, ~ \left(C_{i,1}=g_{2}^{x^{i}}, ~ C_{i,2}=g_{2}^{\frac{1}{\alpha_{2}+x^{i}}}, ~ C_{i,3}=e(\mathfrak{h},\mathfrak{g})^{x^{i}}\right), \\
\left(D_{j,1}=h_{2}^{y^{j}}, ~ D_{j,2}=h_{2}^{\frac{1}{\beta_{2}+y^{j}}}, ~ D_{j,3}=e(\mathfrak{h},\mathfrak{g})^{y^{j}}\right) \mbox{ for}~ i=1,2,\cdots,m ~\mbox{and}~ j=1,2,\cdots,n.
\end{array}
\end{equation*}
The secret key is $SK=(\zeta,\alpha_{2},\beta_{1},\beta_{2},x,y,\mathfrak{h})$ and the public parameters are
$PP=\big(e,p,\mathbb{G},\mathbb{G}_{\tau},$ $\mathfrak{g},g_{1},g_{2},h_{1},h_{2},H,W_{1},W_{2},W'_{1},W'_{2},\Gamma_{1}^{1},\cdots,\Gamma_{1}^{m},\Gamma_{2}^{1},\cdots,\Gamma_{2}^{n},((A_{1,1},B_{1,1}),\cdots,(A_{m,n},B_{m,n}),(C_{1,1},$ $C_{1,2},C_{1,3}),\cdots,(C_{m,1},C_{m,2},C_{m,3}),(D_{1,1},D_{1,2},D_{1,3}),\cdots,(D_{n,1},D_{n,2},D_{n,3})\big)$ and $\mathcal{D}'=\{((A_{i,j},$\\ $ B_{i,j})_{i=1}^{m})_{j=1}^{n}\}$.
\medskip
$\mathcal{B}$ runs the extractor of the proof $\prod_{U}$ to extract the knowledge $(i,j,r_{1},r_{2},r_{3},r_{4},r_{5},r_{6},r_{7},r_{8},r_{9},r_{10},$ $C_{i,1},C_{i,2},D_{j,1},D_{j,2},\Gamma_{1}^{i},\Gamma_{2}^{j},\Gamma_{1}^{i+l},\Gamma_{2}^{j+k})$. If $\mathcal{E}$ can distinguish {\bf Game}$_{1}$ and {\bf Game}$_{2}$, namely $i\notin\{1,2,\cdots,m\}$ or $i+l\notin\{1,2,\cdots,m\}$, $\mathcal{B}$ outputs a forged signature $I_{1}^{\frac{1}{r_{7}}}$ on $i$ or a forged signature $(I_{3})^{\frac{1}{r_{9}}}$ on $i+l$.
\medskip
\noindent{\em Case-II.} Given $(g,g^{\zeta},g^{\zeta^{2}},\cdots,g^{\zeta^{q}})$, $\mathcal{B}$ selects $g_{1}$, $g_{2}$, $h_{2}$, $\mathfrak{g}$ and $\mathfrak{h}$ from $\mathbb{G}$, and sets $f(\zeta)=(\zeta+1)(\zeta+2)\cdots (\zeta+n)=\sum_{z=0}^{n}c_{z}\zeta^{z}$, $h_{1}=g^{f(\zeta)}$, and $f_{j}(\zeta)=\frac{f(\zeta)}{\zeta+j}=\sum_{w=0}^{n-1}d_{w}\zeta^{w}$ where $c_{z},d_{w}\in\mathbb{Z}_{p}$ and $j=1,2,\cdots,n$. $\mathcal{B}$ selects $\alpha_{1},\alpha_{2},\beta_{2},x,y\stackrel{\$}{\leftarrow}\mathbb{Z}_{p}$, computes $H=e(\mathfrak{g},\mathfrak{h})$ and implicitly sets $\beta_{1}=\zeta$. $\mathcal{B}$ computes
\begin{equation*}
\begin{array}{c}
W_{1}=g_{1}^{\alpha_{1}}, ~ W_{2}=g_{2}^{\alpha_{2}}, ~ W_{1}'=\prod_{z=0}^{n} (g^{\zeta^{z+1}})^{c_{z}}=(g^{\sum_{z=0}^{n}c_{z}\zeta^{z}})^{\zeta}=(g^{f(\zeta)})^{\zeta}=h_{1}^{\zeta}, ~ W_{2}'=h_{2}^{\beta_{2}}, \\
~ \Gamma_{1}^{i}=g_{1}^{\frac{1}{\alpha_{1}+i}},~ \Gamma_{2}^{j}=\prod_{w=0}^{n-1}(g^{\zeta^{w}})^{d_{w}}=g^{\sum_{w=0}^{n-1}d_{w}\zeta^{w}}= g^{f_{j}(\zeta)}=g^{\frac{f(\zeta)}{\zeta+j}}=h_{1}^{\frac{1}{\beta_{1}+j}}, ~A_{i,j}=(g_{1}^{i}h_{1}^{j}g_{2}^{x^{i}}h_{2}^{y^{j}}),\\
B_{i,j}=e(\mathfrak{h},A_{i,j})\cdot M_{i,j}, ~ \left(C_{i,1}=g_{2}^{x^{i}}, ~ C_{i,2}=g_{2}^{\frac{1}{\alpha_{2}+x^{i}}}, ~ C_{i,3}=e(\mathfrak{h},\mathfrak{g})^{x^{i}}\right), \\
\left(D_{j,1}=h_{2}^{y^{j}}, ~ D_{j,2}=h_{2}^{\frac{1}{\beta_{2}+y^{j}}}, ~ D_{j,3}=e(\mathfrak{h},\mathfrak{g})^{y^{j}}\right) \mbox{ for}~ i=1,2,\cdots,m ~\mbox{and}~ j=1,2,\cdots,n.
\end{array}
\end{equation*}
The secret key is $SK=(\alpha_{1},\alpha_{2},\zeta,\beta_{2},x,y,\mathfrak{h})$ and the public parameters are
$PP=\big(e,p,\mathbb{G},\mathbb{G}_{\tau},$ $\mathfrak{g},g_{1},g_{2},h_{1},h_{2},H,W_{1},W_{2},W'_{1},W'_{2},\Gamma_{1}^{1},\cdots,\Gamma_{1}^{m},\Gamma_{2}^{1},\cdots,\Gamma_{2}^{n},((A_{1,1},B_{1,1}),\cdots,(A_{m,n},B_{m,n}),(C_{1,1},$ $C_{1,2},C_{1,3}),\cdots,(C_{m,1},C_{m,2},C_{m,3}),(D_{1,1},D_{1,2},D_{1,3}),\cdots,(D_{n,1},D_{n,2},D_{n,3})\big)$ and $\mathcal{D}'=\{((A_{i,j},$\\ $ B_{i,j})_{i=1}^{m})_{j=1}^{n}\}$.
\medskip
$\mathcal{B}$ runs the extractor of the proof $\prod_{U}$ to extract the knowledge $(i,j,r_{1},r_{2},r_{3},r_{4},r_{5},r_{6},r_{7},r_{8},r_{9},$ $r_{10},C_{i,1},C_{i,2},D_{j,1},D_{j,2},\Gamma_{1}^{i},\Gamma_{2}^{j},\Gamma_{1}^{i+l},\Gamma_{2}^{j+k})$. If $\mathcal{E}$ can distinguish {\bf Game}$_{1}$ and {\bf Game}$_{2}$, namely $j\notin\{1,2,\cdots,n\}$ or $j+k\notin\{1,2,\cdots,n\}$, $\mathcal{B}$ outputs a forged signature $I_{2}^{\frac{1}{r_{8}}}$ on $j$ or a forged signature $(I_{4})^{\frac{1}{r_{10}}}$ on $j+k$.
\medskip
Therefore,
\begin{equation*}
\left|{\bf Hybrid}_{\mathcal{E},Sim_{1}}-{\bf Hybrid}_{\mathcal{E},Sim_{2}}\right|\leq 2 Adv_{\mathcal{A}}^{q-SDH}.
\end{equation*}
\qed
\medskip
\noindent{\bf Proof of Claim \ref{clm:2}.} We prove this claim by constructing an algorithm $\mathcal{B}$ that can break the $q$-PDDH assumption.
Suppose that there exists an environment $\mathcal{E}$ that can distinguish {\bf Game}$_{3}$ and {\bf Game}$_{4}$; then $\mathcal{B}$ can break the $q$-PDDH assumption as follows.
Given $(g,g^{x},g^{x^{2}},\cdots,g^{x^{q}},H,T_{1},T_{2},\cdots,T_{q})$, $\mathcal{B}$ will determine whether $T_{z}=H^{x^{z}}$ or $T_{z}\stackrel{R}{\leftarrow}\mathbb{G}_{\tau}$ for $z=1,2,\cdots,q$. Let $f(x)=(\alpha_{2}+x)(\alpha_{2}+x^{2})\cdots(\alpha_{2}+x^{m})=\sum_{z=0}^{\frac{m(1+m)}{2}}a_{z}x^{z}$, $f_{i}(x)=(\alpha_{2}+x)(\alpha_{2}+x^{2})\cdots(\alpha_{2}+x^{i-1})(\alpha_{2}+x^{i+1})\cdots(\alpha_{2}+x^{m})=\sum_{w=0}^{\frac{m(1+m)}{2}-i}b_{w}x^{w}$, $f'(y)=(\beta_{2}+y)(\beta_{2}+y^{2})\cdots(\beta_{2}+y^{n})=\sum_{\rho=0}^{\frac{n(1+n)}{2}}c_{\rho}y^{\rho}$ and $f'_{j}(y)=(\beta_{2}+y)(\beta_{2}+y^{2})\cdots(\beta_{2}+y^{j-1})(\beta_{2}+y^{j+1})\cdots(\beta_{2}+y^{n})=\sum_{\varrho=0}^{\frac{n(1+n)}{2}-j}d_{\varrho}y^{\varrho}$.
$\mathcal{B}$ selects $\gamma_{1},\gamma_{2}\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$, and sets $\mathfrak{g}=g$, $g_{1}=g^{\gamma_{1}}$, $g_{2}=g^{f(x)}$, $h_{1}=g^{\gamma_{2}}$ and $h_{2}=g^{f'(y)}$. $\mathcal{B}$ selects $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},\gamma\stackrel{R}{\leftarrow}\mathbb{Z}_{p}$, and sets $y=\gamma x$. $\mathcal{B}$ computes
\begin{equation*}
\begin{array}{c}
W_{1}=g_{1}^{\alpha_{1}}, ~W_{2}=g_{2}^{\alpha_{2}}, ~ W'_{1}=h_{1}^{\beta_{1}},~W'_{2}=h_{2}^{\beta_{2}},~
\Gamma_{1}^{i}=g_{1}^{\frac{1}{\alpha_{1}+i}},~\Gamma_{2}^{j}=h_{1}^{\frac{1}{\beta_{1}+j}},\\
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{ll}
A_{i,j}&=g_{1}^{i}h_{1}^{j}\prod_{z=0}^{\frac{m(1+m)}{2}}(g^{x^{z+i}})^{a_{z}}\prod_{\rho=0}^{\frac{n(1+n)}{2}}(g^{x^{\rho+j}})^{\gamma^{\rho+j}c_{\rho}}\\
&=g_{1}^{i}h_{1}^{j}\prod_{z=0}^{\frac{m(1+m)}{2}}(g^{a_{z}x^{z}})^{x^{i}}\prod_{\rho=0}^{\frac{n(1+n)}{2}}(g^{c_{\rho}y^{\rho}})^{y^{j}}=g_{1}^{i}h_{1}^{j}g_{2}^{x^{i}}h_{2}^{y^{j}},\\
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{ll}
B_{i,j}=H^{\gamma_{1}i+\gamma_{2}j}\cdot \prod_{z=0}^{\frac{m(1+m)}{2}}T_{z+i}^{a_{z}}\prod_{\rho=0}^{\frac{n(1+n)}{2}}T_{\rho+j}^{\gamma^{\rho+j}c_{\rho}}\cdot M_{i,j},\\
C_{i,1}=\prod_{z=0}^{\frac{m(1+m)}{2}}(g^{x^{z+i}})^{a_{z}}=\prod_{z=0}^{\frac{m(1+m)}{2}}(g^{a_{z}x^{z}})^{x^{i}}=g_{2}^{x^{i}},\\
C_{i,2}=\prod_{w=0}^{\frac{m(m+1)}{2}-i}(g^{x^{w}})^{b_{w}}=\prod_{w=0}^{\frac{m(m+1)}{2}-i}(g^{b_{w}x^{w}})=g^{f_{i}(x)}
=g^{\frac{f(x)}{\alpha_{2}+x^{i}}}=g_{2}^{\frac{1}{\alpha_{2}+x^{i}}},\\
C_{i,3}=T_{i},\\
D_{j,1}=\prod_{\rho=0}^{\frac{n(1+n)}{2}}(g^{x^{\rho+j}})^{\gamma^{\rho+j}c_{\rho}}=\prod_{\rho=0}^{\frac{n(1+n)}{2}}(g^{(\gamma x)^{\rho+j}})^{c_{\rho}}=\prod_{\rho=0}^{\frac{n(1+n)}{2}}(g^{c_{\rho}y^{\rho}})^{y^{j}}=h_{2}^{y^{j}},\\
D_{j,2}=\prod_{\varrho=0}^{\frac{n(1+n)}{2}-j}(g^{x^{\varrho}})^{d_{\varrho}\gamma^{\varrho}}=\prod_{\varrho=0}^{\frac{n(1+n)}{2}-j}(g^{(\gamma x)^{\varrho}})^{d_{\varrho}}=\prod_{\varrho=0}^{\frac{n(1+n)}{2}-j}(g^{d_{\varrho}y^{\varrho}})=g^{f'_{j}(y)}=h_{2}^{\frac{1}{\beta_{2}+y^{j}}},\\
D_{j,3}=T_{j}^{\gamma^{j}},~
\mbox{for}~ i=1,2,\cdots,m ~\mbox{and}~ j=1,2,\cdots,n.
\end{array}
\end{equation*}
The secret key is $SK=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},x,y)$ and the public parameters are $PP=\big(e,p,\mathbb{G},\mathbb{G}_{\tau},$ $\mathfrak{g},g_{1},g_{2},h_{1},h_{2},H,W_{1},W_{2},W'_{1},W'_{2},\Gamma_{1}^{1},\cdots,\Gamma_{1}^{m},\Gamma_{2}^{1},\cdots,\Gamma_{2}^{n},((A_{1,1},B_{1,1}),\cdots,(A_{m,n},B_{m,n}),(C_{1,1},$ $C_{1,2},C_{1,3}),\cdots,(C_{m,1},C_{m,2},C_{m,3}),(D_{1,1},D_{1,2},D_{1,3}),\cdots,(D_{n,1},D_{n,2},D_{n,3})\big)$ and $\mathcal{D}'=\{((A_{i,j},$\\ $ B_{i,j})_{i=1}^{m})_{j=1}^{n}\}$. $\mathcal{B}$ sends $PP$ to $\mathcal{E}$.
If $(T_{1},T_{2},\cdots,T_{q})=(H^{x},H^{x^{2}},\cdots,H^{x^{q}})$, the parameters are distributed exactly as in {\bf Game}$_{3}$. If $(T_{1},T_{2},\cdots,T_{q})\stackrel{R}{\leftarrow}\mathbb{G}_{\tau}^{q}$, the parameters are distributed exactly as in {\bf Game}$_{4}$. Hence, $\mathcal{B}$ can break the $q$-PDDH assumption if $\mathcal{E}$ can distinguish {\bf Game}$_{3}$ from {\bf Game}$_{4}$. Therefore, we have
\begin{equation*}
\left|{\bf Hybrid}_{\mathcal{E},Sim_{3}}-{\bf Hybrid}_{\mathcal{E},Sim_{4}}\right|\leq Adv_{\mathcal{A}}^{q-PDDH}.
\end{equation*}
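As a small sanity check on the polynomial bookkeeping used above (a SymPy sketch with an arbitrary choice of $m$), one can verify that $f(x)=\prod_{i=1}^{m}(\alpha_{2}+x^{i})$ has degree $\frac{m(1+m)}{2}\leq m^{2}$ and that each $f_{i}(x)=f(x)/(\alpha_{2}+x^{i})$ is again a polynomial, so that $g^{f(x)}$ and $g^{f_{i}(x)}$ are computable from the instance $(g,g^{x},\cdots,g^{x^{q}})$:
\begin{verbatim}
# SymPy check of the polynomial degrees used in the proof of Claim 2:
# deg f = m(m+1)/2 <= m^2, so g^{f(x)} and g^{f_i(x)} are computable
# from g, g^x, ..., g^{x^q} with q = max{m^2, n^2}.
from sympy import symbols, prod, expand, degree

x, alpha2 = symbols('x alpha2')
m = 4
f = expand(prod(alpha2 + x**i for i in range(1, m + 1)))
assert degree(f, x) == m * (m + 1) // 2

for i in range(1, m + 1):
    fi = expand(prod(alpha2 + x**k for k in range(1, m + 1) if k != i))
    assert expand(f - fi * (alpha2 + x**i)) == 0   # f_i = f/(alpha2 + x^i)
    assert degree(fi, x) == m * (m + 1) // 2 - i
\end{verbatim}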
\section{Conclusion and Future Work}\label{sec:conc}
In this paper, we proposed an OLBSQ scheme which does not require a semi-TTP. In particular, in our OLBSQ scheme both the computation cost and the communication cost of generating a query are constant, instead of linear in the size of the queried area. We formalised the definition and security model of our OLBSQ scheme, and presented a concrete construction.
Finally, we reduced the security of the proposed OLBSQ scheme to well-known complexity assumptions.
Our OLBSQ scheme was constructed on groups equipped with a pairing, which is a comparatively time-consuming operation. Constructing OLBSQ schemes without pairings is therefore interesting and desirable; we leave it as an open problem and future work.
\bibliographystyle{plain}
\section{Introduction}\label{Intro}
In this work, we consider the Nielsen-Olesen vortex, a $2+1$-dimensional abelian Higgs model, non-minimally coupled to Einstein gravity with and without cosmological constant. Compared to previous work on the effects of gravity on vortices \cite{Cadoni, Rich, Ariel, Albert}, the new ingredient in the action is the non-minimal coupling term $\xi\,R\,|\phi|^2$ where $R$ is the Ricci scalar, $\xi$ is a dimensionless coupling constant and $\phi$ is a complex scalar field. When gravity is present, it is perfectly fitting to add this term to the action as it preserves the local $U(1)$ gauge invariance of the vortex.
The non-minimal coupling term changes the physical landscape significantly, in a qualitative fashion. This is related to the dual role that it plays: it acts as part of the potential for the scalar field but also contributes to the Einstein-Hilbert term for gravity. As a consequence, the old parameters when $\xi=0$ such as the VEV $v$, cosmological constant $\Lambda$ and $\alpha$ (proportional to the inverse of Newton's constant) become effectively the VEV $v_{eff}$, the asymptotic cosmological constant $\Lambda_{eff}$ and $\alpha_{eff}$ respectively that now depend on the coupling $\xi$. The novel feature that emerges is that in an AdS$_3$ background, where $\Lambda_{eff}$ is non-zero and negative, there exists a critical coupling $\xi_c$ where the VEV $v_{eff}$ is zero for $\xi$ at or above $\xi_c$ but is non-zero when $\xi$ crosses below $\xi_c$. When the VEV crosses from zero to non-zero at $\xi_c$, the local $U(1)$ gauge symmetry is spontaneously broken corresponding to a phase transition to a vortex. The critical coupling $\xi_c$ acts like the analog of the critical temperature $T_c$ in Ginzburg-Landau (GL) mean-field theory where the order parameter is zero above $T_c$ but is non-zero below $T_c$ \cite{Justin,Annett}. There is a second-order phase transition when the temperature crosses below $T_c$ and this is typically accompanied by a symmetry that is spontaneously broken. The analogy between $\xi_c$ and $T_c$ can be made quantitative. Near $\xi_c$, we show that the VEV $v_{eff}$ has a power-law behaviour proportional to $|\xi-\xi_c|^{1/2}$ which is similar to the $|T-T_c|^{1/2}$ power-law behaviour of the order parameter near $T_c$ in GL mean-field theory \cite{Justin,Annett}; both have a critical exponent of $1/2$. The plot of the VEV versus the coupling $\xi$ looks very similar to the plot of the order parameter versus temperature $T$ in GL mean-field theory and in both cases there is a discontinuity in the slope at the critical point where the slope diverges.
The magnitude of the scalar field, represented by the function $f(r)$, starts at zero at the origin $r=0$ and reaches its VEV asymptotically (at a large radius, the computational boundary $R$ which represents formally infinity). An important feature is that the scalar field reaches its VEV slower, over a larger radius, as one approaches the critical coupling $\xi_c$. In other words, the core of the vortex extends out further. The plot of the scalar field's ``extension" \footnote{The extension is defined here as the radius where it reaches $99.9\%$ of its VEV.} as a function of $\xi$ shows a dramatic increase near the critical coupling $\xi_c$. We show analytically that the extension is expected to diverge in the limit $\xi \to \xi_c$. This is the analog to the divergence of the coherence length at the critical temperature $T_c$ in GL mean-field theory \cite{Justin,Annett}. We also plot the extension of the magnetic field which shows a similar trend; starting at its peak value at the origin, it falls off slower (extends further out) as one approaches the critical coupling $\xi_c$.
We derive analytical expressions for the VEV $v_{eff}$ and the asymptotic cosmological constant $\Lambda_{eff}$ as a function of $\xi$ and four other parameters that appear in the Lagrangian. When $\xi=0$, $v_{eff}$ reduces to $v$ and $\Lambda_{eff}$ reduces to $\Lambda$. However, when $\xi \ne 0$, $v_{eff}$ does not depend only on $v$ and $\xi$ and $\Lambda_{eff}$ does not depend only on $\Lambda$ and $\xi$. They depend each on five parameters in total. A non-zero $\xi$ therefore causes $v_{eff}$ and $\Lambda_{eff}$ to have a dependence on extra parameters besides itself compared to $\xi=0$. This wider influence ultimately stems from the aforementioned dual role that the non-minimal coupling term plays.
An important point is that the critical coupling exists only in asymptotic AdS$_3$ spacetime; it does not exist in asymptotically flat spacetime ($\Lambda_{eff}=0$) where the VEV is a fixed non-zero constant independent of $\xi$. However, the non-minimal coupling term still plays a significant role in a flat background. In $2+1$-dimensional General Relativity without cosmological constant, it is well known that outside matter the spacetime is locally flat but has the topology of a cone whose deficit angle is proportional to the mass \cite{Deser}. However, we found that the deficit angle was not determined solely by the mass of the vortex but also depended on the coupling $\xi$. One remarkable consequence of this is that a higher mass did not necessarily yield a higher deficit angle.
The focus of this paper is to study how the vortex changes with the coupling $\xi$. The effect of other parameters such as $\Lambda$, $v$ and the winding number $n$ has already been studied in previous work \cite{Ariel}. We therefore fix all other parameters and obtain numerical results for different values of $\xi$. With the non-minimal coupling term, the equations of motion are more complicated. Nonetheless, via a convenient substitution, one can reduce the number of equations and solve them numerically. In an AdS$_3$ background, we obtained vortex solutions for nine values of the coupling $\xi$. These ranged from $-0.14$ to $0.095$ (near $\xi_c$) and included the case $\xi=0$. For the parameters chosen, the critical coupling turned out to be equal to $\xi_c=2/21\approx 0.0952$. Note that $\xi_c$ is an upper bound as the VEV is zero for any $\xi$ above this value. For each $\xi$, we provide plots of the scalar field $f(r)$, gauge field $a(r)$, metric field $A(r)$ and magnetic field $B_m(r)$. In a table, for each $\xi$, we state the numerical values obtained for the VEV $v_{eff}$, the cosmological constant $\Lambda_{eff}$, the ADM mass, the peak value of the magnetic field and the numerically integrated magnetic flux. The expected theoretical values for $v_{eff}$ and $\Lambda_{eff}$ obtained from our derived analytical expressions are also quoted in the table. The numerical values and the theoretical expectations for the VEV, cosmological constant and magnetic flux, matched almost exactly (to great accuracy, within three or four decimal places). This provides a strong mutual confirmation of both our numerical simulation and our derived analytical expressions. We verified numerically that the VEV near $\xi_c$ indeed obeys the power law $|\xi-\xi_c|^{1/2}$. As previously mentioned, the critical exponent of $1/2$ points to a clear analogy with GL mean-field theory where $\xi_c$ acts as the analog of the critical temperature $T_c$. For asymptotically flat spacetime, we considered five values of $\xi$ ranging from $-0.4$ to $+0.4$. The metric field $A(r)$ starts at unity at the origin $r=0$ but then dips below unity and reaches asymptotically (at sufficiently large radius) a plateau at a positive constant value (labelled $D$) that is different for each $\xi$. This is in stark contrast to AdS$_3$ where the metric field $A(r)$ grows as $r^2$ at large radius. The mass and the deficit angle at each $\xi$ are calculated from the numerical value obtained for $D$.
We now place this paper in context, with a focus on previous studies of gravitating vortices that we referred to earlier \cite{Cadoni, Rich, Ariel, Albert}. It was recognized a long time ago that Einstein gravity in $2+1$ dimensions yields a locally flat spacetime outside localized sources, albeit with the topology of a cone \cite{Deser}. However, things become interesting when one includes a negative cosmological constant as this leads to the famous BTZ black holes \cite{BTZ1,BTZ2}. Later, in a higher-derivative extension of Einstein gravity in $2+1$ dimensions called Bergshoeff-Hohm-Townsend (BHT) massive gravity \cite{BHT1}, black hole solutions in both de Sitter and anti-de Sitter space were found as well as wormhole solutions, kinks, and gravitational solitons \cite{Oliva}. An analytical study of black holes with spherical scalar hair in AdS$_3$ was then later studied \cite{Cadoni}. Closer to our topic of interest, they also constructed black hole vortex solutions with a complex scalar field. These solutions departed from the conventional non-singular vortex in two ways. The scalar field had a singularity at the origin and asymptotically tended towards zero which satisfied the Breitenlohner-Freedman bound \cite{BF} in AdS$_3$ but was not the minimum of the potential. In \cite{Rich}, how vortices affect the tunneling decay of a false vacuum under Einstein gravity was studied and it was found that compared to Coleman-de Luccia bubbles \cite{Coleman} the tunneling exponent was less by a factor of a half. Hence vortices are short-lived and become of cosmological interest \cite{Rich}. The non-singular vortex under Einstein gravity in an AdS$_3$ and Minkowski background was first studied in \cite{Ariel}. These were not black hole solutions as in \cite{Cadoni}. Non-singular vortex solutions were found numerically for different values of the cosmological constant $\Lambda$, VEV $v$ and winding number $n$. Two expressions for the (ADM) mass of the vortex were obtained: one in terms of the metric and one as an integral overly purely matter fields. The latter showed that the mass was roughly proportional to $n^2\,v^2$ (an $n^2$ dependence had also been found in \cite{Cadoni}). The mass of the vortex increased as the magnitude of the cosmological constant increased and led to a slightly smaller core for the vortex. Later, work was then extended to include singular vortex solutions besides non-singular ones \cite{Albert}. Vortices with conical singularities were obtained in flat backgrounds and BTZ black hole solutions were obtained in curved backgrounds, though it was found that the vortex cannot ultimately hold a black hole at its core \cite{Albert}. Our present paper introduces the non-minimal coupling term which is missing in all previous studies of gravitating vortices. As previously pointed out, this term preserves the local $U(1)$ gauge invariance of the vortex and is therefore a perfectly natural candidate to add to the action when gravity is present. We already discussed how this term changes the physics significantly, qualitatively.
Our paper is organized in the following fashion. In section 2, we obtain analytical expressions for the VEV $v_{eff}$ and the cosmological constant $\Lambda_{eff}$ in terms of $\xi$ and other parameters. Details of the derivation are relegated to Appendix A. We also obtain an expression for the critical coupling $\xi_c$ in terms of the parameters of the theory and discuss the analogy with the critical temperature $T_c$ in GL mean-field theory. In section 3 we state the equations of motion in an abbreviated form and in Appendix B we write down the full equations that are used in our numerical simulation. In section 4 we obtain analytical expressions for the asymptotic metric. In section 5 we obtain an expression for the ADM mass and also obtain an expression for the deficit angle in asymptotically flat space. In section 6 we state the expression for the magnetic field and derive a formula for the magnetic flux which is a topological invariant independent of $\xi$. In section 7 we present all our numerical results in plots and tables for different values of the coupling $\xi$ in both an AdS$_3$ and Minkowski background. Before presenting the numerical results, we obtain useful analytical expressions for the behaviour of the scalar, gauge and metric field asymptotically and near the origin. We end with our conclusion in
section 8 where among other things, we discuss an interesting and challenging problem to solve in the future.
\section{Lagrangian for the vortex non-minimally coupled to Einstein gravity}
The vortex non-minimally coupled to Einstein gravity with cosmological constant has the following Lagrangian density in $2+1$ dimensions:
\begin{equation}
\mathcal{L} = \sqrt{-g}\Big(\alpha\,(R-2 \Lambda) -\dfrac{1}{4} F_{\mu\nu}F^{\mu\nu} -\dfrac{1}{2}(D_{\mu} \phi)^{\dagger}(D^{\mu} \phi) +\xi\, R\,|\phi|^2-\dfrac{\lambda}{4}(|\phi|^2-v^2)^2\Big)\,.
\eeq{LDensity}
Here $\phi$ is a complex scalar field, $F_{\mu\nu}$ is the usual electromagnetic field tensor, $R$ is the Ricci scalar, $\Lambda$ is a cosmological constant, the constant $\alpha$ is equal to $\frac{1}{16 \pi G}$ where $G$ is Newton's constant and $\xi$ is a dimensionless coupling constant. The interaction with the gauge field $A_{\mu}$ is incorporated via the usual covariant derivative $D_{\mu} \phi=\partial_{\mu} \phi +i\, e A_{\mu} \phi$ where $e$ is a coupling constant. The constants $\lambda$ and $v$ are parameters that enter into the potential for the scalar field. The constants $\alpha$, $\lambda$ and $v$ are positive whereas $\xi$ can be positive, negative or zero. In $2+1$-dimensional General Relativity, positive $\Lambda$ do not yield black holes (i.e. the famous BTZ black holes require negative $\Lambda$). Similarly, positive $\Lambda$ do not support vortices \cite{Ariel} and the non-minimal coupling term does not change that fact. We will see that $\Lambda$ must be either negative or zero which will ultimately yield asymptotic AdS$_3$ or Minkowski spacetime respectively.
The Lagrangian density has a local $U(1)$ symmetry; it is invariant under the following gauge transformations
\begin{align}
&\phi(x) \to e^{i\,e\,\eta(x)}\,\phi(x)\\
&A_{\mu}(x) \to A_{\mu}(x)-\partial_{\mu}\eta(x)
\label{Trans}
\end{align}
where $\eta(x)$ is an arbitrary function. The non-minimal coupling term $\xi\, R\,|\phi|^2$ is clearly invariant under the above gauge transformation and is therefore a perfectly natural ingredient to add to the gravitating vortex.
\subsection{The VEV and cosmological constant as a function of $\xi$}
When $\xi=0$, the VEV and cosmological constant are simply $v$ and $\Lambda$ respectively. When $\xi \ne 0$, the VEV and cosmological constant change and become functions of $\xi$ and other parameters. These will be labeled by $v_{eff}$ and $\Lambda_{eff}$ to denote that they are the actual (effective) VEV and cosmological constant respectively for general coupling $\xi$. In this section we determine expressions for them. This requires one to know only the asymptotic behaviour of the fields and this can be determined directly from the Lagrangian without working out the full equations of motion.
Asymptotically, one reaches the vacuum when the asymptotic spacetime is either AdS$_3$ or Minkowski; these are maximally symmetric spacetimes that can be viewed as the ground states of General Relativity \cite{Carroll}. In this asymptotic region, the kinetic term for the scalar field and gauge field tend to zero: $-\frac{1}{2}(D_{\mu} \phi)^{\dagger}(D^{\mu} \phi)\to 0$ and $-\frac{1}{4} F_{\mu\nu}F^{\mu\nu}\to 0$. This occurs when asymptotically the magnitude of the scalar field approaches the minimum of the potential (the non-zero VEV) and the gauge field approaches a non-zero constant equal to the winding number $n$. In $2+1$ dimensions, the asymptotic value of the Ricci scalar is given by\footnote{Note that the vacuum Einstein field equations with cosmological constant $\Lambda_{eff}$ yield $R=4 \Lambda_{eff}$ in $3+1$ dimensions but $R=6 \Lambda_{eff}$ in $2+1$ dimensions.} $6\, \Lambda_{eff}\,$ where $\Lambda_{eff}$ is either negative (AdS$_3$ background) or zero (Minkowski background). The potential for the scalar field can be readily picked out from the Lagrangian and asymptotically is given by
\begin{align}
V(|\phi|)=\dfrac{\lambda}{4}(|\phi|^2-v^2)^2-\xi\, R\,|\phi|^2=\dfrac{\lambda}{4}(|\phi|^2-v^2)^2 - 6\,\xi\,\Lambda_{eff} |\phi|^2\,.
\label{Potential}
\end{align}
The VEV occurs at the minimum of this potential where the derivative with respect to $|\phi|$ is zero. This yields two possibilities: $|\phi|=0$ and the solution
\begin{align}
|\phi|^2=v_{eff}^2= v^2 + \dfrac{12 \,\xi \,\Lambda_{eff}}{\lambda}\,.
\label{Vev1}
\end{align}
When $v_{eff}^2$ is positive, $v_{eff}$ is the minimum of the potential and corresponds to the VEV (and $|\phi|=0$ is a local maximum). In this case, since the VEV is non-zero, the gauge symmetry is spontaneously broken. When $v_{eff}^2$ is negative (and hence $v_{eff}$ is imaginary), this signals that $|\phi|=0$ is now the minimum of the potential (the VEV). A zero VEV corresponds to the unbroken phase.
With the non-minimal coupling term $\xi \, R \,\phi^2$ term present in the action, the cosmological constant asymptotically is no longer $\Lambda$ but $\Lambda_{eff}$; this is governed by the equation
\begin{align}
\alpha (R-2\,\Lambda) +\xi \,R\, v_{eff}^2- \dfrac{\lambda}{4}(v_{eff}^2-v^2)^2=(\alpha + v_{eff}^2\,\xi)(R- 2\,\Lambda_{eff})\,.
\label{Cosmo1}
\end{align}
If we substitute $R=6 \,\Lambda_{eff}$ above, we can solve the two equations \reff{Vev1} and \reff{Cosmo1} for $v_{eff}$ and $\Lambda_{eff}$ as a function of $\xi$ and the other parameters of the theory. This is worked out in Appendix A and the equations are \reff{Veff4} and \reff{Leff4}:
\begin{align}
v_{eff}=\Bigg[2 v^2 + \frac{\alpha}{\xi} - \frac{\sqrt{(\alpha +v^2\,\xi)^2 - 24 \,\alpha\,\Lambda\,\xi^2/\lambda}}{\xi}\,\Bigg]^{1/2}
\label{Veff}
\end{align}
and
\begin{align}
\Lambda_{eff}=\dfrac{\lambda}{12\,\xi^2}\Big(\alpha + v^2\,\xi -\sqrt{
(\alpha +v^2\,\xi)^2-24\,\alpha\,\Lambda\,\xi^2/\lambda}\,\Big)\,.
\label{Leff}
\end{align}
Equation \reff{Cosmo1} also implies that the coefficient in front of $R$ asymptotically is not $\alpha$ but
\begin{align}
\alpha_{eff}=\alpha + v_{eff}^2\,\xi\,.
\label{alpha_eff}
\end{align}
Newton's constant asymptotically is obtained from $\alpha_{eff}$ so that
the condition $\alpha_{eff}>0$ must be satisfied. We expect that $\lim_{\,\xi \to 0} v_{eff}=v$, $\lim_{\,\xi \to 0} \Lambda_{eff}=\Lambda$ and $\lim_{\,\xi \to 0} \alpha_{eff}=\alpha$; this is in fact the case as one can readily check. When $\Lambda$ in \reff{Leff} is negative, this yields a negative $\Lambda_{eff}$ so that the background is AdS$_3$. In that case, $v_{eff}$ and $\Lambda_{eff}$ change with $\xi$. However, when $\Lambda=0$ and $\alpha + v^2\,\xi\,>\,0$ one obtains $\Lambda_{eff}=0$ and $v_{eff}=v$ regardless of the value of $\xi$ or the other parameters. Therefore, in a Minkowski background ($\Lambda_{eff}=0$) the VEV remains constant at $v$ as $\xi$ changes. Note that $\Lambda=0$ with $\alpha +v^2 \,\xi<0$ is not a physically viable option as it leads to a negative
$\alpha_{eff}$ i.e. one obtains $v_{eff}^2= 3 \,v^2 + \frac{2 \,\alpha}{\xi}$ so that
$\alpha_{eff}=\alpha + v_{eff}^2 \,\xi$ is equal to $3\,(\alpha +v^2\,\xi)$ which is negative.
When $\xi=0$, $v_{eff}$ is simply $v$ but when $\xi\ne 0$, $v_{eff}$ does not depend only on $v$, $\xi$ and $\lambda$ but also on the gravitational parameters $\alpha$ and $\Lambda$. Similarly, when $\xi\ne 0$, $\Lambda_{eff}$ does not depend only on
$\Lambda$, $\xi$ and $\alpha$ but also on the parameters $v$ and $\lambda$ appearing in the scalar potential. We see that the non-minimal coupling term has a wide reach because of the dual role it plays in affecting simultaneously the potential of the scalar field and the Einstein-Hilbert gravitational term.
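As an illustration, the following minimal Python sketch evaluates \reff{Veff} and \reff{Leff} in the broken phase; the parameter values $\alpha=v=\lambda=1$ and $\Lambda=-1$ anticipate those used in section 7, and in the unbroken phase (where the minimum sits at $|\phi|=0$) the function below simply returns a zero VEV:
\begin{verbatim}
# Minimal sketch: v_eff and Lambda_eff as functions of xi (broken phase),
# for alpha = v = lambda = 1 and Lambda = -1.  Pure Python, no packages.
import math

alpha, v, lam, Lam = 1.0, 1.0, 1.0, -1.0

def root(xi):
    return math.sqrt((alpha + v**2*xi)**2 - 24*alpha*Lam*xi**2/lam)

def v_eff(xi):
    if xi == 0.0:
        return v                            # the xi -> 0 limit
    s = 2*v**2 + (alpha - root(xi))/xi
    return math.sqrt(s) if s > 0 else 0.0   # zero VEV: unbroken phase

def Lambda_eff(xi):                         # valid where v_eff(xi) > 0
    if xi == 0.0:
        return Lam
    return lam/(12*xi**2) * (alpha + v**2*xi - root(xi))

for xi in (-0.14, -0.05, 0.05, 0.09):
    print(xi, v_eff(xi), Lambda_eff(xi))    # AdS_3: Lambda_eff < 0 throughout
\end{verbatim}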
\subsection{Critical coupling $\xi_c$}
The VEV, given by \reff{Vev1}, is equal to zero at a critical coupling $\xi_c$. This occurs when
\begin{align}
2 v^2 + \frac{\alpha}{\xi} - \frac{\sqrt{(\alpha +v^2\,\xi)^2 - 24 \,\alpha\,\Lambda\,\xi^2/\lambda}}{\xi}=0
\label{effective}
\end{align}
which has the solution
\begin{align}
\xi_c=-\dfrac{2\,v^2 \,\alpha\,\lambda}{3\,(v^4 \lambda + 8\,\alpha\,\Lambda)}
\label{Critical}
\end{align}
if the condition $\alpha + 2 \,v^2\,\xi>0$ is satisfied. This condition implies that $v^4 \lambda + 8\,\alpha\,\Lambda$ in the denominator of \reff{Critical} is negative. The critical coupling is therefore positive and exists only when $\Lambda$ is negative and obeys the inequality $\Lambda<- \frac{v^4 \lambda}{8\,\alpha}$. A negative $\Lambda$ implies $\Lambda_{eff}<0$ so that the spacetime is asymptotically AdS$_3$. In particular, the case $\Lambda=0$ (which yields $\Lambda_{eff}=0$) has no critical coupling and has a fixed VEV at $v$. There is therefore no critical coupling in asymptotic Minkowski spacetime. The critical coupling exists only in AdS$_3$ when $\Lambda<- \frac{v^4 \lambda}{8\,\alpha}$. What happens when $\Lambda$ is negative but falls in the range $- \frac{v^4 \lambda}{8\,\alpha}<\Lambda <0$? The spacetime is asymptotically AdS$_3$ since $\Lambda_{eff}<0$ and the VEV changes with $\xi$ but it always remains above zero; there is no transition from the unbroken phase (zero VEV) to the broken phase (non-zero VEV).
When the critical coupling exists, the VEV is zero for $\xi\ge \xi_c$, but is non-zero and grows as $\xi$ decreases below $\xi_c$. A phase transition from a symmetric (unbroken) phase to a spontaneously broken phase occurs when $\xi$ crosses below $\xi_c$. In figure 1 below, we plot $v_{eff}$ as a function of $\xi$ (for parameters $\alpha=1$, $v=1$, $\lambda=1$ and $\Lambda=-1$). Since $\Lambda<-\frac{v^4 \lambda}{8\,\alpha}=-1/8$, the condition for a critical coupling is satisfied and its value from \reff{Critical} is $\xi_c= 2/21 = 0.0952$. We see that the VEV is zero above $\xi_c=0.0952$ but becomes non-zero and increases as $\xi$ decreases below $\xi_c$. The VEV is continuous but one can readily see that the derivative (slope of graph) is discontinuous at $\xi_c$. We will see that in fact the slope diverges at that point.
\begin{figure}[t]
\centering
\includegraphics[scale=1.0]{VEV.pdf}
\caption{The VEV $v_{eff}$ as a function of $\xi$ plotted for parameters $\alpha=1$, $v=1$, $\lambda=1$ and $\Lambda=-1$. The VEV is zero at or above $\xi_c=0.0952$ and transitions to a non-zero value below $\xi_c$ where it increases as $\xi$ decreases. When $\xi$ crosses below $\xi_c$, there is a transition from a symmetric phase to a spontaneously broken phase. Note that, as expected, the VEV is equal to $v=1$ at $\xi=0$.}
\label{VEV}
\end{figure}
Figure 1 should bring to mind the graph (see fig. 2)\footnote{Image courtesy of C. Lygouras, ``Critical behavior of the order parameter and specific heat in the second-order phase transition from Landau theory", May 4, 2020. Wikimedia Commons contributors, `File:LandauTheoryTransitions.svg`, Wikimedia Commons, the free media repository.} of the order parameter as a function of temperature in the Ginzburg-Landau (GL) mean-field theory of second-order phase transitions where the order parameter is zero above a critical temperature $T_c$ but increases above zero below $T_c$. Our critical coupling $\xi_c$ is the analog to the critical temperature $T_c$. We can make this connection more quantitative. In GL mean-field theory, at temperatures $T$ below and near $T_c$, the order parameter is proportional to $(T_c-T)^{1/2}\,$\cite{Justin,Annett} a power law behaviour with critical exponent of $1/2$. The VEV for $\xi$ below and near $\xi_c$ has a similar behaviour. Using \reff{Critical}, we can express $\Lambda$ in terms of $\xi_c$ and substitute this into \reff{Veff} to obtain
\begin{align}
v_{eff}=\Bigg[\,2 v^2+\frac{\alpha }{\xi }-\frac{\sqrt{\alpha^2+2 \,v^2\, \alpha \,\xi + v^4 \,\xi^2-\frac{\xi^2 \,\left(-2 \,v^2 \,\alpha \,\lambda -3 \,v^4 \,\lambda \,\xi_c\right)}{\lambda \,\xi_c}}}{\xi}\,\Bigg]^{1/2}\,.
\end{align}
Expanding $v_{eff}$ above about the critical coupling $\xi_c$ yields
\begin{align}
v_{eff}= k\,(\xi_c-\xi)^{1/2} + \mathcal{O}\big((\xi_c-\xi)^{3/2}\big)
\label{expansion}
\end{align}
where the proportionality constant is $k=\frac{v}{\sqrt{\xi_c + (2\,v^2\,\xi_c^2)/\alpha}}$. We therefore see that the power law behaviour of the VEV near $\xi_c$ and of the order parameter near $T_c$ in GL theory are similar and have the same critical exponent of $1/2$. From \reff{expansion}, one can readily see that the slope in figure 1 diverges at $\xi_c$ (just like the slope in figure 2 diverges at $T_c$). We will see that the VEV for values of $\xi$ near $\xi_c$ in our numerical simulation follows closely the power law behaviour given by \reff{expansion}.
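Continuing the sketch of section 2.1 (and reusing its definitions), the critical coupling \reff{Critical} and the square-root law \reff{expansion} can be checked numerically:
\begin{verbatim}
# Check of xi_c (Critical) and of the critical exponent 1/2 in (expansion),
# reusing alpha, v, lam, Lam and v_eff(xi) from the previous sketch.
xi_c = -2*v**2*alpha*lam / (3*(v**4*lam + 8*alpha*Lam))
print(xi_c)                                  # 2/21 = 0.095238...

k = v / math.sqrt(xi_c + 2*v**2*xi_c**2/alpha)
for eps in (1e-2, 1e-3, 1e-4, 1e-5):
    print(eps, v_eff(xi_c - eps), k*math.sqrt(eps))   # columns converge
\end{verbatim}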
\begin{figure}[t]
\centering
\includegraphics[scale=1.0]{Order_Parameter.pdf}
\caption{The order parameter $\eta_0(T)$ as a function of temperature in the GL mean-field theory. The order parameter is zero at or above the critical temperature $T_c$ but is non-zero below $T_c$. There is a discontinuity in the slope at $T_c$ and there is a second-order phase transition when the temperature crosses below $T_c$.}
\label{Order}
\end{figure}
We will now determine the equations of motion, solve them numerically and obtain plots of various quantities for different values of the coupling $\xi$. The equations \reff{Veff} and \reff{Leff} for the VEV and cosmological constant that we derived in this section will be used to predict the asymptotic values of our plots and we will see that they match exactly. This provides a strong confirmation of both our derived theoretical results of this section and of our numerical vortex solutions in later sections.
\section{Rotationally symmetric ansatz and the equations of motion}
For the vortex, we consider rotationally symmetric static solutions. The ansatz for the gauge and scalar field is
\begin{align}
&A_j(\mathbf{x})=\epsilon_{jk}\hat{x}^k \dfrac{a(r)}{er}\\
&\phi(\mathbf{x}) = f(r)\,e^{i n \,\theta}
\label{ansatz}
\end{align}
where $a(r)$ and $f(r)$ are functions of $r$ that represent the gauge and scalar field
respectively. The non-negative integer $n$ is called the winding number. A $2+1$ dimensional metric which is rotationally symmetric can be expressed as
\begin{equation}
ds^2=- B(r) \,dt^2 +\dfrac{1}{A(r)} \,dr^2 + r^2\, d\theta^2
\eeq{metric} where $A(r)$ and $B(r)$ represent two metric functions of $r$.
With the ansatz \reff{ansatz} and \reff{metric}, the Langrangian density \reff{LDensity} reduces to
\begin{equation}
\mathcal{L}=\sqrt{B/A}\,r\,\Big(\alpha \,(R-2\Lambda )-\frac{A (a')^2}{2\, e^2 \,r^2}-\frac{(f')^2 A}{2}- \frac{(n-a)^2\,f^2}{2 \,r^2} +\xi R\,f^2-\frac{\lambda}{4}\,(f^2 -v^2)^2 \Big)\,.
\eeq{LDensity2}
Since $f$ approaches a non-zero constant asymptotically, one requires that $a\to n$ asymptotically (which yields $(n-a)^2\,f^2 \to 0$) so that one avoids a logarithmic divergence in the energy of the vortex \cite{Weinberg,Ariel}. The Ricci scalar is a function of $A$ and $B$ and their derivatives:
\begin{equation}
R= \frac{ (B')^2 A}{ 2 B^2}-\frac{ A'}{r }-\frac{A'B'}{2 B}- \frac{B' A}{r B} -\frac{ B'' A}{B}\,.
\eeq{Ricci}
Note that when the complex scalar field is inserted in the Lagrangian density, the winding number $n$ appears but not the coordinate $\theta$ since the phase cancels out.
The Lagrangian density therefore depends on $r$ only and solutions are rotationally symmetric. The Euler-Lagrange equations of motion for $A(r)$, $B(r)$, $f(r)$ and $a(r)$ are respectively
\begin{align}
&4\, e^2 \,r \,A \,B' (\alpha +\xi \,f^2+ 2 \,r\, \xi\, f \,f')+B\Big(e^2 r^2 (v^4 \lambda +8 \alpha\, \Lambda)+2 e^2 (n^2-r^2 v^2 \lambda-2 \,n \,a+a^2) f^2
\nonumber\\&\quad\quad+e^2 r^2 \lambda f^4+16 \,e^2 r \,\xi \,A \,f\, f'-2 A (a'^{\,2}+e^2 r^2 f'^{\,2})\Big)=0 \label{EOMA}\\\nonumber\\
&e^2 r^2 \lambda f^4+e^2 r \,(r v^4 \lambda+8 r \alpha \Lambda+4 \alpha A')+2 e^2 f^2 (n^2-r^2 v^2 \lambda-2\, n \,a+a^2+2 r \xi A')\nonumber\\&\quad\quad+2 A \,(a'^{\,2}+e^2 r^2 (1+8 \xi) f'^{\,2})+8 e^2 r \xi f \,\big(r A' f'+2 A \,(f'+r f'')\big)=0\label{EOMB}\\\nonumber\\
&2 r^2 \xi A f B'^{\,2}+r B \,\big(-2 r \xi f A' B'+A\,(r B' f'-4 \xi f (B'+r B''))\big)+B^2 \Big(-2 r^2 \lambda f^3\nonumber\\&\quad\quad-2 f \,(n^2-r^2 v^2 \lambda-2 n a+a^2+2 r \xi A')+r \,(r A' f'+2 A (f'+r f''))\Big)=0 \label{EOMf}\\\nonumber\\
&r A a' B'+B \Big(2 \,e^2 r\, (n-a) f^2-2 A a'+r a' A'+2 \,r A a''\Big)=0\,.\label{EOMa}
\end{align}
We can reduce the above four equations of motion to three by extracting $W(r)=B'/B$
from equation \reff{EOMA} and substituting it into equations \reff{EOMf} and \reff{EOMa}.
The function $W(r)$ contains $A$, $f$ and $a$ and their derivatives. The main point is that the three remaining equations no longer have any dependence on $B(r)$. However, the equations become longer, especially the one for the function $f(r)$. We write them out in full in Appendix B; equations \reff{EOMB4}, \reff{EOMf4} and \reff{EOMa4} are the equations we solve numerically. To avoid writing out cumbersome lengthy equations here, the three remaining equations are written below using $W(r)$ and $W'(r)$. Note that we need $W'$ because of the appearance of $B''$ in \reff{EOMf}. In particular, $B''/B = W' + W^2$. The three remaining equations are
\begin{align}
&e^2 r^2 \lambda f^4+e^2 r \,(r v^4 \lambda+8 r \alpha \Lambda+4 \alpha A')+2 e^2 f^2 (n^2-r^2 v^2 \lambda-2\, n \,a+a^2+2 r \xi A')\nonumber\\&\quad\quad+2 A \,(a'^{\,2}+e^2 r^2 (1+8 \xi) f'^{\,2})+8 e^2 r \xi f \,\big(r A' f'+2 A \,(f'+r f'')\big)=0\label{EOMB2}\\\nonumber\\
&2 \,r^2 \,\xi A \,f\, W^2 -2 r^2 \xi \,f \,A'\, W +A\,r\big(r \,W \,f'-4 \xi f (W+r \,(W'+W^2))\big)-2 \,r^2 \lambda f^3\nonumber\\&\quad\quad-2 \,f \,(n^2-r^2 v^2 \lambda-2\,n \,a+a^2+2 \,r \xi \,A')+r \,\big(r A' f'+2 A \,(f'+r f'')\big)=0\label{EOMf2}\\\nonumber\\
&r \,A\, a' \,W + 2 \,e^2\, r\, (n-a) f^2 - 2 \,A \,a'+ r \,a'\, A' + 2 \,r \,A\, a''=0\,.
\label{EOMa2}\end{align}
When $W(r)$ given by \reff{W} is substituted into the above equations we obtain the full equations \reff{EOMB4}, \reff{EOMf4} and \reff{EOMa4}.
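As an independent cross-check on these lengthy expressions, \reff{EOMa} can be re-derived symbolically. The following SymPy sketch keeps only the $a$-dependent part of \reff{LDensity2} (all remaining terms are independent of $a(r)$ and drop out of its Euler-Lagrange equation); multiplying the resulting expression by $2e^{2}r^{2}B\sqrt{A/B}$ reproduces \reff{EOMa} term by term. The shorthand names in the script are ours.
\begin{verbatim}
# SymPy sketch: Euler-Lagrange equation for the gauge field a(r) from the
# a-dependent part of the reduced Lagrangian density.
import sympy as sp
from sympy.calculus.euler import euler_equations

r, e, n = sp.symbols('r e n', positive=True)
a, f, A, B = (sp.Function(s)(r) for s in 'afAB')

L = sp.sqrt(B/A)*r*(-A*sp.Derivative(a, r)**2/(2*e**2*r**2)
                    - (n - a)**2*f**2/(2*r**2))
eq = euler_equations(L, [a], [r])[0].lhs

# Hand-expanded form of the same equation; multiplied by 2 e^2 r^2 B (A/B)^(1/2)
# it becomes  r A a' B' + B (2 e^2 r (n-a) f^2 - 2 A a' + r a' A' + 2 r A a'') = 0.
ap, app = sp.diff(a, r), sp.diff(a, r, 2)
Ap, Bp = sp.diff(A, r), sp.diff(B, r)
form = sp.sqrt(B/A)*((n - a)*f**2/r
        + (A*ap*Bp/(2*B*r) + Ap*ap/(2*r) + A*app/r - A*ap/r**2)/e**2)
print(sp.simplify(eq - form))    # expected output: 0
\end{verbatim}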
\section{Asymptotic analytical solutions}
One can solve analytically for the metric in vacuum by setting $f=v_{eff}$ and
$a=n$ identically in Eq. \reff{EOMB2}. This yields
\begin{align}
A'(r)=-r \,\dfrac{(8\,\alpha \,\Lambda + \lambda\,(v_{eff}^2-v^2)^2)}{4(\alpha + \xi\, v_{eff}^2)}
\end{align}
with solution
\begin{equation}
A_0(r) = -\dfrac{(8\,\alpha \,\Lambda + \lambda(v_{eff}^2-v^2)^2)}{8(\alpha + \xi\, v_{eff}^2)}\,r^2 + C =- \Lambda_{eff} \,r^2 + C
\eeq{A0}
where the subscript `0' denotes vacuum and $C$ is an integration constant. In the last step we substituted $v_{eff}$ given by \reff{Veff} and this yields $\Lambda_{eff}$ given by \reff{Leff} as the coefficient of $-r^2$ (see also Eq. \reff{Cosmo3}). Of course, this is exactly what we expect the metric of pure AdS$_3$ to be for cosmological constant $\Lambda_{eff}$. The initial conditions at $r=0$ are determined by the constant $C$. We set $C=1$ since in $2+1$ dimensions this choice avoids a conical singularity at the origin \cite{BTZ1,Deser}. Moreover, $C=1$ also works for the case of vortices embedded in asymptotically Minkowski spacetime ($\Lambda_{eff}=0$).
We can now solve for the metric function $B_0(r)$ in vacuum by substituting $A_0(r)$ with $C=1$ into Eq.\reff{EOMA}. This yields $B_0(r) = k_0\,(-\Lambda_{eff} \,r^2 + 1)$ where $k_0$ is an integration constant (positive). We can absorb this constant into a redefinition of time in the line element \reff{metric} so that
\begin{equation}
B_0(r)= -\Lambda_{eff}\, r^2 + C = A_0(r)\,.
\eeq{B0}
In the presence of the vortex, we have that $f \to v_{eff}$ and $a \to n$ asymptotically. Note that in contrast to the vacuum case, these are now only the asymptotic values. The vortex departs significantly from that in the core region near the origin. In numerical simulations, $f$ and $a$ start at zero at the origin and reach their asymptotic value (within less than a percent) at a finite radius $R$, the computational boundary which represents formally infinity. The asymptotic form of the metric function $A$ in the presence of matter (the vortex) is obtained again via Eq. \reff{EOMB2} and yields at $r=R$
\begin{equation}
A(R)= -\Lambda_{eff} \,R^2 + D\,.
\eeq{AR}
The constant $D$ differs from the constant $C$ in \reff{A0}; as one goes through the core of the vortex, one naturally emerges into an asymptotic region that differs from the purely vacuum one and this is reflected in $D$ being a different constant from $C$. We will see that the (ADM) mass of the vortex will be expressed in terms of $A_0(R)$ and $A(R)$.
Asymptotically, using \reff{EOMA}, we obtain $B(R)= k \, A(R)$. Here $k$ is an integration constant (positive); it can no longer be absorbed into a redefinition of time since that has been carried out once already with the constant $k_0$. At large radius $R$, in the presence of the vortex, we obtain that $B(R)$ is proportional to $A(R)$ but not equal to it.
\section{Expression for the (ADM) mass of the vortex}
An important property of a vortex is its finite mass. In curved spacetime, the mass of a localized source is defined as its ADM mass \cite{Poisson}. AdS$_3$ is a maximally symmetric spacetime with isometry group $SO(2,2)$ and has a timelike Killing vector so that a conserved energy (the ADM mass) naturally applies to matter embedded in it. The ADM mass in $2+1$ dimensions can be calculated via the following expression \cite{Poisson}:
\begin{equation}
M= -2 \,\alpha_{eff}\, \lim_{C_t \to R} \oint_{C_t} (k-k_0) \,\sqrt{\sigma} \,N(R) \,d \theta \,.
\eeq{M}
Note that $\alpha_{eff}$, given by \reff{alpha_eff}, must be used here instead of $\alpha$. Here $C_t$ is the circle at spatial infinity where infinity corresponds to the computational boundary $r=R$. The lapse $N(R)$ is given by $\big(B_0(R)\big)^{1/2}=\big(A_0(R)\big)^{1/2}$. The metric on $C_t$ is $\sigma_{AB}$ and $\sqrt{\sigma} =R$ where $\sigma$ is its determinant. The extrinsic curvature of $C_t$ embedded in the two-dimensional spatial surface obtained by setting $t$ to be constant in \reff{metric} is given by $k$ whereas its embedding in the two-dimensional spatial surface of AdS$_3$ is given by $k_0$. A straightforward calculation yields
\begin{equation}
k= \dfrac{\big(A(R)\big)^{1/2}}{R} \quad;\quad k_0= \dfrac{\big(A_0(R)\big)^{1/2}}{R}
\eeq{extrinsic}
Substituting all the above quantities into \reff{M} yields our final expression for the ADM mass:
\begin{equation}
M= 4 \,\pi \,\alpha_{eff} \,\Big(A_0(R) -[A_0(R)\,A(R)]^{1/2}\Big)\,.
\eeq{ADM}
We will use the above expression to calculate the ADM mass in an AdS$_3$ background. Note that if $A(R)=A_0(R)$ one obtains $M=0$ which implies that our definition has set empty AdS$_3$ space to have zero mass. This is the desired and expected result since maximally symmetric spacetimes can be viewed as the ground states of General Relativity \cite{Carroll} and as such are typically set to zero energy.
The analytical expression \reff{A0} for the vacuum metric $A_0(R)$ is $-\Lambda_{eff}\, R^2 + 1$ and this can be readily calculated for any given $R$. From \reff{AR} we have that $A(R)=-\Lambda_{eff}\, R^2 + D$ where $D$ is a constant. This corresponds to the case with matter (the vortex) and it is obtained by solving the equations of motion numerically since we do not know a priori the value of the constant $D$. The mass $M$ of the vortex is then obtained via \reff{ADM}. Though $A_0(R)$ and $A(R)$ both change with $R$, at a large enough $R$, the mass $M$ hardly changes as $R$ increases and the matter fields $f(r)$ and $a(r)$ plateau to their respective asymptotic values of $v_{eff}$ and $n$ respectively. The value of $A(r)$ at $r=0$ is an initial condition. In vacuum, $A(r)$ must reduce to $A_0(r)$ so that their initial conditions at the origin must match. This implies that $A(0)=A_0(0)=C=1$.
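In practice, \reff{ADM} is evaluated from the plateau constant $D$ extracted from the numerics. A minimal sketch (the numbers in the example call are placeholders, not simulation output) reads:
\begin{verbatim}
# Evaluation of the ADM mass (ADM) from the plateau constant D in
# A(R) = -Lambda_eff R^2 + D.  All numbers below are placeholders.
import math

def adm_mass(alpha_eff, Lambda_eff, D, R):
    A0 = -Lambda_eff*R**2 + 1.0     # vacuum metric (A0) with C = 1
    A  = -Lambda_eff*R**2 + D       # metric with the vortex (AR)
    return 4.0*math.pi*alpha_eff*(A0 - math.sqrt(A0*A))

print(adm_mass(alpha_eff=1.0, Lambda_eff=-1.0, D=0.8, R=10.0))
\end{verbatim}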
\subsection{ADM mass in asymptotically flat space and angular deficit}
In asymptotically flat spacetime where $\Lambda=\Lambda_{eff}=0$, the ADM mass formula \reff{ADM} remains valid but simplifies greatly. We have that $A_0(R)=C=1$ and $A(R)=D$ which yields
\begin{align}
M_{flat}= 4 \,\pi \,\alpha_{eff} \,\big(1 -D^{1/2}\big)
\label{M2}
\end{align}
where $\alpha_{eff}=\alpha +\xi\,v^2$ since $v_{eff}=v$. Note that $A_0(r)=B_0(r)$ stay constant at unity for all $r$ (this represents the vacuum Minkowski spacetime). In contrast, $A(r)$ is unity at the origin $r=0$ but dips below unity as $r$ increases until it plateaus to a positive value $D$ at large radius $R$. The value of $D$ is obtained numerically. Recall that localized matter in $2+1$ dimensions yields an asymptotically Minkowski spacetime with an angular deficit \cite{Deser}. Asymptotically, $A(r)=D$ and the spatial part of the metric \reff{metric} becomes $\frac{dr^2}{D} + r^2 d\theta^2$. If we define $r_0=r/D^{1/2}$ and $\theta_0= D^{1/2}\, \theta$ we obtain a manifestly flat metric $dr_0^2 + r_0^2 \,d\theta_0^2$ but with $\theta_0$ ranging now from $0$ to $2 \pi D^{1/2}$ instead of $2\,\pi$. Since $0<D<1$ there is an angular deficit of
\begin{align}
\delta= 2 \pi\, (1-D^{1/2})\,.
\label{delta}
\end{align}
Using \reff{M2} with $\alpha_{eff}=1/(16 \pi G_{eff})$ we obtain that $\delta= 8 \pi\, G_{eff}\,M_{flat}$ which is the formula for the angular deficit produced by a mass $M_{flat}$ in $2+1$ Minkowski spacetime \cite{Deser} if $G_{eff}$ replaces $G$ in \cite{Deser}. Asymptotically, the spacetime is locally flat but topologically a cone. There is however no conical singularity at the origin in our case in contrast to the point mass in \cite{Deser}. The spacetime is smooth at the origin since the vortex by construction is an extended non-singular object. In our case, the conical spacetime is only the asymptotic spacetime and does not extend into the core of the vortex.
In the original work of \cite{Deser}, the only way to change the angular deficit is to change the mass since $G$ remains constant. In our case, $G_{eff}$ depends on the coupling $\xi$. Therefore as $\xi$ changes, one can encounter a scenario (and one does as our numerical results will show) where a higher mass yields a smaller deficit angle than a smaller mass. This is another instance of how the non-minimal coupling term plays a novel role.
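To illustrate this decoupling of the mass from the deficit angle numerically, consider the following sketch; the plateau values $D$ are hypothetical and chosen only to exhibit the effect, not taken from our simulation:
\begin{verbatim}
# Illustration of how xi decouples M_flat from the deficit angle: delta in
# (delta) depends only on D, while M_flat in (M2) also scales with
# alpha_eff = alpha + xi v^2.  The D values are hypothetical.
import math

alpha, v = 1.0, 1.0
for xi, D in [(-0.4, 0.90), (0.4, 0.92)]:
    alpha_eff = alpha + xi*v**2
    M_flat = 4*math.pi*alpha_eff*(1 - math.sqrt(D))
    delta  = 2*math.pi*(1 - math.sqrt(D))
    print(xi, M_flat, delta)   # the larger mass here has the smaller deficit
\end{verbatim}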
\section{Magnetic flux as a topological invariant independent of coupling $\xi$}
The vortex contains a magnetic field which we label $B_m$. We will see when we plot our numerical results that it has its maximum at the origin and then decreases towards zero outside a core region. The maximum value of the magnetic field at the origin as well as its profile depends on the coupling $\xi$. After we present our numerical results, we will look at the radial extension of the scalar field as a function of $\xi$, a measure of how far the field extends before it reaches close to its plateau value (the VEV). We will see that the radial extension of the scalar field increases significantly as we approach the critical coupling $\xi_c$. This is analogous to the coherence length in GL mean-field theory which diverges near the critical temperature. We have discussed here the radial extension of the scalar field because we will see that the radial extension of the magnetic field as a function of $\xi$ undergoes the same fate and also increases as we approach the critical coupling $\xi_c$. The magnetic field profile therefore provides us with an additional window into how far the core region of the vortex extends.
An important property of the magnetic field is that even though its profile changes with the coupling $\xi$, the magnetic flux $\Phi$ obtained by integrating the magnetic field over the entire two-dimensional area stays constant (i.e. it is independent of the value of $\xi$). We show here that the magnetic flux depends only on the winding number $n$ and hence is a topological invariant. The quantity $-\frac{A \,(a')^2}{2 \,e^2 \,r^2}$ appearing in the Lagrangian density \reff{LDensity2} stems from the term $-\frac{1}{4} F_{\mu\nu}F^{\mu\nu}$ and hence is identified with $-B_m^2/2$ where $B_m$ is the magnetic field (no electric field is present hence the absence of an $\tfrac{E^2}{2}$ term). It follows that the magnetic field is given by $B_m= \frac{\sqrt{A}\, a'}{e \,r}$ which reduces to the well-known result $a'/(e\,r)$ for the magnetic field in fixed Minkowski spacetime \cite{Weinberg} where $A(r)=1$ identically.
The magnetic flux $\Phi$, the integral of the magnetic field over the invariant area element, yields
\begin{align}
\Phi=\int d^2x \,\sqrt{\gamma} \,B_m =\int dr \,d\theta \,\Big(\frac{r}{\sqrt{A}}\Big)\,\big(\frac{\sqrt{A}\, a'}{e \,r}\big)= \frac{2\,\pi}{e} \int_{0}^{R} a' \,dr= \frac{2\,\pi}{e}\,(a(R)-a(0))=\frac{2\,\pi\,n}{e}
\label{Flux}
\end{align}
where $\gamma=r^2/A$ is the determinant of the spatial two-metric obtained from \reff{metric} by setting $t$ to be constant. We used the boundary conditions on the function $a(r)$: $a(R)=n$ and $a(0)=0$. Note that the expression for the magnetic flux $\Phi=\frac{2\,\pi \,n}{e}$ is the same in curved space as it is in fixed Minkowski spacetime \cite{Weinberg}. In the next section where we present our numerical results, we will integrate numerically over the area the different magnetic field profiles for different coupling $\xi$ and show that the result is the same independent of the profile and $\xi$. Besides demonstrating numerically that the magnetic flux is a topological invariant in curved space, it also provides another check on our numerical simulation. The magnetic flux is ``quantized" as it comes in integer steps of $2\pi/e$. This does not stem from any quantization procedure imposed on the fields but from the topology of the vortex which is characterized by its winding number $n$.
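This profile-independence is straightforward to verify outside of our solver as well: the invariant area element cancels the factor $\sqrt{A}$ in $B_m$, so the flux reduces to a pure boundary term in $a(r)$. The following minimal Python sketch illustrates this with a hypothetical smooth trial profile standing in for actual solver output (the profile and core radii are illustrative assumptions, not our numerical solutions):
\begin{verbatim}
import numpy as np

# The flux (2*pi/e) * int a'(r) dr depends only on the boundary values
# a(0)=0 and a(R)->n, not on the shape of a(r).
e, n = 3.0, 1                 # gauge coupling and winding number of the text
r = np.linspace(1e-6, 20.0, 200001)

for r0 in (0.5, 1.0, 3.0):    # hypothetical core radii
    a = n * (1.0 - np.exp(-(r / r0) ** 2))   # trial profile: a(0)=0, a(R)->n
    flux = (2.0 * np.pi / e) * np.trapz(np.gradient(a, r), r)
    print(f"r0 = {r0}: flux = {flux:.4f} (expected {2*np.pi*n/e:.4f})")
\end{verbatim}
Each trial profile returns the same flux $2\pi/3\approx 2.0944$, mirroring the behaviour of our numerically integrated profiles in table \ref{Table}.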
\section{Numerical solutions of vortex in curved space}
The three equations of motion \reff{EOMB4}, \reff{EOMf4} and \reff{EOMa4} are solved numerically to obtain non-singular profiles for the scalar field $f(r)$, the gauge field $a(r)$ and the metric function $A(r)$. The initial conditions at the origin $r=0$ are
\begin{align}
f(0)=0\quad; \quad a(0)=0\quad;\quad A(0)=1\,.
\label{boundary}
\end{align}
These initial conditions ensure that our vortex solutions are non-singular. Let $R$ be the computational boundary representing formally infinity. We expect that
\begin{align}
f(R)=v_{eff}\quad;\quad a(R)=n \quad ;\quad A(R)= D - \Lambda_{eff} \,R^2
\label{Asym}
\end{align}
where $D$ is a constant that is determined only after running the numerical simulation and depends on the matter distribution of the vortex. The quantity $v_{eff}$ is the value at which $f(r)$ plateaus numerically and we will see that it matches very closely our theoretical prediction given by \reff{Veff}. The winding number of the vortex is given by the positive integer $n$ and we will see that $a(r)$ plateaus at that value numerically. The coefficient $\Lambda_{eff}$ in front of $R^2$ in $A(R)$ can be extracted from our numerical simulation by evaluating $-A''(r)/2$ at $r=R$. We will see that it matches very closely our theoretical prediction for the asymptotic value of the cosmological constant given by \reff{Leff}. We obtain the profiles by adjusting $f'(r)$ and $a'(r)$ near the origin until the curves for $f(r)$ and $a(r)$ plateau towards their respective constant values beyond a certain radius (in our numerical simulations they reach their expected constant values to within less than a tenth of a percent at the computational boundary $R$).
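To make the shooting procedure concrete, the following Python sketch implements it for the simplest tractable limit: fixed Minkowski spacetime with $\xi=0$ ($A\equiv 1$), where the system reduces to the standard Nielsen-Olesen equations consistent with the near-origin behaviour $f\sim c\,r^n$, $a\sim b\,r^2$ derived below. The reduced equations, tolerances and initial guess are illustrative assumptions and not the full system \reff{EOMB4}-\reff{EOMa4}; in practice the guess and the boundary $R$ require tuning:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Schematic shooting solver for the flat, minimally coupled (xi=0, A=1) limit:
#   f'' + f'/r - (n-a)^2 f/r^2 - lam*f*(f^2 - v^2) = 0
#   a'' - a'/r + e^2 f^2 (n - a)                   = 0
lam, e, n, v = 1.0, 3.0, 1, 1.0
r0, R = 1e-4, 6.0             # start slightly off r=0; R stands in for infinity

def rhs(r, y):
    f, fp, a, ap = y
    fpp = -fp/r + (n - a)**2 * f/r**2 + lam*f*(f**2 - v**2)
    app = ap/r - e**2 * f**2 * (n - a)
    return [fp, fpp, ap, app]

def mismatch(params):
    c, b = params             # free constants in f ~ c r^n and a ~ b r^2
    y0 = [c*r0**n, n*c*r0**(n - 1), b*r0**2, 2*b*r0]
    sol = solve_ivp(rhs, (r0, R), y0, rtol=1e-10, atol=1e-12)
    return [sol.y[0, -1] - v, sol.y[2, -1] - n]   # enforce f(R)=v, a(R)=n

c, b = fsolve(mismatch, x0=[1.0, 1.0])
print(f"shooting parameters: c = {c:.6f}, b = {b:.6f}")
\end{verbatim}
The curved-space simulations follow the same logic, with the metric function $A(r)$ integrated alongside $f$ and $a$.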
\subsection{Analytical behaviour of the fields near the origin and asymptotically}
The equations of motion are a long complicated set of coupled non-linear differential equations which require a numerical solution. However, before presenting the numerical results, it is instructive to extract some useful analytical information from the equations. In particular, we will determine the analytical behaviour of the fields near the origin and in the asymptotic region. We will see that the asymptotic profile of a vortex is not supported by a positive cosmological constant $\Lambda_{eff}$; it must be either negative (AdS$_3$ background) or zero (Minkowski background). This is similar to the fact that in $2+1$ dimensional General Relativity (GR), a black hole exists for negative cosmological constant (the BTZ black hole \cite{BTZ1,BTZ2}) but not for positive cosmological constant. There is no black hole in a Minkowski background either but in contrast, one can have a vortex in a Minkowski background.
\subsubsection{Behaviour of $A(r)$, $f(r)$ and $a(r)$ near the origin}
The initial conditions on the fields at $r=0$ are $f(0)=0$, $a(0)=0$ and $A(0)=1$. We would like to know the behaviour of these fields in the vicinity of $r=0$. If we linearize \reff{EOMB4} about $A=1$ we obtain $A(r)=1- r^2 (\frac{v^4\,\lambda}{8 \alpha}+ \Lambda)$. This quadratic behaviour implies that its first derivative $A'(r)$ at $r=0$ is always zero regardless of the parameters so that the metric function always starts out flat at the origin. This is what is observed numerically. Linearizing \reff{EOMa4} about $a=0$ yields $a(r)=b\,r^2$ with $b$ a positive constant. We see that $a(r)$ also starts out flat at the origin since $a'(0)=0$. Again, this agrees with our numerical simulation. Linearizing \reff{EOMf4} about $f=0$ yields $f(r)= c\, r^n$ where $n$ is the winding number and $c$ a positive constant. Near the origin, $f'(r)=c\,n r^{n-1}$ so that $f'(0)=c$ for $n=1$ and $f'(0)=0$ for $n>1$. This implies that $f(r)$ starts off flat at the origin when $n>1$ but with a positive slope when $n=1$. Note that the fields near $r=0$ have no dependence on the coupling $\xi$.
\subsubsection{Behaviour of $A(r)$, $f(r)$ and $a(r)$ asymptotically}
Asymptotically, the metric function $A(r)$ is given by $D- \Lambda_{eff} \,r^2$ where
$D$ is a constant. The matter fields $a$ and $f$ plateau asymptotically to their constant values of $n$ and $v_{eff}$ respectively. We would like to know their behaviour as they approach these constant values. At large $r$ we can write $a(r)=n-\epsilon(r)$ and $f(r)=v_{eff}-\beta(r)$ where $\epsilon$ and $\beta$ are small positive perturbations which must vanish asymptotically. Substituting these expressions into equations \reff{EOMa4} and \reff{EOMf4} and keeping only terms linear in $\epsilon$ and $\beta$ yields the differential equations
\begin{align}
&e^2 \,v_{eff}^2\, \epsilon(r) + r \,\Lambda_{eff} \,\epsilon'(r)+ r^2 \,\Lambda_{eff} \,\epsilon''(r)=0 \\\nonumber\\
& 2\, v_{eff}^2 \left(\alpha_{eff}\,\lambda -24 \,\Lambda_{eff} \,\xi^2\right) \beta(r) +r\, \Lambda_{eff} \left(\alpha_{eff}+16\,v_{eff}^2 \,\xi^2\right) \left(3 \,\beta'(r) +r \beta''(r)\right)=0\,.
\label{Linear}
\end{align}
The above equations are valid only for the case $\Lambda_{eff}\ne 0$ (the case $\Lambda_{eff}=0$ will be treated separately). Both equations have power-law fall-off solutions
\begin{align}
&\epsilon(r)=b\, r^{-\,\dfrac{e\,v_{eff}}{(-\Lambda_{eff})^{1/2}}}\label{eps}\\
&\beta(r)= c\, r^{-1-\Big[\dfrac{-\alpha_{eff}\,\Lambda_{eff} + 2 \,\alpha_{eff}\,
v_{eff}^2 \,\lambda - 64 \,v_{eff}^2 \,\Lambda_{eff} \,\xi^2}{-\alpha_{eff}\,\Lambda_{eff} - 16 \,v_{eff}^2 \,\Lambda_{eff} \,\xi^2}\Big]^{1/2}}
\label{beta}
\end{align}
where $b$ and $c$ are positive constants. Since \reff{eps} is valid only if $\Lambda_{eff}$ is negative, the above profiles apply only to an AdS$_3$ background. An important point is that the profile of a vortex which requires the gauge field $a$ to plateau at $n$ and $f$ to plateau at $v_{eff}$ is not supported by a positive $\Lambda_{eff}$. It is supported by a negative $\Lambda_{eff}$ and as we will now see, also by a zero $\Lambda_{eff}$. The vortex therefore exists only in an AdS$_3$ or Minkowski background.
When $\Lambda_{eff}=0$, asymptotically we have $A(r)=D$ where $D$ here is positive (since a non-singular profile requires that $A(r)>0$). We also have $v_{eff}=v$. The differential equations governing the perturbations $\epsilon$ and $\beta$ are then
\begin{align}
&e^2 \,r \,v^2 \,\epsilon(r)+ D \left(\epsilon'(r)-r \epsilon''(r)\right)=0\\\nonumber\\
& 2\, r \,v^2 \lambda \left(\alpha +v^2 \xi \right) \beta(r)-D \left(\alpha +v^2 \xi (1+8 \xi )\right) \left(\beta'(r)+r \beta''(r)\right)=0
\end{align}
with solutions
\begin{align}
&\epsilon(r)=b \,e^{\frac{-e\,v\,r}{\sqrt{D}}}\,\sqrt{r}\label{eps2}\\
&\beta(r)= c\,e^{-v\,r\,\big(\frac{2 \,\lambda\,\alpha_{eff}}{D\,(\alpha_{eff} + 8\,v^2\,\xi^2)}\big)^{1/2}}\dfrac{1}{\sqrt{r}}
\label{beta2}
\end{align}
where $b$ and $c$ are positive constants. The above result is for a Minkowski background ($\Lambda_{eff}=0$) but where Einstein gravity and a non-minimal coupling term act on the vortex. The exponential fall-off expressions \reff{eps2} and \reff{beta2} are similar to those found in fixed Minkowski spacetime \cite{Weinberg} and we recover them if we set $\xi=0$ and $D=1$.
\subsection{Plot of vortex profiles and magnetic field in AdS$_3$ for different $\xi$}
The parameters that appear in the Lagrangian density \reff{LDensity2} for the vortex are $\lambda$, $e$, $n$, $v$, $\alpha$, $\Lambda$ and $\xi$. The goal here is to determine how the vortex changes with the coupling $\xi$ and to observe what happens as we approach the critical coupling $\xi_c$. How the vortex changes with the other parameters such as $\Lambda$, $n$ and $v$ has been studied elsewhere \cite{Ariel}. We therefore run numerical simulations for different values of $\xi$ with the other parameters held fixed; we set $\lambda=1$, $e=3$, $n=1$, $v=1$, $\alpha=1$, and $\Lambda=-1$. We work in natural units where $\hbar=c=1$. Though our parameters and quantities such as the radius, mass and magnetic field are quoted as numbers, they should be thought of as having a unit attached to them (except for the winding number $n$ which is a pure number)\footnote{All parameters and quantities can be expressed in terms of two independent units: a unit $x$ attached to the VEV $v_{eff}$ and a unit $y$ attached to the cosmological constant $\Lambda_{eff}$. The units $x$ and $y$ have dimensions of [L]$^{-1/2}$ and [L]$^{-2}$ respectively. It follows that the units attached to the parameter $v$ and $\Lambda$ are $x$ and $y$ respectively. From \reff{A0}, the quantity $-\Lambda_{eff}\,r^2$ is dimensionless. Therefore the unit attached to the radius $r$ is $y^{-1/2}$. This can be expressed in terms of the AdS length $\ell$. We have that $\Lambda_{eff} \times y =-1/\ell^2$ so that $y^{-1/2}=(-\Lambda_{eff})^{1/2}\,\ell$ where $\Lambda_{eff}$ here is a pure negative number. The equation for $\epsilon(r)$ in \reff{eps} implies that $e\,v_{eff}/(-\Lambda_{eff})^{1/2}$ must be dimensionless. The unit attached to the coupling $e$ is therefore $x^{-1}\,y^{1/2}\,$ and has dimensions [L]$^{-1/2}$. $\lambda/e^2$ is also dimensionless so that $\lambda$ is expressed in units of $x^{-2}\,y$ which has dimensions of [L]$^{-1}$. The mass is proportional to $\alpha_{eff}= \alpha + v_{eff}^2 \,\xi$ and therefore has units of $x^2$ i.e. the unit of the VEV squared which is the same unit as in fixed Minkowski spacetime \cite{Weinberg}. This has dimensions of [L]$^{-1}$ which is the correct dimension for the mass in natural units. The magnetic field, given by $B_m= \frac{\sqrt{A}\, a'}{e \,r}$, has units of $x\,y^{1/2}$ which has dimensions of [L]$^{-3/2}$. Note that $A(r)$ and $a(r)$ are dimensionless.}. As we have seen, a negative $\Lambda$ automatically ensures that the asymptotic cosmological constant $\Lambda_{eff}$ will be negative. Our solutions in this section will therefore correspond to an AdS$_3$ background. Note that though $v$ and $\Lambda$ are held fixed, the VEV $v_{eff}$ and the cosmological constant $\Lambda_{eff}$ will change with $\xi$.
Recall that a critical coupling $\xi_c$ exists only if $v^4 \lambda +8 \alpha \Lambda$ is negative. With the above values for the parameters this latter quantity is negative (equal to $-7$) and therefore a critical coupling exists. It is given by \reff{Critical} and, substituting the values of our parameters, equals $\xi_c=2/21\approx 0.0952$ (the same value that appears in our plot of the VEV vs. $\xi$ in fig. \ref{VEV}). This implies that for $\xi\ge 2/21$ the VEV is zero and there is no vortex. We therefore obtained vortices for $\xi<2/21$.
We considered nine values of the coupling $\xi$ that ranged from $-0.14$ to $0.095$ (close to the upper bound $\xi_c$) which includes the case $\xi=0$. We present below figures \ref{Graph1}-\ref{Graph9}, one for each value of the coupling, in order of increasing $\xi$. Each figure contains plots of the scalar field $f(r)$, the gauge field $a(r)$, the metric function $A(r)$ and the magnetic field $B_m(r)$. We also made separate plots of $f$ and $A$ that focus on the core region near the origin where the fields undergo significant change. There are therefore six plots associated with each value of $\xi$. We also present some numerical results in table format. In table \ref{Table} we present the following data for each value of $\xi$: the theoretically expected and numerically obtained value of the VEV $v_{eff}$ and cosmological constant $\Lambda_{eff}$, the (ADM) mass of the vortex, the peak value of the magnetic field at the origin and the numerically integrated magnetic flux.
In table \ref{Table}, the formula \reff{Veff} for the VEV $v_{eff}$ matched almost exactly (to within three and four decimal places) the value where $f$ plateaued numerically. Similarly, our formula \reff{Leff} for the cosmological constant $\Lambda_{eff}$ matched almost exactly (again to within three and four decimal places) the numerical value of the asymptotic cosmological constant. This provides strong confirmation of both our analytical formulas and numerical simulation. In figures \ref{Graph1}-\ref{Graph9}, the magnetic field $B_m$ always peaks at the origin and then falls off with radius towards zero. As $\xi$ increases and approaches closer to the critical coupling, the value of the peak magnetic field decreases (see plot in fig. \ref{BPlot}) but the magnetic field extends further out since it falls off to zero more slowly. As a consequence, the magnetic flux obtained numerically by integrating over the magnetic field profile remained constant as $\xi$ changed (see table \ref{Table}) and matched exactly (to within three or four decimal places) the expected theoretical value of $\Phi=2\,\pi\,n/e=2 \pi/3=2.0944$ (where we substituted $n=1$ and $e=3$). That this numerically integrated magnetic flux remained constant across different magnetic field profiles provides another strong check on our numerical simulation.
In table \ref{Table}, the VEV monotonically decreases from a value of $1.6475$ at $\xi=-0.14$ to a value of $0.04584$ at $\xi=0.095$. We plot the nine data points in fig. \ref{VEVPlot} and they trace out a curve similar to the plot in fig. \ref{VEV} of the VEV vs. $\xi$ obtained theoretically and hence also similar to the plot in fig. \ref{Order} of the order parameter vs. temperature in GL mean-field theory. We now verify numerically that the data points in our sample that are close to the critical coupling $\xi_c=2/21$ follow the power law with critical exponent $1/2$ that we previously derived for $\xi$ near $\xi_c$ i.e. $v_{eff}=k\,(\xi_c-\xi)^{1/2}$ where $k=\tfrac{v}{\sqrt{\xi_c + (2\,v^2\,\xi_c^2)/\alpha}}$ (see \reff{expansion}). For the values of our parameters we obtain $k=2.96985$. For $\xi=0.095$, which is the closest data point to $\xi_c$ in our sample, we obtain $k\,(\xi_c-\xi)^{1/2}=0.04583$ which matches almost exactly our numerical result of $0.04584$ for the VEV quoted in table \ref{Table}. Another data point we can consider is $\xi=0.09$ as it is not that far off from the critical coupling. This yields $k\,(\xi_c-\xi)^{1/2}=0.2149$ which still matches quite closely our numerical result of $0.2161$. This constitutes a quantitative confirmation that the non-minimally coupled vortex in AdS$_3$ undergoes critical phenomena with exponent $1/2$ at the critical coupling $\xi_c$.
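This comparison is simple enough to reproduce directly from the quoted numbers; the short Python check below evaluates $k$ and the power law at the two data points discussed above (the numerical VEVs are copied from table \ref{Table}):
\begin{verbatim}
import numpy as np

# Mean-field power law v_eff = k*(xi_c - xi)^(1/2) near the critical coupling,
# for the parameter values of this section (alpha = v = 1).
alpha, v = 1.0, 1.0
xi_c = 2.0 / 21.0
k = v / np.sqrt(xi_c + 2.0 * v**2 * xi_c**2 / alpha)
print(f"k = {k:.5f}")   # 2.96985, as quoted in the text

for xi, vev_numerical in [(0.095, 0.04584), (0.09, 0.2161)]:
    print(f"xi = {xi}: power law {k*np.sqrt(xi_c - xi):.5f}"
          f" vs numerical {vev_numerical}")
\end{verbatim}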
We mentioned above that the magnetic field extends further out as $\xi$ increases towards the critical coupling $\xi_c$. The same thing happens with the scalar field $f$. For cases $\xi=-0.14$, $\xi=-0.12$ and $\xi=-0.10$, $f$ can be seen to roughly plateau already ``near the origin" (see plots of $f$ ``near the origin" in figures \ref{Graph1}-\ref{Graph3}). At higher $\xi$, $f$ has not plateaued yet near the origin (see plots of $f$ ``near the origin" in figures \ref{Graph4}-\ref{Graph9}). This implies that it must extend further out to reach its VEV. In particular, as $\xi$ approached near the critical coupling $\xi_c$, the regular plot of $f$ vs. $r$ has to be extended to drastically larger radii to accommodate the fact that $f$ plateaus so much more slowly. We will discuss the extension of the scalar field (and of the magnetic field) in more detail in the next subsection.
If the local matter density in the core region of the vortex is high enough it causes the metric function $A(r)$ near the origin to have a noticeable dip: the metric starts at $A=1$ at the origin $r=0$, dips below unity in the core region, reaches a minimum that is above zero before increasing to reach its asymptotic $r^2$ dependence. The dip can be seen in the plot of $A$ ``near the origin" and the asymptotic $r^2$ dependence is more evident in the regular plot of $A$ vs. $r$. The plots of the metric function $A(r)$ near the origin in figures \ref{Graph1} to \ref{Graph9} reveal that the dip monotonically decreases as $\xi$ increases and is most pronounced at $\xi=-0.14$. This implies that the local matter density in the core region is greatest for $\xi=-0.14$. Though $A$ in this case dips the closest to zero (i.e. its minimum is smallest), it does not cross zero. If $A$ crossed zero, this would signal black hole formation and a singularity. However, our non-singular initial conditions prevent one from constructing vortices beyond a local matter density where gravity becomes so strong that the scalar field is no longer able to reach its asymptotic plateau value. This places a lower bound on $\xi$; for the values of our parameters, we were not able to construct vortices roughly below $\xi=-0.14$. This lower bound was reached well before the lower bound set by the condition $\alpha_{eff}=\alpha +v_{eff}^2 \xi>0$. With $v_{eff}$ given by \reff{Veff} and using the values of our parameters, one can readily check that this would have occurred at the much lower value of $\xi=-0.26$.
In table \ref{Table} one can see that the ADM mass is highest at $\xi=-0.12$ and decreases afterwards as $\xi$ increases towards $\xi=0.095$. There is one case that does not follow this trend in masses. The ADM mass at $\xi=-0.14$ is actually lower than the mass at $\xi=-0.12$ (the data points of mass vs. $\xi$ are plotted in fig. \ref{MPlot} and the curve illustrates nicely the trend in masses). The case $\xi=-0.14$ has the highest VEV which would seem to imply that it should have the highest mass (vortices with higher VEV will usually have more mass in fixed Minkowski spacetime \cite{Weinberg}). Why then is the mass lower for $\xi=-0.14$ than for $\xi=-0.12$? This is due to the fact that the ADM mass receives contributions not only from matter but also from the negative binding energy of the gravitational field (see section 3.9 on ``Thin-shell collapse" in \cite{Poisson} for a clear illustration of this). The metric field $A(r)$ near the origin for $\xi=-0.14$ (fig. \ref{Graph1}) has a more pronounced dip than for $\xi=-0.12$ (fig. \ref{Graph2}). So the negative gravitational binding energy is significant enough in $\xi=-0.14$ to yield a lower ADM mass than in $\xi=-0.12$.
\clearpage
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.75]{Graph_xi=-0.14.pdf}
\caption{Case $\xi=-0.14$. This is the case with the lowest value of $\xi$ and the highest VEV (value where $f$ plateaus). The gauge field plateaus at $n=1$ which is the same value for all subsequent cases. The dip in the metric function $A(r)$ near the origin is the most pronounced of our sample. The magnetic field $B_m$ peaks at the origin and has the highest peak in our sample. The magnetic field also falls off the fastest (extends out the least). The plot of $f$ near the origin shows that $f$ plateaus quickly (does not extend much before reaching its VEV).}
\label{Graph1}
\end{figure}
\clearpage
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.9]{Graph_xi=-0.12.pdf}
\caption{Case $\xi=-0.12$. This is the case with the next lowest value of $\xi$. $f$ plateaus at a lower VEV than $\xi=-0.14$ but it has the highest (ADM) mass in our sample. The dip in the metric function $A(r)$ near the origin is not as pronounced as in $\xi=-0.14$. The magnetic field $B_m$ at the origin is lower than for $\xi=-0.14$ but it falls off slower so that the magnetic flux turns out to be the same. Again, the plot of $f$ near the origin shows that $f$ plateaus quickly and hence has a small extension. }
\label{Graph2}
\end{figure}
\pagebreak
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.9]{Graph_xi=-0.1.pdf}
\caption{Case $\xi=-0.10$. $f$ plateaus at a lower VEV than the previous cases. The dip in the metric function $A(r)$ near the origin is still pronounced but not as much as in $\xi=-0.12$ or $\xi=-0.14$. The magnetic field $B_m$ at the origin is lower than for $\xi=-0.12$ but it falls off slower which yields the same magnetic flux as previous cases. The plot of $f$ near the origin shows that $f$ still plateaus relatively quickly though less fast than in previous cases. }
\label{Graph3}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.9]{Graph_xi=-0.05.pdf}
\caption{Case $\xi=-0.05$. $f$ plateaus at a lower VEV than the previous cases. The dip in the metric function $A(r)$ near the origin is visible but not as pronounced as in previous cases. The magnetic field $B_m$ has a profile that yields the same magnetic flux as previous cases. The plot of $f$ near the origin now shows that $f$ is no longer plateauing quickly (it needs to extend more before reaching its VEV).}
\label{Graph4}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.9]{Graph_xi=0.0.pdf}
\caption{Case $\xi=0$. The non-minimal coupling term is turned off. The VEV is therefore equal to $v=1$. The dip in the metric function $A(r)$ near the origin is still visible. The magnetic field $B_m$ extends further out but yields the same magnetic flux as previous cases. The plot of $f$ near the origin shows that $f$ is still rising and requires more radial distance before it plateaus to its VEV.}
\label{Graph5}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.8]{Graph_xi=0.05.pdf}
\caption{Case $\xi=0.05$. The regular plot of $f$ vs. $r$ now has a computational boundary of $R=40$ whereas in all previous cases it was $R=12$. This is because $f$ reaches its VEV now much slower and one needs to extend the computational boundary so that $f$ can reach its VEV to the same level of accuracy. The plot of $f$ near the origin shows that $f$ has a large slope and is also far from its plateau value so that it requires significantly more radial distance before it plateaus to its VEV. The numerical values of the metric function $A$ show that there is an extremely tiny dip right near the origin but this is not visible on the plot. The magnetic field $B_m$, just like $f$, extends further out than all previous cases.}
\label{Graph6}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.9]{Graph_xi=0.07.pdf}
\caption{Case $\xi=0.07$. The regular plot of $f$ vs. $r$ has a computational boundary of $R=90$. The plot of $f$ near the origin shows that $f$ is now quite far from its plateau value. It requires now a larger radial distance before it plateaus to its VEV. There is no longer any dip in the metric function $A$: the numerical values show $A(r)$ increases monotonically. The magnetic field $B_m$, just like $f$, extends out much further than previously.}
\label{Graph7}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.75]{Graph_xi=0.09.pdf}
\caption{Case $\xi=0.09$. This is the second largest $\xi$ in our sample and we are now getting quite close to the critical coupling $\xi_c\approx 0.0952$ where the derivative of the VEV with respect to $\xi$ diverges. The change from $\xi=0.07$ to $\xi=0.09$ is therefore large. The regular plot of $f$ vs. $r$ has a significantly larger computational boundary of $R=800$. The plot of $f$ near the origin shows that $f$ is very far from its plateau value. It requires now a very large radial distance before it plateaus to its VEV. Again, there is no longer any dip in the metric function $A$ and it increases monotonically. The magnetic field $B_m$, just like $f$, extends out again much further than previously.}
\label{Graph8}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.75]{Graph_xi=0.095.pdf}
\caption{Case $\xi=0.095$. This is the largest $\xi$ in our sample and is very close to the critical coupling $\xi_c\approx 0.0952$. Since we are near the critical point, the change from $\xi=0.09$ to $\xi=0.095$ is very large. The plot of $f$ near the origin shows that $f$ is again very far from its plateau value; this is why the regular plot of $f$ vs. $r$ requires an extremely large computational boundary of $R=5 \times 10^6$. This is the radius required for $f$ to reach its VEV to the same level of accuracy as the other cases. The ``extension" of $f$ (a measure of the radius required to reach the VEV) has therefore increased enormously as $\xi$ approached near the critical coupling $\xi_c$ and this is analogous to the divergence of the coherence length in GL mean-field theory as one approaches the critical temperature $T_c$.}
\label{Graph9}
\end{figure}
\begin{table}[b]
\centering
\includegraphics[scale=0.35]{Table.pdf}
\caption{We present data for $\xi$ ranging from $-0.14$ to $0.095$ (near the critical coupling $\xi_c=2/21\approx 0.0952$). The theoretically predicted values of the VEV $v_{eff}$ and cosmological constant $\Lambda_{eff}$ calculated using \reff{Veff} and \reff{Leff} respectively match the numerical values to within three or four decimal places. The peak value of the magnetic field occurs at the origin and also decreases monotonically as $\xi$ increases. The magnetic flux obtained by integrating numerically over the magnetic field profile remains constant despite the different profiles and its numerical value matches the theoretically expected value of $\Phi=2\,\pi\,n/e=2.0944$ to within three or four decimal places. This provides a very strong check on our numerical simulation. The ADM mass increases from $\xi=0.095$ to $\xi=-0.12$ but this trend does not extend all the way to $\xi=-0.14$; this is due to a significant negative gravitational binding energy in the case of $\xi=-0.14$ (see body of text for more details).}
\label{Table}
\end{table}
\begin{figure}[!htb]
\includegraphics[scale=0.3]{VEVPlot.pdf}
\caption{Plot of the numerical value of the VEV vs. $\xi$. The data points trace out a curve similar to the plot in fig. \ref{VEV} of the VEV vs. $\xi$ that was obtained theoretically and similar to the curve in fig. \ref{Order} of the order parameter vs. temperature in GL mean-field theory. The VEV decreases monotonically and its slope gets steeper (more negative) as $\xi$ increases towards the critical coupling. The data points near $\xi_c$ obey the power law $v_{eff}\propto (\xi_c-\xi)^{1/2}$ (see body of text above for exact comparison); this confirms that our system undergoes critical phenomena with a critical exponent of $1/2$.}
\label{VEVPlot}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=0.3]{B_Plot.pdf}
\caption{Plot of peak magnetic field vs. $\xi$. Like the VEV, it decreases monotonically as $\xi$ increases but in sharp contrast to the VEV, its slope gets flatter (less negative) as $\xi$ increases towards the critical coupling. Therefore, the peak magnetic field does not act like an order parameter.}
\label{BPlot}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=0.3]{Mass_Plot.pdf}
\caption{Plot of mass $M$ vs. $\xi$. Near the critical coupling, this plot looks similar to the one for the VEV; in particular, its slope gets steeper (more negative) as $\xi$ increases towards the critical coupling. The mass increases as $\xi$ decreases but unlike the VEV, this trend stops when we get to the most negative point, $\xi=-0.14$, where the mass is less than for $\xi=-0.12$ due to gravity's effect (see discussion in body of text).}
\label{MPlot}
\end{figure}
\FloatBarrier
\subsubsection{Extension of scalar field and magnetic field and divergence at critical coupling $\xi_c$}
We have already mentioned that as $\xi$ increases towards the critical coupling, the scalar field and magnetic field extend further out. In the case of the scalar field, this means it rises slower and plateaus at its VEV over a longer radius. For the magnetic field, it means that starting from its peak at the origin, it decreases towards zero in a slower fashion, again over a longer radius. In short, the core region of the vortex occurs over a longer spatial range as $\xi$ gets larger.
To make this more quantitative, we will define the extension $r_f$ of the scalar field to be the radius where it reaches $99.9\%$ of its VEV and define the extension $r_B$ of the magnetic field to be the radius where it has fallen to $0.1\%$ of its peak value (i.e. decreased by $99.9\%$ from its peak at the origin). We plot in fig. \ref{Extf} the scalar field extension $r_f$ vs. $\xi$ and in fig. \ref{ExtB} the magnetic field extension $r_B$ vs. $\xi$. In both cases, there is a very rapid increase in the extension when $\xi$ is near the critical coupling $\xi_c$. We will see that the extension actually diverges at the exact value of $\xi=\xi_c=2/21$. This is reminiscent of the divergence of the coherence length in GL mean-field theory at the critical temperature $T_c$.
\begin{figure}[!htb]
\includegraphics[scale=0.3]{Extension_f.pdf}
\caption{Extension of the scalar field $f(r)$ as a function of $\xi$. Note the rapid increase in the extension as one approaches near the critical coupling $\xi_c\approx 0.0952$. The extension is expected to diverge at the exact value of $\xi_c=2/21$.}
\label{Extf}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=0.3]{Extension_B.pdf}
\caption{The extension of the magnetic field $B_m(r)$ as a function of $\xi$. Here also there is a rapid increase in the extension as one approaches near the critical coupling $\xi_c\approx 0.0952$. The extension is expected to diverge at the exact value of $\xi_c=2/21$.}
\label{ExtB}
\end{figure}
We will now show analytically that $f(r)$ approaches the VEV in the slowest fashion possible in the limit when $\xi$ approaches $\xi_c$. If we let $f(r)=v_{eff}-\beta(r)$ asymptotically, we know that $\beta(r)$ is given by \reff{beta} which we rewrite for convenience below
\begin{align}
\beta(r)&= c\, r^{-1-\Big[\dfrac{-\alpha_{eff}\,\Lambda_{eff} + 2 \,\alpha_{eff}\,
v_{eff}^2 \,\lambda - 64 \,v_{eff}^2 \,\Lambda_{eff} \,\xi^2}{-\alpha_{eff}\,\Lambda_{eff} - 16 \,v_{eff}^2 \,\Lambda_{eff} \,\xi^2}\Big]^{1/2}}\nonumber\\
&= c\, r^{-1-P^{1/2}}
\label{beta3}
\end{align}
where $P$ is the quantity in square brackets. Since $\alpha_{eff}>0$ and $\Lambda_{eff}<0$, all the terms in the numerator and denominator in the square brackets are positive. Moreover, the numerator exceeds the denominator by $2\,\alpha_{eff}\,v_{eff}^2\,\lambda-48\,v_{eff}^2\,\Lambda_{eff}\,\xi^2\ge 0$, so that $P \ge 1$. We have that $\beta$ approaches zero asymptotically as $1/ r^{1+P^{1/2}}$. When $\xi\to\xi_c$, we have that $v_{eff}\to 0$ and $P\to 1$. Therefore, as $\xi \to \xi_c$, $\beta$ decreases as $1/r^2$ asymptotically which is the slowest fall-off it can have which translates to the slowest approach that $f$ can have towards its VEV.
Now $r_f$ is the extension, defined as the radius where $f=0.999 \,v_{eff}$ so that $r_f^{1+P^{1/2}}$ is proportional to $1/(0.001\,v_{eff})$. This diverges as $\xi\to\xi_c$ since $v_{eff}\to 0$. It is therefore expected that the extension $r_f$ diverges at the critical coupling $\xi_c$ in accordance with the trend in fig. \ref{Extf}.
Asymptotically we have that $a(r)=n-\epsilon(r)$ where $\epsilon$ is given by \reff{eps}. The magnetic field is given by $B_m=\sqrt{A(r)}\,a'(r)/(e\,r)$. Asymptotically, $A(r) \to -\Lambda_{eff}\,r^2$ and $a'(r) \to -\epsilon'(r)$ so that $B_m$ falls off asymptotically as $r^{-\frac{e\,v_{eff}}{(-\Lambda_{eff})^{1/2}}-1}$. As $\xi \to \xi_c$, we have that $v_{eff} \to 0$ so that $B_m$ falls off as $1/r$ which is the slowest fall-off possible. The extension $r_B$ is therefore proportional to the inverse of the peak magnetic field as $\xi \to \xi_c$. Our numerical results show that the peak value of the magnetic field at the origin keeps decreasing (towards zero) as $\xi\to \xi_c$ so that the extension $r_B$ tends to infinity. This agrees with the fact that the magnetic flux can remain constant as the peak magnetic field at the origin decreases to zero only if the magnetic field has an infinite extension.
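These limits can also be checked directly from the closed-form expressions \reff{Veff} and \reff{Leff}. The Python sketch below evaluates $v_{eff}$, $\Lambda_{eff}$, the exponent $P$ of \reff{beta3} and the gauge-field exponent $e\,v_{eff}/(-\Lambda_{eff})^{1/2}$ of \reff{eps} for couplings approaching $\xi_c$, confirming $P\to 1$ and a vanishing gauge-field exponent (the sampled couplings are illustrative):
\begin{verbatim}
import numpy as np

# Asymptotic fall-off exponents as xi -> xi_c = 2/21, using the closed forms
# for v_eff and Lambda_eff derived in Appendix A.
lam, e, v, alpha, Lam = 1.0, 3.0, 1.0, 1.0, -1.0

def v_eff(xi):
    root = np.sqrt(alpha**2 + 2*v**2*alpha*xi + v**4*xi**2
                   - 24*alpha*Lam*xi**2/lam)
    return np.sqrt(2*v**2 + alpha/xi - root/xi)

def Lam_eff(xi):
    root = np.sqrt(alpha**2 + v**4*xi**2 + 2*v**2*alpha*xi
                   - 24*alpha*Lam*xi**2/lam)
    return lam/(12*xi**2) * (alpha + v**2*xi - root)

for xi in (0.05, 0.09, 0.095, 0.0952):
    ve, Le = v_eff(xi), Lam_eff(xi)
    a_eff = alpha + xi*ve**2
    P = ((-a_eff*Le + 2*a_eff*ve**2*lam - 64*ve**2*Le*xi**2)
         / (-a_eff*Le - 16*ve**2*Le*xi**2))
    print(f"xi={xi}: v_eff={ve:.4f}, P={P:.4f}, "
          f"gauge exponent={e*ve/np.sqrt(-Le):.4f}")
\end{verbatim}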
\subsection{Plot of vortex profiles and magnetic field in asymptotically Minkowski spacetime}
We now consider the role the coupling $\xi$ plays for the case of asymptotically Minkowski spacetime. This corresponds to $\Lambda=0$ which as we have seen, implies $\Lambda_{eff}=0$. As previously mentioned, there is no critical coupling for asymptotically Minkowski spacetime. The VEV is expected to remain constant at $v_{eff}=v=1$ and the cosmological constant is expected to remain at $\Lambda_{eff}=0$ i.e. the VEV $v_{eff}$ and $\Lambda_{eff}$ have no dependence on $\xi$ in contrast to the AdS$_3$ case. We run numerical simulations for different values of $\xi$ with the same set of parameters as before: $\lambda=1$, $e=3$, $n=1$, $v=1$ and $\alpha=1$. The only difference is that $\Lambda=0$ now (instead of the $\Lambda=-1$ we used in the AdS$_3$ case). We work again in natural units. As before, the parameters and quantities like the radius, mass and magnetic field are quoted as numbers but one should think of a unit attached to them\footnote{Let $x$ be the unit attached to the VEV $v$ and let $z$ be the unit attached to the coupling $e$. The quantity $e\,v\,r$ is dimensionless so that the unit attached to the radius $r$ is $(x\,z)^{-1}$ which has the correct dimensions of [L]. The mass is again expressed in units of the VEV squared i.e. $x^2$. The magnetic field $B_m= \frac{\sqrt{A}\, a'}{e \,r}$ is expressed in units of $x^2\,z$ (which again has dimensions of [L]$^{-3/2}$). As before, $\lambda/e^2$ is dimensionless so that the unit attached to $\lambda$ is $z^2$ (which again has dimensions of [L]$^{-1}$).}.
We made plots for five different cases: $\xi=\{-0.4,-0.2,0.0,0.2,0.4\}$. The plots of the scalar field $f$ and the gauge field $a$ all plateau at unity regardless of $\xi$. We also plot the magnetic field whose profile depends on $\xi$. The most important plot by far is the one for the metric $A$ which plateaus asymptotically to a constant value (which we previously labeled $D$). The profile of the metric here (starting at unity at the origin and then plateauing to $0<D<1$) is in stark contrast to the AdS$_3$ case where the metric had an $r^2$ dependence asymptotically. The constant $D$ can only be obtained numerically (by running the simulation) and it changes with $\xi$. Since the deficit angle depends on $D$ via \reff{delta}, the deficit angle depends on $\xi$. We also calculate the mass $M_{flat}$ of the vortex via \reff{M2}. The constant $D$, the deficit angle $\delta$, the mass $M_{flat}$ as well as the peak value of the magnetic field are presented in table \ref{Table2}. In $2+1$-dimensional General Relativity in asymptotically Minkowski spacetime, there is the classic result due to Deser et al. \cite{Deser} that a point mass produces a deficit angle proportional to the mass. The ratio of mass to deficit angle is equal to $2\,\alpha=1/(8\,\pi\,G)$ and is a constant since Newton's constant $G$ does not change as the mass changes. In contrast, for the vortex with non-minimal coupling, the ratio of mass to deficit angle is not constant but depends on $\xi$: it is equal to $2\alpha_{eff}=2(\alpha+v^2\,\xi)=2\,(1+\xi)$ where we substituted the values $\alpha=1$ and $v=1$ for our parameters. A striking consequence is that it is possible for a larger mass to actually produce a smaller deficit angle compared to a smaller mass. For example, in table \ref{Table2}, the case at $\xi=-0.4$ has the largest deficit angle of 1.894 rad in our sample and has a mass of 2.273 whereas the case at $\xi=0.4$ has a smaller deficit angle of 1.225 rad but the largest mass of 3.430 in our sample, roughly 1.5 times that of the former case.
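This claim is easy to verify against the numbers in table \ref{Table2}. The Python check below recomputes the deficit angle from the plateau value $D$ via \reff{delta} and compares the ratio $M_{flat}/\delta$ with $2\,\alpha_{eff}=2(1+\xi)$, using the two extreme couplings quoted above:
\begin{verbatim}
import numpy as np

# Mass-to-deficit-angle ratio check for alpha = v = 1:
# expect M/delta = 2*alpha_eff = 2*(1 + xi).
alpha, v = 1.0, 1.0
data = {-0.4: (0.488, 2.273), 0.4: (0.648, 3.430)}   # xi: (D, M_flat)

for xi, (D, M) in data.items():
    delta = 2.0 * np.pi * (1.0 - np.sqrt(D))         # angular deficit
    print(f"xi={xi:+.1f}: delta={delta:.3f} rad, M/delta={M/delta:.3f}, "
          f"2*(1+xi)={2*(alpha + v**2*xi):.3f}")
\end{verbatim}
Both cases reproduce the expected ratio to the accuracy of the quoted data.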
\begin{figure}[!htb]
\includegraphics[scale=0.7]{FlatPlot_xi=-0.4.pdf}
\caption{Flat case $\xi=-0.4$. We plot the metric $A$, the magnetic field $B_m$ and the scalar $f$ and gauge field $a$. Since the VEV of the scalar field is always unity, it plateaus at unity regardless of the value of $\xi$. The gauge field $a$ always plateaus at unity also since $n=1$ for all $\xi$. We therefore show the scalar and gauge field profile here but not in subsequent figures, since they are roughly similar. The metric profile plateaus at $D=0.488$ which yields a deficit angle of $1.894$ rad, the largest deficit angle in our sample but not the one with the highest mass (see table \ref{Table2}). The magnetic field peaks at $1.43$, which is the highest peak in our sample. This implies that it extends the least (falls off fastest) since the magnetic flux remains constant at $\Phi=2 \pi n/e=2.0944$ to within three or four decimal places.}
\label{Flat1}
\end{figure}
\begin{figure}[!htb]
\includegraphics[scale=0.8]{FlatPlot_xi=-0.2.pdf}
\caption{Flat case $\xi=-0.2$. The metric $A$ plateaus at $D=0.615$ yielding a deficit angle $1.356$ rad, the second largest deficit angle in our sample. The magnetic field peaks at $1.092$, the second largest peak in our sample.}
\label{Flat2}
\end{figure}\begin{figure}[!htb]
\includegraphics[scale=0.8]{FlatPlot_xi=0.pdf}
\caption{Flat case $\xi=0$. The non-minimal coupling is turned off here. The metric $A$ plateaus at $D=0.668$ yielding a deficit angle of $1.148$ rad. The magnetic field peaks at $0.962$, lower than in the previous case.}
\label{Flat3}
\end{figure}\begin{figure}[!htb]
\includegraphics[scale=0.8]{FlatPlot_xi=0.2.pdf}
\caption{Flat case $\xi=0.2$. The metric $A$ plateaus at $D=0.668$, the same value as the previous case. It therefore also has a deficit angle of $1.148$ rad. It has a peak magnetic field of $0.950$, lower than in the previous case. Up to this point there has been a trend: the peak of the magnetic field has monotonically decreased and the deficit angle has decreased or remained the same.}
\label{Flat4}
\end{figure}\begin{figure}[!htb]
\includegraphics[scale=0.8]{FlatPlot_xi=0.4.pdf}
\caption{Flat case $\xi=0.4$. This case departs from the above decreasing trend. The metric plateaus at $D=0.648$ yielding a deficit angle of $1.225$ rad and a peak magnetic field of $0.987$: both are greater than in the previous two cases.}
\label{Flat5}
\end{figure}
\begin{table}[b]
\centering
\includegraphics[scale=1]{Table2.pdf}
\caption{The most important thing about this table is that the deficit angle is not proportional to the mass. Compare the first and last row. At $\xi=-0.4$ one has the largest deficit angle of $1.894$ rad with a mass of $2.273$ whereas at $\xi=+0.4$ the mass is significantly higher at $3.430$ and yet it has a much smaller deficit angle of $1.225$ rad. With the non-minimal coupling term present, the ratio of mass to deficit angle is not constant but depends on $\xi$ (see body of text).}
\label{Table2}
\end{table}
\section{Conclusion}
In this paper, we studied the effects of the non-minimal coupling term
$\xi\,R\,|\phi|^2$ on a vortex under Einstein gravity in an AdS$_3$ and flat (conical) background. In the case of AdS$_3$, this led to the emergence of a critical coupling $\xi_c$ where the VEV of the scalar field is zero for $\xi$ at or above $\xi_c$ but is non-zero when $\xi$ crosses below $\xi_c$. For the values of our parameters, $\xi_c$ was equal to $2/21\approx 0.0952$. We presented our numerical results in plots and tables for nine values of $\xi$. Our plot of the numerically obtained VEV versus $\xi$ was in accord with the theoretical expectation that the slope has a discontinuity and diverges at the critical coupling $\xi_c$. For $\xi$ near $\xi_c$, we verified numerically that the VEV indeed behaved according to the power law $|\xi-\xi_c|^{1/2}$. These results confirmed the idea that the critical coupling $\xi_c$ acts like the analog of the critical temperature $T_c$ in GL mean-field theory. In that theory, the order parameter is zero at or above $T_c$ and is non-zero below $T_c$ and behaves according to the power law $|T-T_c|^{1/2}$. The plot of the order parameter versus temperature $T$ also shows a discontinuity and divergence in the slope near $T_c$. Numerical results of the
``extension" of the scalar field (core region of the vortex) show that it increases monotonically as $\xi$ increases, with a dramatic increase near $\xi_c$. We showed analytically that it is expected to diverge at the critical coupling and this is analogous to the divergence of the coherence length in GL mean-field theory as one approaches the critical temperature.
In asymptotically flat (conical) spacetime, we considered five values of $\xi$ and remarkably, found that higher masses did not necessarily lead to a higher deficit angle as one might naively expect. The reason for this is that, when a non-minimal coupling term is present, the ratio of mass to deficit angle is no longer constant but depends on the coupling $\xi$. This can lead to cases where a higher mass has a smaller deficit angle than a smaller mass as our data clearly showed.
If $\xi_c$ acts as the analog to $T_c$ in GL mean-field theory, this naturally raises the question, ``Is the non-minimally coupled vortex a thermodynamic system at non-zero temperature?". The answer is clearly no. The Nielsen-Olesen vortex without gravity constitutes a static classical field configuration which is at zero temperature and has zero entropy. The zero temperature agrees with the fact that the fields have no average kinetic energy and the zero entropy is in accord with the fact we know everything about the field's configuration throughout spacetime; we are not ignorant of its configuration at any time and no information is hidden from us. The zero entropy is of course consistent with the zero temperature. When gravity is included, this can change only if the vortex acquires an event horizon. However, our gravitating vortex solutions are non-singular static solutions with no event horizon. The temperature and entropy are again zero and as before, the metric field, as well as the scalar and gauge field, are static throughout all of spacetime. In contrast, the BTZ black hole \cite{BTZ1,BTZ2} has a non-zero temperature and entropy as it has an event horizon (for simplicity, assume no angular momentum or electric charge, only mass with a single horizon). Note that the BTZ spacetime has a timelike Killing vector outside the event horizon but like the Schwarzschild black hole in $3+1$ dimensions, it has no timelike Killing vector inside the event horizon \cite{Extremal}. This implies that there is no coordinate transformation that can put the metric in static form inside the event horizon so that an outside observer is ignorant of the metric configuration inside at any particular time. Simply put, information is hidden from us behind the event horizon \cite{Bekenstein}. Note that in contrast, our non-singular gravitating static vortex has a timelike Killing vector throughout spacetime and no information is hidden from us (see also \cite{Carroll2, Horowitz, Teitelboim} for a related discussion).
The vortex actually constitutes a classical solution in quantum field theory (QFT) \cite{Weinberg}. The vortex cannot be obtained from perturbative QFT as it is a non-perturbative solution. It turns out that since the size of the vortex is much larger than its Compton wavelength, the classical non-perturbative solution constitutes a valid solution to the QFT (i.e. a very good first approximation) \cite{Weinberg}. Perturbation theory can then be used to obtain one-loop quantum corrections to the vortex by quantizing about the classical configuration. In particular, quantum fluctuations of the scalar field will change the nature of the potential as there will now be logarithmic terms besides the usual terms \cite{Zee, ArielNoah}. The critical exponent of $1/2$ will therefore change as a consequence of these quantum corrections. So an interesting and pertinent problem to solve for the future is to determine the critical exponent of the non-minimally coupled vortex in an AdS$_3$ background after quantum corrections. This would be a considerably more complicated calculation than, say, the quantization about the $1+1$-dimensional kink in Minkowski spacetime \cite{Weinberg} as we have one extra spatial dimension and a curved space background.
\pagebreak
\begin{appendices}
\numberwithin{equation}{section}
\setcounter{equation}{0}
\section{Derivation of the VEV $v_{eff}$ and cosmological constant $\Lambda_{eff}$}
\numberwithin{equation}{section}
\setcounter{equation}{0}
In this appendix we derive the expressions for $v_{eff}$ and $\Lambda_{eff}$ given by equations \reff{Veff} and \reff{Leff} respectively. We start by rewriting the equations \reff{Vev1} and \reff{Cosmo1} where $v_{eff}$ and $\Lambda_{eff}$ are expressed in terms of each other:
\begin{align}
v_{eff}^2= v^2 + \dfrac{12 \,\xi \,\Lambda_{eff}}{\lambda}\,.
\label{Vev2}
\end{align}
\begin{align}
\alpha (R-2\,\Lambda) +\xi \,R\, v_{eff}^2- \dfrac{\lambda}{4}(v_{eff}^2-v^2)^2=(\alpha + \xi \, v_{eff}^2)(R- 2\,\Lambda_{eff})\,.
\label{Cosmo2}
\end{align}
We first substitute the asymptotic value of the Ricci scalar, $R= 6 \,\Lambda_{eff}$, into \reff{Cosmo2} which yields
\begin{align}
\Lambda_{eff}=\dfrac{\alpha \,\Lambda + \dfrac{\lambda}{8}(v_{eff}^2-v^2)^2}{\alpha + \xi\, v_{eff}^2}\,.
\label{Cosmo3}
\end{align}
Substituting \reff{Cosmo3} into \reff{Vev2} yields a quadratic equation for $v_{eff}^2$:
\begin{align}
\lambda \xi \,(v_{eff}^2)^2 -2\,\lambda (\alpha+2 v^2 \xi)\, v_{eff}^2 +2\,v^2 \alpha \lambda+3\, v^4 \lambda \xi + 24 \,\alpha \Lambda\xi =0\,.
\label{Veff3}
\end{align}
This yields the following two possible solutions for $v_{eff}^2$ (which we label I and II):
\begin{align}
\text{I:}\quad 2 v^2+\frac{\alpha }{\xi }-\frac{\sqrt{\alpha ^2+2 v^2 \alpha \xi +v^4 \xi ^2-24 \alpha \Lambda \xi ^2/\lambda }}{\xi }
\label{I}
\end{align}
\begin{align}
\text{II:} \quad 2 v^2+\frac{\alpha }{\xi }+\frac{\sqrt{\alpha ^2 +2 v^2 \alpha \xi +v^4 \xi ^2-24 \alpha \Lambda \xi ^2/\lambda}}{ \xi }
\end{align}
However, only the first solution satisfies the requirement that $v_{eff}$ is equal to $v$ in the limit $\xi \to 0$. The second solution diverges in that limit and must be disregarded. Taking the positive square root of the first solution yields the quoted result \reff{Veff} for $v_{eff}$:
\begin{align}
v_{eff}=\Bigg[2 v^2 + \frac{\alpha}{\xi} - \frac{\sqrt{\alpha^2 +
2 \,v^2\,\alpha \,\xi + v^4\,\xi^2 - 24 \,\alpha\,\Lambda\,\xi^2/\lambda}}{\xi}\Bigg]^{1/2}
\label{Veff4}
\end{align}
Substituting the above solution \reff{Veff4} into \reff{Cosmo3} yields the quoted result \reff{Leff} for $\Lambda_{eff}$:
\begin{align}
\Lambda_{eff}=\dfrac{\lambda}{12\,\xi^2}\Big(\alpha + v^2\,\xi -\sqrt{\alpha^2+ v^4\,\xi^2 +2\,v^2\,\alpha\,\xi-24\,\alpha\,\Lambda\,\xi^2/\lambda}\Big)\,.
\label{Leff4}
\end{align}
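The algebra above can be verified symbolically. The following Python ({\tt sympy}) sketch solves the quadratic \reff{Veff3} for $u=v_{eff}^2$ and confirms that only one branch reduces to $v^2$ as $\xi\to 0$ (the symbol assumptions are the physically relevant ones, $\lambda,\alpha,v>0$):
\begin{verbatim}
import sympy as sp

# Solve the quadratic for u = v_eff^2 and take the xi -> 0+ limit of each root.
lam, alpha, v = sp.symbols('lambda alpha v', positive=True)
xi, Lam, u = sp.symbols('xi Lambda u', real=True)
quadratic = (lam*xi*u**2 - 2*lam*(alpha + 2*v**2*xi)*u
             + 2*v**2*alpha*lam + 3*v**4*lam*xi + 24*alpha*Lam*xi)
for root in sp.solve(sp.Eq(quadratic, 0), u):
    print(sp.limit(root, xi, 0, dir='+'))   # v**2 (branch I) and oo (branch II)
\end{verbatim}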
\section{Full equations of motion}
\numberwithin{equation}{section}
\setcounter{equation}{0}
The three equations of motion quoted in the text are \reff{EOMB2}, \reff{EOMf2} and \reff{EOMa2}. Equation \reff{EOMa2} contains the function $W(r)=B'/B$ and equation \reff{EOMf2} contains $W$ and its derivative $W'$. We can extract $W$ from \reff{EOMA} and this yields
\begin{align}
W=&\dfrac{1}{4 e^2 r A \left(\alpha +\xi f^{\,2}+2 r \xi f f'\right)}
\Big(-e^2 r^2 \left(v^4 \lambda +8 \alpha \Lambda \right)-2 e^2 (n^2-r^2 v^2 \lambda \nonumber \\&\quad\quad-2\, n \,a+a^2) f^2-e^2 r^2 \lambda f^4-16 e^2 r \xi A f f'+2 A (a'^{\,2}+ e^2 r^2 f'^{\,2})\Big)\,.
\label{W}
\end{align}
Substituting the above expression for $W$ (as well as its derivative) back into \reff{EOMf2} and \reff{EOMa2} and keeping \reff{EOMB2} the same yields three equations of motion that have no dependence on the function $B$. The full three equations are:
\begin{align}
&e^2 r^2 \lambda f^4+e^2 r \,(r v^4 \lambda+8 r \alpha \Lambda+4 \alpha A')+2 e^2 f^2 (n^2-r^2 v^2 \lambda-2\, n \,a+a^2+2 r \xi A')\nonumber\\&\quad\quad+2 A \,(a'^{\,2}+e^2 r^2 (1+8 \xi) f'^{\,2})+8 e^2 r \xi f \,\big(r A' f'+2 A \,(f'+r f'')\big)=0\,.\label{EOMB4}
\end{align}
\begin{align}
&-2 r^2 \lambda f^3-2 f \left(n^2-r^2 v^2 \lambda -2 n a+a^2+2 r \xi A'\right)+r (r A' f'+2 A (f'+r f''))\nonumber\\&+\dfrac{1}{8 e^4 A (\alpha +\xi f (f+2 r f'))^2}\Big(\xi f (e^2 r^2 (v^4 \lambda +8 \alpha \Lambda )-2 A a'^{\,2}\nonumber\\&+e^2 (2 (n^2-r^2 v^2 \lambda -2 n a+a^2) f^2+r^2 \lambda f^4+16 r \xi A f f'-2 r^2 A f'^{\,2}))^2\Big)\nonumber\\&
+\dfrac{r}{4 e^4 A (\alpha +\xi f (f+2 r f'))^2}\,\Bigg(2 e^2 \xi f A' (\alpha +\xi f (f+2 r f')) (e^2 r^2 (v^4 \lambda +8 \alpha \Lambda )-2 A a'^{\,2}\nonumber\\&+e^2 (2 (n^2-r^2 v^2 \lambda -2 n a+a^2) f^2+r^2 \lambda f^4 +16 r \xi A f f'-2 r^2 A f'^{\,2}))\nonumber\\&-e^2 A f' (\alpha +\xi f (f+2 r f'))(e^2 r^2 (v^4 \lambda +8 \alpha \Lambda )-2 A a'^{\,2}\nonumber\\&+e^2 (2 (n^2-r^2 v^2 \lambda -2 n a+a^2) f^2+r^2 \lambda f^4+16 r \xi A f f'-2 r^2 A f'^{\,2}))\nonumber\\&+\frac{1}{r}\xi f \bigg (-e^4 (r^2 (v^4 \lambda +8 \alpha \Lambda )+2 (n^2-r^2 v^2 \lambda -2 n a+a^2) f^2+r^2 \lambda f^4) \nonumber\\&(r^2 \lambda f^4+r (r v^4 \lambda +8 r \alpha \Lambda +4 \alpha A')+2 f^2 (n^2-r^2 v^2 \lambda -2 n a+a^2+2 r \xi A')+8 r^2 \xi f A' f')\nonumber\\&-4 e^2 A (-2 e^2 r^2 \lambda \xi f^6-r^2 (v^4 \lambda +8 \alpha \Lambda ) (a'^{\,2}+e^2 (2 \alpha +r^2 (1-2 \xi ) f'^{\,2}))\nonumber\\&-r f^4 (4 e^2 \xi (-n+a) a'+r \lambda a'^{\,2}+e^2 r \lambda (2 (\alpha -2 v^2 \xi )+r^2 (1+6 \xi ) f'^{\,2}))\nonumber\\&-2 f^2 (2 e^2 r \alpha (-n+a) a'+(n^2-r^2 v^2 \lambda -2 n a+a^2) a'^{\,2}+e^2 r^2 (-2 v^2 \alpha \lambda +v^4 \lambda \xi +8 \alpha \Lambda \xi \nonumber\\&+(1+2 \xi ) (n^2-r^2 v^2 \lambda -2 n a+a^2) f'^{\,2}))+2 e^2 r^3 \lambda \xi f^5 (2 f'+r f'')\nonumber\\&+2 e^2 r f (2 (-n^2 \alpha +r^2 (v^2 \alpha \lambda +2 v^4 \lambda \xi +16 \alpha \Lambda \xi )+\alpha (2 n-a) a) f'+r^3 (v^4 \lambda +8 \alpha \Lambda ) \xi f'')\nonumber\\&+4 e^2 r f^3 (-(-5 n^2 \xi +r^2 \lambda (\alpha +3 v^2 \xi )+5 \xi (2 n-a) a+2 r \xi (-n+a) a') f'\nonumber\\&+r \xi (n^2-r^2 v^2 \lambda -2 n a+a^2) f''))-4 A^2 \Big(a'^{\,4}-16 e^4 r \xi f (\alpha +\xi f^2) f'\nonumber\\&+4 e^2 r a' (\alpha +\xi f (f+2 r f')) a''-2 e^2 r a'^{\,2} (r (-1+2 \xi ) f'^{\,2}+2 \xi f (6 f'+r f''))\nonumber\\&+e^4 r^2 \big(-16 r \xi f f'^{\,3}+r^2 (1-4 \xi ) f'^{\,4}-16 \xi f (\alpha +\xi f^2) f''\nonumber\\&+4 r (\alpha +\xi f^2) f' f''+4 f'^{\,2} (\alpha -4 \alpha \xi +\xi f (f+20 \xi f+r^2 f''))\big)\Big)\bigg)\Bigg)=0\,.\label{EOMf4}
\end{align}
\begin{align}
&2 e^2 r (n-a) f^2-2 A a'+r a' A'+2 r A a''\nonumber\\&+\dfrac{a'}{4 e^2 \left(\alpha +\xi f^2+2 r \xi f f'\right)} \Big(-e^2 r^2 \left(v^4 \lambda +8 \alpha \Lambda \right)\nonumber\\&-2 e^2 \left(n^2-r^2 v^2 \lambda -2 n a+a^2\right) f^2-e^2 r^2 \lambda f^4-16 e^2 r \xi A f f'+2 A \left(a'^{\,2}+e^2 r^2 f'^{\,2}\right)\Big)=0\,.
\label{EOMa4}
\end{align}
The above three equations are those we solve numerically.
\end{appendices}
\section*{Acknowledgments}
A.E. acknowledges support from a Discovery Grant of the Natural Sciences and Engineering Research Council of Canada (NSERC).
\section{Benchmarking of polarized PDF evolution}
\label{sec:apppdfevol}
We have benchmarked our implementation of the evolution of polarized
parton densities by
cross-checking against the
Les Houches polarized PDF evolution benchmark
tables~\cite{Dittmar:2005ed}. Note that in
Ref.~\cite{Dittmar:2005ed}
the polarized sea PDFs are given incorrectly, and should read
\begin{eqnarray}
x\Delta \bar{u}&=&-0.045\, x^{0.3} (1-x)^7\ , \nonumber\\
x\Delta \bar{d}&=&-0.055\, x^{0.3} (1-x)^7\ .
\end{eqnarray}
These tables were obtained from a
comparison of the {\tt HOPPET}~\cite{Salam:2008qg} and {\tt
PEGASUS}~\cite{Vogt:2008yw} evolution codes, which are $x-$space and
$N-$space codes respectively. In order to perform a meaningful
comparison, we use the so-called iterated solution of the $N-$space evolution
equations and use the same initial PDFs and running coupling as
in~\cite{Dittmar:2005ed}. The
relative difference $\epsilon_{\rm rel}$
between our PDF evolution and the benchmark tables of
Refs.~\cite{Dittmar:2005ed} at NLO in the ZM-VFNS scheme
are tabulated in Tab.~\ref{tab:lhacc} for various combinations of
polarized PDFs: the accuracy of our code is $\mathcal{O}\left(
10^{-5}\right)$ for all relevant values of $x$, which is the
nominal accuracy of the agreement between {\tt HOPPET} and {\tt
PEGASUS}.
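For reference, the corrected sea-quark inputs quoted above, together with the relative difference entering Tab.~\ref{tab:lhacc}, can be encoded in a few lines of Python (a sketch, not the {\tt FastKernel} code; the percent normalization of $\epsilon_{\rm rel}$ is our assumption based on the table caption):
\begin{verbatim}
import numpy as np

# Corrected polarized sea-quark benchmark inputs
def x_delta_ubar(x): return -0.045 * x**0.3 * (1.0 - x)**7
def x_delta_dbar(x): return -0.055 * x**0.3 * (1.0 - x)**7

# Percentage difference between evolved PDFs and the benchmark tables
def eps_rel(ours, benchmark):
    ours, benchmark = np.asarray(ours), np.asarray(benchmark)
    return 100.0 * np.abs(ours - benchmark) / np.abs(benchmark)

x = np.array([1e-3, 1e-2, 0.1, 0.5])
print(x_delta_ubar(x), x_delta_dbar(x))
\end{verbatim}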
\begin{table}[t]
\begin{center}
\vskip-0.1cm
\input{tab-benchevol}
\end{center}
\caption{\small Percentage difference between FastKernel perturbative
evolution of polarized PDFs and the Les Houches benchmark
tables~\cite{Dittmar:2005ed}
for different
polarized PDF combinations at NLO in the ZM-VFNS scheme.
\label{tab:lhacc}}
\vskip-0.1cm
\end{table}
Therefore, we can conclude that the accuracy of the polarized
PDF evolution in the {\tt FastKernel} framework is satisfactory
for precision phenomenology.
\section{Kernels for Physical Observables}
\label{sec:kernels}
\def\nonumber{\nonumber}
\def\Gamma_{\rm S}{\Gamma_{\rm S}}
\def\Gamma_{\rm NS}{\Gamma_{\rm NS}}
\def\smallfrac{1}{2}{\smallfrac{1}{2}}
\def\ESp{E_{\rm S}^+}
\def\ESm{E_{\rm S}^-}
\def\ENSp{E_{\rm NS}^+}
\def\ENSm{E_{\rm NS}^-}
In this appendix we give the expressions of
the polarized structure functions $g_1(x,Q^2)$ in terms
of the combinations of PDFs used in
Sec.~\ref{sec:fact}. We follow the notation of
Ref.~\cite{Ball:2008by}. The expressions below can be used with
evolution kernels and coefficients evaluated at any order in
perturbation theory. At present, polarized evolution kernels
$\Gamma_{\rm NS}^{\pm,v}$ and $\Gamma_{\rm S}^{ij}$, $i,j = q,g$, as described in
Ref.~\cite{Ball:2008by}, are available
at NLO~\cite{Mertig:1995ny,Vogelsang:1996im}.
Partial NNLO results are given in Ref.~\cite{Vogt:2008yw}. The DIS coefficient
functions are known up to NNLO~\cite{Zijlstra:1993sh}.
Throughout this section we use a condensed notation in which the
arguments of $f_i$, $F_I$, $\Gamma$, $C$ and $K$ are
all suppressed: parton distributions evaluated at $Q_0^2$ are denoted by
a subscript $0$, e.g. $\Delta \Sigma_0\equiv\Delta
\Sigma(x,Q_0^2)$. We will assume
throughout that $Q_0^2=1$ GeV$^2$. Heavy quark
parton distributions are radiatively generated
as described in Sect.~\ref{sec:fact}.
Target mass corrections are included in the kernels using
the procedure described in Sect.~\ref{sec:tmc}.
The datasets included in our analysis provide measurements
of $g_1^p$, $g_1^d$ and $g_1^n$. These observables
can be written in terms of the
linear combinations of polarized parton densities:
in the quark model, the proton polarized structure function
has been shown in Eq.~(\ref{g1p-parton}), while
in the PDF evolution basis it reads
\begin{equation}
\label{eq:g1p}
g_1^p = \smallfrac{1}{2}
\sum_{i=1}^{n_f} e_i^2 \Delta q_i^+ = \smallfrac{1}{2}\lbrace\smallfrac{5}{18}
\Delta \Sigma
+ \smallfrac{1}{6} \Delta T_3
+ \smallfrac{1}{18} (\Delta T_8 - \Delta T_{15})
+ \smallfrac{1}{30} (\Delta T_{24}- \Delta T_{35})\rbrace\ ,
\end{equation}
where $e_i$ are the quark charges
($\smallfrac{2}{3}$ for $u,c,t$, $-\smallfrac{1}{3}$ for $d,s,b$).
The deuteron polarized structure function can be written as
\begin{equation}
\label{eq:g1d}
g_1^d = \smallfrac{1}{2}\lbrace\smallfrac{5}{18}
\Delta \Sigma
+ \smallfrac{1}{18} (\Delta T_8 - \Delta T_{15})
+ \smallfrac{1}{30} (\Delta T_{24}- \Delta T_{35})\rbrace\ ,
\end{equation}
while the neutron polarized structure function is given
in terms of the proton and deuteron ones as
\begin{equation}
\label{eq:g1n}
g_1^n = 2\frac{g_1^d}{1-1.5\omega_D}-g_1^p\ ,
\end{equation}
with $\omega_D=0.05$ the probability that the deuteron is found
in a D state.
In perturbative QCD, we have
\begin{eqnarray}
\label{eq:f2pdfcomp}
g_1^p &=& \lbrace \smallfrac{5}{18}\Delta C^s_{2,q}\otimes \Delta \Sigma
+ \smallfrac{1}{6}\Delta C_{2,q}\otimes( \Delta T_3
+ \smallfrac{1}{3} (\Delta T_8 - \Delta T_{15})
+ \smallfrac{1}{5} (\Delta T_{24}-\Delta T_{35}))\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad
+ {\langle e_q^2\rangle} \Delta C_{2,g}\otimes \Delta g\rbrace ,\\
\label{eq:f2pdfcomd}
g_1^d &=& \lbrace \smallfrac{5}{18}\Delta C^s_{2,q}\otimes \Delta \Sigma
+ \smallfrac{1}{6}\Delta C_{2,q}\otimes(
\smallfrac{1}{3} (\Delta T_8 - \Delta T_{15})
+ \smallfrac{1}{5} (\Delta T_{24}-\Delta T_{35})) \nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad + {\langle e_q^2\rangle} \Delta C_{2,g}\otimes \Delta g\rbrace \ ,
\end{eqnarray}
where $\otimes$ denotes the Mellin convolution,
and ${\langle e_q^2\rangle}$ is defined as
\begin{equation}
\label{ceenf}
{\langle e_q^2\rangle} = \smallfrac{1}{n_f}\sum_{i=1}^{n_f} e_i^2,
\end{equation}
with $n_f$ the number of active flavours:
${\langle e_q^2\rangle} = \smallfrac{2}{9},
\smallfrac{5}{18},\smallfrac{11}{45},\smallfrac{5}{18}$ for $n_f = 3,4,5,6$.
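These values are easy to verify explicitly; for instance, the following
short Python check (an illustration only, not part of the {\tt FastKernel}
code) reproduces them exactly:
\begin{verbatim}
from fractions import Fraction

# quark squared charges in units of e^2, ordered u, d, s, c, b, t
e2 = [Fraction(4, 9), Fraction(1, 9), Fraction(1, 9),
      Fraction(4, 9), Fraction(1, 9), Fraction(4, 9)]

for nf in (3, 4, 5, 6):
    avg = sum(e2[:nf]) / nf      # <e_q^2> for nf active flavours
    print(nf, avg)               # 2/9, 5/18, 11/45, 5/18
\end{verbatim}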
We can thus write
\begin{eqnarray}
\label{Kg1pp}
g_1^p&=& \lbrace
K_{{\rm g1},\Delta\Sigma}\otimes \Delta\Sigma_0
+K_{{\rm g1},\Delta g} \otimes \Delta g_0
+K_{{\rm g1},+} \otimes \left(\Delta T_{3,0}
+ \smallfrac{1}{3}\Delta T_{8,0}\right)\rbrace\ ,\\
\label{Kg1dd}
g_1^d&=& \lbrace
K_{{\rm g1},\Delta\Sigma}\otimes \Delta\Sigma_0
+K_{{\rm g1},\Delta g} \otimes \Delta g_0
+K_{{\rm g1},+} \otimes
\smallfrac{1}{3}\Delta T_{8,0}
\rbrace\ ,
\end{eqnarray}
where the explicit expressions (in Mellin space) for the polarized evolution kernels are
\begin{eqnarray}
\label{eq:Kf2S}
K_{{\rm g1},\Delta\Sigma}&=&\smallfrac{5}{18} \Delta C^s_{2,q}\Gamma_{\rm S}^{qq}
-\smallfrac{1}{18} \Delta C_{2,q}\Gamma_{\rm S}^{15,q}
+\smallfrac{1}{30} \Delta C_{2,q}(\Gamma_{\rm S}^{24,q}-\Gamma_{\rm S}^{35,q})
+{\langle e_q^2\rangle} \Delta C_{2,g}\Gamma_{\rm S}^{gq}\ ,\\
\label{eq:Kf2g}
K_{{\rm g1},\Delta g}&=&\smallfrac{5}{18} \Delta C^s_{2,q}\Gamma_{\rm S}^{qg}
-\smallfrac{1}{18} \Delta C_{2,q}\Gamma_{\rm S}^{15,g}
+\smallfrac{1}{30} \Delta C_{2,q}(\Gamma_{\rm S}^{24,g}-\Gamma_{\rm S}^{35,g})
+{\langle e_q^2\rangle} \Delta C_{2,g}\Gamma_{\rm S}^{gg}\ ,\\
\label{eq:Kf2T}
K_{{\rm g1},+}&=& \smallfrac{1}{6} \Delta C_{2,q}\Gamma_{\rm NS}^+\ .
\end{eqnarray}
\section{Conclusions and outlook}
\label{sec:conclusions}
We have presented a first determination of polarized parton
distributions based on the NNPDF methodology: {\tt NNPDFpol1.0}. We
have determined polarized PDFs from the most recent inclusive data on
proton, deuteron and neutron deep-inelastic polarized asymmetries and
structure functions. Our main result is that the uncertainty in the
gluon distribution, and to a lesser extent the strange distribution,
and in the small $x$ extrapolation for all parton distributions, is
rather larger than in previous polarized PDF determinations. Also,
there seems to be some tension between the strangeness determined from
inclusive deep-inelastic scattering and from semi-inclusive data.
In particular, we find that the role of the gluon distribution in the
spin structure of the nucleon is essentially unknown, as the first
moment of the gluon distribution is compatible with zero, but with an
uncertainty which is compatible with a very large positive or negative
gluon spin fraction. Likewise, the uncertainty on the contribution from
the small $x$ region to the Bjorken sum rule makes its use as a means to
determine $\alpha_s$ essentially impossible. Different conclusions can be
reached only if one is willing to make strong theoretical assumptions
on the small $x$ behaviour of polarized PDFs.
Future experiments, in particular open charm and hadron production in
fixed target experiments~\cite{Adolph:2012ca,Adolph:2012vj},
inclusive jet production~\cite{Adare:2010cc,Adamczyk:2012qj} and $W$
boson production~\cite{Aggarwal:2010xx,Adare:2010xx,Stevens:2013xx}
at the RHIC collider may
improve the knowledge on individual polarized flavors and antiflavors
and on the gluon distribution in the valence region. However, only a
high-energy electron-ion collider~\cite{Deshpande:2005wd,Boer:2011fh}
might provide information on polarized PDFs at small $x$ and thus
reduce the uncertainty on first moments in a significant way.
\bigskip
\bigskip
\begin{center}
\rule{5cm}{.1pt}
\end{center}
\bigskip
\bigskip
The {\tt NNPDFpol1.0} polarized PDFs, with $N_{\rm rep}=100$ replicas,
are available from the
NNPDF HEPFORGE web site,
\begin{center}
{\bf \url{http://nnpdf.hepforge.org/}~}.
\end{center}
A Mathematica driver code is also available from the same source.
\section{Experimental data}
\label{sec:expdata}
The bulk of the experimental information on (longitudinal) polarized
proton structure
comes from inclusive polarized deep-inelastic scattering with
charged lepton beams. Deep-inelastic scattering with longitudinally
polarized beams and targets allows a determination of the longitudinal
structure function $g_1(x,Q^2)$, which in turn admits a factorized
expression in terms of polarized PDFs. Neutral-current deep-inelastic
scattering does not allow us to disentangle the contribution of quarks and
antiquarks. Using both proton and neutron (deuteron or ${}^3$He)
targets it is possible to separate the isospin singlet and triplet
quark contributions to structure functions, with the gluon determined
from scaling violations. A weak control on the separation of the
isospin singlet quark contribution into its SU(3) octet and singlet
component is possible using baryon decays to fix the respective
normalization of these contributions, with in principle their different
scale dependence providing some constraint on their shape.
Only charged-current deep-inelastic scattering would
allow for full flavor separation~\cite{Forte:2001ph}: this could be
feasible with
neutrino beams (such as available at a neutrino
factory~\cite{Mangano:2001mj}), or perhaps very
high-energy polarized charged lepton beams (such as available at an
electron-ion collider~\cite{Boer:2011fh}). Therefore, current
constraints on flavor
separation are only provided by semi-inclusive deep-inelastic scattering
data or by polarized hadron collider processes,
such as polarized Drell-Yan production in fixed target collisions
and polarized $W$ production at the relativistic Heavy Ion Collider (RHIC).
Likewise, direct constraints
on the medium and large-$x$ polarized gluon require hadron
and jet production either in fixed target experiments or at RHIC, while
the small-$x$ gluon can only be probed by going to higher energy, such
as at a polarized Electron-Ion Collider.
In this paper we will concentrate on
inclusive longitudinally polarized DIS data, and thus we will only
determine a subset of PDF combinations. This first polarized PDF set
based on NNPDF methodology will then be available for inclusion of
other datasets through the reweighting technique of
Refs.~\cite{Ball:2010gb,Ball:2011gg}.
We will first review the experimental observables which we use for the
determination of polarized structure functions, and the information
which various experiments provide on them. Then, we
will summarize the features of the data we use, and finally
the construction and validation of the Monte Carlo pseudodata sample from
the input experimental data.
\subsection{Experimental observables and longitudinal polarized
structure functions}
\label{sec:asysf}
Standard perturbative factorization provides predictions for the polarized
structure function $g_1(x,Q^2)$. However,
experiments measure cross section asymmetries,
defined by considering longitudinally polarized leptons
scattering off a hadronic target, polarized either
longitudinally or transversely with respect to the collision axis,
from which the longitudinal ($A_{\parallel}$) and transverse ($A_{\perp}$)
asymmetries are determined as
\begin{equation}
\label{eq:xsecasy}
A_{\parallel}=
\frac{d\sigma^{\rightarrow\Rightarrow}-d\sigma^{\rightarrow\Leftarrow}}
{d\sigma^{\rightarrow\Rightarrow}+d\sigma^{\rightarrow\Leftarrow}};\quad
A_{\perp}=
\frac{d\sigma^{\rightarrow\Uparrow}-d\sigma^{\rightarrow\Downarrow}}
{d\sigma^{\rightarrow\Uparrow}+d\sigma^{\rightarrow\Downarrow}}.\quad
\end{equation}
The hadronic tensor for polarized, parity conserving deep-inelastic
scattering can be parametrized in terms of four structure
functions: two of them, $F_1(x,Q^2)$ and $F_2(x,Q^2)$,
characterize spin-averaged deep-inelastic scattering, while
$g_1(x,Q^2)$ and $g_2(x,Q^2)$ appear when both the lepton beam and
the nucleon target are in definite polarization states.
For the conventional definition of the hadronic tensor in terms
of structure functions, see e.g.~\cite{Ellis:1991qj}.
The two polarized structure functions are related to the
measurable asymmetries Eq.~(\ref{eq:xsecasy}) by
\begin{align}
\label{g1toA}
g_1(x,Q^2)&=
\frac{F_1(x,Q^2)}{(1+\gamma^2)(1+\eta\zeta)}
\left[
(1+\gamma\zeta)\frac{A_{\parallel}}{D}-(\eta-\gamma)\frac{A_{\perp}}{d}
\right]
\mbox{ ,}
\\
\label{g2toA}
g_2(x,Q^2)&=
\frac{F_1(x,Q^2)}{(1+\gamma^2)(1+\eta\zeta)}
\left[
\left(\frac{\zeta}{\gamma}-1\right)\frac{A_{\parallel}}{D}
+\left(\eta+\frac{1}{\gamma}\right)\frac{A_{\perp}}{d}
\right]
\mbox{ .}
\end{align}
In Eqs.~(\ref{g1toA}-\ref{g2toA}) the dependence on the
nucleon mass $m$ is taken into account through the factor
\begin{equation}\label{eq:gamdef}
\gamma^2\equiv\frac{4m^2x^2}{Q^2},
\end{equation}
which also appears in the definitions of the other kinematic factors
in Eqs.~(\ref{g1toA}-\ref{g2toA}):
\begin{align}
\label{eq:ddef}
d&=\frac{D\sqrt{1-y-\gamma^2 y^2/4}}{1-y/2},\\
\label{eq:Ddef}
D&=\frac{1-(1-y)\epsilon}{1+\epsilon R(x,Q^2)},\\
\label{eq:etadef}
\eta&=\frac{\epsilon\gamma y}{1-\epsilon(1-y)},\\
\label{eq:zetadef}
\zeta&=\frac{\gamma(1-y/2)}{1+\gamma^2 y/2},\\
\label{eq:epsilondef}
\epsilon &= \frac{4(1-y) - \gamma^2 y^2}{2 y^2 + 4 (1-y) + \gamma^2 y^2}.
\end{align}
Here $y$ is the standard lepton scaling variable, given by
\begin{equation}
\label{eq:ydef}
y=\frac{p\cdot q}{p\cdot k}=\frac{Q^2}{2xmE}
\end{equation}
in terms of the nucleon, lepton and virtual photon momenta, $p$, $k$
and $q$, or,
in the target rest frame, in terms of the energy $E$ of the incoming
lepton beam.
The unpolarized structure function $F_1$ and unpolarized
structure function ratio $R$ which enter the definition
Eq.~(\ref{g1toA}-\ref{g2toA}) of the asymmetry may be
expressed in terms of $F_2$ and
$F_L$ by
\begin{eqnarray}\label{eq:fonedef}
F_1(x,Q^2)&\equiv&\frac{F_2(x,Q^2)}{2x\left[1+R(x,Q^2)\right]}
\left(1+\gamma^2\right)
\\\label{eq:Rdef}
R(x,Q^2)&\equiv&\frac{F_L(x,Q^2)}{F_2(x,Q^2)-F_L(x,Q^2)}.
\end{eqnarray}
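For illustration, the reconstruction of $g_1$ from the measured
asymmetries through Eqs.~(\ref{g1toA}) and
(\ref{eq:gamdef}--\ref{eq:Rdef}) may be sketched in Python as follows
(a sketch only: the function name and argument layout are choices of
the example, and $F_2$ and $R$ are assumed to be supplied by an
external unpolarized parametrization, as discussed in
Sect.~\ref{sec:datasetl}):
\begin{verbatim}
import numpy as np

def g1_from_asymmetries(x, Q2, E, Apar, Aperp, F2, R, m=0.938272):
    """g1(x,Q2) from A_par and A_perp, Eq. (g1toA); GeV units."""
    y    = Q2 / (2.0 * x * m * E)                  # Eq. (eq:ydef)
    gam2 = 4.0 * m**2 * x**2 / Q2                  # Eq. (eq:gamdef)
    gam  = np.sqrt(gam2)
    eps  = (4*(1-y) - gam2*y**2) / (2*y**2 + 4*(1-y) + gam2*y**2)
    D    = (1 - (1-y)*eps) / (1 + eps*R)
    d    = D * np.sqrt(1 - y - gam2*y**2/4) / (1 - y/2)
    eta  = eps*gam*y / (1 - eps*(1-y))
    zeta = gam*(1 - y/2) / (1 + gam2*y/2)
    F1   = F2 * (1 + gam2) / (2*x*(1 + R))         # Eq. (eq:fonedef)
    pref = F1 / ((1 + gam2)*(1 + eta*zeta))
    return pref * ((1 + gam*zeta)*Apar/D - (eta - gam)*Aperp/d)
\end{verbatim}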
The longitudinal and transverse asymmetries are sometimes expressed in terms
of the virtual photo-absorption asymmetries $A_1$ and $A_2$ according
to
\begin{equation}
\label{eq:asyrel}
A_\parallel=D(A_1+\eta A_2)
\mbox{ ,}
\qquad\qquad
A_\perp=d(A_2-\zeta A_1),
\end{equation}
where
\begin{equation}
\label{eq:gammaasy}
A_1(x,Q^2)
\equiv
\frac{\sigma^T_{1/2}-\sigma^T_{3/2}}{\sigma^T_{1/2}+\sigma^T_{3/2}}
\mbox{ ,}
\qquad\qquad
A_2(x,Q^2)
\equiv
\frac{2\sigma^{TL}}{\sigma^T_{1/2}+\sigma^T_{3/2}}.
\end{equation}
Recall that $\sigma^T_{1/2}$ and $\sigma^T_{3/2}$
are cross sections for the scattering of
virtual transversely polarized photons
(corresponding to longitudinal lepton polarization)
with helicity of the photon-nucleon system equal to 1/2 and 3/2
respectively, and $\sigma^{TL}$ denotes the interference term between the
transverse and longitudinal photon-nucleon amplitudes.
In the limit $m^2\ll Q^2$ Eqs.~(\ref{eq:asyrel}) reduce to $D=A_\parallel/A_1$,
$d=A_\perp/A_2$, thereby providing a physical interpretation of
$d$ and $D$ as depolarization factors.
Using Eqs.~(\ref{eq:asyrel}) in Eqs.~(\ref{g1toA}-\ref{g2toA}) we may
express the structure functions in terms of $A_1$ and $A_2$ instead:
\begin{align}
\label{g1toA1}
g_1(x,Q^2) &= \frac{F_1(x,Q^2)}{1+\gamma^2} \left[ A_1(x,Q^2)
+ \gamma A_2 (x,Q^2) \right],\\\label{g2toA2}
g_2(x,Q^2)&=
\frac{F_1(x,Q^2)}{1+\gamma^2}
\left[\frac{A_2}{\gamma}- A_1\right] .
\end{align}
We are interested in the structure function $g_1(x,Q^2)$,
whose moments are proportional to nucleon matrix elements of twist-two
longitudinally polarized quark and gluon operators, and therefore can
be expressed in terms of longitudinally polarized quark and gluon distributions.
Using Eqs.~(\ref{g1toA}-\ref{g2toA})
we may obtain an expression of it in terms of
the two asymmetries $A_{\parallel}$, $A_{\perp}$, or, using
Eqs.~(\ref{g1toA1}-\ref{g2toA2}), in terms of
the two asymmetries $A_1$, $A_2$. Clearly, up to corrections of
${\mathcal O}\left(\frac{m}{Q}\right)$, $g_1$ is fully determined by
$A_{\parallel}$, which coincides with $A_1$ up to
${\mathcal O}\left(\frac{m}{Q}\right)$ terms, while $g_2$ is
determined by $A_{\perp}$ or $A_2$.
It follows that, even though in principle a measurement of both
asymmetries is necessary for the determination of $g_1$, in practice
most of the information comes from $A_{\parallel}$ or $A_1$, with the
other asymmetry only providing a relatively small correction unless
$Q^2$ is very small.
It may thus be convenient to express $g_1$ in terms of
$A_{\parallel}$
and $g_2$:
\begin{equation}
\label{eq:g1tog2}
g_1(x,Q^2)
=
\frac{F_1(x,Q^2)}{1+\gamma\eta}\frac{A_{\parallel}}{D}
+\frac{\gamma(\gamma-\eta)}{\gamma\eta+1}g_2(x,Q^2),
\end{equation}
or, equivalently, in terms of $A_1$ and $g_2$:
\begin{equation}
\label{eq:g1tog2p}
g_1(x,Q^2) = A_1(x,Q^2) F_1(x,Q^2) + \gamma^2 g_2(x,Q^2).
\end{equation}
It is then possible to use Eq.~(\ref{eq:g1tog2}) or
Eq.~(\ref{eq:g1tog2p}) to determine $g_1(x,Q^2)$ from a dedicated
measurement of the longitudinal asymmetry, and an independent
determination of $g_2(x,Q^2)$.
In practice, experimental information on the transverse asymmetry and
structure function $g_2$ is
scarce~\cite{Abe:1998wq,Anthony:2002hy,Airapetian:2011wu}.
However, the Wilson expansion for polarized DIS implies
that the structure function $g_2$ can be
written as the sum of a twist-two
and a twist-three contribution~\cite{Wandzura:1977qf}:
\begin{equation}
g_2(x,Q^2)=g_2^{\rm t2}(x,Q^2)+g_2^{\rm t3}(x,Q^2).
\end{equation}
The twist-two contribution to $g_2$ is simply related
to $g_1$. One finds
\begin{equation}
g_2^{\rm t2}(x,Q^2)= -g_1(x,Q^2)+\int_x^1\frac{dy}{y} g_1(y,Q^2)
\label{wweq}
\end{equation}
which in Mellin space becomes
\begin{equation}
g_2^{\rm t2}(N,Q^2)= -\frac{N-1}{N}g_1(N,Q^2).
\label{wweqN}
\end{equation}
It is important to note that $g_2^{\rm t3}$ is
not suppressed by a power of $\frac{m}{Q}$ in comparison to
$g_2^{\rm t2}$, because in the polarized case the availability of the spin
vector allows the construction of an extra scalar
invariant. Nevertheless, experimental evidence
suggests that $g_2^{\rm t3}$ is compatible with zero at low scale
$Q^2\sim m^2$. Fits to $g_2^{\rm t3}$~\cite{Accardi:2009au,Blumlein:2012se},
as well as theoretical
estimates of it~\cite{Accardi:2009au,Braun:2011aw} support the
conclusion that
\begin{equation}
g_2(x,Q^2)\approx g_2^{\rm t2}(x,Q^2)\equiv g_2^{\rm WW}(x,Q^2),
\label{eq:wwrel}
\end{equation}
which is known as the Wandzura-Wilczek~\cite{Wandzura:1977qf}
relation.
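As an illustration, the Wandzura-Wilczek integral Eq.~(\ref{wweq}) is
straightforward to evaluate numerically on a grid of $x$ values; the
following Python sketch uses trapezoidal integration (an assumption of
the example, not necessarily the quadrature used in our code):
\begin{verbatim}
import numpy as np

def g2_ww(x, g1):
    """Twist-two g2, Eq. (wweq): g2(x) = -g1(x) + int_x^1 dy g1(y)/y.
    x: increasing grid with x[-1] = 1; g1: values of g1 on that grid."""
    f = g1 / x                                   # integrand g1(y)/y
    seg = 0.5 * (f[1:] + f[:-1]) * np.diff(x)    # trapezoid segments
    cum = np.concatenate(([0.0], np.cumsum(seg)))
    tail = cum[-1] - cum                         # int_x^1 dy g1(y)/y
    return -g1 + tail
\end{verbatim}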
We will thus determine $g_1$, using Eq.~(\ref{eq:g1tog2}) or
Eq.~(\ref{eq:g1tog2p}), from an experimental determination of the
longitudinal asymmetry, and using the approximate Wandzura-Wilczek
form Eq.~(\ref{eq:wwrel}) of $g_2$.
In order to test the dependence of results on this approximation, we will
also consider the opposite assumption that $g_2=0$ identically.
\subsection{The dataset: observables, kinematic cuts, uncertainties and
correlations}
\label{sec:datasetl}
We use deep-inelastic lepton-nucleon scattering (DIS) data
coming from all relevant
experiments~\cite{Ashman:1989ig,Anthony:2002hy,Adeva:1998vv,Adeva:1999pa,Abe:1998wq,
Abe:1997cx,Anthony:2000fn,
Alexakhin:2006vx,Alekseev:2010hc,Ackerstaff:1997ws,Airapetian:2007mh}
performed at CERN, SLAC and DESY.
The experiments use different nucleon targets (protons, neutrons or
deuterons).
The main
features of these data sets are summarized in
Tab.~\ref{tab:exps-sets}, where we show, for each experiment,
the number of available data points, the kinematic range covered by the
experiment, and the quantity which is published and which we
use for the extraction of $g_1$. This quantity
is not the same for all
experiments: the primary observable can be one of the many asymmetries or structure functions
discussed in Sect.~\ref{sec:asysf}, as we now summarize (individual
experiments are labeled as in Tab.~\ref{tab:exps-sets}).
\begin{itemize}
\item {\bf EMC, SMC, SMClowx, COMPASS, HERMES97}
All these experiments have performed a measurement of
$A_\parallel$. They have then determined $A_1$ from it using
Eq.~(\ref{eq:asyrel}), under the assumption $\eta\approx0$. Therefore,
what these experiments actually publish is a measurement of
$\frac{A_\parallel}{D}$. We determine $g_1$ from $\frac{A_\parallel}{D}$
using Eq.~(\ref{eq:g1tog2}). This is possible because $D$
is completely fixed by Eq.~(\ref{eq:Ddef}) in terms of the unpolarized
structure function ratio Eq.~(\ref{eq:Rdef}) and of the kinematics. We
determine the unpolarized structure function ratio using as primary
inputs $F_2$, for which we use the parametrization of
Ref.~\cite{Forte:2002fg,DelDebbio:2004qj}, and $F_L$, which we
determine from its
expression in terms of parton distributions, using the \texttt{NNPDF2.1 NNLO}
parton set~\cite{Ball:2011uy}.
\item{\bf HERMES}
This experiment has performed a measurement of
$A_\parallel$, and it
publishes both $A_\parallel$ and
$A_1$ (which is determined using Eq.~(\ref{eq:asyrel}) and a
parametrization of $A_2$). We use the published values of
$A_\parallel$, which are closer to the experimentally measured
quantity, to determine $g_1$ through Eq.~(\ref{eq:g1tog2}).
\item{\bf E143}
This experiment has taken data with three different
beam energies, $E_1=29.1$ GeV,
$E_2=16.2$ GeV, $E_3=9.7$ GeV. For the highest energy both
$A_\parallel$ and $A_\perp$ are independently measured and $A_1$ is
extracted from them using Eq.~(\ref{eq:asyrel}); for the two
lowest energies only $A_\parallel$ is measured and
$A_1$ is extracted from it using Eqs.~(\ref{g1toA1}-\ref{g2toA2})
while assuming the form Eq.~(\ref{eq:wwrel}) for $g_2$. The values of
$A_1$ obtained with the three beam energies are combined into a
single determination of $A_1$; radiative corrections are applied at
this combination stage. Because of this, we must use this combined
value of $A_1$, from which we then determine $g_1$
using Eq.~(\ref{eq:g1tog2p}). In order to determine $y$
Eq.~(\ref{eq:ydef}), which depends on the beam energy, we use the
mean of the three energies.
\item{\bf E154}
This experiment measures $A_\parallel$ and $A_\perp$
independently, and then extracts a determination of $A_1$.
We use these values of $A_1$ to determine $g_1$ by means of Eq.~(\ref{eq:g1tog2p}).
\item{\bf E155}
This experiment only measures $A_\parallel$, from
which $\frac{g_1}{F_1}$ is extracted using Eq.~(\ref{g1toA1}) with
the Wandzura-Wilczek form of $g_2$ Eq.~(\ref{eq:wwrel}). In this
case, we use these values of
$\frac{g_1}{F_1}$, and we extract $g_1$ using
Eq.~(\ref{eq:fonedef}) for $F_1$, together with
the parametrization of
Ref.~\cite{Forte:2002fg,DelDebbio:2004qj} for $F_2$ and
the expression in terms
of parton distributions and the \texttt{NNPDF2.1 NNLO}
parton set~\cite{Ball:2011uy} for $F_L$, as in the other cases.
\end{itemize}
\begin{table}
\begin{center}
\footnotesize
\input{exps-sets}
\end{center}
\caption{\small
Experimental data sets included in the present analysis.
For each experiment we
show the number of points before and after (in parentheses)
applying kinematic cuts, the kinematic range and the measured observable.
\label{tab:exps-sets}}
\end{table}
\begin{figure}[t]
\begin{center}
\epsfig{width=0.7\textwidth,figure=kinNNPDFpol10.eps}
\caption{\small Experimental data in the $(x,Q^2)$ plane (after kinematic cuts).}
\label{fig:dataplot}
\end{center}
\end{figure}
We have excluded from our analysis all data points with
$Q^2 \le Q^2_{\rm cut}=1$ GeV$^2$, since below such energy scale
perturbative QCD cannot be considered reliable. A similar choice
of cut was made in Refs.~\cite{Ball:1995td,Altarelli:1996nm,Altarelli:1998nb,
deFlorian:2009vb,Blumlein:2010rn,Hirai:2008aj}.
We further impose a cut on the squared invariant mass of the
hadronic final state $W^2=Q^2(1-x)/x$
in order to remove points which may be affected by sizable
higher-twist corrections. The cut is chosen based on
a study presented in
Ref.~\cite{Simolo:2006iw}, where higher twist terms were added to the
observables, with a coefficient fitted to the data, and it was shown
that the higher twist contribution becomes compatible with zero if one
imposes the cut $W^2 \ge W^2_{\rm
cut}=6.25$ GeV$^2$. We will follow this choice, which
excludes data points with large Bjorken-$x$ at moderate values of
the squared momentum transfer $Q^2$, roughly corresponding to the
bottom-right corner of the $(x,Q^2)$-plane, see
Fig.~\ref{fig:dataplot}: in particular, it excludes
all available JLAB data~\cite{Zheng:2004ce,Fatemi:2003yh,Dharmawardane:2006zd}.
The
number of data points surviving the kinematic cuts for each data set
is given in parentheses in Tab.~\ref{tab:exps-sets}.
As can be seen from the scatter plot in Fig.~\ref{fig:dataplot}, the
region of the $(x,Q^2)$-plane where data are available
after kinematic cuts is
roughly restricted to $4\cdot 10^{-3}\lesssim x\lesssim 0.6$ and
$1$~GeV$^2\leq Q^2\lesssim 60$~GeV$^2$. In recent years,
the coverage of the low-$x$ region has been
improved by a complementary set of SMC data~\cite{Adeva:1999pa} and by
the more recent COMPASS
data~\cite{Alexakhin:2006vx,Alekseev:2010hc}. In the
large-$x$ region, information is provided at rather high $Q^2$ by the
same COMPASS data and at lower energy by the latest HERMES
measurements~\cite{Airapetian:2007mh}. In comparison to the dataset used
in Refs.~\cite{Ball:1995td,Altarelli:1996nm,Altarelli:1998nb}
several new datasets are being used, in particular
the SMC~\cite{Adeva:1999pa},
HERMES~\cite{Airapetian:2007mh} and
COMPASS~\cite{Alexakhin:2006vx,Alekseev:2010hc} data.
The dataset used in this paper is the same as that of
Ref.~\cite{Blumlein:2010rn}, and also the same as the DIS data of the
fit of Ref.~\cite{deFlorian:2009vb}, which however has a wider data
set which extends beyond inclusive DIS.
Each experimental collaboration provides uncertainties on the
measured quantities listed in the next-to-last column of
Tab.~\ref{tab:exps-sets}.
Correlated systematics are only provided by EMC and
E143, which give the values of the
systematics due to the uncertainty in the beam and target
polarizations, while all other experiments do not provide any
information on the covariance matrix. For each experiment, we determine
the uncorrelated uncertainty on $g_1$ by combining the uncertainty on
the experimental observable with that of the unpolarized structure
function using standard error propagation. We include all available correlated
systematics. These are provided by the experimental collaboration as
a percentage correction to $g_1$ (or, alternatively, to the asymmetry
$A_1$): we apply the percentage uncertainty on $g_1$ to the structure
function determined by us as discussed in Sect.~\ref{sec:datasetl}
(which, of course, is very close to the value determined by the
experimental collaboration).
We then construct a covariance matrix
\begin{equation}
\label{eq:covmat}
{\rm cov}_{pq}=\left(\sum_i
\sigma^{(c)}_{i,p}\sigma^{(c)}_{i,q} + \delta_{pq} \sigma^{(u)}_{p}\sigma^{(u)}_{q}
\right)
g_{1,p}g_{1,q},
\end{equation}
where $p$ and $q$ run over the experimental data points,
$g_{1,p}\equiv g_1(x_p,Q_p^2)$
($g_{1,q}\equiv g_1(x_q,Q_q^2)$), $\sigma^{(c)}_{i,p}$ are the
various sources of correlated uncertainty, and $\sigma^{(u)}_{p}$
the uncorrelated uncertainties, which are in turn found as a sum in
quadrature of all uncorrelated sources of statistical
$\sigma^{\rm (stat)}_{i,p}$ and systematic $\sigma^{\rm (syst)}_{i,p}$
uncertainty on each point:
\begin{equation}
\label{uncsum}
\left(\sigma^{(u)}_{p}\right)^2=\sum_i \left(\sigma^{\rm (stat)}_{i,p}\right)^2 +\sum_j
\left(\sigma^{\rm (syst)}_{j,p}\right)^2.
\end{equation}
The correlation matrix is defined as
\begin{equation}
\label{eq:cormatr}
\rho_{pq} = \frac{{\rm cov}_{pq}}{\sigma^{\rm (tot)}_{p}\sigma^{\rm (tot)}_{q}g_{1,p}g_{1,q}}
\mbox{ ,}
\end{equation}
where the total uncertainty $\sigma^{\rm (tot)}_{p}$ on the $p$-th data point is
\begin{equation}
\label{eq:sigmatot}
\left(\sigma^{\rm (tot)}_{p}\right)^2 =
(\sigma^{(u)}_{p})^2+\sum_i\left(\sigma^{(c)}_{i,p}\right)^2
\mbox{ .}
\end{equation}
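For concreteness, the construction of
Eqs.~(\ref{eq:covmat}--\ref{eq:sigmatot}) may be sketched in a few
lines of Python (an illustration only; the array layout used to store
the various uncertainty sources is a choice of the example):
\begin{verbatim}
import numpy as np

def build_covmat(g1, sig_corr, sig_stat, sig_syst):
    """Covariance matrix, Eq. (eq:covmat).
    g1:        (Ndat,) structure-function values
    sig_corr:  (Ncorr, Ndat) relative correlated systematics
    sig_stat:  (Nstat, Ndat) relative statistical uncertainties
    sig_syst:  (Nsyst, Ndat) relative uncorrelated systematics"""
    sig_u2 = (sig_stat**2).sum(axis=0) + (sig_syst**2).sum(axis=0)
    return (sig_corr.T @ sig_corr + np.diag(sig_u2)) * np.outer(g1, g1)

def correlation(cov):
    """Correlation matrix, Eq. (eq:cormatr)."""
    sig_tot = np.sqrt(np.diag(cov))
    return cov / np.outer(sig_tot, sig_tot)
\end{verbatim}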
We show in Tab.~\ref{tab:exps-err} the
average experimental uncertainties for each dataset, with
uncertainties separated into statistical and correlated systematics.
All values are given as absolute uncertainties and refer to the
structure function $g_1$, which has been reconstructed for each experiment
as discussed above.
As in the case of Tab.~\ref{tab:exps-sets}, we provide
the values before and after kinematic cuts (if different).
In Tab.~\ref{tab:exps-sets}, we distinguish between
experiments, defined as groups of data which cannot be correlated to
each other, and datasets within a given experiment, which could in
principle be correlated with each other, as they
correspond to measurements of
different observables in the same experiment, or measurements of the
same observable in different years. Even though, in practice, only two
experiments provide such correlated systematics (see Tab.~\ref{tab:exps-err}),
this distinction will be useful in the
minimization strategy, see Sect.~\ref{sec:minim} below.
\begin{table}
\begin{center}
\footnotesize
\input{exps-err_g1}
\end{center}
\caption{\small Averaged statistical, correlated systematic and total
uncertainties before and after (in parentheses) kinematic cuts for each of
the experimental sets included in the present analysis. Uncorrelated systematic
uncertainties are considered as part of the statistical uncertainty
and they are added in quadrature.
All values are absolute uncertainties and refer to the structure function $g_1$,
which has been reconstructed for each experiment as discussed in the text.
Details on the number of points and
the kinematics of each dataset are provided in Tab.~\ref{tab:exps-sets}.
\label{tab:exps-err}}
\end{table}
\subsection{Monte-Carlo generation of the pseudo-data sample}
Error propagation from experimental data to the fit is handled by a
Monte Carlo sampling of the probability distribution defined by
data. The statistical sample is obtained by generating
$N_\mathrm{rep}$ pseudodata replicas, according to a multigaussian
distribution centered at the data points and with a covariance equal
to that of the original data.
Explicitly, given an
experimental data point $g_{1,p}^{(\mathrm{exp})}\equiv
g_1(x_p,Q_p^2)$, we generate $k=1,\dots,N_\mathrm{rep}$ artificial
points $g_{1,p}^{(\mathrm{art}),(k)}$ according to \begin{equation}
\label{eq:MCgeneration}
g_{1,p}^{(\mathrm{art}),(k)} (x,Q^2)
=
\left[
1+\sum_i r_{(c),p}^{(k)}\sigma_{i,p}^{(c)} + r_{(u),p}^{(k)}\sigma_p^{(u)}
\right]
g_{1,p}^{(\mathrm{exp})} (x,Q^2),
\end{equation}
where $r_{(c),p}^{(k)}$, $r_{(u),p}^{(k)}$ are univariate gaussianly distributed
random numbers, and $\sigma_{i,p}^{(c)}$ and $\sigma_{p}^{(u)}$ are respectively
the relative correlated systematic and
statistical uncertainty. Unlike in the unpolarized case,
Eq.~(\ref{eq:MCgeneration}) receives no contribution from normalization uncertainties, given that
all polarized observables are obtained as cross section asymmetries.
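Schematically, the replica generation Eq.~(\ref{eq:MCgeneration})
amounts to the following Python sketch (an illustration; in
particular, we assume one Gaussian random number per correlated
source, shared among all data points of that source, which is what
makes the resulting fluctuations correlated):
\begin{verbatim}
import numpy as np

def make_replicas(g1, sig_corr, sig_u, nrep, seed=0):
    """Pseudodata replicas, Eq. (eq:MCgeneration).
    sig_corr: (Ncorr, Ndat) relative correlated systematics
    sig_u:    (Ndat,)       relative uncorrelated uncertainties"""
    rng = np.random.default_rng(seed)
    ncorr, ndat = sig_corr.shape
    r_c = rng.standard_normal((nrep, ncorr, 1))  # per source & replica
    r_u = rng.standard_normal((nrep, ndat))      # per point & replica
    shift = (r_c * sig_corr).sum(axis=1) + r_u * sig_u
    return g1 * (1.0 + shift)                    # shape (nrep, Ndat)
\end{verbatim}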
The number of Monte Carlo replicas of the data is determined by
requiring that the central values, uncertainties and correlations of the
original experimental data can be reproduced to a given accuracy by
taking averages, variances and
covariances over the replica sample.
A comparison between expectation values and variances of the Monte
Carlo set and the corresponding input experimental values as a
function of the number of replicas is shown in Fig.~\ref{fig:splots},
where we display scatter-plots of the central values and errors for
samples of $N_{\mbox{\scriptsize{rep}}}=10,100$ and $1000$ replicas.
A more quantitative comparison can be performed by defining suitable
statistical estimators (see, for example, Appendix B of
Ref.~\cite{DelDebbio:2004qj}).
We show in Tabs.~\ref{tab:est1gen}--\ref{tab:est2gen} the percentage
error and the scatter correlation $r$ (which is crudely speaking the
correlation between the input value and the value computed from the
replica sample) for central values and errors, respectively. We do not compute values for correlations, as these,
as seen in Tab.~\ref{tab:exps-err},
are only available for a very small number of data points from two
experiments.
Note that
some large values of the percentage uncertainty are due to the fact
that $g_1$ for some experiments can take values which are very close
to zero. It is clear from both the tables and the plots that a Monte Carlo
sample of pseudo-data with $N_\mathrm{rep}=100$ is
sufficient to reproduce the mean values and the errors of experimental
data to an accuracy which is better than 5\%, while the improvement in
going up to $N_\mathrm{rep}=1000$ is moderate. Therefore, we will
henceforth use a $N_\mathrm{rep}=100$ replica sample as a default
in the remainder of this paper.
\begin{table}[t]
\centering
\footnotesize
\input{est2gen_cv.tex}
\caption{\small Table of statistical estimators for the mean value computed from
the Monte Carlo sample with $N_\mathrm{rep}=10,100,1000$ replicas.
Estimators refer to individual experiments and are defined in Appendix B of Ref.~\cite{DelDebbio:2004qj}.}
\label{tab:est1gen}
\end{table}
\begin{table}[t]
\centering
\footnotesize
\input{est2gen_er.tex}
\caption{\small Table of statistical estimators for the errors computed from
the Monte Carlo sample with $N_\mathrm{rep}=10,100,1000$ replicas.
Estimators refer to individual experiments and are defined in Appendix B of Ref.~\cite{DelDebbio:2004qj}.}
\label{tab:est2gen}
\end{table}
\begin{figure}[t]
\begin{center}
\epsfig{width=0.4\textwidth,figure=scatter-plot-mean.eps}
\epsfig{width=0.4\textwidth,figure=scatter-plot-error.eps}
\caption{\small Scatter-plot of experimental versus artificial Monte Carlo mean central values and absolute uncertainties of polarized
structure functions computed from
ensembles made of $N_{\mbox{\scriptsize{rep}}}=10,100,1000$ replicas.}
\label{fig:splots}
\end{center}
\end{figure}
\section{Neural networks and fitting strategy}
\label{sec:minim}
We will now briefly review the NNPDF methodology for parton
parametrization in terms of neural networks, and their optimization
(fitting) through a genetic algorithm. The details of the procedure
have been discussed in previous NNPDF papers, in particular
Refs.~\cite{Ball:2008by,Ball:2010de,Rojo:2004iq}. Here we summarize
the main steps of the whole strategy, and discuss in greater detail
some points which are specific to the polarized case.
\subsection{Neural network parametrization}
\label{sec:net-param}
Each of the independent polarized PDFs in the evolution basis
introduced in Sect.~\ref{sec:fact}, $\Delta\Sigma,\Delta g,\Delta T_3$
and $\Delta T_8$, is parametrized using a multi-layer feed-forward
neural network~\cite{Ball:2011eq}. All neural networks have the same
architecture, namely 2-5-3-1, which corresponds to 37 free parameters
for each PDF, and thus a total of 148 free parameters. This is to be
compared to about 10-15 free parameters for all other available
determinations of polarized PDFs. This parametrization has been
explicitly shown to be redundant in the unpolarized case, in that
results are unchanged when a smaller neural network architecture is
adopted: this ensures that results do not depend on the
architecture~\cite{Ball:2011eq}. Given that polarized data are much
less abundant and affected by much larger uncertainties than
unpolarized ones, this architecture is adequate also in the
polarized case.
The neural network parametrization is supplemented with a
preprocessing function. In principle, large enough neural networks
can reproduce any functional form given sufficient training
time. However, the training can be made more efficient by adding a
preprocessing step, i.e. by multiplying the output of the neural
networks by a fixed function. The neural network then only fits the
deviation from this function, which improves the speed of the
minimization procedure if the preprocessing function is suitably
chosen. We thus write the input PDF basis in terms of preprocessing
functions and neural networks ${\rm NN}_{\rm \Delta pdf}$ as follows
\begin{eqnarray}
\label{eq:PDFbasisnets}
\Delta \Sigma(x,Q_0^2)
&=&{(1-x)^{m_1}}{x^{-n_1}}{\rm NN}_{\Delta
\Sigma}(x)\ ,
\nonumber\\
\Delta T_3(x,Q_0^2)&=&A_3{(1-x)^{m_3}}{
x^{-n_3}}
{\rm NN}_{ \Delta T_3}(x) \ , \nonumber\\
\Delta T_8(x,Q_0^2)&=&A_8{(1-x)^{m_8}}{
x^{-n_8}}
{\rm NN}_{\Delta T_8}(x) \ , \\
\Delta g(x,Q_0^2)&=&{(1-x)^{m_g}}{x^{-n_g}}{\rm NN}_{\Delta g}(x).
\nonumber
\end{eqnarray}
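A minimal sketch of this parametrization in Python is the following;
the layer structure implements the 2-5-3-1 architecture of
Sect.~\ref{sec:net-param}, while the choice of activation function and
of $(x,\ln 1/x)$ as the two network inputs are assumptions of the
example:
\begin{verbatim}
import numpy as np

def mlp(x, params):
    """2-5-3-1 feed-forward network with linear output layer.
    params = [(W1,b1), (W2,b2), (W3,b3)] with shapes
    (5,2),(5,), (3,5),(3,), (1,3),(1,)."""
    a = np.array([x, np.log(1.0 / x)])
    for W, b in params[:-1]:
        a = np.tanh(W @ a + b)
    W, b = params[-1]
    return float(W @ a + b)

def delta_pdf(x, params, m, n, A=1.0):
    """Input PDF, Eq. (eq:PDFbasisnets): A (1-x)^m x^(-n) NN(x)."""
    return A * (1.0 - x)**m * x**(-n) * mlp(x, params)
\end{verbatim}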
Of course, one should check that no bias is introduced in the choice
of preprocessing functions. To this purpose, we first select a
reasonable range of values for the large and small--$x$
preprocessing exponents $m$ and $n$, and produce a PDF determination
by choosing for each replica a value of the exponents at random with
uniform distribution within this range. We then determine effective
exponents for each replica, defined as
\begin{equation}
\label{eq:effexp2}
m_{\rm eff}(Q^2)\equiv
\lim_{x\to1}\frac{ \ln \Delta f(x,Q^2) }{\ln(1-x)}
\mbox{ ,}
\end{equation}
\begin{equation}
\label{eq:effexp1}
n_{\rm eff}(Q^2)\equiv
\lim_{x\to0} \frac{\ln \Delta f(x,Q^2)}{\ln\frac{1}{x}}
\mbox{ ,}
\end{equation}
where $\Delta f = \Delta\Sigma\mbox{, }\Delta T_3\mbox{, }\Delta T_8\mbox{, }\Delta g$.
Finally, we check that the range of variation of the preprocessing
exponents is wider than the range of effective exponents for each PDF.
If it is not, we enlarge the range of variation of preprocessing, then
repeat the PDF determination, and iterate
until the condition is satisfied.
This ensures that the range of effective large- and
small-$x$ exponents found in the fit is not biased, and in particular not
restricted, by the range of preprocessing exponents. Our final values
for the preprocessing exponents are summarized in
Tab.~\ref{tab:prepexps}, while the effective exponents obtained in
our fit will be discussed in Sect.~\ref{sec:prepexp}.
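In practice the limits in Eqs.~(\ref{eq:effexp2}--\ref{eq:effexp1})
must be approximated by evaluating each PDF at finite values of $x$; a
minimal sketch (the specific evaluation points are arbitrary choices
of the example, and the absolute value guards against sign changes of
$\Delta f$) is:
\begin{verbatim}
import numpy as np

def effective_exponents(delta_f, Q2, xlow=1e-5, xhigh=0.99):
    """Numerical estimate of Eqs. (eq:effexp2)-(eq:effexp1)."""
    n_eff = np.log(abs(delta_f(xlow,  Q2))) / np.log(1.0 / xlow)
    m_eff = np.log(abs(delta_f(xhigh, Q2))) / np.log(1.0 - xhigh)
    return m_eff, n_eff
\end{verbatim}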
It is apparent from Tab.~\ref{tab:prepexps} that
the allowed range of preprocessing exponents is rather
wider than in the unpolarized case, as a
consequence of the limited amount of experimental information.
It is enough to perform this check at the input evolution
scale, $Q_0^2=1$~GeV$^2$.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|c|c}
\hline PDF & $m$ & $n$ \\
\hline
\hline
$\Delta\Sigma(x,Q_0^2)$ & $\left[ 1.5,3.5\right]$ &
$\left[ 0.2,0.7\right]$ \\
\hline
$\Delta g(x,Q_0^2)$ & $\left[ 2.5,5.0\right]$ &
$\left[ 0.4,0.9\right]$ \\
\hline
$\Delta T_3(x,Q_0^2)$ & $\left[ 1.5,3.5\right]$ &
$\left[ 0.4,0.7\right]$ \\
\hline
$\Delta T_8(x,Q_0^2)$ & $\left[ 1.5,3.0\right]$ &
$\left[ 0.1,0.6\right]$ \\
\hline
\end{tabular}
\caption{\small \label{tab:prepexps} Ranges for the small and
large $x$
preprocessing exponents Eq.~(\ref{eq:PDFbasisnets}).}
\end{center}
\end{table}
Two of the PDFs in the parametrization basis
Eq.~(\ref{eq:PDFbasisnets}), namely the nonsinglet
triplet and octet $\Delta T_3$ and $\Delta T_8$, are supplemented by
a prefactor. This is because these PDFs
must satisfy the sum rules Eqs.~(\ref{eq:t3sr}, \ref{eq:t8sr}),
which are enforced by letting
\begin{eqnarray}
A_3&=&\frac{a_3}
{\int_0^1 dx\,(1-x)^{m_3}x^{-n_3} {\rm NN}_{\Delta T_3}(x)} ,
\nonumber\\
A_8&=&\frac{a_8}
{\int_0^1 dx\, (1-x)^{m_8}x^{-n_8} {\rm NN}_{\Delta T_8}(x) } .
\label{eq:sumrules1}
\end{eqnarray}
The integrals are computed numerically each time the parameters of
the PDF set are modified. The values of $a_3$ and $a_8$ are chosen
for each replica as gaussianly distributed numbers, with central value
and width given by the corresponding experimental values,
Eqs.~(\ref{eq:a3},\ref{eq:a8p}).
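The numerical evaluation of the normalization integrals
Eq.~(\ref{eq:sumrules1}) may be sketched as follows (an illustration;
the log-spaced grid and its size are choices of the example, adequate
as long as $n<1$ so that the integrand is integrable at $x=0$):
\begin{verbatim}
import numpy as np

def normalization(a_sr, nn, m, n, xmin=1e-6, npts=2000):
    """Prefactor of Eq. (eq:sumrules1):
    A = a_sr / int_0^1 dx (1-x)^m x^(-n) NN(x)."""
    x = np.logspace(np.log10(xmin), 0.0, npts)
    f = (1.0 - x)**m * x**(-n) * np.array([nn(xi) for xi in x])
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return a_sr / integral
\end{verbatim}
The values of $a_3$ and $a_8$ themselves are drawn, for each replica,
from Gaussian distributions with the experimental central values and
widths, as stated above.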
\subsection{Genetic algorithm minimization}
As discussed at length in Ref.~\cite{Ball:2008by}, minimization
with a neural network parametrization of PDFs must be performed through an
algorithm which explores the very wide functional space efficiently.
This is done by means of a genetic algorithm, which is
used to minimize a suitably defined figure
of merit, namely the error function~\cite{Ball:2008by},
\begin{equation}
\label{eq:errfun}
E^{(k)}=\frac{1}{N_{\mathrm{dat}}}\sum_{I,J=1}^{N_{\rm dat}}
\left(g_I^{(\mathrm{art})(k)}-g_I^{(\mathrm{net})(k)}\right)
\left(\left({\mathrm{cov}}\right)^{-1}\right)_{IJ}
\left(g_J^{(\mathrm{art})(k)}-g_J^{(\mathrm{net})(k)}\right) \ .
\end{equation}
Here $g_I^{\rm (art)(k)}$ is the value of the observable $g_I$ at the
kinematical point $I$ corresponding to the Monte Carlo replica $k$,
and
$g_I^{(\rm net)(k)}$ is the same observable computed from the neural
network PDFs; the covariance matrix $\left({\rm cov}\right)_{IJ}$ is defined in
Eq.~(\ref{eq:covmat}).
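For a single replica, the evaluation of Eq.~(\ref{eq:errfun}) is
straightforward; in the following sketch (the function name is a
choice of the example) the linear system is solved directly rather
than inverting the covariance matrix explicitly, a standard numerical
choice:
\begin{verbatim}
import numpy as np

def error_function(g_art, g_net, cov):
    """E^(k) of Eq. (eq:errfun): (1/Ndat) d^T cov^{-1} d."""
    d = g_art - g_net
    return float(d @ np.linalg.solve(cov, d)) / len(d)
\end{verbatim}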
The minimization procedure we adopt follows
closely that of Ref.~\cite{DelDebbio:2007ee}, to which we refer for a more
general discussion. Minimization is performed by means of a genetic
algorithm, which minimizes the
figure of merit, Eq.~(\ref{eq:errfun}) by creating, at each
minimization step, a pool of new
neural nets, obtained by randomly mutating the parameters of the
starting set, and retaining the configuration which corresponds to the
lowest value of the figure of merit.
The parameters which characterize the behaviour of the genetic
algorithm are tuned in order to optimize the efficiency of
the minimization procedure: here, we rely on previous experience of
the development of unpolarized NNPDF sets. In particular, the
algorithm is characterized by a mutation rate, which we take to
decrease as a function of the number of iterations $N_{\rm ite}$ of
the algorithm according to~\cite{Ball:2008by}
\begin{equation}
\eta_{i,j}=\eta_{i,j}^{(0)}/N_{\rm ite}^{r_\eta} \ ,
\label{eq:etarate}
\end{equation}
so that in the early stages of the training large mutations are
allowed, while they become less likely as one approaches the
minimum. The starting mutation rates are chosen to be larger for PDFs
which contain more information. We perform two mutations per PDF at
each step, with the starting rates given in Tab.~\ref{tab:etapars}.
The exponent $r_\eta$ has been
introduced in order to optimally span the whole range of possible
beneficial mutations and it is randomized between $0$ and $1$ at each
iteration of the genetic algorithm, as in Ref.~\cite{Ball:2010de}.
Furthermore, following Ref.~\cite{Ball:2010de}, we let the number of new
candidate solutions depend on the stage of the minimization.
At earlier stages of the minimization, when
the number of generations is smaller than $N_{\rm gen}^{\rm mut}$, we
use a large population of mutants, $N_{\rm mut}^{a}\gg 1$, so that a larger
space of mutations is explored. At later stages of the
minimization, as the minimum is approached, a smaller
number of mutations $N_{\rm mut}^{b}\ll N_{\rm
mut}^{a}$ is used. The values of the parameters $N_{\rm gen}^{\rm
mut}$, $N_{\rm mut}^{a}$ and $N_{\rm mut}^{b}$ are collected in
Tab.~\ref{tab:mutpars}.
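A single mutation step with the decaying rate of
Eq.~(\ref{eq:etarate}) may be sketched as follows (illustrative
Python; the additive Gaussian form of the weight shift is an
assumption of the example):
\begin{verbatim}
import numpy as np

def mutate(params, eta0, n_ite, rng):
    """One mutation: rate eta0 / n_ite^r_eta, Eq. (eq:etarate),
    with r_eta drawn uniformly in [0,1] at each call."""
    r_eta = rng.uniform(0.0, 1.0)
    eta = eta0 / n_ite**r_eta
    return [p + eta * rng.standard_normal(p.shape) for p in params]
\end{verbatim}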
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$\eta^{(0)}_{i,\Delta\Sigma}$ & $\eta^{(0)}_{i,\Delta g}$ & $\eta^{(0)}_{i,\Delta T_3}$ & $\eta^{(0)}_{i,\Delta T_8}$ \\
\hline
$5, 0.5$ & $5, 0.5$ & $2, 0.2$ & $2, 0.2$\\
\hline
\end{tabular}
\caption{\small The initial values of the mutation rates for the two
mutations of each PDF.}
\label{tab:etapars}
\end{center}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$N_{\rm gen}^{\rm mut}$ & $N^a_{\rm mut}$ & $N^b_{\rm mut}$ & $N_{\rm gen}^{\rm wt}$ & $E^{\mathrm{sw}}$ \\
\hline
200 & 50 & 10 & 5000 & 2.5\\
\hline
\end{tabular}
\caption{\small Values of the parameters of the genetic algorithm.}
\label{tab:mutpars}
\end{table}
Because the minimization procedure stops the fit to all experiments at
once, we must make sure that the quality of the fit to different
experiments is approximately the same. This is nontrivial, because of
the variety of experiments and datasets included in the
fit. Therefore, the figure of merit per datapoint for a given set is
not necessarily a reliable indicator of the quality of the fit to that set,
because some experiments may have systematically underestimated or
overestimated uncertainties. Furthermore, unlike for unpolarized PDF
fits, information on the experimental covariance matrix is only
available for a small subset of experiments, so for most experiments
statistical and systematic errors must be added in quadrature, thereby
leading to an overestimate of uncertainties: this leads to a wide
spread of values of the figure of merit, whose value depends on the
size of the correlated uncertainties which are being treated as
uncorrelated.
A methodology to deal with this situation was developed in
Ref.~\cite{Ball:2010de}. The idea is to first determine the optimal value of
the figure of merit for each experiment, i.e. a set of target values
$E_{i}^{\rm targ}$ for each of the $i$ experiments, then
during the fit give more weight to experiments for which the figure
of merit is further away from its target value, and stop training
experiments which have already reached the target value. This is done by
minimizing, instead of the figure of merit Eq.~(\ref{eq:errfun}), the
weighted figure of merit
\begin{equation}
\label{eq:weight_errfun}
E_{\rm wt}^{(k)}=\frac{1}{N_{\mathrm{dat}}}
\sum_{j=1}^{N_{\mathrm{sets}}}p_j^{(k)} N_{\mathrm{dat},j}E_j^{(k)}\, ,
\end{equation}
where $E_j^{(k)}$ is the error function for the $j$-th dataset with
$N_{{\rm dat},j}$ points, and the weights $p_j^{(k)}$ are given by
\begin{enumerate}
\item If $E_{i}^{(k)} \ge E_{i}^{\rm targ}$, then $p_i^{(k)}=\left( E_{i}^{(k)}/E_{i}^{\rm targ}\right)^n$,
\item If $E_{i}^{(k)} < E_{i}^{\rm targ}$, then $p_i^{(k)}=0$ \ ,
\end{enumerate}
with $n$ a free parameter which essentially determines the amount of
weighting. In the unpolarized fits of
Refs.~\cite{Ball:2010de,Ball:2011mu,Ball:2011uy,Ball:2012cx} the value
$n=2$ was used. Here instead we will choose $n=3$. This larger value,
determined by trial and error, is justified by the wider spread of
figures of merit in the polarized case, which in turn is related
to the absence of correlated systematics for most
experiments.
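The weighted figure of merit Eq.~(\ref{eq:weight_errfun}) with the
weights defined above can then be computed as in the following sketch
(an illustration; argument names are choices of the example):
\begin{verbatim}
import numpy as np

def weighted_error(E_sets, N_sets, E_targ, n=3):
    """Eq. (eq:weight_errfun): sets above their target get weight
    (E/E_targ)^n, sets below it get weight zero."""
    E_sets, N_sets, E_targ = map(np.asarray, (E_sets, N_sets, E_targ))
    p = np.where(E_sets >= E_targ, (E_sets / E_targ)**n, 0.0)
    return (p * N_sets * E_sets).sum() / N_sets.sum()
\end{verbatim}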
The target values $E_{i}^{\rm targ}$ are determined through an
iterative procedure: they are set to one at first, then a very long
fixed-length fit is run, and the values of $E_{i}$ are taken as
targets for a new fit, which is performed until stopping
(according to the criterion to be discussed in Sect.~\ref{sec-minim}
below). The values of $E_{i}$ at the end of this fit are then taken
as new targets until convergence is reached, usually after a couple
iterations.
Weighted training stops after the first $N_{\rm
gen}^{\rm wt}$ generations, unless the total error function
Eq.~(\ref{eq:errfun}) is above some threshold $E^{(k)}\geq E^{\rm
sw}$. If it is, weighted training continues until $E^{(k)}$ falls
below the threshold value. Afterwards, the error function is just the
unweighted error function Eq.~(\ref{eq:errfun}) computed on
experiments. This ensures that the figure of merit behaves smoothly in
the last stages of training. The values for the parameters $N_{\rm
gen}^{\rm wt}$ and $E^{\rm sw}$ are also given in
Tab.~\ref{tab:mutpars}.
\subsection{Determination of the optimal fit}
\label{sec-minim}
Because the neural network parametrization is very redundant, it may
be able to fit not only the underlying behaviour of the PDFs, but also
the statistical noise in the data. Therefore, the best fit does not necessarily coincide with the
absolute minimum of the figure of merit Eq.~(\ref{eq:errfun}). We
thus determine the best fit, as in Refs.~\cite{DelDebbio:2007ee,Ball:2008by}, using a
cross-validation method~\cite{Bishop:1995}:
for each replica, the data are randomly divided in two sets, training
and validation, which include a fraction $f_{\rm tr}^{(j)}$ and $f_{\rm
val}^{(j)}=1-f_{\rm tr}^{(j)}$ of the data points respectively. The
figure of merit Eq.~(\ref{eq:errfun}) is then computed for both sets.
The training figure of merit function is minimized through the genetic
algorithm, while the validation figure of merit is monitored: when the
latter starts increasing while the former still decreases the fit is
stopped. This means that the fit is stopped as soon as the neural
network is starting to learn the statistical fluctuations of the
points, which are different in the training and validation sets,
rather than the underlying law which they share.
In the unpolarized fits of
Refs.~\cite{DelDebbio:2007ee,Ball:2008by,Ball:2010de,Ball:2011mu,Ball:2011uy,Ball:2012cx}
equal training and validation fractions were uniformly chosen,
$f_{\rm tr}^{(j)}=f_{\rm val}^{(j)}=1/2$.
However, in this case we have to face the problem that the number of
datapoints is quite small: most experiments include about ten
datapoints (see Tab.~\ref{tab:exps-sets}). Hence, it is difficult to
achieve a stable minimization if only half of them
are actually used for minimization, as we have explicitly verified. Therefore,
we have chosen to include 80\% of the data in the training set,
i.e. $f_{\rm tr}^{(j)}=0.8$ and $f_{\rm val}^{(j)}=0.2$. We have explicitly
verified that the fit quality which is obtained in this case is
comparable to the one achieved when including all data in the training set
(i.e. with $f_{\rm tr}^{(j)}=1.0$ and $f_{\rm val}^{(j)}=0.0$), but the presence of a
nonzero validation set allows for a satisfactory stopping, as we have
checked by explicit inspection of the profiles of the figure of merit
as a function of training time.
In practice, in order to implement cross-validation we must determine
a stopping criterion, namely, give conditions which must be satisfied
in order for the minimization to stop.
First, we require that the weighted training stage
has been completed, i.e., that the genetic algorithm has been run for
at least $N_{\rm gen}^{\rm wt}$ minimization steps. Furthermore,
we check that all experiments have reached a value of the figure of merit
below a minimal threshold $E_{\rm thr}$. Note that because stopping
can occur only after weighted training has been switched off, and this
in turn only happens when the figure of merit falls below the value
$E^{\rm sw}$, the total figure of merit must be below this value in
order for stopping to be possible.
We then compute moving averages
\begin{equation}
\label{eq:smearing}
\langle E_{\mathrm{tr,val}}(i)\rangle\equiv
\frac{1}{N_{\mathrm{smear}}}
\sum_{l=i-N_{\mathrm{smear}}+1}^iE_{\mathrm{wt;\,tr,val}}(l)\,,
\end{equation}
of the figure of merit Eq.~(\ref{eq:weight_errfun}) for either the
training or the validation set at the $l$-th genetic minimization step.
The fit is then stopped if
\begin{equation}
\label{eq:trratcond}
r_{\rm tr} < 1-\delta_{\rm tr}\quad{\rm and}\quad r_{\rm val} > 1+\delta_{\rm val}\, ,
\end{equation}
where
\begin{equation}
\label{eq:dec-train}
r_{\rm tr}\equiv \frac{\langle E_{\mathrm{tr}}(i)\rangle}
{\langle E_{\mathrm{tr}}(i-\Delta_{\mathrm{smear}})\rangle}
\, ,
\end{equation}
\begin{equation}
\label{eq:dec-valid}
r_{\rm val}\equiv \frac{\langle E_{\mathrm{val}}(i)\rangle}
{\langle E_{\mathrm{val}}(i-\Delta_{\mathrm{smear}})\rangle} \,.
\end{equation}
The parameter $N_{\rm smear}$ determines the width of the moving
average; the parameter $\Delta_{\rm smear}$ determines the distance
between the two points along the minimization path which are compared
in order to determine whether the figure of merit is increasing or
decreasing; and the parameters $\delta_{\rm tr}$, $\delta_{\rm val}$ are
the threshold values for the decrease of the training and increase of
the validation figure of merit to be deemed significant.
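The resulting logic may be summarized by the following sketch
(illustrative Python, with the parameter values of
Tab.~\ref{tab:stopping_pars} as defaults; the treatment of the first
iterations, before the moving averages are defined, is an assumption
of the example):
\begin{verbatim}
import numpy as np

def should_stop(E_tr, E_val, nsmear=100, dsmear=100,
                delta_tr=5e-4, delta_val=5e-4):
    """Stopping condition, Eqs. (eq:smearing)-(eq:dec-valid).
    E_tr, E_val: histories of the training/validation figure of merit."""
    def avg(h, i):           # moving average, Eq. (eq:smearing)
        return np.mean(h[i - nsmear + 1:i + 1])
    i = len(E_tr) - 1
    if i < nsmear + dsmear:  # not enough history yet
        return False
    r_tr  = avg(E_tr,  i) / avg(E_tr,  i - dsmear)
    r_val = avg(E_val, i) / avg(E_val, i - dsmear)
    return (r_tr < 1 - delta_tr) and (r_val > 1 + delta_val)
\end{verbatim}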
The optimal value of these parameters should be chosen in such a way
that the fit does not stop on a statistical fluctuation, yet it does
stop before the fit starts overlearning (i.e. learning statistical
fluctuation). As explained in Ref.~\cite{Ball:2010de}, this is done
studying the profiles of the error functions for individual dataset
and for individual replicas.
In order to avoid unacceptably long fits, training is stopped anyway
when a maximum number of iterations $N_{\rm gen}^{\rm max}$ is reached, even if the stopping
conditions Eq.~(\ref{eq:trratcond}) are not satisfied. This leads to
a small loss of accuracy of the corresponding fits: this is
acceptable provided it only happens for a small enough fraction of
replicas. If a fit stops at $N_{\rm gen}^{\rm max}$ without the
stopping criterion having been satisfied, we also check that the
total figure of merit is below the value $E^{\rm sw}$ at which
weighted training is switched off. If it is not, we conclude that the
specific fit has not converged, and we retrain the same replica, i.e.,
we perform a new fit to the same data starting with a different random
seed. This only occurs in about one or two percent of cases.
The full set of parameters which
determine the stopping criterion is given in
Tab.~\ref{tab:stopping_pars}.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$N_{\rm gen}^{\rm max}$ & $E_{\rm thr}$ & $N_{\rm smear}$ & $\Delta_{\rm smear}$ & $\delta_{\rm tr}$ & $\delta_{\rm val}$ \\
\hline
$20000$ & $8$ & $100$ & $100$ & $5 \cdot 10^{-4}$ & $5 \cdot 10^{-4}$ \\
\hline
\end{tabular}
\caption{\small Parameters for the stopping criterion.}
\label{tab:stopping_pars}
\end{table}
An example of how the stopping criterion works in practice is
shown in Fig.~\ref{fig:stop}. We display the moving averages
Eq.~(\ref{eq:smearing}) of the
training and validation error functions $\langle E_{\rm tr,
val}^{(k)}\rangle$,
computed with the parameter settings of Tab.~\ref{tab:stopping_pars}, and
plotted as a function of the number of iterations of the
genetic algorithm, for a particular replica and for two
of the experiments included in the fit. The wide fluctuations which
are observed in the first part of training, up to the $N_{\rm
gen}^{\rm wt}$-th generation, are due to the fact that the weights
which enter the definition of the figure of merit
Eq.~(\ref{eq:weight_errfun}) are frequently adjusted. Nevertheless,
the downwards trend of the figure of merit is clearly visible.
Once the weighted training is switched off,
minimization proceeds smoothly. The vertical line denotes the point at
which the stopping criterion is satisfied. Here, we have let the
minimization go on beyond this point, and we see clearly that the
minimization has entered an overlearning regime, in which
the validation
error function $E_{\rm val}^{(k)}$ is rising while the training
$E_{\rm tr}^{(k)}$ is still decreasing. Note that the stopping point,
which in this particular case occurs at
$N_{\rm gen}^{\rm stop}=5794$, is determined by verifying that the
stopping criteria are satisfied by the {\it total}
figure of merit, not that of individual experiments shown here.
The fact that the two different experiments
considered here both start overlearning at the same point shows that
the weighted training has been effective in synchronizing the fit
quality for different experiments.
\begin{figure}[t]
\centering
\epsfig{width=0.43\textwidth,figure=chi2pro_COMPASS-D_1004}
\epsfig{width=0.43\textwidth,figure=chi2pro_HERMES_1004}
\caption{\small Behaviour of the moving average Eq.~(\ref{eq:smearing})
of the training and validation figure of merit for two different
datasets included in a global fit (\texttt{COMPASS-D} and
\texttt{HERMES})
as a function of training length. The straight vertical line
indicates the point at which the fit stops with the stopping parameters
of Tab.~\ref{tab:stopping_pars}. The weighted
training is switched off at $N_{\rm gen}^{\rm wt}=5000$.}
\label{fig:stop}
\end{figure}
\subsection{Theoretical constraints}
\label{sec:thconst}
Polarized PDFs are only loosely constrained by data, which are
scarce and not very accurate. Theoretical constraints are
thus especially important in reducing the uncertainty on the PDFs. We
consider in particular positivity and integrability.
Positivity of the individual cross sections
which enter the polarized asymmetries Eq.~(\ref{eq:xsecasy}) implies
that, up to power-suppressed corrections, longitudinal polarized
structure functions are bounded by their unpolarized counterparts,
i.e. \begin{equation}
\label{eq:pos}
|g_1(x,Q^2)| \le F_1(x,Q^2) .
\end{equation}
At leading order, structure functions are proportional to parton
distributions, so imposing Eq.~(\ref{eq:pos}) for any process (and a
similar condition on an asymmetry which is sensitive to polarized
gluons~\cite{Altarelli:1998gn}), would imply
\begin{equation}
\label{eq:pospdf}
|\Delta f_i(x,Q^2)|\le f_i(x,Q^2)
\end{equation}
for any pair of unpolarized and polarized PDFs $f$ and $\Delta f$, for
all quark flavors and gluon $i$, for all $x$, and for all $Q^2$.
Beyond leading order, the condition
Eq.~(\ref{eq:pos}) must still hold, but it does not necessarily imply
Eq.~(\ref{eq:pospdf}). Rather, one should then impose at least a number of
conditions of the form of Eq.~(\ref{eq:pos}) on physically measurable
cross-sections which is equal to the number of independent polarized
PDFs. For example, in principle one may require that the condition
Eq.~(\ref{eq:pos}) is separately satisfied for each flavor, i.e.
when only contributions from the
$i$-th flavor are included in the polarized and unpolarized structure
function: this corresponds to requiring positivity of semi-inclusive
structure functions which could in principle be measured
(and that fragmentation effects cancel in the ratio).
A condition on the gluon can be obtained by imposing
positivity of the polarized and unpolarized cross-sections for
inclusive Higgs production in gluon-proton
scattering~\cite{Altarelli:1998gn}, again measurable in principle if
not in practice.
Because $g_1/F_1\sim x$ as $x\to0$~\cite{Ball:1995ye}, the positivity
bound Eq.~(\ref{eq:pos}) is only significant at large enough
$x\gtrsim10^{-2}$. On the other hand, at very large $x$ the NLO
corrections to the LO positivity bound become
negligible~\cite{Altarelli:1998gn,Forte:1998kd}. Therefore, the NLO
positivity bound in practice only differs from its LO counterpart
Eq.~(\ref{eq:pospdf}) in a small region $10^{-2}\lesssim x\lesssim0.3$, and
even there by an amount of rather less than
10\%~\cite{Altarelli:1998gn}, which is negligible in comparison to the size
of PDF uncertainties, as we shall see explicitly in
Sec.~\ref{sec:results}.
Therefore, we will impose the leading-order positivity bound
Eq.~(\ref{eq:pospdf}) on each flavor combination $\Delta q_i+\Delta
\bar q_i$ and on the gluon $\Delta g$ (denoted as $\Delta f_i$ below).
We do this by requiring
\begin{equation}
\label{eq:possigma}
|\Delta f_i(x,Q^2)| \le f_i(x,Q^2) + \sigma_i(x,Q^2) \ ,
\end{equation}
where $\sigma_i(x,Q^2)$ is the uncertainty on the corresponding unpolarized PDF
combination $f_i(x,Q^2)$ at the kinematic point $(x,Q^2)$. This choice is
motivated by two considerations. First, it is clearly meaningless to
impose positivity of the polarized PDF
to an accuracy which is greater than that with which
the unpolarized PDF has been determined. Second, because the
unpolarized PDFs satisfy NLO positivity, they can become negative and
thus they may have nodes. As a consequence, the LO bound
Eq.~(\ref{eq:pospdf}) would imply that the polarized PDF must vanish
at the same point, which would be clearly meaningless.
As in Ref.~\cite{Ball:2010de} positivity is imposed during the
minimization procedure, thereby guaranteeing that the genetic
algorithm only explores the subspace of acceptable physical
solutions. This is done through a Lagrange multiplier $\lambda_{\rm
pos}$, i.e. by computing the polarized PDF at $N_{\rm dat,pos}$
fixed kinematic points $(x_p,Q_0^2)$ and then adding to the error function
Eq.~(\ref{eq:errfun}) a contribution
\begin{eqnarray}
\label{eq:lagrmult}
&&E_{\rm pos}^{(k)}={\lambda_{\rm pos}}\sum_{p=1}^{N_{\rm dat,pos}}
\Bigg\{
\sum_{j=u+\bar{u},d+\bar{d},s+\bar{s},g} \Theta\left[\left|\Delta f_j^{(\rm net)(k)}(x_p,Q_0^2)\right| -\left(f_j + \sigma_j\right) (x_p,Q_0^2) \right] \nonumber\\
&&\qquad\qquad \times
\left[\left|\Delta f_j^{(\rm net)(k)}(x_p,Q_0^2)\right| -
\left(f_j + \sigma_j\right)(x_p,Q_0^2) \right]
\Bigg\} \ .
\end{eqnarray}
This provides a penalty, proportional to the violation of positivity,
which enforces Eq.~(\ref{eq:possigma}) separately for all
the non-zero quark-antiquark combinations. The values of the
unpolarized PDF combination $f_j(x,Q^2)$ and its uncertainty $\sigma_j(x,Q^2)$ are
computed using the
\texttt{NNPDF2.1 NNLO} PDF set~\cite{Ball:2011mu}, while $\Delta
f_j^{(\rm net)(k)}$ is the corresponding polarized PDF computed from
the neural network parametrization for the $k$-th replica. The
polarized and unpolarized PDFs are evaluated at $N_{\rm dat,pos}=20$
points with $x$ equally spaced in the interval
\begin{equation}
x \in \left[
10^{-2},0.9\right] \ .
\end{equation}
Positivity
is imposed at the initial scale $Q_0^2=1$~GeV$^2$ since once
positivity is enforced at low scales, it is automatically
satisfied at larger scales~\cite{Altarelli:1998gn,Forte:1998kd}.
After stopping, we finally test that the positivity condition
Eq.~(\ref{eq:possigma}) is satisfied on a grid of $N_{\rm dat,pos}=40$ points in
the same interval. Replicas for which positivity is violated at one
or more points are discarded and retrained.
In the unpolarized case, in which positivity only played a minor role
in constraining PDFs, a fixed value of the Lagrange multiplier
$\lambda_{\rm pos}$ was chosen. In the polarized case it turns out to
be necessary to vary the Lagrange multiplier along the minimization.
Specifically, we let
\begin{equation}
\label{eq:lagrmult2}
\left\{
\begin{array}{rcll}
\lambda_{\rm pos} & = & \lambda_{\rm max}^{(N_{\rm gen}-1)/(N_{\lambda_{\rm max}}-1)} & N_{\rm gen} < N_{\lambda_{\rm max}}\\
\lambda_{\rm pos} & = & \lambda_{\rm max} & N_{\rm gen} \geq N_{\lambda_{\rm max}}.
\end{array}
\right.
\end{equation}
This means that the Lagrange multiplier increases as
the minimization proceeds, starting from $\lambda_{\rm pos}=1$, at the
first minimization step, $N_{\rm gen}=1$, up to $\lambda_{\rm pos} =
\lambda_{\rm max}\gg 1$ when $N_{\rm gen} = N_{\lambda_{\rm max}}$. After
$N_{\lambda_{\rm max}}$ generations $\lambda_{\rm pos}$ is then kept
constant to $\lambda_{\rm max}$. The rationale behind this choice is
that the genetic algorithm can thus learn experimental data and
positivity at different stages of minimization. During the early
stages, the contribution coming from the modified error function
Eq.~(\ref{eq:lagrmult}) is negligible, due to the moderate value of
the Lagrange multiplier; hence, the genetic algorithm will mostly
learn the basic shape of the PDF driven by experimental data. As soon
as the minimization proceeds, the contribution coming from the
Lagrange multiplier increases, thus ensuring the proper learning of
positivity: at this stage, most of the replicas which will not fulfill
the positivity bound will be discarded.
The final values of $N_{\lambda_{\rm max}}=2000$ and $\lambda_{\rm max}=10$
have been determined as follows.
First of all, we have performed a fit without
any positivity constraint and we have observed that data were mostly
learnt in about $2000$ generations: hence we have taken this value for
$N_{\lambda_{\rm max}}$. Then we have tried different values for
$\lambda_{\rm max}$ until we managed to reproduce the same $\chi^2$
obtained in the previous, positivity unconstrained, fit. This ensures
that positivity is not learnt to the detriment of the global fit
quality.
Notice that the value of $\lambda_{\rm max}$ is rather small if
compared to the analogous Lagrange multiplier used in the unpolarized
case~\cite{Ball:2011mu}. This is because, in the unpolarized case,
positivity is learnt at the early stages of minimization, when
the error function can be much larger than its asymptotic value:
a large Lagrange multiplier is then needed to select the best
replicas. Also, unpolarized PDFs are quite well constrained by data
and positivity is almost automatically fulfilled, except in some
restricted kinematic regions; only a few replicas
violate positivity and need to be penalized. This means that the behaviour
of the error function Eq.~(\ref{eq:errfun}), which governs the fitting
procedure, is essentially dominated by data instead of positivity.
In the polarized case, instead, positivity starts to be effectively
implemented only after some minimization steps, when the error function
has already decreased to a value of a few units. Furthermore, we have
checked that, at this stage, most of the replicas violate the
positivity condition Eq.~(\ref{eq:possigma}) only slightly: an
excessively large value of the Lagrange multiplier would therefore, on
the one hand, penalize replicas which reproduce the experimental data
well and are only slightly worse in reproducing positivity; on the
other hand, it would promote replicas which fulfill positivity but
whose fit to the data is quite bad. As a consequence, the convergence
of the minimization algorithm would be harder to reach. We also
verified that using a value of the Lagrange multiplier up to
$\lambda_{\rm pos}=100$ leads to no significant improvement either in
the fulfillment of the positivity requirement or in the fit quality.
We will show in detail the effects of the positivity bound Eq.~(\ref{eq:possigma})
on the fitted replicas and on polarized PDFs in Sect.~\ref{sec:results}.
Finally, as already mentioned, we impose an integrability constraint.
The requirement that polarized PDFs be integrable, i.e. that they have
finite first moments, corresponds to the assumption that the nucleon
matrix element of the axial current for the $i$-th flavor is finite.
The integrability condition is imposed by computing at each
minimization step the integral of each of the polarized PDFs
in a given interval,
\begin{equation}
I(x_1,x_2)=\int_{x_1}^{x_2} dx~\Delta q_i(x,Q_0^2) \, ,
\qquad \Delta q_i=\Delta\Sigma,\, \Delta g,\, \Delta T_3,\, \Delta T_8
\end{equation}
with $x_1$ and $x_2$ chosen in the small $x$ region, well below the
data points, and verifying that in this region the growth of the
integral as $x_1$ decreases for fixed $x_2$ is less than logarithmic.
In practice, we test for the condition
\begin{equation}
\frac{I(x_1,x_2)}{I(x_1^\prime,x_2)} <
\frac{\ln\frac{x_2}{x_1}}{\ln\frac{x_2}{x_1^\prime}},
\end{equation}
with $x_1<x_1^\prime$.
Mutations which do not satisfy the
condition are rejected during the minimization procedure. In our default fit,
we chose $x_1=10^{-5}$, $x_1^\prime=2\cdot 10^{-5}$ and $x_2=10^{-4}$.
\section{Introduction}
The interest in the determination of polarized parton distributions
(PDFs) of
the nucleon is largely related to the experimental discovery in the
late 80s that the singlet axial charge of the proton is anomalously
small~\cite{Ashman:1987hv,Ashman:1989ig}, soon followed by the theoretical
realization~\cite{Altarelli:1988nr,Altarelli:1990jp} that the
perturbative behavior of polarized PDFs deviates from parton model
expectations, according to which gluons decouple in the asymptotic limit.
The theoretical interpretation of these results has spawned a
huge literature, while at the same time
experimental information on polarized PDFs from
deep-inelastic scattering but also from a variety of other processes has
been accumulating over the years
(see e.g.~\cite{deFlorian:2011ia} and references therein).
First studies of the polarized structure of the nucleon were aimed at
an accurate determination of polarized first moments (including
detailed uncertainty
estimates)~\cite{Ball:1995td,Altarelli:1996nm,Altarelli:1998nb}, but
did not attempt a determination of a full PDF set, which was first
proposed in Ref.~\cite{Gehrmann:1995ag}, but without uncertainty
estimation. More recently, polarized PDF sets with uncertainties
have been constructed by
at least four groups (BB~\cite{Bluemlein:2002be,Blumlein:2010rn}, AAC~\cite{Hirai:2008aj},
LSS~\cite{Leader:2006xc,Leader:2010rb} and
DSSV~\cite{deFlorian:2008mr,deFlorian:2009vb}). These PDF sets slightly differ
in the choice of datasets, the form of PDF parametrization, and in
several details of the QCD analysis (such as the treatment of
higher twist corrections), but they are all based on the standard
Hessian methodology for PDF fitting and uncertainty
determination, which has been widely used in the
unpolarized case (see~\cite{Forte:2010dt,Forte:2013wc} and references therein).
This methodology is known~\cite{Forte:2010dt}
to run into difficulties especially when
information is scarce, because of the intrinsic bias of the Hessian
method based on a fixed parton parametrization. This is
likely to be particularly the
case for polarized PDFs, which rely on data both less
abundant and less accurate than their unpolarized counterparts.
In order to overcome these difficulties, the NNPDF collaboration
has proposed and developed a new methodology for PDF
determination~\cite{DelDebbio:2004qj,DelDebbio:2007ee,Ball:2008by,Ball:2009qv,Ball:2009mk,
Ball:2010de,Ball:2010gb,Ball:2011mu,Ball:2011uy,Ball:2011eq, Ball:2011gg,Ball:2012cx}.
The NNPDF technique uses a robust set of statistical tools, which include Monte Carlo
methods for error propagation, neural networks for PDF parametrization, and genetic algorithms
for their training. The NNPDF sets are now routinely used by the Tevatron
and LHC collaborations in their data analysis and for data-theory comparisons.
In this work we extend the application of the NNPDF methodology to the
determination of polarized parton distributions of the nucleon. As we
will see, some PDF uncertainties will turn out to be
underestimated in existing PDF determinations: in particular those of the polarized gluon distribution, but also those of the strange distribution.
The outline of this paper is as follows. In Sect.~\ref{sec:expdata} we
present the data set used to determine polarized PDFs, and we review
the relationship between measured asymmetries and structure
functions. In Sect.~\ref{sec:polpdfs} we discuss the parametrization of
polarized PDFs in terms of neural networks, and the construction
of polarized structure functions. Then in Sect.~\ref{sec:minim}
we discuss the minimization strategy. The results for the {\tt NNPDFpol1.0}
polarized partons are presented in Sect.~\ref{sec:results}, and
in Sect.~\ref{sec:phenoimplications} we discuss the phenomenological
implications for the spin content of the proton and the test of the
Bjorken sum
rule. Finally in Sect.~\ref{sec:conclusions} we summarize our results
and outline future developments. Some details
on the benchmarking of polarized PDF evolution are given in the Appendix.
\section{Computation of PDF uncertainties}
\label{sec:app-pdferr}
In this appendix we briefly review the main formulae
to compute PDF uncertainties.
Available methods for the determination of PDF uncertainties
fall broadly into two distinct categories, which we shall refer to as
the Hessian method
and the {Monte Carlo method}.\footnote{
The Monte Carlo method for PDF error determination is the default
in NNPDF, but it has also been used by the MSTW~\cite{Watt:2012tq}
and HERAPDF~\cite{Jung:2009eq}
collaborations in the unpolarized case.}
In both cases, sets of polarized PDFs with uncertainties are given as
an ensemble of $N_{\rm set}$ sets of PDFs,
\begin{equation}\{ \Delta q^{(k)} \} \ , \qquad k=0,\ldots,N_{\rm set}.
\end{equation}
Conventionally the PDF set $\Delta q^{(0)}$ corresponds to a ``central'' set.
Within the Monte Carlo approach,
the expectation value of any function
$ \mathcal{F} [ \{ \Delta q \}]$ which depends
on the PDFs is computed as an average over the ensemble of PDFs, using
the master formula
\begin{equation}
\label{masterave}
\left\langle \mathcal{F} [ \{ \Delta q \}] \right\rangle
= \frac{1}{N_{\rm set}} \sum_{k=1}^{N_{\rm set}}
\mathcal{F} [ \{ \Delta q^{(k)} \}],
\end{equation}
where $N_{\rm set}=N_{\rm rep}$ is the number of sets of PDFs in the ensemble,
equal to the number of replicas.
The associated uncertainty is found as the standard deviation of the
sample, according to the usual formula
\begin{eqnarray}
\sigma_{\mathcal{F}}
&=& \left[ \frac{N_{\rm set}}{N_{\rm set}-1}
\left( \left\langle \mathcal{F} [ \{ \Delta q \}]^2\right\rangle
- \left\langle \mathcal{F} [ \{ \Delta q \}] \right\rangle^2
\right) \right]^{1/2}\nonumber\\
&=& \left[ \frac{1}{N_{\rm set}-1}
\sum_{k=1}^{N_{\rm set}}
\left( \mathcal{F} [ \{ \Delta q^{(k)} \}]
- \left\langle \mathcal{F} [ \{ \Delta q \}] \right\rangle\right)^2
\right]^{1/2}.
\label{mastersig}
\end{eqnarray}
These formulae may also be used for the determination of central values and
uncertainties of the parton distribution themselves, in which case the
functional $\mathcal{F}$ is identified with the parton distribution $\Delta q$:
$\mathcal{F}[ \{ \Delta q\}]\equiv \Delta q$.
Alternatively, for a Monte Carlo PDF set, one can compute PDF
uncertainties as confidence level (CL) intervals. For a $(1-2p)\%$
Confidence Level interval, one just needs to order the
$N_{\rm set}$ values obtained from each of the replicas of the sample,
and discard the $p\%$ lower and higher replicas. For example, if
$N_{\rm set}=100$, the 68 \% Confidence Level interval is obtained by
removing from the sample the 16 higher and 16 lower replicas
for a given cross section (or for a PDF at a given point in
$(x,Q^2)$): the remaining replicas define the 68 \% Confidence Level
interval. When the Gaussian linear approximation holds the
68~\% CL will be very similar to the 1--$\sigma$ PDF error
Eq.~(\ref{mastersig}), but in other regions, where the Gaussian
behaviour breaks down, differences can be sizable,
as shown in Fig.~\ref{fig:pdfcl68}.
In the Hessian method, used in all recent
polarized PDF sets~\cite{deFlorian:2009vb,Leader:2010rb,Hirai:2008aj,Blumlein:2010rn}, the central set is a best fit set
of PDFs, which thus provides the central value for PDFs
themselves.
The central value of any quantity $\mathcal{F}[ \{ \Delta q
\}]$ is obtained in this method
by evaluating it as a function of the central set:
\begin{equation}
\label{naiveav}
\mathcal{F}^{(0)}=
\mathcal{F}[ \{ \Delta q^{(0)} \}].
\end{equation}
The determination of uncertainties with the Hessian method is
based on the idea that sets $ \Delta q^{(k)}$ with $k>0$ provide upper and
lower variations (for even and odd values of $k$)
away from the central set $ \Delta q^{(0)}$ which correspond to eigenvectors
in parameter space.
The 1--$\sigma$ uncertainty is then found by adding in quadrature
these variations:
\begin{equation}
\label{hepdataerr}
\sigma_{\mathcal{F} }^{\rm hessian} = \frac{1}{2}\left(
\sum_{k=1}^{N_{\rm set}/2}
\left( \mathcal{F} [ \{ \Delta q^{(2k-1)} \}]
- \mathcal{F} [ \{ \Delta q^{(2k)} \}] \right)^2\right)^{1/2}.
\end{equation}
The resulting error could be interpreted as a 1--$\sigma$ error; however, in many
cases the tolerances used in the PDF error definition are larger
than the textbook value, $\Delta\chi^2 \gg 1$, and thus the statistical
interpretation of the results is hindered.
To conclude, let us discuss how to obtain the
``central value'' for a Monte Carlo PDF set.
In the Monte Carlo method,
the central values of any quantity $\mathcal{F}[ \{ \Delta q
\}]$ are instead given by Eq.~(\ref{masterave}). Therefore,
the central
value for PDFs themselves is given by
\begin{equation}
\label{mcav}
\Delta q^{(0)} \equiv \left\langle \Delta q \right\rangle = \frac{1}{N_{\rm set}}
\sum_{k=1}^{N_{\rm set}} \Delta q^{(k)} \ .
\end{equation}
This set is provided as set $\Delta q^{(0)}$ in the
{\tt NNPDFpol1.0} PDFs. Hence, in the Monte Carlo method the central (best
fit) PDF is obtained as an average of the replica best fits.
However, for any quantity
$\mathcal{F}[ \{ \Delta q \}]$ which
depends nonlinearly on the PDFs, such as cross section asymmetries in
polarized hadronic collisions,
\begin{equation}
\label{linearav}
\left\langle \mathcal{F} [ \{ \Delta q \}] \right\rangle
\not=
\mathcal{F} [\{ \Delta q^{(0)}\}] .
\end{equation}
Hence,
Eq.~(\ref{masterave}) must be used for the determination of the
central value, and using the set $\Delta q^{(0)}$ is not
recommended. However, for a quantity that does depend linearly on the
PDFs, such as a DIS polarized structure function, Eq.~(\ref{naiveav}) with the
central PDFs Eq.~(\ref{mcav}) gives the
same result as Eq.~(\ref{masterave}), and thus it may be used
also with the Monte Carlo method (but this is not true, for
example, for cross sections in polarized hadronic collisions).
Note that set $\Delta q^{(0)}$ should not be included when
computing an average with Eq.~(\ref{masterave}), because it is
itself already an average.
\section{From polarized PDFs to observables}
\label{sec:polpdfs}
\subsection{Leading-twist factorization of the structure functions}
\label{sec:fact}
At leading twist, the
polarized structure function $g_1$ for neutral-current virtual photon
DIS is given in terms of the polarized quark
and gluon distributions by
\begin{equation}
g_1(x,Q^2)=\frac{\langle e^2 \rangle}{2} [C_{NS}\otimes \Delta q_{NS}
+C_S\otimes \Delta \Sigma + 2n_fC_g \otimes \Delta g]\ .
\label{1}
\end{equation}
Here $n_f$ is the number of active flavors, the average charge is given by
$\langle e^2 \rangle=n_f^{-1}\sum_{i=1}^{n_f} e^2_i$ in terms of the
electric charge $e_i$ of the $i$-th quark flavor, $\otimes$ denotes the
convolution with respect to $x$, and the nonsinglet and singlet quark
distributions are defined as
\begin{equation}
\Delta q_{NS}\equiv \sum_{i=1}^{n_f}
\left(\frac{e^2_i}{\langle e^2 \rangle} - 1\right)
(\Delta q_i+\Delta \bar q_i),\qquad
\Delta \Sigma\equiv \sum_{i=1}^{n_f}(\Delta q_i+\Delta \bar q_i),
\label{2}
\end{equation}
where $\Delta q_i$ and $\Delta \bar q_i$ are the polarized quark and antiquark
distributions of flavor $i$ and $\Delta g$ is the polarized gluon PDF.
In the parton model, Eq.~(\ref{1}) reduces to
\begin{equation}
\label{g1p-parton}
g_1(x,Q^2)=\frac{1}{2}\sum_{i=1}^{n_f} e_i^2 \left( \Delta q_i(x,Q^2)
+\Delta \bar q_i (x,Q^2) \right),
\end{equation}
but in perturbative QCD the parton model expression is not recovered
even when $\alpha_s\to0$ because at large $Q^2$ the first moment of the gluon
distribution $\int_0^1 \!dx\, \Delta g\sim ({\alpha_s(Q^2))^{-1}}$,
so the gluon does not decouple from $g_1$ asymptotically.
Be that as it may, below charm threshold, with $n_f=3$, Eq.~(\ref{1}) can be
rewritten as
\begin{equation}
g_1(x,Q^2)=\frac{1}{9}\Delta\Sigma(x,Q^2)
+\frac{1}{12}\Delta T_3(x,Q^2)+\frac{1}{36}\Delta T_8(x,Q^2),
\end{equation}
in terms of the singlet quark-antiquark distribution $\Delta\Sigma(x,Q^2)$, defined in
Eq.~(\ref{2}), the isospin triplet combination
\begin{equation}\label{tripdef}
\Delta T_3(x,Q_0^2)=\Delta u(x,Q_0^2) + \Delta \bar{u}(x,Q_0^2)
-\left[\Delta d(x,Q_0^2)+\Delta \bar{d}(x,Q_0^2)\right],
\end{equation}
and the SU(3) octet combination
\begin{equation}
\Delta T_8(x,Q_0^2) =\Delta u(x,Q_0^2)+\Delta \bar{u}(x,Q_0^2)
+\Delta d(x,Q_0^2)+\Delta \bar{d}(x,Q_0^2)
-2\left[\Delta s(x,Q_0^2)+\Delta \bar{s}(x,Q_0^2)\right].
\end{equation}
It is clear from Eqs.~(\ref{1}-\ref{g1p-parton}) that neutral current
$g_1$ data only allow for a direct determination of the four polarized
PDF combinations $\Delta g$, $\Delta\Sigma$, $\Delta T_3$ and $\Delta
T_8$. In principle, an intrinsic polarized component could also be present
for each heavy flavour. However, we will neglect it here and assume that heavy quark PDFs are dynamically generated above
threshold by (massless) Altarelli-Parisi evolution, in a zero-mass
variable-flavor number (ZM-VFNS) scheme. In such a scheme all heavy
quark mass effects are neglected. While they can be introduced for instance
through the \texttt{FONLL} method~\cite{Forte:2010ta}, these effects have been
shown to be relatively small already on the scale of present-day unpolarized
PDF uncertainties, and thus are most likely negligible in the polarized case
where uncertainties are rather larger.
The proton and neutron PDFs are related to each other by isospin,
which we will assume to be exact, thus yielding
\begin{equation}
\Delta u^p=\Delta d^n,\quad \Delta d^p=\Delta u^n, \quad \Delta s^p=\Delta s^n,
\end{equation}
and likewise for the polarized anti-quarks. In the following we will
always assume that PDFs refer to the proton. The first moment of all
non-singlet combinations of quark and antiquark distributions are
scale-independent because of axial current conservation, while the
first moment of the singlet quark distribution is not. Because of the
axial anomaly, the first moment of the singlet quark distribution is
scale-dependent in the $\overline{\rm MS}$ scheme. However, it may be
convenient to choose a factorization scheme in which the first moment
of the singlet quark distribution is also scale independent so that all
the individual quark and antiquark spin fractions are scale
independent. Several such schemes, including the so-called
Adler-Bardeen (AB) scheme, were discussed in Ref.~\cite{Ball:1995td},
where the transformation connecting them to the $\overline{\rm MS}$
scheme was constructed explicitly.
By means of the SU(2) or SU(3) flavour symmetry it is possible to relate the first moments of the nonsinglet
$C$-even combinations ($\Delta T_3$ and $\Delta T_8$) to the baryon octet decay constants $a_3$ and
$a_8$:
\begin{align}
\label{eq:t3sr}
a_3&= \int_0^1 dx~ \Delta T_3(x,Q^2),
\\
\label{eq:t8sr}
a_8&=\int_0^1 dx~ \Delta T_8(x,Q^2),
\end{align}
whose current experimental values are~\cite{Nakamura:2010zzi}
\begin{equation}\label{eq:a3}
a_3=g_A=1.2701\pm0.0025,
\end{equation}
\begin{equation}
\label{eq:a8}
a_8 = 0.585 \pm 0.025.
\end{equation}
A much larger uncertainty on the octet axial charge, up to about 30\%, is found if SU(3) symmetry is
violated~\cite{FloresMendieta:1998ii}. Even though a detailed phenomenological analysis does not
seem to support this conclusion~\cite{Cabibbo:2003cu}, we will
take as default this more conservative
uncertainty estimation
\begin{equation}
\label{eq:a8p}
a_8 = 0.585 \pm 0.176 .
\end{equation}
The impact of replacing this with the more aggressive determination given in Eq.~(\ref{eq:a8}) will be
studied in Sect.~\ref{sec:srres}.
Structure functions will be computed in terms of polarized parton
distributions using the so-called NNPDF {\tt FastKernel} method,
introduced in Ref.~\cite{Ball:2010de}. In short, in this method the
PDFs at scale $Q^2$ are obtained by convoluting the parton
distributions at the parametrization scale $Q_0^2$ with a set of Green's
functions, which are in turn obtained by solving the QCD evolution
equations in Mellin space. These Green's functions are then convoluted
with coefficient functions, so that the structure function can be
directly expressed in terms of the PDFs at the parametrization scale
through suitable kernels $K$. In terms of the polarized PDFs at the
input scale we have
\begin{equation}
\label{Kg1p}
g_1^p=\lbrace
K_{{\rm g1},\Delta\Sigma}\otimes \Delta \Sigma_0
+K_{{\rm g1},\Delta g} \otimes \Delta g_0
+K_{{\rm g1},+} \otimes \left(\Delta T_{3,0}
+ \smallfrac{1}{3}\Delta T_{8,0} \right)\rbrace\ ,
\end{equation}
where the kernels $K_{{\rm g1},\Delta\Sigma},K_{{\rm g1},\Delta g},
K_{{\rm g1},+}$ take into account both the coefficient functions and
$Q^2$ evolution. This way of expressing structure
functions is amenable to numerical optimization, because all kernels
can then be precomputed and stored, and convolutions may be reduced to
matrix multiplications by projecting onto a set of suitable basis
functions.
The neutron polarized structure function $g_1^n$ is given
in terms of the proton and deuteron ones as
\begin{equation}
\label{eq:g1n}
g_1^n = 2\frac{g_1^d}{1-1.5\omega_D}-g_1^p\ ,
\end{equation}
with $\omega_D=0.05$ the probability that the deuteron is found
in a D state.
Under the assumption of exact isospin symmetry, the expression
of $g_1^n$ in terms of parton densities is obtained from Eq.~\eqref{Kg1p}
by interchanging the up and down quark PDFs, which amounts
to changing the sign of $\Delta T_3$.
The implementation of the polarized PDF evolution up to NLO
has been benchmarked against the {\tt HOPPET} evolution
code~\cite{Salam:2008qg} using the settings of the Les Houches PDF
evolution benchmark tables~\cite{Jung:2009eq}. This benchmarking is
discussed in more detail in Appendix~\ref{sec:apppdfevol}. We will
assume the values $\alpha_s\left( M_Z^2\right)=0.119$ for the strong coupling
constant and $m_c=1.4$~GeV and $m_b=4.75$~GeV for the charm and bottom
quark masses respectively.
\subsection{Target mass corrections to $g_1$}
\label{sec:tmc}
The leading twist expressions of structure functions given in Sect.~\ref{sec:fact} are corrected both by dynamical
and kinematic higher-twist terms. The former are related to the contribution of higher twist operators to the Wilson
expansion, and are generally expected to be small. The latter are related to target-mass corrections (TMCs),
and because of their kinematical origin they can be included
exactly: we do this following Ref.~\cite{Piccione:1997zh}.
As discussed in Sect.~\ref{sec:asysf}, we thus consistently include all nucleon mass effects, both in the relation between
measured asymmetries and structure functions, and in the relation between the latter and parton distributions.
The target mass corrections are especially simple in Mellin space, where they take the form~\cite{Piccione:1997zh}
\begin{eqnarray}
&&\tilde g_1(N,Q^2) = g_1(N,Q^2)+\frac{m^{2}}{Q^2}\frac{N(N+1)}{(N+2)^2}
\left[(N+4)~g_1(N+2,Q^2)+4\frac{N+2}{N+1}~g_2(N+2,Q^2)\right]+{\mathcal O}\left(\frac{m^2}{Q^2}\right)^2\ ,
\nonumber\\
\label{g1n1bis}
\\
&&\tilde g_2(N,Q^2)=g_2(N,Q^2)+\frac{m^2}{Q^2}\frac{N(N-1)}{(N+2)^2}
\left[N\frac{N+2}{N+1}g_2(N+2,Q^2)-g_1(N+2,Q^2)\right]+{\mathcal O}\left(\frac{m^2}{Q^2}\right)^2\ .
\label{g2n1bis}
\end{eqnarray}
We denote by $\tilde g_{1,2}(N,Q^2)$ the Mellin space
structure functions with TMCs included, while $g_{1,2}(N,Q^2)$
are the structure functions determined in the $m=0$ limit.
As discussed in Sect.~\ref{sec:asysf}, in the absence of precise data
on the structure function $g_2$, we will either determine it using the
Wandzura-Wilczek approximation Eq.~(\ref{eq:wwrel}) (which is
uncorrected by target-mass effects~\cite{Piccione:1997zh}), or, as a
cross-check, simply set it to zero. In either case, we may then
determine $\tilde g_1$, Eq.~(\ref{g1n1bis}), in terms of $g_1$.
In the former (Wandzura-Wilczek) case, substituting Eq.~(\ref{wweqN})
in Eq.~(\ref{g1n1bis}) and taking the inverse
Mellin transform, we get
\begin{equation}
\tilde g_1(x,Q^2)=
\frac{1}{2\pi i}\int dN\,x^{-N}\left[1
+\frac{m^2x^2}{Q^2}
\frac{(N-2)^2(N-1)}{N^2}\right]g_1(N,Q^2)\ ,
\label{g1tmc1ww}
\end{equation}
where we have shifted $N\to N-2$ in the term proportional to $m^2$.
Inverting the Mellin transform we then obtain
\begin{equation}
\tilde g_1(x,Q^2)=g_1(x,Q^2)
+\frac{m^2x^2}{Q^2}
\left[-5g_1(x,Q^2)-x\frac{dg_1(x,Q^2)}{dx}
+\int_x^1\frac{dy}{y}\left(8g_1(y,Q^2)
+4g_1(y,Q^2)\log\frac{x}{y}\right)\right]\ .
\label{g1xWW}
\end{equation}
If instead $g_2=0$,
\begin{equation}
\tilde g_1(x,Q^2)=
\frac{1}{2\pi i}\int dN\,x^{-N}\left[1
+\frac{m^2x^2}{Q^2}\frac{(N^2-4)(N-1)}{N^2}\right]
g_1(N,Q^2)\ ,
\label{g1tmc10}
\end{equation}
whence
\begin{equation}
\tilde g_1(x,Q^2)=g_1(x,Q^2)
+\frac{m^2x^2}{Q^2}
\left[-g_1(x,Q^2)-x\frac{dg_1(x,Q^2)}{dx}
-\int_x^1\frac{dy}{y}\left(4g_1(y,Q^2)
+4g_1(y,Q^2)\log\frac{x}{y}\right)\right].
\label{g1x0}
\end{equation}
The numerical implementation of Eq.~(\ref{g1xWW}) or
Eq.~(\ref{g1x0}) is difficult, because of the presence
of the first derivative of $g_1$ in the correction term.
Therefore, we will include target mass
effects in an iterative way:
we start by performing a fit in which we set $m=0$, and
at each subsequent iteration the target-mass corrected $g_1$ structure function
is computed by means of Eqs.~(\ref{g1xWW}--\ref{g1x0}) using the
$g_1$ obtained in the previous minimization step.
\section{Polarized nucleon structure}
\label{sec:phenoimplications}
The \texttt{NNPDFpol1.0} PDF set may be used for a determination of the first
moments of polarized parton distributions. As briefly summarized in
the introduction, these are the quantities of greatest physical
interest in that they are directly related to the spin structure of
the nucleon, and indeed their determination, in particular the
determination of the first moments of the quark and gluon
distributions, has been the main motivation for the experimental
campaign of $g_1$ measurements. The determination of the isotriplet
first moment, because of the Bjorken sum rule, provides a potentially
accurate and unbiased handle on the strong coupling $\alpha_s$.
\subsection{First moments}
We have computed the first moments
\begin{equation}
\langle \Delta f(Q^2) \rangle \equiv
\int_0^1 dx \, \Delta f(x,Q^2)
\label{eq:moments}
\end{equation}
of each light polarized quark-antiquark and gluon
distribution using a sample of
$N_\mathrm{rep}=100$ \texttt{NNPDFpol1.0}
PDF replicas.
The histogram of the distribution of first moments over the replica
sample at $Q_0^2=1~{\rm
GeV^2}$ are displayed in Fig.~\ref{fig:mom_distr}: they appear to
be reasonably approximated by a
Gaussian.
The central value and one-$\sigma$ uncertainties of the quark first
moments are listed in Tab.~\ref{tab:spin2}, while those of the singlet
quark combination Eq.~(\ref{2}) and the gluon are given in
Tab.~\ref{tab:spin1}.
Results are compared to those from other
parton sets, namely ABFR98~\cite{Altarelli:1998nb},
DSSV10~\cite{deFlorian:2009vb}, AAC08~\cite{Hirai:2008aj},
BB10~\cite{Blumlein:2010rn} and
LSS10~\cite{Leader:2010rb}. Results from other PDF sets are not
available for all combinations and scales, because public codes only
allow for the computation of first moments in a limited $x$ range, in
particular down to a minimum value of $x$: hence we must rely on
published values for the first moments. In particular, the
DSSV and AAC results are shown at $Q_0^2=1~{\rm
GeV^2}$, while the BB and LSS results are shown at $Q^2=4~{\rm
GeV^2}$. For ease of reference, the NNPDF values for both scales are shown
in Tab.~\ref{tab:spin1}.
\begin{figure}[t]
\begin{center}
\epsfig{width=0.47\textwidth,figure=uubar.eps}
\epsfig{width=0.47\textwidth,figure=ddbar.eps}
\epsfig{width=0.47\textwidth,figure=ssbar.eps}
\epsfig{width=0.47\textwidth,figure=gluon.eps}
\caption{\small Distribution of the first moments of
$\Delta u + \Delta\bar{u}$ (top left), $\Delta d + \Delta\bar{d}$ (top right),
$\Delta s + \Delta\bar{s}$ (bottom left) and $\Delta g$ (bottom right)
over a set of $N_\mathrm{rep}=100$ \texttt{NNPDFpol1.0}
PDF replicas.
\label{fig:mom_distr}}
\end{center}
\end{figure}
\begin{table}[h!]
\begin {center}
\small
\begin{tabular}{l||r@{.}l|r@{.}l|r@{.}l|r@{.}l||r@{.}l|r@{.}l|r@{.}l|r@{.}l||r@{.}l|r@{.}l|r@{.}l|r@{.}l}
\hline
& \multicolumn{8}{c||}{$\langle \Delta u +\Delta \bar u \rangle$}
& \multicolumn{8}{c||}{$\langle \Delta d +\Delta \bar d \rangle$}
& \multicolumn{8}{c}{$\langle \Delta s +\Delta \bar s \rangle$}\\
\hline
\hline
& \multicolumn{2}{c|}{\textsl{\rm cv}} & \multicolumn{2}{c|}{\textsl{\rm exp}}
& \multicolumn{2}{c|}{\textsl{\rm th}} & \multicolumn{2}{c||}{\textsl{\rm tot}}
& \multicolumn{2}{c|}{\textsl{\rm cv}} & \multicolumn{2}{c|}{\textsl{\rm exp}}
& \multicolumn{2}{c|}{\textsl{\rm th}} & \multicolumn{2}{c||}{\textsl{\rm tot}}
& \multicolumn{2}{c|}{\textsl{\rm cv}} & \multicolumn{2}{c|}{\textsl{\rm exp}}
& \multicolumn{2}{c|}{\textsl{\rm th}} & \multicolumn{2}{c}{\textsl{\rm tot}} \\
\hline
NNPDFpol1.0
& 0 & 80 & 0 & 08 & \multicolumn{2}{c|}{---} & 0 & 08
& -0 & 46 & 0 & 08 & \multicolumn{2}{c|}{---} & 0 & 08
& -0 & 13 & 0 & 09 & \multicolumn{2}{c|}{---} & 0 & 09 \\
DSSV08~\cite{deFlorian:2008mr}
& 0 & 817 & 0 & 013 & 0 & 008 & 0 & 015
& -0 & 453 & 0 & 011 & 0 & 036 & 0 & 038
& -0 & 110 & 0 & 023 & 0 & 098 & 0 & 101 \\
\hline
\end{tabular}
\end {center}
\caption{\small
First moments of the polarized quark distributions at $Q_0^2=1$
GeV$^2$; cv denotes the central value, while exp and th denote
uncertainties (see text) whose sum in quadrature is given by tot.}
\label{tab:spin2}
\end{table}
In order to compare the results for first moments shown in
Tabs.~\ref{tab:spin2}-\ref{tab:spin1}, it should be understood that
the uncertainties shown, and sometimes also the central values,
have somewhat different meanings. In particular:
\begin{itemize}
\item For \texttt{NNPDFpol1.0} the \textit{exp} uncertainty, determined as
the standard deviation of the replica sample, is a pure PDF
uncertainty: it includes the
propagation of the experimental data uncertainties and the uncertainty
due to the interpolation and extrapolation.
\item In the ABFR98 study, the central values were obtained in the
so-called AB factorization scheme~\cite{Ball:1995td}. While the
gluon in this scheme coincides with the gluon in the $\overline{\rm
MS}$ scheme used here (and thus the value from
Ref.~\cite{Altarelli:1998nb} for the gluon is shown in
Tab.~\ref{tab:spin1}), the quark singlet differs from it. However,
in Ref.~\cite{Altarelli:1998nb} a value of the singlet axial charge
$a_0$ in the limit of infinite $Q^2$ was also given.
In the $\overline{\rm MS}$ scheme,
the singlet axial charge and the first moment of $\Delta\Sigma$
coincide~\cite{Ball:1995td}, hence we have determined $\left\langle\Delta\Sigma\right\rangle$
for ABFR98 by evolving down to $Q^2=1$~GeV$^2$ the value of
$a_0(\infty)$ given in Ref.~\cite{Altarelli:1998nb}, at NLO and with
$\alpha_s(M_Z)=0.118$~\cite{Beringer:1900zz} (the impact of the
$\alpha_s$ uncertainty is negligible). We have checked that
the same result is obtained if $a_0$ is computed as the appropriate
linear combination of $\left\langle \Delta\Sigma\right\rangle$ in the AB scheme and the
first moment of $\Delta g$.
In the ABFR98 study, the
\textit{exp} uncertainty is the Hessian uncertainty on the best fit,
and it thus includes the propagated data uncertainty. The
\textit{th} uncertainty includes the uncertainty originated by
neglected higher orders (estimated by renormalization and
factorization scale variations), higher twists, position of heavy
quark thresholds, value of the strong coupling, violation of SU(3)
(uncertainty on $a_8$ Eq.~(\ref{eq:t8sr})), and finally
uncertainties related to the choice of functional form, estimated by
varying the functional form. This latter source of theoretical
uncertainty corresponds to interpolation and extrapolation
uncertainties which are included in the \textit{exp} for NNPDF.
\item For DSSV08 and BB10 PDFs, the central value is obtained by
computing the first moment integral of the best-fit with a fixed
functional form restricted to the data region, and then
supplementing it with a contribution due to the extrapolation in the
unmeasured (small $x$) region. The \textit{exp} uncertainty in the
table is the Hessian uncertainty given by DSSV08 or BB10 on the
moment in the measured region, and it thus includes the propagated
data uncertainty. In both cases, we have determined the \textit{th}
uncertainty shown in the table as the difference between the full first
moment quoted by DSSV08 or BB10, and the first moment in the
measured region. It is thus the contribution from the extrapolation
region, which we assume to be $100\%$ uncertain. In both cases,
we have computed the truncated first moment in the measured region
using publicly available codes, and checked that it coincides with
the values quoted by DSSV08 and BB10.
\item For AAC08, the central value is obtained by computing the first moment
integral of the best-fit with a fixed functional form, and the
\textit{exp} uncertainty is the Hessian uncertainty on it. However,
AAC08 uses a so-called tolerance~\cite{Pumplin:2002vw} criterion for
the determination of Hessian uncertainties, which rescales the
$\Delta\chi^2=1$ region by a suitable factor, in order to
effectively take into
account interpolation errors as well. Hence, the
\textit{exp} uncertainties include propagated data uncertainties, as
well as uncertainties on the PDF shape.
\item For LSS10, the central value is obtained by computing the first
moment integral of the
best-fit with a fixed functional form, and the \textit{exp} uncertainty is
the Hessian uncertainty on it. Hence it includes the
propagated data uncertainty.
\end{itemize}
\begin{table}[t]
\begin{center}
\begin{tabular}{ll||r@{.}l|r@{.}l|r@{.}l|r@{.}l||r@{.}l|r@{.}l|r@{.}l|r@{.}l}
\hline
& & \multicolumn{8}{c||}{$\langle \Delta\Sigma \rangle$}
& \multicolumn{8}{c}{$\langle \Delta g \rangle$} \\
\hline
\hline
& & \multicolumn{2}{c|}{\textsl{\rm cv}} & \multicolumn{2}{c|}{\textsl{\rm exp}}
& \multicolumn{2}{c|}{\textsl{\rm th}} & \multicolumn{2}{c||}{\textsl{\rm tot}}
& \multicolumn{2}{c|}{\textsl{\rm cv}} & \multicolumn{2}{c|}{\textsl{\rm exp}}
& \multicolumn{2}{c|}{\textsl{\rm th}} & \multicolumn{2}{c}{\textsl{\rm tot}} \\
\hline
NNPDFpol1.0 (1GeV$^2$)
& & 0 & 22 & 0 & 20 & \multicolumn{2}{c|}{---} & 0 & 20
& -1 & 2 & 4 & 2 & \multicolumn{2}{c|}{---} & 4 & 2 \\
NNPDFpol1.0 (4GeV$^2$)
& & 0 & 18 & 0 & 20 & \multicolumn{2}{c|}{---} & 0 & 20
& -0 & 9 & 3 & 9 &
\multicolumn{2}{c|}{---} & 3 & 9 \\
\hline
ABFR98~\cite{Altarelli:1998nb} & & 0 & 12 & 0 & 05 & \multicolumn{2}{c|}{$^{+0.19}_{-0.12}$} & \multicolumn{2}{c||}{$^{+0.19}_{-0.13}$}
& 1 & 6 & 0 & 4 & 0 & 8 & 0 & 9 \\
DSSV08~\cite{deFlorian:2008mr} & & 0 & 255 & 0 & 019 & 0 & 126 & 0 & 127
& -0 & 12 & 0 & 12 & 0 & 06 & 0 & 13 \\
\multirow{2}*{AAC08~\cite{Hirai:2008aj}} & (\textsl{positive})
& 0 & 26 & 0 & 06 & \multicolumn{2}{c|}{---} & 0 & 06
& 0 & 40 & 0 & 28 & \multicolumn{2}{c|}{---} & 0 & 28 \\
& (\textsl{node})
& 0 & 25 & 0 & 07 & \multicolumn{2}{c|}{---} & 0 & 07
& -0 & 12 & 1 & 78 & \multicolumn{2}{c|}{---} & 1 & 78 \\
BB10~\cite{Blumlein:2010rn} & & 0 & 19 & 0 & 08 & 0 & 23 & 0 & 24
& 0 & 46 & 0 & 43 & 0 & 004 & 0 & 43 \\
\multirow{2}*{LSS10~\cite{Leader:2010rb}} & (\textsl{positive})
& 0 & 207 & 0 & 034 & \multicolumn{2}{c|}{---} & 0 & 034
& 0 & 316 & 0 & 190 & \multicolumn{2}{c|}{---} & 0 & 190 \\
& (\textsl{node})
& 0 & 254 & 0 & 042 & \multicolumn{2}{c|}{---} & 0 & 042
& -0 & 34 & 0 & 46 & \multicolumn{2}{c|}{---} & 0 & 46 \\
\hline
\end{tabular}
\end{center}
\caption{\small Same as Tab.~\ref{tab:spin2}, but for the total
singlet quark distribution and the gluon distribution. The NNPDF
results are shown both at $Q_0^2=1$ GeV$^2$ and $Q^2=4~{\rm GeV^2}$; the
ABFR, DSSV and AAC results are shown at $Q_0^2=1$ GeV$^2$, and the
BB10 and LSS10 results at $Q^2=4~{\rm GeV^2}$.}
\label{tab:spin1}
\end{table}
In all cases, the total uncertainty is computed as the sum in quadrature of the
\textit{exp} and \textit{th} uncertainties. Roughly speaking, for
LSS10 this includes only the data uncertainties; for DSSV08, and BB10
it also includes extrapolation uncertainties;
for AAC08 interpolation uncertainties;
for \texttt{NNPDFpol1.0} both extrapolation and
interpolation uncertainties; and for ABFR98 all of the
above, but also theoretical (QCD) uncertainties. For LSS10 and AAC08,
we quote the results obtained from two different fits, one assuming
a positive gluon PDF and one a gluon PDF with a node: their spread gives a feeling for
the missing uncertainty due to the choice of functional form. Note
that the AAC08 results correspond to their Set B which includes,
besides DIS data, also RHIC $\pi^0$ production data; the DSSV08 fit
also includes, on top of these, RHIC jet data and semi-inclusive DIS
data; LSS10 includes, besides DIS, also semi-inclusive DIS data. All other sets are
based on DIS data only.
Coming now to a comparison of results, we see that for the
singlet first moment $\langle
\Delta\Sigma \rangle$ the {\tt NNPDFpol1.0} result is consistent within
uncertainties with that of other groups. The uncertainty on the {\tt
NNPDFpol1.0} result is
comparable to (if somewhat larger than) that found whenever the
extrapolation uncertainty has been included.
For individual quark flavors
(Tab.~\ref{tab:spin2}) we find excellent agreement in the central
values obtained between {\tt NNPDFpol1.0} and DSSV08; the NNPDF
uncertainties are rather larger, but this could also be due to the
fact that the DSSV08 dataset is sensitive to flavour separation.
For the gluon first moment $\langle \Delta g
\rangle$, the {\tt NNPDFpol1.0} result is characterized by an
uncertainty which is much larger than that of any other determination:
a factor of three or four larger than ABFR98 and AAC08, ten times larger
than BB10, and twenty times larger than DSSV08 and LSS10. It is
compatible with zero within this large uncertainty.
We have seen that for the quark singlet, the {\tt NNPDFpol1.0}
uncertainty is similar to that of groups which include an estimate of
extrapolation uncertainties. In order to assess the impact
of the extrapolation uncertainty for the gluon, we have
computed the gluon first truncated moment in the region
$x\in[10^{-3},1]$:
\begin{equation}
\int_{10^{-3}}^1dx\, \Delta g(x, Q^2=1~{\rm GeV^2}) = -0.26 \pm 1.19 \,,
\end{equation}
to be compared with the result of Tab.~\ref{tab:spin1}, whose
uncertainty is larger by almost a factor of four.
We must conclude that the experimental status of the gluon first
moment is still completely uncertain, unless one is willing to make
strong theoretical assumptions on the behaviour of the polarized gluon
at small $x$, and that previous different conclusions were affected by
a significant under-estimate of the impact of the bias in the choice
of functional form, in the data and especially in the extrapolation
region. Because of the large uncertainty related to the extrapolation
region, only low $x$ data can improve this situation, such as those
which could be collected at a high
energy electron-ion collider~\cite{Deshpande:2005wd,Boer:2011fh}.
\subsection{The Bjorken sum rule}
\label{sec:bjorken}
Perturbative factorization, expressed in this context
by Eq.~(\ref{1}) for the structure function $g_1(x,Q^2)$,
and the assumption of exact isospin symmetry, immediately
lead to the so-called Bjorken sum rule (originally
derived~\cite{Bjorken:1966jh,Bjorken:1969mm} using current algebra):
\begin{equation}
\label{eq:bjorken}
\Gamma_1^p\left( Q^2\right) - \Gamma_1^n\left( Q^2\right) = \frac{1}{6}
\Delta C_{\mathrm{NS}} (\alpha_s(Q^2)) a_3 \ ,
\end{equation}
where
\begin{equation}
\label{eq:g1pmomentum}
\Gamma_1^{p,n}(Q^2) \equiv \int_0^1 dx\, g_1^{p,n} (x,Q^2) \ ,
\end{equation}
and $\Delta C_{\mathrm{NS}} (\alpha_s(Q^2))$
is the first moment of the non-singlet coefficient function,
while $a_3$ is defined in Eq.~\eqref{eq:t3sr}.
Because the first moment of the non-singlet coefficient function
$\Delta C_{\rm NS}$ is known up to three loops~\cite{Larin:1991tj} and
isospin symmetry is expected to hold to high accuracy, the Bjorken sum
rule Eq.~(\ref{eq:bjorken}) potentially provides a theoretically very
accurate handle on the strong coupling constant: in principle,
the truncated isotriplet first moment
\begin{equation}
\label{eq:bjorken-cut}
\Gamma_1^{\rm NS}\left( Q^2,x_{\rm min}\right) \equiv
\int_{x_{\rm min}}^1 dx \left[ g_1^p\left( x,Q^2 \right) - g_1^n\left( x,Q^2 \right) \right]
\end{equation}
can be extracted from
the data without any theoretical assumption. Given a measurement of
$\Gamma_1^{\rm NS}\left( Q^2,0\right)$ at one scale the strong coupling can then be
extracted from Eq.~(\ref{eq:bjorken}) using the value of $a_3$ from
$\beta$ decays, while given a measurement of
$\Gamma_1^{\rm NS}\left( Q^2,0\right)$ at two scales both $a_3$ and the value
of $\alpha_s$ can be extracted simultaneously.
In Ref.~\cite{DelDebbio:2009sq}, $a_3$ and $\alpha_s$ were simultaneously determined
from a set of nonsinglet truncated moments (both the first and higher
moments), by exploiting the scale
dependence of the latter~\cite{Forte:1998nw}, with the result
$g_A=1.04\pm0.13$ and $\alpha_s(M_Z)=0.126^{+0.006}_{-0.014}$, where
the uncertainty is dominated by the data, interpolation and
extrapolation, but also includes theoretical (QCD)
uncertainties. In this reference, truncated moments were determined
from a neural network interpolation of existing data, sufficient for
a computation of moments at any scale. However, because the small $x$
behaviour of the structure function is only weakly constrained by
data, the $x\to0$ extrapolation was done by assuming a powerlike
(Regge) behaviour~\cite{Close:1994he}.
\begin{figure}[t!]
\begin{center}
\epsfig{width=0.47\textwidth,figure=fitted_a3.eps}
\epsfig{width=0.47\textwidth,figure=fixed_a3.eps}
\caption{\small The truncated
Bjorken sum rule $\Gamma_1^{\rm NS}\left( Q^2,x\right)$ Eq.~(\ref{eq:bjorken-cut})
plotted as a function of $x$ for $Q^2=1$~GeV$^2$,
for the
fit with free $a_3$ (left) and for the
reference fit with $a_3$ fixed to the value
Eq.~(\ref{eq:a3}) (right). In the left plot,
the shaded band corresponds to the
asymptotic value of the truncated sum rule, Eq.~(\ref{eq:extriplet}),
while in the right plot it corresponds to the
experimental value Eq.~(\ref{eq:a3}).
\label{fig:bjsr}}
\end{center}
\end{figure}
The situation within {\tt NNPDFpol1.0} can be understood by exploiting
the PDF
determination in which $a_3$ is not fixed by the triplet sum
rule, discussed in Sect.~\ref{sec:srres}. Using the results of this
determination, we find
\begin{equation}\label{eq:a3exp}
a_3=\int_0^1dx\,\Delta T_3 (x, Q^2) = 1.19 \pm 0.22\ .
\end{equation}
The uncertainty is about twice that of the determination of
Ref.~\cite{DelDebbio:2009sq}. As mentioned, the latter was obtained
from a neural network parametrization of the data with no theoretical
assumptions, and based on a methodology which is quite close to that
of the {\tt NNPDFpol1.0} PDF determination discussed here, the only
difference being the assumption of Regge behaviour in order to perform
the small $x$ extrapolation. This strongly suggests that, as in the
case of the gluon distribution discussed above, the uncertainty on the
value Eq.~(\ref{eq:a3exp}) is dominated by the small $x$
extrapolation.
To study this, in
Fig.~\ref{fig:bjsr} we plot the value of the truncated Bjorken sum
rule $\Gamma_1^{\rm NS}\left( Q^2,x_{\rm min}\right)$
Eq.~(\ref{eq:bjorken-cut}) as a function
of the lower limit of integration $x_{\rm min}$ at $Q_0^2=1$~GeV$^2$, along
with the asymptotic value
\begin{equation}\label{eq:extriplet}
\Gamma_1^{\rm NS}\left( 1~\hbox{GeV}^2,0 \right)= 0.16 \pm 0.03
\end{equation}
which at NLO corresponds to the value of $a_3$ given by Eq.~(\ref{eq:a3exp}).
As a consistency check, we also show the same plot for our baseline fit,
in which $a_3$ is fixed by the sum rule to the value
Eq.~(\ref{eq:a3}). It is clear that indeed the uncertainty is
completely dominated by the small $x$ extrapolation.
This suggests that a determination of $\alpha_s$ from the Bjorken sum
rule is not competitive unless one is willing to make assumptions on
the small $x$ behaviour of the nonsinglet structure function in the
unmeasured region. Indeed, it is clear that a determination based on
{\tt NNPDFpol1.0} would be affected by an uncertainty which is
necessarily larger than that found in Ref.~\cite{DelDebbio:2009sq},
which is already not competitive. The fact that a determination
of $\alpha_s$ from the Bjorken sum rule is not competitive due to
small $x$ extrapolation ambiguities was already pointed out in
Ref.~\cite{Altarelli:1998nb}, where values of $a_3$ and $\alpha_s$
similar to those of Ref.~\cite{DelDebbio:2009sq} were obtained.
\section{Results}
\label{sec:results}
We now present the main result of this paper, namely the
first determination of a polarized PDF set based on the NNPDF
methodology, \texttt{NNPDFpol1.0}. We will first illustrate the
statistical features of our PDF fit, then compare the
\texttt{NNPDFpol1.0} PDFs to other recent polarized parton
sets~\cite{deFlorian:2009vb,Leader:2010rb,Hirai:2008aj,Blumlein:2010rn}.
We will finally discuss the stability of our results upon the
variation of several
theoretical and methodological assumptions: the treatment of target-mass
corrections, the use of sum rules to fix the triplet and octet axial charges,
the implementation of positivity of PDFs, and the preprocessing of neural
networks and its impact on the small and large $x$ behaviour.
We will not discuss here the way predictions for PDFs and uncertainties
are obtained from NNPDF replica sets, for which we refer to general
reviews, such as Ref.~\cite{Alekhin:2011sk}.
\subsection{Statistical features}
\label{sec:stat_features}
The statistical features of the {\tt NNPDFpol1.0} analysis are summarized in
Tabs.~\ref{tab:chi2tab1}-\ref{tab:chi2tab2}, for the full
dataset and for individual experiments and sets respectively.
\begin{table}[h!]
\begin{center}
\small
\input{est1_ref}
\end{center}
\caption{\small Statistical estimators for {\tt NNPDFpol1.0} with
$N_\mathrm{rep}=100$ replicas.}
\label{tab:chi2tab1}
\end{table}
\begin{table}[t]
\centering
\input{est2_ref}
\caption{\small Same as Tab.~\ref{tab:chi2tab1} but for individual experiments.}
\label{tab:chi2tab2}
\end{table}
The error function $\langle E\rangle$ Eq.~(\ref{eq:errfun}) shown in
the tables both for the total, training and validation datasets
is the figure of merit for the quality of the fit of each PDF replica
to the corresponding data replica. The quantity which is actually
minimized during the neural network training is this figure of merit
for the training set, supplemented by weighting in the early stages of
training according to Eq.~(\ref{eq:weight_errfun}) and by a Lagrange
multiplier to enforce positivity according to Eq.~(\ref{eq:lagrmult}).
In the table we also show the average over all replicas $\left\langle
\chi_{\mathrm{tot}}^{2(k)}\right\rangle$ of $\chi_{\mathrm{tot}}^{2(k)}$
computed for the $k$-th replica, which coincides with the figure of
merit Eq.~(\ref{eq:weight_errfun}), but with the data replica $g_I^{\rm (art)(k)}$
replaced by the experimental data $g_I^{\rm (dat)}$. We finally show
$\chi^2_{\mathrm {tot}}$, which coincides with the figure of
merit Eq.~(\ref{eq:weight_errfun}), but again with $g_I^{\rm
(art)(k)}$ replaced by $g_I^{\rm (dat)}$, and also with
$g_I^{(\mathrm{net})(k)}$ replaced by $\left\langle
g_I^{(\mathrm{net})(k)}\right\rangle$, i.e. the average of the observable over
replicas, which provides our best prediction.
The average
number of iterations of the genetic algorithm at stopping, $\left\langle \rm TL
\right\rangle$, is also given in this table.
\begin{figure}[t]
\begin{center}
\epsfig{width=0.49\textwidth,figure=chi2pdfs.eps}
\epsfig{width=0.49\textwidth,figure=ertot.eps}
\caption{\small Distribution of $\chi^{2(k)}$ and $E_{\mathrm{tr}}^{(k)}$ over
the sample of $N_{\mathrm{rep}}=100$ replicas.}
\label{fig:chi2-Etr-distr}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{width=0.49\textwidth,figure=tl.eps}
\caption{\small Distribution of training lengths over the sample of
$N_{\mathrm{rep}}=100$ replicas.}
\label{fig:TL-distr}
\end{center}
\end{figure}
The distribution of $\chi^{2(k)}$, $E_{\mathrm{tr}}^{(k)}$, and
training lengths among the $N_{\mathrm{rep}}=100$ replicas are shown
in Fig.~\ref{fig:chi2-Etr-distr} and Fig.~\ref{fig:TL-distr}
respectively. Note that the latter has
a long
tail which causes an accumulation of points at the maximum
training length, $N_{\mathrm{gen}}^{\mathrm{max}}$. This means that
there is a fraction of replicas that do not
fulfill the stopping criterion.
This may cause a loss in accuracy in outlier fits,
which however make up fewer than $10\%$ of the total sample.
The features of the fit can be summarized as follows:
\begin{itemize}
\item The quality of the central fit, as measured by its
$\chi_{\mathrm{tot}}^{2}=0.77$, is good. However, this value should
be taken with care in view of the fact that uncertainties for all
experiments but two are overestimated because the
covariance matrix is not available and thus correlations between
systematics cannot be properly accounted for. This explains the
value lower than one for this quantity, which would be very unlikely
if correlations had been included.
\item The values of $\chi_{\mathrm{tot}}^{2}$ and $\left\langle E \right\rangle$ differ
by approximately one unit. This is due to the fact that replicas
fluctuate within their uncertainty about the experimental data, which in
turn are gaussianly distributed about a true
value~\cite{Forte:2002fg}: it shows that the neural net is correctly
reproducing the underlying law, thus being closer to the true
value. This is confirmed by the fact that $\left\langle \chi^{2(k)}\right\rangle$ is of
order one.
\begin{figure}[t]
\begin{center}
\epsfig{width=0.8\textwidth,figure=chi2histo.eps}
\caption{\small Value of the $\chi^2$ per data
point for the datasets
included in the {\tt NNPDFpol1.0} reference fit, listed in Tab.~\ref{tab:chi2tab2}.
The horizontal line is
the unweighted average of these $\chi^2$ over the datasets and
the black dashed lines give the one-sigma interval about it.}
\label{fig:chi2-dist}
\end{center}
\end{figure}
\item The distribution of $\chi^2$ for different experiments (also
shown as a histogram in Fig.~\ref{fig:chi2-dist}) shows sizable
differences, and indeed the standard deviation (shown as a dashed
line in the plot) about the mean (shown as a solid line) is very
large. This can be understood as a consequence of the lack of
information on the covariance matrix: experiments where
large correlated uncertainties are treated as uncorrelated will
necessarily have a smaller value of the $\chi^2$.
\end{itemize}
\subsection{Parton distributions}
\label{sec:pdfs}
The {\tt NNPDFpol1.0} parton distributions, computed from a set of
$N_{\mbox{\scriptsize{rep}}}=100$ replicas, are displayed in
Fig.~\ref{fig:ppdfs-100} at the input scale $Q_0^2=1$ GeV$^2$, in the
PDF parametrization basis Eq.~(\ref{eq:PDFbasisnets}) as a function of
$x$ both on a logarithmic and linear scale. In
Figs.~\ref{fig:ppdfs3}-\ref{fig:ppdfs2} the same PDFs are plotted in
the flavour basis, and compared to other available NLO PDF sets:
BB10~\cite{Blumlein:2010rn} and AAC08~\cite{Hirai:2008aj} in
Fig.~\ref{fig:ppdfs3}, and DSSV08~\cite{deFlorian:2009vb} in
Fig.~\ref{fig:ppdfs2}. We do not show a direct comparison to the LSS
polarized PDFs~\cite{Leader:2010rb} because there are no publicly
available routines for the computation of PDF uncertainties for this
set. Note that the dataset used for the BB10 determination contains
purely DIS data, and that for
AAC contains DIS supplemented by some high-$p_T$ RHIC pion production data:
hence they are directly comparable to our PDF determination. The
DSSV08 determination instead includes, on top of DIS data, polarized
jet production data, and, more importantly, a large amount of
semi-inclusive DIS data which in particular allow for
flavour-antiflavour separation and a more direct handle on
strangeness. All uncertainties in these plots correspond to the
nominal 1--$\sigma$ error bands.
\begin{figure}[p]
\begin{center}
\epsfig{width=0.43\textwidth,figure=xSinglet_Q2_1_log_alone.eps}
\epsfig{width=0.43\textwidth,figure=xSinglet_Q2_1_lin_alone.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_log_alone.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_lin_alone.eps}
\epsfig{width=0.43\textwidth,figure=xT3_Q2_1_log_alone.eps}
\epsfig{width=0.43\textwidth,figure=xT3_Q2_1_lin_alone.eps}
\epsfig{width=0.43\textwidth,figure=xT8_Q2_1_log_alone.eps}
\epsfig{width=0.43\textwidth,figure=xT8_Q2_1_lin_alone.eps}
\caption{\small The {\tt NNPDFpol1.0} polarized parton distributions at
$Q_0^2=1$ GeV$^2$ in the parametrization basis plotted as a function of $x$,
on a logarithmic (left) and linear (right) scale.}
\label{fig:ppdfs-100}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\epsfig{width=0.43\textwidth,figure=xup_Q2_1_loglha_others.eps}
\epsfig{width=0.43\textwidth,figure=xup_Q2_1_linlha_others.eps}
\epsfig{width=0.43\textwidth,figure=xdp_Q2_1_loglha_others.eps}
\epsfig{width=0.43\textwidth,figure=xdp_Q2_1_linlha_others.eps}
\epsfig{width=0.43\textwidth,figure=xsp_Q2_1_loglha_others.eps}
\epsfig{width=0.43\textwidth,figure=xsp_Q2_1_linlha_others.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_loglha_others.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_linlha_others.eps}
\caption{\small Comparison of the {\tt NNPDFpol1.0} PDFs (in the
flavour basis) and the
BB10~\cite{Blumlein:2010rn} and AAC08~\cite{Hirai:2008aj}
PDFs. \label{fig:ppdfs3}}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\epsfig{width=0.43\textwidth,figure=xup_Q2_1_loglha_dssv.eps}
\epsfig{width=0.43\textwidth,figure=xup_Q2_1_linlha_dssv.eps}
\epsfig{width=0.43\textwidth,figure=xdp_Q2_1_loglha_dssv.eps}
\epsfig{width=0.43\textwidth,figure=xdp_Q2_1_linlha_dssv.eps}
\epsfig{width=0.43\textwidth,figure=xsp_Q2_1_loglha_dssv.eps}
\epsfig{width=0.43\textwidth,figure=xsp_Q2_1_linlha_dssv.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_loglha_dssv.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_linlha_dssv.eps}
\caption{\small Comparison of the {\tt NNPDFpol1.0} PDFs (in the
flavour basis) and the DSSV08 PDFs~\cite{deFlorian:2009vb}.
\label{fig:ppdfs2}}
\end{center}
\end{figure}
The main conclusions of this comparison are the following:
\begin{itemize}
\item The central values of the $\Delta u + \Delta\bar{u}$ and
$\Delta d + \Delta\bar{d}$ distributions are in
reasonable agreement with those of other parton sets. The {\tt NNPDFpol1.0}
results are in best agreement with DSSV08, in slightly worse
agreement with AAC08, and in worst agreement with
BB10. Uncertainties on these PDFs are generally slightly larger for
NNPDF than for other sets, especially DSSV, which however is based
on a much wider dataset.
\item The {\tt NNPDFpol1.0} determination of $\Delta s +
\Delta\bar{s}$ is affected by a much larger uncertainty than BB10
and AAC08, for almost all
values of $x$. The AAC08 and BB10 strange PDFs fall well within the
{\tt NNPDFpol1.0} uncertainty band.
\item The {\tt NNPDFpol1.0} determination of $\Delta s +
\Delta\bar{s}$ is inconsistent at the two sigma level
in the medium-small $x\sim0.1$ region with DSSV08, which is also
rather more accurate, as one would expect as it includes
semi-inclusive data (in particular for production of hadrons with
strangeness). This suggests a tension between the analysis of
inclusive data and that of semi-inclusive data.
\item The gluon PDF is affected by a large uncertainty,
rather larger than in any other set, especially at small $x$. In
particular, the {\tt NNPDFpol1.0} polarized gluon distribution
is compatible with zero for all values of $x$.
\item Uncertainties on the PDFs in the regions where no data are available
tend to be larger than those of other sets. At very large
values of $x$ the PDF uncertainty band is largely determined by the
positivity constraint.
\end{itemize}
Finally, in Fig.~\ref{fig:g1} we compare
the
structure function $g_1(x,Q^2)$ for proton,
deuteron and neutron, computed using {\tt NNPDFpol1.0} (with
its one-$\sigma$ uncertainty band) to the experimental data included in
the fit. Experimental data are grouped in bins of $x$
with a logarithmic spacing, while the NNPDF
prediction and its uncertainty are computed at the central value of
each bin.
The uncertainty band in the {\tt NNPDFpol1.0} result is typically
smaller than the experimental errors, except at small $x$, where a much
more restricted dataset is available; in that region, the
uncertainties are comparable. Scaling violations of the polarized
structure functions are clearly visible, especially for $g_1^p$,
despite the limited range in $Q^2$.
\begin{figure}[p]
\begin{center}
\epsfig{width=0.4\textwidth,figure=expdata_g1p.eps}
\epsfig{width=0.4\textwidth,figure=expdata_g1d.eps}
\epsfig{width=0.4\textwidth,figure=expdata_g1n.eps}
\caption{\small The proton, deuteron and neutron
polarized structure functions $g_1(x,Q^2)$ as functions of $Q^2$ in
different bins of $x$ compared to experimental data.
Experimental data are grouped in bins of $x$, while
\texttt{NNPDFpol1.0} results are given at the center of each bin,
whose value is given next to each curve. In order to improve
legibility,
the values of $g_1(x,Q^2)$
have been shifted by the
amount given next to each curve.
\label{fig:g1}}
\end{center}
\end{figure}
\subsection{Stability of the results}
\label{sec:stab}
Our results have been obtained with a number of theoretical
and methodological assumptions, discussed in
Sects.~\ref{sec:polpdfs}-\ref{sec:minim}. We will now test the
stability of our results upon variation of these assumptions.
\subsubsection{Target-mass corrections and $g_2$.}
\label{sec:tmcres}
We have consistently included in our determination of $g_1$
corrections suppressed by powers of the nucleon mass which are of
kinematic origin. Thus in particular,
as explained in Sect.~\ref{sec:tmc}, we have included
target-mass corrections (TMCs) up to first order in
${m^2}/{Q^2}$. Furthermore, both TMCs and the relation between the
measured asymmetries and the structure function $g_1$ involve
contributions to the structure
function $g_2$ proportional to powers of ${m^2}/{Q^2}$ which we
include according to Eq.~(\ref{eq:g1tog2}) or Eq.~(\ref{eq:g1tog2p})
(see the discussion in Sect.~\ref{sec:datasetl}).
Our default PDF set is obtained
assuming that $g_2$ is given by the Wandzura-Wilczek relation,
Eq.~(\ref{eq:wwrel}).
In order to assess the impact of these assumptions on our results, we
have performed two more PDF determinations. In the first, we set $m=0$
consistently everywhere, both in the extraction of the structure functions
from the
asymmetry data and in our computation of structure functions.
This thus removes TMCs, and also contributions
proportional to $g_2$. In the second, we retain
mass effects, but we assume $g_2=0$.
The statistical
estimators for each of these three fits over the full dataset are
shown in Tab.~\ref{tab:tmc_estimators}. Clearly, all fits
are of comparable quality.
\begin{table}[t]
\centering
\small
\begin{tabular}{|c|c|c|c|}
\hline
Fit & {\tt NNPDFpol1.0} $g_2=g_2^{\mbox{\tiny WW}}$ & {\tt NNPDFpol1.0} $m=0$ & {\tt NNPDFpol1.0} $g_2=0$ \\
\hline
\hline
$\chi^{2}_{\rm tot}$ & 0.77 & 0.78 & 0.75 \\
$\left\langle E \right\rangle \pm \sigma_{E}$ & 1.82 $\pm$ 0.18 & 1.81 $\pm$ 0.16 & 1.83 $\pm$ 0.15 \\
$\left\langle E_{\rm tr} \right\rangle \pm \sigma_{E_{\rm tr}}$ & 1.66 $\pm$ 0.49 & 1.62 $\pm$ 0.50 & 1.70 $\pm$ 0.38\\
$\left\langle E_{\rm val} \right\rangle \pm \sigma_{E_{\rm val}}$ & 1.88 $\pm$ 0.67 & 1.84 $\pm$ 0.70 & 1.96 $\pm$ 0.56\\
\hline
$\left\langle \chi^{2(k)} \right\rangle \pm \sigma_{\chi^{2}}$ & 0.91 $\pm$ 0.12 & 0.90 $\pm$ 0.09 & 0.86 $\pm$ 0.09 \\
\hline
\end{tabular}
\caption{\small The statistical estimators of Tab.~\ref{tab:chi2tab1}
(obtained
assuming $g_2=g_2^{\mbox{\tiny WW}}$) compared to a fit with $m=0$ or
with $g_2=0$.}
\label{tab:tmc_estimators}
\end{table}
Furthermore, in
Fig.~\ref{fig:TMC_comparison} we compare the PDFs at
the initial scale $Q_0^2$ determined in these fits to our default set:
differences are hardly visible.
This comparison can be made more quantitative by using the distance
$d(x,Q^2)$ between different fits, as defined in Appendix A of
Ref.~\cite{Ball:2010de}. The distance is defined in such a way that if we
compare two different samples of $N_{\rm rep}$ replicas each extracted
from the same distribution, then on average $d=1$, while if the two
samples are extracted from two distributions which differ by one
standard deviation, then on average $d=\sqrt{N_{\rm rep}}$ (the
difference being due to the fact that the standard deviation of the
mean scales as $1/\sqrt{N_{\rm rep}}$).
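As a concrete illustration, the following minimal Python sketch
computes this distance for the central values of two replica
ensembles; the precise normalization, together with the analogous
estimator for uncertainties, is given in Appendix A of
Ref.~\cite{Ball:2010de}, so this sketch (with variable names of our
own choosing) is only meant to convey the idea:
\begin{verbatim}
import numpy as np

def distance_central(reps_a, reps_b):
    """Distance between the central values of two replica ensembles.

    reps_a, reps_b: arrays of shape (N_rep, n_x), one PDF replica per
    row, sampled on a common grid of x points at fixed Q^2.
    Returns d(x) at each grid point.
    """
    n_a, n_b = len(reps_a), len(reps_b)
    mean_a, mean_b = reps_a.mean(axis=0), reps_b.mean(axis=0)
    # the variance of the mean scales as 1/N_rep, hence d ~ 1 for two
    # samples drawn from the same distribution, and d ~ sqrt(N_rep)
    # for two distributions one standard deviation apart
    var_of_mean = (reps_a.var(axis=0, ddof=1) / n_a
                   + reps_b.var(axis=0, ddof=1) / n_b)
    return np.abs(mean_a - mean_b) / np.sqrt(var_of_mean)
\end{verbatim}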
The distances $d(x,Q^2)$
between central values and uncertainties of the three fits
of Tab.~\ref{tab:tmc_estimators} are shown in
Fig.~\ref{fig:distances_noTMCs}. They never exceed $d=4$, which means less
than half a standard deviation for $N_{\rm rep}=100$.
It is interesting to observe that distances tend to be larger in the
large-$x$ region, where the expansion in powers of $m^2/Q^2$ is less
accurate, and the effects of dynamical higher twists can become relevant.
It is reassuring that even in this region the distances are
reasonably small.
We conclude that inclusive DIS data, with our kinematic cuts, do not show
sensitivity to finite nucleon mass effects, either in terms of fit quality or in
terms of the effect on PDFs.
\begin{figure}[t]
\begin{center}
\epsfig{width=0.43\textwidth,figure=xSinglet_Q2_1_log_TMC.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_log_TMC.eps}
\epsfig{width=0.43\textwidth,figure=xT3_Q2_1_log_TMC.eps}
\epsfig{width=0.43\textwidth,figure=xT8_Q2_1_log_TMC.eps}
\caption{\small Comparison between the default \texttt{NNPDFpol1.0}
PDFs (labeled as $g_2=g_2^{\mbox{\tiny WW}}$ in the plot),
PDFs with $m=0$ (labeled as noTMCs in the plot) and PDFs with $g_2=0$;
each corresponds to the statistical estimators of Tab.~\ref{tab:tmc_estimators}.}
\label{fig:TMC_comparison}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\epsfig{width=0.80\textwidth,figure=distances_0vsnoTMC.eps}\\
\epsfig{width=0.80\textwidth,figure=distances_WWvsnoTMC.eps}\\
\epsfig{width=0.80\textwidth,figure=distances_0vsWW.eps}
\caption{\small Distances between each pair of the three sets of PDFs
shown in Fig.~\ref{fig:TMC_comparison}.}
\label{fig:distances_noTMCs}
\end{center}
\end{figure}
\subsubsection{Sum rules}
\label{sec:srres}
Our default PDF fit is obtained by assuming that
the triplet axial charge $a_3$ is fixed to its value extracted from
$\beta$ decay,
Eq.~(\ref{eq:a3}), and that the octet axial charge $a_8$ is fixed to
the value of $a_8$ determined from baryon octet
decays, but with an inflated uncertainty in order to allow for SU(3) violation,
Eq.~(\ref{eq:a8p}). As discussed after Eq.~(\ref{eq:sumrules1}),
uncertainties on them are included by randomizing their values among replicas.
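Concretely, this randomization amounts to drawing one value of each
axial charge per replica, assuming a Gaussian distribution of the
measured values; a minimal sketch (with hypothetical variable names,
the central values and uncertainties being those of
Eqs.~(\ref{eq:a3}) and~(\ref{eq:a8p})) could read:
\begin{verbatim}
import numpy as np

def sample_axial_charges(n_rep, a3, sig_a3, a8, sig_a8, seed=None):
    """Draw one (a3, a8) pair per replica, so that the experimental
    uncertainties on the axial charges propagate into the spread of
    the replica sample."""
    rng = np.random.default_rng(seed)
    return (rng.normal(a3, sig_a3, n_rep),
            rng.normal(a8, sig_a8, n_rep))
\end{verbatim}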
In order to test the impact of these assumptions, we have produced two
more PDF determinations. In the first, we have not imposed the triplet
sum rule Eq.~(\ref{eq:t3sr}), so in particular $a_3$ is free and
determined by the data,
instead of being fixed to the value Eq.~(\ref{eq:a3}). In the
second, we have assumed that the uncertainty on $a_8$ is given by
the much smaller value of Eq.~(\ref{eq:a8}).
\begin{table}[t]
\centering
\small
\begin{tabular}{|c|c|c|}
\hline
Fit & free $a_3$ & $a_8$ Eq.~(\ref{eq:a8}) \\
\hline
\hline
$\chi^{2}_{\rm tot}$ & 0.79 & 0.77 \\
$\left\langle E \right\rangle \pm \sigma_{E}$ & 1.84 $\pm$ 0.19 & 1.86 $\pm$ 0.19 \\
$\left\langle E_{\rm tr} \right\rangle \pm \sigma_{E_{\rm tr}}$ & 1.73 $\pm$ 0.41 & 1.66 $\pm$ 0.53 \\
$\left\langle E_{\rm val} \right\rangle \pm \sigma_{E_{\rm val}}$ & 1.93 $\pm$ 0.58 & 1.87 $\pm$ 0.71 \\
\hline
$\left\langle \chi^{2(k)} \right\rangle \pm \sigma_{\chi^{2}}$ & 0.93 $\pm$ 0.12 & 0.92 $\pm$ 0.15 \\
\hline
\end{tabular}
\caption{\small The statistical estimators of Tab.~\ref{tab:chi2tab1}, but for
fits in which the triplet sum rule is not imposed (free $a_3$) or
in which the octet sum rule is imposed with the smaller uncertainty
Eq.~(\ref{eq:a8}).}
\label{tab:sr_estimators}
\end{table}
The statistical estimators for the total dataset for each of these fits
are shown in Tab.~\ref{tab:sr_estimators}. Here too, there is no
significant difference in fit quality between these fits and the default.
The distances between PDFs in the default and the free $a_3$
fits are displayed in Fig.~\ref{fig:distances_a3}. As one may expect,
only the triplet is affected significantly:
the central value is shifted by about $d \sim
5$, i.e. about half-$\sigma$, in the region $x\sim
0.3$, where $x\Delta T_3$ has a maximum, and also around $x\sim
0.01$. The uncertainties on the PDFs are very similar in both cases for
all PDFs, except $\Delta T_3$ at small-$x$: in this case, removing the
$a_3$ sum rule results in a moderate increase of the uncertainties;
the effect is otherwise negligible.
The singlet and triplet PDFs for these two fits are compared
in Fig.~\ref{fig:fit_a3}.
\begin{figure}[t]
\begin{center}
\epsfig{width=0.80\textwidth,figure=distances_a3.eps}
\caption{\small Distances between PDFs (central values and
uncertainties) for the
default fit, with $a_3$ fixed, and the fit with free $a_3$,
computed using $N_{\mbox{\tiny rep}}=100$
replicas from each set.}
\label{fig:distances_a3}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{width=0.43\textwidth,figure=xSinglet_Q2_1_log_a3}
\epsfig{width=0.43\textwidth,figure=xT3_Q2_1_log_a3}
\caption{\small Comparison of the singlet and triplet PDFs for the
default fit, with $a_3$ fixed, and the fit with free $a_3$.}
\label{fig:fit_a3}
\end{center}
\end{figure}
The
distances between the default and the fit with the smaller uncertainty
on $a_8$, Eq.~(\ref{eq:a8}),
are shown in Fig.~\ref{fig:distances_a8}. In this case, again as
expected, the only effect is on the $\Delta T_8$ uncertainty, which changes
in the region $10^{-2}\lesssim x \lesssim 10^{-1}$
by up to $d\sim 6$ (about half a standard deviation): if a more
accurate value of $a_8$ is assumed, the determined $\Delta T_8$
is correspondingly more accurate. Central values are unaffected.
The singlet and octet PDFs for this fit are compared to the default
in Fig.~\ref{fig:fit_a8}. We conclude that the size of the uncertainty
on $\Delta T_8$ has a moderate effect on our fit; on the other hand
it is clear that if the octet sum rule were not imposed at all, the
uncertainty on the octet and thus on strangeness would increase very
significantly, as we have checked explicitly.
\begin{figure}[t]
\begin{center}
\epsfig{width=0.80\textwidth,figure=distances_a8.eps}
\caption{\small Distances between PDFs (central values and
uncertainties) for the default fit, with $a_8$ Eq.~(\ref{eq:a8p}),
and the fit with the value of $a_8$ with smaller uncertainty,
Eq.~(\ref{eq:a8}).}
\label{fig:distances_a8}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{width=0.43\textwidth,figure=xSinglet_Q2_1_log_a8}
\epsfig{width=0.43\textwidth,figure=xT8_Q2_1_log_a8}
\caption{\small Comparison of the singlet and octet PDFs for the
default fit, with $a_8$ Eq.~(\ref{eq:a8p}), and the fit with the
value of $a_8$ with smaller uncertainty, Eq.~(\ref{eq:a8}).}
\label{fig:fit_a8}
\end{center}
\end{figure}
We conclude that our fit results are quite stable upon variations of our treatment
of both the triplet and the octet sum rules.
\subsection{Positivity}
\label{sec:thconstraints}
As discussed in Sect.~\ref{sec:minim}, positivity of the individual
cross sections entering the polarized asymmetries
Eq.~(\ref{eq:xsecasy}) has been imposed at leading order according to
Eq.~(\ref{eq:possigma}), using the \texttt{NNPDF2.1 NNLO} PDF set~\cite{Ball:2011mu}, separately for the lightest polarized quark
PDF combinations $\Delta u + \Delta\bar{u}$, $\Delta d +
\Delta\bar{d}$, $\Delta s + \Delta\bar{s}$ and for the polarized gluon
PDF, by means of a Lagrange multiplier Eq.~(\ref{eq:lagrmult}). After
stopping, positivity is checked a posteriori and replicas which do not
satisfy it are discarded and retrained.
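A minimal sketch of this a posteriori check, for a single replica and
a single flavour combination (array names are ours; the unpolarized
bound is evaluated from the \texttt{NNPDF2.1 NNLO} set), could read:
\begin{verbatim}
import numpy as np

def violates_positivity(delta_q, q_bound, tol=0.0):
    """LO positivity check on a grid of (x, Q^2) points.

    delta_q: polarized combination for one replica,
             e.g. Delta u + Delta ubar
    q_bound: corresponding unpolarized combination (the bound)
    Returns True if |delta_q| exceeds the bound anywhere on the grid.
    """
    return np.any(np.abs(delta_q) > q_bound + tol)

# replicas failing the check for any of the four combinations are
# discarded and retrained
\end{verbatim}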
In Fig.~\ref{fig:pdfposconstr} we compare to the positivity bound for
the up, down, strange PDF combinations and
gluon PDF a set of $N_{\mbox{\tiny rep}}=100$ replicas obtained by enforcing positivity through a
Lagrange multiplier, but before the final, \textit{a posteriori} check.
Almost all replicas satisfy the constraint, but at least one replica
that clearly violates it for the
$s+\bar{s}$ combination (and thus will be discarded) is visible.
\begin{figure}[t]
\begin{center}
\epsfig{width=0.43\textwidth,figure=positivity_uubar.eps}
\epsfig{width=0.43\textwidth,figure=positivity_ddbar.eps}
\epsfig{width=0.43\textwidth,figure=positivity_ssbar.eps}
\epsfig{width=0.43\textwidth,figure=positivity_gluon.eps}
\caption{\small The positivity bound Eq.~(\ref{eq:possigma}), compared
to a set of $N_{\mbox{\tiny rep}}=100$ replicas (dashed
lines).}
\label{fig:pdfposconstr}
\end{center}
\end{figure}
In order to assess the effect of the positivity
constraints we have performed a
fit without imposing positivity. Because positivity significantly
affects PDFs in the region where no data are available, and
thus in particular their large $x$ behaviour, preprocessing exponents for this
PDF determination had to be determined again using the procedure
described in Sect.~\ref{sec:net-param}. The values of the large $x$
preprocessing exponents used in the fit without positivity are shown
in Tab.~\ref{tab:prepexpsnopos}. The small $x$
exponents are the same as in the
baseline fit, Tab.~\ref{tab:prepexps}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|c}
\hline PDF & $m$ \\
\hline
$\Delta\Sigma(x, Q_0^2)$ & $\left[ 0.5, 5.0 \right]$ \\
\hline
$\Delta g(x, Q_0^2)$ & $\left[ 0.5, 5.0 \right]$\\
\hline
$\Delta T_3(x, Q_0^2)$ & $\left[ 0.5, 4.0 \right]$\\
\hline
$\Delta T_8(x, Q_0^2)$ & $\left[ 0.5, 6.0 \right]$ \\
\hline
\end{tabular}
\caption{\small \label{tab:prepexpsnopos} Ranges for the
large $x$
preprocessing exponents Eq.~(\ref{eq:PDFbasisnets})
for the fit in which no positivity
is imposed. The small $x$ exponents are the same as in the
baseline fit Tab.~\ref{tab:prepexps}.}
\end{center}
\end{table}
The corresponding estimators are
shown in Tab.~\ref{tab:pos_estimators}. Also in this case, we see no
significant change in fit quality, with only a slight improvement in
$\chi^2_{\rm tot}$ when the constraint is removed. This shows that
our PDF parametrization is flexible enough to easily accommodate positivity.
On the other hand, clearly the positivity bound has a significant
impact on PDFs, especially in the large $x$ region, as shown in
Fig.~\ref{fig:pdfposbench}, where PDFs obtained from this fit are
compared to the baseline.
At small $x$, instead, the impact of positivity is moderate, because,
as discussed in Sect.~\ref{sec:thconst}, $g_1/F_1\sim x$ as
$x\to0$~\cite{Ball:1995ye}, so there is no constraint in this
limit. This in particular implies that there is no significant loss of
accuracy in imposing the LO positivity bound: in the small-$x$ region,
$x\lesssim10^{-2}$, where the LO and NLO
positivity bounds differ
significantly~\cite{Forte:1998kd}, the bound has no significant effect.
\begin{table}[t]
\centering
\small
\begin{tabular}{|c|c|}
\hline
Fit & {\tt NNPDFpol1.0} no positivity \\
\hline
\hline
$\chi^{2}_{\rm tot}$ & 0.72\\
$\left\langle E \right\rangle \pm \sigma_{E}$ & 1.84 $\pm$ 0.22\\
$\left\langle E_{\rm tr} \right\rangle \pm \sigma_{E_{\rm tr}}$ & 1.60 $\pm$ 0.20\\
$\left\langle E_{\rm val} \right\rangle \pm \sigma_{E_{\rm val}}$ & 2.07 $\pm$ 0.39\\
\hline
$\left\langle \chi^{2(k)} \right\rangle \pm \sigma_{\chi^{2}}$ & 0.95 $\pm$ 0.16 \\
\hline
\end{tabular}
\caption{\small The statistical estimators of Tab.~\ref{tab:chi2tab1}
for a fit without positivity constraints.}
\label{tab:pos_estimators}
\end{table}
\begin{figure}[p]
\begin{center}
\epsfig{width=0.43\textwidth,figure=xup_Q2_1_linlha_positivity.eps}
\epsfig{width=0.43\textwidth,figure=xdp_Q2_1_linlha_positivity.eps}
\epsfig{width=0.43\textwidth,figure=xsp_Q2_1_linlha_positivity.eps}
\epsfig{width=0.43\textwidth,figure=xg_Q2_1_lin_pos.eps}
\caption{\small The \texttt{NNPDFpol1.0} PDFs with and without
positivity constraints compared at the initial parametrization scale
$Q_0^2=1$ GeV$^2$ in the flavor basis. \label{fig:pdfposbench}}
\end{center}
\end{figure}
\subsection{Small- and large-$x$ behaviour and preprocessing}
\label{sec:prepexp}
The asymptotic behavior of both polarized and unpolarized
PDFs for $x$ close to 0 or 1 is not controlled by perturbation theory,
because powers of $\ln\frac{1}{x}$ and $\ln(1-x)$
respectively appear in the perturbative coefficients, thereby spoiling
the reliability of the perturbative expansion close to the endpoints.
Non-perturbative effects are also expected to set in eventually (see
e.g.~\cite{Roberts:1990ww,Ball:1995ye}). For this reason,
our fitting procedure makes no assumptions on the large- and small-$x$
behaviors of PDFs, apart from the positivity and integrability constraints
discussed in the previous Section.
It is however necessary to check that no bias is introduced by the
preprocessing. We do this following the iterative method
described in Sect.~\ref{sec:net-param}. The outcome of the procedure
is the set of exponents Eq.~(\ref{eq:PDFbasisnets}), listed in
Tab.~\ref{tab:prepexps}. The lack of bias with these choices is
explicitly demonstrated in Fig.~\ref{fig:prep}, where we plot the 68\%
confidence level of the distribution of
\begin{align}
&\alpha[\Delta q(x,Q^2)]=\frac{\ln \Delta q(x,Q^2)}{\ln\frac{1}{x}}\mbox{ ,}
\label{eq:exp2}
\\
&\beta[\Delta q(x,Q^2)]=
\frac{ \ln \Delta q(x,Q^2) }{\ln(1-x)}\mbox{ ,}
\label{eq:exp1}
\end{align}
$\Delta q=\Delta\Sigma\mbox{, }\Delta g\mbox{, }\Delta T_3\mbox{, }\Delta T_8$, for the default \texttt{NNPDFpol1.0} $N_{\rm rep}=100$ replica
set, at $Q^2=Q_0^2=1$~GeV$^2$, and compare them to the ranges of
Tab.~\ref{tab:prepexps}.
It is apparent that as the endpoints
$x=0$ and $x=1$ are approached, the uncertainties on both
the small-$x$ and the large-$x$ exponents lie well within the range
of the preprocessing exponents for all PDFs, thus confirming
that the latter do not introduce any bias.
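For reference, the effective exponents
Eqs.~(\ref{eq:exp2}-\ref{eq:exp1}) and their 68\% band can be obtained
from a replica set with a few lines of Python (a sketch with variable
names of our own choosing; where a replica changes sign we take
$|\Delta q|$ inside the logarithm):
\begin{verbatim}
import numpy as np

def effective_exponents(x, delta_q):
    """x: grid of x values strictly inside (0, 1);
    delta_q: replicas on that grid, shape (N_rep, len(x)).
    Returns (alpha, beta), each of shape (N_rep, len(x))."""
    logq = np.log(np.abs(delta_q))
    alpha = logq / np.log(1.0 / x)  # effective small-x exponent
    beta = logq / np.log(1.0 - x)   # effective large-x exponent
    return alpha, beta

# 68% confidence band from replica percentiles, e.g.:
# lo, hi = np.percentile(alpha, [16, 84], axis=0)
\end{verbatim}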
\begin{figure}[p]
\begin{center}
\epsfig{width=0.41\textwidth,figure=alpha-Singlet_Q_1_log.eps}
\epsfig{width=0.41\textwidth,figure=beta-Singlet_Q_1_lin.eps}
\epsfig{width=0.41\textwidth,figure=alpha-g_Q_1_log.eps}
\epsfig{width=0.41\textwidth,figure=beta-g_Q_1_lin.eps}
\epsfig{width=0.41\textwidth,figure=alpha-T3_Q_1_log.eps}
\epsfig{width=0.41\textwidth,figure=beta-T3_Q_1_lin.eps}
\epsfig{width=0.41\textwidth,figure=alpha-T8_Q_1_log.eps}
\epsfig{width=0.41\textwidth,figure=beta-T8_Q_1_lin.eps}
\caption{\small The 68\% confidence level of the distribution of
effective small- and large-$x$ exponents
Eqs.~(\ref{eq:exp2}-\ref{eq:exp1}) for the default $N_{\rm
rep}=100$ replica {\tt NNPDFpol1.0} set at $Q_0^2=1$~GeV$^2$, plotted as
functions of $x$. The range of variation of the preprocessing
exponents of Tab.~\ref{tab:prepexps} is also shown in each case
(solid lines).}
\label{fig:prep}
\end{center}
\end{figure}
\section{Proofs}
\setcounter{theorem}{0}
\begin{theorem}[\citet{lee2019tight}] For any $f: \mathbb{R}^d \rightarrow [0,1]$ and parameter $\lambda \in \mathbb{R}^{+}$, define:
\begin{equation}
p({\bm{x}}) := \mathop{\mathbb{E}}_{\epsilon \sim {\mathcal{U}}^d(-\lambda,\lambda)} \left[f({\bm{x}} + \epsilon)\right].
\end{equation}
Then, $p(.)$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm.
\end{theorem}
\begin{proof}
Consider two arbitrary points ${\bm{x}}, {\bm{x}}'$ where $\delta:={\bm{x}}'-{\bm{x}}$. We consider two cases.
\begin{itemize}
\item Case 1: $\|\delta\|_1 \geq 2\lambda$:
Then, because $f(\cdot)\in [0,1]$, and therefore $p(\cdot)\in [0,1]$, we have:
\begin{equation}
|p({\bm{x}})- p({\bm{x}}')| \leq 1 \leq \frac{\|\delta\|_1}{2\lambda}
\end{equation}
\item Case 2: $\|\delta\|_1 < 2\lambda$:
In this case, for each $i$, $|\delta_i| < 2\lambda$. Define ${\mathcal{B}}({\bm{x}})$ as the $\ell_\infty$ ball of radius $\lambda$ around ${\bm{x}}$, and ${\mathcal{U}}({\mathcal{B}}({\bm{x}}))$ as the uniform distribution on this ball (and, similarly ${\mathcal{U}}(\cdot)$, on any other set). In other words:
\begin{equation}
p({\bm{x}}) = \mathop{\mathbb{E}}_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))} f({\bm{z}})
\end{equation}
Then,
\begin{equation}
\begin{split}
&|p({\bm{x}})- p({\bm{x}}')| \\&= |\mathop{\mathbb{E}}_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))} f({\bm{z}}) - \mathop{\mathbb{E}}_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}'))} f({\bm{z}}) |\\ &=
\Big|\Big(\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))} \mkern-18mu {\bm{z}} \in {\mathcal{B}}({\bm{x}})\setminus {\mathcal{B}}({\bm{x}}')\mathop{\mathbb{E}}_{
\substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}})\\\setminus {\mathcal{B}}({\bm{x}}'))
}} \mkern-18mu f({\bm{z}}) \\&+
\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))} \mkern-18mu {\bm{z}} \in {\mathcal{B}}({\bm{x}})\cap {\mathcal{B}}({\bm{x}}')\mathop{\mathbb{E}}_{ \substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}})\\\cap {\mathcal{B}}({\bm{x}}'))
}} \mkern-18mu f({\bm{z}})\Big) \\&-
\Big(\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}'))} \mkern-18mu {\bm{z}} \in {\mathcal{B}}({\bm{x}}')\setminus {\mathcal{B}}({\bm{x}})\mathop{\mathbb{E}}_{ \substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}')\\\setminus {\mathcal{B}}({\bm{x}}))
}} \mkern-18mu f({\bm{z}}) \\&+
\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}'))} \mkern-18mu {\bm{z}} \in {\mathcal{B}}({\bm{x}})\cap {\mathcal{B}}({\bm{x}}')\mathop{\mathbb{E}}_{ \substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}})\\\cap {\mathcal{B}}({\bm{x}}'))
}} \mkern-18mu f({\bm{z}})\Big)\Big|\\&=
\Big|\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))} \mkern-18mu {\bm{z}} \in {\mathcal{B}}({\bm{x}})\setminus {\mathcal{B}}({\bm{x}}')\mathop{\mathbb{E}}_{ \substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}})\\\setminus {\mathcal{B}}({\bm{x}}'))
}} \mkern-18mu f({\bm{z}}) \\&-
\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}'))} \mkern-18mu {\bm{z}} \in {\mathcal{B}}({\bm{x}}')\setminus {\mathcal{B}}({\bm{x}})\mathop{\mathbb{E}}_{ \substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}')\\\setminus {\mathcal{B}}({\bm{x}}))
}} \mkern-18mu f({\bm{z}}) \Big|
\end{split}
\end{equation}
Note that:
\begin{equation}
\begin{split}
&\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}'))}{\bm{z}} \in {\mathcal{B}}({\bm{x}}')\setminus {\mathcal{B}}({\bm{x}}) \\&= \Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))}{\bm{z}} \in {\mathcal{B}}({\bm{x}})\setminus {\mathcal{B}}({\bm{x}}')
\end{split}
\end{equation}
because both sides represent the probability of a uniform random variable on an $\ell_\infty$ ball of radius $\lambda$ taking a value outside of the region ${\mathcal{B}}({\bm{x}})\cap {\mathcal{B}}({\bm{x}}')$ (which is entirely contained within both balls). Then:
\begin{equation} \label{eq:thm_1_pf_pr}
\begin{split}
&|p({\bm{x}})- p({\bm{x}}')| \\&=
\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))}{\bm{z}} \in {\mathcal{B}}({\bm{x}})\setminus {\mathcal{B}}({\bm{x}}') \\&\times \Big|\mathop{\mathbb{E}}_{\substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}})\setminus\\ {\mathcal{B}}({\bm{x}}'))}} f({\bm{z}}) -
\mathop{\mathbb{E}}_{\substack{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}')\\\setminus {\mathcal{B}}({\bm{x}}))}} f({\bm{z}}) \Big| \\&\leq
\Pr_{{\bm{z}} \sim {\mathcal{U}}({\mathcal{B}}({\bm{x}}))}{\bm{z}} \in {\mathcal{B}}({\bm{x}})\setminus {\mathcal{B}}({\bm{x}}').
\end{split}
\end{equation}
In the last line we used the fact that $f(\cdot)\in [0,1]$. Let ${\mathcal{V}}({\mathcal{S}})$ represent the volume of a set ${\mathcal{S}}$. Note that ${\mathcal{B}}({\bm{x}}) \cap {\mathcal{B}}({\bm{x}}')$ is a $d$-dimensional hyperrectangle, with each edge of length
\begin{equation}
\min(x_i,x'_i) +\lambda - (\max(x_i,x'_i)-\lambda) = 2\lambda - |\delta_i|
\end{equation}
Then following Equation \ref{eq:thm_1_pf_pr},
\begin{equation}
\begin{split}
&|p({\bm{x}})- p({\bm{x}}')| \\&\leq
\frac{{\mathcal{V}}({\mathcal{B}}({\bm{x}})) - {\mathcal{V}}({\mathcal{B}}({\bm{x}}) \cap {\mathcal{B}}({\bm{x}}') )}{{\mathcal{V}}({\mathcal{B}}({\bm{x}}))} \\&=
1 - \frac{\mathop{\Pi}_{i=1}^d (2\lambda - |\delta_i|)}{(2\lambda)^d}
\\&=
1 - \mathop{\Pi}_{i=1}^d\left(1 - \frac{|\delta_i|}{2\lambda}\right)
\end{split}
\end{equation}
Note that, for $1 \leq d' \leq d$:
\begin{equation}
\begin{split}
&\mathop{\Pi}_{i=1}^{d'}\left(1 - \frac{|\delta_i|}{2\lambda}\right) \\&= \mathop{\Pi}_{i=1}^{d'-1}\left(1 - \frac{|\delta_i|}{2\lambda}\right) - \frac{|\delta_{d'}|}{2\lambda} \mathop{\Pi}_{i=1}^{d'-1}\left(1 - \frac{|\delta_i|}{2\lambda}\right) \\&\geq
\mathop{\Pi}_{i=1}^{d'-1}\left(1 - \frac{|\delta_i|}{2\lambda}\right) - \frac{|\delta_{d'}|}{2\lambda}
\end{split}
\end{equation}
By induction:
\begin{equation}
\mathop{\Pi}_{i=1}^{d}\left(1 - \frac{|\delta_i|}{2\lambda}\right) \geq
1 - \sum_{i=1}^{d}\ \frac{|\delta_i|}{2\lambda}
\end{equation}
Therefore,
\begin{equation}
\begin{split}
&|p({\bm{x}})- p({\bm{x}}')| \\&\leq
1 - \mathop{\Pi}_{i=1}^d\left(1 - \frac{|\delta_i|}{2\lambda}\right) \\&\leq
1- \left( 1 - \sum_{i=1}^{d}\ \frac{|\delta_i|}{2\lambda}\right) \\&=\frac{\|\delta\|_1}{2\lambda}
\end{split}
\end{equation}
\end{itemize}
Thus, by the definition of Lipschitz-continuity, $p$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm.
\end{proof}
\setcounter{theorem}{1}
\begin{theorem}[General Case]
For any $f: \mathbb{R}^d \rightarrow [0,1]$, and $\lambda> 0$ let ${\bm{s}} \in [0,2\lambda]^d$ be a random variable, with a fixed distribution such that:
\begin{equation}
s_i \sim {\mathcal{U}}(0,2\lambda), \,\,\,\, \forall i.
\end{equation}
Note that the components $s_1, ..., s_d$ are \textbf{not} required to be distributed independently from each other. Then, define:
\begin{align}
\tilde{x}_i &:=
\frac{ \min(2\lambda \ceil{\frac{x_i - s_i}{2\lambda} } + s_i, 1) }{2} \\
&+ \frac{\max(2\lambda \ceil{\frac{x_i - s_i}{2\lambda} -1} + s_i, 0)}{2}\
,\,\,\,\, \forall i \\
p({\bm{x}}) &:=\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}})\right].
\end{align}
Then, $p(.)$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm.
\end{theorem}
\begin{proof}
Consider two arbitrary points ${\bm{x}}, {\bm{x}}'$ where $\delta:={\bm{x}}'-{\bm{x}}$. We consider two cases.
\begin{itemize}
\item Case 1: $\|\delta\|_1 \geq 2\lambda$:
Then, because $f(\cdot)\in [0,1]$, and therefore $p(\cdot)\in [0,1]$, we have:
\begin{equation}
|p({\bm{x}})- p({\bm{x}}')| \leq 1 \leq \frac{\|\delta\|_1}{2\lambda}
\end{equation}
\item Case 2: $\|\delta\|_1 < 2\lambda$:
In this case, for each $i$, $|\delta_i| < 2\lambda$, and therefore $\ceil{\frac{x_i - s_i}{2\lambda} }$ and $\ceil{\frac{x_i' - s_i}{2\lambda} }$ differ by at most one. Furthermore, $\ceil{\frac{x_i - s_i}{2\lambda} }$ differs from $\ceil{\frac{x_i}{2\lambda} }$ by at most one, and similarly for $x_i'$. Without loss of generality, assume $x_i < x_i'$ (i.e., $\delta_i = |\delta_i| = x_i' - x_i$).
There are two cases:
\begin{itemize}
\item Case A: $\ceil{\frac{x_i}{2\lambda}} = \ceil{\frac{x_i'}{2\lambda}}$. Let this integer be $n$. Then:
\begin{itemize}
\item $\ceil{\frac{x_i-s_i}{2\lambda}} = \ceil{\frac{x_i'-s_i}{2\lambda}} = n$ iff $\frac{s_i}{2\lambda} < \frac{x_i}{2\lambda} -(n-1)$ (which also implies $\frac{s_i}{2\lambda} < \frac{x'_i}{2\lambda} -(n-1)$).
\item $\ceil{\frac{x_i-s_i}{2\lambda}} = \ceil{\frac{x_i'-s_i}{2\lambda}} = n-1$ iff $\frac{s_i}{2\lambda} \geq \frac{x'_i}{2\lambda} -(n-1)$ (which also implies $\frac{s_i}{2\lambda} \geq \frac{x_i}{2\lambda} -(n-1)$).
\end{itemize}
Then $\ceil{\frac{x_i - s_i}{2\lambda} }$ and $\ceil{\frac{x_i' - s_i}{2\lambda} }$ differ only if $\frac{x_i}{2\lambda} -(n-1) \leq \frac{s_i}{2\lambda} < \frac{x_i'}{2\lambda} -(n-1)$, which occurs with probability $\frac{\delta_i}{2\lambda}$.
\item Case B: $\ceil{\frac{x_i}{2\lambda}} + 1 = \ceil{\frac{x_i'}{2\lambda}}$. Let $n := \ceil{\frac{x_i}{2\lambda}}$. Then $\ceil{\frac{x_i-s_i}{2\lambda}}$ and $\ceil{\frac{x_i'-s_i}{2\lambda}}$ can differ if either:
\begin{itemize}
\item $\ceil{\frac{x_i-s_i}{2\lambda}} = n$ and $\ceil{\frac{x_i'-s_i}{2\lambda}} = n+1$. This occurs iff $\frac{s_i}{2\lambda} < \frac{x'_i}{2\lambda} -n$ (which also implies $\frac{s_i}{2\lambda} < \frac{x_i}{2\lambda} -(n-1)$).
\item $\ceil{\frac{x_i-s_i}{2\lambda}} = n-1$ and $\ceil{\frac{x_i'-s_i}{2\lambda}} = n$. This occurs iff $\frac{s_i}{2\lambda} \geq \frac{x_i}{2\lambda} -(n-1)$ (which also implies $\frac{s_i}{2\lambda} \geq \frac{x'_i}{2\lambda} -n$).
\end{itemize}
In other words, $\ceil{\frac{x_i-s_i}{2\lambda}} = \ceil{\frac{x_i'-s_i}{2\lambda}}$ iff:
\begin{equation*}
\frac{x_i}{2\lambda} -(n-1) > \frac{s_i}{2\lambda} \geq \frac{x_i'}{2\lambda} -n
\end{equation*}
Or equivalently:
\begin{equation*}
\frac{x_i}{2\lambda} -n +1 > \frac{s_i}{2\lambda} \geq \frac{x_i}{2\lambda} -n + \frac{\delta_i}{2\lambda}
\end{equation*}
This happens with probability $1-\frac{\delta_i}{2\lambda}$. Therefore, $\ceil{\frac{x_i-s_i}{2\lambda}}$ and $\ceil{\frac{x_i'-s_i}{2\lambda}}$ differ with probability $\frac{\delta_i}{2\lambda}$.
\end{itemize}
Note that $\ceil{\frac{x_i-s_i}{2\lambda} -1}$ and $\ceil{\frac{x_i'-s_i}{2\lambda}-1}$ differ only when $\ceil{\frac{x_i-s_i}{2\lambda}}$ and $\ceil{\frac{x_i'-s_i}{2\lambda}}$ differ. Therefore in both cases, $\tilde{x}_i$ and $\tilde{x}_i'$ differ with probability at most $\frac{|\delta_i|}{2\lambda}$. The rest of the proof proceeds as in the $\lambda \geq 0.5$ case in the main text.
\end{itemize}
\end{proof}
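For concreteness, the following NumPy sketch implements the transform
of Theorem 2 and checks the resulting Lipschitz bound by Monte Carlo
for a toy base classifier (the classifier, dimension, noise level and
perturbation below are arbitrary choices of ours):
\begin{verbatim}
import numpy as np

def split(x, s, lam):
    """Splitting transform of Theorem 2: x in [0,1]^d, each s_i
    uniform on [0, 2*lam) (not necessarily independent)."""
    hi = np.minimum(2 * lam * np.ceil((x - s) / (2 * lam)) + s, 1.0)
    lo = np.maximum(2 * lam * np.ceil((x - s) / (2 * lam) - 1) + s, 0.0)
    return 0.5 * (hi + lo)

# Monte Carlo check that |p(x) - p(x')| <= ||x - x'||_1 / (2*lam)
rng = np.random.default_rng(0)
d, lam, n = 8, 0.75, 50_000
f = lambda z: float(z.sum() > d / 2)          # toy base classifier
x = rng.random(d)
xp = np.clip(x + rng.uniform(-0.05, 0.05, d), 0.0, 1.0)
ss = rng.uniform(0.0, 2 * lam, size=(n, d))   # independent s_i
p, pp = (np.mean([f(split(y, s, lam)) for s in ss]) for y in (x, xp))
assert abs(p - pp) <= np.abs(x - xp).sum() / (2 * lam) + 3 / np.sqrt(n)
\end{verbatim}
Independent components for ${\bm{s}}$ are only one admissible choice
here: by Theorem 2, the bound holds for any joint distribution with
the required uniform marginals.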
\setcounter{corollary}{0}
\begin{corollary}[General Case]
For any $f: \mathbb{R}^d \rightarrow [0,1]$, and $\lambda > 0$ (with $2\lambda$ a multiple of $1/q$), let ${\bm{s}} \in \left[0,2\lambda-1/q\right]_{(q)}^d + \mathbbm{1}/(2q)$ be a random variable with a fixed distribution such that:
\begin{equation}
s_i \sim {\mathcal{U}}_{(q)}\left(0,2\lambda-1/q\right) + 1/(2q), \,\,\,\, \forall i.
\end{equation}
Note that the components $s_1, ..., s_d$ are \textbf{not} required to be distributed independently from each other. Then, define:
\begin{align}
\tilde{\mathbf{x}}_i &:=
\frac{ \min(2\lambda \ceil{\frac{\mathbf{x}_i - s_i}{2\lambda} } + s_i, 1) }{2} \\
&+ \frac{\max(2\lambda \ceil{\frac{\mathbf{x}_i - s_i}{2\lambda} -1} + s_i, 0)}{2}\
,\,\,\,\, \forall i \\
p(\mathbf{x}) &:=\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{\mathbf{x}})\right].
\end{align}
Then, $p(.)$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm on the quantized domain $\mathbf{x} \in [0,1]^d_{(q)}$.
\end{corollary}
\begin{proof}
The proof is substantially similar to the proof of the continuous case above. Minor differences occur in Cases 2.A and 2.B (mostly due to inequalities becoming strict, because possible values of $s_i$ are offset from values of $\mathbf{x}_i$) which we show here:
\begin{itemize}
\item Case A: $\ceil{\frac{\mathbf{x}_i}{2\lambda}} = \ceil{\frac{\mathbf{x}_i'}{2\lambda}}$. Let this integer be $n$. Then:
\begin{itemize}
\item $\ceil{\frac{\mathbf{x}_i-s_i}{2\lambda}} = \ceil{\frac{\mathbf{x}_i'-s_i}{2\lambda}} = n$ iff $\frac{s_i}{2\lambda} < \frac{\mathbf{x}_i}{2\lambda} -(n-1)$ (which also implies $\frac{s_i}{2\lambda} < \frac{\mathbf{x}'_i}{2\lambda} -(n-1)$).
\item $\ceil{\frac{\mathbf{x}_i-s_i}{2\lambda}} = \ceil{\frac{\mathbf{x}_i'-s_i}{2\lambda}} = n-1$ iff $\frac{s_i}{2\lambda} > \frac{\mathbf{x}'_i}{2\lambda} -(n-1)$ (which also implies $\frac{s_i}{2\lambda} > \frac{\mathbf{x}_i}{2\lambda} -(n-1)$).
\end{itemize}
Then $\ceil{\frac{\mathbf{x}_i - s_i}{2\lambda} }$ and $\ceil{\frac{\mathbf{x}_i' - s_i}{2\lambda} }$ differ only if $\frac{\mathbf{x}_i}{2\lambda} -(n-1) < \frac{s_i}{2\lambda} < \frac{\mathbf{x}_i'}{2\lambda} -(n-1)$.
There are exactly $q\cdot \delta_i$ discrete values that $s_i$ can take such that this condition holds. This is out of $2\lambda q$ possible values over which $s_i$ is uniformly distributed. Therefore, the condition holds with probability $\frac{\delta_i}{2\lambda}$.
\item Case B: $\ceil{\frac{\mathbf{x}_i}{2\lambda}} + 1 = \ceil{\frac{\mathbf{x}_i'}{2\lambda}}$. Let $n := \ceil{\frac{\mathbf{x}_i}{2\lambda}}$. Then $\ceil{\frac{\mathbf{x}_i-s_i}{2\lambda}}$ and $\ceil{\frac{\mathbf{x}_i'-s_i}{2\lambda}}$ can differ if either:
\begin{itemize}
\item $\ceil{\frac{\mathbf{x}_i-s_i}{2\lambda}} = n$ and $\ceil{\frac{\mathbf{x}_i'-s_i}{2\lambda}} = n+1$. This occurs iff $\frac{s_i}{2\lambda} < \frac{\mathbf{x}'_i}{2\lambda} -n$ (which also implies $\frac{s_i}{2\lambda} < \frac{\mathbf{x}_i}{2\lambda} -(n-1)$).
\item $\ceil{\frac{\mathbf{x}_i-s_i}{2\lambda}} = n-1$ and $\ceil{\frac{\mathbf{x}_i'-s_i}{2\lambda}} = n$. This occurs iff $\frac{s_i}{2\lambda} > \frac{\mathbf{x}_i}{2\lambda} -(n-1)$ (which also implies $\frac{s_i}{2\lambda} > \frac{\mathbf{x}'_i}{2\lambda} -n$).
\end{itemize}
In other words, $\ceil{\frac{\mathbf{x}_i-s_i}{2\lambda}} = \ceil{\frac{\mathbf{x}_i'-s_i}{2\lambda}}$ iff:
\begin{equation*}
\frac{\mathbf{x}_i}{2\lambda} -(n-1) > \frac{s_i}{2\lambda} > \frac{\mathbf{x}_i'}{2\lambda} -n
\end{equation*}
Or equivalently:
\begin{equation*}
\frac{\mathbf{x}_i}{2\lambda} -n +1 > \frac{s_i}{2\lambda} > \frac{\mathbf{x}_i}{2\lambda} -n + \frac{\delta_i}{2\lambda}
\end{equation*}
There are exactly $q\cdot (2\lambda-\delta_i)$ discrete values that $s_i$ can take such that this condition holds. This is out of $2\lambda q$ possible values over which $s_i$ is uniformly distributed. Therefore, the condition holds with probability $1-\frac{\delta_i}{2\lambda}$. Thus, $\ceil{\frac{\mathbf{x}_i-s_i}{2\lambda}}$ and $\ceil{\frac{\mathbf{x}_i'-s_i}{2\lambda}}$ differ with probability $\frac{\delta_i}{2\lambda}$.
\end{itemize}
\end{proof}
\section{Prior Works on Derandomized Smoothing}
\label{sec:prior_derandomized}
While this work is, to the best of our knowledge, the first to propose a derandomized version of a randomized smoothing algorithm that certifies a norm-based threat model without restricting the base classifier or requiring time exponential in the dimension $d$ of the input, prior deterministic ``randomized smoothing'' certificates have been proposed. These include:
\begin{itemize}
\item \textbf{Certificates for non-norm ($\ell_0$-like) threat models}. This includes certificates against patch adversarial attacks \citep{DBLP:conf/nips/0001F20a}; as well as poisoning attacks under a label-flipping \citep{rosenfeld2020certified} or whole-sample insertion/deletion \citep{levine2021deep} threat-model.
These threat models are ``$\ell_0$-like'' because the attacker entirely corrupts some portion of the data, rather than just distorting it.
\citet{DBLP:conf/nips/0001F20a} and \citet{levine2021deep} deal with this by ensuring that only a bounded fraction of base classifications can possibly be simultaneously exposed to any of the corrupted data. In the respective cases of patch adversarial attacks and poisoning attacks, it is shown that this can be done with a finite number of base classifications. \citet{rosenfeld2020certified}'s method, by contrast, is based on the randomized $\ell_0$ certificate proposed by \citet{lee2019tight}, and is discussed below.
\item \textbf{Certificates for restricted classes of base classifiers}. This includes $k$-nearest neighbors \citep{weber2020rab} (for $\ell_2$ poisoning attacks) and linear models \citep{rosenfeld2020certified} (for label-flipping poisoning attacks). In these cases, existing randomized certificates are evaluated exactly for a restricted set of base classifier models. (\citet{pmlr-v97-cohen19c} and \citet{lee2019tight}'s methods, respectively.) It is notable that these are both poisoning certificates: in the poisoning setting, where the corrupted data is the training data, true randomized smoothing is less feasible, because it requires training a very large ensemble of classifiers to achieve the desired statistical properties. (\citet{weber2020rab} nevertheless also attempts this directly.)
\item \textbf{Certificates requiring time exponential in dimension $d$}. This includes, in particular, a concurrent work, \cite{kao2020deterministic}, which provides deterministic $\ell_2$ certificates. In order to be practical, this method requires that the first several layers of the network be Lipschitz-bounded by means other than smoothing. The ``smoothing'' is then applied only in a low-dimensional space. The authors note that this method is unlikely to scale to ImageNet.
\end{itemize}
\section{Experimental Details} \label{sec:exp_details}
For uniform additive noise, we reproduced \citet{pmlr-v119-yang20c}'s results directly, using their released code. Note that we also reproduced the training of all models, rather than using released models. For Independent SSN and DSSN, we followed the same training procedure as in \citet{pmlr-v119-yang20c}, but instead used the noise distribution of our methods during training. For DSSN, we used the same vector ${\bm{v}}$ to generate noise during training and test time: note that our certificate requires ${\bm{v}}$ to be the same fixed vector whenever the classifier is used. In particular, we used a pseudorandom array generated using the Mersenne Twister algorithm with seed 0, as implemented in NumPy as numpy.random.RandomState. This is guaranteed to produce identical results on all platforms and for all future versions of NumPy, given the same seed, so in practice we only store the seed (0). In Section \ref{sec:seed}, we explore the sensitivity of our method to different choices of pseudorandom seeds.
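For concreteness, the generation of ${\bm{v}}$ can be reproduced along
the following lines (the shape and the sampling distribution shown are
placeholders of ours; only the seed needs to be stored):
\begin{verbatim}
import numpy as np

# Mersenne Twister with a fixed seed: RandomState is guaranteed to
# produce identical output on all platforms and all future NumPy
# versions, so storing the seed (0) suffices to reconstruct v exactly.
rng = np.random.RandomState(0)
v = rng.uniform(size=(3, 32, 32))  # e.g. a CIFAR-10-shaped input
\end{verbatim}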
In a slight deviation from \citet{pmlr-v97-cohen19c}, \citet{pmlr-v119-yang20c} uses different noise vectors for each sample in a batch when training (\citet{pmlr-v97-cohen19c} uses the same $\epsilon$ for all samples in a training batch to improve speed). We follow \citet{pmlr-v119-yang20c}'s method: this means that when training DSSN, we train the classifier on each sample only once per epoch, with a single, randomly-chosen value of $s_\text{base}$, which varies between samples in a batch.
Training parameters (taken from \citet{pmlr-v119-yang20c}) were as follows (Table \ref{tab:training_params}):
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|}
\hline
& CIFAR-10& ImageNet\\
\hline
Architecture& WideResNet-40& ResNet-50 \\
\hline
Number of Epochs& 120& 30\\
\hline
Batch Size& 64 \footnotemark& 64 \\
\hline
Initial &0.1& 0.1 \\
Learning Rate&& \\
\hline
LR Scheduler &Cosine &Cosine \\
& Annealing& Annealing \\
\hline
\end{tabular}
\caption{Training parameters for experiments.}
\label{tab:training_params}
\end{table}
\footnotetext{There is a discrepancy between the code and the text of \citet{pmlr-v119-yang20c} about the batch size used for training on CIFAR-10: the paper says to use a batch size of 128, while the instructions for reproducing the paper's results released with the code use a batch size of 64. Additionally, inspection of one of \citet{pmlr-v119-yang20c}'s released models indicates that a batch size of 64 was in fact used. (In particular, the ``num\_batches\_tracked'' field in the saved model, which counts the total number of batches used in training, corresponded with a batch size of 64.) We therefore used a batch size of 64 in our reproduction, assuming that the discrepancy was a result of a typo in that paper.}
For all certification results in the main text, and most training results, we used a single NVIDIA 2080 Ti GPU. (Some experiments with denoisers in Section \ref{sec:denoise}, as well as ImageNet stability training, used two GPUs.)
For testing, we used the entire CIFAR-10 test set (10,000 images) and a subset of 500 images of ImageNet (the same subset used by \citet{pmlr-v97-cohen19c}).
When reporting clean accuracies for randomized techniques (uniform additive noise and Independent SSN), we followed \cite{pmlr-v119-yang20c} by simply reporting the percent of samples for which the $N_0 = 64$ initial noise perturbations, used to pick the top class during certification, actually selected the correct class. (Notably, \cite{pmlr-v119-yang20c} does not use an ``abstain'' option for prediction, as some other randomized smoothing works \cite{pmlr-v97-cohen19c} do.) On the one hand, this is an inexact estimate of the accuracy of the \textit{true} classifier $p({\bm{x}})$, which uses the true expectation. On the other hand, it is the actual, empirical accuracy of a classifier that is being used in practice. This is not an issue when reporting the clean accuracy for DSSN, which is exact.
In DSSN, following \citet{DBLP:conf/nips/0001F20a} (discussed in Section \ref{sec:prior_derandomized}), if two classes tie in the number of ``votes'', we predict the first class lexicographically: this means that we can certify robustness up to \textit{and including} the radius $\rho$, because we are guaranteed consistent behavior in the case of ties. Reported certified radii for DSSN should therefore be interpreted to guarantee robustness even in the $\|\mathbf{x}-\mathbf{x}'\|_1 = \rho$ case. (This is not a meaningful distinction in randomized methods where the space is taken as continuous).
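This tie-breaking convention is straightforward to implement
deterministically; a minimal sketch (ours) is:
\begin{verbatim}
import numpy as np

def predict(votes, n_classes):
    """votes: integer base classifications, one per splitting value.

    np.argmax returns the first index among tied maxima, i.e. the
    lexicographically smallest class, so behaviour under ties is
    consistent and radii up to and including rho can be certified."""
    counts = np.bincount(votes, minlength=n_classes)
    return int(np.argmax(counts))
\end{verbatim}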
\begin{table*}[ht]
\begin{tabular}{|c|l|l|l|l|l|l|l|l|}
\hline
&0.5&1.0&1.5&2.0&2.5&3.0&3.5&4.0\\
\hline
Seed = 0&72.25\%&63.07\%&56.21\%&51.33\%&46.76\%&42.66\%&38.26\%&33.64\%\\
&(81.50\%&(77.85\%&(71.17\%&(67.98\%&(65.40\%&(65.40\%&(65.40\%&(65.40\%\\
&@ $\sigma$=0.75)&@ $\sigma$=1.25)&@ $\sigma$=2.25)&@ $\sigma$=3.0)&@ $\sigma$=3.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)\\
\hline
Seed = 1&72.01\%&62.73\%&56.03\%&51.20\%&46.71\%&42.45\%&37.87\%&33.08\%\\
&(81.85\%&(75.64\%&(72.19\%&(67.65\%&(66.93\%&(66.19\%&(66.19\%&(66.19\%\\
&@ $\sigma$=0.75)&@ $\sigma$=1.5)&@ $\sigma$=2.0)&@ $\sigma$=3.0)&@ $\sigma$=3.25)&@ $\sigma$=3.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)\\
\hline
Seed = 2&72.62\%&62.79\%&56.06\%&51.02\%&46.85\%&42.52\%&38.22\%&33.53\%\\
&(81.19\%&(74.26\%&(70.13\%&(70.13\%&(65.33\%&(65.33\%&(65.33\%&(65.33\%\\
&@ $\sigma$=0.75)&@ $\sigma$=1.75)&@ $\sigma$=2.5)&@ $\sigma$=2.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)\\
\hline
\end{tabular}
\caption{Comparison of DSSN using different random seeds to generate ${\bm{v}}$ on CIFAR-10. Matching \citet{pmlr-v119-yang20c}, we test on 15 noise levels ($\sigma \in \{0.15, 0.25n \text{ for }1 \leq n \leq 14\}$). We report the best certified accuracy at a selection of radii $\rho$, as well as the clean accuracy and noise level of the associated classifier. We find very little difference between the different seed values, with all certified accuracies within $\pm 0.65$ percentage points of each other.}
\label{tab:seed}
\end{table*}
\section{Effect of pseudorandom choice of ${\bm{v}}$} \label{sec:seed}
In Section \ref{sec:exp_details}, we mention that the vector ${\bm{v}}$ used in the derandomization of DSSN, which must be re-used every time the classifier is used, is generated pseudorandomly, using a seed of 0 in all experiments. In this section, we explore the sensitivity of our results to the choice of vector ${\bm{v}}$, and in particular to the choice of random seed. To do this, we repeated all standard-training DSSN experiments on CIFAR-10, using two additional choices of random seeds. We performed both training and certification using the assigned ${\bm{v}}$ vector for each experiment. Result are summarized in Table \ref{tab:seed}. We report a tabular summary, rather than certification curves, because the curves are too similar to distinguish. In general, the choice of random seed to select ${\bm{v}}$ does not seem to impact the certified accuracies: all best certified accuracies were within $0.65$ percentage points of each other. This suggests that our method is robust to the choice of this hyperparameter.
\section{Effect of a Denoiser} \label{sec:denoise}
As shown in Figure \ref{fig:random_derandom} in the main text, at large $\lambda$, there is a substantial benefit to SSN which is unrelated to derandomization, due to the differences in noise distributions discussed in Section \ref{sec:marg_distrib}. However, Equation \ref{eq:noise_mapping} shows that the difference between uniform additive noise and Independent SSN is a simple, deterministic transformation on each pixel. We therefore wondered whether training a denoiser network, to learn the relationship between ${\bm{x}}$ and the noisy sample (${\bm{x}} + \epsilon$ or $\tilde{{{\bm{x}}}}$), would eliminate the differences between the methods.
\citet{salman2020denoised} proposes methods of training denoisers for randomized smoothing, in the context of using smoothing on pre-trained classifiers. In this context, the noisy image first passes through a denoiser network, before being passed into a classification network trained on clean images. We used their code (and all default parameters), in three variations:
\begin{enumerate}
\item \textbf{Stability Denoising}: In this method, the pre-trained classifier network is required for training the denoiser. The loss when training the denoiser is based on the consistency between the logit outputs of the classifier on the clean input ${\bm{x}}$ and on the denoised version of the noisy input. This is the best-performing method in \cite{salman2020denoised}. However, note that it does not directly use the pixel values of ${\bm{x}}$ when training the denoiser, and therefore might not ``learn'' the correspondence between clean and noisy samples (Figure \ref{fig:compare_representations} in the main text) as easily.
\item \textbf{MSE Denoising}: This trains the denoiser via direct supervised training, with the objective of reducing the mean squared error difference between the pixel values of the clean and denoised samples. Then, classification is done using a classifier that is pre-trained only on clean samples. This performs relatively poorly in \cite{salman2020denoised}, but should directly learn the correspondence between clean and noisy samples.
\item \textbf{MSE Denoising with Retraining}: For this experiment, we trained an MSE denoiser as above, but \textit{then} trained the entire classification pipeline (the denoiser $+$ the classifier) on noisy samples. Note that the classifier is trained from scratch in this case, with the pre-trained denoiser already in place (but being fine-tuned as the classifier is trained).
\end{enumerate}
We tested on CIFAR-10, at three different noise levels, without stability training. See Figure \ref{fig:denoiser} for results. Overall, we find that at high noise, there is still a significant gap in performance between Independent SSN and \cite{pmlr-v119-yang20c}'s method, using all of the denoising techniques. One possible explanation is that it is also more difficult \textit{for the denoiser} to learn the noise distribution of \cite{pmlr-v119-yang20c}, compared to our distributions.
\section{Additive and splitting noise allow for different types of joint noise distributions}
In Section \ref{sec:uas_compare} in the main text, we showed that, in the $\lambda=0.5$ case, SSN leads to marginal distributions which are simple affine transformations of the marginal distributions of the uniform additive smoothing noise (Equation \ref{eq:equivalence_lambda_half}). However, we also showed (Proposition 1) that, even in this case, certification is not possible using arbitrary joint distributions of $\epsilon$ with uniform additive noise, as it is with SSN. This difference is explained by the fact that, even for $\lambda = 0.5$, the joint distributions of $({\bm{x}} + \epsilon)$ which can be generated by uniform additive noise and the joint distributions of $\tilde{{\bm{x}}}$ which can be generated by SSN respectively are in fact quite different.
To quantify this, consider a pair of two joint distributions: ${\mathcal{D}}$, with marginals uniform on $[-0.5, 0.5]$, and ${\mathcal{S}}$, with marginals uniform on $[0, 1]$. Let ${\mathcal{D}}$ and ${\mathcal{S}}$ be considered \textit{equivalent} if, for $\epsilon \sim {\mathcal{D}}$ and ${\bm{s}} \sim {\mathcal{S}}$:
\begin{equation}
\tilde{{\bm{x}}} \overset{d}{=} (1/2) ({\bm{x}} + \epsilon) + \mathbbm{1}/4 \,\,\,\,\,\,\forall {\bm{x}}
\end{equation}
where $\tilde{{\bm{x}}}$ is generated using the SSN noise ${\bm{s}}$ (compare to Equation \ref{eq:equivalence_lambda_half} in the main text).
\begin{proposition}
The only pair of equivalent joint distributions $({\mathcal{D}},{\mathcal{S}})$ is ${\mathcal{D}} \sim {\mathcal{U}}^d(-0.5,0.5)$, ${\mathcal{S}} \sim {\mathcal{U}}^d(0,1)$.
\end{proposition}
\begin{proof}
We first describe a special property of SSN (with $\lambda = 0.5$):
Fix a smoothed value $\tilde{{\bm{x}}}'$, and let ${\mathcal{X}}'$ be the set of all inputs ${\bm{x}}$ such that $\tilde{{\bm{x}}}'$ can be generated from ${\bm{x}}$ under some admissible joint splitting distribution ${\mathcal{S}}$.
From Figure \ref{fig:compare_representations}-a in the main text, we can see that this is simply
\begin{equation} \label{eq:apdx_ball}
{\mathcal{X}}' = \{{\bm{x}} | \tilde{x}'_i \leq x_i/2 + (1/2) \leq \tilde{x}'_i + (1/2) \,\,\,\, \forall i\}.
\end{equation}
Notice that to generate $\tilde{{\bm{x}}}'$, \textit{regardless of the value of ${\bm{x}} \in {\mathcal{X}}'$}, the splitting vector ${\bm{s}}$ must be exactly the following:
\begin{equation}
s_i = \begin{cases}
2\tilde{x}_i' &\text{ if } \tilde{x}_i' < 1/2\\
2\tilde{x}_i' -1 &\text{ if } \tilde{x}_i' \geq 1/2\\
\end{cases}
\end{equation}
(This is made clear by Figure \ref{fig:split_randomization_1} in the main text.)
If ${\bm{x}} \in {\mathcal{X}}'$, then $\tilde{{\bm{x}}}'$ will be generated iff this value of ${\bm{s}}$ is chosen. Therefore, given a fixed splitting distribution ${\mathcal{S}}$, the probability of generating $\tilde{{\bm{x}}}'$ must be \textit{constant} for all points in ${\mathcal{X}}'$.
Now, we compare to uniform additive noise. In order for ${\mathcal{D}}$ and ${\mathcal{S}}$ to be equivalent, for the fixed noised point $({\bm{x}}+\epsilon)' = 2\tilde{{\bm{x}}}' - \mathbbm{1}/2$, it must be the case that all points in ${\mathcal{X}}'$ are equally likely to generate $({\bm{x}}+\epsilon)'$. But note from Equation \ref{eq:apdx_ball} that ${\mathcal{X}}'$ is simply the $\ell_\infty$ ball of radius 0.5 around $({\bm{x}}+\epsilon)'$. This implies that ${\mathcal{D}}$ must be the uniform distribution ${\mathcal{D}} \sim {\mathcal{U}}^d(-0.5,0.5)$, which is equivalent to the splitting distribution ${\mathcal{S}} \sim {\mathcal{U}}^d(0,1)$.
\end{proof}
The \textit{only} case in which SSN and uniform additive noise produce equivalent distributions of noisy samples is thus the case in which all noise components are independent. This helps us understand how SSN can work with \textit{any} joint distribution of splitting noise, while uniform additive noise has only been shown to produce accurate certificates when all components of $\epsilon$ are independent.
\section{Complete Certification Data on CIFAR-10 and ImageNet}
We provide complete certification results for uniform additive noise, randomized SSN with independent noise, and DSSN, at all tested noise levels on both CIFAR-10 and ImageNet, using both standard and stability training. For CIFAR-10, see Figures \ref{fig:appendix_cifar_0}, \ref{fig:appendix_cifar_1}, \ref{fig:appendix_cifar_2}, and \ref{fig:appendix_cifar_3}. For ImageNet, see Figure \ref{fig:appendix_imagenet_0}. In Figure \ref{fig:time_per_appendix} we compare the time required to certify each image for DSSN and \citet{pmlr-v119-yang20c}'s uniform random noise method, on both datasets.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{denoiser_results.png}
\caption{Certified accuracies of models trained with denoisers, for additive uniform noise, SSN with independent noise, and DSSN. See text of Section \ref{sec:denoise} for further details on the denoisers used. For $\sigma \geq 2.0$, Independent SSN outperforms \cite{pmlr-v119-yang20c}'s method, suggesting that the difference in noise representations cannot be resolved by using a denoiser. (It may appear as if \cite{pmlr-v119-yang20c}'s method is more robust at large radii for $\sigma = 3.5$ with an MSE denoiser without retraining: however, this is for a classifier with \textit{clean accuracy} $\approx 10\%$, so this is vacuous: similar results can be achieved by simply always returning the same class.)
}
\label{fig:denoiser}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{appendix_cifar_0.png}
\caption{Certification results for CIFAR-10, comparing uniform additive noise, randomized SSN with independent noise, and DSSN, for $\sigma \in \{0.15,0.25,0.5,0.75\}$}
\label{fig:appendix_cifar_0}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{appendix_cifar_1.png}
\caption{Certification results for CIFAR-10, comparing uniform additive noise, randomized SSN with independent noise, and DSSN, for $\sigma \in \{1.0,1.25,1.5,1.75\}$}
\label{fig:appendix_cifar_1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{appendix_cifar_2.png}
\caption{Certification results for CIFAR-10, comparing uniform additive noise, randomized SSN with independent noise, and DSSN, for $\sigma \in \{2.0,2.25,2.5,2.75\}$}
\label{fig:appendix_cifar_2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{appendix_cifar_3.png}
\caption{Certification results for CIFAR-10, comparing uniform additive noise, randomized SSN with independent noise, and DSSN, for $\sigma \in \{3.0,3.25,3.5\}$}
\label{fig:appendix_cifar_3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{appendix_imagenet_0.png}
\caption{Certification results for ImageNet, comparing uniform additive noise, randomized SSN with independent noise, and DSSN, for $\sigma \in \{0.5,2.0,3.5\}$. Note that we see less improvement in reported certified accuracies due to derandomization (i.e., less difference between Independent SSN and DSSN) in ImageNet compared to in CIFAR-10, particularly at large noise levels.}
\label{fig:appendix_imagenet_0}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{time_per_appendix.png}
\caption{Comparison of the certification time per image of DSSN and \citet{pmlr-v119-yang20c}'s uniform additive noise method. We used a single NVIDIA 2080 Ti GPU. Although, in contrast to \cite{pmlr-v119-yang20c}, our certification time scales linearly with the noise level, the fact that \cite{pmlr-v119-yang20c} uses 100,000 smoothing samples makes our method much faster even at the largest tested noise levels.}
\label{fig:time_per_appendix}
\end{figure*}
\section{Introduction and Related Works}
Adversarial robustness in machine learning is a broad and widely-studied field which characterizes the \textit{worst-case} behavior of machine learning systems under small input perturbations \citep{szegedy2013intriguing,goodfellow2014explaining,carlini2017towards}. One area of active research is the design of {\it certifiably-robust} classifiers where, for each input ${\bm{x}}$, one can compute a magnitude $\rho$, such that \textit{all} perturbed inputs ${\bm{x}}'$ within a radius $\rho$ of ${\bm{x}}$ are guaranteed to be classified in the same way as ${\bm{x}}$. Typically, $\rho$ represents a distance in an $\ell_p$ norm: $\|{\bm{x}}-{\bm{x}}'\|_p \leq \rho$, for some $p$ which depends on the technique used.\footnote{Certifiably-robust classifiers for non-$\ell_p$ threat models have also been proposed including sparse ($\ell_0$) adversarial attacks \citep{levine2020robustness, lee2019tight}, Wasserstein attacks \citep{levine2020wasserstein}, geometric transformations \citep{fischer2020randomized} and patch attacks \citep{Chiang2020Certified, DBLP:conf/nips/0001F20a,Xiang2020PatchGuardPD,Zhang2020ClippedBD}. However, developing improved certified defenses under $\ell_p$ adversarial threat models remains an important problem and will be our focus in this paper.}
A variety of techniques have been proposed for certifiably $\ell_p$-robust classification \citep{wong2018provable,gowal2018effectiveness,Raghunathan2018,tjeng2018evaluating,NEURIPS2018_d04863f1,pmlr-v119-singla20a}. Among these are techniques that rely on Lipschitz analysis: if a classifier's logit functions can be shown to be Lipschitz-continuous, this immediately implies a robustness certificate \citep{Li2019PreventingGA,pmlr-v97-anil19a}. In particular, consider a classifier with logits $\{p_A, p_B, p_C, ...\}$, all of which are $c$-Lipschitz. Suppose for an input ${\bm{x}}$, we have $p_A({\bm{x}}) > p_B({\bm{x}}) \geq p_C({\bm{x}}) \geq ...$. Also suppose the gap between the largest and the second largest logits is $d$ (i.e. $p_A({\bm{x}}) - p_B({\bm{x}}) = d$). The Lipschitzness implies that for all ${\bm{x}}'$ such that $\|{\bm{x}}-{\bm{x}}'\| < d/(2c)$, $p_A({\bm{x}}')$ will still be the largest logit: in this ball,
\begin{equation}
p_A({\bm{x}}') > p_A({\bm{x}}) -\frac{d}{2} \geq p_{\text{others}}({\bm{x}}) + \frac{d}{2} > p_{\text{others}}({\bm{x}}'),
\end{equation}
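For concreteness, this gap-based certificate takes only a few lines to compute; the following Python sketch is our illustration (the function name and example values are ours, not from any released implementation):
\begin{verbatim}
import numpy as np

def certified_radius(logits, c):
    # Radius within which the argmax of c-Lipschitz logits is stable.
    second, largest = np.sort(logits)[-2:]
    d = largest - second              # gap between the top two logits
    return d / (2.0 * c)

# e.g. certified_radius(np.array([0.7, 0.2, 0.1]), c=0.5) -> 0.5
\end{verbatim}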
where the first and third inequalities are due to Lipschitzness. Certification techniques based on \textit{randomized smoothing} \citep{pmlr-v97-cohen19c, salman2019provably, pmlr-v119-yang20c, lee2019tight, lecuyer2019certified, li2019certified, teng2020ell}, are, at the time of writing, the only robustness certification techniques that scale to tasks as complex as ImageNet classification (See \citet{li2020sok} for a recent and comprehensive review and comparison of robustness certification methods.) In these methods, a ``base'' classifier is used to classify a large set of randomly-perturbed versions $({\bm{x}} + \epsilon)$ of the input image ${\bm{x}}$ where $\epsilon$ is drawn from a fixed distribution. The final classification is then taken as the plurality-vote of these classifications on noisy versions of the input. If samples ${\bm{x}}$ and ${\bm{x}}'$ are close, the distributions of $({\bm{x}} + \epsilon)$ and $({\bm{x}}' + \epsilon)$ will substantially overlap, leading to provable robustness.
\citet{salman2019provably} and \citet{levine2019certifiably} show that these certificates can in some cases be understood as certificates based on Lipschitz continuity where the \textit{expectation} of the output of the base classifier (or a function thereof) over the smoothing distribution is shown to be Lipschitz.
Randomized smoothing for the $\ell_1$ threat model was previously proposed by \citet{lecuyer2019certified, li2019certified, teng2020ell, lee2019tight} and \citet{pmlr-v119-yang20c}. \citet{pmlr-v119-yang20c} shows the best empirical performance (using a certificate originally presented by \citet{lee2019tight} without experiments). These methods use {\it additive} smoothing noise and provide \textit{high-probability} certificates, with a failure rate that depends on the number of noisy samples.
In this work, we propose a {\it non-additive} smoothing method for $\ell_1$-certifiable robustness on quantized data that is \textit{deterministic}.\footnote{We note that previous works have proposed deterministic forms of randomized-smoothing certificates \citep{DBLP:conf/nips/0001F20a,rosenfeld2020certified,levine2021deep,weber2020rab,kao2020deterministic}. However, this work is, to our knowledge, the first to certify robustness for arbitrary base-classifiers in a norm-based threat model, with a runtime that does not grow exponentially with dimension. See the appendix for more details.} By ``quantized'' data, we mean data where each feature value occurs on a discrete level. For example, standard image files (including standard computer vision datasets, such as ImageNet and CIFAR-10) are quantized, with all pixel values belonging to the set $\{0, 1/255, 2/255, ..., 1\}$. We call our method {\bf D}eterministic {\bf S}moothing with {\bf S}plitting {\bf N}oise ({\bf DSSN}). DSSN produces \textit{exact} certificates, rather than high-probability ones. It also produces certificates in substantially less time than randomized smoothing because a large number of noise samples are no longer required. In addition to these benefits, the certified radii generated by DSSN are significantly larger than those of the prior state-of-the-art.
To develop DSSN, we first propose a randomized method, Smoothing with Splitting Noise (\textbf{SSN}). Rather than simple additive noise, SSN uses ``splitting'' noise to generate a noisy input $\tilde{{\bm{x}}}$: first, we generate a noise vector ${\bm{s}}$ to split the input domain $[0,1]^d$ into subdivisions. Then, the noisy input $\tilde{{\bm{x}}}$ is just the center of whichever sub-division ${\bm{x}}$ belongs to. In contrast to prior smoothing works, this noise model is {\it non-additive}.
In contrast to additive uniform noise where the noise components (${\epsilon}_i$'s in ${\epsilon}$) are independently distributed, in SSN, the splitting vector components ($s_i$'s in ${\bm{s}}$) \textit{do not} need to be independently distributed. Thus, unlike the additive uniform smoothing where noise vectors must be drawn from a $d$-dimensional probability distribution, in SSN, the splitting vectors can be drawn from a {\it one-dimensional} distribution. In the quantized case, the splitting vector can be further reduced to a choice between a small number of elements, leading to a derandomized version of SSN (i.e. DSSN).
Below, we summarize our contributions:
\begin{itemize}
\item We propose a novel randomized smoothing method, {\bf SSN}, for the $\ell_1$ adversarial threat model (Theorem \ref{thm:main_case_1}).
\item We show that \textbf{SSN} effectively requires smoothing in {\it one-dimension} (instead of $d$), thus it can be efficiently derandomized, yielding a deterministic certifiably robust classification method called \textbf{DSSN}.
\item On ImageNet and CIFAR-10, we empirically show that DSSN significantly outperforms previous smoothing-based robustness certificates, effectively establishing a new state-of-the-art.
\end{itemize}
\section{Notation}
Let ${\bm{x}}$, ${\bm{x}}'$ represent two points in $[0,1]^d$. We assume that our input space is bounded: this assumption holds for many applications (e.g., pixel values for image classification). If the range of values is not $[0,1]$, all dimensions can simply be scaled. A ``base'' classifier function will be denoted as $f: \mathbb{R}^d \rightarrow [0,1]$. In the case of a multi-class problem, this may represent a single logit.
Let $\delta := {\bm{x}}' - {\bm{x}}$, with components $\delta_1$, ..., $\delta_d$. A function $p: [0,1]^d \rightarrow [0,1]$ is said to be $c$-Lipschitz with respect to the $\ell_1$ norm iff:
\begin{equation}
|p({\bm{x}})- p({\bm{x}}')| \leq c \|\delta\|_1,\,\,\,\, \forall {\bm{x}},{\bm{x}}'.
\end{equation}
Let ${\mathcal{U}}(a,b)$ represent the uniform distribution on the range $[a,b]$, and ${\mathcal{U}}^d(a,b)$ represent a random $d$-vector, where each component is \textit{independently} uniform on $[a,b]$.
Let $\bm{1}_{\text{(condition)}}$ represent the indicator function, and $\mathbbm{1}$ be the vector $[1,1,...]^T$. In a slight abuse of notation, for $z\in \mathbb{R}, n \in \mathbb{R}^{+}$, let $z \bmod n := z - n\lfloor \frac{z}{n} \rfloor$ where $\lfloor\cdot\rfloor$ is the floor function; we will also use $\lceil\cdot\rceil$ as the ceiling function. For example, $9.5 \bmod 2 = 1.5$.
We will also discuss quantized data. We will use $q$ for the number of quantizations. Let
\begin{equation}
[a,b]_{(q)} := \left\{i/q \,\,\big| \,\, \lceil aq \rceil \leq i \leq \lfloor bq \rfloor \right\}.
\end{equation}
In particular, $[0,1]_{(q)}$ denotes the set $\{0, 1/q, 2/q, ..., (q-1)/q, 1\}$. Let $\mathbf{x},\mathbf{x}'$ represent two points in $[0,1]_{(q)}^d$. A domain-quantized function $p: [0,1]_{(q)}^d \rightarrow [0,1]$ is said to be $c$-Lipschitz with respect to the $\ell_1$ norm iff:
\begin{equation}
|p(\mathbf{x})- p(\mathbf{x}')| \leq c \|\delta\|_1,\,\,\,\, \forall \mathbf{x},\mathbf{x}' \in [0,1]_{(q)}^d,
\end{equation}
where $\delta := \mathbf{x}' - \mathbf{x}$. The uniform distribution on the set $[a,b]_{(q)}$ is denoted ${\mathcal{U}}_{(q)}(a,b)$.
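As a concrete illustration (our own sketch, not part of the formal development), the quantized interval can be enumerated directly:
\begin{verbatim}
import math

def quantized_range(a, b, q):
    # [a, b]_(q): all multiples i/q with ceil(a*q) <= i <= floor(b*q)
    return [i / q for i in range(math.ceil(a * q), math.floor(b * q) + 1)]

# quantized_range(0, 1, 4) -> [0.0, 0.25, 0.5, 0.75, 1.0]
\end{verbatim}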
\section{Prior Work on Uniform Smoothing for $\ell_1$ Robustness} \label{sec:yang}
\citet{lee2019tight} proposed an $\ell_1$ robustness certificate using uniform random noise:
\begin{theorem}[\citet{lee2019tight}] \label{thm:uniform}For any $f: \mathbb{R}^d \rightarrow [0,1]$ and parameter $\lambda \in \mathbb{R}^{+}$, define:
\begin{equation}
p({\bm{x}}) := \mathop{\mathbb{E}}_{\epsilon \sim {\mathcal{U}}^d(-\lambda,\lambda)} \left[f({\bm{x}} + \epsilon)\right].
\end{equation}
Then, $p(.)$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm.
\end{theorem}
\citet{pmlr-v119-yang20c} later provided a theoretical justification for the uniform distribution being optimal among {\it additive} noise distributions for certifying $\ell_1$ robustness\footnote{More precisely, \citet{pmlr-v119-yang20c} suggested that distributions with $d$-cubic level sets are optimal for $\ell_1$ robustness.}. \citet{pmlr-v119-yang20c} also provided experimental results on CIFAR-10 and ImageNet which before our work were the state-of-the-art $\ell_1$ robustness certificates.
Following \citet{pmlr-v97-cohen19c}, \citet{pmlr-v119-yang20c} applied the smoothing method to a ``hard'' (in \citet{salman2019provably}'s terminology) base classifier. That is, if the base classifier returns the class $c$ on input ${\bm{x}}+\epsilon$, then $f_c({\bm{x}}+\epsilon) = 1$, otherwise $f_c({\bm{x}}+\epsilon) = 0$.
Also following \citet{pmlr-v97-cohen19c}, in order to apply the certificate in practice, \citet{pmlr-v119-yang20c} first takes $N_0 =64$ samples to estimate the plurality class $A$, and then uses $N = 100,000$ samples to lower-bound $p_A({\bm{x}})$ (the fraction of noisy samples $\tilde{{\bm{x}}}$ classified as $A$) with high probability. The other smoothed logit values ($p_B({\bm{x}}),$ etc.) can then all be assumed to be $\leq 1-p_A({\bm{x}})$. This approach has the benefit that each logit does not require an independent statistical bound, and thus reduces the estimation error, but it has the drawback that certificates are impossible if $p_A({\bm{x}}) \leq 0.5$, creating a gap between the clean accuracy of the smoothed classifier and the certified accuracy near $\rho = 0$.
We note that the stated Theorem \ref{thm:uniform} is slightly more general than the originally stated version by \citet{lee2019tight}: the original version assumed that only $p_A({\bm{x}})$ is available, as in the above estimation scheme, and therefore just gave the $\ell_1$ radius in which $p_A({\bm{x}}')$ is guaranteed to remain $\geq 0.5$. For completeness, we provide a proof of the more general form (Theorem \ref{thm:uniform}) in the appendix.
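To make the estimation step concrete, the following Python sketch outlines how a certificate is typically derived from the sample counts; it assumes a one-sided Clopper--Pearson bound (as in \citet{pmlr-v97-cohen19c}) and is our paraphrase rather than the authors' code:
\begin{verbatim}
import numpy as np
from scipy.stats import beta

def randomized_certificate(k, n, lam, alpha=0.001):
    # k of n noisy samples voted for class A; one-sided
    # (1 - alpha) Clopper-Pearson lower bound on p_A.
    p_a_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_a_lower <= 0.5:
        return 0.0                      # abstain: no certificate possible
    # p_A is 1/(2*lam)-Lipschitz, so the radius follows from the gap:
    return 2.0 * lam * (p_a_lower - 0.5)
\end{verbatim}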
In this work, we show that by using \textit{deterministic smoothing} with \textit{non-additive noise}, improved certificates can be achieved, because we (i) avoid the statistical issues presented above (by estimating all smoothed logits \textit{exactly}), and (ii) improve the performance of the base classifier itself.
\section{Our Proposed Method}
In this paper, we describe a new method, Smoothing with Splitting Noise (\textbf{SSN}), for certifiable robustness against $\ell_1$ adversarial attacks. In this method, for each component $x_i$ of ${\bm{x}}$, we randomly split the interval $[0,1]$ into sub-intervals. The noised value $\tilde{x}_i$ is the middle of the sub-interval that contains $x_i$. We will show that this method corresponds closely to the uniform noise method, and so we continue to use the parameter $\lambda$. The precise correspondence will become clear in Section \ref{sec:marg_distrib}: however, for now, $\lambda$ can be interpreted as controlling (the inverse of) the frequency with which the interval $[0,1]$ is split into sub-intervals. We will show that this method, unlike the additive uniform noise method, can be efficiently derandomized. For simplicity, we will first consider the case corresponding to $\lambda \geq 0.5$, in which at most two sub-intervals are created, and present the general case later.
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{split_randomization_4-2.pdf}
\caption{(a) Definition of $\tilde{{\bm{x}}}$ in the $\lambda \geq 0.5$ case. If $s_i \in [0,1)$, then it ``splits'' the interval $[0,1]$: $\tilde{x}_i$ is the center of whichever sub-interval $x_i$ occurs in. If $s_i > 1$, $\tilde{x}_i = 0.5$, and no information about the original pixel is kept. (b) An example of $\tilde{\mathbf{x}}$ in the \textit{quantized} $\lambda \geq 0.5$ case. Here, $q=4$ and $2\lambda = 5/4 $. We see that $\mathbf{x}_i = 1/4$ lies directly on a quantization level, while $s_i = 7/8$ lies on a half-step between quantization levels. We choose $s_i$ to lie on ``half-steps'' for the sake of symmetry: the range of $\tilde{\mathbf{x}}_i$ is symmetrical around $1/2$.}
\label{fig:split_randomization_1}
\end{figure}
\begin{theorem}[$\lambda \geq 0.5$ Case] \label{thm:main_case_1}
For any $f: \mathbb{R}^d \rightarrow [0,1]$, and $\lambda \geq 0.5$ let ${\bm{s}} \in [0,2\lambda]^d$ be a random variable with a fixed distribution such that:
\begin{equation}
s_i \sim {\mathcal{U}}(0,2\lambda), \,\,\,\, \forall i.
\end{equation}
Note that the components $s_1, ..., s_d$ are \textbf{not} required to be distributed independently from each other. Then, define:
\begin{align}
\tilde{x}_i &:=
\frac{\min{(s_i,1)} + \bm{1}_{x_i > s_i}}{2}\
,\,\,\,\, \forall i \label{eq:x_tilde_def}\\
p({\bm{x}}) &:=\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}})\right].
\end{align}
Then $p$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm.
\end{theorem}
To understand the distribution of $\tilde{x}_i$, we can view $s_i$ as ``splitting'' the interval $[0,1]$ into two sub-intervals, $[0,s_i]$ and $(s_i, 1]$. $\tilde{x}_i$ is then the middle of whichever sub-interval contains $x_i$. If $s_i \geq 1$, then the interval $[0,1]$ is not split, and $\tilde{x}_i$ assumes the value of the middle of the entire interval ( $= 1/2$): see Figure \ref{fig:split_randomization_1}-a.
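A direct implementation of this sampling step is only a few lines; the sketch below (ours, for illustration) vectorizes Equation \ref{eq:x_tilde_def} with independently drawn splits:
\begin{verbatim}
import numpy as np

def ssn_sample(x, lam, rng):
    # One SSN sample for lam >= 0.5; x has components in [0, 1].
    s = rng.uniform(0.0, 2.0 * lam, size=x.shape)  # marginals U(0, 2*lam)
    # center of [0, s_i] if x_i <= s_i; center of (s_i, 1] if x_i > s_i;
    # and 1/2 whenever s_i >= 1 (the interval is not split)
    return (np.minimum(s, 1.0) + (x > s).astype(float)) / 2.0
\end{verbatim}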
\begin{proof}
Consider two arbitrary points ${\bm{x}}, {\bm{x}}'$ where $\delta:={\bm{x}}'-{\bm{x}}$. Note that $ \max(x_i, x'_i) - \min(x_i, x'_i) = |x_i'-x_i|= |\delta_i|$. For a fixed vector ${\bm{s}}$, additionally note that
$\tilde{x}_i = \tilde{x}'_i$ unless $s_i$ falls between $x_i$ and $x'_i$ (i.e., unless $\min(x_i, x'_i) \leq s_i < \max(x_i, x'_i)$). Therefore:
\begin{equation}
\Pr_{\bm{s}} [\tilde{x}_i \neq \tilde{x}'_i] = \frac{|\delta_i|}{2\lambda}.
\end{equation}
By union bound:
\begin{equation} \label{eq:union_bound}
\begin{split}
\Pr_{\bm{s}} [\tilde{{\bm{x}}} \neq \tilde{{\bm{x}}'}]& = \Pr_{\bm{s}} \left[\bigcup_{i=1}^d \tilde{x}_i \neq \tilde{x}'_i\right] \\
&\leq \sum_{i=1}^d \frac{|\delta_i|}{2\lambda} = \frac{\|\delta\|_1}{2\lambda}
\end{split}
\end{equation}
Then:
\begin{equation}
\begin{split}
&|p({\bm{x}})- p({\bm{x}}')|\\ &= \left|\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}})\right] - \mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}}')\right]\right| \\&=
\left|\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}}) - f(\tilde{{\bm{x}}}')\right]\right| \\&=
\Bigg|\Pr_{\bm{s}} [\tilde{{\bm{x}}} \neq \tilde{{\bm{x}}}']\mathop{\mathbb{E}}_{{\bm{s}}}
\left[ f(\tilde{{\bm{x}}}) - f(\tilde{{\bm{x}}}') | \tilde{{\bm{x}}} \neq \tilde{{\bm{x}}}'\right]
\\&+
\Pr_{\bm{s}} [\tilde{{\bm{x}}} = \tilde{{\bm{x}}}']
\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}}) - f(\tilde{{\bm{x}}}') | \tilde{{\bm{x}}} = \tilde{{\bm{x}}}'\right]
\Bigg| \\
\end{split}
\end{equation}
Because $\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}}) - f(\tilde{{\bm{x}}}') | \tilde{{\bm{x}}} = \tilde{{\bm{x}}}'\right]$ is zero, we have:
\begin{equation}
\begin{split}
& |p({\bm{x}})- p({\bm{x}}')|\\
& =\Pr_{\bm{s}} [\tilde{{\bm{x}}} \neq \tilde{{\bm{x}}}'] \left|\mathop{\mathbb{E}}_{{\bm{s}}}
\left[ f(\tilde{{\bm{x}}}) - f(\tilde{{\bm{x}}}') | \tilde{{\bm{x}}} \neq \tilde{{\bm{x}}}'\right]
\right| \\
&\leq \frac{\|\delta\|_1}{2\lambda}\cdot 1
\end{split}
\end{equation}
where in the final step, we used Equation \ref{eq:union_bound}, as well as the assumption that $f(\cdot) \in [0,1]$. Thus, by the definition of Lipschitz-continuity, $p$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm.
\end{proof}
It is important that we do \textbf{not} require that $s_i$'s be independent. (Note the union bound in Equation \ref{eq:union_bound}: the inequality holds regardless of the joint distribution of the components of ${\bm{s}}$, as long as each $s_i$ is uniform.) This allows us to develop a deterministic smoothing method below.
\subsection{Deterministic SSN (DSSN)}
If SSN is applied to quantized data\footnote{Note that standard image files such as ImageNet and CIFAR-10 are quantized, with all pixel values belonging to the set $\{0, 1/255, 2/255, ..., 255/255\}$. As \citet{carlini2017towards} notes, if a natural dataset is quantized, adversarial examples to this dataset must also be quantized (in order to be recognized/saved as valid data at all). Therefore, our assumption of quantized data is a rather loose constraint which applies to many domains considered in adversarial machine learning.} (e.g. images), we can use the fact that the noise vector ${\bm{s}}$ in Theorem \ref{thm:main_case_1} is \textit{not} required to have independently-distributed components to derive an efficient derandomization of the algorithm. In order to accomplish this, we first develop a quantized version of the SSN method, using input $\mathbf{x} \in [0,1]^d_{(q)}$ (i.e. $\mathbf{x}$ is a vector whose components belong to $\{0,1/q,...,1\}$). To do this, we simply choose each of our splitting values $s_i$ to be on one of the half-steps between possible quantized input values: ${\bm{s}} \in \left[0,2\lambda-1/q\right]_{(q)}^d + \mathbbm{1}/(2q)$. We also require that $2\lambda$ is a multiple of $1/q$ (in experiments, when comparing to randomized methods with continuous $\lambda$, we use $\lambda' = \floor{2\lambda q}/(2q)$). See Figure \ref{fig:split_randomization_1}-b.
\begin{corollary}[$\lambda \geq 0.5$ Case] \label{thm:main_case_1_quantized}
For any $f: \mathbb{R}^d \rightarrow [0,1]$, and $\lambda \geq 0.5$ (with $2\lambda$ a multiple of $1/q$), let ${\bm{s}} \in \left[0,2\lambda-1/q\right]_{(q)}^d + \mathbbm{1}/(2q)$ be a random variable with a fixed distribution such that:
\begin{equation}
s_i \sim {\mathcal{U}}_{(q)}\left(0,2\lambda-1/q\right) + 1/(2q), \,\,\,\, \forall i.
\end{equation}
Note that the components $s_1, ..., s_d$ are \textbf{not} required to be distributed independently from each other. Then, define:
\begin{align}
\tilde{\mathbf{x}}_i &:= \frac{\min(s_i,1) + \bm{1}_{\mathbf{x}_i > s_i}}{2},\,\,\,\, \forall i \label{eq:x_tilde_def_quantized}\\
p(\mathbf{x}) &:=\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{\mathbf{x}})\right].
\end{align}
Then, $p(.)$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm on the quantized domain $\mathbf{x} \in [0,1]^d_{(q)}$.
\end{corollary}
\begin{proof}
Consider two arbitrary quantized points $\mathbf{x}, \mathbf{x}'$ where $\delta=\mathbf{x}'-\mathbf{x}$. Again note that $ \max(\mathbf{x}_i, \mathbf{x}'_i) - \min(\mathbf{x}_i, \mathbf{x}'_i) = |\mathbf{x}_i'-\mathbf{x}_i|= |\delta_i|$. For a fixed vector ${\bm{s}}$, additionally note that
$\tilde{\mathbf{x}}_i = \tilde{\mathbf{x}}'_i$ unless $s_i$ falls between $\mathbf{x}_i$ and $\mathbf{x}'_i$ (i.e., unless $\min(\mathbf{x}_i, \mathbf{x}'_i) \leq s_i < \max(\mathbf{x}_i, \mathbf{x}'_i)$).
Note that $\delta_i$ must be a multiple of $1/q$, and that there are exactly $q\cdot |\delta_i|$ discrete values that $s_i$ can take such that the condition $\min(\mathbf{x}_i, \mathbf{x}'_i) \leq s_i < \max(\mathbf{x}_i, \mathbf{x}'_i)$ holds. This is out of $2\lambda q$ possible values over which $s_i$ is uniformly distributed. Thus, we have:
\begin{equation}
\Pr_{\bm{s}} [\tilde{\mathbf{x}}_i \neq \tilde{\mathbf{x}'}_i] = \frac{|\delta_i|}{2\lambda}
\end{equation}
The rest of the proof proceeds as in the continuous case (Theorem \ref{thm:main_case_1}).
\end{proof}
If we required that $s_i$'s be independent, an exact computation of $p(\mathbf{x})$ would have required evaluating $(2\lambda q)^d$ possible values of ${\bm{s}}$. This is not practical for large $d$. However, because we do not have this independence requirement, we can avoid this exponential factor. To do this, we first choose a single scalar splitting value $s_{\text{base}}$: each $s_i$ is then simply a constant offset of $s_{\text{base}}$. We proceed as follows:
First, before the classifier is ever used, we choose a single, fixed, arbitrary vector $\mathbf{v} \in [0,2\lambda-1/q]_{(q)}^d$. In practice, $\mathbf{v}$ is generated pseudorandomly when the classifier is trained, and the seed is stored with the classifier so that the same $\mathbf{v}$ is used whenever the classifier is used. Then, at test time, we sample a scalar variable as:
\begin{equation}
s_{\text{base}} \sim {\mathcal{U}}_{(q)}(0,2\lambda-1/q) + 1/(2q).
\end{equation}
Then, we generate each $s_i$ by simply adding the base variable $s_{\text{base}}$ to $v_i$:
\begin{equation}
\forall i, \,\,\,\,\,s_i := (s_{\text{base}} + v_i) \mod 2\lambda
\end{equation}
Note that the marginal distribution of each $s_i$ is $s_i \sim {\mathcal{U}}_{(q)}\left(0,2\lambda-1/q\right) + 1/(2q)$, which is sufficient for our provable robustness guarantee. In this scheme, the only source of randomness at test time is the single random scalar $s_{\text{base}}$, which takes on one of $2\lambda q$ values. We can therefore evaluate the exact value of $p(\mathbf{x})$ by simply evaluating $f(\tilde{\mathbf{x}})$ a total of $2\lambda q$ times, for each possible value of $s_{\text{base}}$. Essentially, by removing the independence requirement, the splitting method allows us to replace a $d$-dimensional noise distribution with a {\it one}-dimensional noise distribution. In quantized domains, this allows us to efficiently derandomize the SSN method without requiring exponential time. We call this resulting deterministic method \textbf{DSSN}.
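The full derandomized evaluation is then a short loop; the following sketch (our rendering of the procedure above, with illustrative names) computes the exact smoothed logits with $2\lambda q$ forward passes:
\begin{verbatim}
import numpy as np

def dssn_smoothed_logits(f, x, lam, q, v):
    # f maps a noisy input to a logit vector; v is the fixed,
    # seed-generated offset vector in [0, 2*lam - 1/q]_(q).
    n = int(round(2 * lam * q))            # all possible s_base values
    acc = 0.0
    for j in range(n):
        s_base = j / q + 1.0 / (2 * q)     # half-steps between levels
        s = (s_base + v) % (2.0 * lam)     # marginally uniform, correlated
        x_tilde = (np.minimum(s, 1.0) + (x > s).astype(float)) / 2.0
        acc = acc + f(x_tilde)
    return acc / n                         # exact expectation, no sampling
\end{verbatim}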
One may wonder why we do not simply use $s_1 = s_2 = s_3...= s_d$. While this can work, it leads to some undesirable properties when $\lambda > 0.5$. In particular, note that with probability $(2\lambda -1)/(2\lambda)$, we would have all splitting values $s_i > 1$. This means that every element $\tilde{x}_i$ would be 0.5. In other words, with probability $(2\lambda -1)/ (2\lambda)$, $\tilde{{\bm{x}}} = 0.5 \cdot \mathbbm{1}$. This restricts the expressivity of the smoothed classifier:
\begin{equation}
p(\mathbf{x}) = \frac{2\lambda -1}{2\lambda} f(0.5 \cdot \mathbbm{1}) +\frac{1}{2\lambda}\mathop{\mathbb{E}}_{{\bm{s}} < 1}\left[ f(\tilde{\mathbf{x}})\right].
\end{equation}
This is the sum of a constant, and a function bounded in $[0,1/(2\lambda)]$. Clearly, this is undesirable. By contrast, if we use an offset vector $\mathbf{v}$ as described above, not every component will have $s_i > 1$ simultaneously. This means that $\tilde{{\bm{x}}}$ will continue to be sufficiently expressive over the entire distribution of $s_{\text{base}}$.
\subsection{Relationship to Uniform Additive Smoothing} \label{sec:uas_compare}
In this section, we explain the relationship between SSN and uniform additive smoothing \citep{pmlr-v119-yang20c} with two main objectives:
\begin{enumerate}
\item We show that, for each element $x_i$, the \textit{marginal} distributions of the noisy element $\tilde{x}_i$ of SSN and the noisy element $(x_i + \epsilon_i)$ of uniform additive smoothing are directly related to one another. However we show that, for large $\lambda$, the distribution of uniform additive smoothing $(x_i + \epsilon_i)$ has an undesirable property which SSN avoids. This creates large empirical improvements in certified robustness using SSN, demonstrating an additional advantage to our method separate from derandomization.
\item We show that additive uniform noise does \textit{not} produce correct certificates when using arbitrary joint distributions of $\epsilon$. This means that it cannot be easily derandomized in the way that SSN can.
\end{enumerate}
\subsubsection{Relationship between Marginal Distributions of $\tilde{x}_i$ and $(x_i + \epsilon_i)$ } \label{sec:marg_distrib}
To see the relationship between uniform additive smoothing and SSN, we break the marginal distributions of each component of noised samples into cases (assuming $\lambda \geq 0.5$):
\begin{equation} \label{eq:uniform_additive_cases}
x_i + \epsilon_i \sim \begin{cases}
{\mathcal{U}}(x_i-\lambda,1-\lambda) &\text{ w. prob. } \frac{1-x_i}{2\lambda} \\ {\mathcal{U}}(1-\lambda,\lambda) &\text{ w. prob. } \frac{2\lambda - 1}{2\lambda} \\
{\mathcal{U}}(\lambda,x_i+\lambda) &\text{ w. prob. } \frac{x_i}{2\lambda}
\\
\end{cases}
\end{equation}
\begin{equation}
\tilde{x}_i \sim \begin{cases}
\,\,\,\frac{{\mathcal{U}}(x_i,1)}{2} &\text{ w. prob. } \frac{1-x_i}{2\lambda}\\
\,\,\,\,\,\,\,\,\,\, \frac{1}{2} &\text{ w. prob. } \frac{2\lambda - 1}{2\lambda} \\
\frac{{\mathcal{U}}(1,x_i+1) }{2} &\text{ w. prob. } \frac{x_i}{2\lambda}
\end{cases}
\end{equation}
We can see that there is a clear correspondence (which also justifies our re-use of the parameter $\lambda$.) In particular, we can convert the marginal distribution of uniform additive noise to the marginal distribution of SSN by applying a simple mapping: $\tilde{x}_i \sim g(x_i + \epsilon_i )$ where:
\begin{equation} \label{eq:noise_mapping}
g(z) := \begin{cases}
\frac{z+\lambda}{2} \quad &\text{ if } z < 1-\lambda\\
\frac{1}{2} \quad &\text{ if } 1-\lambda<z< \lambda\\
\frac{z-\lambda + 1}{2} \quad &\text{ if } z>\lambda \\
\end{cases}
\end{equation}
For $\lambda = 0.5$, this is a simple affine transformation:
\begin{equation}
\tilde{x}_i \sim 1/2 (x_i + \epsilon_i) + 1/4 \label{eq:equivalence_lambda_half}
\end{equation}
In other words, in the case of $\lambda = 0.5$, $\tilde{x}_i$ is also uniformly distributed. However, for $\lambda > 0.5$, Equation \ref{eq:uniform_additive_cases} reveals an unusual and undesirable property of using uniform additive noise: \textit{regardless of the value of $x_i$}, there is always a fixed probability $\frac{2\lambda-1}{2\lambda}$ that the smoothed value $x_i + \epsilon_i$ is uniform on the interval $[1-\lambda, \lambda]$. Furthermore, this constant probability represents the only case in which $(x_i + \epsilon_i)$ can assume values in this interval. These values therefore carry no information about $x_i$ and are all equivalent to each other. However, if $\lambda$ is large, this range dominates the total range of values of $x_i + \epsilon_i$ which are observed (See Figure \ref{fig:compare_representations}-b.)
By contrast, in SSN, while there is still a fixed $\frac{2\lambda-1}{2\lambda}$ probability that the smoothed component $\tilde{x}_i$ assumes a ``no information'' value, this value is always \textit{fixed} ($\tilde{x}_i=1/2$). Empirically, this dramatically improves performance when $\lambda$ is large. Intuitively, this is because when using uniform additive smoothing, the base classifier must \textit{learn to ignore} a very wide range of values (all values in the interval $[1-\lambda, \lambda]$) while in SSN, the base classifier only needs to learn to ignore a specific constant ``no information'' value $1/2$. See Figure \ref{fig:compare_representations} for a visual comparison of the two noise representations.\footnote{Note that this use of a ``no information'' value bears some similarity to the ``ablation'' value in \citet{levine2020robustness}, a randomized smoothing defense for $\ell_0$ adversarial attacks.}
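For reference, the mapping of Equation \ref{eq:noise_mapping} is trivial to implement (our sketch); it makes explicit how the entire ``no information'' band collapses to the single value $1/2$:
\begin{verbatim}
def g(z, lam):
    # Map a uniform-additive noisy value to the SSN marginal (lam >= 0.5).
    if z < 1.0 - lam:
        return (z + lam) / 2.0
    if z > lam:
        return (z - lam + 1.0) / 2.0
    return 0.5   # the whole band [1 - lam, lam] collapses to 1/2
\end{verbatim}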
\begin{figure}[t]
\centering
\includegraphics[width=0.30\textwidth]{Compare_representation_ranges-4.pdf}
\caption{Range of noise values possible for each sample feature $x_i$, under (a) SSN, for any $\lambda \geq 0.5$ and (b) uniform additive smoothing, $\lambda = 1.5$. Possible pairs of clean and noise values are shown in grey (both light and dark). In uniform additive smoothing, note that all values of $x_i+\epsilon_i$ in the range [-0.5,1.5], shown in dark grey, can correspond to \textit{any} value of $x_i$. This means that these values of $x_i+\epsilon_i$ carry no information about $x_i$ whatsoever. By contrast, using SSN, only the value $\tilde{x}_i = 1/2$ has this property.}
\label{fig:compare_representations}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{independence_3.pdf}
\caption{Comparison of independent uniform additive noise, correlated uniform additive noise, and correlated SSN, in $\mathbb{R}^2$ for $\lambda = 0.5$. In all figures, the blue and red points represent points ${\bm{x}}$ and ${\bm{x}}'$ and the black border represents the range $[0,1]^2$. (a) Distributions of ${\bm{x}}+\epsilon$ and ${\bm{x}}'+\epsilon$ for independent uniform additive noise. The robustness guarantee relies on the significant overlap of the shaded regions, representing the sampled distributions. Note that by Equation \ref{eq:equivalence_lambda_half}, these are also the distributions of $2\tilde{{\bm{x}}} - \mathbbm{1}/2$ and $2\tilde{{\bm{x}}}' - \mathbbm{1}/2$ using SSN with $s_1$ and $s_2$ distributed independently. (b) Using correlated additive noise ($\epsilon_1 = \epsilon_2$) does \textit{not} produce an effective robustness certificate: the sampled distributions ${\bm{x}}+\epsilon$ and ${\bm{x}}'+\epsilon$ (blue and red lines) do not overlap. (c) Using correlated splitting noise ($s_1 = s_2$) produces an effective robustness certificate, because the distributions of $\tilde{{\bm{x}}}$ and $\tilde{{\bm{x}}}'$ overlap significantly. Here, for consistency in scaling, we show the distributions of $2\tilde{{\bm{x}}} - \mathbbm{1}/2$ and $2\tilde{{\bm{x}}}' - \mathbbm{1}/2$ (blue line and red line), with the overlap shown as purple. Note that this is a {\it one-dimensional} smoothing distribution, and therefore can be efficiently derandomized.}
\label{fig:l1_indep}
\end{figure}
\subsubsection{Can Additive Uniform Noise Be Derandomized?}
As shown above, in the $\lambda=0.5$ case, SSN leads to marginal distributions which are simple affine transformations of the marginal distributions of the uniform additive smoothing. One might then wonder whether we can derandomize additive uniform noise in a way similar to DSSN. In particular, one might wonder whether arbitrary joint distributions of $\epsilon$ can be used to generate valid robustness certificates with uniform additive smoothing, in the same way that arbitrary joint distributions of ${\bm{s}}$ can be used with SSN. It turns out that this is not the case. We provide a counterexample (for $\lambda = 0.5$) below:
\begin{proposition} \label{prop:uniform_broken}
There exists a base classifier $f: \mathbb{R}^2 \rightarrow [0,1]$ and a joint probability distribution ${\mathcal{D}}$, such that $\epsilon_1,\epsilon_2 \sim {\mathcal{D}}$ has marginals $\epsilon_1 \sim {\mathcal{U}}(-0.5,0.5)$ and $\epsilon_2 \sim {\mathcal{U}}(-0.5,0.5)$ where for
\begin{equation}
p({\bm{x}}) := \mathop{\mathbb{E}}_{\epsilon \sim{\mathcal{D}} } \left[f({\bm{x}} + \epsilon)\right],
\end{equation}
$p(.)$ is \textbf{not} $1$-Lipschitz with respect to the $\ell_1$ norm.
\end{proposition}
\begin{proof}
Consider the base classifier $f({\bm{z}}) := \bm{1}_{z_1 > 0.4 + z_2 }$, and let $\epsilon$ be distributed as $\epsilon_1 \sim {\mathcal{U}}(-0.5,0.5)$ and $\epsilon_2 = \epsilon_1$. Consider the points ${\bm{x}} = [0.8,0.2]^T$ and ${\bm{x}}' = [0.6,0.4]^T$. Note that $\|\delta\|_1 = 0.4$. However,
\begin{equation}
\begin{split}
p({\bm{x}}) = \mathop{\mathbb{E}}_{\epsilon}[f({\bm{x}}+\epsilon)] = \mathop{\mathbb{E}}_{\epsilon_1}[f(.8+\epsilon_1, .2+\epsilon_1) ] = 1 \\
p({\bm{x}}') = \mathop{\mathbb{E}}_{\epsilon}[f({\bm{x}}'+\epsilon)] = \mathop{\mathbb{E}}_{\epsilon_1}[f(.6+\epsilon_1, .4+\epsilon_1) ] = 0 \\
\end{split}
\end{equation}
Thus, $|p({\bm{x}})-p({\bm{x}}')| > \|\delta\|_1$.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{split_randomization_3-2.pdf}
\caption{Example of $\tilde{x}_i$ in the $\lambda < 0.5$ case. In this case, the interval $[0,1]$ is split into sub-intervals $[0,s_i]$, $(s_i,s_i+2\lambda]$, and $(s_i+2\lambda, 1]$. $\tilde{x}_i$ is assigned to the middle of whichever of these intervals $x_i$ falls into.}
\label{fig:split_randomization_small}
\end{figure}
In the appendix, we provide intuition for this, by demonstrating that despite having similar \textit{marginal} distributions, the \textit{joint} distributions of $\tilde{{\bm{x}}}$ and $({\bm{x}} + \epsilon)$ which can be generated by SSN and additive uniform noise, respectively, are in fact quite different. An example is shown in Figure \ref{fig:l1_indep}.
\subsection{General Case, including $\lambda < 0.5$}
In the case $\lambda < 0.5$, we split the $[0,1]$ interval not only at $s_i \in [0,2\lambda]$, but also at every value $s_i+2\lambda n$, for $n \in {\mathbb{N}}$. An example is shown in Figure \ref{fig:split_randomization_small}. Note that this formulation covers the $\lambda \geq 0.5$ case as well (the splits for $n \geq 1$ are simply not relevant).
\setcounter{theorem}{\thetheorem-1}
\begin{theorem}[General Case] \label{thm:main_case_2}
For any $f: \mathbb{R}^d \rightarrow [0,1]$, and $\lambda> 0$ let ${\bm{s}} \in [0,2\lambda]^d$ be a random variable, with a fixed distribution such that:
\begin{equation}
s_i \sim {\mathcal{U}}(0,2\lambda), \,\,\,\, \forall i.
\end{equation}
Note that the components $s_1, ..., s_d$ are \textbf{not} required to be distributed independently from each other. Then, define:
\begin{align}
\tilde{x}_i &:=
\frac{ \min(2\lambda \ceil{\frac{x_i - s_i}{2\lambda} } + s_i, 1) }{2} \\
&+ \frac{\max(2\lambda \ceil{\frac{x_i - s_i}{2\lambda} -1} + s_i, 0)}{2}\
,\,\,\,\, \forall i \\
p({\bm{x}}) &:=\mathop{\mathbb{E}}_{{\bm{s}}}\left[ f(\tilde{{\bm{x}}})\right].
\end{align}
Then, $p(.)$ is $1/(2\lambda)$-Lipschitz with respect to the $\ell_1$ norm.
\end{theorem}
The proof for this case, as well as its derandomization, are provided in the appendix. As with the $\lambda \geq 0.5$ case, the derandomization allows for $p({\bm{x}})$ to be computed exactly using $2\lambda q$ evaluations of $f$.
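For completeness, the general-case noisy input can be evaluated in closed form for any splitting vector (our sketch of the expression in Theorem \ref{thm:main_case_2}):
\begin{verbatim}
import numpy as np

def ssn_sample_general(x, s, lam):
    # Splits occur at s_i + 2*lam*n for all integers n, clipped to [0, 1];
    # valid for any lam > 0 (reduces to the lam >= 0.5 case).
    k = np.ceil((x - s) / (2.0 * lam))   # index of the split just above x_i
    upper = np.minimum(2.0 * lam * k + s, 1.0)
    lower = np.maximum(2.0 * lam * (k - 1.0) + s, 0.0)
    return (upper + lower) / 2.0
\end{verbatim}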
\section{Experiments}
We evaluated the performance of our method on CIFAR-10 and ImageNet datasets, matching all experimental conditions from \cite{pmlr-v119-yang20c} as closely as possible (further details are given in the appendix). Certification performance data is given in Table \ref{tab:cifar_results} for CIFAR-10 and Figure \ref{fig:imagenet} for ImageNet.
Note that instead of using the hyperparameter $\lambda$, we report experimental results in terms of $\sigma = \lambda/\sqrt{3}$: this is to match \cite{pmlr-v119-yang20c}, where this gives the standard deviation of the uniform noise.
We find that DSSN significantly outperforms \citet{pmlr-v119-yang20c} on both datasets, particularly when certifying for large perturbation radii. For example, at $\rho=4.0$, DSSN provides a 36\% certified accuracy on CIFAR-10, while uniform additive noise provides only 27\% certified accuracy.
In addition to these numerical improvements, DSSN certificates are \textit{exact} while randomized certificates hold only with \textit{high-probability}. Following \citet{pmlr-v119-yang20c}, all certificates reported here for randomized methods hold with $99.9\%$ probability: there is no such failure rate for DSSN.
Additionally, the certification runtime of DSSN is reduced compared to randomized methods. Although, in contrast to \citet{pmlr-v119-yang20c}, our certification time depends on the noise level, the fact that \citet{pmlr-v119-yang20c} uses 100,000 smoothing samples makes our method much faster even at the largest tested noise levels. For example, on CIFAR-10 at $\sigma=3.5$, using a single NVIDIA 2080 Ti GPU, we achieve an average runtime of 0.41 seconds per image, while \citet{pmlr-v119-yang20c}'s method requires 13.44 seconds per image.
\citet{pmlr-v119-yang20c} tests using both standard training on noisy samples as well as stability training \citep{li2019certified}: while our method dominates in both settings, we find that the stability training leads to less of an improvement in our methods, and is in some cases detrimental. For example, in Table \ref{tab:cifar_results}, the best certified accuracy is always higher under stability training for uniform additive noise, while this is not the case for DSSN at $\rho < 3.0$. Exploring the cause of this may be an interesting direction for future work.\footnote{On CIFAR-10, \citet{pmlr-v119-yang20c} also tests using semi-supervised and transfer learning approaches which incorporate data from other datasets. We consider this beyond the scope of this work where we consider only the supervised learning setting.}
In Figure \ref{fig:random_derandom}, we compare the uniform additive smoothing method to DSSN, as well as the \textit{randomized} form of SSN with independent splitting noise. At mid-range noise levels, the primary benefit of our method is due to derandomization, while at large noise levels, the differences in noise representation discussed in Section \ref{sec:marg_distrib} become more relevant. In the appendix, we provide complete certification data at all tested noise levels, using both DSSN and SSN with independent noise, as well as more runtime data. Additionally, we further explore the effect of the noise representation: given that Equation \ref{eq:noise_mapping} shows a simple mapping between (the marginal distributions of) SSN and uniform additive noise, we tested whether the gap in performance due to noise representations can be eliminated by a ``denoising layer'', as trained in \cite{salman2020denoised}. We did not find evidence of this: the gap persists even when using denoising.
\begin{table*}[]
\small
\begin{tabular}{|c|l|l|l|l|l|l|l|l|}
\hline
&$\rho = $0.5&$\rho = $1.0&$\rho = $1.5&$\rho = $2.0&$\rho = $2.5&$\rho = $3.0&$\rho = $3.5&$\rho = $4.0\\
\hline
Uniform&70.54\%&58.43\%&50.73\%&43.16\%&33.24\%&25.98\%&20.66\%&17.12\%\\
Additive Noise&(83.97\%&(78.70\%&(73.05\%&(73.05\%&(69.56\%&(62.48\%&(53.38\%&(53.38\%\\
&@ $\sigma$=0.5)&@ $\sigma$=1.0)&@ $\sigma$=1.75)&@ $\sigma$=1.75)&@ $\sigma$=2.0)&@ $\sigma$=2.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)\\
\hline
Uniform&71.09\%&60.36\%&52.86\%&47.08\%&42.26\%&38.55\%&33.76\%&27.12\%\\
Additive Noise&(78.79\%&(74.27\%&(65.88\%&(63.32\%&(57.49\%&(57.49\%&(57.49\%&(57.49\%\\
(+Stability Training)&@ $\sigma$=0.5)&@ $\sigma$=0.75)&@ $\sigma$=1.5)&@ $\sigma$=1.75)&@ $\sigma$=2.5)&@ $\sigma$=2.5)&@ $\sigma$=2.5)&@ $\sigma$=2.5)\\
\hline
{\bf DSSN - Our Method}&\textbf{72.25\%}&\textbf{63.07\%}&\textbf{56.21\%}&\textbf{51.33\%}&\textbf{46.76\%}&42.66\%&38.26\%&33.64\%\\
&(81.50\%&(77.85\%&(71.17\%&(67.98\%&(65.40\%&(65.40\%&(65.40\%&(65.40\%\\
&@ $\sigma$=0.75)&@ $\sigma$=1.25)&@ $\sigma$=2.25)&@ $\sigma$=3.0)&@ $\sigma$=3.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)&@ $\sigma$=3.5)\\
\hline
{\bf DSSN - Our Method}&71.23\%&61.04\%&54.21\%&49.39\%&45.45\%&\textbf{42.67\%}&\textbf{39.46\%}&\textbf{36.46\%}\\
(+Stability Training)&(79.00\%&(71.29\%&(66.04\%&(64.26\%&(59.88\%&(57.16\%&(56.29\%&(54.96\%\\
&@ $\sigma$=0.5)&@ $\sigma$=1.0)&@ $\sigma$=1.5)&@ $\sigma$=1.75)&@ $\sigma$=2.5)&@ $\sigma$=3.0)&@ $\sigma$=3.25)&@ $\sigma$=3.5)\\
\hline
\end{tabular}
\caption{Summary of results for CIFAR-10. Matching \citet{pmlr-v119-yang20c}, we test on 15 noise levels ($\sigma \in \{0.15, 0.25n \text{ for }1 \leq n \leq 14\}$). We report the best certified accuracy at a selection of radii $\rho$, as well as the clean accuracy and noise level of the associated classifier. Our method dominates at all radii, although stability training seems to be less useful for our method. Note that these statistics are based on reproducing \citet{pmlr-v119-yang20c}'s results; they are all within $\pm 1.5$ percentage points of \citet{pmlr-v119-yang20c}'s reported statistics.
\label{tab:cifar_results}}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth, trim={0 0.3cm 0 0.3cm},clip]{imagenet_v2.png}
\caption{Results on ImageNet. We report results at three noise levels, with and without stability training. Our method dominates in all settings; however, especially at large noise, stability training seems to \textit{hurt} our clean accuracy, rather than help it.}
\label{fig:imagenet}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.98\textwidth,trim={0 0.2cm 0 0.4cm},clip]{compare_random_derandom_v2.png}
\caption{ Comparison on CIFAR-10 of additive smoothing \citep{pmlr-v119-yang20c} to DSSN, as well as SSN with \textit{random, independent} splitting noise, using the estimation scheme from \cite{pmlr-v119-yang20c}.
At very small levels of noise ($\sigma = 0.15$), there is little difference between the methods: in fact, with stability training, additive smoothing slightly outperforms DSSN. At intermediate noise levels, additive noise and independent SSN perform very similarly, but DSSN outperforms both. This suggests that, at this level, the primary benefit of DSSN is to eliminate estimation error (Section \ref{sec:yang}). At high noise levels, the largest gap is between additive noise and independent SSN, suggesting that in this regime, most of the performance benefits of DSSN are due to improved base classifier performance (Section \ref{sec:marg_distrib}).}
\label{fig:random_derandom}
\end{figure*}
\section{Conclusion}
In this work, we have improved the state-of-the-art smoothing-based robustness certificate for the $\ell_1$ threat model, and provided the first scalable, general-use derandomized ``randomized smoothing'' certificate for a norm-based adversarial threat model. To accomplish this, we proposed a novel \textit{non-additive} smoothing method. Determining whether such methods can be extended to other $\ell_p$ norms remains an open question for future work.
\section*{Acknowledgements}
This project was supported in part by NSF CAREER AWARD 1942230, HR00111990077, HR001119S0026, NIST 60NANB20D134 and Simons Fellowship on ``Foundations of Deep Learning.''
\section{Introduction}
Berry curvature is of fundamental importance for understanding some basic properties of
solid materials and is essential for the description of the dynamics of
Bloch electrons~\cite{Di2010,vanderbilt_2018}.
It acts as a magnetic field in momentum space, which leads to some anomalous transport effects,
including the first order\cite{Jungwirth2002,Onoda2002,Fang2003,Yao2004,Nagaosa2010,Di2010,Weng2015},
and second order\cite{Sodemann2015,Zhang20181,Zhang20187,You2018,Ma2019} anomalous Hall effects (AHE).
It may introduce shift current in polar materials \cite{Sipe2000,Young2012,Tan2016,Wang2019}.
Berry curvature also plays a crucial role in the classification of topological materials
\cite{Hasan2010,Qi2011,Kane2005}.
The first-principles calculations of the Berry curvature
and the anomalous Hall conductivity (AHC) have been reviewed in Ref.~\cite{Gradhand2012}.
In particular, the Berry curvature and AHC have been calculated via the Kubo formula \cite{Fang2003,Yao2004}.
However, this method requires calculating band structures at very dense $k$ points and summing over a large number of unoccupied states to converge the results. Therefore, the computational cost is very high, even for simple materials.
Vanderbilt and co-workers developed a very efficient interpolation scheme \cite{Wang2006} to calculate the Berry curvature and AHC \cite{Sundaram1999,Adams1959} based on maximally
localized Wannier functions (MLWFs)\cite{Marzari1997,Souza2001}, which has been demonstrated to calculate the AHC to very high accuracy with only a tiny fraction of the time of the original Kubo formula.
However, it is not always easy to construct high-quality MLWFs for complex systems. More seriously, in some cases, the MLWFs may not respect the point group symmetries of the crystal.
Breaking the symmetry may lead to qualitatively incorrect results. Special attention has to be paid to construct the symmetry-adapted MLWFs \cite{Souza2001,Sakuma2013,Thygesen2005}.
Recently, Lee et al. derived the Kubo formula of optical matrices as well as the Berry curvature
using nonorthogonal numerical atomic orbitals (NAOs)~\cite{Lee2018}. Wang et al. also derived the formula of the Berry connection and its higher order derivatives based on NAOs~\cite{Wang2019}. The NAOs are strictly localized and, more importantly, have spherical symmetries;
therefore, no extra effort is needed to construct the symmetry-adapted MLWFs. This is of great advantage for applications in complex systems.
In this work, we derive the full formula to calculate the Berry curvature based on NAOs~\cite{Chen2010}. We show that in the full derivation of the Berry curvature, there are correction terms to the Kubo formula\cite{Lee2018}
on the NAO bases.
Since the number of NAOs is usually larger than that of the MLWFs,
we use an orbital contraction technique to reduce the number of NAOs,
which can greatly improve the calculation efficiency without significantly reducing the calculation accuracy.
The correction terms are small if a large NAO basis is used; however, for the reduced NAO bases, the corrections are not negligible in some cases.
The formula developed in this work can be directly applied to the non-orthogonal generalized Wannier functions,
which, if constructed properly, are typically more localized than the orthogonal Wannier functions \cite{he2001}.
The rest of the paper is organized as follows. In Sec.~\ref{sec:formula}, we present the derivation of Berry curvature for non-orthogonal NAOs, followed by detailed benchmark calculations of Berry curvature for BaTiO$_3$ and AHC for Fe in
Sec.~\ref{sec:results}.
In Sec.~\ref{sec:reduce_NAO}, we introduce a technique to reduce the number of NAOs
to accelerate Berry curvature calculations. We summarize in Sec.~\ref{sec:summary}.
\section{Berry curvature in non-orthogonal atomic bases}
\label{sec:formula}
\subsection{Derivation of Berry curvature formula}
The derivation of the Berry curvature formula expressed in a non-orthogonal NAO basis
is similar to that for the orthogonal Wannier bases, but there are also some considerable differences.
The derivation is transparent even for readers who are not familiar with Wannier functions.
We start from the definition of the Berry curvature \cite{vanderbilt_2018},
\begin{equation}
\mathbf{\Omega}_n(\mathbf{k}) = \nabla \times \mathbf{A}_n(\mathbf{k})\, ,
\end{equation}
where,
\begin{equation}
\mathbf{A}_n = i\langle u_{n\mathbf{k}}|\nabla_{\mathbf{k}}|u_{n\mathbf{k}}\rangle
\end{equation}
is the Berry connection, and $u_{n\mathbf{k}}$ are the cell-periodic Bloch functions,
whose expression on a NAO basis is given by Eq.~(\ref{eq:u_eq}) in the Appendix.
More generally,
one may extend the above
definitions of the Berry connection and Berry curvature to the multi-band case,
\begin{equation}
A_{n m, \alpha}(\mathbf{k})=i\left\langle u_{n\mathbf{k}} \mid \partial_{\alpha} u_{m\mathbf{k}}\right\rangle\, ,
\label{eq:berryconnection}
\end{equation}
and
\begin{eqnarray}
{\Omega}_{nm,\alpha\beta}(\mathbf{k})
&=& \partial_\alpha\mathbf{A}_{nm,\beta}(\mathbf{k}) - \partial_\beta\mathbf{A}_{nm,\alpha}(\mathbf{k}) \nonumber \\
&=& i\langle\partial_\alpha u_{n\mathbf{k}}|\partial_\beta u_{m\mathbf{k}}\rangle - i\langle\partial_\beta u_{n\mathbf{k}}|\partial_\alpha u_{m\mathbf{k}}\rangle\, ,
\label{eq:omega_m}
\end{eqnarray}
where $\partial_{\alpha}=\partial / \partial k_{\alpha}$.
Substituting the cell-periodic wave functions Eq.~(\ref{eq:u_eq}) into Eq.~(\ref{eq:omega_m}), we have,
\begin{eqnarray}
& &i\langle\partial_\alpha u_{n\mathbf{k}}|\partial_\beta u_{m\mathbf{k}}\rangle\nonumber \\
&=&i\sum_{\nu,\mu}C_{n\nu}^{*}C_{m\mu}\sum_{\vec{R}}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{R}}\langle\mathbf{0}\nu|-r_\alpha(R_\beta-r_\beta)|\mathbf{R}\mu\rangle \nonumber \\
&+& i\sum_{\nu,\mu}\left(\partial_\alpha C_{n\nu}^{*}\right)S_{\nu\mu}\left(\partial_\beta C_{m\mu}\right) \nonumber \\
&+& \sum_{\nu,\mu}\left(\partial_\alpha C_{n\nu}^{*}\right)C_{m\mu}\sum_{\mathbf{R}}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{R}}\langle\mathbf{0}\nu|r_\beta-R_\beta|\mathbf{R}\mu\rangle \nonumber \\
&-& \sum_{\nu,\mu}C_{n\nu}^{*}\left(\partial_\beta C_{m\mu}\right)\sum_{\mathbf{R}}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{R}}\langle\mathbf{0}\nu|r_\alpha|\mathbf{R}\mu\rangle \label{uu_eq}
\end{eqnarray}
To simplify the above equation, we introduce a dipole matrix $A^R_\alpha$, as follows,
\begin{equation}\label{eq:dipole}
A^R_{\nu\mu,\alpha}(\mathbf{k}) = \sum_{\mathbf{R}}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{R}}\langle\mathbf{0}\nu|r_\alpha|\mathbf{R}\mu\rangle\, ,
\end{equation}
where the superscript ``$R$'' in $A^R$ indicates that the dipole matrix is summed over the lattice vectors ${\bf R}$.
This quantity is similar to the $A^{W}_{\nu\mu,\alpha}$ in Ref.~\cite{Wang2006}.
While it is somewhat cumbersome to calculate the dipole matrix in the Wannier bases,
$A^R_{\nu\mu,\alpha}$ can be easily calculated by two-center integrals on the NAO bases
by taking advantage of the spherical symmetry of the NAOs\cite{LI2016503,siesta-ref}.
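Schematically, once the real-space blocks $\langle\mathbf{0}\nu|r_\alpha|\mathbf{R}\mu\rangle$ are tabulated, assembling Eq.~(\ref{eq:dipole}) is a plain lattice Fourier sum (a sketch with an assumed data layout, not the actual ABACUS implementation):
\begin{verbatim}
import numpy as np

def dipole_matrix_k(A_real, k):
    # A_real maps a lattice vector R (3-tuple) to the (n_orb x n_orb)
    # block <0 nu| r_alpha |R mu>; returns A^R_alpha(k).
    return sum(np.exp(1j * np.dot(k, np.asarray(R))) * block
               for R, block in A_real.items())
\end{verbatim}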
One part of the contribution to the Berry curvature is from the dipole matrix,
\begin{equation}
\bar{A}_{nm,\alpha} = C_n^\dagger A^R_\alpha C_m \, ,
\end{equation}
which is due to the lack of inversion symmetry of the crystal.
The other contribution to the Berry curvature comes from the change of the Bloch wave function
coefficient $C_{n}({\bf k})$ with ${\bf k}$.
Following Ref.~\cite{Wang2006}, we may also introduce a $\mathbf{D}$ matrix in the non-orthogonal NAO bases, as follows,
\begin{equation}
D_{nm,\alpha} = C_n^\dagger S\left(\partial_\alpha C_m\right) \, .
\end{equation}
There are useful relations between $\mathbf{\bar{A}}$ and $\mathbf{\bar{A}}^\dagger$, and between $\mathbf{D}$ and $\mathbf{D}^\dagger$; the proofs are given in the Appendix.
\begin{eqnarray}
\bar{A}_{nm,\alpha} - (\bar{A}^\dagger)_{nm,\alpha} &=& -i\bar{S}_{nm,\alpha} \label{eq:relation1}\\
D_{nm,\alpha} + (D^\dagger)_{nm,\alpha} &=& -\bar{S}_{nm,\alpha} \label{eq:relation2}
\end{eqnarray}
where
\begin{equation}
\bar{S}_{nm,\alpha} = C_n^\dagger\left(\partial_\alpha S\right)C_m \, .
\end{equation}
For the orthogonal Wannier bases, where $\bar{S}_{nm,\alpha}=0$, we have $\bar{A}_{nm,\alpha} = (\bar{A}^\dagger)_{nm,\alpha}$,
and $D_{nm,\alpha} = -(D^\dagger)_{nm,\alpha}$, i.e., $\bar{\bf A}$ is Hermitian, and ${\bf D}$ is anti-Hermitian
in the orthogonal Wannier bases.
It is easy to show that the Berry connection $\mathbf{A}_{mn}=i{\bf D}_{mn}+{\bf \bar{A}}^\dagger_{mn}$.
We can simplify Eq.~(\ref{uu_eq}) by inserting the identity matrix,
\begin{equation}
I = \sum_{n}C_{n}C_{n}^\dagger S = \sum_{n}SC_nC_n^\dagger \, .
\end{equation}
For example, for the second term on the right side of Eq.~(\ref{uu_eq}), we have,
\begin{eqnarray}
& i\sum_{\nu,\mu}\left(\partial_\alpha C_{n\nu}^{*}\right)S_{\nu\mu}\left(\partial_\beta C_{m\mu}\right) \nonumber\\
&= i\left(\partial_\alpha C_n^\dagger\right)S\left(\partial_\beta C_m\right) \nonumber \\
&= i\sum_{l}\left(\partial_\alpha C_n^\dagger\right)SC_lC_l^\dagger S\left(\partial_\beta C_m\right) \nonumber \\
&= i\sum_{l}(D^{\dagger})_{nl,\alpha} D_{lm,\beta}\, .
\end{eqnarray}
After some derivation, we obtain the formula of the Berry curvature in the non-orthogonal NAO bases,
\begin{eqnarray}
\Omega_{nm,\alpha\beta} &= \bar{\Omega}_{nm,\alpha\beta} + i\left(D^\dagger_\alpha D_\beta-D^\dagger_\beta D_\alpha\right)_{nm} \nonumber \\
& +\left(D^\dagger_\alpha\bar{A}^\dagger_\beta+\bar{A}_\beta D_\alpha\right)_{nm} -\left(D^\dagger_\beta\bar{A}^\dagger_\alpha+\bar{A}_\alpha D_\beta\right)_{nm} \, ,
\label{eq:origin_curv}
\end{eqnarray}
where
\begin{equation}
\label{eq:omega_bar}
\bar{\Omega}_{nm,\alpha\beta} = i\sum_{\nu,\mu}C_{n\nu}^{*}C_{m\mu}\sum_{\mathbf{R}}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{R}}\langle\mathbf{0}\nu|r_\beta R_\alpha-r_\alpha R_\beta|\mathbf{R}\mu\rangle \, .
\end{equation}
We note that Eq.~(\ref{eq:origin_curv}) is very similar to Eq.~(27) in Ref.~\cite{Wang2006}, but also with notable differences. Because $\bar{\bf A}$ is not Hermitian, and ${\bf D}$ is not anti-Hermitian for the non-orthogonal NAO bases, we cannot write the equation in the form of commutators as Eq.~(27) in Ref.~\cite{Wang2006}.
\subsection{Calculation of $\mathbf{D}$ matrix}
\label{sec:IIB}
From the linear response theory given in Appendix A, one obtains\cite{Wang2019} (for $m \neq n$),
\begin{equation}
D_{nm,\alpha} =
\frac{\bar{H}_{nm,\alpha}-E_{m\mathbf{k}} \bar{S}_{nm,\alpha}}{E_{m\mathbf{k}}-E_{n\mathbf{k}}} \, ,
\label{eq:d-matrix}
\end{equation}
where
\begin{equation}
\bar{H}_{nm,\alpha} = C_n^\dagger\left(\partial_\alpha H\right)C_m \, .
\end{equation}
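In practice, given the derivative matrices $\partial_\alpha H$ and $\partial_\alpha S$ in the NAO basis, the off-diagonal part of ${\bf D}$ follows directly from Eq.~(\ref{eq:d-matrix}); the following NumPy sketch (ours, with illustrative names) shows the structure of the evaluation:
\begin{verbatim}
import numpy as np

def d_matrix(C, dH, dS, E):
    # Columns of C are eigenvectors; E holds the band energies.
    # The diagonal is fixed separately by the gauge choice (see below).
    Hbar = C.conj().T @ dH @ C
    Sbar = C.conj().T @ dS @ C
    denom = E[None, :] - E[:, None]        # E_m - E_n
    np.fill_diagonal(denom, np.inf)        # avoid 0/0 on the diagonal
    return (Hbar - Sbar * E[None, :]) / denom
\end{verbatim}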
There is some freedom to choose ${\bf D}_{nn}$.
However, since ${\bf D}_{nn}= C_n^\dagger ({\bf k}) S({\bf k})\partial_{\bf k} C_n({\bf k})$,
it must satisfy the following constraint,
\begin{equation}
{\bf D}_{nn}^\dagger + {\bf D}_{nn}= -C_n^\dagger ({\bf k})\partial_{\bf k} S({\bf k})\, C_n({\bf k}) \, .
\label{eq:Dnn}
\end{equation}
For the orthogonal bases, one can simply take ${\bf{D}}_{nn}=0$ \cite{Wang2006}.
However, this choice is generally not feasible for the non-orthogonal bases.
Instead, we can use the parallel transport gauge,
$\mathbf{A}_{nn}=i\left\langle u_{n}| \partial_{\bf k} u_{n}\right\rangle=0$, i.e.,
${\bf D}_{nn}=i{\bf \bar{A}}^\dagger_{nn}$.
Using the relation $\mathbf{A}_{mn}=i{\bf D}_{mn}+{\bf \bar{A}}^\dagger_{mn}$,
and the relations in Eqs.~(\ref{eq:relation1}) and (\ref{eq:relation2}) for $m$=$n$,
we can easily prove that the parallel transport gauge automatically satisfies
the constraint of Eq.~(\ref{eq:Dnn}). Note that the parallel transport gauge
defined here for the cell-periodic functions
is different from the ``parallel transport gauge''
in Ref.\cite{Wang2006} for the state vectors $||\phi_n \rangle\rangle $
in the ``tight-binding space'' of the MLWFs.
As shown in the next section, ${\bf D}_{nn}$ does not appear in the Berry curvature calculations, and therefore the
choice of a particular gauge does not change the Berry curvature, but
it may affect the Berry connection.
\subsection{Total Berry curvature}
The total Berry curvature is calculated as follows,
\begin{equation}
\Omega_{\alpha\beta}(\mathbf{k})= \sum_{n}f_n(\mathbf{k})\Omega_{nn,\alpha\beta}(\mathbf{k})\, ,
\end{equation}
where, $f_n$ is the Fermi occupation function.
To avoid the numerical instability caused
by the canceling contributions of large values
of $D_{nm}$ [see Eq.~(\ref{eq:d-matrix})],
which originate from the small energy splitting between
a pair of occupied bands $n$ and $m$,
we would like to
separate the summation between the occupied and unoccupied states,
similar to the MLWF interpolation method\cite{Wang2006}.
However, this is a little more tricky for the non-orthogonal bases.
We first rewrite Eq. (\ref{eq:origin_curv}) by replacing $D^\dagger_\alpha$ and $D^\dagger_\beta$
with $D_\alpha$ and $D_\beta$ using Eq.~(\ref{eq:relation2}),
\begin{eqnarray}\label{eq:modify_omega}
\Omega_{\alpha\beta}&=& \bar{\Omega}_{\alpha\beta} - \left[D_\alpha, \bar{A}^\dagger_\beta\right]
+\left[D_\beta,\bar{A}^\dagger_\alpha\right]\nonumber \\
&- &i\left[D_\alpha, D_\beta \right]
- \left(\bar{S}_\alpha\bar{A}^\dagger_\beta - \bar{S}_\beta\bar{A}^\dagger_\alpha\right)\, ,
\end{eqnarray}
where the first four terms closely resemble those of Eq.~(27) in Ref.\cite{Wang2006}.
One can prove that the Berry curvature defined above is gauge invariant (see Appendix A3).
In this form, the contributions from the $D$ matrices are exactly canceled for a pair of occupied
states $n$, $m$. The last term is due to the non-orthogonality of the NAO bases.
We can therefore calculate the total Berry curvature $\Omega_{\alpha\beta}$ as~\cite{Wang2006},
\begin{eqnarray}\label{eq:total_berrycurvature}
\Omega_{\alpha\beta}(\mathbf{k})
&=& \sum_{n}f_n\bar{\Omega}_{nn,\alpha\beta} \nonumber
+ \sum_{n,m}(f_m-f_n) [ i D_{nm,\alpha}D_{mn,\beta} \nonumber \\
&+& D_{nm,\alpha}(\bar{A}^\dagger)_{mn,\beta} - D_{nm,\beta}(\bar{A}^\dagger)_{mn,\alpha} ] \nonumber \\
&-& \sum_{n,m}f_n\left[ \bar{S}_{nm,\alpha}(\bar{A}^\dagger)_{mn,\beta}
- \bar{S}_{nm,\beta}(\bar{A}^\dagger)_{mn,\alpha} \right] \label{Omega_eq}
\end{eqnarray}
We immediately see that for the orthogonal bases, the last line in the above equation vanishes,
and the total Berry curvature reduces to Eq.~(32) in Ref.\cite{Wang2006}.
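Given the matrices at a single $k$ point in the band basis, Eq.~(\ref{Omega_eq}) reduces to a handful of matrix operations; the following NumPy sketch (ours, schematic) mirrors the three groups of terms, with \texttt{Ad\_*} denoting $(\bar{A}^\dagger)$ and \texttt{S\_*} denoting $\bar{S}$:
\begin{verbatim}
import numpy as np

def total_berry_curvature(f, Om_bar, D_a, D_b, Ad_a, Ad_b, S_a, S_b):
    # Note: sum_{n,m} M_{nm} N_{mn} = np.sum(M * N.T) (plain transpose).
    t1 = np.sum(f * np.diag(Om_bar)).real
    df = f[None, :] - f[:, None]           # (f_m - f_n) at index [n, m]
    t2 = np.sum(df * (1j * D_a * D_b.T
                      + D_a * Ad_b.T - D_b * Ad_a.T)).real
    t3 = -np.sum(f[:, None] * (S_a * Ad_b.T - S_b * Ad_a.T)).real
    return t1 + t2 + t3
\end{verbatim}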
\subsection{Comparison with the naive Kubo formula}
The Berry curvatures are often calculated via the naive Kubo formula~\cite{Gradhand2012,Lee2018},
\begin{equation}
\label{kuob_eq}
\Omega_{\alpha\beta}^{{\rm kubo}}(\mathbf{k}) = -2\,{\rm Im} \sum_{n}^{occ}\sum_{m}^{unocc}\frac{\upsilon_{nm,\alpha}\upsilon_{mn,\beta}}{(E_{m\mathbf{k}}-E_{n\mathbf{k}})^2}\,,
\end{equation}
where $\upsilon_{nm,\alpha}$ is the velocity matrix.
According to Ref.~\cite{Yates2007,Blount1962305}, we have
\begin{equation}
\upsilon_{nm,\alpha}(\mathbf{k})
= \left(\partial_\alpha E_{n\mathbf{k}}\right) \delta_{nm}
- i\left(E_{m\mathbf{k}}-E_{n\mathbf{k}}\right)A_{nm,\alpha}\, ,
\end{equation}
where the Berry connection $A_{nm,\alpha}=iD_{nm,\alpha}+ (\bar{A}^\dagger)_{nm,\alpha}$. One easily obtains
the velocity operator matrix for $m \neq n$,
\begin{equation}
\upsilon_{nm,\alpha}(\mathbf{k})= \bar{H}_{nm,\alpha} - E_{n\mathbf{k}}\bar{S}_{nm,\alpha} + i(E_{n\mathbf{k}}-E_{m\mathbf{k}})\bar{A}_{nm,\alpha} \,,
\end{equation}
which is identical to that derived in Ref.~\cite{Lee2018}
via a different approach.
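A hedged Python sketch of this comparison is shown below: the velocity matrix is assembled from the band energies and the $\bar{H}$, $\bar{S}$, $\bar{A}$ derivative matrices, and the naive Kubo sum is evaluated assuming zero-temperature occupations (the small regularizer \texttt{eta} is a purely numerical safeguard, not part of the formula):
\begin{verbatim}
import numpy as np

def velocity_matrix(E, Hbar_a, Sbar_a, Abar_a, dE_a=None):
    """v_{nm,alpha} in the band basis: off-diagonal part
    Hbar - E_n Sbar + i (E_n - E_m) Abar; the diagonal part is
    the band velocity dE_n/dk_alpha, if supplied."""
    v = (Hbar_a - E[:, None] * Sbar_a
         + 1j * (E[:, None] - E[None, :]) * Abar_a)
    if dE_a is not None:
        np.fill_diagonal(v, dE_a)
    return v

def kubo_curvature(E, f, v_a, v_b, eta=1e-12):
    """Naive Kubo Berry curvature at one k-point."""
    occ = f > 0.5                      # zero-temperature occupations
    num = v_a[occ][:, ~occ] * v_b[~occ][:, occ].T
    den = (E[~occ][None, :] - E[occ][:, None])**2 + eta
    return -2.0 * np.imag(np.sum(num / den))
\end{verbatim}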
By comparing Eq.~(\ref{Omega_eq}) and Eq.~(\ref{kuob_eq}), one finds that
the full Berry curvature actually includes some correction terms to the naive Kubo formula, i.e.,
$\Omega_{\alpha\beta}=\Omega_{\alpha\beta}^{{\rm kubo}}+\Delta\Omega_{\alpha\beta}$,
and,
\begin{eqnarray}
\Delta\Omega_{\alpha\beta}
& = &\sum_{n}^{occ}\bar{\Omega}_{nn,\alpha\beta} - i\sum_{n}^{occ}\sum_{m}^{all} [\bar{A}_{nm,\alpha} (\bar{A}^{\dagger})_{mn,\beta} \nonumber \\
&-& \bar{A}_{nm,\beta} (\bar{A}^\dagger)_{mn,\alpha} ]\, ,
\end{eqnarray}
where $\bar{\Omega}$ is defined in Eq.~(\ref{eq:omega_bar}).
These additional terms are also present in the MLWF bases~\cite{Wang2006}.
These correction terms come from the incompleteness of
the tight-binding bases to the original Hilbert space~\cite{Graf1995,Boykin1995}.
Even though the contributions of $\Delta\Omega_{\alpha\beta}$ are often very small, and sometimes negligible,
it is still useful to have an exact formula to compare with.
This will be further addressed in the following sections.
\section{Results and discussion}
\label{sec:results}
We take BaTiO$_3$ and ferromagnetic Fe as examples to
benchmark the calculation of the Berry curvature using NAOs.
These choices represent two typical systems with nonzero Berry curvature, which break either
the inversion symmetry (BaTiO$_3$) or the time-reversal symmetry (Fe). While SOC is not needed for BaTiO$_3$
to obtain a nonzero Berry curvature, it is essential for Fe, as is well explained
in Ref.~\cite{vanderbilt_2018}.
\subsection{Computational details}
We first perform self-consistent density functional theory calculations implemented
in the Atomic-orbital Based Ab-initio Computation at UStc (ABACUS) code \cite{Chen2010,LI2016503}.
The generalized gradient approximation in the Perdew-Burke-Ernzerhof (PBE) \cite{Perdew1996} form is adopted. We use the optimized norm-conserving Vanderbilt \cite{Hamann2013} multi-projector SG15 pseudopotentials \cite{Schlipf2015,Scherpelz2016}.
For BaTiO$_3$, the 5s$^2$5p$^6$5d$^1$6s$^1$ electrons for Ba, the 3s$^2$3p$^6$4s$^2$3d$^2$ electrons for Ti,
and the 2s$^2$2p$^4$ electrons for O are treated as valence electrons.
The cutoff energy for the wave function is set to 60 Ry. A $4\times4\times4$ $k$-mesh is used in the self-consistent calculations. The NAO bases for Ba, Ti, O are 4s2p2d, 4s2p2d1f, 2s2p1d, respectively.
For bcc Fe, the 3s$^2$3p$^6$4s$^2$3d$^6$ electrons are included self-consistently. Spin-orbit interactions are turned on. The cutoff energy for the wave function is set to 120 Ry. A 16$\times$16$\times$16 $k$-mesh is used in the self-consistent calculations. The NAO basis for Fe is 4s2p2d1f.
After the self-consistent calculations, the tight-binding Hamiltonian $H({\bf R})$
and overlap matrices $S({\bf R})$ in the NAO bases
are directly output and used for the Berry curvature calculations.
\subsection{Berry curvature of BaTiO$_3$}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{bandstructure.pdf}
\caption{The band structures of (a) BaTiO$_3$, and (b) bcc Fe. The red solid lines are calculated by the original bases, whereas the black dashed lines are the results of the reduced basis sets. The Fermi levels are set to $E_F$=0, and the green lines indicate the energy windows for the construction of the reduced NAO bases.}
\label{fig:bands}
\end{figure}
As the first example, we calculate the Berry curvature of rhombohedral BaTiO$_3$ with space group $R3m$ (No.~160, Rhombohedral axes).
The lattice constant $a$ = 4.081 \AA, and $\alpha$ = 89.66$^\circ$ are used. The Wyckoff positions are given in Table \ref{tab:structure_bto}. BaTiO$_3$ is a ferroelectric insulator without inversion symmetry.
The band structures of BaTiO$_3$ near the Fermi level $E_F$=0 are shown in red solid lines in Fig.~\ref{fig:bands}(a).
\begin{table}
\caption{The Wyckoff positions of BaTiO$_3$.}
\centering
\begin{tabular}{c c c c c}
\hline
& $x$ & $y$ & $z$ & Wyckoff position \\
\hline
\hline
Ba & 0.002 & 0.002 & 0.002 & 1a \\
Ti & 0.516 & 0.516 & 0.516 & 1a \\
O & 0.974 & 0.485 & 0.485 & 3b \\
\hline
\hline
\end{tabular}
\label{tab:structure_bto}
\end{table}
Figure \ref{fig:fig2}(a)-(c) depict the Berry curvatures of BaTiO$_3$ along the $a$, $b$ and $c$-axes respectively, where
\begin{equation}
\Omega_{\gamma}(\mathbf{k})=\epsilon_{\alpha \beta \gamma} \Omega_{\alpha \beta}(\mathbf{k}) \, .
\end{equation}
The Berry curvatures calculated by Eq.~(\ref{Omega_eq}) are shown as the black dashed lines. We compare the results to those calculated by the finite-difference (FD) method via Eq.~(\ref{eq:FD}), which are shown as the solid red lines.
As we see, the Berry curvatures calculated by Eq.~(\ref{Omega_eq}) are in excellent agreement with the FD method.
Figure~\ref{fig:fig2}(d)-(f) show the differences $\Delta\Omega_{\alpha\beta}$ between this work and the Kubo formula for BaTiO$_3$
along the $a$, $b$ and $c$-axes respectively, in black solid lines. As seen from the figure, $\Delta\Omega_{\alpha\beta}$ is extremely small, less than 1\% of the total Berry curvature. These results suggest that the NAO bases used in the calculations are rather complete for this problem.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{batio3-curv.pdf}
\caption{ (Upper panels) The Berry curvatures, (a) $\Omega_x$, (b) $\Omega_y$ and (c) $\Omega_z$ of BaTiO$_3$ along the high symmetry $k$ points.
(Lower panels) The corrections to the Kubo formula, (d) $\Delta \Omega_x$, (e) $\Delta \Omega_y$ and (f) $\Delta \Omega_z$ along the high symmetry $k$ points. The red solid lines are calculated by FD, and the black dotted lines are results of the original bases, whereas the green dashed lines are the results of the reduced bases.
}
\label{fig:fig2}
\end{figure*}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\textwidth]{fe-curv.pdf}
\caption{(a) Berry curvature $\Omega_z$ of bcc Fe along the high symmetry $k$ points. (b) The corrections $\Delta \Omega_z$ to the Kubo formula. The solid red line is calculated by FD, and the black dotted lines are results of the original bases, whereas the green dashed lines are the results of the reduced bases. }
\label{fig:fig3}
\end{figure}
\subsection{AHC of Fe}
In this section, we calculate the Berry curvature and the AHC for bcc Fe.
The lattice constant of bcc Fe is taken as $a$ = 2.870 \AA.
The band structure of Fe is shown in Fig.~\ref{fig:bands}(b) in red solid lines around the Fermi level $E_F$=0.
The Berry curvature of bcc Fe along the $c$-axis is given in Fig.~\ref{fig:fig3}(a). The black dotted line is the Berry curvature calculated via Eq.~(\ref{eq:total_berrycurvature}), which is in excellent agreement with the Berry curvature calculated via the FD method, shown
in the red solid line. For most $k$ points, the Berry curvature is very small except at a few $k$ points, which have huge
Berry curvature, due to the spin-orbit-split avoided crossings near the Fermi level~\cite{Yao2004,Wang2006}.
We then calculate the dc anomalous Hall conductivity (AHC) \cite{Sundaram1999,Adams1959},
\begin{equation}
\sigma_{xy} = -\frac{e^2}{\hbar}\sum_{n}\int_{BZ}\frac{d{\bf k}}{(2\pi)^3}f_n({\bf k})\Omega_{n,z}({\bf k})\, .
\end{equation}
To calculate the AHC of bcc Fe, a 300$\times$300$\times$300 $k$-mesh is used and if the Berry curvature of a certain $k$ point is greater than 100 \AA$^{2}$, the Berry curvature is recalculated at a refined 7$\times$7$\times$7 submesh around the
$k$ point, following the scheme of Refs.~\cite{Yao2004,Wang2006}. The calculated AHC for Fe is 738 ($\Omega$ cm)$^{-1}$.
This result is in good agreement with 751 ($\Omega$ cm)$^{-1}$, obtained
from full-potential all-electron calculations by Yao {\it et al.}~\cite{Yao2004}, which is also close
to 756 ($\Omega$ cm)$^{-1}$ obtained from norm-conserving pseudopotential
calculations via Wannier interpolation techniques~\cite{Wang2006}.
The Fe pseudopotential in Ref.~\cite{Wang2006} is specially optimized to reproduce the all-electron result of AHC,
and the difference between this work and Ref.~\cite{Wang2006}
might come from the different pseudopotentials used in the calculations.
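A minimal sketch of this adaptive Brillouin-zone integration is given below (the function interface and the simple sub-mesh average are illustrative assumptions; the prefactor converting the BZ average into $\sigma_{xy}$ is left to the caller):
\begin{verbatim}
import numpy as np

def ahc_bz_average(omega_z, kgrid=(300, 300, 300),
                   threshold=100.0, sub=(7, 7, 7)):
    """Adaptive BZ average of the occupied Berry curvature.
    omega_z(k_frac) returns sum_n f_n Omega_{n,z} at a fractional
    k-point; points above `threshold` (in Angstrom^2) are replaced
    by the average over a sub-mesh around them."""
    nx, ny, nz = kgrid
    offs = [(np.arange(s) + 0.5) / s - 0.5 for s in sub]
    total = 0.0
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                kf = np.array([i / nx, j / ny, k / nz])
                w = omega_z(kf)
                if abs(w) > threshold:     # refine this k-point
                    vals = [omega_z(kf + np.array([ox / nx,
                                                   oy / ny,
                                                   oz / nz]))
                            for ox in offs[0]
                            for oy in offs[1]
                            for oz in offs[2]]
                    w = float(np.mean(vals))
                total += w
    # multiply by -e^2 / (hbar * V_cell), in consistent units,
    # to obtain sigma_xy
    return total / (nx * ny * nz)
\end{verbatim}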
\section{Berry curvatures calculated with reduced basis sets}
\label{sec:reduce_NAO}
In the previous section, we showed that the Berry curvatures calculated by Eq.~(\ref{eq:total_berrycurvature}) are in excellent agreement with FD results.
However, the NAO basis sets have many more orbitals than the Wannier bases, and are therefore computationally more expensive. We would like to reduce the basis size to accelerate the calculations while maintaining the accuracy.
The idea is to use reduced basis sets that reproduce the band structures in a smaller energy window. This is done via a revised NAO-based band interpolation technique \cite{Chen2011}.
To reduce the number of NAOs, we reconstruct a new set of NAOs as linear combinations of the original on-site orbitals, i.e.,
\begin{equation}
|\tilde{\phi}_{I,\mu}\rangle = \sum_\nu^{N_I} \mathcal{U}_{I,\nu\mu}|\phi_{I,\nu}\rangle\, ,
\label{eq:redcued_NAOs}
\end{equation}
where $|\phi_{I,\nu}\rangle$ is the $\nu$-th orbital of the $I$-th atom.
One of the problems of the maximally localized Wannier functions is that their centers are not necessarily on atom positions or other high-symmetry points\cite{Sakuma2013}. In contrast, the reduced NAOs $|\tilde{\phi}_{I,\mu} \rangle$ are linear combinations of on-site NAOs, so they are still atom centered and strictly localized.
The number $\tilde{N}_I$ of the reduced NAO bases $|\tilde{\phi}_{I,\mu}\rangle$, is less than the number $N_I$ of $|\phi_{I,\nu}\rangle$. $\mathcal{U}_{I}$ is a real orthogonal matrix to keep the reduced NAOs real.
We obtain the reduced NAOs by minimizing the {\it spillage} between the wave functions calculated by the original NAOs and the reduced basis set on a coarse $k$-mesh in a chosen energy window. Details of this process are given in Appendix~\ref{sec:reduce_bases}.
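To make the construction concrete, the following Python sketch shows one simple, assumption-laden variant for a single atom $I$: assuming orthonormal on-site orbitals and collecting the coefficients of the target wave functions on that atom, the reduced orbitals are taken as the leading eigenvectors of the accumulated real weight matrix, which keeps $\mathcal{U}_I$ real orthogonal (this is an illustration of the spillage idea, not the exact procedure of the Appendix):
\begin{verbatim}
import numpy as np

def reduced_onsite_orbitals(C_blocks, n_keep):
    """Spillage-style reduction sketch for one atom I.
    C_blocks: list over (k, band) of length-N_I coefficient vectors
    of the target wave functions on the on-site orbitals of atom I.
    Returns a real orthogonal N_I x n_keep matrix U whose columns
    define |phi~_mu> = sum_nu U[nu, mu] |phi_nu>."""
    N = C_blocks[0].size
    P = np.zeros((N, N))
    for c in C_blocks:
        P += np.real(np.outer(c, c.conj()))   # accumulated weight
    w, V = np.linalg.eigh(P)                  # ascending eigenvalues
    return V[:, ::-1][:, :n_keep]             # largest-weight columns
\end{verbatim}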
\subsection{BaTiO$_3$}
Figure \ref{fig:bands}(a) compares the band structure of BaTiO$_3$ calculated by the original bases (solid red lines) and the reduced bases (black dashed lines).
The reduced basis set is optimized by fitting the wave functions in the energy window of $-6$ to 4.8 eV (represented by the solid green lines), which covers the lowest three bands above the Fermi level.
A uniform 8$\times$8$\times$8 $k$-mesh is used to generate the reduced basis set.
The reduced basis set has 38 orbitals, compared to 86 orbitals in the original basis set.
As we see from Fig.~\ref{fig:bands}(a), the band structures calculated by the reduced bases
are in excellent agreement with the original band structures within the energy window.
The bands slightly above the energy window also agree very well.
For the bands far above the energy window, the agreement becomes worse as expected.
The Berry curvatures of BaTiO$_3$ calculated by the reduced bases are shown in green dashed lines in Fig.~\ref{fig:fig2}(a)-(c), compared to those calculated by the original bases. The overall agreement is rather good, with only small differences at some $k$-points.
We now check $\Delta \Omega$ for the reduced bases, depicted in Fig.~\ref{fig:fig2}(d)-(f) as green dashed lines. We see that for the reduced basis, $\Delta \Omega$ is significantly larger than that of the full basis calculations, which suggests that if a reduced basis is used, the corrections to the Kubo formula may not be ignored in some cases.
One may expect that if even smaller non-orthogonal Wannier bases are used, the corrections may become even larger and cannot be ignored.
\subsection{bcc Fe}
The reduced basis set for bcc Fe in Fig.~\ref{fig:bands}(b) is optimized in the energy window of $-9$ to 5 eV, on a uniform grid of 8$\times$8$\times$8 $k$-points. We obtain 28 orbitals (including spin) from the original 54 orbitals. The energy bands obtained by the reduced basis set are almost identical to the original bands.
In fact, the agreement is still very good even outside the energy window, for bands below 14 eV.
The Berry curvature $\Omega_z$ calculated by the reduced bases are also shown in Fig.~\ref{fig:fig3}(a) in the green dashed line compared to the original result. There are only very small differences between the results obtained by the original and reduced bases.
The correction to the naive Kubo formula $\Delta \Omega_z$ for the reduced bases is shown as the green dashed line
in Fig.~\ref{fig:fig3}(b). The correction is much larger than that for the original bases, shown as the black dotted line. However, the correction is still negligibly small compared to the total Berry curvature, because the dominant contribution to the Berry curvature comes from the $D$-$D$ terms~\cite{Wang2006}.
The AHC of bcc Fe calculated by the reduced basis set is 734 ($\Omega$ cm)$^{-1}$, which is in good agreement with that calculated by the full basis set 738 ($\Omega$ cm)$^{-1}$.
\section{Summary}
\label{sec:summary}
We derive the formula to calculate the Berry curvature using non-orthogonal NAOs. We find that there are some additional correction terms besides the usual Kubo formula. We calculate the Berry curvatures of rhombohedral BaTiO$_3$ and bcc Fe, as well as the AHC for Fe. The results are in excellent agreement with the finite-difference method.
We develop a method that can significantly reduce the number of orbitals in the NAO bases but can still maintain the accuracy of the calculations. We also compare the Berry curvature and AHC calculated via our formula and the Kubo formula.
We find that for the original bases the differences between the two methods are negligibly small, but for the reduced basis sets the correction terms become larger. The methods developed in this work can be applied to non-orthogonal generalized Wannier functions.
\section{Introduction}
Observational studies are commonly adopted to evaluate the impact of policy changes. One limitation in using a pre-and-post design is the need to control for the underlying time trend\cite{bib1}. To overcome the bias arising from an underlying time trend, Ashenfelter and Card in 1985 proposed the difference-in-differences (DID) study design\cite{bib2}.
Since the initial work by Ashenfelter and Card, the use of the DID study design has become widespread\cite{bib3,bib4, bib5, bib6, bib7, bib8}. For empirical research, the DID design was supposed to be unbiased and powerful in distinguishing between effective and ineffective policies. However, as pointed out in the literature, the DID study design also suffers from an over-rejection bias in estimation, which can be very serious in some circumstances\cite{bib7, bib8}.
Bertrand et al.\cite{bib8} were among the surprisingly few who scrutinized the discriminatory ability of the DID design. Imposing effective and ineffective interventions (or `laws' in their paper) by simulation on weekly earnings data from the Current Population Survey (CPS), they found that estimation from a DID analysis was biased in the size of a significance test. They then proposed methods using data collapsing, block bootstrapping, and clustered standard error estimation. They believed that it was the correlation among observations that leads to the over-rejection rates. Following their work, researchers have developed different strategies to reduce estimation bias in the standard error (SE). Although much effort has been made and some progress has been achieved over the last two decades, the performance of those strategies is still disappointing\cite{bib7}. We will provide more details in subsection 2.3.
On the other hand, the bias issue in the point estimate of an intervention effect in a DID analysis has received scant attention in the literature. In practice, it is implausible to select reference groups that ``perfectly'' match the intervention groups in their underlying time trends. If the parallel trends assumption fails, the point estimate from a DID analysis will be biased. Also, the uncontrolled part of the time trends will enter into the model errors, leading to a potentially higher level of error correlation. We believe that the violation of the parallel trends assumption is the dominant source of systematic bias. We argue that focusing on the inference bias and ignoring the estimation bias could be misleading. In the literature, we have not seen any promising solution to both the bias in the point estimate and the bias in the SE estimate. We must seek an appropriate solution to the systematic bias challenge by scrutinizing the overall bias sources and their impacts.
In this paper, we investigate potential sources of systematic bias in the DID design. We propose an appropriate solution to the challenges of both the estimation bias and the inference bias. It provides unbiased estimates and unbiased inference for intervention effects under linear models. Also, it can be naturally generalized to other potential models used for DID analyses while retaining these merits.
The remainder of this paper proceeds as follows. Section 2 explores potential sources of systematic bias in the DID design and analyzes their impact on statistical estimation and inference. Section 3 proposes a detrending strategy to handle the estimation bias and then a permutational detrending (PD) strategy to eliminate the inference bias. We establish the asymptotic unbiasedness of the proposed approaches. Section 4 illustrates their statistical properties using simulation experiments. We also demonstrate their practical utility by applying them to the EASE data and the CPS data. We discuss the strengths and limitations of the proposed approaches in section 5.
\section{Potential Sources of Bias}
\subsection{A brief on the DID design}
In a DID study, we measure an intervention effect by looking at the difference between the pre-and-post effects in the intervention and reference groups\cite{bib2}. Formally, let $Y_{igt}$ be the outcome of interest for individual $i$ in group $g$ at time $t$. We try to see the effect of an intervention $I_{gt}$ (a dummy variable indicating whether the intervention affected group $g$ at time $t$). Let's consider, for simplicity of illustration, a linear regression model
\begin{equation}
Y_{igt}=\alpha_0+\alpha A_g+\beta B_t+\gamma I_{gt}+ \varepsilon_{igt}
\end{equation}
where the three main regressors represent fixed effects of group $g$ ($A_g=1/0$ indicating whether the intervention affected group $g$), time $t$ ($B_t=1/0$ indicating whether the intervention was implemented at time $t$), and their interaction ($I_{gt}=A_g\times B_t$), respectively. $\gamma$ is the intervention effect of interest. The statuses of group $g$ and time $t$ can be multiple. For each individual $i$, there is a unique group status $g$ (either $g$ in intervention groups $\mathcal{I}$ or reference groups $\mathcal{R}$) but possibly multiple time points $t^i_1, \cdots, t^i_{n_i}$ in a single time period or multiple time periods.
Denoting by $\alpha_1=\alpha_0+\alpha$ and $\beta_1=\beta+\gamma$, model (1) for intervention groups can be written as \begin{equation}
Y_{igt}=\alpha_1+\beta_1 B_t+\varepsilon_{igt}, \quad g\in\mathcal{I}.
\end{equation}
Denoting by $\beta_0=\beta$, model (1) for reference groups can be written as \begin{equation}
Y_{igt}=\alpha_0+\beta_0 B_t+\varepsilon_{igt}, \quad g\in\mathcal{R}.
\end{equation}
The intervention effect $\gamma$ is the difference between $\beta_1$ and $\beta_0$.
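As a concrete illustration, the canonical DID point estimate of $\gamma$ in model (1) can be obtained by ordinary least squares; a minimal Python sketch (the array names are illustrative assumptions) is:
\begin{verbatim}
import numpy as np

def did_estimate(y, group, post):
    """OLS estimate of gamma in model (1).
    y: outcomes Y_igt; group: A_g (1 = intervention, 0 = reference);
    post: B_t (1 = post-intervention observation)."""
    X = np.column_stack([np.ones_like(y), group, post, group * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[3]          # coefficient of the interaction I_gt
\end{verbatim}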
\subsection{Potential sources of systemic bias}
A basic assumption of the DID approach is the parallel trends assumption, which supposes that {\it in the absence of an intervention effect, outcomes of the intervention and reference groups would follow parallel paths over time}\cite{bib1}. Under this assumption, any temporal factors other than the policy are under control. It allows the DID design to account for unobserved temporal variables. A key to implementing the DID design is to find reference groups for which the parallel trends assumption holds. Ideally, the only difference between the intervention and the reference groups would be the exposure to the policy. However, such ``perfect'' reference groups may be difficult or even impossible to find, and violation of the parallel trends assumption could lead to estimation bias of the intervention effect.
In practice, temporal factors including long-term time trends, seasonal time trends, and short-term shocks may confound the estimate of the intervention effect because of their potential correlation with the exposure $B_t$ (which is also a time-varying variable). Time-invariant factors, such as individual characteristics (gender, race, socio-economic status, etc.), could also introduce estimation bias. This is because observations are not from the same group of individuals over time. These factors are constant for each individual, but they are time-varying with the changing cohort of individuals under observation, and hence could be correlated with $B_t$ as well. If the parallel trends assumption fails, a lack of sufficient adjustment could result in bias in the point estimate of the effect.
Potential systematic bias could also come from the estimation equation (or likelihood function) used, leading to a biased SE estimate and a biased significance test. This situation is similar to that of overlap bias in a matched case-control study\cite{bib9} or a case-crossover study design\cite{bib10, bib11}. It is generally recognized that, with structured data, a modeling method designed only for independent observations will provide a biased estimate of the SE and a biased significance test. Clustering within groups and repeated measures of individuals could lead to correlation among observations. So can the uncontrolled part of the time trends if the parallel trends assumption fails.
\subsection{Limitation of bias reduction methods in the literature}
Much effort has been made in the literature to overcome the challenges of bias in a DID study. The main focus was on reducing bias in SE estimates arising from within-group correlation in outcomes, aiming at more accurate size and possibly higher power in significance tests. As reviewed by Rokicki et al.\cite{bib7}, the approaches used to account for within-group correlation in outcomes can be divided into three broad categories: (1) post hoc adjustments such as CSE (Clustered Standard Errors)\cite{bib12}, bootstrapping\cite{bib13, bib14, bib15}, or permutation tests\cite{bib16, bib17, bib18, bib19, bib20}; (2) explicitly modeling the within-cluster error correlation\cite{bib21, bib22, bib23, bib24} such as GEE, random-effect models, and feasible generalized least squares; and (3) aggregating the data to the group level, thereby eliminating the within-group correlation\cite{bib8, bib25}. For more details of these approaches, we refer readers to the review paper of Rokicki et al.\cite{bib7} and related references therein.
The idea of aggregating data to the group level was proposed by Bertrand et al.\cite{bib8} for repeated cross-sectional data. It is most commonly used for economic outcomes such as income or hours worked. However, this idea may not be suitable for generalization to unbalanced data for evaluating a group-level policy on individual-level outcomes. Also, it fares poorly as the number of groups gets small\cite{bib7}. Additionally, data aggregation unavoidably leads to information loss and results in lower efficiency of inference.
In practice, it is difficult to look into the covariance structure of errors and correctly specify it in a model (like GEE or a random-effect model). We need much more information to estimate the covariance matrix because of its high dimension. Additionally, considering a possible estimation bias in the point estimate from confounding, all the strategies that focus on explicitly modeling the within-cluster error correlation could be less efficient. Their performance could get even worse as the sample size goes down.
Some of the post hoc adjustments, such as the approximate permutation procedure\cite{bib20}, could be promising for handling within-group correlation. However, they are sensitive to the parallel trends assumption. The reason is simple and direct: even though one can reveal the null distribution of an intervention effect, the estimation bias in the point estimate also leads to biased inference. We will see in section 4 that a `placebo' intervention also leads to over-rejection when the parallel trends assumption fails.
Although some progress has been made on controlling the bias in SE estimation, little attention has been paid to the bias in point estimates. This is the main limitation of the approaches mentioned above in this subsection.
When the parallel trends assumption fails, some authors resort to a polynomial trend-augmented version of the original DID model in their application studies\cite{bib26}. Vandenberghe \cite{bib26} proposed a new method using pre-intervention observations to capture linear or non-linear trend differences. Using a Monte Carlo simulation experiment, Ryan et al. \cite{bib27} tested the performance of several estimators when the parallel trends assumption fails. These estimators were the original DID, DID with propensity score matching, single-group interrupted time-series analysis, and multi-group interrupted time-series analysis. Leavitt \cite{bib28} introduced new methods to handle both the estimation bias and the inference bias simultaneously. The main idea of the methods was to predict the counterfactual outcomes in the absence of intervention. The author provided two unbiased estimators (under conditions) and developed an empirical Bayesian procedure for the inference. However, the proposed methods were limited to the canonical DID linear model without adjustment for covariates. How to generalize them to a general model setting remains an open question.
To our knowledge, none of the methods in the literature (1) copes with both the estimation bias and the inference bias simultaneously, and (2) can be generalized to all the potential models used for a DID analysis. Applications need an ideal solution that eliminates systematic bias for all potential models adopted for a DID analysis, including the generalized linear models. The method we present in the next section is designed to meet this need.
\section{A Permutational Detrending Strategy}
\subsection{Detrending DID analysis}
Without the parallel trends assumption, we have to consider adjustments for various covariates. Under this circumstance, we can rewrite model (1) as
\begin{equation}
Y_{igt}=\alpha_0+\alpha A_g+\beta B_t+\gamma I_{gt}+ \lambda_g X_{igt}+\varepsilon_{igt},
\end{equation}
where $\lambda_g X_{igt}$ represents the effect from all other potential covariates, including individual characteristics and temporal impact factors other than the intervention. Using model (4) in practice could be very difficult because of the complexity of a large number of potential covariates, both observed and unobserved.
By separating the time-trend effect from the others, the following alternative model could be more practically useful:
\begin{equation}
Y_{igt}=\alpha_0+\alpha A_g+\beta B_t+\gamma I_{gt} + \lambda_g (t)+\mu Z_{i}+ \varepsilon_{igt},
\end{equation}
where $\lambda_g(t)$ represents the underlying time trend of group $g$ -- reflecting the impact from temporal factors, and $\mu Z_{i}$ represents the effect from the characteristics of individual $i$. It is obvious that $Y_{igt}-\lambda_g (t)$ can be viewed as a detrended response generated from a traditional DID model.
We suggest using a simplified version of model (5)
\begin{equation}
Y_{igt}=\alpha_0+\alpha A_g+\beta B_t+\gamma I_{gt}+ \lambda_g t+\mu Z_{i}+\varepsilon_{igt}.
\end{equation}
where $\lambda_g t$ represents the linear part of the underlying time trend of group $g$. In practice, underlying time trends could be non-linear. Here we consider only their linear components for simplicity.\footnote{One can also consider full detrending (of both the linear and non-linear time trends) by using polynomial or cubic spline techniques.} In this case, their non-linear parts will enter the model errors.
Model (6) is much simpler than model (4) for both theoretical and practical studies. It suggests considering a linear detrending of $Y_{igt}$ (either by subtracting the underlying time trends $\lambda_g t$ from $Y_{igt}$, or equivalently by model adjustment for them). We call it the detrending DID analysis method. We will see in our simulation study that the detrending technique functions as the core to eliminate systematic bias in a DID study.
If the parallel trends assumption fails, we can overcome the challenge of bias in point estimates by directly fitting the data using the detrending DID model (6). Suppose model (6) holds. Under some regularity conditions, for example,
\begin{equation}
E\varepsilon_{igt}=0 \mbox{ and } \varepsilon_{igt} \mbox{ is independent with the regressors} ,
\end{equation}
the detrending DID model (6) provides an unbiased point estimate for the intervention effect $\gamma$.\footnote{Estimates from logistic regression or other generalized models can be asymptotically unbiased under some regularity conditions.}
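A minimal Python sketch of fitting the detrending DID model (6) by OLS is given below; it assumes two arms with one linear trend per arm ($g\in\{\mathcal{I},\mathcal{R}\}$) and an optional covariate matrix $Z$:
\begin{verbatim}
import numpy as np

def detrending_did(y, group, post, t, Z=None):
    """OLS fit of model (6) with arm-specific linear time trends.
    y: outcomes; group: A_g; post: B_t; t: observing time;
    Z: optional (n x p) matrix of individual covariates."""
    cols = [np.ones_like(y), group, post, group * post,
            t * group, t * (1 - group)]   # lambda_g t for each arm
    if Z is not None:
        cols += [Z[:, j] for j in range(Z.shape[1])]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[3]          # gamma
\end{verbatim}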
\subsection{Permutational detrending DID analysis}
We now show another technique to eliminate inference bias in a DID analysis. Our strategy is simply to permute the individuals' records $(Y_{igt}, Z_i)$ in model (6). Because this technique combines both detrending and permutation, we call it permutational detrending DID analysis, or PD DID for short. We have the following algorithm for the proposed method.
\noindent\textbf{Algorithm: permutational detrending DID analysis}
\begin{enumerate}
\item[1.] Estimate the intervention effect $\hat{\gamma}$ (as point estimate) using model (6) and the original data for a DID study;
\item[2.] Permute randomly $\{(Y_{igt},Z_i),g\in\mathcal{I}\}$ and $\{(Y_{igt}, Z_i),g\in\mathcal{R}\}$ separately,\footnote{After permuting, matching between an observation and an observing time will be random.} and estimate the intervention effect $\hat{\gamma}^*$ using model (6) and the data after permuting;
\item[3.] Create the empirical distribution of estimation bias $\mathcal{D}=\{\hat{\gamma}^*_j|\quad j=1,\ldots, M\}$ by independently replicating the above experiment (step 2) $M$ times;
\item[4.] Calculate the mean $\bar{\gamma}^*=\sum_{j=1}^M \hat{\gamma}^*_j/M$, and the 95\% confidence interval $(\hat{L}^*,\hat{U}^*)$ of the empirical distribution;
\item[5.] Adjust $(\hat{L}^*,\hat{U}^*)$ to $(\hat{L},\hat{U})$ =$(\hat{L}^*+\hat{\gamma},\hat{U}^*+\hat{\gamma})$ (as 95\% confidence interval estimate);
\item[6.] The significance of the intervention effect (p-value) can be measured by the rank of $\hat{\gamma}$ in the empirical distribution $\mathcal{D}$.
\end{enumerate}
Building an empirical null distribution of $\hat{\gamma}$ is a crucial step in the PD DID approach. We suggest using a relatively large number of replicates (e.g., $M \ge 500$) in step 2. In the following simulation and application studies, we set $M=1,000$ to get a relatively reliable distribution and a relatively high resolution of the $p$ value.
If the model errors in (6) are correlated, then the traditional SE estimate for $\gamma$ is biased. In this case, the traditional statistical inference is inappropriate. Alternatively, we choose to compare $\hat{\gamma}$ with those $\hat{\gamma}^*$s in $\mathcal{D}$. We will prove that the PD DID analysis provides an unbiased significance test.
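The following Python sketch implements the algorithm above, reusing the \texttt{detrending\_did} function from the previous sketch (the two-sided $p$-value variant is an illustrative choice):
\begin{verbatim}
import numpy as np

def pd_did(y, group, post, t, Z=None, M=1000, seed=None):
    """Permutational detrending DID (sketch).  Records (Y, Z) are
    permuted within the intervention and reference arms separately,
    keeping the observing times fixed; the null distribution of
    gamma is built from M replicates."""
    rng = np.random.default_rng(seed)
    gamma_hat = detrending_did(y, group, post, t, Z)
    null = np.empty(M)
    for j in range(M):
        idx = np.arange(y.size)
        for arm in (0, 1):                 # permute within each arm
            sel = np.flatnonzero(group == arm)
            idx[sel] = rng.permutation(sel)
        Zp = None if Z is None else Z[idx]
        null[j] = detrending_did(y[idx], group, post, t, Zp)
    lo, hi = np.percentile(null, [2.5, 97.5])
    ci = (lo + gamma_hat, hi + gamma_hat)  # step 5: shift by gamma_hat
    p = (np.sum(np.abs(null) >= abs(gamma_hat)) + 1) / (M + 1)
    return gamma_hat, ci, p
\end{verbatim}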
\subsection{Asymptotic property of the PD DID Strategy}
Theoretically speaking, if model (6) and condition (7) hold, then the PD DID analysis provides unbiased point estimates, confidence interval estimates, and significance tests for $\gamma$, as the number of permutation replications $M\rightarrow\infty$. Our reasoning is as follows.
Under condition (7), model (6) provides unbiased estimates for all the parameters. So is the point estimate for $\gamma$ from the PD DID analysis. Because the confidence interval and the significance test for $\gamma$ depend on the estimated null distribution $\mathcal{D}$, we will focus on proving that the empirical distribution $\mathcal{D}$ is asymptotically unbiased as the number of permutation replications $M\rightarrow \infty$.
Let us start with a stronger null hypothesis: an individual's record $(Y_{igt}, Z_i)$ is independent of time $t$. It implies that $\gamma=0$, $\beta=0$ and $\lambda_g = 0$ in model (6). Under the stronger null hypothesis, all the $M$ permuted samples $\{(Y^{*}_{igt}, Z^{*}_i)\}_j$, $j=1,\ldots, M$, are independent and identically distributed replications of the original sample $\{(Y_{igt}, Z_i)\}$. It implies that the parameter estimates under the permuted samples $\{(\hat{\gamma}_j^*,\hat{\beta}_j^*,\hat{\lambda}_{gj}^*)$, $j=1,\ldots, M\}$ are also independent and identically distributed replications of the parameter estimates under the original sample $(\hat{\gamma}, \hat{\beta},\hat{\lambda}_{g})$. Therefore, the empirical distribution $\mathcal{D}$ approaches the true distribution of $\hat{\gamma}$ under the null hypothesis, as $M\rightarrow\infty$. It guarantees asymptotic unbiasedness of the confidence interval estimate and significance test.\footnote{This conclusion is not limited to a linear regression model. It holds for all models adopted for a DID analysis if and only if the point estimate is unbiased. If the point estimate is asymptotically unbiased as the sample size $N\rightarrow\infty$, then the confidence interval estimates and the significance test are asymptotically unbiased as both $N$ and $M\rightarrow\infty$.}
\section{Simulation and Empirical Studies}
\subsection{Simulation design}
The simulation experiments in this paper are designed for (a) bias check and size check under the null hypothesis and (b) bias check and power check under alternative hypotheses, for the original DID analysis, the detrending DID analysis, and the permutational detrending DID analysis.
Simulation data are generated from the following data generating process.
\begin{equation}
Y_{igt}=\gamma I_{gt}+\lambda_g t+\alpha_g+u_g+v_i+w_{it}
\end{equation}
with
$$u_g\sim N(0,\sigma_u^2); \quad v_i\sim N(0,\sigma_v^2);$$
$$w_{it}\sim \mbox{AR(1) with } N(0, \sigma_w^2) \mbox{ distribution}.$$
where the first three terms represent fixed effects. $\gamma$ is the true effect of the intervention. $\lambda_g t$ represents the underlying time trend in group $g$. And $\alpha_g$ is the mean effect of group $g$. The last three terms are random effects. $u_g$ represents the random effect at the group level. $v_i$ represents the random effect at the individual level, and $w_{it}$ represents the random effect at the observation level. Pairwise correlation among within-group individual effects is set at $\rho$ (i.e., $COR(v_i,v_j)=\rho$ for individuals $i$ and $j$ in group $g$). Observations from the same individual are viewed as repeated measures and generated by a first-order autoregressive (AR(1)) process with normal distributions and one autocorrelation parameter $\rho$. The AR(1) process allows observations from each individual to be serially correlated (i.e., $COR(w_{it}, w_{i,t-1})=\rho$). Via this data generating process, the observations within-group and within-individual will be correlated if $\rho > 0$, and are independent if $\rho = 0$.
In the following simulation experiments, we consider two intervention groups and two reference groups. Each of the four groups has $n=200$ individuals, and each individual has $1$ to $7$ observations (with $4$ observations on average for each individual). We set the duration of the study as one year (or $365$ days), the first half-year (182 days) as the pre-intervention period, and the second half-year (183 days) as the post-intervention period. We randomly select a visit/observing date for each of these observations ($Y_{igt}$, 3,200 in total) among the 365 days.
For simulations 1 and 2 in the next subsections, we set $\alpha_g= \pm 0.5$ for intervention/reference groups and $\sigma_u=0.1$ (for the group-level effect $u_g$), $\sigma_v=1$ (for the individual-level effect $v_i$), $\sigma_w=0.1$ (for the AR(1) process). We let $\gamma$, $\lambda_g$ and $\rho$ change to create different scenarios.\footnote{The setting of the parameters (the first setting) is quite empirical. To see the influence of a setting on the findings, we also try another setting (the second setting): set ($\alpha_g=\pm 5$, $\sigma_u=1$, $\sigma_v=1$, and $\sigma_w=1$) and let $\gamma$, $\lambda_g$ and $\rho$ change to create different scenarios. Corresponding results are reported in the supplementary Excel file ``S-Table 1". We get the same conclusions from the results, except for a relatively lower power of the significance test (because the signal-noise ratio is relatively lower under the second setting). It implies that the findings reported in the paper are not associated with the setting of the parameters.}
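A Python sketch of this data-generating process for one group is given below (the AR(1) initialization and the within-arm shared component used to induce the pairwise correlation $\rho$ are illustrative implementation choices):
\begin{verbatim}
import numpy as np

def simulate_group(n_ind, gamma, lam, alpha_g, treated,
                   sigma_u=0.1, sigma_v=1.0, sigma_w=0.1,
                   rho=0.5, T=365, seed=None):
    """Generate (y, A_g, B_t, t) rows from Eq. (8) for one group."""
    rng = np.random.default_rng(seed)
    u_g = rng.normal(0, sigma_u)
    shared = rng.normal()              # induces COR(v_i, v_j) = rho
    rows = []
    for i in range(n_ind):
        v_i = sigma_v * (np.sqrt(rho) * shared
                         + np.sqrt(1 - rho) * rng.normal())
        n_obs = rng.integers(1, 8)     # 1..7 visits, mean about 4
        days = np.sort(rng.choice(T, size=n_obs, replace=False))
        w = rng.normal(0, sigma_w)
        for t in days:
            w = rho * w + np.sqrt(1 - rho**2) * rng.normal(0, sigma_w)
            post = int(t >= T // 2)    # second half-year
            y = (gamma * treated * post + lam * t
                 + alpha_g + u_g + v_i + w)
            rows.append((y, treated, post, t))
    return np.array(rows)
\end{verbatim}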
\subsection{Bias check and size check under null}
We use simulations to check the bias in the point estimate and the size of the significance test for the three candidate analysis methods. \\
\noindent \textbf{Simulation 1.} Set the intervention effect $\gamma=0$, $\rho=0$, $0.5$ or $0.9$, and the underlying time-trend slope $\lambda_g=\pm l/365$ for intervention/reference groups, with the time-trend slope parameter $l$ taking a value from the list $\{-0.5, \ldots, -0.1, 0, 0.1, \ldots, 0.5\}$. We replicate 1,000 experiments for each setting of $(\gamma, \rho, l)$. In each experiment, we simulate data using the data generating process. Then the original DID analysis method and the proposed detrending DID analysis approaches are adopted to fit the data. The average value of the 1,000 point estimates and the rejection frequency (at the 0.05 significance level) in the 1,000 significance tests for the intervention effect $\gamma$ are reported in Table 1.
\begin{center}
\mbox{(Insert Table 1 near here)}
\end{center}
From the table, we see that, under the null hypothesis ($\gamma=0$) and also under the parallel trends assumption ($l$=0), the bias in the point estimate from the original DID analysis is very close to zero, and the size of the significance test is close to the nominal size (50 rejections in 1,000 replications). Unfortunately, the bias of the estimated effect also goes up (down) as the time-trend slope parameter $l$ goes up (down), leading to higher and higher rejection rates. It implies that the original DID analysis heavily depends upon the parallel trends assumption. In other words, violation of the parallel trends assumption could lead to unacceptable bias in both the point estimate and the significance test. We detected no significant difference between the results from the $\rho=0$, $\rho=0.5$ and $\rho=0.9$ scenarios. It suggests that the dominant source of bias in these simulated scenarios stems from the violation of the parallel trends assumption.
Just as we expected, the detrending DID analyses eliminate both the bias in the point estimate and the bias in the significance test in all the simulated scenarios under the null hypothesis. Comparing the performance of the original DID analysis with that of the detrending DID analysis (or the PD DID analysis), we conclude that it is the insufficient adjustment for the time trends that leads to a biased point estimate and a biased significance test. Under these simulated scenarios, we do not see any significant improvement from the PD DID analysis over the detrending DID analysis.
\subsection{Bias check and power check under alternatives}
Creating scenarios under alternative hypotheses, we do a bias check and a power check for the three candidate analysis methods.
\noindent \textbf{Simulation 2.} This simulation is similar to Simulation 1, except setting $\rho=0.5$, the intervention effect $\gamma$ taking a value from $\{0, 0.05, \ldots, 0.5\}$ and the time-trend slopes $\lambda_g=\pm l/365$ for intervention/reference groups with the slope parameter $l=-0.2/0/0.2$. At each setting of $(\gamma, \rho, l)$, the average value of the 1,000 point estimates and the rejection frequency (at the 0.05 significance level) in the 1,000 significance tests for the intervention effect $\gamma$ are depicted in Figure 1.
\begin{center}
\mbox{(Insert Figure 1 near here)}
\end{center}
The left column of Figure 1 explains the performance of the original DID analysis. It shows again the biased nature of the original DID design when the parallel trends assumption fails. From the upper-left panel of Figure 1, we see that the influence of the time trend on the point estimate is nearly constant. In detail, if the time-trend parameter $l$ is $0/0.2/-0.2$, then the estimation bias is also close to $0/0.2/-0.2$. We can find the same phenomenon in Simulation 1 (see Table 1). The bottom-left panel shows that the power of the original DID design is perfect when the parallel trends assumption holds and misleading when it fails.
In contrast to the original DID analysis, our detrending DID analysis approach tells a different story (the middle column of Figure 1). Whether or not the parallel trends assumption holds, the proposed detrending DID analysis provides an accurate point estimate of the intervention effect (the upper-middle panel of Figure 1). It implies that the proposed detrending approach provides an unbiased point estimate under alternatives. Also, the proposed detrending DID analysis provides an unbiased significance test for the intervention effect (bottom-middle panel of Figure 1). It is ``independent'' of the underlying time trends in both the intervention and reference groups. Again, the PD DID analysis performs similarly to the detrending DID analysis (right column of Figure 1).
Under the parallel trends assumption, the original DID analysis offers the best test power. However, it is sensitive to the underlying time trends. The two detrending DID approaches provide unbiased and robust results (whether or not the parallel trends assumption holds), but with relatively lower test power. Gaining unbiasedness using the detrending strategy is not free: we need data information to estimate the time trends in the intervention and reference groups.
\subsection{An empirical study on EASE data}
Using clinical data, we illustrate applying the proposed methods to avoid biased estimates in DID design studies. The data were from the Elder-Friendly Approaches to the Surgical Environment (EASE) study \cite{bib29}. Briefly, it was a prospective, non-randomized, controlled pre-and-post study at two tertiary care hospitals (University of Alberta Hospital, Edmonton, and Foothills Medical Centre, Calgary) in Alberta, Canada, from April 14, 2014, to March 28, 2017. The EASE program was a surgical quality improvement initiative designed for older patients (especially those with frailty) in an emergency surgical setting.
Data were collected before the intervention from April 14, 2014, to July 23, 2015. It was followed by a 3-month implementation period after the EASE initiative was introduced at the Edmonton site. The data used in the empirical study contain 684 records from the Edmonton site (153 and 140 in the pre and post periods) and the Calgary site (169 and 222 in the pre and post periods). We extract two dummy variables (indicating intervention site and post-intervention period), surgery start date, four outcome variables (in-hospital death, postoperative serious infections in hospital, primary outcome in hospital, and total length of stay (LOS) in hospital), and baseline characteristics of the patients (age, sex, ASA status, surgery type, and Charlson comorbidity index).\footnote{The primary outcome was a composite of major postoperative in-hospital complications or death. The ASA status represents the American Society of Anesthesiologists [ASA] physical status classification.} For more details, we refer readers to the EASE study paper\cite{bib29}.
\begin{center}
\mbox{(Insert Table 2 near here)}
\end{center}
The three binary outcomes were analyzed using logistic regression. The duration outcome (total LOS) was analyzed using negative binomial regression. All the models were adjusted for the five characteristics of the patients. We report point estimates of the intervention effect and the corresponding p-values of the significance tests in Table 2. From the table, we see the following. First, the detrending DID makes a significant change in the estimated effects relative to the original DID, which indicates non-ignorable biases in the estimates from the original DID analyses. Second, the permutational detrending DID provides similar p-values compared to the detrending DID for the three binary outcomes but not for total LOS, which implies that handling the bias in the SE estimate is sometimes necessary in practice. Third, results from the detrending DID analyses indicate that the EASE intervention reduced the in-hospital mortality risk, but the effect was far from significant.
\subsection{An Empirical study on CPS data}
The CPS data is one of the most widely used datasets in the DID literature. We provide this empirical study to answer an important question: where does systematic bias originate in socio-economic policy analyses using DID designs? The focus is on illustrating the utility of the methodology instead of exploring a socio-economic policy issue.
We extract data on women in their fourth interview month in the Merged Outgoing Rotation Group of the CPS from 1979 to 1999. There are 549,735 observations of women between 25 and 50 years old who lived in the United States, which provide information on weekly earnings, employment status, education, age, and state of residence. We use wage, defined as log(weekly earnings), as the dependent variable and divide the study period into pre (1979-1989) and post (1990-1999) periods. We select from the 50 states the top 12 in number of observations (California, New York, Texas, Ohio, Illinois, Florida, Pennsylvania, Michigan, New Jersey, North Carolina, Massachusetts, and Virginia). For each pair among the 12 states (66 combinations in total), we select the first one as `intervention' and the second as `reference.' We estimate the interaction effect between pre-and-post and intervention-and-reference, using the original DID, detrending DID, and permutational detrending DID analyses. We include age and education as covariates in each of the 198 DID models. The supplementary material (S-Table 2) contains the results of the three DID analyses. Estimates and p-values by analysis method are depicted in Figure 2.
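A hedged Python sketch of this pairwise analysis loop is shown below; \texttt{run\_did} stands for any of the three DID analyses (its interface is a hypothetical placeholder), and the Bonferroni cutoff matches the $0.05/66$ criterion used later:
\begin{verbatim}
from itertools import combinations

def pairwise_did(states, run_did, alpha=0.05):
    """For each pair of states, treat the first as 'intervention'
    and the second as 'reference', run a DID analysis returning
    (estimate, p-value), and flag Bonferroni-significant pairs."""
    pairs = list(combinations(states, 2))  # 66 pairs for 12 states
    cutoff = alpha / len(pairs)
    results = {}
    for a, b in pairs:
        est, p = run_did(treated=a, reference=b)
        results[(a, b)] = (est, p, p < cutoff)
    return results
\end{verbatim}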
\begin{center}
\mbox{(Insert Figure 2 near here)}
\end{center}
The estimates before-and-after detrending are different (the top panels of Figure 2). The correlation between the two groups of point estimates is 0.504. It implies that the parallel trends assumption fails for the outcomes in most of the 66 pairs of states. The difference in point estimates also leads to a big difference in inference conclusions (the bottom panels of Figure 2). Among the 66 pairs of states, 31 (47.0\%) and 17 (25.8\%) intervention effects are reported as significant (p-value $<0.05$) before-and-after detrending, respectively. The correlation between the two groups of p-values is close to zero (-0.046). To our surprise, the p-values before-and-after permutation are very similar (the bottom panels of Figure 2). The correlation between the two groups of p-values is close to one (0.997). It indicates that the dominant systematic bias comes from non-parallel time trends in most of these original DID studies. On the other hand, the impact from correlation among observations is trivial.
\begin{center}
\mbox{(Insert Table 3 near here)}
\end{center}
Table 3 contains the modeling results with significant effects after Bonferroni adjustment (p-value $<0.05/66$) estimated by the detrending DID or permutational detrending analyses. Under this criterion, we find three pairs of states with significant interaction effects between pre-and-post and intervention-and-reference. We believe there was a substantial difference in pre-and-post policy changes between the states in these three pairs.
\section{Discussion}
In a DID analysis, a difference between the underlying time trends results in a bias in the point estimate, while the error structure leads to a bias in the SE estimate. Under the parallel trends assumption, the original DID analysis provides an unbiased point estimate and a powerful significance test. It becomes complicated, however, when this basic assumption fails.
Detrending provides an unbiased point estimate for the intervention effect. It is the key to eliminating systematic bias in a DID study. Without it, we cannot do permutation, because the matching between individuals' observations and the corresponding observing times is unexchangeable. With detrending, the permutation algorithm can control the inference bias from correlations among errors. We have proved for linear regression that: (1) the detrending DID analysis provides an unbiased point estimate for the intervention effect, and (2) the permutational detrending DID analysis provides both an unbiased point estimate and an unbiased significance test. It is worth noting that the contribution of the permutation algorithm is to create an unbiased estimate of the null distribution (as the number of permutation replications $M\rightarrow\infty$). We see from the proof that it is model-free and distribution-free.
Our simulation experiments support the theoretical conclusions. We see from them that: (1) after detrending, both the bias in the point estimate and the bias in the significance test are gone; and (2) the effect of the permutation algorithm on the bias in the significance test is trivial. We surmise that the over-rejection issue in DID analyses in the literature was mainly from the bias in the point estimate rather than the bias in the SE estimate. Our empirical studies in both clinical and socio-economic research areas support this conclusion. Because of this, we believe that detrending is the most crucial step in a DID analysis. We argue that considering the bias in the SE while ignoring the bias from underlying time trends could be misleading.
The advantage of the detrending technique lies in achieving unbiasedness of the point estimate, but with a loss of efficiency in the significance test. The advantage of the permutation algorithm lies in obtaining an unbiased estimate of the null distribution, but with a computational burden. In practice, we can use one or both based on the needs of an application. If the parallel trends assumption holds, we do not need to use detrending. We can employ the permutation algorithm to handle a potential correlation between errors without losing any efficiency in the significance test. The detrending technique and the permutational detrending algorithm are simple and effective. It is easy to apply them to any model selected for a DID analysis, such as a generalized linear model.
\section*{Acknowledgements}
We thank Professor R.~G. Khadaroo for her permission to use the EASE data. This study was approved by the Health Research Ethics Board at the University of Alberta, and the need for patient-level informed consent was
waived. No patient records/information was required or identified.
\subsection*{Funding}
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
\subsection*{Conflicts of Interest}
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
\subsection*{Supplementary Material}
{\it Supplementary material 1}: S-Table 1. Simulation results under the second setting of parameters.\\
{\it Supplementary material 2}: S-Table 2. Results of application study on CPS data.\\
\section{Introduction}
Magnetic small-angle neutron scattering (SANS) is a powerful tool for investigating nonuniform magnetization structures on a mesoscopic length scale ($\sim 1-300 \, \mathrm{nm}$) inside magnetic materials (see Ref.~\cite{rmp2019} for a recent review). An advantage of the SANS technique, compared e.g.\ to electron-microscopy-based methods, is that it provides statistically-averaged information about a large number of scattering objects. When conventional SANS is supplemented by ultra or very small-angle neutron scattering the spatial resolution can be extended up to the micrometer range~\cite{jericha2012,jericha2013}. This is an important size regime in which many macroscopic material properties are realized. Magnetic SANS has previously been applied to study the spin structures of a wide range of materials such as magnetic nanoparticles~\cite{disch2012,guenther2014,maurer2014,bender2015,grutter2017,bender2018jpcc,bender2018prb,oberdick2018,krycka2019,benderapl2019,bersweiler2019,zakutna2020}, hard and soft magnetic nanocomposites~\cite{suzuki2007,herr08pss,lister2010,ono2016}, proton domains~\cite{michels06a,aswal08nim,noda2016}, magnetic steels~\cite{bischof07,bergner2013,Pareja:15,shu2018}, reentrant spin glasses~\cite{mirebeau2018}, or Heusler-type alloys~\cite{bhatti2012,runov2006,michelsheusler2019,leighton2019}.
In Ref.~\onlinecite{metmi2015}, based on the analytical solution of the corresponding micromagnetic problem, we have derived expressions for the SANS cross section of a ferromagnetic medium with a weakly inhomogeneous uniaxial magnetic anisotropy, saturation magnetization, and exchange stiffness, which are valid up to the third order in the (small) amplitudes of the inhomogeneities. It follows e.g.\ that the second-order SANS cross section at sufficiently small values of the applied magnetic field inevitably displays a prominent UFO-like shape~\cite{perigo2014}. For periodic systems of defects, this theory also reproduces magnetic configurations (see Fig.~5 in Ref.~\onlinecite{michelsdmi2019}), in which the positions of vortices and saddles satisfy certain constraints~\cite{BM18,BM17}, whose topological origins can be traced back to Abel's theorem. But the central result of Ref.~\onlinecite{metmi2015} is that under very general assumptions regarding the type, distribution, and magnitude of random inhomogeneities of material parameters in a magnet, a specific combination of SANS cross-section values as a function of the scattering vector length is exactly zero in the second order, whereas the third-order contribution to this combination is nonzero and has a nontrivial functional dependence on the scattering vector, magnetic field, and the average exchange length.
Normally, the higher-order contributions are insignificant as compared to the larger, lower-order ones. But the cancellation of the second-order terms allows one to unmask the third-order effect and opens it for direct experimental observation and analysis, which is the main purpose of the present work.
In Sec.~\ref{msanscrosssection} we provide the basic equations for the magnetic SANS cross section in the perpendicular scattering geometry. Sections~\ref{msanssecondorder} and \ref{msansthirdorder} then briefly summarize the expressions for the second and higher-order terms in the SANS cross sections. Our numerical and experimental results are presented and discussed in Sec.~\ref{results}. This section also includes the expression for the third-order effect function for an arbitrary spatial defect profile and our analysis of the experimental data.
\section{Summary of previous results}
\subsection{Magnetic SANS Cross Section}
\label{msanscrosssection}
Magnetic SANS experiments are commonly performed in a setup schematically shown in Fig.~\ref{fig:setup}.
\begin{figure}[tb!]
\centering
\resizebox{1.0\columnwidth}{!}{\includegraphics{setup}}
\caption{\label{fig:setup}Schematic drawing of the SANS setup (see main text for explanations).}
\end{figure}
The experiment measures the scattering cross section as a function of the scattering vector $\mathbf{q} = \mathbf{k}_1 - \mathbf{k}_0$, being the difference between the wave vectors of the scattered ($\mathbf{k}_1$) and incident ($\mathbf{k}_0$) neutrons; its magnitude $q = |\mathbf{q}| = (4\pi/\lambda) \sin(\psi/2)$ depends on the mean wavelength $\lambda$ of the neutrons (selected by the velocity selector) and on the scattering angle $\psi$. The applied-field direction $\mathbf{H}_0$ is parallel to the $\mathbf{e}_z$-direction of a Cartesian laboratory coordinate system and perpendicular to the incident neutron beam ($\mathbf{k}_0 \parallel \mathbf{e}_x \perp \mathbf{H}_0$). In the small-angle approximation, the component of $\mathbf{q}$ along $\mathbf{k}_0$ is neglected, i.e., $\mathbf{q} \cong \{ 0, q_y, q_z \} = q \{ 0, \sin\theta, \cos\theta \}$, where the angle $\theta$ specifies the orientation of $\mathbf{q}$ on the two-dimensional detector.
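For reference, a minimal Python sketch of this small-angle geometry (a direct transcription of the relations above) is:
\begin{verbatim}
import numpy as np

def scattering_vector(lam, psi, theta):
    """q = (4 pi / lambda) sin(psi / 2), oriented on the detector as
    q = q (0, sin(theta), cos(theta)); units follow those of lam."""
    q = 4.0 * np.pi / lam * np.sin(psi / 2.0)
    return q * np.array([0.0, np.sin(theta), np.cos(theta)])
\end{verbatim}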
It is well known that the discrete atomic structure of matter is generally of no relevance for small-angle scattering. The cross sections are therefore expressed in terms of suitably coarse-grained continuum variables, represented by their Fourier transforms. The latter are denoted here by a tilde over the symbol, so that for the nuclear scattering-length density we have $\widetilde{N}(\mathbf{q})=\iiint N(\mathbf{r})e^{-\imath \mathbf{q}\cdot\mathbf{r}} \, dV$. Similarly, the Fourier image of the coordinate-dependent saturation magnetization of the material $M_s(\mathbf{r})$ is denoted as $\widetilde{M}_s(\mathbf{q})$, and $\widetilde{\mathbf{M}}(\mathbf{q}) = \{ \widetilde{M}_x(\mathbf{q}), \widetilde{M}_y(\mathbf{q}), \widetilde{M}_z(\mathbf{q}) \}$ is the Fourier transform of the magnetization vector field $\mathbf{M}(\mathbf{r}) = \{ M_x(\mathbf{r}), M_y(\mathbf{r}), M_z(\mathbf{r}) \}$. Then, the total unpolarized elastic SANS cross section $d \Sigma / d \Omega$ can be written as~\cite{rmp2019}:
\begin{equation}
\label{sigmasansperp2d}
\frac{d \Sigma}{d \Omega}(\mathbf{q}) = \frac{d \Sigma_{\mathrm{res}}}{d \Omega}(\mathbf{q}) + \frac{d \Sigma_{\mathrm{SM}}}{d \Omega}(\mathbf{q}) ,
\end{equation}
where
\begin{equation}
\label{sigmaresperp0}
\frac{d \Sigma_{\mathrm{res}}}{d \Omega}(\mathbf{q}) = \frac{8 \pi^3}{V} \left( |\widetilde{N}|^2 + b_H^2 |\widetilde{M}_s|^2 \sin^2\theta \right)
\end{equation}
represents the nuclear and magnetic residual SANS cross section, which is measured at complete magnetic saturation (large applied field), and the remaining term
\begin{eqnarray}
\label{sigmasmperp0}
\frac{d \Sigma_{\mathrm{SM}}}{d \Omega}(\mathbf{q}) = \frac{8 \pi^3}{V} b_H^2 \left( |\widetilde{M}_x|^2 + |\widetilde{M}_y|^2 \cos^2\theta \right. \nonumber \\ \left. + \left[ |\widetilde{M}_z|^2 - |\widetilde{M}_s|^2 \right] \sin^2\theta \right. \nonumber \\ \left. - (\widetilde{M}_y \widetilde{M}_z^{\ast} + \widetilde{M}_y^{\ast} \widetilde{M}_z) \sin\theta \cos\theta \right) ,
\end{eqnarray}
denotes the spin-misalignment SANS cross section, which vanishes at saturation when the real-space magnetization is given by $\mathbf{M} = \{0, 0, M_z = M_s(\mathbf{r}) \}$. In the preceding expressions $V$ is the scattering volume and $b_H = 2.91 \times 10^8 \, \mathrm{A}^{-1}\mathrm{m}^{-1}$ is the magnetic scattering length in the small-angle regime (the atomic magnetic form factor is approximated by $1$, since we are dealing with forward scattering). Note that Ref.~\onlinecite{metmi2015} uses a different definition of the Fourier transform, so that there appears an insubstantial difference in the prefactors.
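For readers who wish to evaluate Eq.~(\ref{sigmasmperp0}) numerically, a minimal Python sketch is given below; the function name and array conventions are ours, and unit bookkeeping is left to the user:
\begin{verbatim}
import numpy as np

def spin_misalignment_cs(Mx, My, Mz, Ms, theta, V, bH=2.91e8):
    # Pointwise spin-misalignment cross section from complex Fourier
    # amplitudes at a given q; theta is the detector azimuthal angle.
    s, c = np.sin(theta), np.cos(theta)
    cross = np.real(My * np.conj(Mz) + np.conj(My) * Mz)
    return (8.0 * np.pi**3 / V) * bH**2 * (
        np.abs(Mx)**2
        + np.abs(My)**2 * c**2
        + (np.abs(Mz)**2 - np.abs(Ms)**2) * s**2
        - cross * s * c
    )
\end{verbatim}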
\subsection{Second-order magnetic SANS}
\label{msanssecondorder}
The Fourier images of the magnetization components $\widetilde{\mathbf{M}}(\mathbf{q})$ for a particular material (class of materials) at different values of the applied field can be found by solving the corresponding micromagnetic problem. When the material is uniform and boundless, its magnetization will also be uniform under any nonzero applied field. For small deviations from uniformity, it is possible to build the analytical solution of the micromagnetic problem for $\widetilde{\mathbf{M}}(\mathbf{q})$ on top of the well-known theory of the approach-to-magnetic saturation~\cite{schloemann67,schloemann71}. The chief difference is that the latter is concerned with the value of the average magnetization $\widetilde{\mathbf{M}}(0)$, whereas the magnetic SANS cross section (\ref{sigmasmperp0}) depends on all its Fourier harmonics. Yet, the setting of the micromagnetic problem is very similar. It is assumed that the local saturation magnetization is a function of the position $\mathbf{r} = \left\{ x, y, z \right\}$ inside the material:
\begin{equation}
\label{msatdef}
M_s(\mathbf{r}) = M_s [1 + I_m(\mathbf{r})] ,
\end{equation}
where $I_m$ is an inhomogeneity function, small in magnitude, which describes the local variation of $M_s(\mathbf{r})$. Similar spatial inhomogeneities can be present in the magnetostatic exchange length $l_M^2(\mathbf{r}) = 2 A(\mathbf{r})/[\mu_0 M_s^2(\mathbf{r})] = l_0^2 [1 + I_e(\mathbf{r})]$ and in the dimensionless quality factor $Q(\mathbf{r}) = 2K(\mathbf{r})/[\mu_0 M_s^2(\mathbf{r})] = Q_0 I_k(\mathbf{r})$, where $A$ and $K$ are the spatially-dependent exchange stiffness and the uniaxial anisotropy constant, respectively. The spatial averages of the inhomogeneity functions are assumed to be zero:~$\langle I_{m,e,k}(\mathbf{r}) \rangle = 0$. Consequently, $\langle M_s(\mathbf{r}) \rangle = M_s$ is the average saturation magnetization of the sample, which can be measured with a magnetometer.
Assuming that the inhomogeneity functions are small quantities $I_{m,e,k} \ll 1$ of the same order, the solution of the micromagnetic problem can be expressed as a Taylor series:
\begin{equation}
\widetilde{\mathbf{M}} = \{ 0, 0, M_s \} \delta(\mathbf{q}) + \widetilde{\mathbf{M}}^{(1)} + \widetilde{\mathbf{M}}^{(2)} + \ldots ,
\label{taylorexpansionsat}
\end{equation}
where $\delta(\mathbf{q})$ is the Dirac delta function, and $\widetilde{\mathbf{M}}^{(i)}$ contains the terms of order $i$ in $I_{m,e,k}(\mathbf{r})$. The first term in Eq.~(\ref{taylorexpansionsat}) corresponds to the saturated state. Solving the micromagnetic problem up to the first order and obtaining expressions for $\widetilde{\mathbf{M}}^{(1)}$ allows one to express the magnetic SANS cross section up to the second order, because Eq.~(\ref{sigmasmperp0}) is quadratic in $\widetilde{\mathbf{M}}$. The first set of such expressions was obtained in~\cite{michels2013} for the case of inhomogeneous saturation magnetization and anisotropy. They were verified and extended in many follow-up works (including Ref.~\onlinecite{metmi2015}) and admit a very convenient representation:
\begin{eqnarray}
\label{sigmasmperp}
\frac{d \Sigma_{\mathrm{SM}}}{d \Omega}(\mathbf{q}) = S_H(\mathbf{q}) R_H(\mathbf{q}, H_i) + S_M(\mathbf{q}) R_M(\mathbf{q}, H_i) ,
\end{eqnarray}
where the contribution $S_H R_H$ is due to perturbing magnetic anisotropy fields and the part $S_M R_M$ is related to magnetostatic fields; $H_i$ is the internal magnetic field, consisting of $H_0$ and of the average demagnetizing field due to the shape of the sample. The anisotropy-field scattering function
\begin{equation}
\label{shdef}
S_H(\mathbf{q}) = \frac{8 \pi^3}{V} b_H^2 |\mathbf{\widetilde{H}}_p|^2
\end{equation}
depends on the Fourier transform $\mathbf{\widetilde{H}}_p(\mathbf{q})$ of the local magnetic anisotropy field, whereas the scattering function of the longitudinal magnetization
\begin{equation}
\label{smdef}
S_M(\mathbf{q}) = \frac{8 \pi^3}{V} b_H^2 |\widetilde{M}_z|^2
\end{equation}
characterizes the spatial variations of the saturation magnetization. The latter can be related to the mean magnitude $\Delta M \propto \widetilde{M}_z$ of the magnetization jump at internal (e.g., particle-matrix) interfaces. The dimensionless micromagnetic response functions are given by:
\begin{equation}
\label{rhdefperp}
R_H(q, \theta, H_i) = \frac{p^2}{2} \left( 1 + \frac{\cos^2\theta}{\left( 1 + p \sin^2\theta \right)^2} \right)
\end{equation}
and
\begin{equation}
\label{rmdefperp}
R_M(q, \theta, H_i) = \frac{p^2 \sin^2\theta \cos^4\theta}{\left( 1 + p \sin^2\theta \right)^2} + \frac{2 p \sin^2\theta \cos^2\theta}{1 + p \sin^2\theta} ,
\end{equation}
where $p = 1/(h + l_M^2 q^2)$, $h = H_i/M_s$, and the magnetostatic exchange length is $l_M \sim 3-10 \, \mathrm{nm}$ (Ref.~\onlinecite{kronfahn03}). Alternatively, $p = M_s/[H_i ( 1 + l_H^2 q^2 )]$ can be expressed via the micromagnetic exchange length $l_H(H_i) = \sqrt{2 A/(\mu_0 M_s H_i)}$, which characterizes the range over which perturbations in the spin structure decay~\cite{michels03prl,michelsprb2010,michels2013}.
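The response functions are elementary to implement; a minimal Python sketch (ours) of Eqs.~(\ref{rhdefperp}) and (\ref{rmdefperp}) reads:
\begin{verbatim}
import numpy as np

def response_functions(q, theta, Hi, Ms, lM):
    # R_H and R_M for the perpendicular scattering geometry
    h = Hi / Ms
    p = 1.0 / (h + lM**2 * q**2)
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    d = 1.0 + p * s2
    RH = 0.5 * p**2 * (1.0 + c2 / d**2)
    RM = p**2 * s2 * c2**2 / d**2 + 2.0 * p * s2 * c2 / d
    return RH, RM
\end{verbatim}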
We emphasize that it is $d \Sigma_{\mathrm{SM}} / d \Omega$ which depends on the magnetic interactions (exchange, anisotropy, magnetostatics, etc.), while $d \Sigma_{\mathrm{res}} / d \Omega$ is determined by the geometry of the underlying grain microstructure (e.g., the particle shape or the particle-size distribution). One way to access the magnetic interactions is to subtract the residual scattering cross section, measured at a large saturating field, from the total $d \Sigma / d \Omega$ at a lower field. This is not always possible in experimental situations because of the difficulty of achieving complete magnetic saturation of the sample. The other approach~\cite{michels2013} is to use the bilinearity of Eq.~(\ref{sigmasmperp}) in $R_H$ and $R_M$, which are simple functions of $\mathbf{q}$, $H_0$, and $l_M$ only. Linear regression then allows one to compute $S_M$, $S_H$, and, by extrapolation, $d \Sigma_{\mathrm{res}} / d \Omega$ at each $q$ without the necessity to magnetically saturate the sample. Analyzing azimuthally-averaged SANS cross sections at different fields in this way, as functions of the magnitude of the scattering vector, is a reliable and very precise method~\cite{michels2013} for obtaining the value of the exchange-stiffness constant $A$.
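Schematically, the regression step amounts to an ordinary least-squares fit at each $q$ over the measured fields; a Python sketch (ours; weighting and the azimuthal averaging itself are omitted):
\begin{verbatim}
import numpy as np

def decompose_at_q(sigma, RH, RM):
    # Fit sigma_i = res + S_H * RH_i + S_M * RM_i over fields i at
    # one fixed q; returns [dSigma_res/dOmega, S_H, S_M].
    A = np.column_stack([np.ones_like(RH), RH, RM])
    coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)
    return coef
\end{verbatim}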
\subsection{Third-order effect in magnetic SANS}
\label{msansthirdorder}
Normally, the higher-order effects are masked by the lower-order ones. But in magnetic SANS the third-order contribution can be unmasked by considering the following combination of the perpendicular unpolarized cross-section values~\cite{metmi2015}:
\begin{eqnarray}
\Delta\Sigma_{\mathrm{SM}} = \left. \frac{d \Sigma_{\mathrm{SM}}}{d \Omega} \right|_{\theta = 0} - 2 \left. \frac{d \Sigma_{\mathrm{SM}}}{d \Omega}\right|_{\theta = \pi/2}.
\label{eq:deltasigma}
\end{eqnarray}
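Operationally, Eq.~(\ref{eq:deltasigma}) is a simple combination of two radial cuts through the detector image; a Python sketch (ours; the axis convention, the beam-centre indices, and the absence of sector averaging are simplifying assumptions):
\begin{verbatim}
import numpy as np

def delta_sigma(cs2d, iy0, iz0):
    # theta = 0 cut (q parallel to H0) minus twice the theta = pi/2
    # cut, both taken from the beam centre (iy0, iz0) outwards
    par = cs2d[iy0, iz0:]    # along the field axis (our convention)
    perp = cs2d[iy0:, iz0]   # perpendicular to the field axis
    n = min(par.size, perp.size)
    return par[:n] - 2.0 * perp[:n]
\end{verbatim}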
The second-order contribution in $\Delta\Sigma_{\mathrm{SM}}$ is exactly zero, which can also be seen from Eqs.~(\ref{sigmasmperp})$-$(\ref{rmdefperp}). This cancellation is a universal property of the SANS cross section from disordered ferromagnets, independent of the specific spatial profile of the defects. Assuming for simplicity that the inhomogeneity functions are related via $I_m = I$ and $I_k = \kappa I$ with $\kappa \lesssim 1$, the remaining third-order contribution is nonzero and takes on an especially simple form~\cite{metmi2015}:
\begin{eqnarray}
\frac{\Delta\Sigma_\mathrm{SM}V}{32 \pi^3 b_H^2} =
\left. \left\langle \frac{\widetilde{M}_x^{(1)}\otimes \widetilde{M}_x^{(1)}+\widetilde{M}_y^{(1)}\otimes \widetilde{M}_y^{(1)}}{2} \widetilde{I} \right\rangle \right|_{\genfrac{}{}{0pt}{}{q_z=0}{q_x = 0}} ,
\label{thirdorderintermediate}
\end{eqnarray}
where $\otimes$ denotes the discrete convolution in $\mathbf{q}$-space (evaluated at $q = q_y$), and the angular brackets denote a triple (configurational, directional, and anisotropy-direction) average. Each of $\widetilde{M}_x^{(1)}$ and $\widetilde{M}_y^{(1)}$ is proportional to $\widetilde{I}$ in the first order, so that their product multiplied by $\widetilde{I}$ is of the third order in $\widetilde{I}$. Note that in Ref.~\onlinecite{metmi2015} discrete Fourier transforms were used, with the dimensions of $\widetilde{M}_i$ and $M_i$ being the same. For spherical Gaussian defects with a spatial defect profile $\propto e^{-r^2/s^2}$, where $s$ is the defect size, it is possible to split off the $\kappa$-dependent terms and obtain the following expression for $\Delta\Sigma_{\mathrm{SM}}$~\cite{metmi2015}:
\begin{eqnarray}
\Delta\Sigma_{\mathrm{SM}} \propto \kappa^2 g_{\mathrm{A}}(\mu,h,\lambda) + g_{\mathrm{MS}}(\mu, h,\lambda),
\label{eq:sigmaperpthree}
\end{eqnarray}
where the dimensionless functions $g_{\mathrm{A}}$ and $g_{\mathrm{MS}}$ depend on the reduced scattering vector $\mu = q s$, the reduced magnetic field $h$, and on the dimensionless parameter $\lambda = l_M / s$. These functions are plotted in Figs.~4 and 5 of Ref.~\onlinecite{metmi2015}.
Now we have all the tools at hand to address the main questions of this paper: 1)~whether the third-order magnetic SANS effect can be detected in experimental data and numerical micromagnetic simulations, so that the difference $\Delta\Sigma_{\mathrm{SM}}$ is nonzero; and 2)~whether the measured $\Delta\Sigma_{\mathrm{SM}}$ can indeed be described by Eqs.~(\ref{thirdorderintermediate}) and (\ref{eq:sigmaperpthree}).
\section{Results and discussion}
\label{results}
First, to confirm the existence of the third-order effect, we have performed
micromagnetic simulations of the magnetic SANS cross section in an artificial system of magnetic holes. These numerical computations (see Ref.~\onlinecite{erokhin2015} for details) were adapted to the microstructure of porous iron with a pore volume fraction of $32 \, \%$ and with randomly placed pore centers. The simulation code takes into account the four standard contributions to the total magnetic energy, i.e., the energy in the external magnetic field and the magnetic anisotropy, exchange, and dipolar interaction energies. The sample volume $V = 0.2 \times 0.75 \times 0.75 \, \mathrm{\mu m}^3$ was divided into $N \sim 5 \times 10^5$ mesh elements, comprising both pores and nanocrystallites. Due to the flexibility of the mesh-generation algorithm, the shape of the pores can be controlled and was taken to be polyhedron-like. The pore-size distribution was assumed to be lognormal~\cite{krill98} with a median of $15 \, \mathrm{nm}$ and a variance of $1.16$, which yields a maximum of the distribution at $12 \, \mathrm{nm}$. The \textit{local} saturation magnetization of each Fe nanocrystallite was taken as $\mu_0 M_s = 2.2 \, \mathrm{T}$, which in conjunction with the above-mentioned porosity value yields $\mu_0 \overline{M_s} \cong 1.5 \, \mathrm{T}$ for the entire sample. For the exchange-stiffness constant and the first cubic anisotropy constant of Fe, we have assumed values of $A = 25 \, \mathrm{pJ/m}$ and $K_1 = 47 \, \mathrm{kJ/m^3}$, respectively (Ref.~\onlinecite{cullitygraham05}). The directions of the anisotropy axes vary randomly from crystallite to crystallite. The energy-minimization procedure provides (at some particular value of the applied magnetic field) the magnetization vector field $\mathbf{M}(\mathbf{r}) = \{ M_x(\mathbf{r}), M_y(\mathbf{r}), M_z(\mathbf{r}) \}$ of the sample on an \textit{irregular} lattice. This distribution is then mapped onto a \textit{regular} lattice, which permits us to calculate the magnetization Fourier coefficients and the ensuing neutron scattering cross section using the fast Fourier transform. Further details can be found in Refs.~\onlinecite{erokhin2012prb,michels2012prb1,michels2014jmmm}.
Nuclear scattering was not considered, and only the total magnetic SANS cross section [Eq.~(\ref{sigmasansperp2d})] without the nuclear term was computed. The numerical simulations are not limited by the series expansion and include terms of all orders in the inhomogeneity amplitude.
The spin-misalignment SANS cross section [Eq.~(\ref{sigmasmperp0})] was obtained by taking the difference between the total magnetic $d \Sigma / d \Omega$ at $0.6 \, \mathrm{T}$ and at a larger magnetic field of $10 \, \mathrm{T}$, which approximates the magnetically saturated state. The resulting difference pattern exhibits the clover-leaf anisotropy with maxima roughly along the diagonals of the detector (upper row in Fig.~\ref{fig3}). This angular anisotropy is related to the dipolar fields which emerge from the jump of the magnetization magnitude at the pore-matrix interface~\cite{erokhin2015}.
The quantity $\Delta\Sigma_{\mathrm{SM}}$ was computed by subtracting twice the spin-misalignment SANS cross-section values along the vertical ($q_z = 0$) direction from its values along the horizontal ($q_y = 0$) direction. These curves (including the resulting $\Delta\Sigma_{\mathrm{SM}}$) are shown in Figs.~\ref{fig3}(a) and (b). The simulations yield nonzero values of $\Delta\Sigma_{\mathrm{SM}}$, which cannot be described by the second-order SANS theory. The plot of $\Delta\Sigma_{\mathrm{SM}}$ on a semi-logarithmic scale, shown in Fig.~\ref{fig3}(b), is mostly linear, which agrees well with the prediction of Ref.~\onlinecite{metmi2015}.
\begin{figure}[tb!]
\centering{\includegraphics[width=1.0\columnwidth]{fig3}}
\caption{Results of micromagnetic simulations of nanoporous Fe for the third-order magnetic SANS effect~\cite{erokhin2015}. (upper row) Illustration of the subtraction procedure between the total $d \Sigma_{\mathrm{mag}} / d \Omega$ at $0.6 \, \mathrm{T}$ and at $10.0 \, \mathrm{T}$ (logarithmic color scale). (a)~Spin-misalignment SANS cross section along the horizontal and vertical directions (see inset). (b)~Resulting $\Delta\Sigma_{\mathrm{SM}}(q)$ computed according to Eq.~(\ref{eq:deltasigma}) (log-linear scale).}
\label{fig3}
\end{figure}
\begin{figure*}[tb]
\centering{\includegraphics[width=1.0\textwidth]{fig1}}
\caption{SANS results on NANOPERM [$(\mathrm{Fe}_{0.985}\mathrm{Co}_{0.015})_{89}\mathrm{Zr}_7\mathrm{B}_3$]. Total (nuclear and magnetic) unpolarized SANS cross section $d \Sigma / d \Omega$ in units of $100\, \mathrm{cm}^{-1} \mathrm{sr}^{-1}$ at (a)~$\mu_0 H_0 = 196 \, \mathrm{mT}$ and (b)~$\mu_0 H_0 = 1270 \, \mathrm{mT}$. (c)~Spin-misalignment SANS cross section $d \Sigma_{\mathrm{SM}} / d \Omega$ at $196 \, \mathrm{mT}$, i.e., $\frac{d \Sigma_{\mathrm{SM}}}{d \Omega}(196 \, \mathrm{mT}) = \frac{d \Sigma}{d \Omega}(196 \, \mathrm{mT}) - \frac{d \Sigma}{d \Omega}(1270 \, \mathrm{mT})$ (logarithmic color scale) ($\mathbf{k}_0 \perp \mathbf{H}_0$). The applied magnetic field $\mathbf{H}_0$ is horizontal in the plane. (d)~Solid line:~Normalized room-temperature magnetization curve. Data points ($1270 \, \mathrm{mT}$, $312 \, \mathrm{mT}$, $196 \, \mathrm{mT}$, $103 \, \mathrm{mT}$, $61 \, \mathrm{mT}$) specify the fields where the SANS measurements have been performed.}
\label{fig1}
\end{figure*}
To perform a quantitative analysis of the theoretical predictions for the third-order magnetic SANS effect, we have used an existing experimental data set from the two-phase Fe-based alloy NANOPERM~\cite{suzuki06}. This material with a nominal composition of $(\mathrm{Fe}_{0.985}\mathrm{Co}_{0.015})_{89}\mathrm{Zr}_7\mathrm{B}_3$ consists of a dispersion of Fe nanoparticles, which are embedded in an amorphous magnetic matrix (particle size:~$15 \pm 2 \, \mathrm{nm}$; crystalline volume fraction:~$65 \, \%$; saturation magnetization:~$1.64 \, \mathrm{T}$). The raw SANS cross-section data for this material were already published in Ref.~\onlinecite{honecker2013}. The field-dependent azimuthally-averaged SANS cross sections can be excellently described by the second-order magnetic SANS theory (see Fig.~4(b) in Ref.~\onlinecite{honecker2013}), yielding information on the magnetic interactions (exchange-stiffness constant, magnetostatic, and anisotropy fields).
Figure~\ref{fig1} shows the two-dimensional total unpolarized SANS cross section $d \Sigma / d \Omega$, computed from the raw data of Ref.~\onlinecite{honecker2013}, at an external field of $196 \, \mathrm{mT}$ [Fig.~\ref{fig1}(a)] and at a large magnetic field of $1270 \, \mathrm{mT}$ [Fig.~\ref{fig1}(b)]. The experimental data set at $1270 \, \mathrm{mT}$ can be taken as an approximation to the residual SANS cross section $d \Sigma_{\mathrm{res}} / d \Omega$ [Eq.~(\ref{sigmaresperp0})], corresponding to the scattering signal in the completely saturated state [compare the hysteresis loop in Fig.~\ref{fig1}(d)]. It can be seen that the scattering at saturation exhibits a maximum intensity along the direction perpendicular to the field, which is due to the term $|\widetilde{M}_s|^2 \sin^2\theta$ in $d \Sigma_{\mathrm{res}} / d \Omega$. Reducing the field to $196 \, \mathrm{mT}$ results in the emergence of transversal spin-misalignment fluctuations (in addition to the $|\widetilde{M}_z|^2 \sin^2\theta$ contribution), which give rise to angular anisotropies with maxima along the horizontal direction and (roughly) along the detector diagonals [compare the expressions for both response functions in Eqs.~(\ref{rhdefperp}) and (\ref{rmdefperp})]. This is clearly revealed by inspection of the spin-misalignment SANS at $196 \, \mathrm{mT}$ [Fig.~\ref{fig1}(c)], which shows (i)~a weak clover-leaf-type anisotropy and (ii)~an elliptical elongation along the field direction. The scattered dots (speckles) at the outskirts of the cross section in Fig.~\ref{fig1}(c) indicate the presence of a small, (nearly) $q$-vector-independent random error in the data, which we estimate to be around $\pm 30 \, \mathrm{cm}^{-1} \mathrm{sr}^{-1}$. Note that azimuthal averaging, performed in Ref.~\onlinecite{honecker2013}, smooths this error out. The error has a bigger impact on the present third-order effect analysis, which is based on the subtraction of the cross-section values along only two (vertical and horizontal) directions on the detector [Eq.~(\ref{eq:deltasigma})]. The origin of the clover-leaf anisotropy is related to the dipolar stray fields that are due to the jumps in $M_s$ at particle-matrix interfaces (as in the case of nanoporous Fe, see Fig.~\ref{fig3}). These stray fields decorate the Fe nanoparticles via the exertion of a magnetic torque on the magnetic moments of the matrix~\cite{honecker2013,bischof07,michels06prb}.
The analysis in Ref.~\onlinecite{honecker2013} yields a value for the exchange stiffness of $A = 4.7 \pm 0.9 \, \mathrm{pJ/m}$. Also, the fitted values of $S_H(q)$ are many orders of magnitude smaller than $S_M(q)$ (see Fig.~5 in Ref.~\onlinecite{honecker2013}). This means that magnetostatic effects (due to the small spatial variation of the saturation magnetization) dominate over the anisotropy ones (due to the small spatial variation of the anisotropy constant). Thus, we can ignore the latter and assume $\kappa = 0$ in Eq.~(\ref{eq:sigmaperpthree}), which makes it independent of the function $g_{\mathrm{A}}$. Finally, the $q$-dependence of $S_M$ seems to be better described by an exponential function than by a Gaussian (a Gaussian spatial profile of the defects remains Gaussian in $q$-space). We verified this using the numerical data of Ref.~\onlinecite{honecker2013}. The exponential $S_M(q)$ corresponds to a Lorentzian-squared defect profile in real space, such as the following model for $I(\mathbf{r})$:
\begin{equation}
\label{defectsModel}
I(\mathbf{r}) = \sum_i \frac{a_i}{(1 + |\mathbf{r} - \mathbf{r}_i|^2/s^2)^2} - \mathrm{const} ,
\end{equation}
where $a_i \ll 1$ and $\mathbf{r}_i$ are, respectively, the random amplitudes and positions of the defects, and the summation runs over the sample volume. The value of the additive constant is chosen to ensure that $\langle I(\mathbf{r}) \rangle = 0$, which is always possible for the considered defect profile.
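As a consistency check (a standard Fourier-transform identity, stated here by us rather than quoted from Ref.~\onlinecite{metmi2015}), the transform of a single such defect is
\begin{equation}
\iiint \frac{e^{-\imath \mathbf{q}\cdot\mathbf{r}}}{(1 + r^2/s^2)^2} \, dV = \frac{4\pi}{q} \int_0^{\infty} \frac{r \sin(qr)}{(1 + r^2/s^2)^2} \, dr = \pi^2 s^3 e^{-q s} ,
\end{equation}
so that $\widetilde{I}(\mathbf{q}) \propto e^{-qs}$ and $S_M \propto |\widetilde{I}|^2 \propto e^{-2qs}$ is indeed exponential in $q$; the $q \to 0$ limit also reproduces the single-defect volume $v = \pi^2 s^3$ used below.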
Substituting the first-order micromagnetic solutions for $\widetilde{M}_x^{(1)}$ and $\widetilde{M}_y^{(1)}$ from Ref.~\onlinecite{metmi2015} into Eq.~(\ref{thirdorderintermediate}), passing from a summation to an integration, and assuming $\kappa = 0$ yields the following expression:
\begin{widetext}
\begin{eqnarray}
\label{eq:gms}
g_{\mathrm{MS}}=\frac{v}{(2\pi)^3}\left\langle\widetilde{I}(\mathbf{q})\iiint \frac{(x_{\mathbf{q}-\mathbf{q}^\prime} x_{\mathbf{q}^\prime}+y_{\mathbf{q}-\mathbf{q}^\prime} y_{\mathbf{q}^\prime})z_{\mathbf{q}-\mathbf{q}^\prime} z_{\mathbf{q}^\prime}\widetilde{I}(\mathbf{q}-\mathbf{q}^\prime)\widetilde{I}(\mathbf{q}^\prime)}{2(h_{\mathbf{q}-\mathbf{q}^\prime}+x^2_{\mathbf{q}-\mathbf{q}^\prime}+y^2_{\mathbf{q}-\mathbf{q}^\prime})(h_{\mathbf{q}^\prime}+x^2_{\mathbf{q}^\prime}+y^2_{\mathbf{q}^\prime})} d^3\mathbf{q}^\prime\right\rangle ,
\end{eqnarray}
\end{widetext}
where $\{x_\mathbf{q},y_\mathbf{q},z_\mathbf{q}\} = \mathbf{q}/q$, $v=\iiint 1/(1+r^2/s^2)^2 d^3\mathbf{r}=\pi^2 s^3$ is the volume of a single defect (introduced to make the result dimensionless), and $h_\mathbf{q} = h + l_M^2 |\mathbf{q}|^2$. The integral results from the convolution, and the angular brackets correspond to the directional (over different representative volume orientations) and the ensemble [over different realizations of the random process for $I(\mathbf{r})$] averaging. Unlike the treatment of $g_{\mathrm{MS}}$ in Ref.~\onlinecite{metmi2015}, Eq.~(\ref{eq:gms}) does not assume a specific defect model $I(\mathbf{r})$. Inserting the Fourier transform $\widetilde{I}(\mathbf{q})$ of Eq.~(\ref{defectsModel}) and performing the averaging results in a slightly more complicated expression, which was used in the actual computations (see the Supplemental Mathematica file~\cite{suppmathfile}).
\begin{figure}[tb!]
\centering{\includegraphics[width=1.0\columnwidth]{fit3rdOrder}}
\vspace{-0.5cm}
\caption{Spin-misalignment SANS cross-section differences of NANOPERM [$(\mathrm{Fe}_{0.985}\mathrm{Co}_{0.015})_{89}\mathrm{Zr}_7\mathrm{B}_3$] at selected applied magnetic fields. Field values from bottom to top---$1270 \, \mathrm{mT}$, $312 \, \mathrm{mT}$, $196 \, \mathrm{mT}$, $103 \, \mathrm{mT}$. The points are the experimental data, the solid lines in (a) are computed from Eq.~(\ref{eq:ordIII}) with $\Lambda = \Lambda_0 = 0.29 \times 10^{6} \, \mathrm{cm}^{-1}\mathrm{sr}^{-1}$ and $g_{\mathrm{MS}}$, corresponding to the Lorentzian-squared defects (Eq.~(\ref{defectsModel}) with $s = 19.5 \, \mathrm{nm}$). Insets show the fitted dependencies of $\Delta\Sigma_\mathrm{res}(q)$ and $\Lambda(q)$. The horizontal line in (c) is $\Lambda = \Lambda_0$.}
\label{fig2}
\end{figure}
The main problem in analyzing experimental data, measured at finite fields, is to find and subtract the field-independent residual SANS cross section. Even though at a large magnetic field the average magnetization is close to saturation, there are still many local fluctuations, which can be detected by the extremely sensitive SANS technique. The assumption of magnetic saturation at the largest experimentally achievable magnetic field can be avoided by analyzing a combination of the cross-section values similar to Eq.~(\ref{eq:deltasigma}), but constructed from the total (nuclear and magnetic) cross-section data as opposed to the spin-misalignment part only:
\begin{eqnarray}
\label{eq:ordIII}
\Delta\Sigma & = &\left. \frac{d \Sigma}{d \Omega} \right|_{\theta = 0} - 2 \left. \frac{d \Sigma}{d \Omega}\right|_{\theta = \pi/2} \nonumber \\
& = & \Delta\Sigma_\mathrm{res}(q) + \Lambda \, g_{\mathrm{MS}}(q s, H_i/M_s, l_M/s).
\end{eqnarray}
For the same reasons as before, $\Delta\Sigma$ contains no second-order term. It consists only of the magnetic-field-independent residual contribution $\Delta\Sigma_\mathrm{res} = \frac{8 \pi^3}{V} [|\widetilde{N}|^2(\{0,0,q\}) - 2 |\widetilde{N}|^2(\{0,q,0\}) - 2 b_H^2 |\widetilde{M}_s|^2(\{0,q,0\})]$ and (if fourth and higher-order terms are neglected) of the third-order term. In the case of our material with $S_H \ll S_M$ the latter is proportional to $g_{\mathrm{MS}}$ with a field and $q$-independent scaling parameter $\Lambda$. Because the large nuclear and residual cross sections are subtracted twice in Eq.~(\ref{eq:ordIII}), both $\Delta\Sigma$ and $\Delta\Sigma_\mathrm{res}$ are negative, while $\Lambda$ is positive.
Linearity of Eq.~(\ref{eq:ordIII}) (in $g_{\mathrm{MS}}$) suggests the possibility of fitting the field dependence of the experimental $\Delta\Sigma$ at each $q$ as a function of the computed value of $g_{\mathrm{MS}}$ using linear regression. The only remaining parameter is the size of the defects $s$, which can be adjusted iteratively to minimize the total error of the fit. This procedure results in the best-fit value of $s = 19.5 \, \mathrm{nm}$ and in the corresponding $\Delta\Sigma_\mathrm{res}(q)$ and $\Lambda(q)$ dependencies as shown in Fig.~\ref{fig2}(b) and Fig.~\ref{fig2}(c). The $s$-value agrees very well with the nominal particle size of $15 \pm 2 \, \mathrm{nm}$ of the alloy.
It is important to note that, while theoretically the value of $\Lambda$ should be independent of $q$, this dependence was allowed during the fitting procedure. If the specific choice of the defect profile and the resulting $g_{\mathrm{MS}}$ is viable, a $q$-independent $\Lambda$-value should result from the fit. As one can see from Fig.~\ref{fig2}(c), this is indeed the case. The error bars were computed using a Monte-Carlo procedure by adding a random $\pm 30 \, \mathrm{cm}^{-1}\mathrm{sr}^{-1}$ contribution to the measured $d \Sigma / d \Omega$-values and computing the standard deviation of the resulting $\Lambda$ at each $q$ across many realizations of this random process. The above value of the assumed absolute error has been estimated from the scatter of the data at the outskirts of the cross sections (at $q > 0.1 \, \mathrm{nm}^{-1}$) shown in Fig.~\ref{fig1}(c). For $q \lesssim 0.1 \, \mathrm{nm}^{-1}$, $\Lambda(q)$ assumes a nearly constant value of $\Lambda = \Lambda_0 = 0.29 \times 10^{6}\, \mathrm{cm}^{-1}\mathrm{sr}^{-1}$, shown by the horizontal line. The amplification of the error at larger $q$ is due to the small value of the cross section at these scattering vectors and, precisely for this reason, is probably of no relevance. This is corroborated by the theoretical lines in Fig.~\ref{fig2}(a), which are plotted according to Eq.~(\ref{eq:ordIII}) using the fixed $q$-independent value of $\Lambda = \Lambda_0$. The solid lines fit the experimental data reasonably well in the approach-to-saturation regime.
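A Python sketch of this Monte-Carlo procedure (ours; whether the assumed $\pm 30 \, \mathrm{cm}^{-1}\mathrm{sr}^{-1}$ error is drawn uniformly, as below, or from a Gaussian is our illustrative choice):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def lambda_scatter(cs_par, cs_perp, gms, noise=30.0, n_trials=1000):
    # At one q: perturb the measured cross sections along (theta = 0)
    # and across (theta = pi/2) the field, rebuild dSigma over the
    # fields, refit dSigma = res + Lambda * g_MS, return std(Lambda).
    A = np.column_stack([np.ones_like(gms), gms])
    lambdas = []
    for _ in range(n_trials):
        dsig = (cs_par + rng.uniform(-noise, noise, cs_par.shape)
                - 2.0 * (cs_perp
                         + rng.uniform(-noise, noise, cs_perp.shape)))
        coef, *_ = np.linalg.lstsq(A, dsig, rcond=None)
        lambdas.append(coef[1])
    return float(np.std(lambdas))
\end{verbatim}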
The fit in Fig.~\ref{fig2}(a) is good, but not perfect. The probable reason is the sensitivity of the cross-section difference to the details of the shape of the inhomogeneities. That is why we have carefully evaluated the second-order SANS cross section of this sample from Ref.~\onlinecite{honecker2013} and fitted $S_M(q)$ by the Fourier image of a Lorentzian-squared defect profile. While such a fit describes the $q < 0.1 \, \mathrm{nm}^{-1}$ region of $S_M(q)$ well, it exhibits discrepancies at larger $q$. These discrepancies are irrelevant to the second-order SANS theory and impact the cross section at large $q$-values only, where it is very small. However, because of the convolution in the third-order difference function Eq.~(\ref{eq:gms}), they influence $\Delta\Sigma$ at smaller $q$ as well. This means that the interpretation of the third-order SANS cross-section differences is much more demanding on the precision of the defect model and can be a valuable tool for gaining additional insights into the shape of the inhomogeneities in the material.
While the cross-section values themselves are strictly positive, $\Delta\Sigma$ may assume negative values for some $q$ (see Fig.~5 in Ref.~\onlinecite{metmi2015}). The cross-section differences in Fig.~\ref{fig2}(a) also exhibit some visibly negative points on the lower curves.
We remind the reader that the microstructure of NANOPERM is very different from the one used in the simulations. The simulated system has nanopores instead of the nanocrystallites present in the NANOPERM sample. That is why a direct comparison between Fig.~\ref{fig3} and Fig.~\ref{fig1} is not very useful. It is generally a very difficult problem to simulate a realistic random nanostructured system on a scale necessary for computing the SANS cross section. While both the numerical simulations and the analytical theory we use are approximate, their approximations are different. The simulation is not limited by the series expansion, but it is limited by the relatively small statistics of the material nanostructure represented in the simulation volume. On the other hand, the analytical theory includes the full statistical averaging over an infinite volume, but is limited to the third-order terms in the amplitude of the material inhomogeneities. Yet, despite these shortcomings and differences, both the simulation and the theory reveal the presence of the third-order effect.
An applied field of $\mu_0 H_0 = 1270 \, \mathrm{mT}$ seems to be rather large, and polarizes the material close to saturation, as can be seen from the hysteresis loop in Fig.~\ref{fig1}(d). Yet, the corresponding third-order spin-misalignment SANS cross-section difference, shown as the bottom line in Fig.~\ref{fig2}(a), is far from zero (whereas it would vanish in the limit of an infinite external field). This is another illustration of the extreme sensitivity of SANS to the inhomogeneities of the sample's magnetization, by far exceeding that of traditional magnetometry.
One of the completely new possibilities opened up by the observation of the third-order effect is the ability to extract the third statistical moment of the defect-magnitude distribution. This follows since $g_{\mathrm{MS}} \sim \langle a_i^3 \rangle$ [compare Eqs.~(\ref{defectsModel}) and (\ref{eq:gms})]. The third moment measures the skewness in the statistical distribution of the nanocrystallite sizes (if they are made of the same material). The skewness is zero if the distribution is symmetric around its center (like, e.g., for a Gaussian). It will be interesting in future studies to explore the evolution of the skewness, which is expected to develop during the various stages of nanocrystallization.
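For definiteness, the quantities in question are computed from a sample of amplitudes $a_i$ as follows (a trivial Python sketch, ours):
\begin{verbatim}
import numpy as np

def amplitude_moments(a):
    # raw third moment <a_i^3>, which sets the scale of g_MS, and
    # the dimensionless skewness of the amplitude distribution
    a = np.asarray(a, dtype=float)
    raw3 = np.mean(a**3)
    skew = np.mean((a - a.mean())**3) / a.std()**3
    return raw3, skew
\end{verbatim}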
\section{Conclusions}
We have demonstrated both numerically and experimentally the existence of the theoretically predicted third-order effect in the magnetic SANS cross section, which cannot, in principle, be accounted for by the second-order SANS theory. The model of Ref.~\onlinecite{metmi2015} has been extended, and the resulting expressions describe both the third-order effect and its field dependence well. Because of the inherent convolution in $\mathbf{q}$-space, the third-order effect is much more sensitive to the details of the defect profile as compared to the second-order SANS theory. We have provided here the general expression for its field and scattering-vector dependence, suitable for an arbitrary spatial-inhomogeneity profile, and used a Lorentzian-squared profile in our analysis. Analyzing the data with the help of the third-order SANS theory does not require new SANS measurements. It can make SANS an even more valuable and powerful tool for the microstructure analysis of magnetic materials.
\begin{acknowledgments}
We would like to thank Sergey Erokhin and Dmitry Berkov (General Numerics Research Lab, Jena, Germany) for providing the micromagnetic simulation results. KLM acknowledges the support of the Russian Science Foundation under the project RSF~16-11-10349.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
High contrast imaging surveys of stars constitute one of the foremost methods to find and study brown dwarfs
and extrasolar planets. Their results complement our knowledge on these populations drawn from the objects
detected by transit, radial velocity and other techniques. Modern imaging searches extend the accessible range
of parameter space of cool companions to wider separations and longer orbital periods ($a$\,$>$\,5\,au,
$P$\,$>$\,10\,yr) than those presently explored by radial velocities (RVs) or transits methods
\cite[e.g.,][]{2007ApJ...670.1367L, 2013ApJ...777..160B, 2015A&A...573A.127C, 2015MNRAS.453.2533M,
2016AA...594A..63G, 2017AA...603A...3V, 2018AJ....156..286S, 2019AJ....158..187B, 2021AA...651A..72V,
2020AA...635A.162L}. Moreover, directly detected brown dwarfs and planets provide a unique opportunity for
spectroscopic characterisation and thereby for detailed study of their fundamental physical properties, in
particular their atmospheres \cite[e.g.,][]{2015ApJ...804...96G, 2016ApJ...817..166S, 2016A&A...587A..58B,
2018AA...618A..63B, 2021AA...645A..17C}.
A large number of imaging programs have thus far focused on young stars (less than 1\,Gyr old) from the solar
neighbourhood ($d\lesssim$\,50\,pc), e.g., \citet{2013ApJ...777..160B, 2016AA...596A..83L, 2019AJ....158...13N,
2021AA...651A..72V}. Young nearby stars are ideal targets for direct imaging surveys because their potential
substellar companions, at the early stages of evolution, are still warm and relatively bright, favouring their
detection. As a consequence of both the limited sensitivity to the coldest companions at field ages and the
enhanced chances of detection at young ages, the most successful searches in terms of confirmed detections
are so far those carried out on known young stars.
The first large imaging program sensitive to substellar companions that targeted the nearest stars was the
one led by \cite{2001AJ....121.2189O}. They observed a sample of 163 northern stars in 111 star systems,
located within 8\,pc of the Sun, using the Adaptive Optics Coronagraph on the Palomar 1.5\,m telescope for
the optical imaging and the Cassegrain Infrared Camera on the Palomar 5\,m Hale Telescope for the near-IR.
For about 80\% of the surveyed stars, companions more massive than 40\,$M_{\rm Jup}$ at an age of 5\,Gyr
would have been detected at separations between 40 and 120\,au. Among the most sensitive imaging surveys of
the nearest stars are those by \cite{2012AJ....144...64D} and \cite{2011ApJ...743..141C}. The first group
employed NICMOS on the Hubble Space Telescope to obtain high-resolution images of 255 stars within
$\sim$10\,pc, while the second used IRAC on the Spitzer Space Telescope and observed 117 targets at
distances from 1.3 to 43.8\,pc.
Despite several remarkable discoveries like the methane brown dwarf Gliese 229B \citep{1995Natur.378..463N} or
the planetary system around HR 8799 \citep{2008Sci...322.1348M}, gas giant planets and brown dwarf companions
at orbital separations beyond a few astronomical units were found to be rare both by the surveys of young stars
and the nearest stars. In contrast to thousands of discoveries from RV and transit methods, only a few tens of
companions in or around the planetary mass regime were found by direct imaging surveys sensitive enough to
detect them. \cite{2012AJ....144...64D}, from analysis of their sub-sample of 138 M dwarfs, calculated a
multiplicity fraction of $2.3^{+5.0}_{-0.7}$\% for L and T-type companions to M dwarfs at orbital separations
of 10--70\,au. The IRAC/Spitzer search performed by \cite{2011ApJ...743..141C} had the ability to detect
600--1100\,K brown dwarf companions at semimajor axes $\gtrsim$\,35\,au and 500--600\,K companions beyond
60\,au. Using Monte Carlo simulations they estimated a 600--1100\,K T dwarf companion fraction of $<$\,3.4\%
at 35--1200\,au and $<$\,12.4\% for 500--600\,K companions at 60--1000\,au. Due to limitations in the
spatial resolution, contrast, and sensitivity achieved by the available instruments, orbital separations of
less than 10--15\,au remained largely unexplored for the presence of massive planets and brown dwarfs by
past imaging surveys.
\begin{deluxetable*}{llrrcccrr}
\tablenum{1}
\tablecaption{CanariCam target sample\label{CCSample}}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{Star} &
\colhead{Other Name} &
\colhead{RA (J2000)} &
\colhead{Dec (J2000)} &
\colhead{Spectral} &
\colhead{$d$} &
\colhead{$\pi$} &
\colhead{$\mu_{\alpha}\cos{\delta}$} &
\colhead{$\mu_{\delta}$} \\
\colhead{} &
\colhead{} &
\colhead{(hh:mm:sss)} &
\colhead{(dd:mm:ss)} &
\colhead{Type} &
\colhead{(pc)} &
\colhead{(mas)} &
\colhead{(mas/yr)} &
\colhead{(mas/yr)}
}
\startdata
GJ 699 & Barnard's Star & 17:57:48.499 & +04:41:36.11 & M3.5 & 1.827\,$\pm$\,0.0010 & 547.45\,$\pm$\,0.29 & -802.80\,$\pm$\,0.64 & 10362.54\,$\pm$\,0.36\\
GJ 406 & CN Leo & 10:56:28.826 & +07:00:52.34 & M6.0 & 2.409\,$\pm$\,0.0004 & 415.18\,$\pm$\,0.07 & -3866.34\,$\pm$\,0.08 & -2699.22\,$\pm$\,0.07\\
GJ 411 & Lalande 21185 & 11:03:20.194 & +35:58:11.57 & M2.0 & 2.546\,$\pm$\,0.0002 & 392.75\,$\pm$\,0.03 & -580.06\,$\pm$\,0.03 & -4776.59\,$\pm$\,0.03\\
GJ 244 & Sirius & 06:45:08.917 & -16:42:58.02 & A1.0 & 2.670\,$\pm$\,0.0017 & 374.49\,$\pm$\,0.23 & -461.57\,$\pm$\,0.28 & -914.52\,$\pm$\,0.33\\
GJ 65\,AB & BL Cet\,+\,UV Cet & 01:39:01.453 & -17:57:02.04 & M5.5, M6.0 & 2.720\,$\pm$\,0.0055 & 367.71\,$\pm$\,0.74 & 3385.32\,$\pm$\,0.67 & 544.39\,$\pm$\,0.38\\
GJ 729 & V1216 Sgr & 18:49:49.364 & -23:50:10.45 & M3.5 & 2.976\,$\pm$\,0.0003 & 336.03\,$\pm$\,0.03 & 639.37\,$\pm$\,0.04 & -193.96\,$\pm$\,0.03\\
GJ 905 & HH And & 23:41:55.036 & +44:10:38.82 & M5.0 & 3.160\,$\pm$\,0.0004 & 316.48\,$\pm$\,0.04 & 112.53\,$\pm$\,0.04 & -1591.65\,$\pm$\,0.03\\
GJ 144 & $\epsilon$ Eridani & 03:32:55.845 & -09:27:29.73 & K2.0 & 3.220\,$\pm$\,0.0014 & 310.58\,$\pm$\,0.14 & -974.76\,$\pm$\,0.16 & 20.88\,$\pm$\,0.12\\
GJ 447 & FI Vir & 11:47:44.397 & +00:48:16.40 & M4.0 & 3.375\,$\pm$\,0.0003 & 296.31\,$\pm$\,0.03 & 607.30\,$\pm$\,0.03 & -1223.03\,$\pm$\,0.02\\
GJ 866\,(AC)B & EZ Aqr & 22:38:36.081 & -15:17:23.89 & M5.0 & 3.406\,$\pm$\,0.0105 & 293.60\,$\pm$\,0.90 & 2314.8\,$\pm$\,8.0 & 2295.3\,$\pm$\,8.0\\
GJ 820\,A & 61 Cyg A & 21:06:53.940 & +38:44:57.90 & K5.0 & 3.497\,$\pm$\,0.0012 & 285.95\,$\pm$\,0.10 & 4164.17\,$\pm$\,0.19 & 3249.99\,$\pm$\,0.25\\
GJ 820\,B & 61 Cyg B & 21:06:55.264 & +38:44:31.36 & K7.0 & 3.495\,$\pm$\,0.0007 & 286.15\,$\pm$\,0.06 & 4105.79\,$\pm$\,0.09 & 3155.76\,$\pm$\,0.10\\
GJ 280 & Procyon & 07:39:18.119 & +05:13:29.95 & F5 & 3.507\,$\pm$\,0.0079 & 285.17\,$\pm$\,0.64 & -714.59\,$\pm$\,2.06 & -1036.80\,$\pm$\,1.15\\
GJ 725\,A & HD 173739 & 18:42:46.705 & +59:37:49.41 & M3.0 & 3.523\,$\pm$\,0.0003 & 283.84\,$\pm$\,0.02 & -1311.68\,$\pm$\,0.03 & 1792.33\,$\pm$\,0.03\\
GJ 725\,B & HD 173740 & 18:42:46.894 & +59:37:36.72 & M3.5 & 3.523\,$\pm$\,0.0004 & 283.84\,$\pm$\,0.03 & -1400.26\,$\pm$\,0.04 & 1862.53\,$\pm$\,0.03\\
GJ 15 A & GX And & 00:18:22.885 & +44:01:22.64 & M1.5 & 3.562\,$\pm$\,0.0003 & 280.71\,$\pm$\,0.02 & 2891.52\,$\pm$\,0.02 & 411.83\,$\pm$\,0.01\\
GJ 15 B & GQ And & 00:18:25.824 & +44:01:38.09 & M3.5 & 3.563\,$\pm$\,0.0004 & 280.69\,$\pm$\,0.03 & 2862.80\,$\pm$\,0.02 & 336.43\,$\pm$\,0.02\\
GJ 1111 & DX Cnc & 08:29:49.353 & +26:46:33.63 & M6.5 & 3.581\,$\pm$\,0.0008 & 279.25\,$\pm$\,0.06 & -1113.69\,$\pm$\,0.06 & -612.19\,$\pm$\,0.05\\
GJ 71 & $\tau$ Cet & 01:44:04.091 & -15:56:14.93 & G8.5 & 3.652\,$\pm$\,0.0023 & 273.81\,$\pm$\,0.17 & -1721.73\,$\pm$\,0.18 & 854.96\,$\pm$\,0.09\\
GJ 54.1 & YZ Cet & 01:12:30.637 & -16:59:56.36 & M4.5 & 3.717\,$\pm$\,0.0005 & 269.06\,$\pm$\,0.03 & 1205.07\,$\pm$\,0.04 & 637.55\,$\pm$\,0.05\\
GJ 273 & Luyten's Star & 07:27:25.093 & +05:12:35.63 & M3.5 & 3.786\,$\pm$\,0.0006 & 264.13\,$\pm$\,0.04 & 571.23\,$\pm$\,0.05 & -3691.49\,$\pm$\,0.04\\
SO 0253+16& Teegarden's Star & 02:53:00.891 & +16:52:52.64 & M7.0 & 3.832\,$\pm$\,0.0014 & 260.99\,$\pm$\,0.09 & 3429.08\,$\pm$\,0.09 & -3805.54\,$\pm$\,0.08\\
GJ 860\,AB & Kruger 60 AB & 22:27:59.557 & +57:41:42.08 & M3.0, M4.0 & 4.010\,$\pm$\,0.0026 & 249.39\,$\pm$\,0.16 & -725.23\,$\pm$\,0.54 & -223.46\,$\pm$\,0.35\\
GJ 83.1 & TZ Ari & 02:00:12.956 & +13:03:07.02 & M4.0 & 4.470\,$\pm$\,0.0014 & 223.73\,$\pm$\,0.07 & 1096.46\,$\pm$\,0.07 & -1771.53\,$\pm$\,0.06\\
GJ 687 & LHS 450 & 17:36:25.899 & +68:20:20.90 & M3.0 & 4.550\,$\pm$\,0.0004 & 219.79\,$\pm$\,0.02 & -320.68\,$\pm$\,0.02 & -1269.89\,$\pm$\,0.03\\
GJ 1245\,ABC & LHS 3494 & 19:53:54.482 & +44:24:51.34 &M5.5,\,M,\,M6 & 4.660\,$\pm$\,0.0010 & 214.57\,$\pm$\,0.05 & 349.36\,$\pm$\,0.06 & -480.32\,$\pm$\,0.05\\
GJ 876 & IL Aqr & 22:53:16.732 & -14:15:49.30 & M3.5 & 4.672\,$\pm$\,0.0008 & 214.04\,$\pm$\,0.04 & 957.72\,$\pm$\,0.04 & -673.60\,$\pm$\,0.03\\
GJ 1002 & LHS 2 & 00:06:43.197 & -07:32:17.02 & M5.0 & 4.846\,$\pm$\,0.0011 & 206.35\,$\pm$\,0.05 & -811.57\,$\pm$\,0.06 & -1893.25\,$\pm$\,0.03
\enddata
\end{deluxetable*}
We used the CanariCam instrument at the Gran Telescopio Canarias (GTC) to carry out deep, high spatial resolution
mid-infrared imaging in the 10 micron window, targeting the nearest known stars visible from a northern site
($\delta$\,$>$\,$-25^{\circ}$), to search for ultra-cool brown dwarfs and massive planets. We aimed to detect
companions at 1--10\,arcsec separations, which translate to 2.0--50\,au, or orbital periods typically longer than
10 years. Our search therefore extends to orbital separations and periods not yet explored by previous imaging
or RV surveys. Since stars in the solar vicinity are typically old, at ages over 1\,Gyr, any low-mass substellar
companion will have cooled down to $T_{\rm eff}$ well below 1000\,K. At such temperatures, the maximum flux
emission shifts from the near- to the mid-IR. Hence, CanariCam at GTC provided an opportunity to perform a
search competitive, in terms of sensitivity to the coolest companions, with direct imaging surveys carried
out in the optical or near-IR using adaptive optics and coronagraphic systems. In this work we present the
overall results of the program. The rest of the paper is structured as follows. Section~2 sets out the
observed sample of stars, Section~3 describes the observations and data processing steps, and Section~4
presents the analysis and results: relative astrometry of resolved binaries, contrast curves and
sensitivities, constraints on physical parameters of detectable companions, and upper-limit estimates on the
occurrence rate of companions. Section~5 contains a comparison of our results with other surveys and a
discussion of the stellar binaries and known planet hosts in the sample. Conclusions and final remarks are
presented in Section~6.
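The quoted range of projected separations follows directly from the small-angle relation $s[\mathrm{au}] = \theta[\mathrm{arcsec}] \times d[\mathrm{pc}]$; a one-line check in Python (ours):
\begin{verbatim}
def projected_sep_au(theta_arcsec, distance_pc):
    # s[au] = theta[arcsec] * d[pc]
    return theta_arcsec * distance_pc

# 1" at ~2 pc and 10" at ~5 pc bracket the survey range:
print(projected_sep_au(1.0, 2.0), projected_sep_au(10.0, 5.0))
# -> 2.0 au and 50.0 au
\end{verbatim}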
\section{The sample} \label{sec:sample}
The sample of CanariCam targets consists of the nearest known stars from the northern sky, visible from the
Roque de los Muchachos Observatory, that is, with declinations $\delta$\,$>$\,$-25^{\circ}$. We used the One
Hundred Nearest Star Systems list provided by the Research Consortium On Nearby Stars (RECONS;
\citealt{1997AJ....114..388H, 2006AJ....132.2360H, 2018AJ....155..265H}), complemented
with astrometric data from Gaia DR2 and EDR3 \citep{2018A&A...616A...1G, 2021AA...649A...1G}
where available, starting from the nearest star in the Northern Hemisphere, GJ~699 (Barnard's Star), and
moving to more distant ones.
\begin{deluxetable*}{lrrrrrrrr}
\tablenum{2}
\tablecaption{Near- and mid-infrared photometry (from 2MASS, WISE and Akari S9W) of stars in the sample\label{SamplePhot}}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{Star} &
\colhead{$J$} &
\colhead{$H$} &
\colhead{$K_{s}$} &
\colhead{$W$1} &
\colhead{$W$2} &
\colhead{$W$3} &
\colhead{$W$4} &
\colhead{S9W}
\\
\colhead{} &
\colhead{(mag)} &
\colhead{(mag)} &
\colhead{(mag)} &
\colhead{(mag)} &
\colhead{(mag)} &
\colhead{(mag)} &
\colhead{(mag)} &
\colhead{(Jy)}
}
\startdata
GJ 699 & 5.244\,$\pm$\,0.020 & 4.834\,$\pm$\,0.034 & 4.524\,$\pm$\,0.020 & 4.386\,$\pm$\,0.073 & 3.600\,$\pm$\,0.062 & 4.036\,$\pm$\,0.016 & 3.921\,$\pm$\,0.025 & ... \\
GJ 406 & 7.085\,$\pm$\,0.024 & 6.482\,$\pm$\,0.042 & 6.084\,$\pm$\,0.017 & 5.807\,$\pm$\,0.055 & 5.487\,$\pm$\,0.031 & 5.481\,$\pm$\,0.015 & 5.310\,$\pm$\,0.031 & 0.38\,$\pm$\,0.02 \\
GJ 411 & 4.203\,$\pm$\,0.242 & 3.640\,$\pm$\,0.202 & 3.254\,$\pm$\,0.306 & 3.239\,$\pm$\,0.136 & 2.360\,$\pm$\,0.071 & 3.045\,$\pm$\,0.010 & 2.934\,$\pm$\,0.024 & 3.40\,$\pm$\,0.02 \\
GJ 244 & -1.391\,$\pm$\,0.109 & -1.391\,$\pm$\,0.184 & -1.390\,$\pm$\,0.214 & 2.387\,$\pm$\,0.059 & 0.786\,$\pm$\,0.112 & 0.497\,$\pm$\,0.018 & -1.330\,$\pm$\,0.005 & 198.0\,$\pm$\,6.9 \\
GJ 65\,AB & 6.283\,$\pm$\,0.019 & 5.690\,$\pm$\,0.029 & 5.343\,$\pm$\,0.021 & 5.053\,$\pm$\,0.072 & 4.575\,$\pm$\,0.041 & 4.762\,$\pm$\,0.015 & 4.616\,$\pm$\,0.025 & 0.65\,$\pm$\,0.01 \\
GJ 729 & 6.222\,$\pm$\,0.018 & 5.655\,$\pm$\,0.034 & 5.370\,$\pm$\,0.016 & 5.164\,$\pm$\,0.062 & 4.754\,$\pm$\,0.033 & 4.911\,$\pm$\,0.014 & 4.715\,$\pm$\,0.026 & 0.60\,$\pm$\,0.01 \\
GJ 905 & 6.884\,$\pm$\,0.026 & 6.247\,$\pm$\,0.027 & 5.929\,$\pm$\,0.020 & 5.694\,$\pm$\,0.056 & 5.410\,$\pm$\,0.029 & 5.393\,$\pm$\,0.015 & 5.254\,$\pm$\,0.031 & 0.38\,$\pm$\,0.01 \\
GJ 144 & 2.228\,$\pm$\,0.298 & 1.880\,$\pm$\,0.276 & 1.776\,$\pm$\,0.286 & 2.970\,$\pm$\,0.215 & 2.285\,$\pm$\,0.055 & 1.770\,$\pm$\,0.006 & 1.288\,$\pm$\,0.005 & 12.86\,$\pm$\,0.05 \\
GJ 447 & 6.505\,$\pm$\,0.023 & 5.945\,$\pm$\,0.024 & 5.654\,$\pm$\,0.024 & 5.457\,$\pm$\,0.064 & 5.012\,$\pm$\,0.034 & 5.176\,$\pm$\,0.013 & 5.027\,$\pm$\,0.031 & 0.53\,$\pm$\,0.01 \\
GJ 866\,(AC)B & 6.553\,$\pm$\,0.019 & 5.954\,$\pm$\,0.031 & 5.537\,$\pm$\,0.020 & 5.314\,$\pm$\,0.062 & 4.889\,$\pm$\,0.035 & 5.006\,$\pm$\,0.015 & 4.877\,$\pm$\,0.030 & 0.56\,$\pm$\,0.02 \\
GJ 820\,A & 3.114\,$\pm$\,0.268 & 2.540\,$\pm$\,0.198 & 2.248\,$\pm$\,0.318 & 2.822\,$\pm$\,0.317 & 2.120\,$\pm$\,0.080 & 2.334\,$\pm$\,0.009 & 2.206\,$\pm$\,0.011 & 7.00\,$\pm$\,0.09 \\
GJ 820\,B & 3.546\,$\pm$\,0.278 & 2.895\,$\pm$\,0.218 & 2.544\,$\pm$\,0.328 & 6.224\,$\pm$\,0.010 & 2.884\,$\pm$\,0.001 & 2.595\,$\pm$\,0.009 & 2.529\,$\pm$\,0.013 & 5.95\,$\pm$\,0.12 \\
GJ 280 & -0.498\,$\pm$\,0.151 & -0.666\,$\pm$\,0.270 & -0.658\,$\pm$\,0.322 & 2.147\,$\pm$\,0.397 & 0.625\,$\pm$\,0.255 & 1.148\,$\pm$\,0.022 & -0.646\,$\pm$\,0.003 & 109.4\,$\pm$\,1.7 \\
GJ 725\,A & 5.189\,$\pm$\,0.017 & 4.741\,$\pm$\,0.036 & 4.432\,$\pm$\,0.020 & 4.498\,$\pm$\,0.226 & 3.520\,$\pm$\,0.157 & 4.070\,$\pm$\,0.014 & 3.937\,$\pm$\,0.018 & 2.09\,$\pm$\,0.03 \\
GJ 725\,B & 5.721\,$\pm$\,0.020 & 5.197\,$\pm$\,0.024 & 5.000\,$\pm$\,0.023 & 5.014\,$\pm$\,0.325 & 4.309\,$\pm$\,0.206 & 4.588\,$\pm$\,0.016 & 4.464\,$\pm$\,0.025 & 2.09\,$\pm$\,0.03 \\
GJ 15\,A & 5.252\,$\pm$\,0.264 & 4.476\,$\pm$\,0.200 & 4.018\,$\pm$\,0.020 & 3.853\,$\pm$\,0.099 & 3.130\,$\pm$\,0.074 & 3.707\,$\pm$\,0.015 & 3.595\,$\pm$\,0.022 & 1.84\,$\pm$\,0.02 \\
GJ 15\,B & 6.789\,$\pm$\,0.024 & 6.191\,$\pm$\,0.016 & 5.948\,$\pm$\,0.024 & 5.745\,$\pm$\,0.045 & 5.419\,$\pm$\,0.028 & 5.463\,$\pm$\,0.015 & 5.303\,$\pm$\,0.030 & 1.84\,$\pm$\,0.02 \\
GJ 1111 & 8.235\,$\pm$\,0.021 & 7.617\,$\pm$\,0.018 & 7.260\,$\pm$\,0.024 & 7.030\,$\pm$\,0.031 & 6.819\,$\pm$\,0.020 & 6.630\,$\pm$\,0.015 & 6.467\,$\pm$\,0.058 & 0.14\,$\pm$\,0.01 \\
GJ 71 & 2.149\,$\pm$\,0.310 & 1.800\,$\pm$\,0.234 & 1.794\,$\pm$\,0.274 & 2.444\,$\pm$\,0.510 & 1.846\,$\pm$\,0.163 & 2.071\,$\pm$\,0.011 & 1.671\,$\pm$\,0.010 & 12.37\,$\pm$\,0.07 \\
GJ 54.1 & 7.258\,$\pm$\,0.020 & 6.749\,$\pm$\,0.033 & 6.420\,$\pm$\,0.017 & 6.167\,$\pm$\,0.044 & 5.929\,$\pm$\,0.021 & 5.888\,$\pm$\,0.014 & 5.719\,$\pm$\,0.036 & 0.27\,$\pm$\,0.01 \\
GJ 273 & 5.714\,$\pm$\,0.032 & 5.219\,$\pm$\,0.063 & 4.857\,$\pm$\,0.023 & 4.723\,$\pm$\,0.074 & 4.108\,$\pm$\,0.041 & 4.461\,$\pm$\,0.016 & 4.325\,$\pm$\,0.027 & 0.93\,$\pm$\,0.01 \\
SO 0253+16 & 8.394\,$\pm$\,0.027 & 7.883\,$\pm$\,0.040 & 7.585\,$\pm$\,0.046 & 7.322\,$\pm$\,0.027 & 7.057\,$\pm$\,0.020 & 6.897\,$\pm$\,0.017 & 6.718\,$\pm$\,0.076 & 0.10\,$\pm$\,0.01 \\
GJ 860\,AB & 5.575\,$\pm$\,0.027 & 5.038\,$\pm$\,0.034 & 4.777\,$\pm$\,0.029 & 4.690\,$\pm$\,0.075 & 4.089\,$\pm$\,0.037 & 4.299\,$\pm$\,0.014 & 4.122\,$\pm$\,0.025 & 1.10\,$\pm$\,0.02 \\
GJ 83.1 & 7.514\,$\pm$\,0.017 & 6.970\,$\pm$\,0.027 & 6.648\,$\pm$\,0.017 & 6.438\,$\pm$\,0.042 & 6.162\,$\pm$\,0.021 & 6.100\,$\pm$\,0.014 & 5.964\,$\pm$\,0.043 & 0.22\,$\pm$\,0.02 \\
GJ 687 & 5.335\,$\pm$\,0.021 & 4.766\,$\pm$\,0.033 & 4.548\,$\pm$\,0.021 & 4.397\,$\pm$\,0.094 & 3.763\,$\pm$\,0.061 & 4.182\,$\pm$\,0.015 & 4.064\,$\pm$\,0.018 & 1.16\,$\pm$\,0.01 \\
GJ 1245\,AC & 7.791\,$\pm$\,0.023 & 7.194\,$\pm$\,0.016 & 6.854\,$\pm$\,0.016 & 6.600\,$\pm$\,0.065 & 6.379\,$\pm$\,0.025 & 6.244\,$\pm$\,0.016 & 6.076\,$\pm$\,0.051 & ... \\
GJ 1245\,B & 8.275\,$\pm$\,0.026 & 7.728\,$\pm$\,0.031 & 7.387\,$\pm$\,0.018 & 7.178\,$\pm$\,0.066 & 6.968\,$\pm$\,0.029 & 6.853\,$\pm$\,0.022 & 6.765\,$\pm$\,0.089 & ... \\
GJ 876 & 5.934\,$\pm$\,0.019 & 5.349\,$\pm$\,0.049 & 5.010\,$\pm$\,0.021 & 4.844\,$\pm$\,0.077 & 4.374\,$\pm$\,0.046 & 4.635\,$\pm$\,0.014 & 4.538\,$\pm$\,0.026 & 0.79\,$\pm$\,0.03 \\
GJ 1002 & 8.323\,$\pm$\,0.019 & 7.792\,$\pm$\,0.034 & 7.439\,$\pm$\,0.021 & 7.176\,$\pm$\,0.028 & 6.993\,$\pm$\,0.020 & 6.860\,$\pm$\,0.016 & 6.766\,$\pm$\,0.080 & 0.14\,$\pm$\,0.02
\enddata
\end{deluxetable*}
In total we have observed 33 individual stars within 5\,pc arranged in 25 systems, five of which are double:
GJ~820\,A+B, GJ~15\,A+B, GJ~65\,AB, GJ~725\,A+B and GJ~860\,AB, and two of which are triple: GJ~866\,ABC and
GJ~1245\,ABC. We count here GJ~866\,ABC as two stars, since its individual components AC were not
resolved and (AC)B were marginally resolved in our observations. Additionally, two stars, Sirius and
Procyon, have known white dwarf companions. The notation ``A+B'' signifies that the components were observed
individually as separate CanariCam targets, and ``AB'' that both components were observed simultaneously as a
single target. The sample includes one A-, one F-, and one G-type star, three K stars, and 27 M dwarfs. Such a
distribution of spectral types implies that our statistical results will be significant only for M dwarfs.
The sample is volume-limited and complete up to 4.0\,pc. We have imaged all of the 19 known stellar
systems at $\delta$\,$>$\,$-25^{\circ}$ within this distance, and 6 out of 15 known systems between 4 and 5\,pc
observable from our site. Due to observational limitations from target brightness, substellar objects were not
considered as target primaries. Hence, the Y2-type brown dwarf WISE 0855-0714 located at 2.23\,$\pm$\,0.04\,pc
\citep{2014ApJ...786L..18L, 2016AJ....152...78L} and the $\sim$500~K brown dwarf UGPS J072227.51-054031.2 at
4.12\,$\pm$\,0.04\,pc \citep{2012ApJ...748...74L} were not included in our sample. The remaining 9 objects
between 4 and 5\,pc were not observed because of the limited telescope time available for the programme.
Table \ref{CCSample} lists the observed stars, including compiled information on their equatorial coordinates
at J2000 epoch (proper motions taken into account), spectral types, trigonometric parallax, distance and proper
motions. Table~\ref{SamplePhot} lists their near- and mid-IR photometry. Because all known stars in the
solar vicinity have large and well-determined proper motions, our survey was designed to find common proper
motion companions. Any additional source detected within the field of view would have been considered a
potential companion, without any criteria based on photometric colours. Its companionship could then be easily
verified through second-epoch observations.
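For illustration (ours; the one-year baseline is an assumed example), the expected displacement of a target between epochs follows from the proper motions in Table~\ref{CCSample}:
\begin{verbatim}
import math

def pm_displacement_arcsec(pm_ra_masyr, pm_dec_masyr, baseline_yr):
    # total proper-motion displacement (arcsec) over a time baseline
    mu = math.hypot(pm_ra_masyr, pm_dec_masyr)  # mas/yr
    return mu * baseline_yr / 1000.0

# GJ 699: ~10.4" of motion per year, far larger than the CanariCam
# pixel scale of ~0.08" (see Sect. 3)
print(pm_displacement_arcsec(-802.80, 10362.54, 1.0))
\end{verbatim}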
We have searched through the literature to gather the available information on the planets discovered around
our sample stars by RVs, transits and other methods, as well as constraints on the substellar companions from
other surveys, or signs of RV or astrometric trends indicating the possible presence of a distant companion.
Notes with selected essential information regarding each star are compiled in Table~\ref{known_exoplanets}
in the Appendix.
\section{Observations and data processing} \label{sec:observations}
The program was carried out in queue-mode observations, starting in 2012 and ending in 2015. We used the
mid-infrared camera CanariCam \citep{2008SPIE.7014E..0RT} operating at the Nasmyth-A focal station of the
10.4\,m Gran Telescopio Canarias (GTC) at the Roque de los Muchachos Observatory on the island of La Palma
(Spain). CanariCam was designed to reach the diffraction limit of the GTC at mid-IR wavelengths (7.5--25\,$\mu$m).
The instrument uses a Raytheon 320$\times$240 Si:As detector with a pixel scale of 79.8\,$\pm$\,0.2\,mas, which
covers a field of view of 25.6\,$\times$\,19.2\,arcsec on the sky. We imaged our targets in the 10 micron
window, using a medium-band silicate filter centred at $\lambda$\,=\,8.7\,$\mu$m ($\delta\lambda$\,=\,1.1\,$\mu$m).
The choice of this particular bandpass was a compromise between the instrument performance, in particular filter
transmissivity, and the sky background contribution, which is significantly higher at the $N$ broad-band and
other narrow-band filters than at the Si-2 filter. Si-2 is also favoured by a better spatial resolution, since
the diffraction disc is larger at the other available narrow-band filters at longer wavelengths. Observations
were executed under the following restricted atmospheric conditions: spectroscopic (clear sky with possible
thin cirrus) or better, i.e., photometric/clear sky transparency, precipitable water vapor (PWV) at the level
of 5--12\,mm and image quality of $<$\,0\farcs3, corresponding to a seeing of $\sim$\,0\farcs8 in
the $R$ band.
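To put the attainable angular resolution in numbers, a short Python estimate (ours; based on the standard $1.22\,\lambda/D$ criterion):
\begin{verbatim}
import math

def diffraction_limit_arcsec(wavelength_um, aperture_m):
    # ~1.22 * lambda / D, converted from radians to arcsec
    theta_rad = 1.22 * wavelength_um * 1e-6 / aperture_m
    return math.degrees(theta_rad) * 3600.0

theta = diffraction_limit_arcsec(8.7, 10.4)  # Si-2 on the 10.4-m GTC
print(theta, theta / 0.0798)  # ~0.21 arcsec, i.e. ~2.6 pixels
\end{verbatim}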
Observations were performed with the standard chopping and nodding technique used in the mid-IR to remove the
sky emission and radiative offset. Chopping consists of switching the telescope secondary mirror at a typical
frequency of a few (2--5)\,Hz between the position of the source (on-source) and the nearby sky (off-source).
This rapid movement of the secondary mirror allows subtraction of the sky background emission that is varying
in time at frequencies below the chop frequency. Movement of the secondary mirror changes the optical
configuration of the telescope, resulting in two different emission patterns seen by the camera and producing
a spurious signal, termed the radiative offset, that is seen in the chop-differenced images. To remove the radiative
offset, the telescope is moved between two nod positions to swap over on- and off-source positions.
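The net effect of the chop/nod cycle can be summarized by a double difference; a schematic Python illustration (ours; real pipelines also register and co-add the individual beams):
\begin{verbatim}
def chop_nod_signal(nodA_on, nodA_off, nodB_on, nodB_off):
    # each chop difference removes the sky emission; subtracting the
    # two nod positions (with on/off beams swapped) cancels the
    # radiative offset
    return (nodA_on - nodA_off) - (nodB_on - nodB_off)
\end{verbatim}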
\startlongtable
\begin{deluxetable*}{lccccccccccc}
\tablenum{3}
\tablecaption{CanariCam observation log\label{obslog}}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{Star} &
\colhead{OB} &
\colhead{Observation} &
\colhead{MJD} &
\colhead{Saveset} &
\colhead{On-source} &
\colhead{Instr.} &
\colhead{Chop} &
\colhead{Nod} &
\colhead{Readout} &
\colhead{Sky} &
\colhead{PWV}
\\
\colhead{} &
\colhead{\#} &
\colhead{date} &
\colhead{} &
\colhead{(s)} &
\colhead{(s)} &
\colhead{PA} &
\colhead{PA} &
\colhead{PA} &
\colhead{type} &
\colhead{} &
\colhead{(mm)}
}
\startdata
\hline
\multicolumn{12}{l}{GTC4-12BGCAN, Semester: 2012B} \\
\hline
GJ 1111 & 01 & 2012-12-26 & 56287.267541 & 5.96 & 3$\times$404 & 0 & 90 & -90 & S1R1-CR & Ph & $\sim$7.1 \\
& 02 & 2012-12-28 & 56289.211395 & 5.96 & 3$\times$404 & 300 & 150 & -30 & S1R1-CR & Ph & $\sim$4.2 \\
GJ 71 & 03 & 2012-09-29 & 56199.097535 & 5.96 & 3$\times$404 & 1.83 & 90 & -90 & S1R1-CR & Ph & $<$9.1 \\
& 04a & 2012-09-29 & 56199.138148 & 5.96 & 1$\times$404 & 1.83 & 90 & -90 & S1R1-CR & Ph & $<$9.1 \\
& 04a1 & 2012-10-05 & 56205.036337 & 5.96 & 3$\times$404 & 300 & 150 & -30 & S1R1-CR & L.Cs. & 9.0 \\
& 04a2 & 2012-12-03 & 56264.907141 & 5.96 & 1$\times$404 & 0 & 150 & -30 & S1R1-CR & Cl & 6.3 \\
GJ 406 & 05 & 2012-12-04 & 56265.186042 & 1.49 & 3$\times$454 & 0 & 90 & -90 & S1R1-CR & Cl & 6.0 \\
& 06 & 2013-01-29 & 56321.090712 & 1.49 & 3$\times$454 & 300 & 150 & -30 & S1R1-CR & Ph & 5.5--5.7\\
& 06a1 & 2012-12-04 & 56265.283756 & 1.49 & 2$\times$454 & 0 & 150 & -30 & S1R1-CR & Cl & 6.7 \\
& 06a2 & 2012-12-28 & 56289.257245 & 5.96 & 3$\times$404 & 300 & 150 & -30 & S1R1-CR & Ph & $<$5.0\\
GJ 144 & 07 & 2012-09-29 & 56199.208941 & 5.96, 1.49 & 404, 545, 378 & 1.83 & 90& -90 & S1R1-CR & Ph & $<$9.1 \\
& 08 & 2012-10-05 & 56205.085058 & 1.49 & 4$\times$454 & 300 & 150 & -30 & S1R1-CR & L.Cs. & 9.0 \\
\hline
\multicolumn{12}{l}{GTC9-12AGCAN, Semesters: 2012AB, 2013AB} \\
\hline
GJ 820 A & 01 & 2013-09-05 & 56540.063970 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Ph & 7.7--8.9 \\
& 02 & 2013-09-05 & 56540.114497 & 1.55 & 3$\times$432 & 300 & 30 & -150 & S1R3 & Ph & 8.3--8.6 \\
GJ 699 & 05 & 2012-07-29 & 56137.973895 & 5.96 & 3$\times$404 & 0 & -90 & 90 & S1R1-CR & Ph & 8.6--9.3 \\
& 06 & 2012-07-30 & 56138.020220 & 5.96 & 3$\times$404 & 90 & -180 & 0 & S1R1-CR & Ph & 8.6--9.3 \\
& 19 & 2013-06-09 & 56452.160365 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Ph & 6.7 \\
& 20 & 2013-06-10 & 56453.185434 & 1.55 & 3$\times$360 & 300 & 0 & 180 & S1R3 & Ph & 6.7 \\
GJ 729 & 09 & 2013-09-05 & 56540.897378 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Ph & 8.7--9.2 \\
 & 10 & 2013-09-14 & 56549.853825 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Cl & 9.2--9.9 \\
GJ 905 & 11 & 2013-06-07 & 56450.190631 & 6.21 & 3$\times$417 & 0 & 90 & -90 & S1R3 & Ph & 6.3 \\
& 12 & 2013-06-08 & 56451.191418 & 6.21 & 3$\times$417 & 300 & -180 & 0 & S1R3 & Ph & 7.2 \\
GJ 15 A & 15 & 2012-12-27 & 56288.852112 & 5.96 & 3$\times$404 & 0 & 90 & -90 & S1R1-CR & Ph & $\sim$4.6 \\
& 16 & 2012-12-27 & 56288.916597 & 5.96 & 1211 & 90 & -180 & 0 & S1R1-CR & Ph & 4.6 \\
GJ 15 B & 17 & 2012-12-28 & 56289.825174 & 5.96 & 3$\times$404 & 0 & 90 & -90 & S1R1-CR & Ph & $<$5.5 \\
& 18 & 2012-12-28 & 56289.867014 & 5.96 & 3$\times$404 & 0 & -180 & 0 & S1R1-CR & Ph & $<$6.0 \\
GJ 54.1 & 21 & 2013-08-30 & 56534.128715 & 6.21 & 3$\times$417 & 0 & 90 & -90 & S1R3 & Ph & 4.7--5.0 \\
& 22 & 2013-08-30 & 56534.173160 & 6.21 & 4$\times$417 & 0 & 90 & -90 & S1R3 & Ph & 5.0--5.2 \\
GJ 65 AB & 26 & 2013-09-15 & 56550.091308 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Cl & $<$12 \\
& 27 & 2013-09-15 & 56550.160683 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Cl & 9.9--12 \\
GJ 866 (AC)B & 29 & 2013-09-08 & 56543.051701 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Cl & 9.0--10.3 \\
GJ 280 & 30 & 2014-01-04 & 56661.065336 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Cl & 7.9--9.1 \\
& 31 & 2014-01-04 & 56661.143403 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Cl & 8.3--9.4 \\
GJ 725 A & 32 & 2013-09-05 & 56540.951082 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Ph & 8.7--9.2 \\
& 33 & 2013-09-05 & 56540.996568 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Ph & 8.4--9.1 \\
GJ 725 B & 34 & 2013-09-08 & 56543.926146 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Cl & 8.1--9.1 \\
& 35 & 2013-09-08 & 56543.976256 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Cl & 8.7--9.0 \\
\hline
\multicolumn{12}{l}{GTC8-14AGCAN, Semesters: 2014AB, 2015AB} \\
\hline
GJ 411 & 01 & 2015-02-01 & 57054.289022 & 1.55 & 2$\times$432, 360 & 0 & 90 & -90 & S1R3 & Sp & $\sim$8 \\
& 02 & 2015-02-03 & 57056.005301 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Cl & 6.0 \\
& 39 & 2015-06-03 & 57176.938229 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Cl & 10.8--12.2 \\
GJ 244 & 03 & 2015-02-06 & 57059.926707 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Sp & 13.3 \\
& 04 & 2015-02-01 & 57054.923368 & 1.55 & 2$\times$432, 360 & 330 & 60 & -120 & S1R3 & L.Cs. & 7.0 \\
GJ 447 & 05 & 2014-05-10 & 56787.896094 & 6.21 & 3$\times$417 & 0 & 90 & -90 & S1R3 & Cl & 6.1--8.5 \\
& 06a & 2016-01-05 & 57392.188420 & 6.21 & 3$\times$417 & 30 & 60 & -120 & S1R3 & Cl & 5.2--6.6 \\
GJ 15 A & 09 & 2014-09-08 & 56908.079543 & 1.55 & 3$\times$432 & 0 & -90 & 90 & S1R3 & Ph & 7.5 \\
& 10 & 2014-09-08 & 56908.129653 & 1.55 & 3$\times$432 & 330 & -120 & 60 & S1R3 & Ph & 7.5 \\
GJ 15 B & 11 & 2014-09-23 & 56923.122396 & 1.55 & 4$\times$432 & 0 & 90 & -90 & S1R3 & Cl & 7.0 \\
& 11a & 2014-09-08 & 56908.178374 & 1.55 & 2$\times$432 & 0 & -90 & 90 & S1R3 & Ph & 7.5 \\
& 12 & 2014-09-23 & 56923.191435 & 1.55 & 360, 2$\times$432 & 330 & 60 & -120 & S1R3 & Cl & 7.0 \\
GJ 65 AB & 13 & 2014-12-02 & 56993.926748 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Ph & 4.7 \\
& 14 & 2014-12-03 & 56994.886817 & 1.55 & 2$\times$432, 360 & 330 & 60 & -120 & S1R3 & Ph & 6.2--7.0 \\
GJ 820 B & 15 & 2014-06-11 & 56819.217245 & 1.55 & 2$\times$432, 360 & 0 & 90 & -90 & S1R3 & Ph & 10.8--11.1 \\
& 16 & 2014-06-12 & 56820.185712 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Ph & 12.8--13.6 \\
GJ 866 (AC)B & 17 & 2014-09-22 & 56922.996505 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Cl & $\sim$6.0 \\
& 18 & 2014-09-23 & 56923.051458 & 1.55 & 3$\times$432, 360 & 330 & 60 & -120 & S1R3 & Cl & 7.0 \\
GJ 144 & 19 & 2014-10-04 & 56934.128079 & 1.55 & 2$\times$432, 360 & 0 & 90 & -90 & S1R3 & Ph & 9.7--10.6 \\
GJ 729 & 20 & 2014-07-10 & 56848.043125 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Ph & 4.5--5.8 \\
GJ 273 & 21 & 2014-03-13 & 56729.891626 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Sp & $<$10 \\
& 22 & 2014-03-13 & 56729.997425 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Sp & $<$10 \\
GJ 860 AB& 23 & 2014-09-02 & 56902.981678 & 6.21 & 417, 348, 209 & 0 & 90 & -90 & S1R3 & Cl & 12--14 \\
& 24 & 2014-09-03 & 56903.016580 & 6.21 & 278, 2$\times$417, 487 & 0 & 60 & -120 & S1R3 & Cl & 12--14 \\
SO0253+16& 30 & 2015-02-02 & 57055.942853 & 1.55 & 2$\times$432, 360 & 0 & 60 & -120 & S1R3 & T.Cs. & 6.0 \\
& 29\_2 & 2015-08-28 & 57262.166314 & 6.21 & 209, 2$\times$487, 417 & 0 & 90 & -90 & S1R3 & Cl & 6.5--7.4 \\
& 30\_2 & 2015-09-02 & 57267.227269 & 6.21 & 2$\times$417, 348 & 0 & 60 & -120 & S1R3 & Cl & n.a. \\
GJ 83.1 & 47 & 2015-08-25 & 57259.131956 & 6.21 & 3$\times$417 & 0 & 90 & -90 & S1R3 & Cl & 9.5 \\
& 48a & 2015-08-25 & 57259.180666 & 6.21 & 3$\times$417 & 330 & 60 & -120 & S1R3 & Cl & 9.3--9.9 \\
& 48 & 2015-08-27 & 57261.200226 & 6.21 & 3$\times$417 & 330 & 60 & -120 & S1R3 & Cl & 5.5 \\
GJ 687 & 49 & 2015-08-22 & 57256.935787 & 1.55 & 3$\times$432 & 0 & 90 & -90 & S1R3 & Ph & 8.4 \\
& 50a & 2015-08-22 & 57256.986562 & 1.55 & 3$\times$432 & 330 & 60 & -120 & S1R3 & Ph & 7.8--8.7 \\
& 50 & 2015-08-24 & 57258.972818 & 1.55 & 2$\times$432, 360 & 330 & 60 & -120 & S1R3 & Cl & 7.6--9.7 \\
GJ 1245 & 51 & 2015-08-19 & 57253.955602 & 6.21 & 3$\times$417 & 0 & 90 & -90 & S1R3 & Cl & 10.1--10.8 \\
& 52 & 2015-08-19 & 57254.003559 & 6.21 & 3$\times$417 & 330 & 60 & -120 & S1R3 & Cl & 10.8--12.2 \\
GJ 876 & 53 & 2015-08-25 & 57259.022564 & 1.55 & 2$\times$432, 360 & 0 & 90 & -90 & S1R3 & Cl & 8.0--9.0 \\
& 54 & 2015-08-25 & 57259.067002 & 1.55 & 2$\times$432, 360 & 330 & 60 & -120 & S1R3 & Cl & 8.2--9.5 \\
GJ 1002 & 55 & 2015-08-07 & 57241.139595 & 6.21 & 4$\times$417 & 0 & 90 & -90 & S1R3 & Cl & 7.8 \\
& 56 & 2015-09-16 & 57281.062135 & 6.21 & 3$\times$417 & 330 & 60 & -120 & S1R3 & Ph & 7.9--8.5
\enddata
\tablecomments{Sky conditions: Ph -- photometric, Cl -- clear, Sp -- spectroscopic, L.Cs. -- light cirrus, T.Cs. -- thick cirrus}
\end{deluxetable*}
We used an ABBA nodding sequence and ``on-chip'' chopping and nodding, with a chop-throw and nod offset of
8\,arcsec, a chopping frequency of 1.93 or 2.01\,Hz and a nod settle time of about 45\,s. The on-chip method is
recommended whenever the scientific target is point-like, since both the on-source and off-source chop positions
contain the signal of the target inside the detector field of view and can be aligned and combined. Individual
frames of 26 and 19\,ms exposures were co-added by the CanariCam control software into savesets of 1.6 and 6\,s
depending on the brightness of the source. We used an on-source integration time of 40\,min in total, divided
into two observing blocks (OBs) of 20\,min. Each block contained three data cube files composed of a set of
individual images (savesets) at subsequent chopping and nodding positions. For the two observing blocks we set
the instrument at two different position angles to rotate the field of view typically by 30\,deg (60 and 90\,deg
rotations were also used in some cases), and adjusted the configuration of chop and nod position angles so as
to maintain the chop/nod direction parallel to the longer axis of the detector. The use of two different
instrument position angles was a way to initially check the reliability of potential faint sources, to
distinguish them from bad pixels, and to explore both the region along the horizontal axis of the detector,
where the cross-talk of the star across the 16 readout channels is more evident, and the areas otherwise
obscured by the negative off-source chops.
A detailed observation log is presented in Table~\ref{obslog}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{GJ15A.pdf}
\caption{The final 8.7\,$\mu$m image mosaic of one of the target stars, GJ\,15A, encompassing
the full area covered by the CanariCam data. Counts are in linear scale and in the range
$\pm$7$\sigma$ relative to the zero background. The deepest central region, where the stacked
image areas overlap, is a rectangle of $\sim25\arcsec\times19\arcsec$. A zoomed-in inset of
$1.5\arcsec\times1.5\arcsec$ shows the core of the PSF with the first Airy disk visible.
North is up and east is to the left.
\label{CCimgs}}
\end{figure}
CanariCam images are stored in the standard multi-extension FITS files, with a structure of [320, 240, 2, M][N],
where 320 and 240 are the image pixel dimensions, 2 is the number of chop positions, M of savesets and N of
nod positions. The data were processed using a set of dedicated {\tt\string IRAF/PyRAF}\footnote{{\tt PyRAF}
is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.} scripts developed
within our group.
\begin{figure*}
\centering
\includegraphics[height=2.8cm,keepaspectratio]{GJ65AB_20130915.pdf}
\includegraphics[clip=true,trim=30pt 0pt 0pt 0pt,height=2.8cm,keepaspectratio]{GJ65AB_20141202.pdf}
\includegraphics[clip=true,trim=30pt 0pt 0pt 0pt,height=2.8cm,keepaspectratio]{GJ860AB.pdf}
\includegraphics[clip=true,trim=30pt 0pt 0pt 0pt,height=2.8cm,keepaspectratio]{GJ1245ABC.pdf}
\includegraphics[clip=true,trim=30pt 0pt 0pt 0pt,height=2.8cm,keepaspectratio]{zzob29_GJ866.pdf}
\caption{Images of the resolved binary and multiple stars: GJ~65AB (at two different epochs),
GJ~860AB, GJ~1245ABC and GJ~866(AC)B at the first epoch, 2013 Sep. 8. A 1\,arcsec scale
is indicated. North is up and East is to the left.
\label{CCimgs_binaries}}
\end{figure*}
As a first step, the off-source savesets, where the star is not located at the centre of the detector, were
subtracted from the corresponding on-source savesets, for the respective nod beam position. These chop-/sky-subtracted
frames, where the star is located at the centre of the detector, were then aligned to correct for relatively small
misalignments (typically of less than five pixels) with respect to the preliminary shifts computed from the chop and
nod pointing offsets. Then each pair of frames corresponding to the A and B nod positions was combined, to subtract
the radiative offset. The sky-subtracted frames were multiplied by $-$1 to recover the negative contributions of
the star (off-source position of the secondary mirror).
Because the negatives in the A and B nod positions do not overlap, lying at opposite sides and at 8\,arcsec
from the on-source central location, they were also combined to subtract the radiative offset before being
aligned. Residual detector levels, constant along individual columns or rows but varying between them, remained
in both the positive and negative chop- and nod-subtracted frames; these residuals were fitted (with the target
masked) and subtracted.
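To make the arithmetic of this reduction explicit, the following minimal Python sketch reproduces the chop
and nod differencing described above. It is a simplified illustration only: the file name is a hypothetical
placeholder, and it assumes one FITS extension per nod position with the [320, 240, 2, M] saveset cube
structure described earlier (numpy reverses the axis order on reading).
\begin{verbatim}
import numpy as np
from astropy.io import fits

# hypothetical file name; one extension per nod position (N)
hdul = fits.open("target_ob.fits")
nod_beams = []
for ext in hdul[1:]:
    cube = ext.data                   # numpy shape (M, 2, 240, 320)
    on, off = cube[:, 0], cube[:, 1]  # on- and off-source chop positions
    # chop subtraction removes the sky; average over the M savesets
    nod_beams.append((on - off).mean(axis=0))

# differencing the A and B nod beams cancels the radiative offset;
# the source enters the two chop-subtracted beams with opposite signs
a_beam, b_beam = nod_beams[0], nod_beams[1]
clean = 0.5 * (a_beam - b_beam)
\end{verbatim}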
The alignment itself was applied at once to all (positive and negative) images of consecutive repetitions of
an OB, relative to the same reference image, so that the centroids of the target in all images, including the
reference image, were shifted to the same integer pixel position. Therefore, any subsequent alignment and
combination, even with other epochs, was simplified to integer pixel shifts, obviating the need for reinterpolation.
Before aligning, the images were copied into larger ones to avoid the trimming of outer data regions. Then the
frames were average-combined per repetition using a shallow sigma upper and lower clipping to discard occasional
short transients and sharp outliers.
Each combination involved masking the negative counts of the target. Horizontal patterns of cross-talk
features, apparent for the brightest targets, were removed. Repetitions from OBs acquired with position
angles differing from the North-up East-left orientation were resampled to common orientation using the
{\tt\string mscimage} task. For the combination of the stacks of the different OBs, the repetitions were
flux-scaled according to their zero-point magnitude -- as measured on the target -- and weighted in inverse
proportion to the scaled variance of their background noise and the square of the full width at half maximum
(FWHM) of the target.
A final processed image mosaic of one of the target stars -- GJ~15A, with a total on-source time of 86\,min is
displayed in Figure~\ref{CCimgs}. The counts are represented in linear scale and within $\pm7\sigma$ of the
background level, where $\sigma$ is the standard deviation of the background noise. The central region with the
highest sensitivity, where the sky area of stacked frames overlap, covers a rectangle of approximately
$25\arcsec\times19\arcsec$ ($\sim$89$\times$67\,au at the distance of GJ~15A). A collection of the processed
CanariCam images of all the observed stars is presented in Figs.~\ref{CCallimages} and \ref{CCallimages_fullFOV}
in the Appendix. Fig.~\ref{CCallimages} contains a set of common size ($25\farcs6\times19\farcs2$) central
parts of the images, with the highest sensitivity and smooth background levels, whereas
Fig.~\ref{CCallimages_fullFOV} includes full FOV mosaics. All the reduced image stacks are available at:
\url{https://cloud.iac.es/index.php/s/kT3cCdP9Wxw92gZ}.
\section{Analysis and results}
\subsection{Relative astrometry of resolved binaries}
As part of our CanariCam observations, we resolved the components of the binary stars GJ~65AB and GJ~860AB,
and the triple systems GJ~1245ABC and GJ~866(AC)B. Cut-out images of these systems are displayed in
Figure~\ref{CCimgs_binaries}. We measured the relative angular separations and position angles
($\rho$, $\theta$) of the resolved components. Observations of GJ~65AB and GJ~866(AC)B were repeated
at two separate epochs, about 1.2 and 1.0\,yr apart, respectively. In the case of GJ~866, the A and C pair is
a spectroscopic binary \citep{1999AA...350L..39D, 2000AA...353..253W} at $\rho\sim0.01\arcsec$, and thus
only the B component was resolved in the first epoch. By the second epoch, orbital motion had brought the
components closer, to a separation below the angular resolution of 0.298\arcsec\ achieved in the images.
Sirius B was marginally detected at $\rho\sim$10.3\arcsec, $\theta\sim$\,77.7\,deg, but its low signal-to-noise
(S/N$\sim$4) precludes more accurate measurements.
\begin{deluxetable}{lccc}
\tablenum{4}
\tablecaption{Relative astrometry of resolved binary/multiple stars\label{astrometry}}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{Star} &
\colhead{Epoch (MJD)} &
\colhead{$\rho$ ($\arcsec$)} &
\colhead{$\theta$ ($^{\circ}$)} }
\startdata
GJ 65AB & 56550.125995 & 2.240\,$\pm$\,0.012 & 22.94\,$\pm$\,0.39\\
& 56994.406782 & 2.267\,$\pm$\,0.010 & 16.27\,$\pm$\,0.35\\
GJ 860AB & 56902.999129 & 1.4338\,$\pm$\,0.0036 & 307.61\,$\pm$\,0.33\\
GJ 866(AC)B & 56543.051701 & 0.33\,$\pm$\,0.02 & 331.65\,$\pm$\,1.90\\
& 56922.996505 & $<$0.298 & ... \\
GJ 1245AB & 57253.979581 & 6.0063\,$\pm$\,0.0080 & 69.63\,$\pm$\,0.16\\
GJ 1245AC & 57253.979581 & 0.582\,$\pm$\,0.007 & 271.26\,$\pm$\,0.73\\
\enddata
\end{deluxetable}
We obtained centroids of the sources using the {\tt\string IRAF imcentroid} task, and transformed the $X$ and
$Y$ pixel coordinates with their respective errors into angular separations and North-to-East position angles
using the CanariCam pixel scale of 79.8\,$\pm$\,0.2\,mas ({\url{http://www.gtc.iac.es/instruments/canaricam/canaricam.php}})
and the orientation given by the instrument position angle in the image header. We checked that the precision
of this orientation is better than 0.3\,deg by inspecting the alignment of the chopping-nodding throw with
the detector $X$ axis. We did not perform any calibration observations. Therefore, in the determination of
the relative astrometry we relied on the default internal calibration of the instrument position angle and
pixel scale. Measured values are listed in Table~\ref{astrometry}.
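The conversion from centroid offsets in pixels to ($\rho$, $\theta$) reduces to a scale factor and an
arctangent. The short Python sketch below is our own illustration, assuming offsets already expressed
positive towards East and North after applying the header position angle:
\begin{verbatim}
import numpy as np

PIX_SCALE = 0.0798   # arcsec per pixel (79.8 +/- 0.2 mas)

def rho_theta(d_east_pix, d_north_pix):
    """Separation (arcsec) and North-to-East position angle (deg)."""
    rho = np.hypot(d_east_pix, d_north_pix) * PIX_SCALE
    theta = np.degrees(np.arctan2(d_east_pix, d_north_pix)) % 360.0
    return rho, theta

# offsets of ~(+11.0, +25.9) pix give rho ~ 2.25 arcsec and
# theta ~ 23 deg, comparable to the GJ 65 AB entry in Table 4
\end{verbatim}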
\subsection{Contrast curves and achieved sensitivities} \label{subsec:contr}
\begin{figure*}
\centering
\includegraphics[scale=0.55,keepaspectratio=true,clip=true,trim=0pt 0pt 0pt 0pt]{contrast_curves_all.pdf}
\includegraphics[scale=0.55,keepaspectratio=true,clip=true,trim=39pt 0pt 0pt 0pt]{contrast_curves_per_mags.pdf}
\caption{{\it Left:} Si-2 (8.7\,$\mu$m) contrast curves at the 3$\sigma$ level
for all individual targets. The curves are ordered and color coded by distance
of the star (as in Table~\ref{detlims}). {\it Right:} the mean contrast curves
per given interval of brightness of the stars.
\label{contrasts}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.55,keepaspectratio=true,clip=true,trim=0pt 0pt 0pt 0pt]{detectability.pdf}
\includegraphics[scale=0.55,keepaspectratio=true,clip=true,trim=39pt 0pt 0pt 0pt]{detectability_mean.pdf}
\caption{CanariCam Si-2 band (8.7\,$\mu$m) detection limits. For each individual
star on the left panel, and on the right panel, average curves for stars brighter
and fainter than Si-2\,=\,3.0\,mag, and the best (GJ 406) and the worst (GJ 447) cases.
\label{detlimits}}
\end{figure*}
Since our observations have a bright star at the centre, by aligning the individual chop-subtracted frames we
could improve the resulting FWHM compared with the straight stacking of images performed by default by the
observatory's automatic data reduction pipeline, and to some extent compensate for the lack of a fast-guiding
mode of the telescope. The mean FWHM of the PSFs of all images is 0.28\arcsec (3.5\,pix), with the best and
worst values being 0.23\arcsec (2.8\,pix) and 0.55\arcsec (6.9\,pix), respectively. The quality of our data
is close to the theoretical FWHM of the diffraction-limited PSF, which for GTC is 0.23\arcsec at 8.7\,$\mu$m.
We measured the detection limits on the deepest region of the final images of each target in the survey
by using the ratio of the peak counts of the star to 3 times the background noise ($\sigma$). To determine the
3$\sigma$ limiting magnitudes of the images we estimated the magnitudes of the sample stars in the Si-2 filter,
using the {\it JHKs} photometry from the 2MASS and WISE W1, W2, W3, W4 photometry from the All-Sky and AllWISE
Source Catalogs \citep{2010AJ....140.1868W}. In cases when the mid-IR photometry from WISE was highly
affected by saturation, we used other measurements available in the literature from Spitzer/IRAC and Akari
S09W and L18W bands \citep{2007PASJ...59S.369M}. We converted the 2MASS and WISE magnitudes into fluxes
using the corresponding Vega zero points and their errors for each band from \cite{2003AJ....126.1090C} and
\cite{2011ApJ...735..112J}. Then, we fitted a power function $f(\lambda) = c_1\lambda^{c_2}+c_3$ to the available
measurements via a least squares method and used the obtained parameters to estimate the average flux at
$\lambda$\,=\,8.7\,$\mu$m. This approach does not take into account the different spectral types. However,
we checked that the effect on the estimated magnitudes at this wavelength is negligible relative to overall
uncertainties. The Si-2 magnitude of each star was calculated using the Vega system zero point determined for
this CanariCam filter. For Sirius A (GJ 244) we used directly the well-calibrated Si-2 flux within the standard
filters from Gemini/T-ReCS observations \citep{2011ApJ...730...53S}. For tight binary stars in the sample
resolved by CanariCam but unresolved by WISE, the magnitudes of individual components were decomposed from the
integrated magnitudes using the peak-to-peak flux ratios. The values of FWHM and Si-2 3$\sigma$ detection limits
for individual targets are listed in Table~\ref{detlims}.
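As an illustration of this magnitude estimation, the Python sketch below fits the power function to a set of
photometric points and converts the interpolated 8.7\,$\mu$m flux into a magnitude. The flux values and the
zero point are illustrative placeholders, not the actual values used in the analysis:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def f(lam, c1, c2, c3):          # power function fitted to the fluxes
    return c1 * lam**c2 + c3

# illustrative wavelengths (micron) and fluxes (Jy) of one star:
# J, H, Ks from 2MASS and W1, W2, W3 from WISE
lam  = np.array([1.24, 1.66, 2.16, 3.35, 4.60, 11.56])
flux = np.array([2.10, 1.55, 1.02, 0.47, 0.26, 0.055])

popt, _ = curve_fit(f, lam, flux, p0=(2.0, -2.0, 0.0))
f_si2 = f(8.7, *popt)            # estimated flux at 8.7 micron

ZP_SI2 = 50.0                    # placeholder Vega zero point (Jy)
m_si2 = -2.5 * np.log10(f_si2 / ZP_SI2)
\end{verbatim}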
To measure the sensitivity as a function of angular separation from the central star, we computed the background
noise, $\sigma$, as a function of radial separation from the star, by measuring the standard deviation in 1 pixel
wide concentric annuli around it. The 3$\sigma$ noise counts were converted to contrast (difference in magnitude
between the primary star and the measured quantity, noise in this case) by relating to the peak pixel value of
the star's PSF. Results of this method were found to be consistent with a more realistic procedure based on
the inclusion of artificial sources, as in the analysis of the Barnard's Star data described in \cite{2015MNRAS.452.1677G}.
Then, the sensitivity limit was calculated using the corresponding Si-2 magnitude of a given star.
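The procedure amounts to measuring the background scatter in 1-pixel-wide annuli and referring it to the
stellar peak. A minimal sketch (our own simplification, assuming a background-subtracted image and a known
stellar centroid) is:
\begin{verbatim}
import numpy as np

def contrast_curve(img, xc, yc, pix_scale=0.0798):
    """3-sigma contrast (mag) vs. angular separation (arcsec)."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - xc, y - yc)
    peak = img[int(round(yc)), int(round(xc))]
    seps, dmag = [], []
    for rin in range(1, int(r.max())):
        ring = img[(r >= rin) & (r < rin + 1)]   # 1-pixel-wide annulus
        if ring.size < 10:
            continue
        seps.append((rin + 0.5) * pix_scale)
        dmag.append(-2.5 * np.log10(3.0 * ring.std() / peak))
    return np.array(seps), np.array(dmag)
\end{verbatim}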
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth,keepaspectratio=true]{GJ820A.pdf}
\caption{CanariCam image of GJ 820A and the faint (Si-2$\sim$10.4\,mag)
source detected at $\rho=7\farcs20\pm0\farcs04$, $\theta=256.2\pm0.3$\,deg
(epoch J2013.75), indicated by two yellow lines. The faint source is the
background star TYC 3168-590-1.
\label{gj820A}}
\end{figure}
The set of graphs of Figs.~\ref{contrasts} and \ref{detlimits} comprises the 3$\sigma$ contrast curves and
detection limit curves of the survey. The left plot of Fig.~\ref{contrasts} collects the contrast curves
($\Delta$Si-2\,mag as a function of angular separation $\rho$) for all observed stars individually. On the right
graph are plotted the average achieved contrast versus separation, for a given brightness range of the target
stars. For the brightest stars with Si-2\,$<$\,2.0\,mag the maximum dynamic range, of about 10\,mag,
is achieved at $\sim$3\,arcsec. For stars fainter than 4.0\,mag (20 stars of the sample) the maximum contrast
is reached at 1.0--1.5\,arcsec. The detection limit curves for each observed star are plotted on the left
panel of Figure \ref{detlimits}. On the right panel, we plot the mean detectability curves for stars brighter
and fainter than 3.0\,mag, together with the best and the worst cases, GJ 406 and GJ 447, respectively.
For most of the observed stars ($\sim$80\%), i.e., those with Si-2\,$<$\,3.0\,mag, the detectability limit of
Si-2\,=\,11.3\,$\pm$\,0.2\,mag on average, was reached at $\rho\gtrsim$\,1.0--1.5\,arcsec separation.
For eight targets (GJ~699, GJ~65AB, GJ~729, GJ~144, GJ~447, GJ~866, GJ~15A and GJ~15B), observations were
performed at two epochs separated by $\sim$1.0--1.7\,yr. In these cases, the orbital motion of companions
may not be negligible. We estimate, considering a 20--40\,$M_{\rm Jup}$ companion in a circular, face-on
orbit, that the angular shift induced by orbital motion becomes significant, i.e., exceeds the average
spatial resolution of the images, for $a\lesssim8.0-16.5$\,au orbits, corresponding to
$\rho\lesssim2\farcs0-6\farcs5$ angular separations at $d$\,=\,2.5--3.5\,pc. Stacking images of well
separated epochs would result either in smearing or in point-source doubling for any potential close-in
faint companion. In any case, we searched these closest separations around those stars both in the stacks
combining the two epochs and in the stacks of each epoch separately. The contrast
curves and detection limits reported for these targets were computed on single-epoch image stacks.
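The size of this effect follows directly from Kepler's third law. The sketch below (a rough check only;
the 0.3\,$M_{\odot}$ stellar mass is a representative assumption and the companion mass is neglected)
reproduces the order of magnitude of the quoted thresholds:
\begin{verbatim}
import numpy as np

def orbital_shift_arcsec(a_au, m_star_msun=0.3, dt_yr=1.2, d_pc=3.0):
    """On-sky shift over dt for a circular, face-on orbit."""
    period_yr = np.sqrt(a_au**3 / m_star_msun)   # Kepler's third law
    dphi = 2.0 * np.pi * dt_yr / period_yr       # phase change (rad)
    chord_au = 2.0 * a_au * np.sin(dphi / 2.0)   # displacement on the orbit
    return chord_au / d_pc                       # small angle: au/pc = arcsec

# orbital_shift_arcsec(8.0) ~ 0.49 arcsec, larger than the ~0.28 arcsec
# mean FWHM of the images, consistent with the a <~ 8 au threshold
\end{verbatim}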
\subsection{Constraints on substellar companions} \label{sec:frequencies}
We did not find any new companion to the stars imaged in this survey. We examined all the final processed
images for the presence of faint, point-like sources directly, by visual inspection. We searched for
candidate objects both on images combining all the available CanariCam data and on images stacked
separately at different observing epochs or at different orientations of the instrument position angle
(typically contained in two separate observing blocks of half of the total on-source time available). We also
looked for candidates in the regions masked by the negatives by using the complementary images. We did not
employ any PSF subtraction method because adjusting the image display contrast sufficed to efficiently inspect
the immediate surroundings of the target stars even down to one FWHM separation from the stellar PSF.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth,keepaspectratio=true,clip=true,trim=0pt 10pt 10pt 0pt]{detection_models.pdf}
\caption{Theoretical absolute magnitudes versus mass of brown dwarfs and
giant planets at Si-2 8.7\,$\mu$m band obtained using the solar abundance
Ames-COND models at ages of 0.1, 0.5, 1, 5 and 10\,Gyr. Several corresponding
isotherms are plotted with dotted lines. The solid and dashed horizontal
lines mark the mean 3$\sigma$ detection limit range of the survey and a
1$\sigma$ dispersion: $M_{\rm Si2}$\,=\,13.6\,$\pm$\,0.7\,mag.
\label{massesfig}}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.85\columnwidth,keepaspectratio=true,clip=true,trim=0pt 0pt 0pt 0pt]{completeness_map.pdf}
\includegraphics[width=0.85\columnwidth,keepaspectratio=true,clip=true,trim=0pt 0pt 0pt 0pt]{contours_map.pdf}
\caption{{\it Left panel:} Overall completeness map of the survey. The contour lines illustrate the number of stars around
which the survey is sensitive to substellar companions as a function of companion mass and projected orbital separation.
{\it Right panel:} Upper limits constraints on substellar companions occurrence frequencies, in the same range of masses and
projected separations.
\label{completeness_map}}
\end{figure*}
The only additional faint source was detected within the field of view on the images of GJ~820A
(displayed in Fig.~\ref{gj820A}). It was detected in both OBs taken with different instrument position
angle (North to East sky orientation rotated by 60\,deg), with an apparent magnitude of 10.4\,$\pm$\,0.3
in the Si-2 band, at an angular separation and position angle of $\rho\,=\,7.20\pm0.04$\,arcsec and
$\theta\,=\,256.2\pm0.3$\,deg, respectively. We calculated its equatorial coordinates at the epoch of the
CanariCam observation based on the precise proper motion of GJ~820A and the measured $\rho$ and $\theta$.
Using available archival observations and catalogues of this area of the sky, we found that the source is
the background star TYC 3168-590-1 (2MASS J21065820+3845411) with $V_T$=10.737$\pm$0.066, $J$=10.669$\pm$0.024,
$H$=10.466$\pm$0.016, $K_{\rm s}$=10.405$\pm$0.013\,mag. It is also catalogued in Gaia EDR3 with
$G$=11.517$\pm$0.003\,mag and a parallax of 2.51$\pm$0.01\,mas. Apart from this source, no other
sources were identified. In this section we translate the detection limits of each star into constraints on
the physical properties of detectable substellar companions, specifically to their masses and effective
temperatures.
Because substellar objects cool continuously as they evolve, it is not possible to determine their masses
by applying unique, age-independent relations, such as the mass--luminosity or mass--effective temperature
relations valid for main-sequence stars. Therefore, in this case one needs to rely on theoretical models providing
a grid of luminosities, effective temperatures and synthetic photometry for different substellar masses as a
function of age. In this work, to estimate the minimum masses and temperatures of companions that would have
been detected, we used the Ames-COND models \citep{2001ApJ...556..357A, 2012EAS....57....3A, 2003A&A...402..701B}
for solar metallicity. The COND models are valid up to $T_{\rm eff}$ of 1300\,K and extend down to 100\,K.
They include the formation of dust in the atmospheres, but dust grains are considered to settle below the
photosphere and are not included in the photospheric opacity. To compute the synthetic magnitudes for the
Si-2 8.7\,$\mu$m band we used the {\tt\string PHOENIX} Star, Brown Dwarf \& Planet Simulator available
online\footnote{\url{http://phoenix.ens-lyon.fr/simulator/index.faces}}. For input we used the transmission
file of the Si-2 filter and obtained the isochrones for a set of five different ages: 0.1, 0.5, 1, 5, and 10\,Gyr.
In Fig.~\ref{massesfig} are plotted the synthetic Si-2 absolute magnitudes versus masses obtained using the
COND models, for objects at these five ages, jointly with several isotherms in the $T_{\rm eff}$ range between
300 and 1000\,K. The solid and two dashed horizontal lines mark the mean detectability level reached in this
program, with the 1$\sigma$ dispersion taken as uncertainty: $M_{\rm Si2}$\,=\,13.6\,$\pm$\,0.7\,mag. For an
age of 1\,Gyr these limits would extend to objects at the deuterium-burning mass limit and, for 10\,Gyr, to
about 40\,$M_{\rm Jup}$. Assuming an age of 5\,Gyr as a typical age expected for stars in the solar vicinity,
this sensitivity limit translates to an average minimum mass and temperature of companions that would have
been detected of $m$\,=\,30\,$\pm$\,3\,$M_{\rm Jup}$ and $T_{\rm eff}$\,=\,600\,$\pm$\,40\,K. The derived
constraints on companions masses and temperatures around each observed star, assuming a 5\,Gyr age, are listed
in Table~\ref{detlims}.
We converted each detection limit curve into mass limits using the COND evolutionary models and considering
the nominal 5\,Gyr age, and counted the number of stars for which a companion at a given mass and projected
orbital separation would be detectable. The resulting contour map representing the overall depth of our search for
the observed sample over a grid of companion masses and separations is presented in Fig.~\ref{completeness_map}.
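In practice, this conversion is an interpolation of the model grid at the detection-limit absolute magnitude
of each star. The sketch below illustrates the step with placeholder grid values, not the actual COND tables:
\begin{verbatim}
import numpy as np

# illustrative 5 Gyr isochrone excerpt: mass (Mjup) vs. absolute Si-2 mag
mass_grid = np.array([10, 15, 20, 25, 30, 40, 50, 60, 75])
msi2_grid = np.array([17.5, 16.2, 15.1, 14.2, 13.5, 12.5, 11.7, 11.1, 10.4])

def mass_limit(m_si2):
    """Minimum detectable companion mass for an absolute Si-2 limit."""
    # np.interp needs increasing x, so reverse the (decreasing) magnitudes
    return np.interp(m_si2, msi2_grid[::-1], mass_grid[::-1])

# e.g. the survey mean limit M_Si2 = 13.6 maps to ~29 Mjup on this grid
\end{verbatim}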
\begin{deluxetable*}{lcccccccc}
\tablenum{5}
\tablecaption{Detection limits of the CanariCam search\label{detlims}}
\tablewidth{0pt}
\tabletypesize{\footnotesize}
\tablehead{
\colhead{Star} &
\colhead{FWHM} &
\colhead{FWHM} &
\colhead{Determined} &
\colhead{Detection limit} &
\colhead{$M_{\rm Si2}$} &
\colhead{$M_{\rm min,comp}$} &
\colhead{$T_{\rm eff,min,comp}$} &
\colhead{$s$} \\
&
\colhead{(pix)} &
\colhead{('')} &
\colhead{Si-2 (mag)} &
\colhead{Si-2 (mag)} &
\colhead{(mag)} &
\colhead{($M_{\rm Jup}$)} &
\colhead{(K)} &
\colhead{(au)}
}
\startdata
GJ 699 & 3.07 & 0.246 & 4.12\,$\pm$\,0.19 & 11.68\,$\pm$\,0.19 & 15.37\,$\pm$\,0.19 & 15.0\,$\pm$\,2.0 & 400\,$\pm$\,30 & 3--18 \\
GJ 406 & 3.25 & 0.260 & 5.59\,$\pm$\,0.23 & 11.97\,$\pm$\,0.23 & 15.08\,$\pm$\,0.24 & 19.5\,$\pm$\,2.0 & 450\,$\pm$\,30 & 3--24 \\
GJ 411 & 3.24 & 0.259 & 3.03\,$\pm$\,0.13 & 10.75\,$\pm$\,0.13 & 13.72\,$\pm$\,0.14 & 29.0\,$\pm$\,3.0 & 570\,$\pm$\,40 & 5--25 \\
GJ 244 & 3.24 & 0.259 & -1.32\,$\pm$\,0.10& 10.05\,$\pm$\,0.14 & 12.95\,$\pm$\,0.15 & 40.0\,$\pm$\,3.0 & 740\,$\pm$\,50 & 18--26 \\
GJ 65AB & 3.38 & 0.270 & 5.69\,$\pm$\,0.19 & 11.26\,$\pm$\,0.19 & 14.13\,$\pm$\,0.21 & 25.0\,$\pm$\,2.0 & 520\,$\pm$\,30 & 3--27 \\
GJ 729 & 3.47 & 0.278 & 4.97\,$\pm$\,0.19 & 10.73\,$\pm$\,0.19 & 13.37\,$\pm$\,0.20 & 35.0\,$\pm$\,4.0 & 650\,$\pm$\,20 & 4--30 \\
GJ 905 & 2.99 & 0.239 & 5.49\,$\pm$\,0.19 & 11.87\,$\pm$\,0.19 & 14.37\,$\pm$\,0.19 & 23.0\,$\pm$\,2.0 & 490\,$\pm$\,25 & 3--32 \\
GJ 144 & 3.91 & 0.313 & 1.67\,$\pm$\,0.19 & 10.70\,$\pm$\,0.19 & 13.16\,$\pm$\,0.19 & 37.0\,$\pm$\,3.0 & 680\,$\pm$\,50 & 7--32 \\
GJ 447 & 3.96 & 0.317 & 5.01\,$\pm$\,0.22 & 10.20\,$\pm$\,0.22 & 12.57\,$\pm$\,0.23 & 46.5\,$\pm$\,5.0 & 830\,$\pm$\,40 & 3--34 \\
GJ 866(AC)B& 4.40 & 0.352 & 5.13\,$\pm$\,0.22 & 11.13\,$\pm$\,0.22 & 13.43\,$\pm$\,0.25 & 32.5\,$\pm$\,4.0 & 600\,$\pm$\,60 & 4--35 \\
GJ 820A & 2.95 & 0.236 & 2.37\,$\pm$\,0.08 & 11.50\,$\pm$\,0.08 & 13.79\,$\pm$\,0.08 & 28.0\,$\pm$\,1.5 & 560\,$\pm$\,20 & 7--35 \\
GJ 820B & 3.15 & 0.252 & 2.67\,$\pm$\,0.21 & 11.24\,$\pm$\,0.21 & 13.52\,$\pm$\,0.21 & 31.0\,$\pm$\,3.5 & 600\,$\pm$\,50 & 6--35 \\
GJ 280 & 6.90 & 0.552 & 1.07\,$\pm$\,0.25 & 11.01\,$\pm$\,0.25 & 13.29\,$\pm$\,0.26 & 36.0\,$\pm$\,4.0 & 670\,$\pm$\,60 & 25--35 \\
GJ 725A & 2.99 & 0.239 & 4.21\,$\pm$\,0.14 & 11.45\,$\pm$\,0.14 & 13.72\,$\pm$\,0.15 & 29.0\,$\pm$\,2.0 & 570\,$\pm$\,25 & 4--35 \\
GJ 725B & 3.15 & 0.252 & 4.71\,$\pm$\,0.17 & 11.45\,$\pm$\,0.17 & 13.71\,$\pm$\,0.18 & 29.0\,$\pm$\,2.5 & 570\,$\pm$\,30 & 5--35 \\
GJ 15A & 2.83 & 0.226 & 3.80\,$\pm$\,0.06 & 11.88\,$\pm$\,0.06 & 14.11\,$\pm$\,0.06 & 25.0\,$\pm$\,1.0 & 520\,$\pm$\,15 & 4--36 \\
GJ 15B & 3.00 & 0.240 & 5.55\,$\pm$\,0.09 & 11.84\,$\pm$\,0.09 & 14.07\,$\pm$\,0.10 & 25.5\,$\pm$\,1.5 & 530\,$\pm$\,15 & 4--36 \\
GJ 1111 & 3.79 & 0.303 & 6.70\,$\pm$\,0.05 & 11.50\,$\pm$\,0.06 & 13.70\,$\pm$\,0.08 & 29.0\,$\pm$\,3.0 & 570\,$\pm$\,45 & 3--36 \\
GJ 71 & 3.58 & 0.286 & 1.78\,$\pm$\,0.26 & 11.18\,$\pm$\,0.26 & 13.37\,$\pm$\,0.26 & 35.0\,$\pm$\,5.0 & 650\,$\pm$\,70 & 10--37 \\
GJ 54.1 & 3.72 & 0.298 & 5.96\,$\pm$\,0.23 & 11.05\,$\pm$\,0.23 & 13.20\,$\pm$\,0.25 & 36.5\,$\pm$\,4.0 & 680\,$\pm$\,60 & 3--37 \\
GJ 273 & 3.23 & 0.258 & 4.48\,$\pm$\,0.11 & 10.50\,$\pm$\,0.11 & 12.63\,$\pm$\,0.12 & 47.0\,$\pm$\,2.5 & 850\,$\pm$\,40 & 4--19 \\
SO 0253+16& 4.22 & 0.338 & 6.96\,$\pm$\,0.03 & 10.83\,$\pm$\,0.03 & 12.90\,$\pm$\,0.04 & 40.0\,$\pm$\,1.5 & 740\,$\pm$\,20 & 3--39 \\
GJ 860AB& 3.22 & 0.258 & 4.82\,$\pm$\,0.10 & 11.08\,$\pm$\,0.10 & 13.05\,$\pm$\,0.11 & 38.5\,$\pm$\,2.0 & 710\,$\pm$\,30 & 5--40 \\
GJ 83.1 & 4.19 & 0.335 & 6.17\,$\pm$\,0.28 & 11.26\,$\pm$\,0.28 & 13.02\,$\pm$\,0.31 & 38.5\,$\pm$\,5.0 & 710\,$\pm$\,75 & 4--44 \\
GJ 687 & 3.30 & 0.264 & 4.31\,$\pm$\,0.13 & 11.23\,$\pm$\,0.13 & 12.95\,$\pm$\,0.14 & 40.0\,$\pm$\,3.0 & 740\,$\pm$\,50 & 5--45 \\
GJ 1245ABC& 3.40 & 0.272 & 6.90\,$\pm$\,0.14 & 11.10\,$\pm$\,0.14 & 12.81\,$\pm$\,0.15 & 41.5\,$\pm$\,3.5 & 760\,$\pm$\,55 & 5--45 \\
GJ 876 & 3.22 & 0.258 & 4.76\,$\pm$\,0.14 & 11.20\,$\pm$\,0.14 & 12.86\,$\pm$\,0.15 & 40.5\,$\pm$\,3.0 & 740\,$\pm$\,40 & 5--47 \\
GJ 1002 & 3.98 & 0.318 & 6.89\,$\pm$\,0.04 & 10.88\,$\pm$\,0.04 & 12.52\,$\pm$\,0.08 & 47.0\,$\pm$\,2.5 & 850\,$\pm$\,35 & 4--47
\enddata
\tablecomments{M$_{\rm Si2}$ -- absolute Si2 magnitude of an object corresponding to the detection limit,
$M_{\rm min,comp}$, $T_{\rm eff,min,comp}$ -- lower limits on mass and effective temperature of detectable
companions, $s$ -- range of projected physical separations at which detection limits and minimum
$M_{\rm comp}$ and $T_{\rm eff}$ apply.}
\end{deluxetable*}
In Table~\ref{tab_ages}, we show the age estimates for our sample from the literature and those derived
using rotation periods obtained from the literature and the gyrochronology relations from \cite{2007ApJ...669.1167B},
\cite{2008ApJ...687.1264M}, and \cite{2015MNRAS.450.1787A}. Most of the stars have an estimated age in the
range of 1--10\,Gyr, which justifies the selection of the 5\,Gyr isochrone. However, some of them are below
1\,Gyr in age and, in these cases, our estimates of the detection mass limits are conservative.
We excluded the presence of brown dwarf companions with masses $m$\,$\gtrsim$\,40\,$M_{\rm Jup}$ and
$T_{\rm eff}$ higher than $\sim$750\,K around 24 of the observed stellar systems in the solar vicinity,
at distances within 4.6\,pc, over angular separations from 1.0--3.0\,arcsec (depending on the
brightness of the target star) out to 10\,arcsec. For an average distance of 3.5\,pc, these angular
separations correspond to projected orbital separations in the range from 3.5--10.5 to 35\,au. The
non-detection of substellar companions throughout this search allowed us to determine an upper limit to
the real fraction of companions at the distances and masses mentioned above. We did not assume any
particular shape for the underlying distribution of the companion population in terms of masses and semi-major
axes, which is equivalent to assuming a flat (uniform) distribution. Considering that the number of objects
with companions can be well represented by a Poisson distribution, the probability of having no companions
is given by the formula:
\begin{equation}
P\left[k\equiv0\right]=\left(e^{-\lambda}\frac{\lambda^k}{k!}\right)_{k\equiv0}=e^{-\lambda}
\end{equation}
\noindent
where $\lambda$ is the Poisson parameter, $k$ is the number of occurrences, $\lambda$\,=\,$np$, with $n$ being
the number of events (observed stars) and $p$ the real fraction of companions. For a certain confidence level
($\gamma$), where $P$\,=\,1$-\gamma$, the upper limit to the real frequency of substellar companions is
$p$\,=\,$-$ln($P$)/$n$. Considering this general sample of 24 stars, the frequency of such companions
is below 9.6\% at a confidence level of 90\%.
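As an explicit check of this value, for $n$\,=\,24 observed stars and a confidence level of
$\gamma$\,=\,0.90 (i.e., $P$\,=\,0.10), the relation above gives
\begin{equation*}
p = -\frac{\ln(0.10)}{24} = \frac{2.303}{24} \approx 0.096,
\end{equation*}
\noindent
i.e., the quoted 9.6\% upper limit; the same relation with $n$\,=\,18 yields the 12.8\% limit derived
below for the M-dwarf subsample.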
Among these 24 stars, 18 are M dwarfs. For them, we were sensitive to substellar companions with
$m$\,$\gtrsim$\,40\,$M_{\rm Jup}$ and $T_{\rm eff}$\,$\gtrsim$\,750\,K at 1--10\,arcsec separations, which
corresponds to projected orbital separations of 1.8--18\,au, $\sim$5--50\,au and 3.5--35\,au for the closest,
furthest and average distances of stars in our sample. Excluding the 10\% of the nearest and the furthest
observed stars, which set the minimum lower limit and the maximum upper limit of the projected physical
separations, we covered from 2.5 to 45\,au for 90\% of this sample, i.e., our survey is complete from 2.5
to 45\,au for 90\% of the stars. Previous imaging programs that targeted the nearest stars \citep{2001AJ....121.2189O,
2011ApJ...743..141C, 2012AJ....144...64D} could already detect companions with such masses. However, they
explored wider orbital separations, beyond 10--30\,au. The CanariCam survey of nearby M dwarfs allowed us to
study for the first time the occurrence of substellar companions with $m$\,$>$\,40\,$M_{\rm Jup}$ at orbits
below 10\,au. With a 90\% level of confidence we set an upper limit of 12.8\% on their frequency.
This value is consistent with constraints derived by other imaging surveys at larger orbital separations,
indicating that brown dwarf companions around low-mass M-type stars of the solar vicinity are rare, both
at close orbits between 2.5 and 10\,au and at wider ones.
For 11 of the observed M dwarfs we were able to detect companions at 1--10\,arcsec separations with
$m$\,$\gtrsim$\,30\,$M_{\rm Jup}$ and $T_{\rm eff}$\,$\gtrsim$\,600\,K, equivalent to masses and temperatures
of the expected T/Y dwarf boundary. We were thus able to establish for the first time a constraint on
the upper limit of the frequency of L- and T-type companions around M dwarfs, at a range of projected
orbital separations between 2--3.5 and 35\,au. With a 90\% confidence level, we found that the frequency of
such companions is less than 20\% around M stars. This range of masses and temperatures has so far been
explored by imaging surveys only around young nearby stars, at wider orbital separations, typically beyond
30--50\,au. This
imaging search is the first one that probes the presence of L and T companions to M stars at orbital
separations around and below 10\,au, down to 2.0--3.5\,au.
\begin{deluxetable*}{p{0.25\textwidth} p{0.3\textwidth} p{0.16\textwidth} p{0.06\textwidth} p{0.09\textwidth} p{0.04\textwidth}}
\tablenum{6}
\tablecaption{Frequencies of substellar companions from imaging and RV surveys\label{surveys}}
\tablecolumns{6}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{Survey,} \vspace{-0.2cm} &
\colhead{Sample} &
\colhead{Occurrence} &
\colhead{Masses} &
\colhead{Separations,} &
\colhead{Conf.} \\
\colhead{reference} &
\colhead{} &
\colhead{rate} &
\colhead{($M_{\rm Jup}$)} &
\colhead{orb. periods} &
\colhead{level}
}
\startdata
SHINE (VLT/SPHERE, \citealt{2021AA...651A..72V}) & subset of 150 stars in young moving groups, typically $\sim$10--150\,Myr & 12.6 (+12.9, -7.1)\% for M dwarfs & 1--75 & 5--300 au & \\
GPIES (Gemini South/GPI, \citealt{2019AJ....158...13N}) & 300 stars, young ($<$1 Gyr old), BA and FGK type 0.2--5.0\,$M_{\odot}$, (M dwarfs not included) & 4.7 (+4.4, -2.7)\% \hspace{0.5cm} \newline 0.8 (+0.8, -0.5)\% & 5--80 \newline 13--80 & 10--100 au & 95\% \\
VLT/NaCo large program (\citealt{2017AA...603A...3V}) & young nearby stars, mostly $<$1\,Gyr, $d$\,$\le$\,100\,pc, 199 individual stars, FGK types (not M dwarfs) a few stars at $>$1\,Gyr & 2.45\% (0.25--5.55\%) & 5--75 & 5--500 au & 95\% \\
Spitzer/IRAC (\citealt{2016ApJ...824...58D})$^a$ & 73 young stars and 48 exoplanet host stars & $<$9\% & 0.5--13 & 100--1000 au & 95\% \\
IDPS survey (\citealt{2016AA...594A..63G}) & 292 young nearby stars, median age 120 Myr, 5, 107, 63, 24, 44, 49 B, A, F, G, K, M stars & 1.05 (+2.8, -0.7)\% \newline for M dwarfs: $<$9.15\% & 0.5--14 \newline 1--13 & 20--300 au \newline 10--200 au & 95\% \\
\cite{2010ApJ...717..878N} analysis & 118 stars, majority young stars, only a few at $>$1\,Gyr age & for FGKM stars: $<$20\% \newline for M stars: $<$20\% & $\ge$4 & 22--507 au \newline 9--207 au & 95\% \newline 68\% \\
GEMINI/NICI campaign,\newline \cite{2013ApJ...777..160B} & 80 members of young moving groups, 23 K and 33 M stars & $<$18\% ($<$6\%)$^b$ \newline $<$21\% ($<$7\%)$^b$ & 1--20 & 10--150\,au \newline 10--50\,au & 95.4\% \\
SEEDS survey, \citealt{2017AJ....153..106U} & 68 young stellar objects ($<$10\,Myr) & $\sim$2.9\% & 1--70 & 50--400\,au & \\
VLT/NaCo {\it L}'-band imaging \cite{2016AA...596A..83L} & 58 young and nearby M dwarfs & 4.4 (+3.2, -1.3)\% & $>$2 & 8--400 au & 68\% \\
PALMS (Keck/NIRC2, Subaru/HiCIAO, \citealt{2015ApJS..216....7B}) & 122 nearby ($<$40\,pc) young M dwarfs, 78 single M dwarfs, 90\% of stars younger than the Hyades (620\,Myr) & $<$6.0\% ($<$9.9\%)$^c$ \newline 4.5 (+3.1, -2.1)\% & 5--13 \newline 13--75 & 10--100 au \newline 10--200 au & \\
\cite{2019AJ....158..187B} analysis, AO and seeing-limited imaging combined & 344 members of nearby young associations, $\sim$120 M dwarfs & 2.6 (+7, -1)\% & 1--20 & 20--5000\,au & 95\% \\
\cite{2016PASP..128j2001B} analysis & 384 unique and single young ($\sim$5--300\,Myr) stars, stellar masses between 0.1 and 3.0\,$M_{\odot}$ & for M stars: $<$4.2\% & 5--13 & 10--100 au & 95\% \\
\cite{2014ApJ...794..159B} analysis & merged samples, 248 unique stars, SpT from late B to mid M, $d$\,$\sim$\,5--130\,pc & 0.52--4.9\% & 5--70 & 10--100\,au & 95\%\\
TRENDS \cite{2014ApJ...781...28M}, RV+imaging survey, & RVs of 111 M-dwarfs within 16\,pc, imaging follow-up of 4 targets with RV drift & 6.5\,$\pm$\,3.0\% & 1--13 & $<$20 au & \\
\cite{2014ApJ...791...91C, 2016ApJ...819..125C} analysis, RV, microlensing, imaging & synthesis of various samples of M dwarfs & 3.8 (+1.9, -2.0)\% & 1--13 & 1--10$^5$\,days & \\
HARPS (RV, \citealt{2013AA...549A.109B}) & 102 nearby ($<$11\,pc) M dwarfs & 4 (+5, -1)\% \newline $<$1\% & $\sim$0.3--3 \newline $\sim$3--30$^d$ & 10$^3$--10$^4$\,days \newline $<$10$^4$\,days & \\
AAPS survey (RV, \citealt{2016ApJ...819...28W, 2020MNRAS.492..377W}) & 203 solar type stars (FGK) & 6.7 (+2, -1)\% & 0.3--13 & $\sim$3--8\,au & 68.7\% \\
CLS survey (RV, \citealt{2021ApJS..255....8R, 2021ApJS..255...14F}) & 719 nearby FGKM stars & 14.1 (+2.0, -1.8)\% \newline 8.9 (+3.0, -2.4)\% & $\ge$0.1$^d$ & 2--8\,au \newline 8--32\,au & \\
CARMENES survey (RV, \citealt{2021arXiv210703802S}) & subsample of 71 M dwarfs & 6 (+4, -3)\% & $\ge$0.3$^d$ & $<$10$^3$\,days & \\
This work & 18 nearest M dwarfs at $\delta$\,$>$\,$-25^{\circ}$ & $<$12.8\% \newline $<$20\% & $\ge$40 \newline $\ge$30 & 3.5--35\,au & 90\%
\enddata
\tablecomments{ $a$) -- re-analysis of archival IRAC data; $b$) -- determined applying DUSTY and COND models, respectively; $c$) -- assuming a hot-start (cold-start) formation scenario; $d$) -- minimum masses, $m\sin i$}
\end{deluxetable*}
\section{Discussion} \label{sec:discussion}
\subsection{Comparison to other surveys}
We attempt to compare the determined frequency limits of stars harboring substellar companions to results
of previous works. Of the numerous programs aiming to detect giant planets and brown dwarf companions, we
focus mainly on large, high-contrast imaging surveys including general field surveys probing wide
separations and the most recent surveys (see references in Table~\ref{surveys}) that placed constraints
on substellar companions around M dwarfs. The essential outcomes of many of these programs can be found
summarized in, e.g., Table~1 of \cite{2015AA...573A.127C}, \cite{2016PASP..128j2001B} and \cite{2021AA...651A..72V}.
We also consider the results from several RV programs and studies that combine the results from various
techniques.
Table~\ref{surveys} summarizes the frequency constraints and the explored intervals of masses and orbital
separations or periods, determined by each of these surveys. It is not straightforward to compare the
results of these surveys to one another and to our results, because the domains of probed parameter
ranges are not the same and different teams make different assumptions regarding the distributions laws
of the substellar mass companions. In general, all the studies agree that the occurrence rates of substellar
objects are below 20\%, regardless of the companions masses/separations intervals in question and masses
of the primaries. The majority of surveys points to a maximum frequency of 12\% or below, typically of
a few percent ($\sim$2--5\%).
As for the low-mass stars, high-contrast imaging searches targeting specifically the M dwarfs, or
those including them as part of their target lists, explored objects at young ages of a few tens to a few
hundred million years. The most sensitive ones were capable of detecting Jupiter-mass planets at separations
down to 10\,au, and, most recently, even to 5\,au by the SHINE survey \citep{2021AA...651A..72V}.
On the other side, Doppler measurements restrict the frequencies of planetary companions with minimum
masses ($m\sin i$) as small as a few tens of Earth masses and are starting to explore orbital periods of
up to 10$^4$--10$^5$ days, equivalent to approximately 5--8\,au \citep{2021ApJS..255...14F, 2021arXiv210703802S}.
In this context, our results bridge the explored separation ranges between wide-orbit imaging constraints
and those from RVs approaching the snow line. Our limits on the frequency of substellar companions
are compatible with previous studies confirming that their presence around M dwarfs is rare. In
comparison with recent high contrast imaging programs, our survey is more sensitive to companions of
somewhat higher masses, but at more advanced ages of a few Gyr, and hence of significantly lower
$T_{\rm eff}$, extending down to 600\,K.
\subsection{Stellar binaries in the sample}
From theory it is expected that a stellar companion alters the formation processes of exoplanets within
a protoplanetary disc around the host star (see e.g. a review by \citealt{2015pes..book..309T}). The
binary component will also introduce a parameter space of dynamical instability in which we would not
expect planets or brown dwarf companions to persist on long timescales \citep{1999AJ....117..621H,
2015enas.book.1942H}.
A non-negligible fraction of the stars in our target sample has one or more known stellar companions,
including five stellar binaries, two triple systems and two stars with a white dwarf companion. We measure a
multiplicity frequency (which quantifies the number of multiple systems within our sample) and a
companion frequency (which quantifies the total number of companions) of 36$\pm$14\% and 44$\pm$16\%,
respectively, which are consistent within wide uncertainties with the comprehensive determinations
by \cite{2021AA...650A.201R} in the 10 pc sample and by \cite{2021MNRAS.tmp.2160B} for the M dwarfs.
For the statistical analysis, we considered the relatively close systems ($s$\,$\lesssim$\,50\,au;
Sirius, GJ 65, GJ 866, Procyon, GJ 860 and GJ 1245) as individual systems and those having components at
wider average orbital separations ($s$\,$\gtrsim$\,50\,au; GJ 820AB, GJ 725AB and GJ 15AB) as two
individual stars.
We add a note of caution that such an approach introduces a potential physical bias in the interpretation
of our analysis. Our search probes both circumstellar and circumbinary companions depending
on the separation of the binaries, which are very different science cases to one another and to a search
around single stars only. We recorded a null companion detection in these systems. However, any secondary
component introduces a region of instability at a certain range of physical separations for both S-
and P-type orbits \citep{1999AJ....117..621H, 2007AA...462..345D}, in which we would not expect an
additional body to be found. Thus, an element of bias is inherent in the statistical result derived
from a combined single and binary star sample.
Several works have concluded that binarity has a minimal effect on overall planet frequency
\citep{2007AA...468..721B, 2013MNRAS.428..182B, 2015ApJ...814..148P, 2020AA...635A..74S}.
On the other hand, some recent observational studies demonstrate an excess of wide stellar companions
to stars which host high-mass hot Jupiters and brown dwarf companions on short-period orbits
\citep{2016ApJ...827....8N, 2019MNRAS.485.4967F, 2019arXiv191201699M, 2021FrASS...8...16F}.
These conclude that certain types of binaries may support the process of formation of close-in
high-mass planetary and substellar companions. Although early research on this front has been carried out,
e.g., by the SPOTS \citep{2016AA...593A..38B, 2018AA...619A..43A} and the VIBES \citep{2020AA...643A..98H}
surveys, a more detailed analysis of the effect multiplicity has on the occurrence of substellar
companions in general requires a larger, dedicated sample that is complete to both single stars
and binaries.
\subsection{Known planets hosts}
As many as 12 of the 33 stars in the sample have at least one small planet of a few to a few tens of Earth masses
on a close orbit at around 1\,au or less. All of these cases, except the G8.5V-type $\tau$~Cet, are M dwarf
stars. In contrast, only two of these stars were found to host more massive planets -- K2.0V $\epsilon$ Eri
(b: $m\sin i$\,=\,0.78\,$M_{\rm Jup}$ at 3.48$\pm$0.02\,au) and M3.5V GJ 876 (b: $m\sin i$\,=\,2.27\,$M_{\rm Jup}$
at $a$\,=\,0.21\,au; c: $m\sin i$\,=\,0.71\,$M_{\rm Jup}$ at $a$\,=\,0.13\,au). This is in line with results
of surveys that found initial indications that the frequencies of more massive giant planets scale positively
with the host star mass \cite[e.g.,][]{2010PASP..122..905J, 2016PASP..128j2001B}.
There are no theoretical premises indicating that such close-in planets will preclude the formation of more
massive companions in wider orbits. Indeed, observational studies have shown that hot Jupiter host stars tend
to have far-away companions \cite[e.g.,][]{2014ApJ...785..126K, 2014AA...569A.120L, 2015ApJ...806..248W}.
The dynamical interactions between multiple planets or a distant brown dwarf companion may affect the
final orbital configuration of the system. A variety of mechanisms, such as planet-planet scattering, the
Kozai-Lidov effect or secular gravitational interactions \citep{2015MNRAS.448.1044H, 2016ARAA..54..441N},
have been invoked to explain the inward migration of smaller planets to short orbital periods
\citep{2003ApJ...588..494M}.
\section{Conclusions and final remarks} \label{sec:conclusions}
We completed a deep, high spatial resolution imaging search of substellar companions around the nearest
northern stars in the mid-IR at 8.7\,$\mu$m using the CanariCam instrument at the 10.4\,m GTC telescope.
Our target sample included 25 stellar systems composed of 33 individual stars, with declinations
$\delta$\,$>$\,$-$25$^{\circ}$ within 5\,pc of the Sun. No previously undetected companions were identified
in our survey.
We explored angular separations from 1--3\,arcsec out to 10\,arcsec with sensitivities sufficient to detect
companions with masses and temperatures higher than 40\,$M_{\rm Jup}$ and 750\,K for 24 of the
observed stars, and as low as 30\,$M_{\rm Jup}$ and 600\,K for 11 M-type stars. Considering an
average distance of 3.5\,pc of our sample, 3.5--35\,au projected orbital separations were probed for
faint companions. The non-detections enabled us to determine upper limits for the fraction of substellar
companions. At a 90\% confidence level, we found that less than 9.6\% of the nearby stars have
companions with $m$\,$\gtrsim$\,40\,$M_{\rm Jup}$ and $T_{\rm eff}$\,$\gtrsim$\,750\,K, and that less
than 20\% of the closest M dwarfs have L and T companions with $m$\,$>$\,30\,$M_{\rm Jup}$ and
$T_{\rm eff}$\,$\gtrsim$\,600\,K within the range of explored physical separations. This is one of the
first imaging programs capable of detecting mature substellar companions in this range of masses and
temperatures at separations below 10\,au, and it provides evidence that substellar companions to low-mass
M dwarfs are also rare at such close orbits. Concurrently, extending the constraints beyond 5\,au,
our results are complementary to the evidence provided by RV programmes \cite[e.g.,][]{2013AA...549A.109B,
2015ARAA..53..409W, 2019Sci...365.1441M, 2021arXiv210703802S}, which find the occurrence of giant planets
around M stars to be less than 3\% at orbital periods up to 25--30\,yr ($\sim$5\,au).
This work demonstrates that the modern ground-based mid-IR imaging instruments operating on 10\,m class
telescopes can reach angular resolutions and sensitivity limits as good as, and in certain cases (e.g.,
nearby, relatively old stars) better than, adaptive optics systems in the optical or near-IR, or space
telescopes. This technique presents a high potential for direct imaging detection and studies of brown
dwarfs and exoplanets. Our survey was not extensive enough to determine more precisely the true fraction
of L and T brown dwarf companions at close orbits. Nonetheless, the results on the observed sample provide
valuable constraints for next generation facilities, such as the James Webb Space Telescope or the
Extremely Large Telescope, which will allow for detection and accurate characterization of the coldest
companions of stars \citep{2015IJAsB..14..279Q, 2018AJ....156..276D}.
\begin{acknowledgments}
We thank the anonymous referee for a careful review of our manuscript and his/her constructive comments,
which substantially helped to improve the quality of the paper. We are grateful to the GTC staff for
performing the CanariCam observations. Based on observations made with the Gran Telescopio Canarias (GTC),
installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de
Canarias, in the island of La Palma.
BG and DJP acknowledge support from the UK Science and Technology Facilities Council (STFC) via the
Consolidated Grant ST/R000905/1. VJSB, MRZO and JAC acknowledge financial support from the Agencia
Estatal de Investigaci\'on of the Ministerio de Ciencia, Innovaci\'on y Universidades and the
European Regional Development Fund through projects PID2019-109522GB-C5[1,3].
We acknowledge the use of Carmencita, the CARMENES input catalogue (\citealt{2016csss.confE.148C}).
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
\end{acknowledgments}
|
1,108,101,566,701 | arxiv | \section{Introduction}
Recently there has been increasing interest in the Gauss-Bonnet
theory with a scalar field as a possible theoretical
explanation of some cosmological problems such as the acceleration of
the universe \cite{noj1}. Accelerated cosmological solutions were
first suggested in \cite{odin1}, \cite{odin2} and also discussed
in \cite{mota1}, \cite{mota2}. It is also expected that this
theory or its modifications may contribute to some
astrophysical phenomena. For this purpose, spherically symmetric
solutions of this theory were first studied in \cite{des1},
\cite{des2}. It has been observed that the Post-Newtonian
approximation does not give any new contribution in addition to
the post-Newtonian parameters of general relativity
\cite{teb}. Black hole solutions in the framework of GB
gravity have recently been investigated in \cite{galt} (see also
\cite{mig}, \cite{kant}). There are also attempts to find exact
solutions and to study the stability of the Gauss-Bonnet theory in
various dimensions with actions containing higher derivative
scalar field couplings \cite{pcd}, \cite{jak}, \cite{dav}.
Since the Gauss-Bonnet term is a topological invariant in four
dimensions it does not contribute to the Einstein field
equations. On the other hand it contributes to the field equations
if it couples to a spin-0 field. In this work we consider a
four dimensional action containing the Einstein-Hilbert part,
massless scalar field and the Gauss-Bonnet term coupled with the
scalar field. The corresponding action is given by \cite{teb}
\begin{equation}
S=\int d^4x\, \sqrt{-g}\, [ {R \over 2\kappa^2}-{1 \over 2}
\partial_{\mu} \phi\, \partial^{\mu}\, \phi-V(\phi)+f(\phi) GB]
\end{equation}
where $\kappa^2=8\pi G$ ($c=\hbar=1$) and
\begin{equation}\label{den00}
GB=R^2-4R^{\alpha \beta} R_{\alpha \beta}+R^{\alpha \beta \sigma
\gamma} R_{\alpha \beta \sigma \gamma}
\end{equation}
and $f$ is an arbitrary function of the scalar field $\phi$
(the coupling function). Here $V$ is the potential term for the scalar
field. The field equations are given by
\begin{eqnarray}\label{den0}
R_{\mu \nu}&=&\kappa^2\, [{1 \over 2} \partial_{\mu}\, \phi
\partial_{\nu}\, \phi +{1 \over 2} V(\phi)\, g_{\mu \nu}+2
(\nabla_{\mu} \nabla_{\nu}f)R-g_{\mu \nu}\, (\nabla^{\rho}\nabla_{\rho} f) R \nonumber \\
&&-4(\nabla^\rho \nabla_{\mu}f )R_{\nu \rho} -4(\nabla^\rho
\nabla_{\nu}f )R_{\mu \rho}+4 (\nabla^{\rho}\nabla_{\rho} f) R_{\mu \nu} \nonumber \\
&&+2g_{\mu \nu} (\nabla^{\rho} \nabla^{\sigma} f)R_{\rho \sigma}-4
(\nabla^{\rho} \nabla^{\sigma} f) R_{\mu \rho \nu \sigma}]
\end{eqnarray}
\begin{equation}\label{den2}
\nabla^{\rho} \nabla_{\rho} \phi-V^{\prime}(\phi)+f^{\prime} GB=0
\end{equation}
Einstein field equations are usually solved under certain
assumptions like spherical symmetry, plane symmetry and axial
symmetry. In some cases one assumes a form for the spacetime metric,
such as the conformally flat, Kerr-Schild and G{\" o}del types. In each
case one obtains a class of exact solutions of Einstein's field
equations \cite{cramer}. In this work our intention is to open such a
direction in the GB theory and to obtain exact solutions of this theory
and its modifications under certain assumptions. To this end we
now assume the spacetime geometry $(M,g)$ is such that ({\it
assumption 1})
\begin{equation} \label{den1}
\nabla_{\mu} \nabla_{\nu} f=\Lambda_{1} g_{\mu \nu}+\Lambda_{2}
\ell_{\mu} \ell_{\nu}
\end{equation}
where $\Lambda_{1}$ and $\Lambda_{2}$ are scalar functions and
$\ell_{\mu}$ is a vector field. In the sequel we will assume that
$\Lambda_{2}=0$ ({\it assumption 2}). Eq.(\ref{den1}) restricts
the space-time $(M,g)$. Among the space-times admitting
(\ref{den1}) are the conformally flat space-times ({\it assumption
3}).
\begin{equation}\label{met}
g_{\mu \nu}=\psi^{-2}\, \eta_{\mu \nu}
\end{equation}
where $\psi$ is a scalar function. In such space-times the
conformal tensor vanishes identically. Hence
\begin{equation}
GB=-2R^{\alpha \beta} R_{\alpha \beta}+{2 \over 3}R^2
\end{equation}
Then the field equations (\ref{den0}) reduce to
\begin{equation}\label{den3}
(1-4\Lambda_{1} \kappa^2)\,R_{\mu \nu}=\kappa^2\, [{1 \over 2}
\partial_{\mu}\, \phi
\partial_{\nu}\, \phi +{1 \over 2} V(\phi)\, g_{\mu \nu}]
\end{equation}
We now make the last assumption: all functions depend on
$z=k_{\mu}x^{\mu}$ where $k_{\mu}$ is a constant vector,
$\partial_{\mu} k_{\nu}=0$. Then from (\ref{den1}) we get
\begin{equation}
f^{\prime}=C \psi^{-2}, \Lambda_{1}=-Ck^2 {\psi^{\prime}\over
\psi}
\end{equation}
where $C$ is an arbitrary constant and $k^2 =\eta^{\mu \nu}k_{\mu}
k_{\nu}$. By using (\ref{den3}) and the Ricci tensor
\begin{equation}\label{ric}
R_{\mu \nu}=2{\psi_{,\mu \nu} \over \psi}+[{1 \over \psi}
\eta^{\alpha \beta}\, \psi_{,\alpha \beta}-{3 \over \psi^2}\,
\eta^{\alpha \beta}\, \psi_{,\alpha} \psi_{,\beta}]\, \eta_{\mu
\nu},
\end{equation}
for the metric (\ref{met}) we obtain the following equations
\begin{eqnarray}
(1-4\Lambda_{1}\kappa^2)\psi^{-1} \psi^{\prime \prime}={\kappa^2
\over 4}
(\phi^{\prime})^2,\label{den4}\\
V=-{2k^2 \over
\kappa^2}(1-4\Lambda_{1}\kappa^2)[3(\psi^{\prime})^2-\psi
\psi^{\prime \prime}],\label{pot}\\
k^2 \psi^4 (\psi^{-2} \phi^{\prime})^{\prime}-\dot{V}+\dot{f}
GB=0, \label{den5}\\
f^{\prime}=C \psi^{-2}, \Lambda_{1}=-Ck^2 {\psi^{\prime} \over
\psi}
\end{eqnarray}
where
\begin{equation}
GB=72(k^2)^2\psi^4[(\psi^{-1} \psi^{\prime})^2-\psi^{-1}
\psi^{\prime \prime}](\psi^{-1} \psi^{\prime})^2
\end{equation}
and a dot over a letter denotes derivative with respect to the
scalar field $\phi$. Eqs.~(\ref{den4}) and (\ref{den5}) give
coupled ODEs for the functions $\psi$ and $\phi$. Letting
$\psi^{\prime}/\psi=u$ and $\phi^{\prime}=v$, these equations
become
\begin{eqnarray}
(1+4Ck^2 \kappa^2 u)(u^{\prime}+u^2)={\kappa^2 \over 4} v^2, \label{eq01}\\
k^2\, \psi^2\, [(v^{\prime}-2uv)v-27Ck^2\,u^{\prime}u^2]=
V^{\prime} \label{eq02}
\end{eqnarray}
where $V$ is given by (from (\ref{pot}))
\begin{equation}
V=-{2k^2 \over \kappa^2} (1+4Ck^2 \kappa^2 u)(2u^2-u^{\prime})\,
\psi^2 \label{eq03}
\end{equation}
Inserting $V$ from (\ref{eq03}) into Eq. (\ref{eq02}) (and using
(\ref{eq01}) in (\ref{eq02})) we obtain simply
\begin{equation}
3C (k^2)^2 u^2 u^{\prime}=0
\end{equation}
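This reduction can be verified symbolically. The following sympy sketch is our own cross-check
(the symbol names are ours: \texttt{k2} stands for $k^2$ and \texttt{kp2} for $\kappa^2$); it
substitutes $v^2$ from (\ref{eq01}) and $V$ from (\ref{eq03}) into (\ref{eq02}), divides out
$\psi^2$, and confirms that the field equation collapses to $C(k^2)^2u^2u^{\prime}=0$ up to an
overall factor and sign.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
C, k2, kp2 = sp.symbols('C k2 kp2')   # C, k^2, kappa^2
u = sp.Function('u')(z)               # u = psi'/psi
up = sp.diff(u, z)

xi = 1 + 4*C*k2*kp2*u
v2 = (4/kp2)*xi*(up + u**2)           # v^2 from the first equation
vvp = sp.diff(v2, z)/2                # v v' = (v^2)'/2

V = -(2*k2/kp2)*xi*(2*u**2 - up)      # V/psi^2 from the potential
Vp = sp.diff(V, z) + 2*u*V            # V'/psi^2, using (psi^2)' = 2u psi^2

residual = k2*(vvp - 2*u*v2 - 27*C*k2*up*u**2) - Vp
print(sp.simplify(sp.expand(residual + 3*C*k2**2*u**2*up)))   # prints 0
\end{verbatim}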
\noindent
Hence we have the following solutions.
\vspace{0.3cm}
{\bf (A)\,\, $C=0$:} This corresponds to the pure Einstein field
equations with a massless scalar field. The effect of the Gauss-Bonnet
term disappears. Solutions of these field equations have
been given in \cite{gur}.
\vspace{0.3cm}
{\bf (B)\,\, $k^2=0$:} The vector field $k_{\mu}$ is null. Then
the only field equation is
\begin{equation}\label{denk14}
u^{\prime}+u^2={\kappa^2 \over 4} v^2
\end{equation}
and $V$ becomes zero. There is a single equation for the two
fields $u$ and $v$. This means that if one of the fields $u$ or
$v$ is given, the other one is determined directly. The metric
takes the form
\begin{equation}
ds^2=\psi(p)^{-2}\, [2dp dq+dx^2+dy^2]
\end{equation}
where $p$ and $q$ are null coordinates and
$k_{\mu}=\delta_{\mu}^{p}$ and the above equation (\ref{denk14})
becomes
\begin{equation}
\psi_{pp}={\kappa^2 \over 4} (\phi^{\prime})^2\, \psi
\end{equation}
and the Einstein tensor represents a null fluid with zero
pressure.
\begin{equation}
G_{\mu \nu}={\kappa^2 \over 2}\, (\phi^{\prime})^2\, k_{\mu}\,
k_{\nu}
\end{equation}
Although the coupling function $f$ is nonzero, the effect of the GB
term is absent in this case. Such a class of solutions belongs to
class (A).
\vspace{0.3cm}
{\bf (C) $k^2 \ne 0$:} The vector field $k_{\mu}$ is non-null.
Then $u=m$, a real constant, which leads to the following solution.
\begin{equation}
\psi=\psi_{0}\, e^{m\,z}, ~~\phi=\phi_{0}+\phi_{1}\,z
\end{equation}
where $\psi_{0}$ and $\phi_{0}$ are arbitrary constants and
\begin{equation}
(1+4Ck^2 \kappa^2\,m)m^2\, ={\kappa^2 \over 4}\, \phi_{1}^2,~~V
=-k^2\, \phi_{1}^2\, \psi^2
\end{equation}
where $\phi_{1} \ne 0$. The potential function $V$ takes the form
\begin{equation}
V(\phi)= V_{0}\, e^{\pm \, {\phi \over \xi}}, ~~V_{0}=-k^2\,
\phi_{1}^2\, \psi_{0}^2\, e^{\mp {\phi_{0} \over \xi}}
\end{equation}
where $\xi=1+4Ck^2\, \kappa^2\, m$ and the coupling function $f$ takes
the form
\begin{equation}
f=f_{0}-f_{1}\, e^{\mp {\phi \over \xi}},~~~ f_{1}=(C / \xi\,
\psi_{0}^2)\,e^{\mp \phi_{0} \over \xi}
\end{equation}
The solution we obtained here is free of singularities but not
asymptotically flat. On the other hand, by using this solution it
is possible to obtain an asymptotically flat cosmological
solution.
\vspace{0.3cm}
This solution is well understood in a new coordinate chart
$\{x^a,t\}$ where the line element takes the following form (after
a scaling)
\begin{equation}
ds^2={t^2 \over t_{0}^2}\, \eta_{ab}\,dx^a dx^b+\epsilon dt^2
\end{equation}
where $t_{0}$ is a nonzero constant. If $t$ is a spacelike
coordinate then $\epsilon=1$ and Latin indices take values
$a=0,1,2$. If $t$ is a timelike coordinate then $\epsilon=-1$ and
Latin indices take values $a=1,2,3$. $\eta_{ab}$ is the metric of
the flat three dimensional geometry orthogonal to the
$t$-direction. The Ricci tensor of the four dimensional metric is
\begin{equation}
R_{tt}=0,~~ R_{ta}=0,~~ R_{ab}=-{2\epsilon \over
t_{0}^2}\,\eta_{ab}
\end{equation}
Hence the solution takes the form
\begin{equation}
\phi^{\prime}=\pm {2\sqrt{\xi} \over t}, ~~V(\phi)=-{4\epsilon \xi
\over t^2}
\end{equation}
where $\xi=1+4\kappa^2 C$,~ $\Lambda_{1}=C$ a constant, and
$f=f_{0}+{\epsilon C \over 2} t^2$ , $f_{0}$ is an arbitrary
constant. The curvature scalars are given by
\begin{equation}
R={6 \over t^2},~~ R_{\mu \nu}\,R^{\mu \nu}={12 \over t^4}
\end{equation}
and the Gauss-Bonnet scalar density $GB=0$. It is clear that $t=0$ is
the spacetime singularity. Letting
$u_{\alpha}=\delta_{\alpha}^{t}$, the Einstein tensor becomes
\begin{equation}
G_{\alpha \beta}={2 \over t^2}\, u_{\alpha}\, u_{\beta}+{\epsilon
\over t^2}\, g_{\alpha \beta}
\end{equation}
This tensor has a physical meaning when $\epsilon=-1$ in which
case the Gauss-Bonnet gravity produces a singular cosmological
model. The Einstein tensor represents a perfect fluid with an
energy density $\rho=3/t^2$ and a negative pressure $p=-1/t^2$.
Both of them are singular at $t=0$.
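These statements about the $\epsilon=-1$ case can be checked directly. The following sympy sketch
is our own cross-check (the coordinate names are ours); it computes the curvature of
$ds^2=-dt^2+(t/t_0)^2(dx^2+dy^2+dw^2)$ and recovers $R=6/t^2$, $\rho=3/t^2$ and $p=-1/t^2$.
\begin{verbatim}
import sympy as sp

t, t0, x, y, w = sp.symbols('t t_0 x y w', positive=True)
X = [t, x, y, w]
a2 = (t/t0)**2
g = sp.diag(-1, a2, a2, a2)      # ds^2 = -dt^2 + (t/t0)^2 (dx^2+dy^2+dw^2)
gi = g.inv()

def Gam(l, m, k):                # Christoffel symbols Gamma^l_{mk}
    return sum(gi[l, s]*(sp.diff(g[s, m], X[k]) + sp.diff(g[s, k], X[m])
               - sp.diff(g[m, k], X[s])) for s in range(4))/2

def Ric(m, k):                   # Ricci tensor R_{mk}
    return sp.simplify(
        sum(sp.diff(Gam(l, m, k), X[l]) - sp.diff(Gam(l, m, l), X[k])
            for l in range(4))
      + sum(Gam(l, l, s)*Gam(s, m, k) - Gam(l, k, s)*Gam(s, m, l)
            for l in range(4) for s in range(4)))

R = sp.simplify(sum(gi[m, m]*Ric(m, m) for m in range(4)))  # g is diagonal
print(R)                                        # 6/t**2
print(sp.simplify(Ric(0, 0) - R*g[0, 0]/2))     # G_tt = rho = 3/t**2
print(sp.simplify((Ric(1, 1) - R*g[1, 1]/2)/g[1, 1]))   # p = -1/t**2
\end{verbatim}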
\vspace{0.3cm}
We have found the most general solutions of the Gauss-Bonnet
gravity coupled to a scalar field under the assumptions stated in
the text. One solution (B) depends on a null coordinate whose
Einstein tensor corresponds to the energy momentum tensor of a
null fluid with zero pressure. The other solution (C) depends on
the variable $t$, whose curvature invariants are all singular at $t=0$.
When $t$ represents the time coordinate then GB gravity gives a
cosmological model with a negative pressure. The solution is
singular on the 3-surface $t=0$.
We would like to conclude with a remark. The field equations
(\ref{den0}) and (\ref{den2}) of the GB theory with a scalar field
resemble the field equations of the modified Gauss-Bonnet
theory \cite{noj1}, \cite{noj2}. In the latter case the scalar
field $\phi$ and the potential term $V(\phi)$ are absent in the
action and the function $f=f(GB)$ depends on the GB term
(\ref{den00}). We remark that the flat metric is the only solution
of the modified Gauss-Bonnet field equations under the assumptions
made in the text. It seems that the scalar field is crucial for obtaining
non-flat metrics. It is, however, interesting to search for the
solutions of the modified GB field equations. For this purpose we
are planning to relax our assumptions 2 and 3 in a forthcoming
publication.
\vspace{0.4cm}
I would like to thank the referees for their
constructive comments. This work is partially supported by the
Scientific and Technological Research Council of Turkey (TUBITAK)
and Turkish Academy of Sciences (TUBA).
|
1,108,101,566,702 | arxiv | \section*{Introduction}
The description of
the correlation functions for the models solved by means of Bethe
ansatz is based on the representation for the correlators as the
Fredholm determinants of linear integral operators. Such
representations were obtained for the first time in \cite{l1,l2} for
the simplest two-point equal-time correlators of the one-dimensional
impenetrable bosons model. Later they were generalized to the case of
time-dependent correlators at the free-fermion point
of models solved by means of the Bethe Ansatz (impenetrable bosons
\cite{ks} and the isotropic XX0 Heisenberg chain \cite{cikt1,cikt2})
and also to the case of finite interaction (\cite{k,kks} for the
one-dimensional Bose gas and \cite{efik} for the XXZ chain). Such
representations allow one to write classical integrable equations for
the correlators which can be used, in particular, to calculate the long-time
and large-distance asymptotics of the correlation functions
\cite{iik1,iiks2,iik2,iikv}.
It was understood long ago \cite{TMcC,WMcC,McCTW}
that the language of the classical differential equations is
natural for the description of the correlation functions
of quantum integrable models. The recent progress in this direction
is described in detail in the book \cite{bik} for the example
of impenetrable bosons based on the approach elaborated
in \cite{iik1,iiks1,iiks2,iiks3,iik2,iikv}. The most important
part of this approach is to consider the Fredholm determinant
of the linear operator appearing in the representation of
the correlators as a $\tau$-function for a new system of classical
integrable equations (see also \cite{jmms}). These linear operators
should be of a very special form, the so-called "integrable"
integral operators (see \cite{bik,iiks4}).
In this paper we obtain some determinant representations
for equal-time temperature correlators for the anisotropic
Heisenberg XY chain, both for the finite lattice and in the thermodynamic
limit. This model was introduced and studied for the first
time in \cite{lsm}. Later it was investigated by many authors
(see \cite{McC,McCBA,McCPS} and references therein).
To compute the correlation functions we use a modification
of the approach proposed in \cite{ki}, based on
integration over Grassmann variables and the corresponding
coherent states. This is an essential point of our paper.
We do not, however, use the functional integral as in \cite{ki}.
Using the coherent states simplifies the calculations,
leading automatically to the answers in the necessary form
(see for example Appendix B, where we reproduce the results of
\cite{cikt1,cikt2} for the XX0 chain using our method).
The representations for the equal-time
temperature correlators of the anisotropic XY chain
as determinants of $M\times M$ matrices obtained in this paper
are a direct generalization of the representations for
the isotropic case \cite{cikt1,cikt2,cit}: they differ from them
only in that the "Fermi" (or "Bose") weight in the kernels
of the integral operators is replaced by a weight depending on the anisotropy
parameter. On the other hand these representations generalize the
results for the anisotropic chain obtained in \cite{iks} for zero
temperature. The "integrability" of the integral operators appearing
in the thermodynamic limit is evident.
One should note that the isotropic XX0 chain is the "free-fermion
point" for the XXZ Heisenberg chain which is a model solved by means
of the standard Bethe Ansatz. The anisotropic XY chain is the
"free-fermion line" for the XYZ chain which is a model where
the usual Bethe Ansatz doesn't work. Therefore it is very
interesting from our point of view that the answers for the
equal-time correlators have the form of a simple deformation
of the answers for the isotropic case. We hope to give a
corresponding description for the time-dependent correlation
functions in our next publication.
Our paper is organized as follows.
In the first section we describe the model and give the basic
facts about the diagonalization of the Hamiltonian using the Bogolyubov
transformation following the classical works \cite{lsm,McCBA}. In
section 2 we introduce the coherent states for isotropic
XX0 model and calculate the matrix elements of operators
between these states. In section 3 we describe the coherent
states for the anisotropic XY chain and consider their relations
with the coherent states of the isotropic chain. In section 4
the simplest correlator is calculated. In section 5 we calculate
the local spin correlator for the finite anisotropic lattice.
In section 6 we obtain the results in the thermodynamic limit.
In appendix A the basic facts about Grassmann coherent states
are given. In appendix B the derivation of the results for the
isotropic case is presented.
\section{The XY Heisenberg chain}
\setcounter{equation}{0}
The Hamiltonian of the XY spin chain describing the interaction
between nearest-neighbor
spins 1/2 placed at the sites of a one-dimensional periodic
lattice in a constant magnetic field $h$ is
\beq
{\bf H}={\bf H}_0+\gamma{\bf H}_1-h{\bf S}^z,
\label{(1.1)}
\eeq
where
\beq
\label{(1.2)}
{\bf H}_0=-\frac 12 \sum\limits_{m=1}^M (\sigma^+_m\sigma^-_{m+1}+\sigma^-_m\sigma^+_{m+1});
\eeq
\beq
\label{(1.3)}
{\bf H}_1=-\frac 12 \sum\limits_{m=1}^M (\sigma^+_m\sigma^+_{m+1}+\sigma^-_m\sigma^-_{m+1});
\eeq
and the third component of the total spin is
\beq
\label{(1.4)}
{\bf S}^z=\frac 12\sum\limits_{m=1}^M\sigma_m^z.
\eeq
The total number of sites $M$ is supposed to be even. Pauli matrices
are defined as usual
\beq
\label{(1.5)}
[\sigma^\alpha_m,\sigma^\beta_n]=2i\delta_{mn}\epsilon^{\alpha\beta\gamma}\sigma^\gamma_m (\alpha,\beta,\gamma=x,y,z);
\eeq
\[\sigma^\pm_m=\frac 12(\sigma_m^x\pm i\sigma_m^y),\]
with the periodic boundary conditions
\beq
\label{(1.6)}
\sigma^\alpha_{M+1}=\sigma^\alpha_1.
\eeq
Due to the symmetries of the Hamiltonian the sign of the magnetic field
(like the sign of the Hamiltonian) is not essential. We will assume that
$h\ge 0$.
The Jordan-Wigner transformation
\[a_m=\exp \left\{ i\pi{\bf Q}(m-1)\right\}\sigma^+_m;\]
\beq
a^+_m=\sigma^-_m\exp \left\{ i\pi{\bf Q}(m-1)\right\},
\label{(1.7)}
\eeq
introduces the canonical fermion fields $a_m,a_m^+$ on the lattice,
\[ [a_m,a_n]_+\equiv a_m a_n + a_n a_m= 0;\]
\beq
\label{(1.8)}
[a^+_m,a^+_n]_+=0, [a_m,a^+_n]_+=\delta_{mn}.
\eeq
Operator ${\bf Q}(m)$ is the operator of the number of particles at the
first $m$ sites of the lattice,
\beq
\label{(1.9)}
{\bf Q}(m)=\sum\limits_{j=1}^m q_j, \eeq where $q_m$ is the operator of the number
of particles at site $m$: \beq \label{(1.10)} q_m=a^+_m
a_m=\sigma_m^-\sigma_m^+=\frac 12 (1-\sigma^z_m). \eeq
The operator of the
total number of particles,
\beq
\label{(1.11)}
{\bf N}={\bf Q}(M),
\eeq
commutes with the operators ${\bf H}_0$ and ${\bf S}^z$ but does not
commute with the operator ${\bf H}_1$ and thus the total Hamiltonian
${\bf H}$ does not conserve the number of "$a$-fermions". At the same
time the operator $(-1)^{\bf N}=\exp \{\pm i\pi {\bf N}\}$,
anticommuting with the creation and annihilation operators
\beq
\label{(1.12)}
[(-1)^N,a^+_m]_+=[(-1)^N,a_m]_+=0,
\eeq
commutes with
any operators bilinear in $a_m,a_m^+$, in particular, with the
Hamiltonian \beq \label{(1.13)} [(-1)^{\bf N},{\bf H}]=0. \eeq
Periodic boundary conditions \eqref{(1.6)} for the spins lead
to the following conditions for the fermions:
\beq
\label{(1.14)}
a_{M+1}=(-1)^{\bf N} a_1; \bl{1} a^+_{M+1}=a^+_1 (-1)^{\bf N}.
\eeq
Introducing projectors ${\bf P^\pm}$
\[{\bf P^\pm}=\frac 12 (1\pm (-1)^{\bf N});\]
\beq
\label{(1.15)}
({\bf P^\pm})^2={\bf P^\pm}; \bl{1} {\bf P^+}+{\bf P^-}=1; \bl{1}
{\bf P^+}{\bf P^-}={\bf P^-}{\bf P^+}=0;
\eeq
\[{\bf P^\pm}a_m=a_m{\bf P^\mp}; \bl{1} [{\bf H},{\bf P^\pm}]=0,\]
one can rewrite the Hamiltonian in the following form \cite{McCBA}
\beq
\label{(1.16)}
{\bf H}={\bf H^+}{\bf P^+}+{\bf H^-}{\bf P^-}.
\eeq
Both operators ${\bf H}^\pm$ can be rewritten formally in the same form
\beq
\label{(1.17)}
{\bf H^\pm}=\frac 12 \sum\limits_{m=1}^M \left[ (a^+_m a_{m+1}+ a^+_{m+1} a_m)+
\gamma (a^+_m a^+_{m+1}+ a_{m+1} a_m)\right]+
\eeq
\[+h\sum\limits_{m=1}^M a^+_m a_m-
\frac{hM}{2},\]
the only difference between ${\bf H^+}$ and ${\bf H^-}$ being the
boundary conditions: \[a_{M+1}=-a_1;\bl{1} a^+_{M+1}=-a^+_1;\bl{1}
{\rm for} \bl{1}{\bf H^+},\] \beq \label{(1.18)} a_{M+1}=a_1;\bl{1}
a^+_{M+1}=a^+_1;\bl{1} {\rm for} \bl{1} {\bf H^-}. \eeq
Hence the Fourier transformations to the momentum representation are
different for these Hamiltonians. We
denote the sets of permitted quasimomenta $X^+$ for ${\bf H^+}$ and
$X^-$ for ${\bf H^-}$:
\beq
\label{(1.19)}
X^\pm=\left\{ p: \exp\{ ipM\}=\mp 1,\bl{1} p\in (-\pi,\pi]\right\},
\eeq
or explicitly:
\[X^+=\left\{ p_l=-\pi -\frac{\pi}{M}+
\frac{2\pi}{M}l, \bl{1}l=1,2,\dots,M\right\},\]
\beq
X^-=\left\{ p_l=-\pi +\frac{2\pi}{M}l, \bl{1}l=1,2,\dots,M\right\}.
\label{(1.20)}
\eeq
The corresponding formulae for the Fourier transformation can be
written in the following form
\[a_m=\frac{\exp \{-i\pi /4\}}{\sqrt{M}}\sum\limits_{p\in X^\pm} a_p
\exp\left\{ i(m-1)p\right\},\]
\beq
\label{(1.21)}
a^+_m=\frac{\exp \{i\pi /4\}}{\sqrt{M}}\sum\limits_{p\in X^\pm} a^+_p
\exp\left\{ -i(m-1)p\right\},
\eeq
(the summation is taken over $p\in X^+$ for ${\bf H^+}$ and over $p\in
X^-$ for ${\bf H^-}$) and \[a_p=\frac{\exp \{i\pi
/4\}}{\sqrt{M}}\sum\limits_{m=1}^M a_m \exp\left\{ -i(m-1)p\right\},\] \beq
\label{(1.22)}
a^+_p=\frac{\exp \{-i\pi /4\}}{\sqrt{M}}\sum\limits_{m=1}^M a^+_m
\exp\left\{ i(m-1)p\right\}.
\eeq
The Hamiltonians ${\bf H^\pm}$ in
the momentum representation are written as
\beq
\label{(1.23)}
{\bf H^\pm}=\sum\limits_{p\in X^\pm}\left[\varepsilon (p)a^+_p a_p+
\frac {\Gamma (p)}{2}(a^+_p a^+_{-p}+a_{-p}a_p)\right]-\frac{Mh}{2},
\eeq
where
\beq
\label{(1.24)}
\varepsilon (p)=h-\cos p;\bl{1} \Gamma (p)=\gamma\sin p.
\eeq
Diagonalization of these Hamiltonians can be done using
the Bogolyubov transformation (different for ${\bf H^+}$ and ${\bf H^-}$)
leading to new canonical fermion operators $A_p$ and $A_p^+$
\beq
\label{(1.25)}
\begin{array}{c}
A_p=\alpha (p)a_p-\beta (p)a^+_{-p};\\
A^+_p=\alpha (p)a^+_p+\beta (p)a_{-p},
\end{array}
\eeq
where
\beq
\alpha(p)=\cos \frac{\theta (p)}{2};\bl{1}\beta(p)=\sin\frac{\theta (p)}{2},
\label{(1.26)}
\eeq
and the angle $\theta (p)$ is defined by relations:
\beq
\begin{array}{c}
\cos\theta (p)=\frac{\varepsilon (p)}{E(p)};\bl{1}
\sin\theta (p)=-\frac{\Gamma (p)}{E(p)};\\
p\neq 0,\pi; \theta (p)=-\theta (-p),
\end{array}
\label{(1.27)}
\eeq
\beq
\label{(1.28)}
E(p)=\sqrt{\varepsilon^2(p)+\Gamma^2(p)}\geq0, \bl{1}(p\neq0,\pi).
\eeq
The momenta $p=0,\pi$ (appearing only in ${\bf H^-}$) should be treated
separately. Following \cite{McCBA} we put
\beq
\label{(1.29)}
\begin{array}{c}
A_0=a_0;\bl{1} A^+_0=a^+_0,\\
A_\pi=a_\pi;\bl{1} A^+_\pi=a^+_\pi,
\end{array}
\eeq
and
\beq
\label{(1.30)}
\begin{array}{c}
E(0)=h-1=\varepsilon (0)\bl{1} (E(0)<0 \bl{1} {\rm for} \bl{1} h<1)\\
E(\pi)=h+1=\varepsilon (\pi)>0.
\end{array}
\eeq
The Hamiltonians can be diagonalized as follows
\beq
\label{(1.31)}
{\bf H^\pm}=\sum\limits_{p\in X^\pm}E(p)A^+_p A_p+E_0^\pm,
\eeq
where the "vacuum energy" is
\beq
\label{(1.32)}
E_0^\pm=-\frac 12 \sum\limits_{p\in X^\pm}E(p),
\eeq
(to calculate $E_0^-$ one should take into account the definition \eqref{(1.30)}).
One should note that for $h<1$ the value $E_0^-$ is not the ground
state energy $E_g^-$; in this case $E_g^-=E_0^-+\varepsilon (0)$.
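The spectrum \eqref{(1.28)} is easy to check numerically: in the basis $(a_p,a^+_{-p})$ the quadratic
form \eqref{(1.23)} decomposes into $2\times 2$ blocks with eigenvalues $\pm E(p)$. A minimal numpy
sketch (our own illustration; the values of $M$, $\gamma$ and $h$ are arbitrary):
\begin{verbatim}
import numpy as np

M, gamma, h = 8, 0.6, 0.4
Xp = [-np.pi - np.pi/M + 2*np.pi*l/M for l in range(1, M + 1)]   # X^+

for p in [q for q in Xp if q > 0]:       # pair the modes (p, -p)
    eps, Gam = h - np.cos(p), gamma*np.sin(p)
    blk = np.array([[eps, Gam], [Gam, -eps]])   # block in (a_p, a^+_{-p})
    E = np.hypot(eps, Gam)
    assert np.allclose(np.sort(np.linalg.eigvalsh(blk)), [-E, E])
print("E(p) = sqrt(eps^2 + Gamma^2) confirmed")
\end{verbatim}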
\section{Coherent states for the XX0 chain}
\setcounter{equation}{0}
In this section we give some formulae for matrix elements of
operators between the coherent states of the isotropic XX0 chain which
are necessary for the following calculations. The corresponding
Hamiltonians ${\bf H}_{\rm XX0}^\pm$ (see \eqref{(1.23)})
\beq
\label{(2.1)}
{\bf H^\pm_{\rm XX0}}=\sum\limits_{p\in X^\pm} \varepsilon (p)a^+_p a_p-\frac{hM}{2}
\eeq
will be denoted simply ${\bf H^\pm}$ in this section for
simplicity. They are already diagonal in terms of the operators
$a_p,a_p^+$ \eqref{(1.21)}. One should note, however, that the Fock
vacuum (which is the same for ${\bf H^\pm}$),
\beq
\label{(2.2)}
a_m|0\rangle=0, \bl{1}
\langle 0|a^+_m =0 \bl{1} (m=1,2,\dots ,M),
\eeq
\[a_p|0\rangle=0, \bl{1} \langle
0|a^+_p =0, \bl{1} (p\in X^\pm ),\bl{2}\langle 0|0\rangle=1,\]
is the ground state
only for $h>1$.
One introduces the coherent states (see Appendix A) different for the
Hamiltonians ${\bf H^+}$ and ${\bf H^-}$
\beq
\label{(2.3)}
|\phi,\pm\rangle =\exp\left\{\sum\limits_{q\in X^\pm} a^+_q\phi_q\right\}|0\rangle,
\eeq
\beq
\label{(2.4)}
\langle\phi^*,\pm| =\l0|\exp\left\{\sum\limits_{q\in X^\pm}\phi^*_q a_q\right\}.
\eeq
The parameters $\phi_q,\phi_q^*$ (Grassmann algebra elements)
anticommute with each other and with all the operators
$a, a^+$. The main properties of the coherent states \eqref{(2.3)},
\eqref{(2.4)}
are described in Appendix A. They are eigenstates for the operators
$a_p$ and $a^+_p$:
\beq
\label{(2.5)}
a_p|\phi,\pm\rangle =\phi_p|\phi,\pm\rangle\bl{1}(p\in X^\pm),
\eeq
\beq
\label{(2.6)}
\langle\phi^*,\pm|a^+_p=\phi^*_p\langle\phi^*,\pm|\bl{1}(p\in X^\pm).
\eeq
The scalar product of the coherent states of one type is given by
the usual formulae \eqref{(A.4)}:
\beq
\label{(2.7)}
\langle\phi^*,+|\phi,+\rangle=\exp\left\{\sum\limits_{p\in X^+}\phi^*_p \phi_p\right\};
\eeq
\beq
\label{(2.8)}
\langle\phi^*,-|\phi,-\rangle=\exp\left\{\sum\limits_{q\in X^-}\phi^*_q \phi_q\right\}.
\eeq
When it cannot cause misunderstandings the sums on the
right hand sides of the equations \eqref{(2.3)}, \eqref{(2.4)},
\eqref{(2.7)}, \eqref{(2.8)} will be denoted $a^+\phi=\sum
a^+_q\phi_q;\ \phi^*\phi= \sum\phi^*_p\phi_p$ etc.
The scalar products of the coherent states of different type are
given by
\beq
\label{(2.9)}
\langle\phi^*,+|\psi,-\rangle=\exp\left\{\sum\limits_{p\in X^+ ,q\in X^-}\phi^*_p
L_{pq}\psi_q\right\}=\exp\{\phi^*L\psi\},
\eeq
\[\langle\psi^*,-|\phi,+\rangle=\exp\left\{\sum\limits_{p\in X^- ,q\in X^+}\psi^*_p
L_{pq}\phi_q\right\}=\exp\{\psi^*L\phi\},\]
where the matrix elements of the $M\times M$ matrix $L$ are
\beq
\label{(2.10)}
L_{pq}=\frac 2M\frac{1}{1-\exp\{-i(p-q)\}}=\frac iM\left(\cot\frac
{q-p}{2}-i\right).
\eeq
It becomes evident if one takes into account that
the Fock vacuum is the same for both types of states
and rewrites the scalar products in the "coordinate
representation" using the formulae \eqref{(1.21)} and \eqref{(1.22)}.
For example,
\[\langle\phi^*,+|\psi,-\rangle=\langle 0|\exp\left\{\sum\limits_{m=1}^M\phi^*_m a_m\right\}
\exp\left\{\sum\limits_{m=1}^M a^+_m\psi_m\right\}|0\rangle=\]
\beq
\label{(2.11)}
=\exp\left\{\sum\limits_{m=1}^M \phi_m^*\psi_m\right\},
\eeq
where
\[\phi^*_m=\frac{\exp \{i\pi /4\}}{\sqrt{M}}\sum\limits_{p\in X^+} \phi^*_p
\exp\left\{ -i(m-1)p\right\},\]
\beq
\label{(2.12)}
\psi_m=\frac{\exp \{-i\pi /4\}}{\sqrt{M}}\sum\limits_{q\in X^-} \psi_q
\exp\left\{ i(m-1)q\right\}.
\eeq
Consider now the matrix elements of the operator
$\exp \{\alpha {\bf Q}(m)\}$, where (see \eqref{(1.9)}) ${\bf Q}(m)$ is the
operator of the number of particles at the first $m$ sites
of the lattice:
\beq
\label{(2.13)}
{\bf Q}(m)=\sum\limits_{l=1}^m a^+_l a_l=\sum\limits_{p_1,p_2}a^+_{p_1}Q_{p_1,p_2}(m)a_{p_2}
\equiv a^+Q(m)a,
\eeq
(the quasimomenta $p_1$ and $p_2$ here correspond to the
states of the same type: $p_1, p_2\in X^+$ or $p_1, p_2\in X^-$).
Using the properties of the matrix $Q(m)$ (evident in the
coordinate representation),
\beq
\label{(2.14)}
Q^2(m)=Q(m); \bl{1}\exp\{\alpha Q(m)\}=I+(e^\alpha -1)Q(m),
\eeq
one has for the matrix elements between two states of the
same type
\[\langle\phi^*,\pm|\exp\{\alpha{\bf Q}(m)\}|\phi,\pm\rangle=\exp\left\{
\phi^*[I+(e^\alpha -1)Q(m)]\phi\right\}=\]
\beq
\label{(2.15)}
=\exp\left\{\sum\limits_{p\in X^\pm ,q\in X^\pm}\phi^*_p
[\delta_{pq}+(e^\alpha -1)Q_{pq}(m)]\phi_q\right\},
\eeq
the matrix elements of the $M\times M$ matrix $Q(m)$
being given as
\beq
\label{(2.16)}
Q_{pq}(m)=\exp\{ -i(m-1)p/2\}Q_{pq}^{(0)}(m)\exp\{ i(m-1)q/2\},
\eeq
\beq
\label{(2.17)}
Q_{pq}^{(0)}(m)=\frac 1M \frac{\sin \frac{m(p-q)}{2}}{\sin
\frac{p-q}{2}},
\eeq
(for the diagonal matrix elements one should use the l'H\^opital rule,
$Q_{pp}(m)=Q_{pp}^{(0)}(m)=m/M$).
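Both the projector property \eqref{(2.14)} and the explicit form \eqref{(2.16)}--\eqref{(2.17)} are
easy to verify numerically. The following numpy sketch (our own illustration; the values of $M$ and
$m$ are arbitrary) builds $Q(m)$ from its coordinate representation:
\begin{verbatim}
import numpy as np

M, m = 8, 3
Xp = np.array([-np.pi - np.pi/M + 2*np.pi*l/M for l in range(1, M + 1)])

n = np.arange(M)
F = np.exp(-1j*np.outer(Xp, n))/np.sqrt(M)   # plane waves: momenta x sites
Q = F[:, :m] @ F[:, :m].conj().T             # Q(m): sum over first m sites
assert np.allclose(Q @ Q, Q)                 # projector property (2.14)

P1, P2 = np.meshgrid(Xp, Xp, indexing='ij')  # explicit form (2.16)-(2.17)
d = P1 - P2
den = np.where(np.isclose(d, 0.0), 1.0, np.sin(d/2))
Q0 = np.where(np.isclose(d, 0.0), m/M, np.sin(m*d/2)/(M*den))
assert np.allclose(Q, np.exp(-1j*(m-1)*P1/2)*Q0*np.exp(1j*(m-1)*P2/2))
\end{verbatim}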
One has for the states of different types, analogously to
\eqref{(2.9)}, \[\langle\phi^*,\pm|\exp\{\alpha{\bf
Q}(m)\}|\phi,\mp\rangle=\exp\left\{ \phi^*[L+(e^\alpha -1)Q(m)]\phi\right\}=\]
\beq
\label{(2.18)}
=\exp\left\{\sum\limits_{p\in X^\pm ,q\in X^\mp}\phi^*_p
[L_{pq}+(e^\alpha -1)Q_{pq}(m)]\phi_q\right\},
\eeq
where the matrix elements $L_{pq}$ and $Q_{pq}(m)$ are given by the
formulae \eqref{(2.10)}, \eqref{(2.16)} (one should note however that
now if $p\in X^+$ then $q\in X^-$ and vice versa). In particular, one
has for the operator $\exp \{i\pi {\bf Q}(m)\}$ entering the
Jordan-Wigner transformation \[\langle\phi^*,\pm|\exp\{i\pi{\bf
Q}(m)\}|\phi,\mp\rangle=\exp\left\{ \phi^*L(m)\phi\right\}=\] \beq
\label{(2.19)}
=\exp\left\{\sum\limits_{p\in X^\pm ,q\in X^\mp}\phi^*_p
L_{pq}(m)\phi_q\right\},
\eeq
where
\beq
\label{(2.20)}
L_{pq}(m)=\exp\{-imp\}L_{pq}\exp\{imq\}.
\eeq
Turn now to the matrix elements ("form factors") of the local
spins. Since
\beq
\label{(2.21)}
{\bf P}^{\pm}\sigma^\alpha_m=\sigma^\alpha_m{\bf P}^{\mp} \bl{1}(\alpha=x,y,\bl{1} {\rm or
}\bl{1} \alpha=\pm),
\eeq
we need only the matrix elements of the local
operators $\sigma_m^\pm$ between the states of different type. A direct
calculation using \eqref{(1.7)}, \eqref{(1.22)} and \eqref{(2.18)}
gives: \[\langle\phi^*,\pm|\sigma^+_m|\psi,\mp\rangle=\psi_m(\mp)\exp\left\{
\phi^*L(m-1)\psi\right\}=\]
\beq
\label{(2.22)}
=\psi_m(\mp)\exp\left\{\sum\limits_{p\in X^\pm ,q\in X^\mp}\phi^*_p
L_{pq}(m-1)\psi_q\right\},
\eeq
\[\langle\phi^*,\pm|\sigma^-_m|\psi,\mp\rangle=\phi^*_m(\pm)\exp\left\{
\phi^*L(m-1)\psi\right\},\]
where we use natural notations
\[\psi_m(\pm)=\frac{\exp \{-i\pi /4\}}{\sqrt{M}}\sum\limits_{q\in X^\pm} \psi_q
\exp\left\{ i(m-1)q\right\},\]
\beq
\label{(2.23)}
\phi^*_m(\pm)=\frac{\exp \{i\pi /4\}}{\sqrt{M}}\sum\limits_{p\in X^\pm} \phi^*_p
\exp\left\{ -i(m-1)p\right\}.
\eeq
One should note that the matrices $L$ \eqref{(2.10)}, $L(m)$ \eqref{(2.20)} and $Q(m)$
\eqref{(2.16)} are related by the following simple formula which can be proved
by direct calculation:
\beq
\label{(2.24)}
\sum\limits_q L_{p_1q}(m)L_{qp_2}=\delta_{p_1p_2}-2Q_{p_1p_2}(m),
\eeq
\[(p_1,p_2\in X^+,\bl{1} q\in X^-)\bl{1}{\rm or}\bl{1}
(p_1,p_2\in X^-,\bl{1} q\in X^+).\]
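Identity \eqref{(2.24)} can also be confirmed numerically; a short numpy sketch (our own
illustration), with all three matrices built from their coordinate representations:
\begin{verbatim}
import numpy as np

M, m = 8, 3
Xp = np.array([-np.pi - np.pi/M + 2*np.pi*l/M for l in range(1, M + 1)])
Xm = np.array([-np.pi + 2*np.pi*l/M for l in range(1, M + 1)])

n = np.arange(M)
Fp = np.exp(-1j*np.outer(Xp, n))/np.sqrt(M)
Fm = np.exp(-1j*np.outer(Xm, n))/np.sqrt(M)

L = Fp @ Fm.conj().T                         # overlap matrix (2.10)
Lm = np.exp(-1j*m*Xp)[:, None]*L*np.exp(1j*m*Xm)[None, :]   # L(m), (2.20)
Q = Fp[:, :m] @ Fp[:, :m].conj().T           # Q(m) of (2.16) on X^+

assert np.allclose(Lm @ L.conj().T, np.eye(M) - 2*Q)        # (2.24)
\end{verbatim}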
Using these formulae one can reproduce the time-dependent temperature
correlation functions for the isotropic XX0 chain obtained in
\cite{cikt1,cikt2,cit}. The
corresponding calculation is given in Appendix B. In the next
sections we consider the anisotropic XY chain.
\section{Coherent states for the anisotropic XY chain}
\setcounter{equation}{0}
In this section the formulae for the matrix elements
of operators between the coherent states of the anisotropic
XY chain are given. We consider now two sets of canonical
fermion operators, $a_p,a_p^+$ and $A_p,A_p^+$, related
by the Bogolyubov transformation \eqref{(1.25)}. For each of these sets
we introduce the coherent states ("old" and "new" ones) and
calculate the scalar products of old and new states.
Consider first the relations between the two vacuum states $|0\rangle$ and
$|0\rangle\r$ for the old and new sets
\beq
\label{(3.1)}
a_p|0\rangle=0, \bl{1}\l0|a^+_p=0, \bl{1} \forall p;\bl{1}\l0|0\rangle=1,
\eeq
\beq
\label{(3.2)}
A_p|0\rangle\r=0, \bl{1}\langle\l0|A^+_p=0, \bl{1}\langle\l0|0\rangle\r=1.
\eeq
These states are related as follows:
\beq
\label{(3.3)}
|0\rangle\r=N^{-1/2}\Omega^+|0\rangle;\bl{1}\langle\l0|=N^{-1/2}\l0|\Omega,
\eeq
where the operators $\Omega^+$ and $\Omega$ are
\beq
\label{(3.4)}
\begin{array}{c}
\Omega^+=\exp\left\{\frac 12\sum\limits_p\tau (p)a^+_p a^+_{-p}\right\};\\
\Omega=\exp\left\{\frac 12\sum\limits_p\tau (p)a_{-p} a_{p}\right\},
\end{array}
\eeq
and
\beq
\label{(3.5)}
\tau (p)\equiv\tan\frac{\theta (p)}{2}.
\eeq
The normalization coefficient $N$ can be represented
as a determinant of
a diagonal $M\times M$ matrix,
\beq
\label{(3.6)}
N=\langle 0|\Omega^+\Omega|0\rangle=\mbox{det} (I+T),
\eeq
where $I$ is the identity matrix and
\beq
\label{(3.7)}
T=\mbox{diag}(i\tau (p)).
\eeq
Properties \eqref{(3.2)} for the states \eqref{(3.3)} can be easily checked
using the commutation relations
\[[a_p,\Omega^+]=\tau(p)\Omega^+a^+_{-p}\bl{1} {\rm and}\bl{1}
[\Omega,a_p^+]=\tau(p)a_{-p}\Omega.\]
To calculate the normalization coefficient one makes use of
equation \eqref{(A.6)}, representing
\[N=\int d\xi
d\xi^*\exp\left\{ -\xi^*\xi +\frac 12 \sum\limits_p\tau(p)(\xi^*_p\xi^*_{-p}+
\xi_{-p}\xi_p)\right\}.\]
Let us make the change of variables
\beq
\label{(3.8)}
\omega_p=\frac{1}{\sqrt{2}}(\xi_p+i\xi^*_{-p});\bl{1}
\omega^*_p=\frac{1}{\sqrt{2}}(\xi^*_p+i\xi_{-p}).
\eeq
The Jacobian of this transformation is equal to 1, $d\xi d\xi^*=d\omega
d\omega^*$. One can easily check that \beq \label{(3.9)}
\xi^*\xi=\omega^*\omega;\bl{1}\frac 12\sum\limits_p\tau (p)(\xi^*_p\xi^*_{-p}+
\xi_{-p}\xi_p)=-\omega^*T\omega,
\eeq
where the matrix $T$ is defined in \eqref{(3.7)}. Hence, $N=\int d\omega d\omega^*
\exp\{-\omega^*(I+T)\omega\}=\mbox{det} (I+T)$, which proves the relation \eqref{(3.6)}.
Introduce now the coherent states for the new set of operators
(compare with \eqref{(A.2)}):
\[|X\rangle\r=\exp\left\{\sum\limits_pA^+_pX_p\right\}|0\rangle\r;\]
\beq
\label{(3.10)}
\langle\l X^*|=\langle\l0|\exp\left\{\sum\limits_p X^*_p A_p\right\}.
\eeq
The Grassmann algebra elements $X_p, X_p^*$ have the same properties
as the old parameters $\xi_p,\xi_p^*$. The formulae
\eqref{(A.2)}-\eqref{(A.10)}
are valid, of course, also for the new states. A direct calculation
gives the following representations for the scalar products of old and
new states
\[\langle\xi^*|X\rangle\r=N^{-1/2}\exp\left\{\frac
12\sum\limits_p\tau (p)(\xi^*_p \xi^*_{-p}-
X_{-p}X_p)+\sum\limits_p\alpha^{-1}(p)\xi^*_pX_p\right\},\]
\beq \label{(3.11)} \langle\l
X^*|\xi\rangle=N^{-1/2}\exp\left\{\frac 12\sum\limits_p\tau (p)( \xi_{-p}\xi_p-
X^*_pX^*_{-p})+\sum\limits_p\alpha^{-1}(p)X^*_p\xi_p\right\}.\eeq
Considering the XY
model it is necessary to introduce different sets of operators
$a_p,a_p^+$ and $A_p,A_p^+$ corresponding to the sets of momenta $X^+$
(for ${\bf H^+}$) and $X^-$ (for ${\bf H^-}$), see
\eqref{(1.19)},\eqref{(1.20)}. Thus the new vacua $|0\rangle\r$ will
also be different for ${\bf H^+}$ and ${\bf H^-}$ (unlike the
old vacuum $|0\rangle$). We will usually not indicate
this difference in our notation, but one should keep it in mind.
Using the equation \eqref{(3.11)}, one can calculate the matrix
elements of the operators $\exp\{-\beta {\bf H^\pm}\}$ diagonal in the new
representations (different for ${\bf H^+}$ and
${\bf H^-}$!) between old coherent states
(also different for ${\bf H^+}$ and ${\bf H^-}$):
\beq
\label{(3.12)}
\langle\xi^*|e^{-\beta {\bf H^\pm}}|\xi\rangle=N^{-1}e^{-\beta E_0^\pm}\mbox{det} (I-J_\beta T)
e^{\omega^*D\omega},
\eeq
where the variables $\omega_p, \omega_p^*$ ($p\in X^+$ for ${\bf H^+}$
and $p\in X^-$ for ${\bf H^-}$) are defined by the equation
\eqref{(3.8)}, and diagonal $M\times M$ matrices $J_\beta$ and $D$ are
\beq \label{(3.13)} J_\beta=\mbox{diag}(j_\beta (p));\bl{1} j_\beta
(p)=\exp\{-\beta E(p)\}, \eeq
\beq \label{(3.14)} D=\mbox{diag}
(d(p));\bl{1} d(p)= \frac{j_\beta (p)-i\tau (p)}{1-i\tau(p)j_\beta (p)}, \eeq
(the matrix $T$ is defined in \eqref{(3.7)}).
To prove it we use the completeness of the new states \eqref{(A.6)},
\[\langle\xi^*|e^{-\beta {\bf H^\pm}}|\xi\rangle=\int dX dX^* dY dY^*\times\]
\[\times\langle\xi^*|X\rangle\r\langle\l Y^*|e^{-\beta {\bf H^\pm}}|Y\rangle\r\langle\l X^*|\xi\rangle
\exp\{-Y^*X-X^*Y\},\]
and note that the operators ${\bf H^\pm}$ are diagonal in the corresponding
new representations \eqref{(1.31)}:
\beq
\label{(3.15)}
\langle\l Y^*|e^{-\beta {\bf H^\pm}}|Y\rangle\r=e^{-\beta E_0^\pm}\exp\left\{\sum\limits_{p\in X^\pm}
e^{-\beta E(p)}Y^*_pY_p\right\}.
\eeq
After the integration over $Y, Y^*$ one gets
\[\langle\xi^*|e^{-\beta {\bf H^\pm}}|\xi\rangle=\int dX dX^*\langle\xi^*|X\rangle\r\langle\l X^*|\xi\rangle\times\]
\[\times e^{-\beta E_0^\pm}\exp\left\{\sum\limits_{p\in X^\pm}e^{-\beta E(p)} X^*_pX_p
\right\}.\]
The integral over $X, X^*$ can be calculated by means of the change of
variables as in \eqref{(3.8)}. Finally one gets the formula
\eqref{(3.12)}.
In the following sections the results obtained here will be used to
calculate the equal-time correlators.
\section{ The simplest correlator for the XY chain}
\setcounter{equation}{0}
In this section the partition function $Z$ and the
generating functional of the third components of local spins
are calculated for the anisotropic chain. We begin
by considering the partition function. The initial representation
is the same as in the isotropic case \eqref{(B.1)}
\beq
\label{(4.1)}
Z=\frac 12 (Z_F^++Z_F^-+Z_B^+-Z_B^-),
\eeq
where
\beq
\label{(4.2)}
\begin{array}{c}
Z_F^\pm=\mbox{Tr}\exp\{-\beta{\bf H^\pm}\};\\
Z_B^\pm=\mbox{Tr}(\exp\{-\beta{\bf H^\pm}\}(-1)^{\bf N}).
\end{array}
\eeq
One gets the following representations for the contributions
(see for example \cite{ki}; the $M \times M $ matrix $J_\beta$ is
defined in \eqref{(3.13)}):
\beq
\label{(4.3)}
Z_F^\pm=e^{-\beta E_0^\pm}\mbox{det} (I+J_\beta )=\prod\limits_{p\in X^\pm}\left(
2\cosh\frac{\beta E(p)}{2}\right),
\eeq
\beq
\label{(4.4)}
Z_B^\pm=e^{-\beta E_0^\pm}\mbox{det} (I-J_\beta )=\prod\limits_{p\in X^\pm}\left(
2\sinh\frac{\beta E(p)}{2}\right).
\eeq
To obtain, e.g., the fermionic contributions one
should use the representation \eqref{(A.5)} for the trace of
operators:
\[Z_F^\pm=\int dY dY^* \langle\l Y^*|e^{-\beta {\bf H^\pm}}|Y\rangle\r\exp\{Y^*Y\},\]
and the representation \eqref{(3.15)} for the matrix element involved;
after that one should calculate the Gaussian integral leading to
the equality \eqref{(4.3)}. The "bosonic" contrubutions can be
calculated analogously.
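For small even $M$ the representations \eqref{(4.1)}--\eqref{(4.4)} can be compared against a
brute-force trace over the $2^M$-dimensional spin space. The following numpy sketch is our own
cross-check (all parameter values are arbitrary; the special mode of $X^-$ uses the signed value
$E(0)=h-1$ of \eqref{(1.30)}):
\begin{verbatim}
import numpy as np

M, gamma, h, beta = 4, 0.7, 0.3, 1.2     # arbitrary test values, M even

# brute-force trace over the 2^M spin states
id2 = np.eye(2); sz = np.diag([1.0, -1.0])
sp_ = np.array([[0.0, 1.0], [0.0, 0.0]]); sm_ = sp_.T
def site(o, j):
    out = np.array([[1.0]])
    for i in range(M):
        out = np.kron(out, o if i == j else id2)
    return out
H = np.zeros((2**M, 2**M))
for j in range(M):
    k = (j + 1) % M                      # periodic boundary conditions
    H -= 0.5*(site(sp_, j) @ site(sm_, k) + site(sm_, j) @ site(sp_, k))
    H -= 0.5*gamma*(site(sp_, j) @ site(sp_, k) + site(sm_, j) @ site(sm_, k))
    H -= 0.5*h*site(sz, j)               # -h S^z, S^z = (1/2) sum sigma^z
Z_exact = np.exp(-beta*np.linalg.eigvalsh(H)).sum()

# free-fermion products (4.3), (4.4)
def E(p, minus):
    if minus and np.isclose(p, 0.0): return h - 1.0     # signed, (1.30)
    if minus and np.isclose(abs(p), np.pi): return h + 1.0
    return np.hypot(h - np.cos(p), gamma*np.sin(p))
Xp = [-np.pi - np.pi/M + 2*np.pi*l/M for l in range(1, M + 1)]
Xm = [-np.pi + 2*np.pi*l/M for l in range(1, M + 1)]
ZFp = np.prod([2*np.cosh(beta*E(p, False)/2) for p in Xp])
ZBp = np.prod([2*np.sinh(beta*E(p, False)/2) for p in Xp])
ZFm = np.prod([2*np.cosh(beta*E(p, True)/2) for p in Xm])
ZBm = np.prod([2*np.sinh(beta*E(p, True)/2) for p in Xm])
print(Z_exact, 0.5*(ZFp + ZFm + ZBp - ZBm))   # the two numbers agree
\end{verbatim}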
It is worth mentioning that it is sometimes convenient to represent
the answer differently using the old coherent states, for example,
\[Z_F^\pm=\int d\xi d\xi^*\langle\xi^*|e^{-\beta {\bf
H^\pm}}|\xi\rangle\exp\{\xi^*\xi\}. \]
By means of equation \eqref{(3.12)} for the
corresponding matrix element one gets
\beq \label{(4.5)}
Z_F^\pm=N^{-1}e^{-\beta E_0^\pm}\mbox{det} (I-J_\beta T)\mbox{det} (I+D),
\eeq
\beq
\label{(4.6)}
Z_B^\pm=N^{-1}e^{-\beta E_0^\pm}\mbox{det} (I-J_\beta T)\mbox{det} (I-D).
\eeq
Turn now to the simplest equal-time temperature correlator
\beq
\label{(4.7)}
G(m)=\frac 1Z\mbox{Tr}\left(e^{\alpha{\bf Q}(m)}e^{-\beta{\bf H}}\right).
\eeq
As for the isotropic chain (\eqref{(B.8)},\eqref{(B.9)}) it can be represented as
a sum of four contributions
\beq
\label{(4.8)}
G(m)=\frac 1{2Z} (Z_F^+G_F^++Z_F^-G_F^-+Z_B^+G_B^+-Z_B^-G_B^-),
\eeq
where
\[Z_F^\pm G_F^\pm=\mbox{Tr}\left(e^{\alpha{\bf Q}(m)}e^{-\beta{\bf
H^\pm}}\right),\]
\beq \label{(4.9)} Z_B^\pm
G_B^\pm=\mbox{Tr}\left(e^{\alpha{\bf Q}(m)}e^{-\beta{\bf H^\pm}}(-1)^{\bf N}
\right).
\eeq
One has for the contributions $G_{F,B}^\pm$ the following representations
as determinants of $M\times M$ matrices
\beq
\label{(4.10)}
G_F^\pm=\mbox{det}\left(I+(e^\alpha-1)Q^{(0)}(m)\Omega_F\right),
\eeq
\beq
\label{(4.11)}
G_B^\pm=\mbox{det}\left(I-(e^\alpha-1)Q^{(0)}(m)\Omega_B\right).
\eeq
Here the matrix elements of $Q^{(0)}(m)$ are given by \eqref{(2.17)}
with $p,q\in X^+$ for $G^+_{F,B}$ and $p,q\in X^-$ for $G^-_{F,B}$.
The diagonal $M\times M$ matrices $\Omega_F$ and $\Omega_B$ are given
by the following formulae ($p\in X^\pm$ for $G^\pm_{F,B}$)
\[
\Omega_F=D(I+D)^{-1}=\mbox{diag}(\omega_F(p));\]
\beq
\label{(4.12)}
\omega_F(p)=\frac 12 \left( 1-e^{i\theta (p)}\tanh\frac{\beta E(p)}{2}\right),
\eeq
\[\Omega_B=D(I-D)^{-1}=\mbox{diag}(\omega_B(p));\]
\beq
\label{(4.13)}
\omega_B(p)=\frac 12 \left( e^{i\theta (p)}\coth\frac{\beta E(p)}{2}-1\right),
\eeq
(the matrix $D$ is defined in \eqref{(3.14)}).
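As with the partition function, these representations can be tested on small lattices against the
brute-force evaluation of \eqref{(4.7)}. The numpy sketch below is our own cross-check (all
parameter values are arbitrary; $e^{i\theta(p)}=(\varepsilon(p)-i\Gamma(p))/E(p)$, with
$e^{i\theta}=1$ and the signed energies at the special modes $p=0,\pi$ of $X^-$); it should
reproduce $G(m)$ for even $M$.
\begin{verbatim}
import numpy as np

M, gamma, h, beta, alpha, m = 4, 0.7, 0.3, 1.2, 0.5, 2   # test values

# exact G(m) = Tr(exp(alpha Q(m)) exp(-beta H))/Z on the 2^M spin states
id2 = np.eye(2); sz = np.diag([1.0, -1.0])
sp_ = np.array([[0.0, 1.0], [0.0, 0.0]]); sm_ = sp_.T
def site(o, j):
    out = np.array([[1.0]])
    for i in range(M):
        out = np.kron(out, o if i == j else id2)
    return out
H = np.zeros((2**M, 2**M))
for j in range(M):
    k = (j + 1) % M
    H -= 0.5*(site(sp_, j) @ site(sm_, k) + site(sm_, j) @ site(sp_, k))
    H -= 0.5*gamma*(site(sp_, j) @ site(sp_, k) + site(sm_, j) @ site(sm_, k))
    H -= 0.5*h*site(sz, j)
Q = sum(site((id2 - sz)/2, j) for j in range(m))   # Q(m) is diagonal
w, V = np.linalg.eigh(H)
rho = V @ np.diag(np.exp(-beta*w)) @ V.T
G_exact = np.trace(np.diag(np.exp(alpha*np.diag(Q))) @ rho)/np.trace(rho)

# determinant representation (4.8)-(4.13)
def sector(ps, minus):
    E, eiT = np.zeros(M), np.zeros(M, complex)
    for i, p in enumerate(ps):
        if minus and np.isclose(p, 0.0): E[i], eiT[i] = h - 1.0, 1.0
        elif minus and np.isclose(abs(p), np.pi): E[i], eiT[i] = h + 1.0, 1.0
        else:
            eps, Gm = h - np.cos(p), gamma*np.sin(p)
            E[i] = np.hypot(eps, Gm); eiT[i] = (eps - 1j*Gm)/E[i]
    wF = 0.5*(1 - eiT*np.tanh(beta*E/2))           # weights (4.12)
    wB = 0.5*(eiT/np.tanh(beta*E/2) - 1)           # weights (4.13)
    d = np.subtract.outer(ps, ps)
    den = np.where(np.isclose(d, 0.0), 1.0, np.sin(d/2))
    Q0 = np.where(np.isclose(d, 0.0), m/M, np.sin(m*d/2)/(M*den))
    GF = np.linalg.det(np.eye(M) + (np.exp(alpha) - 1)*Q0*wF[None, :])
    GB = np.linalg.det(np.eye(M) - (np.exp(alpha) - 1)*Q0*wB[None, :])
    return np.prod(2*np.cosh(beta*E/2)), np.prod(2*np.sinh(beta*E/2)), GF, GB

Xp = np.array([-np.pi - np.pi/M + 2*np.pi*l/M for l in range(1, M + 1)])
Xm = np.array([-np.pi + 2*np.pi*l/M for l in range(1, M + 1)])
ZFp, ZBp, GFp, GBp = sector(Xp, False)
ZFm, ZBm, GFm, GBm = sector(Xm, True)
Z = 0.5*(ZFp + ZFm + ZBp - ZBm)
G_det = (ZFp*GFp + ZFm*GFm + ZBp*GBp - ZBm*GBm)/(2*Z)
print(G_exact, G_det.real)               # the two values agree
\end{verbatim}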
Let us now explain how to calculate the contribution $G_F^+$:
\beq
\label{(4.14)}
Z_F^+G_F^+=\int d\xi d\xi^*d\eta d\eta^*\langle\eta^*|e^{\alpha{\bf Q}(m)}|\eta\rangle
\langle\xi^*|e^{-\beta{\bf H^+}}|\xi\rangle e^{\eta^*\xi-\xi^*\eta}.
\eeq
The matrix elements on the right hand side are given by \eqref{(2.15)}
and \eqref{(3.12)}. We change the variables $\eta^*\rightarrow -\eta^*$;
since $M$ is even, the integration measure is invariant. After that
we change the variables as in \eqref{(3.8)}
\beq
\label{(4.15)}
\begin{array}{c}
\omega_p=\frac{1}{\sqrt{2}}(\xi_p+i\xi^*_{-p});\bl{1}
\omega^*_p=\frac{1}{\sqrt{2}}(\xi^*_p+i\xi_{-p}),\\
\rho_p=\frac{1}{\sqrt{2}}(\eta_p+i\eta^*_{-p});\bl{1}
\rho^*_p=\frac{1}{\sqrt{2}}(\eta^*_p+i\eta_{-p}).
\end{array}
\eeq
As a result of this change of variables the measure remains
invariant and
\[\eta^*\xi+\xi^*\eta=\rho^*\omega+\omega^*\rho,\]
\beq
\label{(4.16)}
\eta^*(I+(e^\alpha-1)Q(m))\eta=\rho^*(I+(e^\alpha-1)Q(m))\rho.
\eeq
The integration over $\omega,\omega^*$ of the factors depending
on these variables in \eqref{(4.14)} gives $\mbox{det} D\exp\{-\rho^*D^{-1}\rho\}$.
One can calculate the remaining Gaussian integral over $\rho, \rho^*$
taking into account the representation \eqref{(4.5)} for the partition
functions and equation \eqref{(4.12)}:
\beq
\label{(4.17)}
Z_F^+G_F^+=Z_F^+\mbox{det}(I+(e^\alpha-1)Q(m)\Omega_F).
\eeq
One should note that the matrix $Q(m)$ \eqref{(2.16)} differs from the
matrix $Q^{(0)}(m)$ only by an evident similarity transformation with
a diagonal matrix. Thus the representation \eqref{(4.10)} for $G_F^+$ is
proved. The derivation of the contribution $G_F^-$ is almost the same,
one should only use the momenta from the set $X^-$. To calculate the
bosonic contributions one should use the
property of the operator $(-1)^{\bf N}$,
\beq
\label{(4.18)}
(-1)^{\bf N}|\xi\rangle=|-\xi\rangle,
\eeq
in the representation similar to \eqref{(4.14)}, which evidently changes
the calculations, leading to the result \eqref{(4.11)}.
We should make a remark about the equations \eqref{(4.10)},
\eqref{(4.11)}. Since the sets $X^+$ and $X^-$ are symmetric under the
change of momenta $p_i\rightarrow -p_i$ (the momenta $0$ and $\pi$
from the set $X^-$ are an exception, but the result is valid
also in this case) one can rewrite the answer as (the sign "+" on the
right hand side corresponds to $G_F$ and the sign "-" corresponds to
$G_B$)
\[G^\pm_{F,B}=\mbox{det}(I\pm(e^\alpha-1)Q(m)\Omega_{F,B})=\] \[=\mbox{det}(I\pm
(e^\alpha-1)\Omega^{1/2}_{F,B}Q(m)\Omega^{1/2}_{F,B})=\]
\beq
\label{(4.19)}
=\mbox{det}(I\pm
(e^\alpha-1)\bar{\Omega}^{1/2}_{F,B}Q(m)\bar{\Omega}^{1/2}_{F,B})= \eeq
\[=\mbox{det}(I\pm(e^\alpha-1)Q(m)\bar{\Omega}_{F,B}).\]
The bar means here the complex conjugation and $\omega_{F,B}(-p)=
\bar{\omega}_{F,B}(p)$.
To conclude we discuss some limiting cases.
In the zero temperature limit ($\beta\rightarrow\infty$) the "odd" contributions
to the correlator are cancelled and one gets the representation for $G(m)$
\beq
\label{(4.20)}
G(m)=\mbox{det}(I+(e^\alpha-1)Q^{(0)}(m)\Omega_0)\bl{1} (T=0),
\eeq
where
\beq
\label{(4.21)}
\Omega_0=\mbox{diag}(\omega_0(p)),\bl{1} \omega_0(p)=\frac 12
\left(1-e^{i\theta(p)}\right),
\eeq
coinciding with the result obtained in \cite{iks}. On the other hand,
for the isotropic case ($\gamma =0$), taking into account that the angle
$\theta (p)$ in the Bogolyubov transformation is
\beq
\label{(4.22)}
\begin{array}{c}
\theta(p)=-\pi\mbox{sign}p,\bl{1}|p|<k_F;\\
\theta(p)=0,\bl{1}|p|>k_F\bl{1} (\gamma=0),
\end{array}
\eeq
one gets for the weights
\[\omega_F(p)=\frac 12 \left( 1-\tanh\frac{\beta
\varepsilon(p)}{2}\right),\]
\beq
\label{(4.23)}
\omega_B(p)=\frac 12 \left( \coth\frac{\beta
\varepsilon(p)}{2}-1\right) \bl{1} (\gamma=0).
\eeq
Here $k_F=\arccos
h$ is the Fermi momentum and $\varepsilon (p)$ is the dispersion of the
XX0 chain (see \eqref{(1.24)}). Thus one reproduces the answers for
the isotropic case \cite{cikt1,cit}.
\section{Equal-time correlators of the local spins}
\setcounter{equation}{0}
Here we consider the equal-time correlation functions of
the local spin operators ($\beta\equiv 1/T$):
\beq
\label{(5.1)}
G^{(ab)}(m)=\langle\sigma^a_{m+1}\sigma^b_1\rangle_T=\frac 1Z \mbox{Tr}(\sigma^a_{m+1}\sigma^b_1
e^{-\beta{\bf H}}),\bl{1} a,b=+,-.
\eeq
These correlators on a finite lattice of length $M$ can be represented
(as in \eqref{(B.8)}) as a sum of four contributions
\beq
\label{(5.2)}
G^{(ab)}=\frac 1{2Z} \left(Z_F^+G_F^{(ab),+}+Z_F^-G_F^{(ab),-}
+Z_B^+G_B^{(ab),+}-Z_B^-G_B^{(ab),-}\right),
\eeq
where
\[Z_F^\pm G_F^{(ab),\pm}=\mbox{Tr}\left(\sigma^a_{m+1}\sigma^b_1e^{-\beta{\bf
H^\pm}}\right),\]
\beq
\label{(5.3)}
Z_B^\pm
G_B^{(ab),\pm}=\mbox{Tr}\left(\sigma^a_{m+1}\sigma^b_1e^{-\beta{\bf
H^\pm}}(-1)^{\bf N} \right),
\eeq
(partition functions
$Z_{F,B}^\pm$ are given by \eqref{(4.3)}, \eqref{(4.4)}). For the
contributions we obtain the following representations as determinants
of $M\times M$ matrices:
\beq
\label{(5.4)}
G_F^{(-+),\pm}=G_F^{(+-),\pm}=\left.\frac{\partial}{\partial\alpha}\mbox{det} (I+U\Omega_F
+\alpha C\Omega_F)\right|_{\alpha=0}\bl{1}(m>0),
\eeq
\[
G_F^{(-+),\pm}=\frac 1M \mbox{tr}\Omega_F=\frac 1M\sum\limits_p\omega_F(p)\bl{1}
(m=0);\]
\beq
\label{(5.5)}
G_F^{(+-),\pm}=1-\frac 1M \mbox{tr}\Omega_F\bl{1}
(m=0),
\eeq
\beq
\label{(5.6)}
G_F^{(++),\pm}=G_F^{(--),\pm}=\left.\frac{\partial}{\partial\alpha}\mbox{det} (I+U\Omega_F
-i\alpha S\Omega_F)\right|_{\alpha=0}\bl{1}(m\ge 0),
\eeq
\beq
\label{(5.7)}
G_B^{(-+),\pm}=G_B^{(+-),\pm}=\left.\frac{\partial}{\partial\alpha}\mbox{det} (I-U\Omega_B
-\alpha C\Omega_B)\right|_{\alpha=0}\bl{1}(m>0),
\eeq
\beq
\label{(5.8)}
G_B^{(-+),\pm}=-\frac 1M \mbox{tr}\Omega_B;\bl{1}
G_B^{(+-),\pm}=1+\frac 1M \mbox{tr}\Omega_B\bl{1} (m=0),
\eeq
\beq
\label{(5.9)}
G_B^{(++),\pm}=G_B^{(--),\pm}=\left.\frac{\partial}{\partial\alpha}\mbox{det} (I-U\Omega_B
-i\alpha S\Omega_B)\right|_{\alpha=0}\bl{1}(m\ge 0).
\eeq
The diagonal $M\times M$ weight matrices $\Omega_F$ and $\Omega_B$ are
defined in \eqref{(4.12)}, \eqref{(4.13)}. Matrix elements of $M\times
M$ matrices $U,C,S$ are given by the following formulae \beq
\label{(5.10)}
U_{p_1p_2}=-\exp\left\{\frac i2(p_1-p_2)\right\}Q^{(0)}_{p_1,p_2}(m),
\eeq
(the definition of the matrix $Q^{(0)}(m)$ is in \eqref{(2.17)}),
\beq
\label{(5.11)}
C_{p_1p_2}=\frac 1M\cos\frac m2(p_1+p_2);
\eeq
\beq
\label{(5.12)}
S_{p_1p_2}=\frac 1M\sin\frac m2(p_1+p_2).
\eeq
It is necessary to emphasize that the momenta enumerating the matrix elements
satisfy $p_1,p_2\in X^+$ for the contributions $G_{F,B}^{(ab),+}$ and
$p_1,p_2\in X^-$ for the contributions $G_{F,B}^{(ab),-}$; the same
is true for the weight matrices.
Calculating the functions $G_{F,B}^{(ab),\pm}$ reduces to calculating
Gaussian integrals in the Grassmann variables. For example, using
\eqref{(A.5)} and \eqref{(A.6)} we represent
\beq
\label{(5.13)}
Z_F^+G_F^{(ab)+}=\int d\xi d\xi^*d\eta d\eta^*\langle\eta^*|
\sigma^a_{m+1}\sigma_1^b|\eta\rangle
\langle\xi^*|e^{-\beta{\bf H^+}}|\xi\rangle e^{\eta^*\xi-\xi^*\eta}.
\eeq
Consider the calculation of the matrix element
\[F_{ab}=\langle\eta^*|\sigma^a_{m+1}\sigma_1^b|\eta\rangle=\]
\[=\int d\zeta d\zeta^*\langle\eta^*|\sigma^a_{m+1}|\zeta\rangle\langle\zeta^*|
\sigma_1^b|\eta\rangle e^{-\zeta^*\zeta}.\]
Let us use the formulae \eqref{(2.22)} for the matrix elements of spin
operators and make the change of variables \beq \label{(5.14)} \begin{array}{c}
\tilde{\eta}_p^*=e^{-\frac{im}{2}p}\eta^*_p;\bl{1}
\tilde{\eta}_p=e^{\frac{im}{2}p}\eta_p\bl{1}(p\in X^+),\\
\tilde{\zeta}_q^*=e^{-\frac{im}{2}q}\zeta^*_q;\bl{1}
\tilde{\zeta}_q=e^{\frac{im}{2}q}\zeta_q\bl{1}(q\in X^-).
\end{array}
\eeq
One gets the representation for the matrix element using the new variables
(we omit tildes over the new variables \eqref{(5.14)}):
\beq
\label{(5.15)}
F_{ab}=\frac{\partial}{\partial\alpha}\int \left.d\zeta d\zeta^*\exp\{\omega+\alpha
f_{ab}\}\right|_{\alpha=0},
\eeq
where
\beq
\label{(5.16)}
\omega=\eta^*PL\bar{P}\zeta+\zeta^*\bar{P}LP\eta-\zeta^*\zeta,
\eeq
\beq
\label{(5.17)}
f_{-+}=\eta^*R_+\eta;\bl{1}(R_+)_{p_1p_2}=
\frac 1M e^{-\frac{im}{2}(p_1+p_2)},
\eeq
\beq
\label{(5.18)}
f_{+-}=\zeta^*\bar{R}_+\zeta;\bl{1}(\bar{R}_+)_{q_1q_2}=
\frac 1M e^{\frac{im}{2}(q_1+q_2)},
\eeq
\beq
\label{(5.19)}
f_{++}=i\eta R_-\zeta;\bl{1}(R_-)_{pq}=
\frac 1M e^{-\frac{im}{2}(p-q)},
\eeq
\beq
\label{(5.20)}
f_{--}=i\zeta^* \bar{R}_-\eta^*;\bl{1}(\bar{R}_-)_{qp}=
\frac 1M e^{\frac{im}{2}(q-p)}=(R_-)_{pq},
\eeq
(the bar means the complex conjugation). The diagonal
matrix $P$ has the form
\beq
\label{(5.21)}
P=\mbox{diag}\left(e^{-\frac{im}{2}p}\right).
\eeq
Use now the following identities
\[\sum\limits_q L_{pq}e^{imq}=(2\delta_{m,0}-1)e^{ipm};\]
\beq
\label{(5.22)}
\sum\limits_q e^{imq} L_{qp}=e^{ipm} \bl{1} (p\in X^+,q\in X^-,
\bl{1}m=0,1,\dots,M-1),
\eeq
\beq
\label{(5.23)}
(PL\bar{P}\bar{R}_+\bar{P}LP)_{p_1p_2}=-(\bar{R}_+)_{p_1p_2}(1-2\delta_{m,0})
\eeq
\beq
\label{(5.24)}
(R_-\bar{P}LP)_{p_1p_2}=(R_-)_{p_1p_2}
\eeq
\beq
\label{(5.25)}
(PL\bar{P}\bar{R}_-)_{p_1p_2}=-(1-2\delta_{m,0})(\bar{R}_-)_{p_1p_2}
\eeq
It should be emphasized that the matrix
elements of the matrices on the left hand side
of \eqref{(5.23)} and \eqref{(5.24)} are
\[(\bar{R}_+)_{q_1q_2}(q_{1,2}\in X^-)\bl{1}
{\rm and} \bl{1}
(R_-)_{pq}(p\in X^+,q\in X^-),\]
and on the right hand side $(\bar{R}_+)_{p_1p_2}$,
$(R_-)_{p_1p_2}(p_{1,2}\in X^+)$. As a result,
the matrix elements $F_{ab}$ \eqref{(5.15)} are represented
as follows
\[F_{-+}=\left.\pd{\alpha}\exp\{\eta^*(I+U)\eta+\alpha\eta^*R_+\eta\}\right|_{\alpha=0},\]
\[F_{+-}=\left.(\delta_{m,0}+\pd{\alpha})\exp\{\eta^*(I+U)\eta+
\alpha\eta^*\tilde{\bar{R}}_+\eta\}\right|_{\alpha=0},\]
\beq
\label{(5.26)}
F_{++}=\left.\pd{\alpha}\exp\{\eta^*(I+U)\eta+\alpha\eta^*S_-\eta\}\right|_{\alpha=0},
\eeq
\[F_{--}=\left.\pd{\alpha}\exp\{\eta^*(I+U)\eta-\alpha\eta^*S_-\eta\}\right|_{\alpha=0}.\]
Here the matrix elements of the $M\times M$ matrix $U$ are
defined as $U_{p_1p_2}=(PL\bar{P}^2LP)_{p_1p_2}-\delta_{p_1p_2}$
and this representation leads to \eqref{(5.10)} if
one takes into account \eqref{(2.24)}; the matrices
$R_+$, $\bar{R}_+$ are given by \eqref{(5.17)}, \eqref{(5.18)} and \eqref{(5.12)};
the matrix $\tilde{\bar{R}_+}$ is defined by the relation
\beq
\label{(5.27)}
(\tilde{\bar{R}}_+)_{p_1p_2}=(\bar{R}_+)_{p_1p_2}(1-2\delta_{m,0}),
\eeq
\beq
\label{(5.28)}
(S_-)_{p_1p_2}=\frac 1M\sin\frac m2 (p_1-p_2).
\eeq
Now we put the expressions \eqref{(5.26)} into \eqref{(5.13)}, using also \eqref{(3.12)}.
Then it is not difficult to get for the contribution $G_F^{(-+),+}$
\[ Z_F^+G_F^{(-+),+}=N^{-1}e^{-\beta E_0^+}\mbox{det}(I-J_\beta T)\pd{\alpha}\int d\xi d\xi^*
d\eta d\eta^*\times\]
\beq
\label{(5.29)}
\times\exp\{-\eta^*(I+U+\alpha R_+)\eta+\xi^*D\xi-
\eta^*\xi-\xi^*\eta\},
\eeq
(besides \eqref{(3.8)}, we changed the variables as in \eqref{(5.14)}
and also we changed $\eta^*\rightarrow -\eta^*$). Finally we change the
variables as in \eqref{(4.15)}, $(\eta ,\eta^*)\rightarrow(\rho
,\rho^*)$; $(\xi, \xi^*) \rightarrow (\omega, \omega^*)$. Using
\eqref{(4.16)} and the equality \[\eta^*(I+U+\alpha R_+)\eta =\rho^*(I+U+\alpha
C)\rho -\frac{\alpha}{2}(\rho^* S_-\rho^*-\rho S_-\rho),\] we integrate
over $\omega, \omega^*$ and then over $\rho ,\rho^*$ in \eqref{(5.29)}.
As a result we get the following representation as a determinant of
a $2M\times 2M$ matrix:
\[ Z_F^+ G_F^{(-+),+}=N^{-1} e^{-\beta E_0^+}\mbox{det} (I-J_\beta T)\mbox{det} D\times\]
\[\times\frac{\partial}{\partial\alpha}\mbox{det}^{1/2}\left(\begin{array}{ccc}
\alpha S_- & \vdots & -B \\ \cdots & \cdots & \cdots \\ B^T & \vdots & -\alpha S_-
\end{array}\right),\]
where we denoted $B=I+D^{-1}+U+\alpha C$. Using the generalized
Gauss algorithm it is not difficult to check that
\[\mbox{det}\left(\begin{array}{ccc}
\alpha S_- & \vdots & -B \\ \cdots & \cdots & \cdots \\ B^T & \vdots & -\alpha S_-
\end{array}\right)=\mbox{det}^2 B+O(\alpha^2),\]
(it is valid also for $m=0$). This leads to the representation for
$G_F^{(-+),+}$. Analogously one can calculate the other contributions.
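
The last identity is also easy to confirm numerically. A minimal Python sketch
(with a random matrix $B$ and a random antisymmetric matrix $S$ standing for
$S_-$ of \eqref{(5.28)}) shows that the deviation of the block determinant
from $\mbox{det}^2 B$ indeed scales as $\alpha^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 5
B = rng.normal(size=(M, M))
A = rng.normal(size=(M, M))
S = A - A.T                        # antisymmetric, as S_- in (5.28)

def block_det(a):
    top = np.hstack([a * S, -B])
    bot = np.hstack([B.T, -a * S])
    return np.linalg.det(np.vstack([top, bot]))

det2B = np.linalg.det(B) ** 2
for a in (1e-2, 1e-3, 1e-4):
    print(a, (block_det(a) - det2B) / a**2)   # roughly constant ratio
\end{verbatim}
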
One should note that the expressions \eqref{(5.4)}-\eqref{(5.9)} for equal-time
temperature correlators of the anisotropic XY chain differ
from the corresponding expressions for the isotropic XX0
chain only by changing the weights. For example, the corresponding modification
for the "fermionic" weightis as before
\beq
\label{(5.30)}
\frac 12 \left(1-\tanh\frac{\beta\varepsilon (p)}{2}\right)\rightarrow
\frac 12 \left(1-e^{i\theta (p)}\tanh\frac{\beta E(p)}{2}\right).
\eeq
\section{The thermodynamic limit}
\setcounter{equation}{0}
Now we consider the correlators for the anisotropic XY chain
in the thermodynamic limit (the length of the chain $M\rightarrow\infty$,
the magnetic field $h$ is fixed). In this limit the partition functions
$Z_{F,B}^\pm$ are divergent and using \eqref{(4.3)}, \eqref{(4.4)} one
can estimate
\[Z_B/Z_F <C^M; \bl{1} C=\tanh \frac{\beta E_{\rm
max}}{2}<1,\]
where $E_{\rm max}$ is the maximal value of the quasiparticle energy
$E(p)$ \eqref{(1.28)}. So only the "fermionic" contributions survive in
the thermodynamic limit in \eqref{(4.8)} and \eqref{(5.2)}. The
determinants of the $M\times M$ matrices in the expressions
\eqref{(4.10)}, \eqref{(5.4)} for these contributions become in the
thermodynamic limit the Fredholm determinants of the corresponding
integral operators. It is explained explicitly,
for example, in \cite{cikt2}. So we get the following
answers.
The correlator $G(m)$ \eqref{(4.7)} is given by
\beq
\label{(6.1)}
G(m)=\lim\frac 1Z\mbox{Tr}\left(e^{\alpha{\bf Q}(m)}e^{-\beta{\bf H}}\right)
=\mbox{det}\left(\hat{I}+(e^\alpha -1)\hat{V}\right).
\eeq
On the right hand side there is the Fredholm determinant of a linear
integral operator acting on functions $f(p)$ on the interval $-\pi\le
p\le\pi$,
\beq
\label{(6.2)}
(\hat{V}f)(p)=\int\limits_{-\pi}^{\pi}V(p,q)f(q)dq,
\eeq
$\hat{I}$ means the identity operator on the interval and
the kernel $V(p,q)$ is (see \eqref{(4.10)}, \eqref{(2.17)}, \eqref{(4.12)})
\beq
\label{(6.3)}
V(p,q)=\frac {1}{2\pi}\frac{\sin\frac{m(p-q)}{2}}{\sin\frac{p-q}{2}}\,\omega_F(q),
\eeq
The weight $\omega_F(q)$ is equal to
\beq
\label{(6.4)}
\omega_F(q)=\frac 12\left(1-e^{i\theta (q)}\tanh\frac{\beta E(q)}{2}\right)
\eeq
(the angle $\theta(p)$ and the quasiparticle energy $E(p)$ are
defined in \eqref{(1.27)}, \eqref{(1.28)}).
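
In practice the Fredholm determinant \eqref{(6.1)} can be approximated by a
Nystr\"om-type discretization of the interval $[-\pi,\pi]$. The following
minimal Python sketch does this at the isotropic point $\gamma=0$, where the
weight \eqref{(6.4)} reduces to
$\frac 12(1-\tanh\frac{\beta\varepsilon(q)}{2})$ with
$\varepsilon(q)=h-\cos q$ (the values of $h$, $\beta$, $\alpha$ and the
integer $m$ are hypothetical):
\begin{verbatim}
import numpy as np

h, beta, alpha, m = 0.3, 2.0, 0.5, 4          # illustration values only
N = 400                                        # quadrature nodes
p = -np.pi + (2 * np.pi / N) * (np.arange(N) + 0.5)   # midpoint rule
wq = 2 * np.pi / N                             # quadrature weight
omega_F = 0.5 * (1 - np.tanh(beta * (h - np.cos(p)) / 2))  # gamma = 0

d = p[:, None] - p[None, :]
dd = d + np.eye(N)                             # dummy diagonal (0/0 guard)
ratio = np.sin(m * dd / 2) / np.sin(dd / 2)
np.fill_diagonal(ratio, m)                     # limit value at p = q
V = ratio / (2 * np.pi) * omega_F[None, :]     # kernel (6.3) on the grid
G = np.linalg.det(np.eye(N) + (np.exp(alpha) - 1) * V * wq)
print(G)                                       # approximates G(m) of (6.1)
\end{verbatim}
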
In the thermodynamic limit the correlators \eqref{(5.1)}
\beq
\label{(6.5)}
G^{(ab)}(m)=\lim\frac 1Z \mbox{Tr}(\sigma^a_{m+1}\sigma^b_1
e^{-\beta{\bf H}})
\eeq
can be also represented as Fredholm determinants of linear
integral operators on the interval $[-\pi,\pi ]$. From
\eqref{(5.4)}-\eqref{(5.6)} one gets for $m>0$:
\beq
\label{(6.6)}
G^{(-+)}(m)=G^{(+-)}(m)=\left.\pd{\alpha}\mbox{det} (\hat{I}-\hat{W}+\alpha\hat{C})
\right|_{\alpha=0}\bl{1}(m>0),
\eeq
\beq
\label{(6.7)}
G^{(++)}(m)=G^{(--)}(m)=\left.\pd{\alpha}\mbox{det} (\hat{I}-\hat{W}-i\alpha\hat{S})
\right|_{\alpha=0}\bl{1}(m>0).
\eeq
The kernels of the operators $\hat{W}$, $\hat{C}$ and $\hat{S}$ are
\beq
\label{(6.8)}
W(p,q)=-\frac 1\pi e^\frac{i(p-q)}{2}
\frac{\sin\frac{m(p-q)}{2}}{\sin\frac{p-q}{2}}\omega_F(q),
\eeq
\beq
\label{(6.9)}
C(p,q)=\frac{1}{2\pi}\cos\frac{m(p+q)}{2}\omega_F(q),
\eeq
\beq
\label{(6.10)}
S(p,q)=\frac{1}{2\pi}\sin\frac{m(p+q)}{2}\omega_F(q).
\eeq
These answers differ only by changing the weight \eqref{(5.30)} from the
answers for the isotropic XX0 chain.
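
Note that the derivatives in \eqref{(6.6)}, \eqref{(6.7)} need not be taken
numerically: by Jacobi's formula,
$\partial_\alpha \mbox{det}(\hat I-\hat W+\alpha\hat C)|_{\alpha=0}
=\mbox{det}(\hat I-\hat W)\,\mbox{tr}[(\hat I-\hat W)^{-1}\hat C]$. A minimal
Python sketch of \eqref{(6.6)}, reusing the grid \texttt{p}, the weight
\texttt{omega\_F}, the quadrature weight \texttt{wq} and the integer
\texttt{m} of the previous sketch:
\begin{verbatim}
import numpy as np

d = p[:, None] - p[None, :]                    # grid from the sketch above
dd = d + np.eye(N)                             # dummy diagonal (0/0 guard)
ratio = np.sin(m * dd / 2) / np.sin(dd / 2)
np.fill_diagonal(ratio, m)
W = -(1 / np.pi) * np.exp(1j * d / 2) * ratio * omega_F[None, :] * wq  # (6.8)
C = np.cos(m * (p[:, None] + p[None, :]) / 2) / (2 * np.pi) \
    * omega_F[None, :] * wq                    # (6.9)
IW = np.eye(N) - W
Gmp = np.linalg.det(IW) * np.trace(np.linalg.solve(IW, C))
print(Gmp)                                     # G^{(-+)}(m) of (6.6)
\end{verbatim}
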
One should note that the formal answers for the time-dependent
correlation functions were obtained recently in \cite{ika}.
In our next paper we hope to give more clear answers for
the time-dependent correlators.
We are grateful to the Russian Foundation of Fundamental Research
(grants 95-01-00476 and 96-01-00807) and INTAS (grant INTAS-RFFR 95-414).
One of us (V.S.K.) thanks also TOO "Cyclone" for the support.
\section*{Appendix A}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
Consider a set of canonical fermion operators
\[a_q, a_q^+\bl{1} ([a_p,a_q]_+=[a^+_p,a^+_q]_+=0, [a_p, a_q^+]_+=\delta_{p,q}).\]
We denote by $M$ the number of operators $a$ (or $a^+$) in the set.
It is supposed that the Fock vacuum exists and has the following properties
\beq
\label{(A.1)}
a_q|0\rangle=0;\bl{1}\langle 0| a^+_q=0,\bl{1}\forall q;\bl{1}\langle 0|0\rangle=1.
\eeq
We introduce the coherent states \cite{bie}
\[ |\xi\rangle\equiv |\xi_q\rangle=\exp\left\{\sum\limits_qa^+_q\xi_q\right\}|0\rangle;\]
\beq
\label{(A.2)}
\langle\xi^*|\equiv \langle\xi_q^*|=\langle 0|\exp\left\{\sum\limits_q\xi^*_q a_q\right\},
\eeq
where the summation is taken over the whole set. The parameters
$\xi_q,\xi_q^*$ (Grassmann algebra elements) anticommute among
themselves and with all the operators $a_q, a_q^+$. One should
emphasize that the star in $\xi^*$ means only that the corresponding
parameter is connected with a bra-vector; we don't consider
involutions on the Grassmann algebra; the parameters $\xi$ and $\xi^*$
are entirely independent. The coherent states \eqref{(A.2)} are
the eigenstates for the creation and annihilation
operators, respectively
\beq \label{(A.3)}
a_p|\xi\rangle=\xi_p|\xi\rangle;\bl{1}\langle\xi^*|a^+_p=\langle\xi^*|\xi^*_p.
\eeq
One can easily calculate the scalar product of two coherent states
using the commutation relations between $a$, $a^+$, $\xi$, $\xi^*$:
\beq
\label{(A.4)}
\langle\xi^*|\xi\rangle=\exp\left\{\sum\limits_q\xi_q^*\xi_q\right\}\equiv\exp\{\xi^*\xi\}.
\eeq
The trace of an operator ${\bf O}$ can be represented
as an integral over the anticommuting variables
of the matrix elements of the operator between the
coherent states (see \cite{bie}):
\beq
\label{(A.5)}
\mbox{Tr}{\bf O}=\int d\xi d\xi^*\exp\{\xi^*\xi\}\langle\xi^*|{\bf O}|\xi\rangle,
\eeq
and the expansion for the identity operator is given by
\beq
\label{(A.6)}
{\bf 1}=(-1)^M\int d\xi d\xi^*\exp\{-\xi^*\xi\}|\xi\rangle\langle\xi^*|,
\eeq
(we suppose that the number of sites $M$ is even, so that the coefficient
$(-1)^M$ in such formulae is usually omitted). We use the following
notation \[d\xi\equiv\prod\limits_q d\xi_q;\bl{1} d\xi^*\equiv\prod\limits_q d\xi^*_q.\]
If an operator ${\bf L}$ has the following form
\[{\bf L}=\sum\limits_{p,q}a^+_p L_{pq} a_q\equiv a^+ La,\]
where $L$ is an $M\times M$ matrix (with matrix elements $L_{pq}$)
which can be diagonalized by a unitary matrix $U$,
\[L=U^+DU;\bl{1}U^+U=UU^+=I;\bl{1} D=\mbox{diag}(D_q),\]
then the matrix elements of the operator $\exp{\bf L}$ are given by
\beq
\label{(A.7)}
\langle\xi^*|\exp\{{\bf L}\}|\xi\rangle=\exp\left\{\sum\limits_{p,q}\xi^*_p(\exp \{L\})_{pq}
\xi_q\right\}\equiv\exp\{\xi^*\exp L\xi\},
\eeq
and
\beq
\label{(A.8)}
\mbox{Tr}\exp\{{\bf L}\}=\mbox{Tr}\exp\{a^+La\}=\mbox{det} [I+\exp\{L\} ],
\eeq
(there is the trace of the operator on the left hand side
of the last formula and the determinant of the $M\times M$ matrix
on the right hand side).
The last equality follows from a well-known formula for the Gaussian
integral over the anticommuting variables
\beq
\label{(A.9)}
\int d\xi d\xi^*\exp\{\xi^* K\xi+\eta^*\xi+\xi^*\eta\}=
\exp\{-\eta^*K^{-1}\eta\}\mbox{det} K.
\eeq
We need also to use a formula valid for an antisymmetric matrix
$A$,
\beq
\label{(A.10)}
\int d\xi\exp\left\{\frac 12 \xi A\xi +\eta\xi\right\}=
\exp\left\{\frac 12 \eta A^{-1}\eta\right\}\sqrt{\mbox{det} A},\bl{1} A=-A^T.
\eeq
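
The trace formula \eqref{(A.8)} is easily verified numerically by an explicit
(Jordan--Wigner type) construction of the operators $a_q$, $a_q^+$ on the
$2^M$-dimensional Fock space. A minimal Python sketch (it requires SciPy;
$M=3$ and the random Hermitian matrix $L$ are chosen only for illustration):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

M = 3
rng = np.random.default_rng(1)
X = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
L = X + X.conj().T            # Hermitian, hence unitarily diagonalizable

s = np.array([[0, 1], [0, 0]], dtype=complex)   # single-mode annihilator
Z = np.diag([1.0, -1.0]).astype(complex)        # Jordan-Wigner string
I2 = np.eye(2, dtype=complex)

def a_op(j):
    # annihilation operator a_j on the 2^M-dimensional Fock space
    ops = [Z] * j + [s] + [I2] * (M - j - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

a = [a_op(j) for j in range(M)]
H = sum(L[p, q] * (a[p].conj().T @ a[q]) for p in range(M) for q in range(M))
lhs = np.trace(expm(H))
rhs = np.linalg.det(np.eye(M) + expm(L))
print(lhs, rhs)               # the two numbers agree up to rounding
\end{verbatim}
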
\section*{Appendix B}
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
Here we give the derivation of the formulae for the time-dependent
correlators for the isotropic XX0 chain with a finite number of sites
$M$ ($M$ is supposed to be even). We hope that readers will
appreciate the simplicity of the derivation of the results
using the integration over the Grassmann variables
(compared
with the papers \cite{cikt2,cit}). Below ${\bf H}$ denotes the Hamiltonian
of the XX0 model. We begin with the calculation of the partition
function $Z$. Following \cite{McCBA,ki} we represent it in the form
\beq
\label{(B.1)}
Z=\mbox{Tr}\exp\{-\beta{\bf H}\}=\frac 12
(Z_F^++Z_F^-+Z_B^+-Z_B^-),
\eeq
where $\beta\equiv1/T$ is the inverse
temperature and the contributions on the right hand side are defined by
the formulae
\[Z_F^\pm=\mbox{Tr}\exp\{-\beta{\bf H^\pm}\}=e^{\beta
Mh/2}\mbox{det}(I+J_\beta);\]
\beq \label{(B.2)} Z_B^\pm=\mbox{Tr}[(-1)^{\bf
N}\exp\{-\beta{\bf H^\pm}\}]=e^{\beta Mh/2}\mbox{det}(I-J_\beta), \eeq
where $J_\beta$ is
a diagonal matrix
\beq \label{(B.3)}
J_\beta=\mbox{diag}(e^{-\beta\varepsilon(p)}),
\eeq
(we emphasize that $p\in X^+$ for $Z^+_{F,B}$
and $p\in X^-$ for $Z^-_{F,B}$). To obtain these representations
one should use the following relation
\beq
\label{(B.4)}
(-1)^{\bf N}|\phi,\pm\rangle=|-\phi,\pm\rangle,
\eeq
\[\exp\{-\beta{\bf H^\pm}\}|\phi,\pm\rangle=e^{\beta Mh/2}|J_\beta\phi,\pm\rangle\equiv\]
\beq
\label{(B.5)}
\equiv e^{\beta Mh/2}\exp\left\{\sum\limits_{p\in X^\pm} a^+_pe^{-\beta\varepsilon(p)}
\phi_p\right\}|0\rangle,
\eeq
and rewrite, for example, $Z^+_{B}$ as a Gaussian integral
\beq
\label{(B.6)}
Z_B^\pm=e^{\beta Mh/2}\int d\phi d\phi^*\exp\{\phi^*[I-J_\beta]\phi\},
\eeq
(see \eqref{(A.4)}, \eqref{(A.5)}).
Consider now the simplest equal-time temperature correlator,
\beq
\label{(B.7)}
G(m)\equiv\frac 1Z\mbox{Tr}\left[e^{\alpha{\bf Q}(m)}e^{-\beta{\bf H}}\right],
\eeq
which can be represented in the following form \cite{cit}:
\beq
\label{(B.8)}
G(m)=\frac 1{2Z} (Z_F^+G_F^++Z_F^-G_F^-+Z_B^+G_B^+-Z_B^-G_B^-),
\eeq
where
\[Z_F^\pm G_F^\pm=\mbox{Tr}\left(e^{\alpha{\bf Q}(m)}e^{-\beta{\bf
H^\pm}}\right),\]
\beq \label{(B.9)} Z_B^\pm
G_B^\pm=\mbox{Tr}\left(e^{\alpha{\bf Q}(m)}e^{-\beta{\bf H^\pm}}(-1)^{\bf N}
\right).
\eeq
The contributions can be represented as determinants of
$M\times M$ matrices:
\[G_F^\pm=\mbox{det}\left(I+(e^\alpha-1)Q^{(0)}(m)\Theta_F\right),\]
\beq
G_B^\pm=\mbox{det}\left(I-(e^\alpha-1)Q^{(0)}(m)\Theta_B\right)
\label{(B.10)}
\eeq
where diagonal matrices of the Fermi and Bose weights
$\Theta_F$ and $\Theta_B$ are
\[\Theta_F=J_\beta [I+J_\beta ]^{-1}=\mbox{diag}\left[\Theta_F(p)\equiv
\frac{1}{1+e^{\beta\varepsilon(p)}}\right],\]
\beq
\label{(B.11)}
\Theta_B=J_\beta [I-J_\beta ]^{-1}=\mbox{diag}\left[\Theta_B(p)\equiv
\frac{1}{e^{\beta\varepsilon(p)}-1}\right],
\eeq
and the matrix $Q^{(0)}(m)$ with matrix elements $Q^{(0)}_{pq}(m)$ was defined
in \eqref{(2.17)} ($p,q\in X^+$ for $G^+_{F,B}$
and $p,q\in X^-$ for $G^-_{F,B}$). To obtain these representations one
should rewrite, for example, the contribution $Z_B^-G_B^-$ as
\[Z_B^-G_B^-=e^{\beta Mh/2}\int d\phi d\phi^*e^{\phi^*\phi}\langle\phi^*,-|
e^{\alpha Q(m)}|-J_\beta\phi,-\rangle,\]
use the expression \eqref{(2.18)} for the matrix element under the integral,
calculate the Gaussian integral, perform the similarity transformation
$Q(m)\rightarrow Q^{(0)}(m)$ (see \eqref{(2.16)}, \eqref{(2.17)}) and
extract the coefficient $\mbox{det} (I-J_\beta)$ from the determinant obtained.
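
For a small lattice the representation \eqref{(B.10)} is immediate to
evaluate. In the following minimal Python sketch for $G_F^+$, the set $X^+$
is taken as the half-integer momenta $p=-\pi+(2l+1)\pi/M$ (an assumption
consistent with the fact that the momenta $0$ and $\pi$ belong to $X^-$), and
$Q^{(0)}_{pq}(m)=\frac 1M\sin\frac{m(p-q)}{2}/\sin\frac{p-q}{2}$ is assumed,
consistent with the thermodynamic kernel \eqref{(6.3)}:
\begin{verbatim}
import numpy as np

M, m, alpha, beta, h = 20, 3, 0.4, 1.5, 0.3   # illustration values only
p = -np.pi + (2 * np.arange(M) + 1) * np.pi / M        # assumed set X^+
Theta_F = 1 / (1 + np.exp(beta * (h - np.cos(p))))     # Fermi weight (B.11)

d = p[:, None] - p[None, :] + np.eye(M)       # dummy diagonal (0/0 guard)
Q0 = np.sin(m * d / 2) / np.sin(d / 2) / M
np.fill_diagonal(Q0, m / M)                   # limit value at p = q
GF = np.linalg.det(np.eye(M) + (np.exp(alpha) - 1) * Q0 * Theta_F[None, :])
print(GF)                                     # contribution G_F^+ of (B.10)
\end{verbatim}
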
Now we turn to the time-dependent correlation function of
the local spins; as usual
\[\sigma^\alpha_m(t)=e^{it{\bf H}}\sigma^\alpha_m e^{-it{\bf H}}.\]
Because of the translation invariance, the correlators of the local spins
$\sigma_{m_2}^\pm (t_2)$, $\sigma_{m_1}^\pm (t_1)$ depend only on the
differences $m=m_2-m_1$, $t=t_2-t_1$:
\beq
\label{(B.12)}
G^{(ab)}(m,t)=\frac 1Z \mbox{Tr}(e^{it{\bf H}}\sigma^a_{m+1}e^{-it{\bf
H}}\sigma^b_1 e^{-\beta{\bf H}}),\bl{1} a,b=\pm.
\eeq
Due to the property
\eqref{(2.21)} the correlator can be represented as a sum of four
contributions:
\beq \label{(B.13)} G^{(ab)}(m,t)=\frac 1{2Z}
\left(Z_F^+G_F^{(ab),+}+Z_F^-G_F^{(ab),-}
+Z_B^+G_B^{(ab),+}-Z_B^-G_B^{(ab),-}\right),
\eeq
where
\[Z_F^\pm G_F^{(ab),\pm}=\mbox{Tr}\left(e^{it{\bf H^\pm}}
\sigma^a_{m+1}e^{-it{\bf H^\mp}}\sigma^b_1e^{-\beta{\bf H^\pm}}\right),\]
\beq
\label{(B.14)}
Z_B^\pm G_B^{(ab),\pm}=\mbox{Tr}\left(e^{it{\bf H^\pm}}\sigma^a_{m+1}
e^{-it{\bf H^\mp}}\sigma^b_1e^{-\beta{\bf H^\pm}}(-1)^{\bf N}
\right).
\eeq
We begin with the correlator $G^{(-+)}(m,t)$. One can represent,
e.g., the contribution
\[Z_F^+G^{(-+)}=e^{\beta Mh/2}\int d\phi d\phi^* d\psi d\psi^*
e^{\phi^*\phi-\psi^*\psi}\times\]
\[\times\langle\phi^*J(0,t),+|\sigma^-_{m+1}|J(0,-t)\psi,-\rangle
\langle\psi^*,-|\sigma_1^+|J_\beta\phi ,+\rangle,\]
where we introduced a diagonal $M\times M$ matrix $J(m,t)$:
\beq
\label{(B.15)}
J(m,t)=\mbox{diag}\left(e^{-ipm+i\varepsilon(p)t}\right).
\eeq
Now we use an expression \eqref{(2.22)} for the matrix elements and
perform the integration over $\psi,\psi^*$ and then over $\phi,\phi^*$:
\[Z_F^+G^{(-+)}=e^{\beta Mh/2}\frac{\partial}{\partial\alpha}\int
d\phi d\phi^* d\psi d\psi^*
e^{\phi^*\phi-\psi^*\psi}\times\]
\[\left. e^{\phi^*L(m,t)\psi+\psi^*L(0)J_\beta\phi +\alpha \phi^*RJ_\beta\phi}\right|_
{\alpha =0}=\]
\[=\left.e^{\beta Mh/2}\frac{\partial}{\partial\alpha}\mbox{det}\left[I+
L(m,t)L(0)J_\beta +\alpha RJ_\beta\right]\right|_{\alpha=0},\]
where we defined the matrix
\[L(m,t)=J(0,t)L(m)J(0,-t)\]
(in the matrix elements $L_{p,q}(m,t)$, $p\in X^+$ and $q\in X^-$),
and also the matrix $R$ of rank 1 (all the columns of this
matrix are equal):
\[R_{p_1,p_2}=\frac 1Me^{-ip_1 m+i\varepsilon (p_1)t},\bl{1}p_1,p_2\in
X^+.\]
After the similarity transformation with the diagonal matrix
$J(-\frac m2, -\frac t2)$ one gets \beq \label{(B.16)}
Z_F^+G_F^{(-+),+}=e^{\beta Mh/2}\left.\pd{\alpha}\mbox{det}\left[I+\tilde{L}J_\beta+
\alpha\tilde{R}J_\beta\right]\right|_{\alpha=0},
\eeq
where matrix elements of matrices $\tilde{L}$ and $\tilde{R}$ are
\beq
\label{(B.17)}
\tilde{L}_{p_1p_2}=e^{imp_1/2-it\varepsilon(p_1)/2}\sum\limits_qL_{p_1q}(m,t)
L_{qp_2}(0)e^{-imp_2/2+it\varepsilon(p_2)/2},
\eeq
\beq
\label{(B.18)}
\tilde{R}_{p_1p_2}=\frac 1M e^{-imp_1/2+it\varepsilon(p_1)/2}
e^{-imp_2/2+it\varepsilon(p_2)/2},
\eeq
\[({\rm here}\bl{1} p_1,p_2\in X^+, q\in X^-).\]
To represent these matrices in a more convenient way we introduce
the following functions (we follow the paper \cite{cikt2}, changing
a bit some notations; recall that $\varepsilon (q)=h-\cos q$)
\beq
\label{(B.19)}
g(m,t)=\frac 1M\sum\limits_qe^{imq+it\cos q};
\eeq
\beq
e(m,t,p)=\frac 1M\sum\limits_q\frac{e^{imq+it\cos q}}{\tan\frac{q-p}{2}};
\label{(B.20)}
\eeq
\beq
d(m,t,p)=\frac 1M\sum\limits_q\frac{e^{imq+it\cos q}-e^{imp+it\cos p}}
{\sin^2\frac{q-p}{2}};
\label{(B.21)}
\eeq
\beq
\label{(B.22)}
e_-(m,t,p)=e^{-i\frac{m}{2}p-i\frac{t}{2}\cos p};
\eeq
\beq
\label{(B.23)}
e_+(m,t,p)=e_-(m,t,p)e(m,t,p).
\eeq
All these functions are of order $O(1)$ as $M\rightarrow\infty$.
It is also convenient to introduce four one-dimensional projectors
$\Pi^{++}$, $\Pi^{+-}$, $\Pi^{-+}$, $\Pi^{--}$ which are
$M\times M$ matrices with matrix elements
\beq
\label{(B.24)}
\Pi^{ab}_{p_1p_2}=e_a(m,t,p_1)e_b(m,t,p_2)\bl{1}((a,b)=+,-).
\eeq
Using identities
\beq
\label{(B.25)}
\sum\limits_q\frac{1}{\sin^2\frac{q-p}{2}}=M^2\bl{1}(q\in X^-,p\in X^+);
\eeq
\beq
\label{(B.26)}
\cot(x-u)\cot(x-v)=\cot(u-v)[\cot(x-u)-\cot(x-v)]-1,
\eeq
one can represent the matrix $\tilde{L}$ in the following form
\beq
\label{(B.27)}
\tilde{L}=I+\frac 1M\left[S+i\Pi^{+-}-i\Pi^{-+}\right].
\eeq
The diagonal and nondiagonal matrix elements of the matrix $S$
are given by
\beq
\label{(B.28)}
S_{pp}=d(p)e^{-imp-it\cos p},
\eeq
\beq
\label{(B.29)}
S_{p_1p_2}=\frac{e_+(p_1)e_-(p_2)-e_-(p_1)e_+(p_2)}{\tan\frac{p_1-p_2}{2}}.
\eeq
The matrix $\tilde{R}$ is a projector,
\beq
\label{(B.30)}
\tilde{R}=\frac{e^{ith}}{M}\Pi^{--}
\eeq
(For brevity, the arguments $m$ and $t$ are omitted, being the same
for all the functions $d,e,e_{\pm}$).
Other contributions can be calculated analogously. We give only
the results:
\beq
\label{(B.31)}
G^{(-+),\pm}_F=e^{ith}\left.\pd{\alpha}\mbox{det}\left[I+\frac 1M (S+i\Pi^{+-}-
i\Pi^{-+}+\alpha\Pi^{--})\Theta_F\right]\right|_{\alpha=0};
\eeq
\[G^{(-+),\pm}_B=e^{ith}\left.\pd{\alpha}\mbox{det}\left[I+\frac 1M (S+i\Pi^{+-}-
i\Pi^{-+}+\alpha\Pi^{--})\Theta_B\right]\right|_{\alpha=0}.\]
Though formally both contributions $G_F^{(-+),\pm}$ (as well as $G_B^{(-+),\pm}$)
are written in the same form, they are, of course, different.
It is necessary to take into account that
\[p,p_1 ,p_2\in X^+,\bl{1} q\in X^- \bl{1} {\rm for}\bl{1}
G_{F,B}^{(-+),+};\]
\[p,p_1 ,p_2\in X^-,\bl{1} q\in X^+ \bl{1} {\rm for}\bl{1}
G_{F,B}^{(-+),-};\]
in all the formulae \eqref{(B.19)}-\eqref{(B.24)}, \eqref{(B.28)} and \eqref{(B.29)} defining
the functions $g,e,d,e_\pm$ and the matrix elements of the matrices
$S$ and $\Pi^{ab}$.
Analogously one can calculate also the correlator $G^{(+-)}$. The
corresponding contributions are
\[G^{(+-),\pm}_F=e^{ith}\left[g(m,t)+\pd{\alpha}\right]\mbox{det}\left[I+
\frac 1M (S+i\Pi^{+-}-i\Pi^{-+}-\vphantom{
-\alpha\{\Pi^{++}+ig\Pi^{+-}-ig\Pi^{-+}+g^2\Pi^{--}\})
\Theta_F}\right.\]
\beq
\label{(B.32)}
\left.\vphantom{\pd{\alpha}}\left.\vphantom{I+
\frac 1M (S+i\Pi^{+-}-i\Pi^{-+}-}
-\alpha\{\Pi^{++}+ig\Pi^{+-}-ig\Pi^{-+}+g^2\Pi^{--}\})
\Theta_F\right]\right|_{\alpha=0};
\eeq
\[G^{(+-),\pm}_B=e^{ith}\left[g(m,t)+\pd{\alpha}\right]\mbox{det}\left[I+
\frac 1M (S+i\Pi^{+-}-i\Pi^{-+}-\vphantom{
-\alpha\{\Pi^{++}+ig\Pi^{+-}-ig\Pi^{-+}+g^2\Pi^{--}\})
\Theta_F}\right.\]
\[\left.\vphantom{\pd{\alpha}}\left.\vphantom{I+
\frac 1M (S+i\Pi^{+-}-i\Pi^{-+}-}
-\alpha\{\Pi^{++}+ig\Pi^{+-}-ig\Pi^{-+}+g^2\Pi^{--}\})
\Theta_B\right]\right|_{\alpha=0};\]
(one should take into account
once again the difference between the contributions
$G_{F,B}^{(+-),+}$ and $G_{F,B}^{(+-),-}$ explained above).
One should underline that the results obtained here coincide
with the representations obtained using another method
(for which rather complicated calculations are required)
in \cite{cikt2,cit}. Precisely, our expressions can be obtained
by choosing the arbitrary constants $c_1=i, c_2=-i$ in the formulae
\eqref{(A.1)}-\eqref{(A.10)} of the Appendix A of the paper \cite{iiks5}.
\begin{abstract}
Considering collaborative innovation in patent development, we provide micro-level evidence of knowledge spillovers.
Knowledge embodied in a patent is proxied by word pairs appearing in its abstract, while novelty is measured by the frequency with which these word pairs have appeared in past patents.
Inventors are assumed to possess the knowledge associated with patents in which they have previously participated.
We find that collaboration by inventors with more mutually differentiated knowledge sets is likely to result in patents with higher novelty.
\end{abstract}
\maketitle
\setcounter{page}{1}
\thispagestyle{firststyle}
\ifthenelse{\boolean{shortarticle}}{\ifthenelse{\boolean{singlecolumn}}{\abscontentformatted}{\abscontent}}{}
\dropcap{H}umans are a collaborative species \cite{tomasello2007shared, tomasello2010ape, tomasello2003makes} and bring this collaborative nature with them to the workplace and to the laboratory. Collaboration within teams has been increasingly important within academic research \cite{wuchty2007increasing} and has long been an essential part of other creative endeavors such as the production of Broadway musicals \cite{uzzi2005collaboration}.
A necessary condition for successful collaboration is successful coordination on roles and tasks \cite{AngusNewton2015PLOS, newton2017shared}, which can be complicated when the characteristics of one's collaborative partners are imperfectly understood \cite{rusch2019collaboration}. Hence, for collaboration to both occur and be creative, it requires both shared knowledge and differentiated knowledge. There is, for example, evidence that the successful production of Broadway musicals involves teams that comprise both people who have previously collaborated and people who have not previously collaborated \cite{uzzi2005collaboration, bel2015team}.
Using Japanese patent data and adopting a measure of novelty based on word combinations that appeared in the patent \cite{Watzinger-et-al-CEPR2021}, we find a strong positive relationship between collaborators' mutual knowledge differentiation and the novelty of their output.
\section*{Results}
${\bf W}_p$ represents the set of distinct word pairs in patent $p$'s abstract. (Hereafter, a bold capital letter expresses a set, and the corresponding italic letter its cardinality.)
The \emph{novelty} of a word pair at a given point in time is measured by the likelihood of its appearance in patents filed in the past \cite{Watzinger-et-al-CEPR2021}. Specifically, the novelty $n_{wt}$ of word pair $w$ at time $t$ is the ratio of (i) the sum of $W_p = |{\bf W}_p|$ over all patents $p$ filed at dates up to and including $t$, to (ii) the number of these patents that include word pair $w$.
We measure patent novelty by the average novelty of word pairs in its abstract, $\frac{1}{W_p}\sum_{w\in {\bf W}_p} n_{wt_p}$, where $t_p$ is the patent's filing time.
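
For concreteness, the following minimal Python sketch computes these quantities for a hypothetical three-patent corpus (the tokens are made up, and the filing order is taken to coincide with the patent index):
\begin{verbatim}
from itertools import combinations

abstracts = {                     # patent id -> standardized tokens
    1: ["battery", "charge", "sensor"],
    2: ["battery", "sensor", "wireless"],
    3: ["charge", "wireless", "protocol"],
}
W = {p: set(combinations(sorted(set(t)), 2)) for p, t in abstracts.items()}

def pair_novelty(w, t):
    # total word-pair count over patents filed up to t, divided by
    # the number of those patents that contain the pair w
    past = [p for p in W if p <= t]
    return sum(len(W[p]) for p in past) / sum(1 for p in past if w in W[p])

def patent_novelty(p):
    return sum(pair_novelty(w, p) for w in W[p]) / len(W[p])

print(patent_novelty(3))
\end{verbatim}
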
We consider collaborative aspects of patent development by focusing on the productivity per inventor pair, following \cite{Berliant-Fujita-IER2008}.
${\bf H}_p$ is the set of all inventors who participated in patent $p$, while ${\bf M}_p\equiv \{(i,j): i,j,\in {\bf H}_p, i\neq j\}$ is the set of pairs of such inventors.
The \emph{average pairwise-contribution to the patent's novelty} is given by
\begin{equation*}
n_{p}=\frac{1}{M_p W_p}\sum_{w\in {\bf W}_p} n_{wt_p}\,.
\end{equation*}
Denoting by ${\bf G}_{it}$ the set of patents inventor $i$ participated in at time $t$, define $i$'s \emph{knowledge} at $t$ by ${\bf K}_{it} = \cup_{\tau < t}\cup_{p\in {\bf G}_{i\tau}} {\bf W}_p$ and its novelty by $k_{it} = \sum_{w\in \mathbf{K}_{it}} n_{wt}$.
Inventor pair $\{i,j\}$ has total knowledge $\mathbf K_{ijt} = \mathbf{K}_{it} \cup \mathbf K_{jt}$, with novelty $k_{ijt} = \sum_{w\in \mathbf K_{ijt}} n_{wt}$, and inventor $i$'s
\emph{differentiated knowledge} relative to $j$ is
$\mathbf{K}^D_{ijt} = \mathbf K_{it}\backslash \mathbf K_{jt}$, with novelty $k^D_{ijt} = \sum_{w\in \mathbf K^D_{ijt}} n_{wt}$.
Knowledge differentiation between $\{i,j\}$ is evaluated by the geometric mean of their respective differentiated-knowledge shares in the union of their knowledge,
\begin{equation*}
s_{ijt} = \sqrt{k^D_{ijt} k^D_{jit}}/k_{ijt}\in [0,0.5]\,.
\end{equation*}
Their average in patent $p$,
\begin{equation*}
s_p = \frac{1}{M_p}\sum_{(i,j)\in\mathbf{M}_p} s_{ijt_p}
\end{equation*}
measures knowledge differentiation in $p$.
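
A minimal Python sketch of $s_{ijt}$ and $s_p$ for a hypothetical three-inventor team; unit novelty $n_{wt}=1$ is assumed for every word pair, so the novelties reduce to set cardinalities, and since $s_{ijt}$ is symmetric the average over unordered pairs equals the average over $\mathbf{M}_p$:
\begin{verbatim}
import math
from itertools import combinations

K = {                                  # inventor -> accumulated word pairs
    "i": {("a", "b"), ("a", "c"), ("b", "c")},
    "j": {("a", "b"), ("c", "d")},
    "k": {("c", "d"), ("d", "e")},
}

def s_pair(Ki, Kj):
    # geometric mean of the differentiated-knowledge shares
    return math.sqrt(len(Ki - Kj) * len(Kj - Ki)) / len(Ki | Kj)

pairs = list(combinations(K, 2))
s_p = sum(s_pair(K[i], K[j]) for i, j in pairs) / len(pairs)
print(s_p)
\end{verbatim}
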
We focus on patents with $s_p>0$, since $s_p= 0$ implies no knowledge exchange, as the inventors can then be indexed so that ${\bf K}_1\subseteq {\bf K}_2 \subseteq\cdots \subseteq {\bf K}_{H_p}$.
We estimate the effect of $s_p$ on $n_p$ by the model:
\begin{align}
n_{p}= \beta_0 &+ \beta_1 s_p + \cdots + \beta_m s_p^m \nonumber\\
&+ \gamma \bar{K}_p+ \delta M_p + f_p + \varphi_p + \tau_p + \varepsilon_p\,, \label{eq:regression}
\end{align}
controlling for average knowledge size
$\bar{K}_p \equiv \frac{1}{H_p}\sum_{i\in {\bf H}_p} K_{it_p}$, inventor-pair count $M_p$ (reflecting the costs/benefits of coordination and task specialization), and fixed effects, $f_p$, $\varphi_p$, and $\tau_p$, for firms, classes of International Patent Classification (IPC), and years, respectively.
$\varepsilon_p$ is a stochastic error.
The estimated conditional expectation and quantiles of $n_p$ indicate a positive association between $s_p$ and $n_p$, except for a range of small $s_p$, while the observed $s_p$ are spread over the entire feasible range, $(0,0.5]$ (Fig.~\ref{fig:main}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{fig_main_w_paircount_poly_4.png}
\caption{(A,B) Estimated expectation and quantiles of $n_p$ conditional on $s_p$, respectively.
Other variables are evaluated at their mean. $m=4$ is chosen by the Bayesian Information Criterion for the regression to estimate $E(n_p | s_p)$.
Shaded areas indicate 95\% confidence intervals.
In the regressions, standard errors are clustered by 124 IPC classes.
The fixed effects $f_p$ are applied to firms that file 500 or more patents.
(C) Frequency distribution of $s_p\,(>0)$.}
\label{fig:main}
\end{figure}
To see the robustness of the result, we consider citation count of a patent as an alternative measure of output.
Let $\bar{c}_p$ be the citation count of patent $p$ within five years of application, excluding self-citations, where the self-citations include those by any patent $q$ whose set of inventors overlaps with that of $p$, that is, $\mathbf{H}_p \cap \mathbf{H}_q \neq \varnothing$.
We then use the citation count per inventor pair, $c_p = \bar{c}_p / M_p$, as an alternative output measure to $n_p$.
The basic relationships observed between $n_p$ and $s_p$ carry over to those between $c_p$ and $s_p$ as indicated by Fig.~\ref{fig:citation}A and B that correspond to Fig.~\ref{fig:main}A and B, respectively.
\begin{figure}[h!]
\centering \includegraphics[width=0.9\columnwidth]{fig_citation_w_paircount_poly_4.png}
\caption{(A,B) Estimated expectation and quantiles of $c_p$ conditional on $s_p$ corresponding to Fig.\,\ref{fig:main}(A,B), respectively.
$m=4$ is chosen by the Bayesian Information Criterion for the regression to estimate $E(c_p | s_p)$.
Only the results for the 99, 95, and 90 percentiles are computed for Panel B since more than half the patents have zero citations.
The shaded area indicates 95\% confidence interval.}
\label{fig:citation}
\end{figure}
\section*{Theoretical model and simulation}
There is a set of agents, $\mathbf{I}\equiv \{1,2,\ldots,I\}$, $I$ even, who collaborate in pairs to invent new knowledge. When agents $\{i,j\}$ collaborate on a given patent, the novelty $n$ of the patent is stochastic and follows a known probability distribution $P(n|s)$ conditional on the knowledge differentiation $s \in (0,0.5]$ between the collaborators.
The value $v=v(n)$ of an invention is an increasing function of the novelty of the invention.
Assume that $v(\cdot)\ll \infty$ so that $V(s)\equiv E_{P(\cdot|s)}[v(n)]\ll \infty$.
The cost of collaboration by a pair $\{i,j\}$ is a continuous, increasing function $c_{ij}(s)$ of
$s$ between the collaborators. Assume $c_{ij}(0) >0$. Collaboration carries a cost which increases as the knowledge sets of the collaborators become more differentiated, as there is less of a basis for the common understanding that assists effective communication. Various factors, such as geographic proximity and personality traits, mean that this cost will be heterogeneous across pairs.
As only profitable collaborations
are pursued, the net value of a collaboration between
$\{i,j\}$ is given by $A_{ij} = \max\{0, V(s)-c_{ij}(s)\}$.
For simplicity, assume that
$A_{ij}$ differs across all pairs $\{i,j\}$ and that
when a pair collaborates, $A_{ij}$ is split equally between members of a pair (see SI for an alternative value split by bargaining).
This defines a one-sided non-transferable utility matching problem \cite{GS1}.
Under these assumptions, a unique stable Pareto-efficient matching exists.
To find this matching, first match the pair $\{i,j\}$ with the largest $A_{ij}$ and remove them from $\mathbf{I}$.
Then, repeat this process until all agents match into collaborating pairs.
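
A minimal Python sketch of this greedy construction (the values $A_{ij}$ are random placeholders, distinct with probability one):
\begin{verbatim}
import random
from itertools import combinations

random.seed(0)
agents = list(range(6))
A = {frozenset(p): random.random() for p in combinations(agents, 2)}

matching, pool = [], set(agents)
while len(pool) > 1:
    best = max((p for p in A if p <= pool), key=A.get)  # largest A_ij left
    matching.append(tuple(best))
    pool -= best
print(matching)
\end{verbatim}
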
To simulate a specific instantiation of this model, we consider technological categories with different levels of technological maturity that give rise to different levels of knowledge differentiation. Specifically, assume technological categories ${\bf L}\equiv \{1,\ldots, L\}$ with each agent working in a single category. That is, $\mathbf{I}= \mathbf{I}^1 \cup \ldots \cup \mathbf{I}^L$, where $\mathbf{I}^l$ represents the set of agents working in category $l\in\mathbf{L}$; $\mathbf{I}^l \cap \mathbf{I}^m =\varnothing$ for $l\neq m$. Assume that if agents $i$ and $j$ are in different categories, then $c_{ij}$ is large enough that such agents never collaborate.
Let ${\bf K}^l \equiv \cup_{i\in {\bf I}^l} {\bf K}_i$
denote the knowledge set specific to category $l\in\mathbf{L}$.
Assume each agent $i$'s knowledge set has equal size $K_i = \bar{K} (< K^l\;\:\forall l\in \mathbf{L})$ and that all word pairs are equally novel so that $n_{wt}$ is the same for all $w$. Note that the maximum feasible value of $s_{ij}$ is increasing in $K^l$.
Categories with smaller $K^l$ are more technologically \emph{mature} since agents have more knowledge in common, hence less room for knowledge recombination.
Assume that all categories share a value function, $v(n(s_{ij})) = \tilde{v}(s_{ij})e^{\varepsilon_{ij}}$, where $\tilde{v}(s)$ is given by the quartic function of $s$ from the estimated conditional expectation of the observed novelty of patents (Fig.~\ref{fig:main}A), with an appropriately adjusted intercept $v_0>0$.
For agents within the same category, the cost function is given by $c_{ij} (s_{ij}) \equiv c(s_{ij})e^{\epsilon_{ij}}$ with $c(s)\equiv c_0 s$ and $c_0 > 0$.
$\varepsilon_{ij},\epsilon_{ij}\sim \mathcal{N}(0,1)$ are idiosyncratic noises.
Fig.~\ref{fig:simulation} demonstrates that this model qualitatively replicates the observed variation in $s$ and in $n$ from inter-category variation in technological maturity and intra-category variation due to mismatch and idiosyncratic costs in collaboration.
The mismatch results from failing to achieve the best match due to the finiteness of the set of possible collaborators (see SI).
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{fig_simulation_category-count_100000_noise_lognormal_std_1.png}
\caption{Simulated pairwise collaboration.
$L=100\text{,}000$, $I^l=100\;\:\forall l\in \mathbf{L}$, $\bar{K}=100$. $K^l$ $(l\in {\bf L})$ is given by an integer rounding the value drawn from Pareto distribution with scale and shape parameters being $\bar{K}+1$ and 2, respectively.
$v_0=c_0=3.5\times 10^7$.
(A) The distribution of $s_{ij}$ for matched inventor pairs.
(B) Estimated quantiles of output values. Shaded areas indicate 95\% confidence intervals.}
\label{fig:simulation}
\end{figure}
\section*{Discussion}
The current paper has found a direct, micro-level relationship linking knowledge exchange between collaborators to the novelty of patent applications.
This provides support for a broader class of models of knowledge spillovers in invention that have, thus far, lacked micro-level evidence \citep[e.g.,][]{Weitzman-QJE1998,Berliant-Fujita-IER2008,Zacchia-REStud2020}.
Importantly, our estimation shows a generally positive relationship between patent novelty and knowledge differentiation, with the highest novelty attained with the highest differentiation (Fig.\,\ref{fig:main}A). Our approach should also provide guidance for researchers studying different contexts, where this relationship might differ.
\matmethods{%
\subsection*{Patent data}
Patent data are taken from the \emph{published patent applications} of Japan \cite{Naito-Data}.
Identical inventors are traced by matching their names and the establishments they belong to.
The data includes all patents filed between 1994 and 2017. We focus on those in 2009 and later, 970,197 of which involve multiple inventors.
The 1994-2008 data are used to compute the novelty of the patents filed after 2009.
\subsection*{Word standardization}
We use the NLTK python library to standardize words as follows. (i) Nouns and verbs are lemmatized.
(ii) Whether a word is a verb or a noun is judged from the context. (iii) Numbers, non-alphabetical characters, and single-character words are removed. As an exception, hyphen-connected words (e.g., \emph{self-organization}) are kept.
Noun/verb parts of these words are lemmatized.
(iv) Stop-words (e.g., \emph{are} and \emph{also}) are removed.
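
A minimal Python sketch of this pipeline is given below; it requires the NLTK corpora \emph{punkt}, \emph{averaged\_perceptron\_tagger}, \emph{wordnet} and \emph{stopwords}, and, as a simplification, lemmatizes hyphen-connected words as whole tokens:
\begin{verbatim}
import re
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

lem = WordNetLemmatizer()
stop = set(stopwords.words("english")) | {"also"}  # extended list, cf. (iv)

def standardize(text):
    out = []
    for w, tag in nltk.pos_tag(nltk.word_tokenize(text.lower())):
        # (iii): keep alphabetic and hyphen-connected words of length >= 2
        if len(w) < 2 or not re.fullmatch(r"[a-z]+(-[a-z]+)*", w):
            continue
        pos = "n" if tag.startswith("NN") else \
              "v" if tag.startswith("VB") else None
        w = lem.lemmatize(w, pos) if pos else w    # (i)-(ii)
        if w not in stop:                          # (iv)
            out.append(w)
    return out

print(standardize("The self-organization of sensors is also studied"))
\end{verbatim}
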
\subsection*{Data availability}
All data and codes are provided in SI.
}
\showmatmethods{}
\acknow{This research was conducted as part of the project, ``Agglomeration-based framework for empirical and policy analyses of regional economies,'' undertaken at the Research Institute of Economy, Trade and Industry.
This research has also been supported by the Kajima Foundation, the Murata Science Foundation, the International Joint Research Center of Advanced Economic Theory of the Institute of Economic Research in Japan, and the Grants-in-Aid for Research Nos.~17H00987, 19K15108, and 21H00695 of the MEXT, Japan.
}
\label{page:lastpage}
\showacknow{}
Distributed estimation algorithms are usually built on a given sensor network for a complex system, aiming at estimating an unknown global system parameter vector cooperatively by the distributed sensors. Each sensor in the network is taken as a node which can only observe partial data of the whole system, perform processing individually, and communicate information only with its neighbors, where the neighbors are defined by the network topology. In recent years, distributed estimation over sensor networks has received increasing research attention, and has been widely studied and used in many areas, e.g., collaborative spectral sensing in cognitive radio systems, target localization in biological networks, environmental monitoring, military surveillance, and so on (see e.g. \cite{Taj2011, Sayed2013}). Unlike the traditional centralized method, no node in the network needs to transfer its information to a fusion center for processing in the distributed case, which is more robust and scalable since the fusion center in the centralized method is sensitive and vulnerable to outside attacks. Once the fusion center is under attack, the entire network could collapse. In the distributed method, each node in the network can only exchange data with its neighbors, which may make the communication over the network possible, enhance the safety and privacy of the system, improve the estimation performance, and increase the robustness and scalability of the system (see e.g. \cite{Sayed2013, Sayed2014b}).
It goes without saying that different cooperation strategies will lead to different distributed estimation algorithms. For example, the proposed incremental \cite{Sayed2014b, Khalili2017, Sayed2006, Chaves2013}, consensus \cite{Xie2018Auto, Xie2018SIAM, Mateos2012, Mateos2009, Gra2019, Carli2008, Bat2014, Bat2015, Liu2018, Das2017}, and diffusion \cite{Xie2018TAC, Piggott2016, Nosrati2015, Tu2012, Vah2018, Har2019, Xiao2006, Cattivelli2008, Bertrand2011, Ara2014, Vah2017, Ras2019, Yu2019} strategies, may be combined with different estimation algorithms, e.g., least mean squares (LMS), LS and Kalman filters (KF) \cite{Widerow1985, Solo1995, Guo1994, Guo1995}, to give rise to different distributed estimation algorithms. Stability and performance analyses have also been established for different distributed estimation algorithms, for example, incremental LMS \cite{Sayed2014b, Khalili2017}, consensus LMS \cite{Xie2018Auto, Xie2018SIAM}, diffusion LMS \cite{Xie2018TAC, Piggott2016, Nosrati2015, Tu2012, Vah2018, Har2019}, incremental LS \cite{Sayed2006, Chaves2013}, consensus LS \cite{Mateos2012, Mateos2009, Gra2019}, diffusion LS \cite{Xiao2006, Cattivelli2008, Bertrand2011, Ara2014, Vah2017, Ras2019, Yu2019}, and distributed KF \cite{Carli2008, Bat2014, Bat2015, Liu2018, Das2017}. In our recent work (see e.g. \cite{Xie2018Auto, Xie2018SIAM, Xie2018TAC}), we have given the stability and performance results for the consensus and diffusion LMS filters, without imposing the usual independence and stationarity assumptions for the system signals.
Note that the LS is a most basic, widely used and comprehensively studied estimation algorithm in many fields of science and engineering. Moreover, when the unknown parameter is time-invariant, the LS algorithm may generate more accurate estimates in the transient phase and have faster convergence speed compared with LMS algorithm. So the LS appears to be more suitable for applications that require fast speed and accurate estimates for unknown constant parameters. This is one of the main motivations for us to consider the LS-based distributed estimation algorithm in this paper. Another reason for us to study this problem is that the existing convergence theory in the literature is far from satisfactory since it can hardly be applied to non-independent and non-stationary signals coming from practical complex systems where feedback loops inevitably exist.
In fact, almost all the existing studies on the distributed LS (see e.g., \cite{Sayed2006, Chaves2013, Mateos2012, Mateos2009, Gra2019, Xiao2006, Cattivelli2008, Bertrand2011, Ara2014, Vah2017, Ras2019, Yu2019}) require some independent, stationary, or Gaussian assumptions for the system signals. For examples, an incremental LS estimation strategy was proposed in \cite{Sayed2006}, and the mean-square performance was studied for independent regressors. Moreover, \cite{Mateos2012} presented a distributed LS algorithm, and gave stability and performance analyses for independent noises and regressors. \cite{Xiao2006} proposed a diffusion scheme for LS estimation problem, and analyzed its mean-square convergence under independence conditions on both the system signals and Gaussian noises. Furthermore, \cite{Cattivelli2008} presented a diffusion LS algorithm, and proved that the algorithm is asymptotically unbiased and stable for independent regressors and Gaussian noises. In \cite{Bertrand2011}, a diffusion bias-compensated LS algorithm was developed, and the closed-form expressions for the residual bias and the mean-square deviation of the estimates were provided under independence and stationarity assumptions. In addition, partial diffusion LS algorithms were proposed in \cite{Ara2014, Vah2017}, and the performance results were established for ergodic signals \cite{Ara2014} and independent signals \cite{Vah2017}. Moreover, \cite{Ras2019} proposed a reduced communication diffusion LS algorithm for distributed estimation over multi-agent, and \cite{Yu2019} developed robust diffusion LS algorithms to mitigate the performance degradation in the presence of impulsive noise. They both established the performance results under independent signal assumptions. Some other related papers, e.g.,\cite{Chaves2013, Gra2019, Mateos2009}, verified the efficiency of the LS-type algorithms via numerical simulations. All of these indicate that to substantially relax the widely imposed independence and stationarity conditions on the system signals in the analyses of distributed LS, will inevitably bring challenging difficulties in establishing a convergence theory.
Fortunately, in the traditional single sensor case, there is a vast literature on the convergence theory of the classical LS, which is indeed applicable to stochastic systems with feedback control. In fact, motivated by the need to establish a rigorous theory for the well-known LS-based self-tuning regulators proposed by $\mathring{\text{A}}$str\" om and Wittenmark \cite{Astrom1973} in stochastic adaptive control, the convergence study of LS with possible stochastic feedback signals had received a great deal of attention in the literature, see e.g., \cite{Ljung1976, Moore1978, Chen1982, Lai1982, Lai1986, Chen1986, Chen1991, Guo1991, Guo1995}. At the same time, much effort had also been devoted to stochastic adaptive control, see e.g, \cite{Ljung1977, Goodwin1981, Lai1986, Kumar1990, Chen1991}. Among the many significant contributions in this direction, here we only mention that Lai and Wei \cite{Lai1982} established a celebrated convergence result under a weakest possible decaying excitation condition on the system signals, and Guo and Chen \cite{Guo1991} and Guo \cite{Guo1995} finally resolved the longstanding problem concerning the global stability and convergence of the LS-based self-tuning regulators. We remark that the analysis methods including stochastic Lyapunov functions and martingale convergence theorems, which are so useful for the analysis of the classical LS, will also be instrumental for us in investigating the distributed LS algorithm in the current paper.
In this paper, we will provide a theoretical analysis for a distributed LS algorithm of diffusion type \cite{Bat2014, Bat2015, Liu2018}, where the diffusion strategy is designed via the so called covariance intersection fusion rule (see, e.g., \cite{Julier1997, Chen2002}). In such a diffusion strategy, each node is only allowed to communicate with its neighbors, and both the estimates of the unknown parameter and the inverse of the covariance matrices are diffused between neighboring nodes. We will generalize the well-known convergence results on the classical LS by establishing both the upper bound of the accumulated regrets of the adaptive predictor and the convergence of the distributed LS estimator, with the following key features compared with the related results in the existing literature:
\begin{itemize}
\item Our theory does not need the usually assumed independence, stationarity or Gaussian property on the system signals, and hence does not exclude the applications of the theory to stochastic feedback systems, and will also make it possible for further investigation on related problems concerning the combination of learning, communication and control.
\item Our theory for the convergence of the distributed LS is established under a weakest possible cooperative excitation condition which is a natural extension of the single sensor case. The cooperative excitation condition introduced in this paper implies that even if any individual sensor is not able to estimate the unknown parameter, the distributed LS can still accomplish the estimation task. It is also considerably weaker than the related cooperative information condition introduced in our previous work for the theory of the distributed LMS filters (see e.g. \cite{Xie2018Auto, Xie2018SIAM, Xie2018TAC}).
\item The mathematical techniques used in our theoretical analysis are also different from the existing ones for distributed LS. Besides using the powerful techniques from the analysis of the classical LS, we also need to establish some inequalities on convex combination of nonnegative definite matrices and to use the Ky Fan convex theorem \cite{Ky1950}.
\end{itemize}
The rest of the paper is organized as follows. In Section II, we present some preliminaries on notations and graph theory, the observation model, and the distributed LS algorithm studied in the paper. The main results are stated in Section III. In Section IV, we provide the proofs of the main results. Finally, some concluding remarks are given in Section V.
\section{Problem Formulation}
\subsection{Basic Notations}
In the sequel, $X\in\mathbb{R}^{n}$ is viewed as an $n$-dimensional column vector and $A\in\mathbb{R}^{m\times n}$ is viewed as an $m\times n$-dimensional matrix. Let $A\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{n\times n}$ be two symmetric matrices; then $A\geq B$ ($A>B$) means $A-B$ is a positive semidefinite (definite) matrix. Also, let $\lambda_{max}\{\cdot\}$ and $\lambda_{min}\{\cdot\}$ denote the largest and the smallest eigenvalues of the corresponding matrix, respectively. For any matrix $X\in\mathbb{R}^{m\times n}$, $\parallel X\parallel$ denotes the operator norm induced by the Euclidean norm, i.e., $(\lambda_{max}\{XX^{T}\})^{\frac{1}{2}}$, where $(\cdot)^{T}$ denotes the transpose operator. We use $\mathbb{E}[\cdot]$ to denote the mathematical expectation operator, and $\mathbb{E}[\cdot|\mathcal{F}_{k}]$ to denote the conditional mathematical expectation operator, where $\{\mathcal{F}_{k}\}$ is a sequence of nondecreasing $\sigma$-algebras \cite{Chow1978}. We also use $\log (\cdot)$ to denote the natural logarithm function, and $\text{Tr}(\cdot)$ to denote the trace of the corresponding matrix. Throughout the paper, $\vert\cdot\vert$ denotes the determinant of the corresponding matrix, which should not be confused with the absolute value of a scalar from the context.
Let $\{A_{k},k\geq 0\}$ be a matrix sequence and $\{b_{k},k\geq 0\}$ be a positive scalar sequence. Then by $A_{k}=O(b_{k})$ we mean that there exists a constant $M>0$ such that $\|A_{k}\|\leq Mb_{k}, \forall k\geq 0$, and by $A_{k}=o(b_{k})$ we mean that $\lim\limits_{k\to\infty}\|A_{k}\|/b_{k}=0$.
\subsection{Graph Theory}
As usual, let the communication structure among sensors be represented by an undirected weighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{A})$, where $\mathcal{V}=\{1, 2,......, n\}$ is the set of sensors and $\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}$ is the set of edges. The structure of the graph $\mathcal{G}$ is described
by $\mathcal{A}=\{a_{ij}\}_{n\times n}$ which is called the weighted adjacency matrix, where $a_{ij}>0$ if $(i, j)\in\mathcal{E}$ and $a_{ij}=0$ otherwise. Note that $(i,j)\in\mathcal{E}\Leftrightarrow a_{ij}>0$. In this paper, we assume that the elements of the weighted matrix $\mathcal{A}$ satisfy $a_{ij}=a_{ji}, \forall i,j=1,\dots,n$, and $\sum_{j=1}^{n}a_{ij}=1,\forall i=1,\dots,n$. Thus the matrix $\mathcal{A}$ is symmetric and doubly stochastic\footnote{A matrix is called doubly stochastic, if all elements are nonnegative, both the sum of each row and the sum of each column equal to 1.}.
A path of length $\ell$ in the graph $\mathcal{G}$ is a sequence of nodes $\{i_{1},\dots,i_{\ell}\}$ subject to $(i_{j},i_{j+1})\in\mathcal{E}$, for $1\leq j\leq\ell-1$. The maximum value of the distances between any two nodes in the graph $\mathcal{G}$ is called the diameter of $\mathcal{G}$. Here in this paper, we assume that the graph is connected, and denote the diameter of the graph $\mathcal{G}$ as $D_{\mathcal{G}}$. Then $1\leq D_{\mathcal{G}}<\infty$ holds. The set of neighbors of the sensor $i$ is denoted as
$$
\mathcal{N}_{i}=\{j\in V | (j,i)\in \mathcal{E}\},
$$
and the sensor $i$ can only share information with its neighboring sensors from $\mathcal{N}_{i}$.
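
For concreteness, one standard way to construct a weighted adjacency matrix $\mathcal{A}$ with the above properties (symmetric and doubly stochastic) from a given undirected connected graph is via the so-called Metropolis weights; this particular rule is only an illustration and is not prescribed by our algorithm. A minimal Python sketch:
\begin{verbatim}
import numpy as np

def metropolis_weights(n, edges):
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(A, 1.0 - A.sum(axis=1))   # rows (and columns) sum to 1
    return A

A = metropolis_weights(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(A.sum(axis=0), A.sum(axis=1))            # all ones: doubly stochastic
\end{verbatim}
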
\subsection{Observation Model}
Let us consider a sensor network consisting of $n$ sensors. Assume that at each time instant $k$, each sensor $i\in\{1,\dots,n\}$ in the sensor network receives a noisy scalar measurement $y_{k+1,i}$ and an $m$-dimensional regressor $\bm{\varphi}_{k,i}\in\mathbb{R}^{m}$. They are related by a typical linear stochastic regression model
\begin{equation}\label{model}
y_{k+1,i}=\bm{\varphi}_{k,i}^{T}\bm{\theta}+w_{k+1,i}, ~~~~k\geq 0,
\end{equation}
where $w_{k+1,i}$ is a random noise process, and $\bm{\theta}\in\mathbb{R}^{m}$ is an unknown parameter vector which needs to be estimated. Here we assume that at any sensor $i\in\{1, \dots, n\}$, $\bm{\varphi}_{k,i}$ is $\mathcal{F}_{k}$-measurable, where $\{\mathcal{F}_{k}\}$ is a sequence of nondecreasing $\sigma$-algebras. Many problems from different application areas can be cast as $(\ref{model})$, see, e.g., \cite{Solo1995, Widerow1985, Sayed2014b}. At any time instant $k\geq 1$, sensor $i$ uses both the observations $y_{j+1,i}$ and the regressors $\bm{\varphi}_{j,i} (j\leq k)$ to estimate the unknown parameter $\bm{\theta}$, which can be regarded as a supervised learning problem \cite{Bot2018}.
Because of its ``optimality'' and fast convergence rate, the well-known LS algorithm is one of the most basic and widely used algorithms in science and technology. The LS estimate at each sensor $i$ is defined by the following at each time instant $k$:
$$
\bm{\theta}_{k,i}=\arg\min_{\bm{\theta}\in\mathbb{R}^{ m}}\sum_{j=1}^{k}(y_{j,i}-\bm{\varphi}_{j-1,i}^{T}\bm{\theta})^{2},
$$
which can be solved explicitly and can be calculated recursively as follows (see e.g. \cite{Chen1991}):
\begin{equation}\label{LS1}
\bm{\theta}_{k+1,i}=\bm{\theta}_{k,i}+b_{k,i}P_{k,i}\bm{\varphi}_{k,i}(y_{k+1,i}-\bm{\varphi}_{k,i}^{T}\bm{\theta}_{k,i}),
\end{equation}
\begin{equation}\label{LS2}
P_{k+1,i}=P_{k,i}-b_{k,i}P_{k,i}\bm{\varphi}_{k,i}\bm{\varphi}_{k,i}^{T}P_{k,i},
\end{equation}
\begin{equation}\label{LS3}
b_{k,i}=(1+\bm{\varphi}_{k,i}^{T}P_{k,i}\bm{\varphi}_{k,i})^{-1},
\end{equation}
where the initial estimate $\bm{\theta}_{0,i}\in\mathbb{R}^{m}$, and the initial positive definite matrix $P_{0,i}\in\mathbb{R}^{m\times m}$ can be chosen arbitrarily. Note that in practice $P_{0,i}$ is usually set as $\alpha_{0}I_{m}$, where $\alpha_{0}$ is a positive constant, and $I_{m}$ denotes the $m\times m$-dimensional identity matrix.
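
For illustration, the following minimal Python sketch implements the recursions $(\ref{LS1})$-$(\ref{LS3})$ at a single sensor with simulated data (the true parameter, the regressors and the noise level are hypothetical):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, T = 3, 500
theta = np.array([1.0, -2.0, 0.5])    # unknown parameter (hypothetical)
theta_hat = np.zeros(m)               # initial estimate
P = 100.0 * np.eye(m)                 # P_0 = alpha_0 * I_m

for k in range(T):
    phi = rng.normal(size=m)                       # regressor
    y = phi @ theta + 0.1 * rng.normal()           # noisy observation
    b = 1.0 / (1.0 + phi @ P @ phi)                # gain
    theta_hat = theta_hat + b * (P @ phi) * (y - phi @ theta_hat)
    P = P - b * np.outer(P @ phi, P @ phi)         # covariance update

print(theta_hat)   # close to theta
\end{verbatim}
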
The above defined LS algorithm can be used for adaptive prediction problems. For any $i\in\{1,\dots, n\}$, and at any time instant $k\geq 1$, the best prediction to the future observation $y_{k+1,i}$ is the following conditional mathematical expectation:
$$
\mathbb{E}[y_{k+1,i}\vert \mathcal{F}_{k}]=\bm{\varphi}_{k,i}^{T}\bm{\theta},
$$
if the noise is a martingale difference sequence with second moment. Unfortunately, this optimal predictor is unavailable because $\bm{\theta}$ is unknown. A natural way is to construct an adaptive predictor $\widehat{y}_{k+1,i}$ by using the online LS estimate $\bm{\theta}_{k,i}$, i.e.,
$$
\widehat{y}_{k+1,i}=\bm{\varphi}_{k,i}^{T}\bm{\theta}_{k,i}.
$$
The error between the best predictor and the adaptive predictor may be referred to as the regret denoted by
\begin{equation}
R_{k,i}=(\mathbb{E}[y_{k+1,i}\vert \mathcal{F}_{k}]-\widehat{y}_{k+1,i})^{2},
\end{equation}
which may not be zero and even may not be small in sample paths due to the persistent disturbance of the unpredictable noises in the model. However, one may evaluate the averaged regrets defined as follows:
\begin{equation}
\frac{1}{nt}\sum_{i=1}^{n}\sum_{k=0}^{t}R_{k,i},
\end{equation}
which we are going to show tends to zero as $t$ increases to infinity, under essentially no excitation conditions on the regressors and no independence, stationarity or Gaussian assumptions on the system signals; see \emph{Theorem 3.2} below. This is a celebrated property that is widely studied in distributed online learning and optimization problems \cite{Yan2013, Hos2016, Akb2017, Sha2018}, but under rather restrictive assumptions such as boundedness, stationarity or independence of the system signals. Moreover, different from \cite{Yan2013, Hos2016, Akb2017, Sha2018}, to make the supervised learning result applicable to prediction or classification problems with unseen data, one needs the so-called generalization ability in theory, which in turn requires studying the convergence of the LS estimate itself.
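To illustrate the averaged regret numerically, the single-sensor sketch above can be extended to track $R_{k,i}$ along the run (the data-generating process is again an assumption of ours; in a simulation $\bm{\theta}$ is known, so the conditional expectation $\bm{\varphi}_{k,i}^{T}\bm{\theta}$ can be evaluated):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, T = 3, 5000
theta_true = rng.normal(size=m)
theta, P = np.zeros(m), 100.0 * np.eye(m)
cum_regret = 0.0

for k in range(T):
    phi = rng.normal(size=m)
    y = phi @ theta_true + 0.1 * rng.normal()
    yhat = phi @ theta                            # adaptive predictor
    cum_regret += (phi @ theta_true - yhat) ** 2  # regret R_{k,i}
    Pphi = P @ phi
    b = 1.0 / (1.0 + phi @ Pphi)
    theta = theta + b * Pphi * (y - phi @ theta)
    P = P - b * np.outer(Pphi, Pphi)
print(cum_regret / T)   # averaged regret, of order O(log T / T) here
\end{verbatim}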
It is well-known that the estimation error of the above classical LS has the following upper bound (see \cite{Lai1982, Guo1995}) for each sensor $i\in\{1,\dots,n\}$ as $k\to\infty$:
\begin{equation}\label{trad}
\|\bm{\theta}_{k+1,i}-\bm{\theta}\|^{2}=O\Bigg(\frac{\log \Big(\lambda_{max}\{P_{0,i}^{-1}\}+\sum_{j=0}^{k}\|\bm{\varphi}_{j,i}\|^{2}\Big)}{\lambda_{min}\Big\{P_{0,i}^{-1}+\sum_{j=0}^{k}\bm{\varphi}_{j,i}\bm{\varphi}_{j,i}^{T}\Big\}}\Bigg),a.s.
\end{equation}
Consequently, it is easy to see that the LS estimates will converge to the true parameter if
\begin{equation}\label{tozero}
\lim_{k\to\infty}\frac{\log \Big(\lambda_{max}\{P_{0,i}^{-1}\}+\sum_{j=0}^{k}\|\bm{\varphi}_{j,i}\|^{2}\Big)}{\lambda_{min}\Big\{P_{0,i}^{-1}+\sum_{j=0}^{k}\bm{\varphi}_{j,i}\bm{\varphi}_{j,i}^{T}\Big\}}=0,~~a.s.
\end{equation}
Moreover, examples can be constructed to show that if the above limit is a nonzero constant, then the LS estimate cannot converge to the true parameter (see \cite{Lai1982}). In this sense, one can say that the condition (\ref{tozero}) is the weakest possible one for convergence of the classical LS \cite{Lai1982}. Despite this, the verification of (\ref{tozero}) is still a very challenging issue for stochastic adaptive control systems (see e.g. \cite{Astrom1973, Lai1982, Lai1986, Chen1991, Guo1995}). Moreover, for high-dimensional or sparse stochastic regressors, the condition (\ref{tozero}) may indeed fail to be satisfied. This situation may be improved by exchanging information among the nodes of a sensor network, on which the distributed LS is defined.
\subsection{Distributed LS Algorithm}
In this paper, we will consider the following basic class of distributed LS algorithms of diffusion type, and our main contribution is to establish a convergence theory for general correlated, non-stationary and non-Gaussian regression signals, so that the theory is applicable to control systems.
\begin{algorithm}\label{LMS2}
\caption{Distributed LS algorithm}
For any given sensor $i\in\{1,\dots,n\}$, begin with an initial estimate $\bm{\theta}_{0,i}\in\mathbb{R}^{m}$, and an initial positive definite matrix $P_{0,i}\in\mathbb{R}^{m\times m}$. The algorithm is recursively defined at any iteration $k\geq 0$ as follows:
\begin{algorithmic}[1]
\State Adapt (generate $\bar{\bm{\theta}}_{k+1,i}$ and $\bar{P}_{k+1,i}$ on the basis of $\bm{\theta}_{k,i}, P_{k,i}$ and $\bm{\varphi}_{k,i}, y_{k+1,i}$):
\begin{equation}
\bar{\bm{\theta}}_{k+1,i}=\bm{\theta}_{k,i}+b_{k,i}P_{k,i}\bm{\varphi}_{k,i}(y_{k+1,i}-\bm{\varphi}_{k,i}^{T}\bm{\theta}_{k,i}),
\end{equation}
\begin{equation}\label{barP}
\bar{P}_{k+1,i}=P_{k,i}-b_{k,i}P_{k,i}\bm{\varphi}_{k,i}\bm{\varphi}_{k,i}^{T}P_{k,i},
\end{equation}
\begin{equation}
b_{k,i}=(1+\bm{\varphi}_{k,i}^{T}P_{k,i}\bm{\varphi}_{k,i})^{-1},
\end{equation}
\State Combine (generate $P_{k+1,i}^{-1}$ and $\bm{\theta}_{k+1,i}$ by a convex combination of $\bar{P}_{k+1,j}^{-1}$ and $\bar{\bm{\theta}}_{k+1,j}$):
\begin{equation}
P_{k+1,i}^{-1}=\sum_{j\in\mathcal{N}_{i}}a_{ji}\bar{P}_{k+1,j}^{-1},
\end{equation}
\begin{equation}
\bm{\theta}_{k+1,i}=P_{k+1,i}\sum_{j\in\mathcal{N}_{i}}a_{ji}\bar{P}_{k+1,j}^{-1}\bar{\bm{\theta}}_{k+1,j}.
\end{equation}
\end{algorithmic}
\end{algorithm}
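As a complement to the pseudo-code, the following Python sketch implements one adapt-and-combine iteration of \textbf{Algorithm 1} (the data structures and the use of a dense weight matrix, with $a_{ji}=0$ for $j\notin\mathcal{N}_{i}$, are our own illustrative choices):
\begin{verbatim}
import numpy as np

def distributed_ls_step(thetas, Ps, phis, ys, A):
    """One iteration of the distributed LS (Algorithm 1).
    thetas: (n, m); Ps: (n, m, m); phis: (n, m); ys: (n,);
    A: (n, n) symmetric weights with rows summing to one."""
    n, m = thetas.shape
    bar_theta = np.empty_like(thetas)
    bar_Pinv = np.empty((n, m, m))
    for i in range(n):                  # adapt: local LS update
        Pphi = Ps[i] @ phis[i]
        b = 1.0 / (1.0 + phis[i] @ Pphi)
        bar_theta[i] = thetas[i] + b * Pphi * (ys[i] - phis[i] @ thetas[i])
        # bar_P^{-1} = P^{-1} + phi phi^T, so this inversion could be
        # avoided; we invert only to stay close to the displayed recursions
        bar_Pinv[i] = np.linalg.inv(Ps[i] - b * np.outer(Pphi, Pphi))
    for i in range(n):                  # combine: covariance intersection
        Pinv = sum(A[j, i] * bar_Pinv[j] for j in range(n))
        Ps[i] = np.linalg.inv(Pinv)
        thetas[i] = Ps[i] @ sum(A[j, i] * bar_Pinv[j] @ bar_theta[j]
                                for j in range(n))
    return thetas, Ps
\end{verbatim}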
When $\mathcal{A}=I_{n}$, the above distributed LS degenerates to the classical LS. One may also perform the combination stage for more steps to improve the performance of the algorithm, see e.g. \cite{Bat2014}. Note that the diffusion strategy used above is known as the covariance intersection fusion rule (see, e.g., \cite{Julier1997, Chen2002}), and that the above distributed LS algorithm can be deduced from the distributed KF algorithms \cite{Bat2014, Bat2015, Liu2018} by assuming that the state to be estimated is a constant parameter. In this paper, we are interested in the case where each sensor in the network expects to estimate the unknown parameter for its decision, which is a problem widely studied in the literature, see e.g., \cite{Taj2011, Sayed2013, Sayed2014b, Khalili2017, Sayed2006, Chaves2013, Xie2018Auto, Xie2018SIAM, Mateos2012, Mateos2009, Gra2019, Carli2008, Bat2014, Bat2015, Liu2018, Das2017,Xie2018TAC, Piggott2016, Nosrati2015, Tu2012, Vah2018, Har2019, Xiao2006, Cattivelli2008, Bertrand2011, Ara2014, Vah2017, Ras2019, Yu2019}. Here we focus on the scenario where an individual sensor has insufficient information and capability to fulfill the estimation task. It is well known that the estimates and covariance matrices from different sensors may contain complementary information, and combining these two kinds of information may help to achieve a more accurate estimation of the unknown parameter. Moreover, as stated in \cite{Bat2014}, the unaware reuse of the same data due to the presence of loops within the network, as well as the possible correlation between measurements of different sensors, may lead to inconsistency and divergence, which is the primary motivation behind the development of the so-called covariance intersection fusion rule \cite{Julier1997, Chen2002}. Thus, in order to guarantee the convergence of the estimates for non-independent signals, it may not be sufficient to exchange information about the estimates only.
Note that in the above distributed LS, the computational complexity of each sensor is $O(m^{3})$. Moreover, every sensor needs to communicate a total of $(m^{2}+3m)/2$ scalars to its neighboring nodes, and to store a total of $2m^{2}+5m+n+2$ scalars locally at each time instant $k$. The algorithm becomes time-consuming when $m$ is very large, and the covariance intersection fusion rule is beneficial only when the number of parameters is manageable locally. Note that if the matrix $\bar{P}_{k,i}$ degenerates to a scalar, for example in stochastic gradient-based \cite{Chen1991} and LMS-based \cite{Widerow1985, Solo1995, Guo1994, Guo1995} distributed estimation algorithms, the communication complexity will be reduced. However, for those algorithms, the estimation error either converges slowly to zero or does not converge to zero at all. Therefore, there is a tradeoff between the complexity and the convergence rate of distributed estimation algorithms. Moreover, the convergence rate is ``optimal'' when $\bar{P}_{k,i}$ is chosen in the form used in this paper. Furthermore, some existing methods can be used to reduce the communication complexity and to make the algorithm suitable for higher-dimensional signals, for example event-driven methods \cite{Zhong2010}, partial diffusion methods \cite{Vah2018, Ara2014, Vah2017}, and compressed methods \cite{Xie2019}.
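For instance, with $m=10$ each sensor transmits $(m^{2}+3m)/2=65$ scalars per step, presumably the $m(m+1)/2=55$ entries of the symmetric matrix $\bar{P}_{k+1,i}^{-1}$ together with the $m=10$ entries of $\bar{P}_{k+1,i}^{-1}\bar{\bm{\theta}}_{k+1,i}$, whereas an LMS-type scheme, in which $\bar{P}_{k,i}$ degenerates to a scalar, would only need about $m+1$ scalars.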
\section{The Main Results}
\subsection{Some Preliminaries}
For the theoretical analysis, we need the following standard condition on the noise processes.
\emph{Condition 3.1 (Noise condition).} For any $i\in\{1,\dots,n\}$, the noise sequence $\{w_{k,i},\mathcal{F}_{k}\}$ is a martingale difference sequence (where $\{\mathcal{F}_{k}\}$ is a sequence of nondecreasing $\sigma$-algebras), and there exists a constant $\beta>2$ such that
\begin{equation}
\sup_{k\geq 0}\mathbb{E}[\vert w_{k+1,i}\vert^{\beta}\vert\mathcal{F}_{k}]<\infty,~~~~a.s.
\end{equation}
In order to guarantee the convergence of the above distributed LS algorithm, the following condition on the network topology is naturally required to avoid isolated nodes in the network.
\emph{Condition 3.2 (Network topology).} The graph $\mathcal{G}$ is connected.
\emph{Remark 3.1.} From \emph{Lemma 8.1.2} in \cite{Godsil2014}, it is not difficult to see that for any two nodes $i$ and $j$, there exists a path from $i$ to $j$ of length not larger than $\ell$ if and only if the $(i,j)$th entry of the matrix $\mathcal{A}^{\ell}$ is positive. From this, it is easy to see that each entry of the matrix $\mathcal{A}^{\ell}$ will be positive when $\ell$ is not smaller than the diameter $D_{\mathcal{G}}$ of the graph $\mathcal{G}$, see also \cite{Liu2018}.
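The observation in \emph{Remark 3.1} is easy to check numerically. In the following sketch, the weight matrix (our own illustrative choice, corresponding to a path graph with three nodes and diameter $D_{\mathcal{G}}=2$) has some zero entries for $\ell=1$, while all entries of $\mathcal{A}^{\ell}$ are positive from $\ell=2$ onwards:
\begin{verbatim}
import numpy as np

A = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])   # symmetric, rows sum to one
for ell in range(1, 4):
    positive = bool(np.all(np.linalg.matrix_power(A, ell) > 0))
    print(ell, positive)          # 1 False, 2 True, 3 True
\end{verbatim}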
\subsection{Theoretical Results}
For convenience of analysis, we need to introduce the following notations:
$$
\begin{aligned}
&\bm{Y}_{k+1}\overset{\triangle}{=}\text{col}\{y_{k+1,1},\dots,y_{k+1,n}\},~~~~~~~~~~~~~(n\times 1)\\
&\bm{\Phi}_{k}\overset{\triangle}{=}\text{diag}\{\bm{\varphi}_{k,1},\dots,\bm{\varphi}_{k,n}\},~~~~~~~~~~~~~~~~~~(mn\times n)\\
&\bm{W}_{k+1}\overset{\triangle}{=}\text{col}\{w_{k+1,1},\dots,w_{k+1,n}\},~~~~~~~~~~(n\times 1)\\
&\bm{\Theta}\overset{\triangle}{=}\text{col}\{\underbrace{\bm{\theta},\dots,\bm{\theta}}_{n}\},~~~~~~~~~~~~~~~~~~~~~~~~~~~~(mn\times 1)\\
&\bm{\Theta}_{k}\overset{\triangle}{=}\text{col}\{\bm{\theta}_{k,1},\dots,\bm{\theta}_{k,n}\},~~~~~~~~~~~~~~~~~~~~(mn\times 1)\\
&\bar{\bm{\Theta}}_{k}\overset{\triangle}{=}\text{col}\{\bar{\bm{\theta}}_{k,1},\dots,\bar{\bm{\theta}}_{k,n}\},~~~~~~~~~~~~~~~~~~~~(mn\times 1)\\
&\widetilde{\bm{\Theta}}_{k}\overset{\triangle}{=}\text{col}\{\widetilde{\bm{\theta}}_{k,1},\dots,\widetilde{\bm{\theta}}_{k,n}\},~~~~~~~~~~~~~~~~~~~~(mn\times 1)\\
&~~~~~~~\text{where}~ \widetilde{\bm{\theta}}_{k,i}=\bm{\theta}-\bm{\theta}_{k,i},\\
&\widetilde{\bar{\bm{\Theta}}}_{k}\overset{\triangle}{=}\text{col}\{\widetilde{\bar{\bm{\theta}}}_{k,1},\dots,\widetilde{\bar{\bm{\theta}}}_{k,n}\},~~~~~~~~~~~~~~~~~~~~(mn\times 1)\\
&~~~~~~~\text{where}~ \widetilde{\bar{\bm{\theta}}}_{k,i}=\bm{\theta}-\bar{\bm{\theta}}_{k,i},\\
&\bm{b}_{k}\overset{\triangle}{=}\text{diag}\{b_{k,1},\dots,b_{k,n}\},~~~~~~~~~~~~~~~~~~~~(n\times n)\\
&\bm{c}_{k}\overset{\triangle}{=}\bm{b}_{k}\otimes I_{m},~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(mn\times mn)\\
&\bm{P}_{k}\overset{\triangle}{=}\text{diag}\{P_{k,1},\dots,P_{k,n}\},~~~~~~~~~~~~~~~~~~~(mn\times mn)\\
&\bar{\bm{P}}_{k}\overset{\triangle}{=}\text{diag}\{\bar{P}_{k,1},\dots,\bar{P}_{k,n}\},~~~~~~~~~~~~~~~~~~~(mn\times mn)\\
&\bm{\mathscr{A}}\overset{\triangle}{=}\mathcal{A}\otimes I_{m},~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(mn\times mn)
\end{aligned}
$$
where $\text{col}\{\cdots\}$ denotes a vector obtained by stacking the specified vectors, $\text{diag}\{\cdots\}$ is used in a non-standard manner, meaning that the $m\times 1$ column vectors are combined ``in a diagonal manner'', resulting in an $mn\times n$ matrix, and $\otimes$ is the Kronecker product. Note also that $\bm{\Theta}$ is just the $n$-fold replication of the vector $\bm{\theta}$, and the matrix $\mathcal{A}$ is the weighted adjacency matrix of the graph $\mathcal{G}$.
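Since $\text{diag}\{\cdots\}$ above is used in a non-standard manner, the following lines (with assumed values of $n$ and $m$) show how $\bm{\Phi}_{k}$ and $\bm{\mathscr{A}}$ would be assembled in practice:
\begin{verbatim}
import numpy as np

n, m = 3, 2
phis = [np.arange(1, m + 1) * (i + 1.0) for i in range(n)]  # varphi_{k,i}
Phi = np.zeros((m * n, n))        # the (mn x n) block-"diagonal" matrix
for i, phi in enumerate(phis):
    Phi[i * m:(i + 1) * m, i] = phi
A = np.full((n, n), 1.0 / n)      # an illustrative weight matrix
scrA = np.kron(A, np.eye(m))      # Kronecker product, size (mn x mn)
\end{verbatim}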
Then (\ref{model}) can be rewritten in the following compact form:
\begin{equation}\label{Model}
\bm{Y}_{k+1}=\bm{\Phi}_{k}^{T}\bm{\Theta}+\bm{W}_{k+1}.
\end{equation}
Similarly, for the distributed LS algorithm we have
\begin{equation}\label{7}
\begin{cases}
&\bar{\bm{\Theta}}_{k+1}=\bm{\Theta}_{k}+\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}(\bm{Y}_{k+1}-\bm{\Phi}_{k}^{T}\bm{\Theta}_{k}),\\
&\bar{\bm{P}}_{k+1}=\bm{P}_{k}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k},\\
&\bm{b}_{k}=(I_{n}+\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})^{-1},\\
&\bm{c}_{k}=\bm{b}_{k}\otimes I_{m},\\
&\text{vec}\{\bm{P}_{k+1}^{-1}\}=\mathscr{A}\text{vec}\{\bar{\bm{P}}_{k+1}^{-1}\},\\
&\bm{\Theta}_{k+1}=\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bar{\bm{\Theta}}_{k+1},
\end{cases}
\end{equation}
where $\text{vec}\{\cdot\}$ denotes the operator that stacks the blocks of a block diagonal matrix on top of each other.
Since $\widetilde{\bm{\Theta}}_{k}=\bm{\Theta}-\bm{\Theta}_{k}$ and $\widetilde{\bar{\bm{\Theta}}}_{k}=\bm{\Theta}-\bar{\bm{\Theta}}_{k}$ by definition, substituting (\ref{Model}) into (\ref{7}), we can get
$$
\begin{aligned}
\widetilde{\bar{\bm{\Theta}}}_{k+1}=&\bm{\Theta}-\bar{\bm{\Theta}}_{k+1}\\
=&\bm{\Theta}-\bm{\Theta}_{k}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}(\bm{\Phi}_{k}^{T}\bm{\Theta}+\bm{W}_{k+1}-\bm{\Phi}_{k}^{T}\bm{\Theta}_{k})\\
=&(I_{mn}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T})\widetilde{\bm{\Theta}}_{k}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\\
=&\bar{\bm{P}}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}.
\end{aligned}
$$
Note also that
$$
\begin{aligned}
&\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{\Theta}\\
=&\text{col}\Bigg\{P_{k+1,1}\sum_{j\in\mathcal{N}_{1}}a_{j1}\bar{P}_{k+1,j}^{-1}\bm{\theta},\dots,P_{k+1,n}\sum_{j\in\mathcal{N}_{n}}a_{jn}\bar{P}_{k+1,j}^{-1}\bm{\theta}\Bigg\}.
\end{aligned}
$$
Then for each sensor $i\in\{1,2,\dots,n\}$,
$$
P_{k+1,i}\sum_{j\in\mathcal{N}_{i}}a_{ji}\bar{P}_{k+1,j}^{-1}\bm{\theta}=\Bigg[P_{k+1,i}\Bigg(\sum_{j\in\mathcal{N}_{i}}a_{ji}\bar{P}_{k+1,j}^{-1}\Bigg)\Bigg]\bm{\theta}=\bm{\theta}.
$$
Thus, $\bm{\Theta}=\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{\Theta}$ holds. Then we have
\begin{align}\label{Theta}
\widetilde{\bm{\Theta}}_{k+1}=&\bm{\Theta}-\bm{\Theta}_{k+1}\nonumber\\
=&\bm{\Theta}-\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bar{\bm{\Theta}}_{k+1}\nonumber\\
=&\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{\Theta}-\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bar{\bm{\Theta}}_{k+1}\nonumber\\
=&\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\widetilde{\bar{\bm{\Theta}}}_{k+1}\nonumber\\
=&\bm{P}_{k+1}\mathscr{A}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
&-\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}.
\end{align}
Before establishing a theory on the learning and prediction behavior of the distributed LS, we first present a critical theorem, which requires no excitation conditions on the regression process $\bm{\varphi}_{k,i}$.
\emph{Theorem 3.1.} Let \emph{Condition 3.1} be satisfied. Then, as $t\to\infty$, we have
\begin{enumerate}
\item
$
\sum\limits_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}=O(\log (r_{t})),~~~a.s.,
$
\vskip 0.3cm
\item
$
\widetilde{\bm{\Theta}}_{t+1}^{T}\bm{P}_{t+1}^{-1}\widetilde{\bm{\Theta}}_{t+1}=O(\log (r_{t})),~~~a.s.,
$
\end{enumerate}
where
\begin{equation}\label{rtdef}
r_{t}=\lambda_{max}\{\bm{P}_{0}^{-1}\}+\sum_{i=1}^{n}\sum_{k=0}^{t}\|\bm{\varphi}_{k,i}\|^{2}.
\end{equation}
The detailed proof of \emph{Theorem 3.1} is supplied in the next section. From this, we can obtain the following upper bound of the accumulated regrets for the distributed LS-based adaptive predictor.
\emph{Theorem 3.2.} Let \emph{Condition 3.1} be satisfied. Then the sample paths of the accumulated regrets have the following bound as $t\to\infty$:
\begin{align}\label{Coro}
\sum_{i=1}^{n}\sum_{k=0}^{t}R_{k,i}=O(\log(r_{t})),~~~~a.s.,
\end{align}
provided that $\bm{\Phi}_{t}^{T}\bm{P}_{t}\bm{\Phi}_{t}=O(1), a.s.$
The proof of \emph{Theorem 3.2} is given in Section IV.
\emph{Remark 3.2.} We remark that when the regressors at each node are bounded in the time-averaging sense, $r_t$ will be of the order $O(t)$, and consequently, by \emph{Theorem 3.2}, the bound on the accumulated regrets (\ref{Coro}) will be sublinear with respect to $nt$, i.e., $\frac{1}{nt}\sum_{i=1}^{n}\sum_{k=0}^{t}R_{k,i} = O\big(\frac{\log t}{t}\big) \to 0$ as $t\to\infty$; that is, the averaged regret goes to zero and the distributed LS algorithm performs well for the prediction problem. The order $O(\log(r_{t}))$ for the accumulated regrets may be shown to be the best possible among all adaptive predictors, as is already known in the traditional single-sensor case, see \cite{Lai1986A}. The precise constant in $O(\cdot)$ may also be determined if we have further conditions on the regressors, see \emph{Corollary 3.3} in \cite{Guo1995} for the single-sensor case.
We point out that one can also obtain a precise upper bound for the expected accumulated regrets for any finite $t\geq 1$, which is stated in the following remark.
\emph{Remark 3.3.} Let \emph{Condition 3.1} be satisfied. Then the expected accumulated regrets have the following bound for any $t\geq 1$:
$$
\sum_{i=1}^{n}\sum_{k=0}^{t}\mathbb{E}[R_{k,i}]\leq a\log(\mathbb{E}[r_{t}])+b,
$$
provided that $\mathbb{E}[\|\bm{\varphi}_{k,i}\|^{2}]<\infty, \forall k\geq 0, \forall i\in\{1,\dots,n\}$, and that there exist deterministic constants $c>0, \bar{\sigma}>0$ such that $\|\bm{\Phi}_{t}^{T}\bm{P}_{t}\bm{\Phi}_{t}\|\leq c, \sigma_{w}\leq\bar{\sigma}$, where
$$
\begin{aligned}
&a=(1+c)mn\bar{\sigma},\\
&b=(1+c)\Big\{\mathbb{E}[\widetilde{\bm{\Theta}}_{0}^{T}\bm{P}_{0}^{-1}\widetilde{\bm{\Theta}}_{0}]-\bar{\sigma}\mathbb{E}[\log(\vert\bm{P}_{0}^{-1}\vert)]\Big\}.
\end{aligned}
$$
Note that
\begin{equation}\label{sigmaw}
\sigma_{w}\overset{\triangle}{=}\sum_{i=1}^{n}\sigma_{i}^{2}, ~~~\sigma_{i}^{2}\overset{\triangle}{=}\sup\limits_{k\geq 0}\mathbb{E}[w_{k+1,i}^{2}\vert\mathcal{F}_{k}],
\end{equation}
which is finite almost surely by \emph{Condition 3.1}. The detailed proof is given in Appendix B.
From \emph{Theorem 3.1}, we can also obtain the strong consistency of the distributed LS to guarantee the generalization ability of learning, under the following cooperative excitation condition.
\emph{Condition 3.3 (Cooperative excitation condition).} The growth rate of $\log(\lambda_{max}\{\bm{P}_{k}^{-1}\})$ is slower than that of $\lambda_{min}\{\bm{P}_{k}^{-1}\}$, in other words,
\begin{equation}
\lim_{t\to\infty}\frac{\log (r_{t})}{\lambda_{min}^{n,t}}=0,~~~~~a.s.,
\end{equation}
where $r_{t}$ is defined by (\ref{rtdef}), and
$$
\lambda_{min}^{n,t}=\lambda_{min}\Bigg\{\sum_{j=1}^{n}P_{0,j}^{-1}+\sum_{j=1}^{n}\sum_{k=0}^{t-D_\mathcal{G}+1}\bm{\varphi}_{k,j}\bm{\varphi}_{k,j}^{T}\Bigg\}.
$$
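Numerically, \emph{Condition 3.3} amounts to tracking the ratio $\log(r_{t})/\lambda_{min}^{n,t}$ along the run. The following sketch (our own toy construction with $n=2$ and $D_{\mathcal{G}}=1$) uses regressors for which each sensor alone is excited only along a single coordinate, so that (\ref{tozero}) fails at every node, while the cooperative ratio still tends to zero:
\begin{verbatim}
import numpy as np

n, m, T = 2, 2, 1000
basis = np.eye(m)
G = n * np.eye(m)      # sum_j P_{0,j}^{-1} with P_{0,j} = I_m
r = 1.0                # lambda_max{P_0^{-1}} = 1
for k in range(T):
    for j in range(n):
        phi = basis[j]             # sensor j sees only direction e_j
        G += np.outer(phi, phi)
        r += phi @ phi
print(np.log(r) / np.linalg.eigvalsh(G).min())   # -> 0 as T grows
\end{verbatim}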
\emph{Remark 3.4.} Let us give some intuitive explanations for \emph{Condition 3.3} used in the paper. To start with, let us first consider the extreme case where the regressor process $\bm{\varphi}_{k,i}$ is identically zero. It is clear that \emph{Condition 3.3} is not satisfied, which is indeed a trivial case where the system is not identifiable, since the observations contain no information about the unknown parameters. Hence, to estimate the unknown parameters, some non-zero ``excitation'' conditions should be imposed on the regressors $\bm{\varphi}_{k,i}$, which are usually reflected in the so-called (Fisher) information matrix $\bm{P}_{k}^{-1}$, and are now explicitly required in our \emph{Condition 3.3}.
We remark that in the traditional single-sensor case (where $n=1$ and $D_{\mathcal{G}}=1$), \emph{Condition 3.3} reduces to the well-known Lai-Wei excitation condition (\ref{tozero}) with $i=1$, which is known to be the weakest possible data condition for the convergence of the classical LS estimates \cite{Lai1982}. This condition is much weaker than the well-known persistence of excitation (PE) condition usually used in the parameter estimation of finite-dimensional linear control systems, since the PE condition requires that the condition number of $\bm{P}_{k}^{-1}$, i.e., $\frac{\lambda_{max}\{\bm{P}_{k}^{-1}\}}{\lambda_{min}\{\bm{P}_{k}^{-1}\}}$, be bounded a.s. for all $k \geq 1$.
Moreover, it is easy to convince oneself that the cooperative excitation condition (\emph{Condition 3.3}) will make it possible for the distributed LS to consistently estimate the unknown parameter, even if any individual sensor cannot due to lack of suitable excitation, e.g., when (\ref{tozero}) is not satisfied, because \emph{Condition 3.3} is obviously weaker than (\ref{tozero}) for any $i$.
Finally, we remark that the verification of \emph{Condition 3.3} is straightforward in the ergodic case, since $\log (r_t)$ is of the order $O(\log t)$, and $\lambda_{min}^{n,t}/t$ tends to $\lambda_{min}\big\{\sum_{j=1}^{n} \mathbb{E}[\bm{\varphi}_{0,j}\bm{\varphi}_{0,j}^{T}]\big\}$ as $t\to\infty$, which will be positive if the expectation of the sum of the covariance matrices is positive definite. For more general correlated non-stationary signals from control systems, the verification of \emph{Condition 3.3} may be conducted in a way similar to that for the traditional single-sensor case (see \cite{Chen1991}).
\emph{Theorem 3.3} below states that if \emph{Condition 3.3} holds, then the distributed LS estimate $\bm{\Theta}_{t}$ will converge to the true unknown parameter.
\emph{Theorem 3.3.} Let \emph{Conditions 3.1} and \emph{3.2} be satisfied. Then, as $t\to\infty$, we have
\begin{equation}
\|\widetilde{\bm{\Theta}}_{t+1}\|^{2}=O\Bigg(\frac{\log (r_{t})}{\lambda_{min}^{n,t}}\Bigg),~~a.s.,
\end{equation}
where $r_{t}$ is defined by (\ref{rtdef}) and $\lambda_{min}^{n,t}$ is defined in \emph{Condition 3.3}.
\emph{Remark 3.5.} The detailed proof of \emph{Theorem 3.3} is given in the next section. We remark that the upper bound of the estimation error $\widetilde{\bm{\Theta}}_{t+1}$ established in \emph{Theorem 3.3} does not need \emph{Condition 3.3}. It is needed only when the estimation error $\widetilde{\bm{\Theta}}_{t+1}$ is required to approach zero. Moreover, the above theoretical analysis method can naturally be generalized to multidimensional cases, e.g. the widely used autoregressive-moving average with exogenous input (ARMAX) model \cite{Chen1991}, where the unknown parameter is a matrix, both the regressors and observations are stochastic vectors, and the noises are colored.
Note that the linear stochastic regression model $(\ref{model})$ is a basic hypothesis for our theoretical investigation, which can be regarded as an approximation of more complex systems and is widely used and studied in many different fields, e.g., automatic control, signal processing, statistics, adaptive filtering, distributed estimation, and so on. Note also that the linearity in the model $(\ref{model})$ is only assumed for the unknown parameter $\bm{\theta}$; the regressor $\bm{\varphi}_{k,i}$ can be nonlinear in terms of the input and output data. Of course, when the data do not satisfy such a model, the estimates may be biased, and the problem as well as the corresponding theory should be reformulated and investigated. If we assume that the noise process $w_{k+1,i}$ contains not only the observation noise satisfying \emph{Condition 3.1}, but also some unknown dynamics (or model bias) which is assumed to be bounded, then it is not difficult to prove that the above regret bound will depend on the bound of the unknown dynamics under the PE condition \cite{Chen1991}. Moreover, if we assume that the observation model contains some types of bias, then the deviation in the estimates may either be corrected by some bias-compensation techniques \cite{Bertrand2011}, or be approximated by using a regression model with slowly increasing lags (see \cite{Chen1991}, Chapter 9). Furthermore, some model validation methods may also be used to estimate the bound of the unknown dynamics (or model bias) when the ideal mathematical model is biased \cite{Ljung1997}. In all these cases, the analyses in this paper should serve as a basis for further investigation of the related distributed estimation problems.
\emph{Remark 3.6.} Let us now compare the above distributed LS algorithm with centralized methods whereby, at each time instant $k$, all the $n$ sensors transmit their raw data $\{y_{k+1,i}, \bm{\varphi}_{k,i}\}$ to a fusion center for processing to obtain the centralized estimate $\bm{\theta}_{k+1}^{c}$. Note that there are many different ways to construct a centralized algorithm, which may give different estimation errors. Let us consider a simple and natural way in the following. Denote
$$
\begin{aligned}
&\bm{Y}_{k+1}\overset{\triangle}{=}\text{col}\{y_{k+1,1},\dots,y_{k+1,n}\},~~~~~~~~~~~~~(n\times 1)\\
&\bm{W}_{k+1}\overset{\triangle}{=}\text{col}\{w_{k+1,1},\dots,w_{k+1,n}\},~~~~~~~~~~~(n\times 1)\\
&\bm{\Phi}_{k}^{c}\overset{\triangle}{=}\begin{pmatrix}
\bm{\varphi}_{k,1} & \cdots & \bm{\varphi}_{k,n}
\end{pmatrix},~~~~~~~~~~~~~~~~~~(m\times n)\\
\end{aligned}
$$
then one has the following regression model:
$$
\bm{Y}_{k+1}=(\bm{\Phi}_{k}^{c})^{T}\bm{\theta}+\bm{W}_{k+1}.
$$
Let the centralized LS estimate be defined by the following at each time instant $k$:
$$
\bm{\theta}_{k}^{c}=\arg\min_{\bm{\theta}\in\mathbb{R}^{ m}}\sum_{j=1}^{k}[\bm{Y}_{j}-(\bm{\Phi}_{j-1}^{c})^{T}\bm{\theta}]^{T}[\bm{Y}_{j}-(\bm{\Phi}_{j-1}^{c})^{T}\bm{\theta}],
$$
which can be calculated recursively as follows:
$$
\bm{\theta}_{k+1}^{c}=\bm{\theta}_{k}^{c}+P_{k}\bm{\Phi}_{k}^{c}B_{k}[\bm{Y}_{k+1}-(\bm{\Phi}_{k}^{c})^{T}\bm{\theta}_{k}^{c}],
$$
$$
P_{k+1}=P_{k}-P_{k}\bm{\Phi}_{k}^{c}B_{k}(\bm{\Phi}_{k}^{c})^{T}P_{k},
$$
$$
B_{k}=[I_{n}+(\bm{\Phi}_{k}^{c})^{T}P_{k}\bm{\Phi}_{k}^{c}]^{-1},
$$
where the initial estimate $\bm{\theta}_{0}^{c}\in\mathbb{R}^{m}$, and the initial positive definite matrix $P_{0}\in\mathbb{R}^{m\times m}$ can be chosen arbitrarily. Then by (\ref{trad}), the above centralized LS has the following upper bound for the estimation error as $k\to\infty$:
$$
\begin{aligned}
&\|\bm{\theta}_{k+1}^{c}-\bm{\theta}\|^{2}\\
=&O\Bigg(\frac{\log \Big(\lambda_{max}\{P_{0}^{-1}\}+\sum_{j=0}^{k}\|\bm{\Phi}_{j}^{c}\|^{2}\Big)}{\lambda_{min}\Big\{P_{0}^{-1}+\sum_{j=0}^{k}\bm{\Phi}_{j}^{c}(\bm{\Phi}_{j}^{c})^{T}\Big\}}\Bigg),~~a.s.\\
=&O\Bigg(\frac{\log \Big(\lambda_{max}\{P_{0}^{-1}\}+\sum_{i=1}^{n}\sum_{j=0}^{k}\|\bm{\varphi}_{j,i}\|^{2}\Big)}{\lambda_{min}\Big\{P_{0}^{-1}+\sum_{i=1}^{n}\sum_{j=0}^{k}\bm{\varphi}_{j,i}(\bm{\varphi}_{j,i})^{T}\Big\}}\Bigg),~~a.s.
\end{aligned}
$$
From this and \emph{Theorem 3.3} one can see that both the convergence condition and the convergence rate of the centralized algorithm are essentially the same as those of the distributed algorithm. Moreover, for the centralized algorithm, the computational complexity of the fusion center is $O(m^{3}+m^{2}n+mn^{2}+n^{3})$, which is of the same order as the computational complexity of \textbf{Algorithm 1}. Every sensor needs to communicate a total of $m+1$ scalars to the fusion center, and the fusion center needs to communicate a total of $m$ scalars to each sensor and to store a total of $(m^{2}+3m+n^{2}+3n)/2+mn$ scalars at each time instant $k$. Although the centralized algorithm has some advantages over the distributed algorithm in terms of communication complexity, it also has some drawbacks. Firstly, distributed methods may have stronger structural robustness than centralized ones, because the centralized algorithm will fail once the fusion center breaks down under outside attacks, whereas the distributed algorithm can still estimate the unknown parameters even if the communications among some sensors are interrupted, as long as the network connectivity is maintained. Secondly, if the fusion center is far away from some sensors, communication with the fusion center may not be feasible, and even when it is, the transmission of observations and regression vectors may compromise the safety and privacy of the system. Hence, many factors need to be considered when choosing between centralized and distributed algorithms.
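For completeness, the centralized recursion above can be sketched as follows (dimensions, noise level and the data-generating process are illustrative assumptions of ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m, n, T = 3, 4, 2000
theta_true = rng.normal(size=m)
theta_c, P = np.zeros(m), 100.0 * np.eye(m)

for k in range(T):
    Phi_c = rng.normal(size=(m, n))   # stacked regressors (m x n)
    Y = Phi_c.T @ theta_true + 0.1 * rng.normal(size=n)
    B = np.linalg.inv(np.eye(n) + Phi_c.T @ P @ Phi_c)
    theta_c = theta_c + P @ Phi_c @ B @ (Y - Phi_c.T @ theta_c)
    P = P - P @ Phi_c @ B @ Phi_c.T @ P
print(np.linalg.norm(theta_c - theta_true))
\end{verbatim}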
\section{Proofs of the main results}
\subsection{Proof of \emph{Theorem 3.1}}
To prove \emph{Theorem 3.1}, we need to establish several lemmas first. The first lemma below is a key inequality on convex combinations of nonnegative definite matrices.
\emph{Lemma 4.1.} For any adjacency matrix $\mathcal{A}=\{a_{ij}\}\in\mathbb{R}^{n\times n}$, denote $\mathscr{A}=\mathcal{A}\otimes I_{m}$, and for any nonnegative definite matrices $Q_{i}\in\mathbb{R}^{m\times m}, i=1,\dots,n$, denote
$$
\begin{aligned}
&Q=\text{diag}\{Q_{1},\dots,Q_{n}\},\\
&Q^{'}=\text{diag}\{Q_{1}^{'},\dots,Q_{n}^{'}\},
\end{aligned}
$$
where $Q_{i}^{'}=\sum\limits_{j=1}^{n}a_{ji}Q_{j}$. Then the following inequality holds:
\begin{align}\label{AQA}
\mathscr{A}Q\mathscr{A}\leq Q^{'}.
\end{align}
\begin{IEEEproof}
By the definition of $\mathscr{A}$ and $Q$, we can get that
$$
\begin{aligned}
&\mathscr{A}Q\mathscr{A}\\
=&\begin{pmatrix}
\sum\limits_{j=1}^{n}a_{1j}a_{j1}Q_{j} & \cdots & \sum\limits_{j=1}^{n}a_{1j}a_{jn}Q_{j}\\
\sum\limits_{j=1}^{n}a_{2j}a_{j1}Q_{j} & \cdots & \sum\limits_{j=1}^{n}a_{2j}a_{jn}Q_{j}\\
\vdots & \ddots & \vdots\\
\sum\limits_{j=1}^{n}a_{nj}a_{j1}Q_{j} & \cdots & \sum\limits_{j=1}^{n}a_{nj}a_{jn}Q_{j}
\end{pmatrix}.
\end{aligned}
$$
In order to prove (\ref{AQA}), we only need to prove that for any unit column vector $x\in\mathbb{R}^{mn}$ with $\|x\|=1$, $x^{T}\mathscr{A}Q\mathscr{A}x\leq x^{T}Q^{'}x$ holds. Denote $x=\text{col}\{x_{1}, x_{2}, \dots, x_{n}\}$ with $x_{i}\in\mathbb{R}^{m}$; then, by the Schwarz inequality and noticing that $Q_{j}\geq 0$, $\sum_{j=1}^{n}a_{ij}=1$, and $a_{ji}=a_{ij}$ $(i,j=1,\dots,n)$, we have
$$
\begin{aligned}
&x^{T}\mathscr{A}Q\mathscr{A}x\\
=&\sum_{p=1}^{n}\sum_{q=1}^{n}\sum_{j=1}^{n}a_{pj}a_{jq}x_{p}^{T}Q_{j}x_{q}\\
=&\sum_{p=1}^{n}\sum_{q=1}^{n}\sum_{j=1}^{n}\sqrt{a_{pj}a_{jq}}x_{p}^{T}Q_{j}^{\frac{1}{2}}\cdot \sqrt{a_{pj}a_{jq}}Q_{j}^{\frac{1}{2}}x_{q}\\
\leq &\Bigg\{\sum_{p=1}^{n}\sum_{q=1}^{n}\sum_{j=1}^{n}a_{pj}a_{jq}x_{p}^{T}Q_{j}x_{p}\Bigg\}^{\frac{1}{2}}\\
&\cdot\Bigg\{\sum_{p=1}^{n}\sum_{q=1}^{n}\sum_{j=1}^{n}a_{pj}a_{jq}x_{q}^{T}Q_{j}x_{q}\Bigg\}^{\frac{1}{2}}\\
= &\Bigg\{\sum_{p=1}^{n}\sum_{j=1}^{n}a_{pj}x_{p}^{T}Q_{j}x_{p}\Bigg\}^{\frac{1}{2}}\Bigg\{\sum_{q=1}^{n}\sum_{j=1}^{n}a_{jq}x_{q}^{T}Q_{j}x_{q}\Bigg\}^{\frac{1}{2}}\\
=&\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ji}x_{i}^{T}Q_{j}x_{i}\\
=&x^{T}Q^{'}x,
\end{aligned}
$$
which completes the proof.
\end{IEEEproof}
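A quick numerical sanity check of (\ref{AQA}) on random instances (the weight matrix and the random construction of the $Q_{i}$ are our own choices) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
m = 2
A = np.array([[2/3, 1/3, 0.0],    # symmetric, rows sum to one
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
n = A.shape[0]
Qs = []
for _ in range(n):
    M = rng.normal(size=(m, m))
    Qs.append(M @ M.T)            # random nonnegative definite Q_i
Q = np.zeros((m * n, m * n))
Qp = np.zeros_like(Q)
for i in range(n):
    Q[i*m:(i+1)*m, i*m:(i+1)*m] = Qs[i]
    Qp[i*m:(i+1)*m, i*m:(i+1)*m] = sum(A[j, i] * Qs[j] for j in range(n))
scrA = np.kron(A, np.eye(m))
gap = Qp - scrA @ Q @ scrA        # should be nonnegative definite
print(np.linalg.eigvalsh(gap).min() >= -1e-10)   # True
\end{verbatim}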
By \emph{Lemma 4.1}, we can obtain the following result.
\emph{Lemma 4.2.} For any adjacency matrix $\mathcal{A}=\{a_{ij}\}\in\mathbb{R}^{n\times n}$, denote $\mathscr{A}=\mathcal{A}\otimes I_{m}$. Then for any $k\geq 1$,
\begin{equation}
\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\mathscr{A}\leq \bm{P}_{k+1}^{-1},
\end{equation}
and
\begin{equation}\label{APA}
\mathscr{A}\bm{P}_{k+1}\mathscr{A}\leq \bar{\bm{P}}_{k+1},
\end{equation}
hold, where $\bar{\bm{P}}_{k+1}$ and $\bm{P}_{k+1}$ are defined in (\ref{7}).
\begin{IEEEproof}
By taking $Q_{i}=\bar{P}_{k+1,i}^{-1}\geq 0$ and noticing $P_{k+1,i}^{-1}=\sum_{j=1}^{n}a_{ji}\bar{P}_{k+1,j}^{-1}=Q_{i}^{'}$, we know from \emph{Lemma 4.1} that
$$
\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\mathscr{A}\leq \bm{P}_{k+1}^{-1},
$$
holds. To prove (\ref{APA}), we first assume that $\mathscr{A}$ is invertible. Then by \emph{Lemma A.1} in Appendix A, it is easy to see that
$$
\mathscr{A}\bm{P}_{k+1}\mathscr{A}\leq \bar{\bm{P}}_{k+1}.
$$
Next, we consider the case where $\mathscr{A}$ is not invertible. Since the number of eigenvalues of the matrix $\mathscr{A}$ is finite, there exists a constant $\varepsilon^{*}\in(0,1)$ such that the perturbed adjacency matrix $\mathscr{A}^{\varepsilon}=\mathscr{A}+\varepsilon I_{mn}=\{a_{ij}^{\varepsilon}\}$ is invertible for any $0<\varepsilon<\varepsilon^{*}$. By the definition of $\mathscr{A}^{\varepsilon}$, we know that $\mathscr{A}^{\varepsilon}$ is symmetric and that each row and each column of the matrix $\mathscr{A}^{\varepsilon}$ sums to $1+\varepsilon$. Then we define
$$
(P_{k+1,i}^{\varepsilon})^{-1}=\sum_{j=1}^{n}a_{ji}^{\varepsilon}\bar{P}_{k+1,j}^{-1},
$$
and we can denote $\bm{P}_{k+1}^{\varepsilon}=\text{diag}\{P_{k+1,1}^{\varepsilon}, \dots, P_{k+1,n}^{\varepsilon}\}$ since $(P_{k+1,i}^{\varepsilon})^{-1}$ defined above is invertible. Similar to the proof of \emph{Lemma 4.1}, for any unit column vector $x\in\mathbb{R}^{mn}$, we have
$$
\begin{aligned}
&x^{T}\mathscr{A}^{\varepsilon}\bar{\bm{P}}_{k+1}^{-1}\mathscr{A}^{\varepsilon}x\\
\leq &\Bigg\{\sum_{p=1}^{n}\sum_{q=1}^{n}\sum_{j=1}^{n}a_{pj}^{\varepsilon}a_{jq}^{\varepsilon}x_{p}^{T}\bar{P}_{k+1,j}^{-1}x_{p}\Bigg\}^{\frac{1}{2}}\\
&\cdot\Bigg\{\sum_{p=1}^{n}\sum_{q=1}^{n}\sum_{j=1}^{n}a_{pj}^{\varepsilon}a_{jq}^{\varepsilon}x_{q}^{T}\bar{P}_{k+1,j}^{-1}x_{q}\Bigg\}^{\frac{1}{2}}\\
=&(1+\varepsilon)\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ji}^{\varepsilon}x_{i}^{T}\bar{P}_{k+1,j}^{-1}x_{i}\\
=&(1+\varepsilon)x^{T}(\bm{P}_{k+1}^{\varepsilon})^{-1}x.
\end{aligned}
$$
Consequently, we have $\mathscr{A}^{\varepsilon}\bar{\bm{P}}_{k+1}^{-1}\mathscr{A}^{\varepsilon}\leq (1+\varepsilon)(\bm{P}_{k+1}^{\varepsilon})^{-1}$. Since $\mathscr{A}^{\varepsilon}$ is invertible, we know from \emph{Lemma A.1} in Appendix A that
$$
\mathscr{A}^{\varepsilon}\bm{P}_{k+1}^{\varepsilon}\mathscr{A}^{\varepsilon}\leq (1+\varepsilon)\bar{\bm{P}}_{k+1}.
$$
Letting $\varepsilon\to 0$ on both sides of the above inequality, we obtain
$$
\lim_{\varepsilon\to 0}\mathscr{A}^{\varepsilon}\bm{P}_{k+1}^{\varepsilon}\mathscr{A}^{\varepsilon}=\mathscr{A}\bm{P}_{k+1}\mathscr{A}\leq\lim_{\varepsilon\to 0}(1+\varepsilon)\bar{\bm{P}}_{k+1}=\bar{\bm{P}}_{k+1}.
$$
This completes the proof.
\end{IEEEproof}
To accomplish the proof of \emph{Theorem 3.1}, we also need the following inequality.
\emph{Lemma 4.3.} For any adjacency matrix $\mathcal{A}=\{a_{ij}\}\in\mathbb{R}^{n\times n}$, and for any $k\geq 1$,
\begin{equation}\label{det}
\vert\bar{\bm{P}}_{k+1}^{-1}\vert\leq\vert\bm{P}_{k+1}^{-1}\vert,
\end{equation}
holds, where $\bar{\bm{P}}_{k+1}$ and $\bm{P}_{k+1}$ are defined in (\ref{7}).
\begin{IEEEproof}
Since
$$
\bm{P}_{k+1}^{-1}=\begin{pmatrix}
\sum\limits_{j=1}^{n}a_{j1}\bar{P}_{k+1,j}^{-1} & \cdots & 0\\
\vdots & \ddots & \vdots\\
0 & \cdots & \sum\limits_{j=1}^{n}a_{jn}\bar{P}_{k+1,j}^{-1}\\
\end{pmatrix},
$$
and
$$
\bar{\bm{P}}_{k+1}^{-1}=\begin{pmatrix}
\bar{P}_{k+1,1}^{-1} & \cdots & 0\\
\vdots & \ddots & \vdots\\
0 & \cdots & \bar{P}_{k+1,n}^{-1}\\
\end{pmatrix},
$$
by \emph{Lemma A.2} in Appendix A and noticing the definition of the adjacency matrix $\mathcal{A}=\{a_{ij}\}$, we can see that
$$
\begin{aligned}
\vert\bm{P}_{k+1}^{-1}\vert=&\prod_{i=1}^{n}\Bigg\vert\sum_{j=1}^{n}a_{ji}\bar{P}_{k+1,j}^{-1}\Bigg\vert\\
\geq & \prod_{i=1}^{n}\vert\bar{P}_{k+1,1}^{-1}\vert^{a_{1i}}\vert\bar{P}_{k+1,2}^{-1}\vert^{a_{2i}}\cdots\vert\bar{P}_{k+1,n}^{-1}\vert^{a_{ni}}\\
=&\vert\bar{P}_{k+1,1}^{-1}\vert^{\sum\limits_{i=1}^{n}a_{1i}}\vert\bar{P}_{k+1,2}^{-1}\vert^{\sum\limits_{i=1}^{n}a_{2i}}\cdots\vert\bar{P}_{k+1,n}^{-1}\vert^{\sum\limits_{i=1}^{n}a_{ni}}\\
=&\vert\bar{P}_{k+1,1}^{-1}\vert\cdot\vert\bar{P}_{k+1,2}^{-1}\vert\cdots\vert\bar{P}_{k+1,n}^{-1}\vert\\
=&\vert\bar{\bm{P}}_{k+1}^{-1}\vert,
\end{aligned}
$$
which completes the proof.
\end{IEEEproof}
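The determinant inequality (\ref{det}) can be checked in the same spirit (again with an assumed weight matrix and randomly generated positive definite blocks playing the role of $\bar{P}_{k+1,j}^{-1}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
m = 2
A = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
n = A.shape[0]
Qs = []
for _ in range(n):
    M = rng.normal(size=(m, m))
    Qs.append(M @ M.T + np.eye(m))   # positive definite blocks
det_bar = np.prod([np.linalg.det(Qi) for Qi in Qs])
det_P = np.prod([np.linalg.det(sum(A[j, i] * Qs[j] for j in range(n)))
                 for i in range(n)])
print(det_P >= det_bar)              # True, as asserted by the lemma
\end{verbatim}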
To prove \emph{Theorem 3.1}, we also need the following critical lemma.
\emph{Lemma 4.4.} Let \emph{Condition 3.1} be satisfied. Then the distributed LS defined by (\ref{Model}) and (\ref{7}) satisfies the following relationship as $t\to\infty$:
\begin{align}\label{lemma4.4}
&\widetilde{\bm{\Theta}}_{t+1}^{T}\bm{P}_{t+1}^{-1}\widetilde{\bm{\Theta}}_{t+1}\nonumber\\
&+[1+o(1)]\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\nonumber\\
&+[1+o(1)]\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
\leq &\sigma_{w}\log(\vert\bm{P}_{t+1}^{-1}\vert)+o(\log(\vert\bm{P}_{t+1}^{-1}\vert))+O(1),~~a.s.,
\end{align}
where $\bm{b}_{k}=(I_{n}+\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})^{-1}, \bm{c}_{k}=\bm{b}_{k}\otimes I_{m}, \bm{\Delta}_{k+1}\overset{\triangle}{=}\bar{\bm{P}}_{k+1}-\mathscr{A}\bm{P}_{k+1}\mathscr{A}\geq 0$ by \emph{Lemma 4.2}, and $\sigma_{w}$ is defined by (\ref{sigmaw}).
\begin{IEEEproof}
Since $\bm{b}_{k}=(I_{n}+\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})^{-1}$ and $\bm{c}_{k}=\bm{b}_{k}\otimes I_{m}$, by (\ref{Theta}) we know that
$$
\widetilde{\bm{\Theta}}_{k+1}=\bm{P}_{k+1}\mathscr{A}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}-\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}.
$$
Hence, we have the following expansion for the stochastic Lyapunov function $V_{k}=\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}$:
\begin{align}\label{Lya1}
V_{k+1}=&\widetilde{\bm{\Theta}}_{k+1}^{T}\bm{P}_{k+1}^{-1}\widetilde{\bm{\Theta}}_{k+1}\nonumber\\
=&(\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}-\bm{W}_{k+1}^{T}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{c}_{k}\bar{\bm{P}}_{k+1}^{-1}\mathscr{A}\bm{P}_{k+1})\nonumber\\
&\cdot(\mathscr{A}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}-\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1})\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
&-2\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\bm{W}_{k+1}^{T}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{c}_{k}\bar{\bm{P}}_{k+1}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\nonumber\\
&\cdot\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}.
\end{align}
Now, we proceed to estimate the right-hand side (RHS) of (\ref{Lya1}) term by term. Firstly, we know that
\begin{align}\label{term1}
&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}(\bm{P}_{k}-\bm{P}_{k}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k})\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
&-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}-\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\nonumber\\
&-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&V_{k}-\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}.
\end{align}
Moreover, by the (block) diagonal property of $\bm{b}_{k}, \bm{c}_{k}, \bm{P}_{k}$ and $\bm{\Phi}_{k}$, we have
\begin{equation}\label{exchange}
\bm{c}_{k}\bm{P}_{k}=\bm{P}_{k}\bm{c}_{k},~~~\bm{\Phi}_{k}^{T}\bm{c}_{k}=\bm{b}_{k}\bm{\Phi}_{k}^{T},~~~\bm{c}_{k}\bm{\Phi}_{k}=\bm{\Phi}_{k}\bm{b}_{k}.
\end{equation}
By \emph{Lemma A.3} in Appendix A with $A=\bm{P}_{k}^{-1}$, $B=\bm{\Phi}_{k}$, $C=\bm{\Phi}_{k}^{T}$ and $D=I_{n}$, it is easy to see that
$$
\begin{aligned}
&(\bm{P}_{k}^{-1}+\bm{\Phi}_{k}\bm{\Phi}_{k}^{T})^{-1}\\
=&\bm{P}_{k}-\bm{P}_{k}\bm{\Phi}_{k}(I_{n}+\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})^{-1}\bm{\Phi}_{k}^{T}\bm{P}_{k}\\
=&\bm{P}_{k}-\bm{P}_{k}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\\
=&\bar{\bm{P}}_{k+1}.
\end{aligned}
$$
From this, we have $\bar{\bm{P}}_{k+1}^{-1}=\bm{P}_{k}^{-1}+\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}$. Thus, we can estimate the second term on the RHS of (\ref{Lya1}) as follows:
\begin{align}\label{term2}
&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}(\bm{P}_{k}^{-1}+\bm{\Phi}_{k}\bm{\Phi}_{k}^{T})\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{c}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{c}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{b}_{k}(I_{n}+\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})\bm{W}_{k+1}\nonumber\\
&-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{b}_{k}\bm{W}_{k+1}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{c}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{b}_{k}\bm{W}_{k+1}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}.
\end{align}
As for the last term on the RHS of (\ref{Lya1}), by $\mathscr{A}\bm{P}_{k+1}\mathscr{A}\leq \bar{\bm{P}}_{k+1}$, we can estimate it as follows:
\begin{align}\label{term3}
&\bm{W}_{k+1}^{T}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{c}_{k}\bar{\bm{P}}_{k+1}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bar{\bm{P}}_{k+1}^{-1}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
\leq &\bm{W}_{k+1}^{T}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{c}_{k}(\bm{P}_{k}^{-1}+\bm{\Phi}_{k}\bm{\Phi}_{k}^{T})\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&\bm{W}_{k+1}^{T}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{c}_{k}^{2}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\bm{W}_{k+1}^{T}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{c}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&\bm{W}_{k+1}^{T}\bm{b}_{k}^{2}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\bm{W}_{k+1}^{T}(I_{n}+\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})\bm{b}_{k}^{2}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&-\bm{W}_{k+1}^{T}\bm{b}_{k}^{2}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}.
\end{align}
By (\ref{term1}), (\ref{term2}) and (\ref{term3}), we can get from (\ref{Lya1}) that
\begin{align}\label{Lya}
V_{k+1}\leq&V_{k}-\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}-\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
&-2\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+2\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}.
\end{align}
Summing from $k=0$ to $t$ yields
\begin{align}\label{sum}
&V_{t+1}+\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\nonumber\\
&+\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
\leq & V_{0}-2\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&-2\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}(-\bm{\Delta}_{k+1})\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
&+\sum_{k=0}^{t}\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}.
\end{align}
Next, we estimate the last three terms on the RHS of (\ref{sum}) separately. By \emph{Condition 3.1}, and noticing that $\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}$ and $\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}(-\bm{\Delta}_{k+1})\bm{\Phi}_{k}$ are $\mathcal{F}_{k}$-measurable, we can use the martingale estimation theorem (\emph{Theorem 2.8} in \cite{Chen1991}) to get the following estimates for any $\delta>0$:
\begin{align}\label{estimate1}
&\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&O\Bigg(\Bigg\{\sum_{k=0}^{t}\|\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\|^{2}\Bigg\}^{\frac{1}{2}+\delta}\Bigg)~~~~a.s,
\end{align}
and
\begin{align}\label{estimate2}
&\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}(-\bm{\Delta}_{k+1})\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&O\Bigg(\Bigg\{\sum_{k=0}^{t}\|\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\|^{2}\Bigg\}^{\frac{1}{2}+\delta}\Bigg)~~~~a.s.
\end{align}
To further analyze (\ref{estimate1}) and (\ref{estimate2}), we note from (\ref{exchange}) and the definitions of $\bar{\bm{P}}_{k+1}$ and $\bm{b}_{k}$ that
$$
\begin{aligned}
&\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\\
=&\bm{\Phi}_{k}-\bm{c}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\\
=&\bm{\Phi}_{k}-\bm{c}_{k}\bm{\Phi}_{k}(I_{n}+\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})+\bm{c}_{k}\bm{\Phi}_{k}\\
=&\bm{\Phi}_{k}\bm{b}_{k}.
\end{aligned}
$$
Hence, it is easy to see that
\begin{align}\label{es1}
&\|\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\|^{2}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bar{\bm{P}}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}^{2}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\nonumber\\
\leq &\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k},
\end{align}
where for the last inequality we have used the fact that $\bm{b}_{k}\leq I_{n}$. By taking $0<\delta<\frac{1}{2}$, we have from (\ref{estimate1}) and (\ref{es1}) that
\begin{align}\label{estimate1new}
&\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&O(1)+o\Bigg(\Bigg\{\sum_{k=0}^{t}\|\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\|^{2}\Bigg\}\Bigg)\nonumber\\
=&O(1)+o\Bigg(\Bigg\{\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\Bigg\}\Bigg)~~~~a.s.
\end{align}
To further analyze (\ref{estimate2}), we now prove that
\begin{equation}\label{es}
\bm{\Delta}_{k+1}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{\Delta}_{k+1}\leq \bm{\Delta}_{k+1}.
\end{equation}
For this, we need only to prove that
$$
\bm{\Delta}_{k+1}^{\frac{1}{2}}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{\Delta}_{k+1}^{\frac{1}{2}}\leq I_{mn}.
$$
Since $\bm{\Delta}_{k+1}=\bar{\bm{P}}_{k+1}-\mathscr{A}\bm{P}_{k+1}\mathscr{A}\leq \bar{\bm{P}}_{k+1}$, by \emph{Lemma A.1} in Appendix A, we have
$$
\begin{aligned}
&\bm{\Delta}_{k+1}^{\frac{1}{2}}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{\Delta}_{k+1}^{\frac{1}{2}}\\
\leq &\lambda_{max}\{\bm{\Delta}_{k+1}^{\frac{1}{2}}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{\Delta}_{k+1}^{\frac{1}{2}}\}\cdot I_{mn}\\
=&\lambda_{max}\{\bm{\Phi}_{k}^{T}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\}\cdot I_{mn}\\
\leq &\lambda_{max}\{\bm{\Phi}_{k}^{T}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\}\cdot I_{mn}
\end{aligned}
$$
$$
\begin{aligned}
=&\lambda_{max}\{\bm{\Phi}_{k}^{T}(\bm{P}_{k}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k})\bm{\Phi}_{k}\}\cdot I_{mn}\\
=&\lambda_{max}\{\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}-\bm{b}_{k}(\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})^{2}\}\cdot I_{mn}\\
=&\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\}\cdot I_{mn}<I_{mn}.
\end{aligned}
$$
Hence, we have (\ref{es}), and so we have
\begin{align}\label{es2}
&\|\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\|^{2}\nonumber\\
=&\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\nonumber\\
\leq & \widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}.
\end{align}
By taking $0<\delta<\frac{1}{2}$, we know from (\ref{estimate2}) and (\ref{es2}) that
\begin{align}
&\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}(-\bm{\Delta}_{k+1})\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
=&O(1)+o\Bigg(\Bigg\{\sum_{k=0}^{t}\|\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\|^{2}\Bigg\}\Bigg)\nonumber
\end{align}
\begin{align}\label{estimate2new}
=&O(1)+o\Bigg(\Bigg\{\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}\Bigg\}\Bigg)~~~~a.s.
\end{align}
We now proceed to estimate the last term in (\ref{sum}). Firstly, we know that
\begin{align}\label{last}
&\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\nonumber\\
\leq &\|\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\|\cdot\|\bm{W}_{k+1}\|^{2}\nonumber\\
=&\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\}\cdot\Bigg\{\sum_{i=1}^{n}w_{k+1,i}^{2}\Bigg\}.
\end{align}
Following a proof idea similar to that in the traditional single-sensor case (see \cite{Lai1982}, and also \cite{Chen1991}), from $\bar{\bm{P}}_{k+1}=\bm{P}_{k}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}$, we have $\bm{P}_{k}^{-1}=\bar{\bm{P}}_{k+1}^{-1}(I_{mn}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T})$. By taking determinants on both sides of this identity, and noticing $0\leq\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\leq I_{n}$ and \emph{Lemma A.1} in Appendix A, we have
$$
\begin{aligned}
\vert\bm{P}_{k}^{-1}\vert=&\vert\bar{\bm{P}}_{k+1}^{-1}\vert\cdot\vert I_{mn}-\bm{c}_{k}\bm{P}_{k}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\vert\\
=&\vert\bar{\bm{P}}_{k+1}^{-1}\vert\cdot\vert I_{n}-\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\vert\\
=&\vert\bar{\bm{P}}_{k+1}^{-1}\vert\cdot\Bigg\{\prod_{i=1}^{n}(1-b_{k,i}\bm{\varphi}_{k,i}^{T}P_{k,i}\bm{\varphi}_{k,i})\Bigg\}\\
\leq &\vert\bar{\bm{P}}_{k+1}^{-1}\vert\cdot(1-\max_{i=1,\dots,n}\{b_{k,i}\bm{\varphi}_{k,i}^{T}P_{k,i}\bm{\varphi}_{k,i}\})\\
= &\vert\bar{\bm{P}}_{k+1}^{-1}\vert\cdot(1-\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\}).
\end{aligned}
$$
Moreover, we know from \emph{Lemma 4.3} that
$$
\begin{aligned}
\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\}\leq &\frac{\vert\bar{\bm{P}}_{k+1}^{-1}\vert-\vert\bm{P}_{k}^{-1}\vert}{\vert\bar{\bm{P}}_{k+1}^{-1}\vert}\\
=&1-\frac{\vert\bm{P}_{k}^{-1}\vert}{\vert\bar{\bm{P}}_{k+1}^{-1}\vert}\\
\leq &1-\frac{\vert\bm{P}_{k}^{-1}\vert}{\vert\bm{P}_{k+1}^{-1}\vert}\\
\leq &\frac{\vert\bm{P}_{k+1}^{-1}\vert-\vert\bm{P}_{k}^{-1}\vert}{\vert\bm{P}_{k+1}^{-1}\vert}.
\end{aligned}
$$
Therefore
\begin{align}\label{log}
\sum_{k=0}^{t}\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\}\leq &\sum_{k=0}^{t}\frac{\vert\bm{P}_{k+1}^{-1}\vert-\vert\bm{P}_{k}^{-1}\vert}{\vert\bm{P}_{k+1}^{-1}\vert}\nonumber\\
\leq &\sum_{k=0}^{t}\int_{\vert\bm{P}_{k}^{-1}\vert}^{\vert\bm{P}_{k+1}^{-1}\vert}\frac{dx}{x}\nonumber\\
=&\log(\vert\bm{P}_{t+1}^{-1}\vert)-\log(\vert\bm{P}_{0}^{-1}\vert).
\end{align}
By the $C_{r}$-inequality and the Lyapunov inequality \cite{Guo}, it is easy to see that for any $\alpha\in(2, \min(\beta,4)]$,
$$
\begin{aligned}
&\sup_{k}\mathbb{E}\Bigg[\Bigg(\sum_{i=1}^{n}w_{k+1,i}^{2}-\mathbb{E}\Bigg[\sum_{i=1}^{n}w_{k+1,i}^{2}\Bigg\vert\mathcal{F}_{k}\Bigg]\Bigg)^{\frac{\alpha}{2}}\Bigg\vert\mathcal{F}_{k}\Bigg]\\
\leq & 2\sup_{k}\mathbb{E}\Bigg[\sum_{i=1}^{n}\vert w_{k+1,i}\vert^{\alpha}\Bigg\vert\mathcal{F}_{k}\Bigg]<\infty,~~~~a.s.
\end{aligned}
$$
Consequently, by using the martingale estimation theorem (\emph{Theorem 2.8} in \cite{Chen1991}), we have for any $\eta>0$,
\begin{align}\label{lambdamax}
&\sum_{k=0}^{t}\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\}\nonumber\\
&\cdot\Bigg\{\sum_{i=1}^{n}w_{k+1,i}^{2}-\mathbb{E}\Bigg[\sum_{i=1}^{n}w_{k+1,i}^{2}\Bigg\vert\mathcal{F}_{k}\Bigg]\Bigg\}\nonumber\\
=&O\Bigg(S_{t}\Bigg(\frac{\alpha}{2}\Bigg)\Bigg\{\log\Bigg(S_{t}\Bigg(\frac{\alpha}{2}\Bigg)+e\Bigg)\Bigg\}^{\frac{2}{\alpha}+\eta}\Bigg),~~a.s.,
\end{align}
where
$$
S_{t}\Bigg(\frac{\alpha}{2}\Bigg)\overset{\triangle}{=}\Bigg[\sum_{k=0}^{t}(\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\})^{\frac{\alpha}{2}}\Bigg]^{\frac{2}{\alpha}}.
$$
Since $\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\leq I_{n}$ and $\frac{\alpha}{2}>1$, we have from (\ref{log}) that
$$
S_{t}\Bigg(\frac{\alpha}{2}\Bigg)=O(1)+O((\log\vert\bm{P}_{t+1}^{-1}\vert)^{\frac{2}{\alpha}}).
$$
From this and (\ref{last})--(\ref{lambdamax}), we can get that
$$
\begin{aligned}
&\sum_{k=0}^{t}\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}\\
\leq&\sum_{i=1}^{n}\sigma_{i}^{2}\sum_{k=0}^{t}\lambda_{max}\{\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\}+o(\log\vert\bm{P}_{t+1}^{-1}\vert)+O(1)\\
\leq &\sigma_{w}\log\vert\bm{P}_{t+1}^{-1}\vert+o(\log\vert\bm{P}_{t+1}^{-1}\vert)+O(1).
\end{aligned}
$$
Finally, substituting this together with (\ref{estimate1new}) and (\ref{estimate2new}) into (\ref{sum}), we know that the desired result (\ref{lemma4.4}) is true. This completes the proof.
\end{IEEEproof}
\textbf{\emph{Proof of Theorem 3.1}}.
\begin{IEEEproof}
By the definitions of $\bar{P}_{t,i}^{-1}$ and $P_{t,i}^{-1}$, it is easy to see that for any $t\geq 0$,
\begin{align}
P_{t+1,i}^{-1}=&\sum_{j=1}^{n}a_{ji}\bar{P}_{t+1,j}^{-1}\nonumber\\
=&\sum_{j=1}^{n}a_{ji}(P_{t,j}^{-1}+\bm{\varphi}_{t,j}\bm{\varphi}_{t,j}^{T}).
\end{align}
Consequently, we have
\begin{align}\label{Pt}
&\max_{1\leq i\leq n}\lambda_{max}\{P_{t+1,i}^{-1}\}\nonumber\\
\leq &\max_{1\leq i\leq n}\sum_{j=1}^{n}a_{ji}\Big(\lambda_{max}\{P_{t,j}^{-1}\}+\lambda_{max}\{\bm{\varphi}_{t,j}\bm{\varphi}_{t,j}^{T}\}\Big)\nonumber\\
\leq &\max_{1\leq i\leq n}\lambda_{max}\{P_{t,i}^{-1}\}\sum_{j=1}^{n}a_{ji}+\sum_{j=1}^{n}\lambda_{max}\{\bm{\varphi}_{t,j}\bm{\varphi}_{t,j}^{T}\}\nonumber\\
= & \max_{1\leq i\leq n}\lambda_{max}\{P_{t,i}^{-1}\}+\sum_{j=1}^{n}\|\bm{\varphi}_{t,j}\|^{2}\nonumber\\
\leq & \cdots\nonumber\\
\leq & \max_{1\leq i\leq n}\lambda_{max}\{P_{0,i}^{-1}\}+\sum_{j=1}^{n}\sum_{k=0}^{t}\|\bm{\varphi}_{k,j}\|^{2}\nonumber\\
= & \lambda_{max}\{\bm{P}_{0}^{-1}\}+\sum_{j=1}^{n}\sum_{k=0}^{t}\|\bm{\varphi}_{k,j}\|^{2}.
\end{align}
From (\ref{Pt}) and the relation between the determinant and the eigenvalues of a matrix, it is easy to conclude that
\begin{align}\label{rt}
\log(\vert\bm{P}_{t+1}^{-1}\vert)\leq & mn\log\Big(\max_{1\leq i\leq n}\lambda_{max}\{P_{t+1,i}^{-1}\}\Big)\nonumber\\
\leq & mn\log (r_{t}).
\end{align}
Consequently, \emph{Theorem 3.1} follows from this and \emph{Lemma 4.4} immediately.
\end{IEEEproof}
\subsection{Proof of \emph{Theorem 3.2}}
By the definition of $\bm{b}_{k}$ in (\ref{7}), we know that
$$
\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}=\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}+\bm{\Phi}_{k}(\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})\bm{\Phi}_{k}^{T}.
$$
Then by noticing that $\bm{b}_{k}, \bm{\Phi}_{k}$ and $\bm{P}_{k}$ are (block) diagonal matrices, and $\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}=O(1), a.s.$, we know that
\begin{align}\label{key}
&\sum_{i=1}^{n}\sum_{k=0}^{t}R_{k,i}\nonumber\\
=&\sum_{i=1}^{n}\sum_{k=0}^{t}(\bm{\varphi}_{k,i}^{T}\widetilde{\bm{\theta}}_{k,i})^{2}\nonumber\\
=&\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}+\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}(\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}+\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}^{\frac{1}{2}}(\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k})\bm{b}_{k}^{\frac{1}{2}}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\nonumber\\
=&O\Bigg(\sum_{k=0}^{t}\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}\Bigg).
\end{align}
Combining this with part $1)$ of \emph{Theorem 3.1}, we conclude that (\ref{Coro}) holds.
\subsection{Proof of \emph{Theorem 3.3}}
For ease of presentation, let $a_{ij}^{(s)}$ be the $(i,j)$-th entry of the matrix $\mathcal{A}^{s}, s\geq 1$. Note that $a_{ij}^{(1)}=a_{ij}$. By \emph{Condition 3.2} and \emph{Remark 3.1}, we know that $a_{ji}^{(D_{\mathcal{G}})}\geq a_{min}>0$, where $a_{min}=\min\limits_{i,j\in\mathcal{V}}a_{ij}^{(D_{\mathcal{G}})}>0$, and $D_{\mathcal{G}}$ is the diameter of the graph $\mathcal{G}$. Consequently, it is not difficult to see that $a_{ji}^{(k)}\geq a_{min}$ holds for any $k\geq D_{\mathcal{G}}$.
By (\ref{7}), it is easy to see that for any $t\geq 0$,
\begin{align}
&\text{vec}\{\bm{P}_{t+1}^{-1}\}\nonumber\\
=&\mathscr{A}\text{vec}\{\bar{\bm{P}}_{t+1}^{-1}\}\nonumber\\
=&\mathscr{A}\text{vec}\{\bm{P}_{t}^{-1}\}+\mathscr{A}\text{vec}\{\bm{\Phi}_{t}\bm{\Phi}_{t}^{T}\}\nonumber\\
=&\cdots\nonumber\\
=&\mathscr{A}^{t+1}\text{vec}\{\bm{P}_{0}^{-1}\}+\sum_{k=0}^{t}\mathscr{A}^{t-k+1}\text{vec}\{\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\},
\end{align}
which implies that for any $t\geq D_{\mathcal{G}}$,
\begin{align}
P_{t+1,i}^{-1}= &\sum_{j=1}^{n}a_{ji}^{(t+1)}P_{0,j}^{-1}+\sum_{j=1}^{n}\sum_{k=0}^{t}a_{ji}^{(t-k+1)}\bm{\varphi}_{k,j}\bm{\varphi}_{k,j}^{T}\nonumber\\
\geq &\sum_{j=1}^{n}a_{ji}^{(t+1)}P_{0,j}^{-1}+\sum_{j=1}^{n}\sum_{k=0}^{t-D_{\mathcal{G}}+1}a_{ji}^{(t-k+1)}\bm{\varphi}_{k,j}\bm{\varphi}_{k,j}^{T}\nonumber\\
\geq &a_{min}\sum_{j=1}^{n}P_{0,j}^{-1}+a_{min}\sum_{j=1}^{n}\sum_{k=0}^{t-D_{\mathcal{G}}+1}\bm{\varphi}_{k,j}\bm{\varphi}_{k,j}^{T},
\end{align}
holds. From this, we conclude that
$$
\lambda_{min}\{\bm{P}_{t+1}^{-1}\}\geq a_{min}\lambda_{min}\Bigg\{\sum_{j=1}^{n}P_{0,j}^{-1}+\sum_{j=1}^{n}\sum_{k=0}^{t-D_{\mathcal{G}}+1}\bm{\varphi}_{k,j}\bm{\varphi}_{k,j}^{T}\Bigg\}.
$$
Note also that
\begin{equation}\label{lambdamin}
\|\widetilde{\bm{\Theta}}_{t+1}\|^{2}\leq\widetilde{\bm{\Theta}}_{t+1}^{T}\Bigg[\frac{\bm{P}_{t+1}^{-1}}{\lambda_{min}\{\bm{P}_{t+1}^{-1}\}}\Bigg]\widetilde{\bm{\Theta}}_{t+1}.
\end{equation}
Hence, by (\ref{lambdamin}), and $2)$ in \emph{Theorem 3.1}, we know that \emph{Theorem 3.3} holds.
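The entrywise lower bound $a_{ji}^{(k)}\geq a_{min}$ used above can also be checked numerically, as in the following illustrative sketch (the weight matrix, corresponding to a chain graph with diameter $D_{\mathcal{G}}=2$, is an assumption made for the check):
\begin{verbatim}
import numpy as np

A = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/2, 1/6],
              [0.0, 1/6, 5/6]])         # chain graph 1-2-3, diameter 2
D_G = 2
a_min = np.linalg.matrix_power(A, D_G).min()
assert a_min > 0                        # A^{D_G} is entrywise positive
for k in range(D_G, 30):
    # row-stochasticity of A^{k-D_G} preserves the floor of A^{D_G}
    assert np.linalg.matrix_power(A, k).min() >= a_min - 1e-12
print("a_min =", a_min)
\end{verbatim}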
\iffalse
\section{Simulations}
In this section, we will construct an example to illustrate that even none of the agent can estimate the parameters individually, the whole network can still fulfill the filtering task cooperatively and effectively.
Let us take $n=3$ with the following adjacency matrix
$$
\mathcal{A}=
\begin{pmatrix}
2/3 & 1/3 & 0\\
1/3 & 1/2 & 1/6 \\
0 & 1/6 & 5/6
\end{pmatrix},
$$
then the corresponding graph is undirected and connected. For each agent $i=\{1,2,3\}$, we consider the following cooperative input and output model:
\begin{equation}
y_{k+1,i}=ay_{k,i}+\sum_{j=1}^{3}b_{ji}u_{k,j}+w_{k+1,i}, k\geq 0,
\end{equation}
where $y_{k,i}$ and $u_{k,i}$ are the output and input of agent $i$ at time $k$, respectively, $b_{31}=b_{13}=0$ and $0<a<1, b_{11}, b_{21}, b_{12}, b_{22}, b_{32}, b_{23}, b_{33}$ are $8$ unknown parameters which need to be estimated, and $w_{k+1,i}$ is a noise process. The objective is to estimate the following unknown parameter
$$
\bm{\theta}=[a, b_{11}, b_{21}, b_{12}, b_{22}, b_{32}, b_{23}, b_{33}]^{T},
$$
then we can denote that
$$
\begin{aligned}
&\bm{\varphi}_{k,1}=[y_{k,1}, u_{k,1}, u_{k,2}, 0, 0, 0, 0, 0]^{T},\\
&\bm{\varphi}_{k,2}=[y_{k,2}, 0, 0, u_{k,1}, u_{k,2}, u_{k,3}, 0, 0]^{T},\\
&\bm{\varphi}_{k,3}=[y_{k,3}, 0, 0, 0, 0, 0, u_{k,2}, u_{k,3}]^{T}.
\end{aligned}
$$
From this, we know that the individual excitation condition \cite{Guo1995} for each agent $i\in\{1,2,3\}$ is not satisfied because of the zero elements in $\bm{\varphi}_{k,i}$. However, the cooperative decaying excitation condition may be satisfied. By \emph{Lemma 6.5.3} in \cite{Guo}, we know that
$$
\begin{aligned}
&\lambda_{min}\Bigg\{\sum_{k=0}^{t}[y_{k,1}, u_{k,1}, u_{k,2}]^{T}\cdot[y_{k,1}, u_{k,1}, u_{k,2}]\Bigg\}\\
\geq &c\lambda_{min}\Bigg\{\sum_{k=0}^{t}[w_{k,1}, u_{k,1}, u_{k,2}]^{T}\cdot[w_{k,1}, u_{k,1}, u_{k,2}]\Bigg\},
\end{aligned}
$$
$$
\begin{aligned}
&\lambda_{min}\Bigg\{\sum_{k=0}^{t}[y_{k,2}, u_{k,1}, u_{k,2}, u_{k,3}]^{T}\cdot[y_{k,2}, u_{k,1}, u_{k,2}, u_{k,3}]\Bigg\}\\
\geq &c\lambda_{min}\Bigg\{\sum_{k=0}^{t}[w_{k,2}, u_{k,1}, u_{k,2}, u_{k,3}]^{T}\cdot[w_{k,2}, u_{k,1}, u_{k,2}, u_{k,3}]\Bigg\},
\end{aligned}
$$
and
$$
\begin{aligned}
&\lambda_{min}\Bigg\{\sum_{k=0}^{t}[y_{k,3}, u_{k,2}, u_{k,3}]^{T}\cdot[y_{k,3}, u_{k,2}, u_{k,3}]\Bigg\}\\
\geq &c\lambda_{min}\Bigg\{\sum_{k=0}^{t}[w_{k,3}, u_{k,2}, u_{k,3}]^{T}\cdot[w_{k,3}, u_{k,2}, u_{k,3}]\Bigg\},
\end{aligned}
$$
all hold, where $c>0$ is a constant. Then for any unit column vector $x\in\mathbb{R}^{8}$, denoted as $\text{col}\{x_{1}, \dots, x_{8}\}$ with $x_{i}\in\mathbb{R}$, we have
$$
\begin{aligned}
&\lambda_{min}\Bigg\{\sum_{i=1}^{n}\sum_{k=0}^{t}\bm{\varphi}_{k,i}\bm{\varphi}_{k,i}^{T}\Bigg\}=\inf_{\|x\|=1}\sum_{i=1}^{n}\sum_{k=0}^{t}(\bm{\varphi}_{k,i}^{T}x)^{2}\\
=&\inf_{\|x\|=1}\sum_{k=0}^{t}\Big\{(y_{k,1}x_{1}+u_{k,1}x_{2}+u_{k,2}x_{3})^{2}\\
&+(y_{k,2}x_{1}+u_{k,1}x_{4}+u_{k,2}x_{5}+u_{k,3}x_{6})^{2}\\
&+(y_{k,3}x_{1}+u_{k,2}x_{7}+u_{k,3}x_{8})^{2}\Big\}.
\end{aligned}
$$
For any agent $i\in\{1,2,3\}$ and any time instant $k\geq 0$, let $u_{k,i}\sim N(0,1), w_{k+1,i}\sim N(0,0.1)$ (Gaussian distribution), $y_{0,i}=0$, $\bm{\theta}=[0.2, 0.5, 0.3, 0.2, 0.1, 1.2, 0.6, 1.5]^{T}$, $\bm{\theta}_{0,i}=[\underbrace{0,\dots,0}_{8}]^{T}$, and $P_{0,i}=I_{8}$. We repeat the simulation $r=100$ times with the same initial states. Then for each agent $i\in\{1,2,3\}$, we obtain $r$ sequences $\{\|\bm{\theta}_{k,i}^{j}-\bm{\theta}\|^{2}, k=1, 10, 20, \dots, 200\}(j=1, \dots, r)$, where the superscript $j$ denotes the $j$-th simulation run. We use $\frac{1}{r}\sum_{j=1}^{r}\|\bm{\theta}_{k,i}^{j}-\bm{\theta}\|^{2}(i=1, 2, 3, k=1, 10, 20, \dots, 200)$ to approximate the estimation errors in the following Fig. 1.
\begin{figure}[!htb]
\begin{center}
\renewcommand{\captionfont}{\footnotesize}
\includegraphics[width=\hsize]{estimation.eps}
\caption{Estimation errors of the three agents}
\end{center}
\end{figure}
The upper panel in Fig. 1 shows the traditional LS algorithm, in which the estimation errors of the three agents all remain quite large because none of the agents satisfies the excitation condition. The lower panel shows the distributed LS algorithm, in which all the estimation errors converge to zero as $k$ increases, since the whole system satisfies the cooperative excitation condition.
\fi
\section{Concluding remarks}
In this paper, we have established a convergence theory for a basic class of distributed LS algorithms, under quite general conditions on the measured information or data used in the estimation. The accumulated regret of the adaptive predictors has been shown to exhibit the celebrated logarithmic increase without any excitation condition imposed on the system data, and the convergence rate of the distributed LS estimates has been established under a cooperative excitation condition, which can be regarded as an extension of the weakest possible excitation condition known for the convergence of the classical LS. Neither independence, stationarity, nor Gaussianity is required in our results, which makes our theory applicable to feedback control systems and lays a foundation for further investigation of related problems concerning the combination of learning, communication and control. Moreover, the cooperative excitation condition introduced and used in this paper shows that the distributed LS can fulfill the estimation task cooperatively, even if no individual sensor can do so due to lack of necessary excitation. Of course, a number of interesting problems remain for further research, for example: to consider other distributed estimation algorithms, including those based on the forgetting-factor LS or the Kalman filter for tracking unknown time-varying signals (e.g. \cite{Guo1994}); to investigate the case where both the measurements and the regressors contain noises (e.g. \cite{Zheng1999}); and to combine distributed learning with distributed control problems.
\begin{appendices}
\section{Some basic lemmas}
\emph{Lemma A.1.}\cite{Guo} Let $A\in\mathbb{R}^{d\times s}$ and $B\in\mathbb{R}^{s\times d}$ be two matrices. Then the nonzero eigenvalues of the matrices $AB$ and $BA$ are the same, and $\vert I_{d}+AB\vert=\vert I_{s}+BA\vert$ holds. Moreover, if $d=s$, then $\vert AB\vert=\vert A\vert\cdot\vert B\vert=\vert BA\vert, \text{Tr}(A)=\text{Tr}(A^{T}),\text{Tr}(AB)=\text{Tr}(BA)$. Furthermore, if $A$ and $B$ are positive definite matrices with $A\geq B$, then $A^{-1}\leq B^{-1}$.
\emph{Lemma A.2.} (Ky Fan Convex Theorem)\cite{Ky1950} For any nonnegative definite matrices $A_{i}\in\mathbb{R}^{m\times m} (i=1, \dots, n)$, and any constants $0\leq\lambda_{i}\leq 1 (i=1,\dots, n)$ satisfying $\sum_{i=1}^{n}\lambda_{i}=1$, the following inequality holds:
$$
\vert\lambda_{1}A_{1}+\lambda_{2}A_{2}+\dots+\lambda_{n}A_{n}\vert\geq\vert A_{1}\vert^{\lambda_{1}}\vert A_{2}\vert^{\lambda_{2}}\dots\vert A_{n}\vert^{\lambda_{n}}.
$$
We remark that this lemma is exactly Lemma 1 in \cite{Ky1950} for $n=2$. For $n>2$, this lemma can be proved easily by induction.
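As a quick numerical illustration of the lemma for $n=2$ (with randomly generated nonnegative definite matrices; the construction below is purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = 4
X = rng.normal(size=(m, m))
Y = rng.normal(size=(m, m))
A1, A2 = X @ X.T, Y @ Y.T               # nonnegative definite matrices
for lam in (0.0, 0.3, 0.5, 0.9, 1.0):
    lhs = np.linalg.det(lam * A1 + (1 - lam) * A2)
    rhs = np.linalg.det(A1) ** lam * np.linalg.det(A2) ** (1 - lam)
    assert lhs >= rhs - 1e-9            # Ky Fan inequality for n = 2
\end{verbatim}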
\emph{Lemma A.3.} \cite{Guo} For any matrices $A, B, C$ and $D$ with suitable dimensions,
$$
(A+BDC)^{-1}=A^{-1}-A^{-1}B(D^{-1}+CA^{-1}B)^{-1}CA^{-1},
$$
holds, provided that the relevant matrices are invertible.
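This identity can be verified numerically, e.g., with the following illustrative sketch (the random, well-conditioned matrices are an assumption made to keep all inverses well defined):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, s = 5, 3
A = np.eye(d) + 0.1 * rng.normal(size=(d, d))
B = rng.normal(size=(d, s))
C = rng.normal(size=(s, d))
D = np.eye(s) + 0.1 * rng.normal(size=(s, s))
Ai, Di = np.linalg.inv(A), np.linalg.inv(D)
lhs = np.linalg.inv(A + B @ D @ C)
rhs = Ai - Ai @ B @ np.linalg.inv(Di + C @ Ai @ B) @ C @ Ai
assert np.allclose(lhs, rhs)            # matrix inversion formula
\end{verbatim}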
\emph{Lemma A.4.} \cite{Guo} For any scalar sequence $a_{j}\geq 0, (j=1,\dots,m)$, the following $C_{r}$-inequality holds:
$$
\Bigg(\sum\limits_{j=1}^{m}a_{j}\Bigg)^{r}\leq
\begin{cases}
&m^{r-1}\sum\limits_{j=1}^{m}a_{j}^{r},~~~~r\geq 1,\\
&\sum\limits_{j=1}^{m}a_{j}^{r},~~~~~~~~~~0\leq r\leq 1.
\end{cases}
$$
\section{Proof of Remark 3.3}
Similar to the proof of \emph{Lemma 4.4}, here we consider the following Lyapunov function:
$$
\bar{V}_{k}=\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}].
$$
Since $\bm{\Delta}_{k+1}=\bar{\bm{P}}_{k+1}-\mathscr{A}\bm{P}_{k+1}\mathscr{A}\geq 0$ and $\{\omega_{k,i}, \mathcal{F}_{k}\}$ is a martingale difference sequence, taking mathematical expectations on both sides of (\ref{Lya}) yields the following relationship:
$$
\begin{aligned}
\bar{V}_{k+1}\leq&\bar{V}_{k}-\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}]-\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{P}_{k}^{-1}\widetilde{\bm{\Theta}}_{k}]\\
&-2\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bar{\bm{P}}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}]\\
&+2\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\bm{\Delta}_{k+1}\bm{\Phi}_{k}\bm{W}_{k+1}]\\
&+\mathbb{E}[\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}]\\
\leq & \bar{V}_{k}-\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}]\\
&-2\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{W}_{k+1}]\\
&+\mathbb{E}[\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}]\\
= &\bar{V}_{k}-\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}]\\
&-2\mathbb{E}[\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{P}_{k}^{-1}\mathscr{A}\bm{P}_{k+1}\mathscr{A}\bm{\Phi}_{k}\bm{W}_{k+1}\vert\mathcal{F}_{k}]]\\
&+\mathbb{E}[\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}]\\
=&\bar{V}_{k}-\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}]+\mathbb{E}[\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}].
\end{aligned}
$$
Similar to the proof of \emph{Lemma 4.4} and \emph{Theorem 3.1}, summing from $k=0$ to $t$ yields
\begin{align}\label{sume}
&\bar{V}_{t+1}+\sum_{k=0}^{t}\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}]\nonumber\\
\leq & \bar{V}_{0}+\sum_{k=0}^{t}\mathbb{E}[\bm{W}_{k+1}^{T}\bm{b}_{k}\bm{\Phi}_{k}^{T}\bm{P}_{k}\bm{\Phi}_{k}\bm{W}_{k+1}]\nonumber\\
\leq & \bar{V}_{0}+\mathbb{E}[\sigma_{w}\log(\vert\bm{P}_{t+1}^{-1}\vert)]-\mathbb{E}[\sigma_{w}\log(\vert\bm{P}_{0}^{-1}\vert)]\nonumber\\
\leq & \bar{V}_{0}+mn\bar{\sigma}\mathbb{E}[\log(r_{t})]-\bar{\sigma}\mathbb{E}[\log(\vert\bm{P}_{0}^{-1}\vert)]\nonumber\\
\leq & \bar{V}_{0}+mn\bar{\sigma}\log(\mathbb{E}[r_{t}])-\bar{\sigma}\mathbb{E}[\log(\vert\bm{P}_{0}^{-1}\vert)],
\end{align}
where for the last inequality we have used the fact that $\log(\cdot)$ is a concave function.
Since there exists a deterministic constant $c>0$ such that $\|\bm{\Phi}_{t}^{T}\bm{P}_{t}\bm{\Phi}_{t}\|\leq c$, the following result holds by (\ref{key}) and (\ref{sume}):
$$
\begin{aligned}
&\sum_{i=1}^{n}\sum_{k=0}^{t}\mathbb{E}[R_{k,i}]\\
=&\sum_{k=0}^{t}\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}]\\
\leq &(1+c)\sum_{k=0}^{t}\mathbb{E}[\widetilde{\bm{\Theta}}_{k}^{T}\bm{\Phi}_{k}\bm{b}_{k}\bm{\Phi}_{k}^{T}\widetilde{\bm{\Theta}}_{k}]\\
\leq & (1+c)\Big\{mn\bar{\sigma}\log(\mathbb{E}[r_{t}])+\mathbb{E}[\widetilde{\bm{\Theta}}_{0}^{T}\bm{P}_{0}^{-1}\widetilde{\bm{\Theta}}_{0}]\\
&-\bar{\sigma}\mathbb{E}[\log(\vert\bm{P}_{0}^{-1}\vert)]\Big\}.
\end{aligned}
$$
This completes the proof.
\end{appendices}
\section{Introduction}
\label{Intro}
Magnetic fields play a crucial role in several high-energy
astrophysical scenarios at different scales, from Active Galactic
Nuclei (AGN) to Gamma-Ray Bursts (GRBs).
These
phenomena involve compact objects such as neutron stars (NSs) and black holes (BHs) and therefore any attempt to model them requires a general relativistic treatment.
As a consequence, studying this kind of systems demands to solve the full set of
general relativistic magnetohydrodynamic (GRMHD) equations~\cite{anton2006numerical}.
In most situations, the GRMHD equations are to be solved numerically,
often on dynamical spacetimes, and a number of GRMHD codes have
been developed over the years for this purpose (e.g., ~\cite{giacomazzo2007whiskymhd, mosta2013grhydro, etienne2015illinoisgrmhd}).
Some of them have been used, in particular, to study compact binary mergers
(e.g., \cite{Liu2008, Anderson2008, giacomazzo2011accurate, Kiuchi2018, Ciolfi2019}) and accretion onto supermassive BHs
(e.g., \cite{Palenzuela2010, Giacomazzo2012, Farris2012, Gold2014, Kelly2017, dascoli2018, EHTCodeComparison}).
In the case of compact binary mergers, GRMHD codes have been used to
simulate NS-NS and NS-BH mergers in order to study the effects of
magnetic fields on the gravitational wave (GW) and electromagnetic
(EM) emission (e.g., \cite{Kawamura2016, Ciolfi2017}). For instance, GRMHD simulations have recently provided indications that, under certain conditions, the BH remnant of a NS-NS or NS-BH merger may be able to give
rise to a relativistic jet and hence power a short
GRB~\cite{Paschalidis2015, Ruiz2016}.
This is a likely scenario to explain the connection between compact binary mergers and short
GRBs, recently confirmed by the first simultaneous observation
of GWs emitted by a NS-NS merger and a
short GRB~\cite{PhysRevLett.119.161101,LVC-GRB}.
Concerning the accretion onto supermassive BH mergers, current simulations aim at predicting the light curves of possible EM counterparts of future GW sources detected by LISA~\cite{Schnittman2013, Amaro-Seoane2017}.
In this paper, we present our new fully GRMHD numerical code, named
\texttt{Spritz}, that solves the GRMHD equations in 3D and on a
dynamical spacetime. The code inherits a number of basic features from the
\texttt{WhiskyMHD} code~\cite{giacomazzo2007whiskymhd}, but it also takes
advantage of methods implemented and tested in the publicly available
\texttt{GRHydro}~\cite{mosta2013grhydro} and
\texttt{IllinoisGRMHD}~\cite{etienne2015illinoisgrmhd} codes.
The \texttt{WhiskyMHD} code has been used successfully to simulate
NS-NS mergers~\cite{giacomazzo2011accurate, Ciolfi2019, Kawamura2016, Ciolfi2017, Giacomazzo2009, Rezzolla2011, GiacomazzoPerna2013, Giacomazzo2015, Endrizzi2016} and accretion onto supermassive black hole binaries~\cite{Giacomazzo2012BBH}, but it is limited to the use of simple
piecewise polytropic equations of state~\cite{Read2009} and it is not
able to take into account neutrino emission.
Moreover, this code can evolve the
vector potential instead of the magnetic field, but employing a non-staggered formalism that may have undesired effects on the evolution (see discussion in the following sections).
The new \texttt{Spritz} code can instead handle any equation of state
for which the pressure is a function of rest-mass density,
temperature, and electron fraction and therefore can also use modern
tabulated equations of state. This has been possible by following an
approach similar to the one used in the \texttt{GRHydro} code, which can use
finite temperature tabulated equations of state, but which still lacks
a magnetic field implementation able to correctly handle
mesh refinement techniques.
staggered version of the vector potential formulation in a formalism
that, as discussed later in the paper, recovers the original
conservative flux-CT approach implemented in the original version of
\texttt{WhiskyMHD}. This has been possible by using algorithms similar
to those implemented in \texttt{IllinoisGRMHD}, which at the moment
can only handle simple ideal fluid equations of state.
Therefore, the \texttt{Spritz} code aims at merging together the main
capabilities of the three codes mentioned above.
Here, we present a series of extensive
tests in 1D, 2D, and 3D, including, for the first time, a comparison between
staggered and non-staggered schemes for the vector potential evolution and a
rather demanding spherical explosion test. The \texttt{Spritz} code passes
all the tests successfully and it will be soon used to carry out NS-NS
and NS-BH merger simulations. We also plan to make the code publicly
available in the near future within the Einstein Toolkit collaboration.
The paper is organized as follows: in \Sref{sec2} we present the GRMHD
equations and the formulation used in the code; in \Sref{sec3} the
main numerical methods are discussed; in \Sref{sec4} we present the
results of our tests; and in \Sref{sec5} we summarize the main results
and discuss future developments. We use a system of units such that
$G=c=1$ unless otherwise specified. Greek indices run from 0 to 3 and
Latin indices run from 1 to 3.
\section{Equations}
\label{sec2}
In this section we summarize the theoretical background and the equations implemented in \texttt{Spritz}, together with the main references for readers interested in further details.
In addition to these references, it is worth mentioning the book \cite{baumgarte2010numerical}, which presents an extensive theoretical introduction to numerical relativity approaches for solving Einstein's equations in several physical scenarios.
\subsection{3+1 spacetime formulation}
\label{3+1}
Our numerical methods and implementation are largely based on the ones employed in \texttt{WhiskyMHD} \cite{giacomazzo2007whiskymhd}, where a 3+1 formulation of the Einstein's equations is adopted. In such a framework, the form of the line element is:
\begin{equation}
\label{LinEl}
ds^{2} = g_{\mu \nu} dx^{\mu} dx^{\nu} = -\left( \alpha^{2} - \beta^{i}\beta_{i} \right) dt^2 + 2 \beta_{i} dx^{i} dt + \gamma_{ij} dx^{i}dx^{j},
\end{equation}
where the usual Einstein summation convention is adopted. Here $g_{\mu \nu}$ is the metric tensor, $\gamma_{ij}$ its purely spatial part, and $\alpha$ and $\beta^{i}$ are respectively the \textit{lapse} and the \textit{shift} vector. We adopt coordinates such that $x^{0} \equiv t$.
\texttt{Spritz} makes use of the conservative formulation presented in \cite{anton2006numerical}, which is the GRMHD version of the original general relativistic hydrodynamics Valencia formulation~\cite{banyuls1997numerical,marti1991numerical}. Here, the natural observer is called the \textit{Eulerian observer} and its four--velocity $\bi{n}$ is normal to the 3--dimensional hypersurface of constant $t$ with the following components:
\begin{equation}
\label{natvel}
\eqalign{
n^{\mu} &= \frac{1}{\alpha} \left( 1, -\beta^{i} \right), \\
n_{\mu} &= \left( -\alpha,0,0,0 \right).
}
\end{equation}
When considering matter, the spatial components of the fluid velocity measured by the Eulerian observer read
\begin{equation}
\label{spatfluidvel}
v^{i} = \frac{h^{i}_{\mu} u^{\mu}}{-\bi{u} \cdot \bi{n}} = \frac{u^i}{\alpha u^t} + \frac{\beta^i}{\alpha} = \frac{u^i}{W} + \frac{\beta^i}{\alpha},
\end{equation}
where $\bi{u}$ is the fluid four--velocity, $h_{\mu \nu} = g_{\mu \nu} + n_{\mu} n_{\nu}$ is the projector onto the aforementioned hypersurface at constant $t$, $W = 1/\sqrt{1-v^2}$ is the Lorentz factor, and $v^2\equiv\gamma_{ij} v^i v^j$ is the square norm of $\bi{v}$.
\subsection{Electromagnetic field}
\label{sec:Maxwell}
The general relativistic formulation of \cite{anton2006numerical} describes the electromagnetic field via the Faraday tensor and its dual, given respectively by
\begin{equation}
\label{Faraday}
F^{\mu \nu} = U^{\mu} E^{\nu} - U^{\nu} E^{\mu} - \eta^{\mu \nu \lambda \delta} U_{\lambda} B_{\delta},
\end{equation}
\begin{equation}
\label{FaradayDual}
^{*}F^{\mu \nu} = \frac{1}{2} \eta^{\mu \nu \lambda \delta} F_{\lambda \delta} = U^{\mu} B^{\nu} - U^{\nu} B^{\mu} - \eta^{\mu \nu \lambda \delta} U_{\lambda} E_{\delta},
\end{equation}
where $E^{\mu}$ is the electric field, $B^{\mu}$ the magnetic field, $U^{\mu}$ the four--velocity of a generic observer, and $\eta^{\mu \nu \lambda \delta} = \frac{1}{\sqrt{-g}} \left[ \mu \nu \lambda \delta \right]$ the volume element.
The equations governing the electromagnetic field and its evolution are the well known Maxwell's equations
\begin{equation}
\label{eq:Maxwell}
\eqalign{
{\nabla}_{\nu} ^{*}F^{\mu \nu} &= 0 \, , \\
{\nabla}_{\nu} F^{\mu \nu} &= 4 \pi \mathcal{J}^{\mu} \, ,
}
\end{equation}
where $\bi{\mathcal{J}}$ is the four--vector current density, which can be expressed through Ohm's law as
\begin{equation}
\label{4curr}
\mathcal{J}^{\mu} = q u^{\mu} + \sigma F^{\mu \nu} u_{\nu} \, ,
\end{equation}
with $q$ the proper charge density and $\sigma$ the electric conductivity. In the ideal MHD regime (i.e., when $\sigma \to \infty$ and $F^{\mu \nu} u_{\nu} \to 0$) \Eref{Faraday} and \Eref{FaradayDual} can be expressed as
\begin{equation}
\label{FaradayIdeal}
F^{\mu \nu} = \eta^{\alpha \beta \mu \nu} b_{\alpha} u_{\beta}, \qquad
^{*}F^{\mu \nu} = b^{\mu} u^{\nu} - b^{\nu} u^{\mu} = \frac{u^{\mu} B^{\nu} - u^{\nu} B^{\mu}}{W} \, ,
\end{equation}
where $\bi{b}$ is the magnetic field according to the comoving observer, which can be written component--wise as follows~\cite{giacomazzo2007whiskymhd}:
\begin{equation}
\label{bsmall}
b^0 = \frac{W B^i v_i}{\alpha}, \qquad b^i = \frac{B^i + \alpha b^0 u^i}{W}, \qquad b^2 \equiv b^\mu b_\mu = \frac{B^2 + \alpha^2 \left( b^0 \right)^2}{W^2} \, .
\end{equation}
Here, $B^2 \equiv B^i B_i$, where $\bi{B}$ is now the magnetic field measured by the Eulerian observer (i.e., from now on $U^\mu=n^\mu$). With \Eref{FaradayIdeal}, the Maxwell's equations considering the dual of Faraday tensor can be written as
\begin{equation}
\label{MaxwellIdeal}
{\nabla}_{\nu} ^{*}F^{\mu \nu} = \frac{1}{\sqrt{-g}} \partial_{\nu} \left( \sqrt{-g} \left(b^{\mu} u^{\nu} - b^{\nu} u^{\mu} \right) \right) = 0.
\end{equation}
Split into its components, \Eref{MaxwellIdeal} provides the equations governing the magnetic field constraints and evolution, namely the \textit{divergence--free condition}
\begin{equation}
\label{divfree}
\partial_i \tilde{B}^i = 0 \, ,
\end{equation}
where $\tilde{B}^i \equiv \sqrt{\gamma} B^i$, and the magnetic field \textit{induction equations}
\begin{equation}
\label{magnind}
\partial_t \tilde{B}^i = \partial_j \left( \tilde{v}^i \tilde{B}^j - \tilde{v}^j \tilde{B}^i \right) \, ,
\end{equation}
where $\tilde{v}^i \equiv \alpha v^i - \beta ^i$.
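As an illustration of \Eref{spatfluidvel} and \Eref{bsmall}, the following Python sketch (not part of the \texttt{Spritz} code; function and variable names are ours, and a simple flat-metric example is assumed) evaluates $b^0$, $b^i$, and $b^2$ from the Eulerian fields:
\begin{verbatim}
import numpy as np

def comoving_b(alpha, beta, gamma, v, B):
    """alpha: lapse; beta, v, B: 3-vectors; gamma: 3-metric (3x3)."""
    v_lo = gamma @ v                    # v_i = gamma_ij v^j
    W = 1.0 / np.sqrt(1.0 - v @ v_lo)   # Lorentz factor
    u = W * (v - beta / alpha)          # u^i = W (v^i - beta^i / alpha)
    b0 = W * (B @ v_lo) / alpha
    bi = (B + alpha * b0 * u) / W
    B2 = B @ (gamma @ B)
    b2 = (B2 + alpha**2 * b0**2) / W**2
    return b0, bi, b2

# flat-metric example (values are arbitrary)
b0, bi, b2 = comoving_b(1.0, np.zeros(3), np.eye(3),
                        np.array([0.1, 0.0, 0.0]),
                        np.array([0.5, 1.0, 0.0]))
\end{verbatim}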
\subsection{Conservative approach}
\label{Cons}
The stress--energy tensor, considering a perfect fluid and the contribution of the magnetic field, can be written as
\begin{equation}
\label{Tmunu}
T^{\mu \nu} = \left( \rho h + b^2 \right) u^{\mu} u^{\nu} + \left( p_\mathrm{gas} + p_\mathrm{mag} \right) g^{\mu \nu} - b^{\mu} b^{\nu},
\end{equation}
where $\rho$ is the rest--mass density, $p_\mathrm{gas}$ the gas pressure, $p_\mathrm{mag} \equiv \frac{b^2}{2}$ the magnetic pressure, $h = 1 + \varepsilon + \frac{p_\mathrm{gas}}{\rho}$ the relativistic specific enthalpy, and $\varepsilon$ the specific internal energy.
The energy-momentum conservation
\begin{equation}
\label{consTmunu}
\nabla_{\nu} T^{\mu \nu} = 0 \, ,
\end{equation}
the conservation of baryon number
\begin{equation}
\label{consbaryon}
\nabla_{\nu} \left( \rho u^{\nu} \right) = 0 \, ,
\end{equation}
Maxwell's equations for the magnetic field~(\ref{magnind}), and an equation of state (EOS, see~\ref{EOS}) together give the complete set of equations describing the evolution of the primitive variables, i.e., $\bi{U} = \left[ \rho, v_j, \varepsilon, B^k \right]$. As usual, these equations can be written in the following conservative form:
\begin{equation}
\label{consequat}
\frac{1}{\sqrt{-g}} \left[ \partial_t \left( \sqrt{\gamma} \bi{F}^0 \right) + \partial_i \left( \sqrt{-g} \bi{F}^i \right) \right] = \bi{S},
\end{equation}
where $\bi{F}^0 \equiv \left[ D, S_j, \tau, B^k \right]$ is the vector of conserved variables, defined in terms of the primitive ones as
\begin{equation}
\label{P2Csystem}
\eqalign{
D &\equiv \rho W, \\
S_{j} &\equiv \left( \rho h + b^2 \right) W^2 v_{j} - \alpha b^0 b_{j}, \\
\tau &\equiv \left( \rho h + b^2 \right) W^2 - \left( p_\mathrm{gas} + p_\mathrm{mag} \right) - \alpha^2 \left( b^0 \right)^2 - D, \\
B^k &\equiv B^k,
}
\end{equation}
$\bi{F}^i$ the vector of fluxes defined as
\begin{equation}
\label{conservedvector}
\bi{F}^i \equiv \left[ \eqalign{
&\qquad D\tilde{v}^i / \alpha \\
S_j \tilde{v}^i / \alpha + &\left( p_\mathrm{gas} + p_\mathrm{mag} \right) \delta^i_j - b_j B^i / W \\
\tau \tilde{v}^i / \alpha + &\left( p_\mathrm{gas} + p_\mathrm{mag} \right) v^i - \alpha b^0 B^i / W \\
&B^k \tilde{v}^i / \alpha - B^i \tilde{v}^k / \alpha
} \right]\,,
\end{equation}
and $\bi{S}$ the vector of sources that reads
\begin{equation}
\label{sources}
\bi{S} \equiv \left[ \eqalign{
& \qquad 0 \\
T^{\mu \nu} &\left( \partial_{\mu} g_{\nu j} - \Gamma^{\delta}_{\nu \mu} g_{\delta j} \right) \\
\alpha &\left( T^{\mu 0} \partial_{\mu} \ln{\alpha} - T^{\mu \nu} \Gamma^{0}_{\nu \mu} \right) \\
& \qquad 0^k
} \right].
\end{equation}
In order to avoid time derivatives of the metric in the source terms, these are rewritten as done in the case of the \texttt{Whisky} code~\cite{baiotti2005three} (see section 4.3.2 of ~\cite{BaiottiPhDThesis} for details).
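For illustration purposes only, the map in \Eref{P2Csystem} can be sketched in Python as follows, assuming flat space ($\alpha=1$, $\beta^i=0$, $\gamma_{ij}=\delta_{ij}$); this is a schematic example, not the actual \texttt{Spritz} routine:
\begin{verbatim}
import numpy as np

def prim_to_con(rho, v, eps, p_gas, B):
    """Flat space: alpha = 1, beta^i = 0, gamma_ij = delta_ij."""
    W = 1.0 / np.sqrt(1.0 - v @ v)
    u = W * v
    b0 = W * (B @ v)
    bi = (B + b0 * u) / W
    b2 = (B @ B + b0**2) / W**2
    h = 1.0 + eps + p_gas / rho
    p_mag = 0.5 * b2
    D = rho * W
    S = (rho * h + b2) * W**2 * v - b0 * bi   # S_j (raised = lowered here)
    tau = (rho * h + b2) * W**2 - (p_gas + p_mag) - b0**2 - D
    return D, S, tau                          # B^k is conserved as-is
\end{verbatim}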
\subsection{Electromagnetic gauge conditions}
\label{GaugeCond}
In order to accurately describe the magnetic field and its evolution, it can be convenient to formulate the problem in terms of the vector potential (see, e.g., \cite{feynman1979feynman}). Considering $\nabla$ as a purely spatial operator, one may write
\begin{equation}
\label{rotB}
\bi{B} = \nabla \times \bi{A} \, ,
\end{equation}
so that
\begin{equation}
\label{inductsatisfy}
\nabla \cdot \bi{B} = \nabla \cdot \left( \nabla \times \bi{A} \right) = 0 \, ,
\end{equation}
and thus evolving the vector potential $\bi{A}$ will automatically satisfy \Eref{divfree}.
As already done in \cite{baumgarte2003collapse,baumgarte2003general,etienne2012relativistic}, we then introduce the four--vector potential
\begin{equation}
\label{4vector}
\mathcal{A}_{\nu} = n_{\nu} \Phi + A_{\nu} \, ,
\end{equation}
where $A_{\nu}$ is the purely spatial vector potential and $\Phi$ the scalar potential. With this, \Eref{divfree} and \Eref{magnind} become respectively
\begin{equation}
\label{Adivfree}
B^i = \epsilon^{ijk} \partial_j A_k \, ,
\end{equation}
and
\begin{equation}
\label{Ainduct}
\partial_t A_i = \epsilon_{ijk} v^j B^k - \partial_i \left( \alpha \Phi - \beta^j A_j \right) = -E_i - \partial_i \left( \alpha \Phi - \beta^j A_j \right),
\end{equation}
where $\epsilon^{ijk} = n_{\nu} \epsilon^{\nu ijk}$ is the three--dimensional spatial Levi--Civita tensor.
However, the choice of the four-vector potential $\mathcal{A}^{\nu}$ is not unique and one has to choose a specific gauge. The first GRMHD simulations that employed the vector potential as an evolution variable were performed using the \textit{algebraic} gauge~\cite{etienne2012relativistic,etienne2010relativistic}, where the scalar potential satisfies the following equation:
\begin{equation}
\label{algebraic}
\Phi = \frac{1}{\alpha} \left( \beta^j A_j \right).
\end{equation}
In this way, \Eref{Ainduct} is greatly simplified, reducing to
\begin{equation}
\label{Ainduct-algebraic}
\partial_t A_i = \epsilon_{ijk} v^j B^k = - E_i \, ,
\end{equation}
and therefore the scalar potential $\Phi$ does not need to be evolved.
More recently, GRMHD simulations started to use the \textit{Lorenz} gauge~\cite{etienne2012relativistic}, which consists of imposing the constraint $\nabla_{\nu} \mathcal{A}^{\nu} = 0$. This gauge now requires also solving the evolution equation for the scalar potential:
\begin{equation}
\label{lorenz}
\partial_t \left( \sqrt{\gamma} \Phi \right) + \partial_i \left( \alpha \sqrt{\gamma} A^i - \sqrt{\gamma} \beta^i \Phi \right) = 0.
\end{equation}
The \textit{Lorenz} gauge has been shown to perform better in those simulations that implement adaptive mesh refinement, such as, for example, binary neutron star and neutron star--black hole mergers~\cite{etienne2012relativistic}.
The \textit{algebraic} gauge may indeed cause interpolation errors at the boundaries between refinement levels and thus produce spurious magnetic fields (see~\cite{etienne2012relativistic} for more details). An even more robust gauge choice has been introduced in~\cite{Farris2012} with the name of \textit{generalized Lorenz} gauge:
\begin{equation}
\nabla_{\nu} \mathcal{A}^{\nu} = \xi n_\nu \mathcal{A}^{\nu} \, ,
\end{equation}
where $\xi$ is a parameter that is typically set equal to $1.5/\Delta t_{\rm max}$, with $\Delta t_{\rm max}$ the timestep of the coarsest refinement level~\cite{etienne2015illinoisgrmhd}. When employing this gauge choice, the evolution equation for the scalar potential becomes
\begin{equation}
\label{lorenz-generalized}
\partial_t \left( \sqrt{\gamma} \Phi \right) + \partial_i \left( \alpha \sqrt{\gamma} A^i - \sqrt{\gamma} \beta^i \Phi \right) = -\xi \alpha \sqrt{\gamma} \Phi \, .
\end{equation}
In \texttt{Spritz} we adopt the \textit{generalized Lorenz} gauge which is also the gauge used in the latest \texttt{WhiskyMHD} simulations~\cite{Ciolfi2019, Kawamura2016, Ciolfi2017, GiacomazzoPerna2013, Giacomazzo2015, Endrizzi2016}.
\section{Numerical Implementation}
\label{sec3}
In the present section we summarize the main numerical methods implemented within the \texttt{Spritz} code. The code is based on the Einstein Toolkit~\cite{ETKpaper, EinsteinToolkit:2019_10, ET} which provides a framework to automatically parallelize the code for the use on supercomputers as well as a number of open-source codes providing a number of useful routines, such as those for the evolution of the spacetime, adaptive mesh refinement, input and output of data, checkpointing, and many others.
\subsection{Riemann Solvers}
\label{RieHRSC}
The \texttt{Spritz} code adopts High Resolution Shock Capturing (HRSC) methods to solve \Eref{consequat}. These methods are based on the choice of reconstruction algorithms, to compute the values of primitive variables at the interface between numerical cells, and of approximate Riemann solvers, to finally compute the fluxes.
Our default Riemann solver is the Harten--Lax--van--Leer--Einfeldt (HLLE) \cite{harten1983upstream}, where the numerical fluxes at cell interfaces are computed as follows:
\begin{equation}
\label{HLLEfluxes}
\bi{F}^i = \frac{c_\mathrm{min} \bi{F}^i_\mathrm{r} + c_\mathrm{max} \bi{F}^i_\mathrm{l} - c_\mathrm{max} c_\mathrm{min} \left( \bi{F}^0_\mathrm{r} - \bi{F}^0_\mathrm{l} \right)}{c_\mathrm{max} + c_\mathrm{min}} \, ,
\end{equation}
where a subscript r (l) means that the function is computed at the right (left) side of the cell interface and
$c_\mathrm{max} \equiv \max \left( 0, c_{+,\tiny{\mbox{l}}}, c_{+,\tiny{\mbox{r}}} \right)$,
$c_\mathrm{min} \equiv -\min \left( 0, c_{-,\tiny{\mbox{l}}}, c_{-,\tiny{\mbox{r}}} \right)$, where
$c_{\pm,\tiny{\mbox{r}}}$ ($c_{\pm,\tiny{\mbox{l}}}$) are the right-going ($+$) and left-going ($-$) maximum wave speeds computed from the primitive variables $\bi{U}$ at the right (left) side.
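For reference, a minimal sketch of \Eref{HLLEfluxes} for a single interface is given below (illustrative only; it assumes $c_\mathrm{max}+c_\mathrm{min}>0$ and our own variable names):
\begin{verbatim}
def hlle_flux(F0_l, F0_r, Fi_l, Fi_r, cp_l, cp_r, cm_l, cm_r):
    """F0_*: conserved states at the two sides of the interface;
    Fi_*: physical fluxes there; cp/cm: max right/left-going speeds.
    Assumes c_max + c_min > 0 (non-degenerate wave fan)."""
    c_max = max(0.0, cp_l, cp_r)
    c_min = -min(0.0, cm_l, cm_r)
    return (c_min * Fi_r + c_max * Fi_l
            - c_max * c_min * (F0_r - F0_l)) / (c_max + c_min)
\end{verbatim}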
We decided also to implement the Lax--Friedrichs (LxF) scheme \cite{toro2013riemann}, that is
\begin{equation}
\label{LxFfluxes}
\bi{F}^i = \frac{\bi{F}^i_\mathrm{r} + \bi{F}^i_\mathrm{l} - c_\mathrm{wave} \left( \bi{F}^0_\mathrm{r} - \bi{F}^0_\mathrm{l} \right)}{2},
\end{equation}
where $c_\mathrm{wave} = \max(c_\mathrm{max}, c_\mathrm{min})$~\cite{del2003efficient}. This scheme is very dissipative and can be useful in cases where strong pressure jumps must be handled.
In order to compute the values of $\bi{F}^0$ at the right and left sides of cell interfaces from the primitive variables, we adopt the third--order Piecewise Parabolic Method (\textit{PPM}) \cite{colella1984piecewise}. In addition, for those cases that require more dissipative methods, for example in the presence of strong shocks, we also implemented the second--order total variation diminishing (TVD) \textit{minmod} method \cite{toro2013riemann}.
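A minimal sketch of the \textit{minmod} reconstruction (with our own conventions for the interface states; not the actual \texttt{Spritz} implementation) is the following:
\begin{verbatim}
import numpy as np

def minmod(a, b):
    return np.where(a * b <= 0.0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

def reconstruct_minmod(q):
    """q: 1D array of cell averages; returns limited interface states."""
    dq = minmod(q[1:-1] - q[:-2], q[2:] - q[1:-1])  # slope in cell i
    q_plus = q[1:-1] + 0.5 * dq    # left state at interface i+1/2
    q_minus = q[1:-1] - 0.5 * dq   # right state at interface i-1/2
    return q_plus, q_minus
\end{verbatim}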
\subsection{Electromagnetic Field Evolution}
\label{Stagg}
As already mentioned in \Sref{GaugeCond}, the \texttt{Spritz} code is meant to deal with different electromagnetic gauge conditions for the vector potential.
In order to accurately evolve the magnetic field, particular care must be taken in solving numerically \Eref{Ainduct-algebraic}, in the case of the algebraic gauge, or \Eref{Ainduct} and \Eref{lorenz-generalized}, in case of the generalized Lorenz gauge. From now on, we will also use the following definition for simplicity:
\begin{equation}
\label{psimhd}
\Psi_\mathrm{mhd} \equiv \sqrt{\gamma} \Phi \,.
\end{equation}
As in every numerical code, the spatial domain is divided into grid--cells of user specified dimensions. The fluid state variables (e.g., $\rho$, $p_\mathrm{gas}$, $\bi{v}$) are stored at the grid--cell centers, while the electric and magnetic fields ($\bi{E}$ and $\bi{B}$) are usually stored on cell edges and faces, respectively. \texttt{Spritz} evolves the magnetic field as the curl of a given vector potential $\bi{A}$, whose components are staggered just like the electric field $\bi{E}$ (see \Fref{Fig1}) and are usually evolved using the generalized Lorenz gauge. The precise storage location of the various quantities on the grid--cells is reported in \Tref{tabstagg}.
\begin{table}[t!]
\caption{\label{tabstagg}Location over the grid of various quantities. Symbols in the left column should be considered at the code's array position $\left( i,j,k \right)$ while the right column indicates the actual location over the grid that depends on whether the different quantities present a particular specification for the prolongation (for the components of the four--vector potential) or how they are computed within the code (for the components of the magnetic field).}
\footnotesize
\begin{center}
\begin{tabular}{@{}ccc}
\br
Symbol&Definition&Location\\
\mr
$\alpha$&lapse&$(i,j,k)$\\
$\beta^m$&$m$--component of the shift vector&$(i,j,k)$\\
$\gamma^{mn}$&$mn$--component of the spatial metric&$(i,j,k)$\\
$\gamma$&determinant of the spatial metric&$(i,j,k)$\\
$\rho$&rest-mass density&$(i,j,k)$\\
$p_\mathrm{gas}$&gas pressure&$(i,j,k)$\\
$\varepsilon$&energy density&$(i,j,k)$\\
$v_m$&$m$--component of fluid velocity&$(i,j,k)$\\
$B^1$&$x$--component of magnetic field&$(i+\frac{1}{2},j,k)$\\
$B^2$&$y$--component of magnetic field&$(i,j+\frac{1}{2},k)$\\
$B^3$&$z$--component of magnetic field&$(i,j,k+\frac{1}{2})$\\
$A_1$&$x$--component of vector potential&$(i,j+\frac{1}{2},k+\frac{1}{2})$\\
$A_2$&$y$--component of vector potential&$(i+\frac{1}{2},j,k+\frac{1}{2})$\\
$A_3$&$z$--component of vector potential&$(i+\frac{1}{2},j+\frac{1}{2},k)$\\
$\Psi_\mathrm{mhd}$&scalar potential&$(i+\frac{1}{2},j+\frac{1}{2},k+\frac{1}{2})$\\
\br
\end{tabular}\\
\end{center}
\end{table}
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.5\linewidth]{GRID_CELL.eps}
\captionof{figure}{\label{Fig1}Representation of storage locations for magnetic field and vector potential components in our numerical code. Point $P_{i,j,k}$ represents the cell's center.}
\end{center}
\end{figure}
Since $\bi{B}$ is computed from the curl of $\bi{A}$, the divergence--free character of the magnetic field is automatically satisfied.
The \texttt{Spritz} code evolves the vector potential $A$ and, when employing the generalized Lorenz gauge, the scalar potential $\Psi_\mathrm{mhd}$ is also computed.
Following \cite{etienne2015illinoisgrmhd}, we can write the update terms for the vector potential's components and for the scalar potential as follows:
\begin{equation}
\label{Axrhs}
\eqalign{
\partial_t A_m &= -E_m - \partial_m \left( G_{A} \right) = \cr
&= -E_m - \partial_m \left( \alpha \frac{\Psi_\mathrm{mhd}}{\sqrt{\gamma}} - \beta^j A_j \right),}
\end{equation}
for $m = 1$, $2$ and $3$, and
\begin{equation}
\label{Psirhs}
\eqalign{
\partial_t \Psi_\mathrm{mhd} &= -\partial_j \left( {F_{\Psi}}^j \right) - \xi \alpha \Psi_\mathrm{mhd} = \cr
&= -\partial_j \left( \alpha \sqrt{\gamma} A^j - \beta^j \Psi_\mathrm{mhd} \right) - \xi \alpha \Psi_\mathrm{mhd},}
\end{equation}
where $\xi$ is the so--called damping factor used in the generalized Lorenz gauge. As can be deduced from \Eref{Axrhs}, \Eref{Psirhs}, and \Tref{tabstagg}, the terms on the right--hand sides are in general stored at different locations, and therefore we adopt the following scheme:
\begin{enumerate}
\item At first we consider functions ${F_{\Psi}}^j$ and $G_{A}$, defined via \Eref{Axrhs} and \Eref{Psirhs} respectively, to be computed at cell centers, by interpolating $\Psi_\mathrm{mhd}$ and $A_j$ respectively from cell vertices and edges to the center.
\item Then we interpolate the values obtained at point (i) for ${F_{\Psi}}^j$ back to the cell edges and for $G_{A}$ back to cell vertices.
\item Finally we numerically differentiate the values at point (ii) via a centered difference scheme. For example, the derivative along $x$ ($m=1$) of $G_{A}$ in~\Eref{Axrhs} on the edge $(i,j+1/2,k+1/2)$ is computed as $[G_{A}(i+1/2,j+1/2,k+1/2)-G_{A}(i-1/2,j+1/2,k+1/2)]/(\Delta x)$. A similar expression is used for the derivatives computed at the cell vertex $(i+1/2,j+1/2,k+1/2)$ in ~\Eref{Psirhs} where the two nearby edges are used.
\end{enumerate}
In detail, if a variable $f$ is given at cell vertices, then we interpolate it at the center of the cell using a simple linear interpolation:
\begin{equation}
\label{InterpVertices2Center}
\eqalign{
f(i,j,k) =& \frac{1}{8} \left[ f\left(i-\frac{1}{2},j-\frac{1}{2},k-\frac{1}{2}\right) + f\left(i-\frac{1}{2},j+\frac{1}{2},k-\frac{1}{2}\right) \right. \cr
&+ \left. f\left(i+\frac{1}{2},j+\frac{1}{2},k-\frac{1}{2}\right) + f\left(i+\frac{1}{2},j-\frac{1}{2},k-\frac{1}{2}\right) \right. \cr
&+ \left. f\left(i-\frac{1}{2},j-\frac{1}{2},k+\frac{1}{2}\right) + f\left(i-\frac{1}{2},j+\frac{1}{2},k+\frac{1}{2}\right) \right. \cr
&+ \left. f\left(i+\frac{1}{2},j+\frac{1}{2},k+\frac{1}{2}\right) + f\left(i+\frac{1}{2},j-\frac{1}{2},k+\frac{1}{2}\right) \right]}
\end{equation}
\Eref{InterpVertices2Center} is used to interpolate $\Psi_\mathrm{mhd}$ at step (i) of the aforementioned scheme.
For quantities defined instead on edges, for example along the $x$--direction, the following interpolation is instead used:
\begin{equation}
\label{InterpEdges2Center}
\eqalign{
f(i,j,k) =& \frac{1}{4} \left[ f\left(i,j-\frac{1}{2},k-\frac{1}{2}\right) + f\left(i,j+\frac{1}{2},k-\frac{1}{2}\right) \right. \cr
&+ \left. f\left(i,j-\frac{1}{2},k+\frac{1}{2}\right) + f\left(i,j+\frac{1}{2},k+\frac{1}{2}\right) \right]}
\end{equation}
\Eref{InterpEdges2Center} is used to interpolate $A_x$ at step (i) of the aforementioned scheme. Along other directions, the straightforward permutation of indices leads to the correct interpolating functions.
The following expression is instead used to interpolate from the cell center to a cell edge:
\begin{equation}
\label{InterpCenters2Edge}
\eqalign{
f\left(i,j+\frac{1}{2},k+\frac{1}{2} \right) =& \frac{1}{4} \left[ f(i,j,k) + f(i,j+1,k) \right. \cr
&+ \left. f(i,j,k+1) + f(i,j+1,k+1) \right]}
\end{equation}
We use \Eref{InterpCenters2Edge} to obtain the values of ${F_{\Psi}}^j$ at point (ii). With the following interpolator we instead compute the values of $G_{A}$ at point (ii):
\begin{equation}
\label{InterpCenters2Vertex}
\eqalign{
f\left(i+\frac{1}{2},j+\frac{1}{2},k+\frac{1}{2}\right) =& \frac{1}{8} \left[ f(i,j,k) + f(i,j+1,k) \right. \cr
&+ \left. f(i,j,k+1) + f(i,j+1,k+1) \right. \cr
&+ \left. f(i+1,j,k) + f(i+1,j+1,k) \right. \cr
&+ \left. f(i+1,j,k+1) + f(i+1,j+1,k+1) \right]}
\end{equation}
In order to finally be able to compute the right-hand-side of \Eref{Axrhs}, one needs also to compute the electric field components $E_m$ that are stored at cell edges. Here we follow the same approach implemented in the \texttt{WhiskyMHD} code~\cite{giacomazzo2007whiskymhd} and use the flux-CT method~\cite{balsara1999staggered}, in which the electric field is computed from the magnetic field HLLE fluxes. Our staggered formulation therefore benefits of the same properties of the constrained transport scheme~\cite{evans88}, but without the need of implementing special prolongation and restriction operators to ensure the divergence-free character of the magnetic field~\cite{Balsara2001}.
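The divergence-free property of the staggered formulation can be verified directly. The following flat-space Python sketch (ours; it assumes a periodic uniform grid) computes $\bi{B}$ as the staggered curl of $\bi{A}$, as in \Eref{Adivfree}, and checks that the discrete divergence in \Eref{divfree} vanishes to round-off:
\begin{verbatim}
import numpy as np

N, h = 16, 1.0 / 16.0
rng = np.random.default_rng(3)
Ax, Ay, Az = (rng.normal(size=(N, N, N)) for _ in range(3))  # edge-centered

def db(F, axis):
    # backward difference / h along 'axis', periodic wrap via np.roll
    return (F - np.roll(F, 1, axis)) / h

Bx = db(Az, 1) - db(Ay, 2)    # face-centered B^x = (curl A)^x
By = db(Ax, 2) - db(Az, 0)    # face-centered B^y
Bz = db(Ay, 0) - db(Ax, 1)    # face-centered B^z

divB = db(Bx, 0) + db(By, 1) + db(Bz, 2)    # cell-centered divergence
print("max |div B| =", np.abs(divB).max())  # zero to round-off
\end{verbatim}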
An alternative scheme could use a non--staggered formulation where $\bi{A}$ and $\bi{B}$ are both stored at the cell centers (e.g., as done in the \texttt{WhiskyMHD} code~\cite{giacomazzo2011accurate}). An example of the different results obtained with a staggered and a non--staggered scheme for a 1D shock--tube test is shown in \Fref{Fig2}.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.5\linewidth]{By_Compare_LR_STAGGvsNOSTAGG.eps}
\captionof{figure}{\label{Fig2}Comparison of the results of the \texttt{Balsara 1} test (from \cite{balsara2001total}), obtained with a staggered (red diamonds) and a non--staggered (blue dots) vector potential. Post--shock oscillations are clearly visible in the blue curve. We note that the non--staggered scheme is nevertheless stable, since the maximum amplitude of those oscillations does not grow indefinitely during the evolution. We recall that \texttt{WhiskyMHD} indeed applied a Kreiss-Oliger dissipation to the vector potential in order to remove such oscillations~\cite{giacomazzo2011accurate}.}
\end{center}
\end{figure}
\subsection{Boundary Conditions}
\label{BC}
When developing new codes to work within the \texttt{EinsteinToolkit} framework, the treatment of boundary conditions (BC) is usually left to the generic thorn \texttt{Boundary}~\cite{ET}. Through this approach, the \texttt{Spritz} code can use ``flat'' or ``none'' BC, as already implemented in the \texttt{WhiskyMHD} \cite{giacomazzo2007whiskymhd} and \texttt{GRHydro} \cite{mosta2013grhydro} codes. The ``flat'' BC simply copies to the ghost zones the value that each variable has at the outermost grid point. The ``none'' BC instead does not update the ghost zones and keeps the values of the variables there equal to the ones set by the initial data routine.
Although the ``flat'' and ``none'' BC have been successfully used in simulations with the aforementioned codes, we decided to modify the BC at the external boundary of the computational domain in order to provide a more accurate calculation of $\bi{B}$. In particular, we followed the work presented in \cite{etienne2015illinoisgrmhd} and implemented the numerical extrapolation of $\bi{A}$ and $\Psi_\mathrm{mhd}$ at the outer boundary as described in the \texttt{IllinoisGRMHD} code. Basically, for each grid--point in the outer boundary we apply the following formula:
\begin{equation}
\label{numextrap}
F_{i}^{j} = \cases{2F_{i-1}^{j} - F_{i-2}^{j} \qquad \mbox{for } i = N^{j}-2,N^{j}-1,N^{j} \\
2F_{i+1}^{j} - F_{i+2}^{j} \qquad \mbox{for } i = 3,2,1 }
\end{equation}
where $F \in \left\lbrace \bi{A}, \Psi_\mathrm{mhd} \right\rbrace$, $j \in \left\lbrace 1,2,3 \right\rbrace$, $N$ is the number of grid--points in the $j$--direction, and we use 3 points in the ghost zones for each direction.
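A one-dimensional sketch of \Eref{numextrap}, with 3 ghost points per side (illustrative only), reads:
\begin{verbatim}
import numpy as np

def extrapolate_ghosts(F, ng=3):
    """Linear extrapolation into ng ghost points per side (1D array F)."""
    for i in range(len(F) - ng, len(F)):     # upper boundary
        F[i] = 2.0 * F[i - 1] - F[i - 2]
    for i in range(ng - 1, -1, -1):          # lower boundary
        F[i] = 2.0 * F[i + 1] - F[i + 2]
    return F
\end{verbatim}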
In addition, the user may choose whether BC for $\bi{A}$ and $\Psi_\mathrm{mhd}$ should be given by \Eref{numextrap} or simply be obtained by the other two conditions provided by the \texttt{Boundary} thorn.
Finally, we also successfully tested the implementation of periodic BC provided by the thorn \texttt{Periodic}~\cite{ET} through the Loop Advection test (see \Sref{2D}), in both uniform and mesh--refined grids.
\subsection{Primitive variables recovering}
\label{C2P}
As mentioned in \Sref{RieHRSC}, the computation of the fluxes at each time step depends on the values of the primitive variables $\bi{U}$, although we evolve the conserved ones $\bi{F}^0$.
As is common in many conservative approaches, one of the most delicate points is the inversion of \Eref{P2Csystem}, a problem that presents no analytic solution; thus one has to apply a numerical method (usually a Newton-Raphson scheme).
In the literature many methods have been presented to perform this step~\cite{noble2006primitive, Siegel2018}. In the \texttt{Spritz} code we implemented both the 2D method used in \texttt{WhiskyMHD}~\cite{giacomazzo2007whiskymhd} and the one presented in \cite{noble2006primitive} and used in \texttt{GRHydro}.
\subsection{Atmosphere}
\label{Atmo}
Like any grid-based GRMHD code, \texttt{Spritz} cannot handle zero values for the rest-mass density and a minimum value $\rho_\mathrm{atm}$ needs to be set. If at time $t$ the rest-mass density $\rho$ computed in our conservative-to-primitive routine is such that $\rho < \rho_\mathrm{atm}$, then its value is reset to $\rho_\mathrm{atm}$, the pressure and specific internal energy are recomputed using a polytropic EOS, and the fluid three--velocity is set to zero. In the tests presented here we typically set $\rho_\mathrm{atm} = 10^{-7} \rho_{0,\mathrm{max}}$, where $\rho_{0,\mathrm{max}}$ is the initial maximum value of the rest-mass density.
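A schematic version of this atmosphere reset, assuming a polytropic EOS $p_\mathrm{gas}=K\rho^{\Gamma}$ (all names and default values below are illustrative), is the following:
\begin{verbatim}
import numpy as np

def apply_atmosphere(rho, v, p, eps, K=100.0, Gamma=2.0, rho_max0=1.0):
    """rho, p, eps: arrays of shape (N,); v: array of shape (N, 3)."""
    rho_atm = 1e-7 * rho_max0          # floor used in the tests here
    mask = rho < rho_atm
    rho = np.where(mask, rho_atm, rho)
    p = np.where(mask, K * rho**Gamma, p)
    eps = np.where(mask, K * rho**(Gamma - 1.0) / (Gamma - 1.0), eps)
    v = np.where(mask[:, None], 0.0, v)  # zero the 3-velocity
    return rho, v, p, eps
\end{verbatim}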
\subsection{Equation of State}
\label{EOS}
To close the GRMHD system of equations, an equation of state providing a relation between $\rho$, $\varepsilon$, and $p_\mathrm{gas}$ must be supplied. Many EOS exist, from analytic ones, such as the ``ideal fluid'' or the ``polytropic'' EOS~\cite{horedt2004polytropes}, to more complex ones that can only be expressed in tabulated form~\cite{COMPOSE}. One of the most challenging research fields in astrophysics focuses on understanding how matter behaves in the core of NSs, where the rest-mass density may reach values as high as $\sim\!10^{15}$ g cm$^{-3}$, not reproducible in Earth laboratories. Different EOS result in different bulk properties of the star, e.g., different maximum mass or equatorial radius for both spherical (i.e., non--rotating) and rapidly--rotating equilibrium configurations of NS models (see \cite{cipolletta2015fast} for examples taking into account EOS of various stiffness). It is therefore crucial for any GRMHD code to be able to handle different EOS with different compositions as well as different treatments of nucleon interactions, in order to improve the comparison between theoretical models and observations.
The \texttt{Spritz} code can implement both analytic and tabulated EOS. This is done via the \texttt{EOSOmni} thorn provided by the \texttt{EinsteinToolkit} which supports analytic EOS, such as ``ideal fluid'' and ``piecewise polytropic'' ones~\cite{Read2009}, and ``tabulated'' EOS.
For the sake of clarity, we report the explicit equations for the ``ideal fluid'' EOS, that can be written as
\begin{equation}
\label{IFEOS}
p_\mathrm{gas} = \left( \Gamma - 1 \right) \rho \varepsilon,
\end{equation}
where $\Gamma$ is the adiabatic index, and for the ``polytropic'' EOS, that reads
\begin{eqnarray}
\label{polyEOS}
p_\mathrm{gas} &=& K \rho^\Gamma\,,\\
\varepsilon &=& K \rho^{\Gamma-1}/(\Gamma - 1)\,,
\end{eqnarray}
where $K$ is the polytropic constant. The tests presented in this paper use only the ``ideal fluid'' EOS. A follow-up paper will instead present tests with cold and finite temperature equations of state, including also the evolution of the electron fraction and neutrino emission.
\subsection{Adaptive Mesh Refinement}
\label{AMR}
Adaptive Mesh Refinement (AMR) is very important in full 3D simulations of binary mergers because it allows for the optimization of the number of grid points by refining only interesting regions of the domain while maintaining a sufficiently large computational domain to reduce the effects of external boundaries and to allow for the extraction of gravitational wave signals far away from the source.
In the \texttt{EinsteinToolkit} framework~\cite{ETKpaper, ET}, this task is performed via the \texttt{Carpet} driver \cite{Carpet,schnetter2004evolutions}. Particular care must be taken in the case of staggered variables, like $\bi{A}$ and $\Psi_\mathrm{mhd}$ in the \texttt{Spritz} code, as already mentioned in \Sref{Stagg}. In particular, one needs to use restriction and prolongation operators separate from those used for variables located at the cell centers. Such operators are already provided by the \texttt{Carpet} driver and are the same used by the \texttt{IllinoisGRMHD} code. In \Sref{2D} we also show some tests of our AMR implementation.
\subsection{Spacetime Evolution}
\label{NumSTevo}
The spacetime evolution is performed using the \texttt{McLachlan} thorn~\cite{Brown:2008sb, Kranc:web, McLachlan:web}, which is part of the \texttt{EinsteinToolkit}. It adopts the BSSNOK formulation presented in \cite{baumgarte1998numerical,nakamura1987general,shibata1995evolution} and for which the numerical implementation has been presented in \cite{baiotti2005three,alcubierre2003gauge,alcubierre2000towards}. More details on the code can be found in~\cite{ETKpaper}.
\section{Results}
\label{sec4}
As already stressed in the Introduction, the central goal of the \texttt{Spritz} code is to perform simulations of NS-NS and NS-BH mergers.
In order to address such a complex task with the necessary confidence, we need to assess the reliability of the code in a variety of physical conditions.
In this Section, we report the results of our extensive testing, including a number of 1--, 2-- and 3--dimensional simulations. These include critical tests that have already been considered in several previous papers (see, e.g., \cite{mosta2013grhydro,etienne2015illinoisgrmhd,balsara2001total,beckwith2011second} and references therein), allowing for a direct comparison with other codes.
\subsection{1D tests}
\label{1D}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=\linewidth]{Compare_BTests_SPRITZvsEXACT.eps}
\captionof{figure}{\label{Fig3}Comparison of numerical results (red dots) and exact solutions (continuous black lines) for the suite of tests of \cite{balsara2001total}. Left and right columns show respectively the spatial distributions of the rest-mass density and the magnetic field component $B^y$ at the final time of the evolution.
Here the \texttt{Balsara 1}, \texttt{2}, \texttt{4} and \texttt{5} tests are performed with the third order \texttt{PPM} method, while the \texttt{Balsara 3} test is performed with the second order \texttt{MINMOD} method: this test is the most demanding one, due to the jump of four orders of magnitude in the initial pressure, and it requires a slightly more dissipative method to succeed.}
\end{center}
\end{figure}
\begin{table}[t]
\footnotesize
\begin{center}
\begin{tabular}{@{}c*{10}{D{.}{.}{4.1}}}
\br
Test: & \multicolumn{2}{c}{ \texttt{1} } & \multicolumn{2}{c}{ \texttt{2} } & \multicolumn{2}{c}{ \texttt{3} } & \multicolumn{2}{c}{ \texttt{4} } & \multicolumn{2}{c}{ \texttt{5} } \\
\mr
& L & R & L & R & L & R & L & R & L & R \\
\mr
\lineup $\rho$ & 1.0 & 0.125 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.08 & 1.0 \\
\lineup $p_\mathrm{gas}$ & 1.0 & 0.1 & 30.0 & 1.0 & 1000.0 & 0.1 & 0.1 & 0.1 & 0.95 & 1.0 \\
\lineup $v_x$ & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.999 & -0.999 & 0.4 & -0.45 \\
\lineup $v_y$ & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.3 & -0.2 \\
\lineup $v_z$ & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.2 & 0.2 \\
\lineup $B^x$ & 0.5 & 0.5 & 5.0 & 5.0 & 10.0 & 10.0 & 10.0 & 10.0 & 2.0 & 2.0 \\
\lineup $B^y$ & 1.0 & -1.0 & 6.0 & 0.7 & 7.0 & 0.7 & 7.0 & -7.0 & 0.3 & -0.7 \\
\lineup $B^z$ & 0.0 & 0.0 & 6.0 & 0.7 & 7.0 & 0.7 & 7.0 & -7.0 & 0.3 & 0.5 \\
\br
\end{tabular}\\
\caption{\label{tab1D}Initial data for \texttt{Balsara} relativistic shock tube tests.}
\end{center}
\end{table}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=\linewidth]{B1_HR_SPRITZvsGRHYDRO.eps}
\captionof{figure}{\label{Fig4}Comparison of results on the \texttt{Balsara 1} test (from \cite{balsara2001total}) obtained with the \texttt{Spritz} code (red dots) and the \texttt{GRHydro} code (green diamonds) \cite{mosta2013grhydro}.}
\end{center}
\end{figure}
In \Fref{Fig3}, we present the results for 1--dimensional (1D) relativistic shock--tube problems corresponding to the suite of tests of \cite{balsara2001total}. Here, our numerical solution of such problems can be directly compared with the exact solutions computed via the code presented in \cite{giacomazzo2006exact}. Initial data for such tests are described in \Tref{tab1D}. In all tests we employ an ideal fluid EOS, with $\Gamma = 2.0$ for test \texttt{Balsara 1} and $\Gamma = 5/3$ for the others. The final evolution time is $t = 0.55$ for test \texttt{Balsara 5} and $t = 0.4$ for the others.
All tests show an excellent agreement between the numerical results and the exact solutions.
We also compared the results of these 1D tests obtained with the \texttt{Spritz} code with those already published for the numerical code \texttt{GRHydro} \cite{mosta2013grhydro}, finding a perfect match.
In \Fref{Fig4}, we show an example of such comparison referring to the \texttt{Balsara 1} shock--tube test.
Finally, \Fref{Fig5} shows our results on the most demanding \texttt{Balsara 3} test with different resolutions (200, 800, and 1600 grid points). Higher resolution leads to a significant increase in accuracy, which is particularly evident at the shock front (compare also with the exact solution in \Fref{Fig3}).
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=\linewidth]{B3_SpritzResolutions.eps}
\captionof{figure}{\label{Fig5}Comparison of results on the \texttt{Balsara 3} test~\cite{balsara2001total} obtained with the \texttt{Spritz} code at three different resolutions: low resolution (200 points -- green diamonds), medium resolution (800 points -- blue triangles) and high resolution (1600 points -- red dots).}
\end{center}
\end{figure}
\subsection{2D tests}
\label{2D}
We now move on to discuss the 2D tests performed with the \texttt{Spritz} code.
In this work, we considered three types of 2D tests, namely the cylindrical explosion, the magnetic rotor and the magnetic loop advection, all performed in Cartesian coordinates. We discuss them in some detail in the following subsections.
\subsubsection{Cylindrical Explosion}
\label{CylExp}
\hfill\\
\noindent The cylindrical explosion (also known as the cylindrical blast wave) is a demanding multidimensional shock test, first introduced by \cite{komissarov1999godunov}, and later modified and implemented in \cite{mosta2013grhydro,etienne2010relativistic,beckwith2011second,del2007echo}. This test considers a uniformly magnetized domain consisting of a dense, over--pressured cylinder in the central region expanding in a surrounding ambient medium. Here, we adopt the parameters from the setup described in \cite{mosta2013grhydro}. For the cylinder, we set
\begin{equation}
\label{CylBWintpar}
r_\mathrm{in}= 0.8, \; r_\mathrm{out}= 1.0, \; \rho_\mathrm{in} = 10^{-2}, \; p_\mathrm{gas,in} = 1.0 ,
\end{equation}
while for the surrounding ambient medium, we set
\begin{equation}
\label{CylBWextpar}
\rho_\mathrm{out} = 10^{-4}, \; p_\mathrm{gas,out} = 3 \times 10^{-5}.
\end{equation}
\noindent Here, $r_\mathrm{in}$ and $r_\mathrm{out}$ are the radial parameters used in the density profile smoothing prescription (and similarly for the pressure profile) adopted in \cite{mosta2013grhydro}, such that
\begin{equation}
\label{CylBWdens}
\rho(r) = \cases{ \rho_\mathrm{in} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \ ; \ r \leq r_\mathrm{in} \\
\exp \Bigg[ \frac{(r_\mathrm{out} - r) \ln \rho_\mathrm{in} + (r-r_\mathrm{in}) \ln \rho_\mathrm{out}}{r_\mathrm{out} - r_\mathrm{in}} \Bigg] \, ; \ r_\mathrm{in} < r < r_\mathrm{out} \\
\rho_\mathrm{out} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad ; \ r \geq r_\mathrm{out}
}
\end{equation}
\noindent The fluid velocity is initially set to zero and the magnetic field is initially uniform with $B^x = 0.1$ and $B^y = B^z = 0$. The test is performed on a $200 \times 200$ grid with x-~and y-coordinates spanning over the range $[-6,6]$. We adopt a Courant factor of 0.25 and an ideal fluid EOS with adiabatic index $\Gamma = 4/3$. We use the second order MINMOD reconstruction method along with the HLLE flux solver and the RK4 method for time-step evolution.
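For concreteness, the initial data above can be reproduced in a few lines of Python; the following sketch is our own illustration (variable names are ours and cell centering is ignored), not an excerpt from the \texttt{Spritz} sources:
\begin{verbatim}
import numpy as np

# 200x200 grid covering [-6, 6]^2, as in the setup described above.
N = 200
x = np.linspace(-6.0, 6.0, N)
X, Y = np.meshgrid(x, x, indexing='ij')
r = np.sqrt(X**2 + Y**2)

# Cylinder and ambient-medium parameters.
r_in, r_out = 0.8, 1.0
rho_in, rho_out = 1e-2, 1e-4
p_in, p_out = 1.0, 3e-5

def smoothed(q_in, q_out):
    """Exponential smoothing between r_in and r_out, following the
    density/pressure prescription of the equation above."""
    taper = np.exp(((r_out - r) * np.log(q_in)
                    + (r - r_in) * np.log(q_out)) / (r_out - r_in))
    return np.where(r <= r_in, q_in, np.where(r >= r_out, q_out, taper))

rho = smoothed(rho_in, rho_out)
p_gas = smoothed(p_in, p_out)

# Static fluid in an initially uniform magnetic field along x.
vx = vy = vz = np.zeros_like(rho)
Bx = np.full_like(rho, 0.1)
By = Bz = np.zeros_like(rho)
\end{verbatim}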
The resulting structure of the blast wave is shown in \Fref{Fig6} for the final time $t=4.0$. In particular, we show the two-dimensional distribution of gas pressure $p_\mathrm{gas}$, Lorentz factor $W$ (together with magnetic field lines), and the $x$-- and $y$--components of the magnetic field, $B^x$ and $B^y$. This figure shows a behavior very similar to the results already presented in the literature \cite{mosta2013grhydro,etienne2010relativistic,del2007echo}.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.9\linewidth]{CylBW2D.eps}
\captionof{figure}{\label{Fig6} Cylindrical explosion test snapshots at the final evolution time $t=4$, showing the distribution of gas pressure $p_\mathrm{gas}$ (top--left), Lorentz factor $W$ together with magnetic field lines (top--right), and x-~and y-components of the magnetic field, $B^x$ (bottom--left) and $B^y$ (bottom--right). The resolution considered here is $\Delta x = \Delta y = 0.06$.}
\end{center}
\end{figure}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.95\linewidth]{CylBW1D.eps}
\captionof{figure}{\label{Fig7} One-dimensional cut along the x-axis of the cylindrical explosion test for the final evolution time $t=4$. A comparison between a low resolution test with $N=200$ grid--points (green solid line) and high resolution test with $N=400$ grid--points (blue dashed line) is shown in the left--side panels for the rest-mass density $\rho$ (top) and $x$--component of the magnetic field $B^x$ (bottom). The right--side panels show a comparison of the same quantities between the high resolution test performed with \texttt{Spritz} (blue dashed line) and the same test performed with \texttt{GRHydro} (red solid line).}
\end{center}
\end{figure}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.9\linewidth]{CylBW2D_AMR.eps}
\captionof{figure}{\label{Fig8} Cylindrical blast wave test with adaptive mesh refinement (AMR). A comparison is made for the final pressure configuration at $t=4$ between the low resolution test (left panel) performed with uniform grid and spacing $\Delta x = \Delta y = 0.06$ and the AMR test (right panel) including a refined inner grid (red box) with double resolution (i.e., grid-spacing $\Delta x = \Delta y = 0.03$). The two results are in agreement and no spurious effects are observed at the inner grid boundary.}
\end{center}
\end{figure}
\Fref{Fig7} provides instead a quantitative indication of the accuracy of our code. In this case, we show a one-dimensional slice along $y=0$ of the final blast wave configuration at time $t=4.0$, in terms of rest-mass density and $B^x$.
Two cases are considered: on the left, we compare the results obtained with low ($200$ grid--points) and high resolution ($400$ grid--points); on the right, we compare the high resolution test results obtained with \texttt{Spritz} with those obtained with \texttt{GRHydro} \cite{mosta2013grhydro}.
For the first comparison, we notice that the peaks in $\rho$ differ slightly, since the ability to capture sharp peaks depends significantly on resolution. The values of $B^x$ show a much smaller deviation, due to the smoother variation of this quantity.
For the second comparison, the agreement between \texttt{Spritz} and \texttt{GRHydro} appears excellent, further verifying the robustness of our code.
To validate the implementation of adaptive mesh refinement, we carried out another simulation including an inner refined grid covering the x-~and y-coordinates in the range [-3,3] with grid-spacing $\Delta x = \Delta y = 0.03$ (while the rest of the domain has double grid spacing).
\Fref{Fig8} shows the comparison with the uniform grid test in terms of final pressure distribution. No significant differences are found, nor specific effects at the inner grid separation boundary, demonstrating a correct implementation of the AMR infrastructure.
\subsubsection{Magnetic Rotor}
\label{MagRot}
\hfill\\
\noindent The second two-dimensional test we consider is the magnetic cylindrical rotor, originally introduced for classical MHD in \cite{balsara1999staggered,toth2000b} and later employed also for relativistic MHD in \cite{etienne2010relativistic,del2003efficient}. The initial setup of this test consists of a dense, rapidly spinning fluid at the center, surrounded by a static ambient medium, with the entire domain set with a uniform magnetic field and pressure. For the initial parameters, we take the radius of the inner rotating fluid as $r=0.1$, with inner rest-mass density $\rho_\mathrm{in} = 10.0$ and uniform angular velocity $\Omega=9.95$, so that the maximum value of the fluid three--velocity is $v_\mathrm{max} = 0.995$. For the outer static ambient medium, we set the rest-mass density to $\rho_\mathrm{out} = 1.0$. The initial magnetic field and gas pressure are uniform, with $B^i=(1.0, 0, 0)$ and $p_\mathrm{gas,in} = p_\mathrm{gas,out} = 1.0$. The problem is set up on a $400 \times 400$ grid with $x$-- and $y$--coordinates lying in the range $[0,1]$. Here, we fix the Courant factor to 0.25 and consider an ideal fluid EOS with adiabatic index $\Gamma = 5/3$. For the system evolution, we use the second order MINMOD reconstruction method, the HLLE flux solver, and the RK4 method for time--stepping.
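As for the previous test, the initial data are simple to reproduce. A minimal Python sketch (our own illustration; we assume the rotor is centered at $(0.5,0.5)$, which is not stated explicitly above) is:
\begin{verbatim}
import numpy as np

# 400x400 grid covering [0, 1]^2; rotor assumed centered at (0.5, 0.5).
N = 400
x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing='ij')
dx, dy = X - 0.5, Y - 0.5
r = np.sqrt(dx**2 + dy**2)

r_rot, Omega = 0.1, 9.95            # rotor radius and angular velocity
inside = r <= r_rot

rho = np.where(inside, 10.0, 1.0)   # dense rotor in a static ambient
p_gas = np.ones_like(rho)           # uniform gas pressure
Bx = np.ones_like(rho)              # uniform B^i = (1, 0, 0)
By = np.zeros_like(rho)

# Rigid rotation v = Omega x r inside the rotor, static outside;
# the maximum speed is Omega * r_rot = 0.995, as quoted above.
vx = np.where(inside, -Omega * dy, 0.0)
vy = np.where(inside, Omega * dx, 0.0)
\end{verbatim}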
\Fref{Fig9} shows the two-dimensional profiles of density $\rho$, gas pressure $p_\mathrm{gas}$, magnetic pressure $p_\mathrm{mag}= b^2/2$, and Lorentz factor $W$ along with magnetic field lines, all at the final time $t=0.4$. The rotation of the cylinder causes magnetic winding. As one can see in the bottom--right panel of \Fref{Fig9}, the field lines are twisted by $\sim 90^\circ$ in the central region. This twisting of the field lines eventually slows down the rotation of the cylinder. A decrease in $\rho$, $p_\mathrm{gas}$, and $p_\mathrm{mag}$ is also observed in the central region, along with the formation of an oblate shell of higher density. Also for this test, the results are in good agreement with the ones in the literature \cite{mosta2013grhydro,etienne2010relativistic,del2003efficient}.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.9\linewidth]{MagRot2D.eps}
\captionof{figure}{\label{Fig9} Magnetic rotor test at the final evolution time $t=0.4$: density $\rho$ (top--left), gas pressure $p_\mathrm{gas}$ (top--right), magnetic pressure $p_\mathrm{mag}$ (bottom--left), and Lorentz factor $W$ together with magnetic field lines (bottom--right). The resolution considered is $\Delta x = \Delta y = 0.0025$.}
\end{center}
\end{figure}
Similarly to the test discussed in \sref{CylExp}, we perform a quantitative check by taking a one--dimensional slice along $y=0$ of the final rotor configuration at $t=0.4$. Again, two cases are considered: (i) a comparison between the low and high resolution runs, with $250$ and $400$ grid--points, respectively; (ii) a comparison of our high resolution test with the corresponding one performed with \texttt{GRHydro} \cite{mosta2013grhydro}.
\Fref{Fig10} shows this comparison made for the two quantities $\rho$ and $B^x$. For (i), as the resolution is increased, the peaks in $\rho$ as well as $B^x$ are better captured, showing signs of convergence towards the expected solution. For (ii), except for a minor difference in the peak values, the curves are comparable.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.95\linewidth]{MagRot1D.eps}
\captionof{figure}{\label{Fig10} One--dimensional cut along the $x$--axis for the magnetic rotor test at the final evolution time $t=0.4$. A comparison between a low resolution test with $250$ grid--points (green solid line) and a high resolution test with $400$ grid--points (blue dashed line) is shown for the rest-mass density $\rho$ and the $x$--component of the magnetic field $B^x$ in the top and bottom left--side panels, respectively. In the top and bottom right--side panels, the same quantities are compared for the analogous high resolution test performed with \texttt{Spritz} (blue dashed line) and with \texttt{GRHydro} \cite{mosta2013grhydro} (red solid line).}
\end{center}
\end{figure}
\subsubsection{Loop Advection}
\label{LoopAdv}
\hfill\\
\noindent The third and last two--dimensional test we performed is the advection of a magnetic field loop, which was first described in \cite{devore1991flux} and appeared later in a slightly modified version (the one we consider) in \cite{mosta2013grhydro,beckwith2011second,gardiner2005unsplit,stone2008athena}. In this test, a magnetized circular field loop is propagated within the surrounding non--magnetized ambient medium with a constant velocity in a two--dimensional periodic grid. In particular, the analytical prescription for the initial imposed magnetic field (taken from \cite{mosta2013grhydro}) is given by
\begin{equation}
\label{LoopAdvMag}
B^x, \ B^y = \cases{ -A_\mathrm{loop}y/r, \ A_\mathrm{loop}x/r \ ; \quad r<R_\mathrm{loop} \\
\qquad \qquad \quad \qquad \ \; \; 0 \ ; \quad r\geq R_\mathrm{loop}
}
\end{equation}
where $A_\mathrm{loop}$ is the amplitude of the magnetic field, $r = \sqrt{x^2 + y^2}$ is the radial coordinate, $R_\mathrm{loop}$ is the loop radius, and $B^z$ is set to zero. The corresponding vector potential prescription from which \Eref{LoopAdvMag} can be obtained is given by $\bi{A}(r) = (0,0,\mathrm{max}[0,A_\mathrm{loop}(R_\mathrm{loop}-r)])$~\cite{gardiner2005unsplit}.
For the initial parameters, we set the density as $\rho=1.0$ and the pressure as $p_\mathrm{gas}=3.0$ throughout the computational domain. For the loop, we assume $A_\mathrm{loop}=0.001$ and $R_\mathrm{loop}=0.3$. The fluid 3-velocity is set to $v^i=(1/12, 1/24, 0)$ for a case where $v^z=0$ and $v^i=(1/12, 1/24, 1/24)$ for a more generic case in which the vertical component of the velocity is non-zero, i.e., $v^z\neq 0$. We run the test both in low resolution with a $128\times 128$ grid and in high resolution with a $256\times 256$ grid, where the $x$-- and $y$--coordinates span the range $[-0.5,0.5]$. The Courant factor is 0.4 and the adiabatic index for the ideal EOS is $\Gamma=5/3$. As in the previous 2D tests, we utilize the MINMOD reconstruction method along with the HLLE flux solver and the RK4 method for time-step evolution.
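Since the loop is best initialized through the vector potential, the following Python sketch (our own illustration) sets up $A_z$ and obtains $B^x$ and $B^y$ with centred differences, which keeps the corresponding discrete divergence of $\mathbf{B}$ at round-off level:
\begin{verbatim}
import numpy as np

# 128x128 periodic grid covering [-0.5, 0.5]^2 (low resolution run).
N = 128
x = np.linspace(-0.5, 0.5, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
r = np.sqrt(X**2 + Y**2)

A_loop, R_loop = 1e-3, 0.3

# Vector potential A_z = max(0, A_loop * (R_loop - r)), as above.
Az = np.maximum(0.0, A_loop * (R_loop - r))

# B^x = dAz/dy, B^y = -dAz/dx via centred differences on the
# periodic grid; the two mixed derivatives commute, so the centred
# divergence of B vanishes to machine precision.
h = x[1] - x[0]
Bx = (np.roll(Az, -1, axis=1) - np.roll(Az, 1, axis=1)) / (2 * h)
By = -(np.roll(Az, -1, axis=0) - np.roll(Az, 1, axis=0)) / (2 * h)

# Uniform advection velocity for the v^z != 0 case.
vx, vy, vz = 1/12, 1/24, 1/24
\end{verbatim}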
The outcome of the $v^z\neq 0$ test case is shown in \Fref{Fig11}. Here, the top row illustrates the initial configuration of the magnetic loop for the quantities $B^x$ and $p_\mathrm{mag}=b^2/2$ at $t=0$. After one entire cycle of the loop across the domain, at $t=24$, the same quantities are depicted in the middle row for the low resolution run and in the bottom row for the high resolution run. We notice a significant loss of magnetic pressure due to numerical dissipation for the low resolution test after one evolution cycle, as also reported in \cite{mosta2013grhydro}, which is however smaller at higher resolution. Our results are comparable with the ones presented in \cite{mosta2013grhydro}. It is worth noting that the expression for the magnetic pressure used for \Fref{Fig11} is $p_\mathrm{mag}=b^2/2$ and differs from the expression used for figure 10 of \cite{mosta2013grhydro} by a factor of $1/2$ (in~\cite{mosta2013grhydro} the authors actually plotted $b^2$).
To consider a less dissipative numerical scheme, we also perform another run in low resolution employing the PPM reconstruction and compare the results with those obtained with MINMOD reconstruction. This is shown in \Fref{Fig12}, where the top and bottom panels represent the outcome of the runs with MINMOD reconstruction and PPM reconstruction, respectively.
The first column depicts the initial data at $t=0$, the second column shows the loop at final time $t=24$, while the third column shows the logarithmic values of the absolute differences between the initial and final times.
As expected, we find significantly lower dissipation in the PPM case.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.85\linewidth]{LoopAdv2D.eps}
\captionof{figure}{\label{Fig11} Loop advection test with $v^z=1/24$. Left and right columns represent the x-component of the magnetic field $B^x$ and the magnetic pressure $p_\mathrm{mag} = b^2/2$, respectively. The initial data for $B^x$ and its corresponding $p_\mathrm{mag}$ at $t=0$ is depicted in the top row, while middle and bottom rows represent these quantities after one periodic cycle of evolution, i.e., at $t=24$, in low resolution ($\Delta x = 1/128$) and high resolution ($\Delta x = 1/256$), respectively. Our results are in very good agreement with those reported in \cite{mosta2013grhydro}. }
\end{center}
\end{figure}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=\linewidth]{LoopAdv2D_PPM.eps}
\captionof{figure}{\label{Fig12} Comparison between the MINMOD and PPM reconstruction methods for the loop advection test with $v^z=1/24$. Top and bottom rows correspond to results obtained with MINMOD and PPM respectively. First column depicts the initial configuration of the magnetic field $B^x$ at $t=0$, second column shows the final configuration of $B^x$ after one periodic cycle at $t=24$, and the third column shows the logarithmic absolute differences in $B^x$ between the initial and final times.}
\end{center}
\end{figure}
\subsection{3D tests}
\label{3D}
We now present the results of our 3D tests, mostly including a fully dynamical spacetime.
\subsubsection{Spherical Explosion}
\label{SE}
\hfill\\
\noindent We present here the results of a very demanding GRMHD test that is not usually performed by other GRMHD codes and that is successfully passed by the \texttt{Spritz} code: the so--called Spherical Explosion.
Usually, GRMHD codes based on Cartesian coordinates are tested with the Cylindrical Explosion test (refer to \sref{CylExp}), because the cylindrical symmetry can be well exploited in such a geometrical setting. Spherical Explosion tests, instead, have commonly been performed with GRMHD codes working in spherical coordinates \cite{cerda2008new,cerda2007general}, which are not well-suited for dealing with cylindrical symmetry.
What makes the Spherical Explosion test challenging in Cartesian coordinates is indeed the potential loss of accuracy in regions where the shock front is not parallel to the orientation of the grid--cells' faces.
The test settings are an extension in spherical symmetry of the Cylindrical Explosion test of \sref{CylExp}. We consider an inner dense sphere of radius $R_\mathrm{in} = 0.8$ centered in the domain's origin with $\rho_\mathrm{in} = 10^{-2}$ and $p_{gas,\mathrm{in}} = 1.0$, surrounded by a spherical shell covering the radial range $R_\mathrm{in} < r < R_\mathrm{out} = 1.0$ where pressure and density are characterized by an exponential decay analogous to the prescription given in \Eref{CylBWdens}, except that here a spherical radius is considered instead of a cylindrical one.
At $r > R_\mathrm{out}$, we have then a low-pressure uniform fluid with $\rho_\mathrm{out} = 10^{-4}$ and $p_{gas,\mathrm{out}} = 3.0 \times 10^{-5}$.
In addition, following \cite{cerda2008new,cerda2007general}, a uniform magnetic field parallel to the $z$ axis is added all over the domain. The domain extends over $\left[ -6.0, 6.0 \right]$ and is covered by $160$ grid--cells in each direction. Although a direct comparison with the spherical coordinate settings of \cite{cerda2008new,cerda2007general} cannot be done in a straightforward way, it is worth noting that this choice for the resolution corresponds to considering $80$ cells in the radial direction along the polar axis, i.e., the low--resolution version of the results presented in the aforementioned papers. We decided to perform the evolutions for a total time of $t_\mathrm{final} = 6.0$, with a CFL factor of $0.25$.
Our runs did not crash even at this late time, although the shock--front always reaches the boundary of the domain (that is treated with ``none'' BCs).
We also note that in this case the imposed lower limit for the rest-mass density (defining the atmospheric floor, see \Sref{Atmo}) is $\rho_\mathrm{atm} = 10^{-12}$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.8\linewidth]{SphericalExplosion2D.eps}
\captionof{figure}{\label{Fig16} Spherical Explosion test results at time $t = 4.0$. The top row shows the result for the non--magnetised case, the middle row the intermediate magnetization case with $B^z = 0.1$, and the bottom row the strongly magnetized case with $B^z = 1.0$. The left and right columns show respectively the logarithm of the gas pressure and the Lorentz factor, along with contour lines of $ || B || = \sqrt{B^i B_i}$.}
\end{center}
\end{figure}
In \Fref{Fig16} we report, on separate rows, the results on the $y = 0$ plane of the tests performed with magnetic field strength $B^z = 0.0$, $0.1$, and $1.0$, respectively. In particular, we show the gas pressure and Lorentz factor $W$ (on the left and right columns, respectively) at time $t=4$.
Looking at the top--right panel (Lorentz factor in the non--magnetized case), we can observe small deviations from spherical symmetry exactly aligned with the Cartesian axes, giving a hint of the geometrical issues brought by such a demanding test.
In fact, as already noted by \cite{del2003efficient} for the Cylindrical Explosion, the biggest problems are due to the fluid velocity components along the diagonals. However, despite the accumulation of errors along the diagonals due to the non--perpendicularity of the fluxes, the spherical shape of the shock front seems to be very well preserved in this case, even at the relatively low resolution considered here.
In the presence of a dynamically important magnetic field oriented along the $z$ axis, the shock front naturally deviates from spherical symmetry (see middle row of \Fref{Fig16}). Finally, when the magnetic field strength is very high (see bottom row), the central region gets completely evacuated. Even in such an extreme case, the evolution still proceeds without any problem.
A final important note is that all the Spherical Explosion tests presented here were performed with the MINMOD reconstruction and the LxF flux method, but without adopting any additional dissipation or ad-hoc fixes.
\subsubsection{TOV star}
\label{TOV}
\hfill\\
\noindent Static, spherically symmetric stars in general relativity are best described by the Tolman--Oppenheimer--Volkoff (TOV) equations \cite{oppenheimer1939massive,tolman1939static}. To further assess the stability and accuracy of our code, the next test we considered is the evolution of a non--rotating stable TOV configuration for both non--magnetised and magnetised cases. For the test setup, we adopt the model described in \cite{baiotti2005three} that we build using the TOVSolver thorn \cite{ET}.
In particular, the initial TOV star configuration is generated using a polytropic EOS with adiabatic index $\Gamma=2.0$, polytropic constant $K=100$, and initial central rest-mass density $\rho=1.28\times10^{-3}$.
We perform the evolution of this initial configuration adopting an ideal fluid EOS with the same value for $\Gamma$. For the magnetised version, we add the magnetic field to the computed TOV configuration using the analytical prescription of the vector potential $A_\phi$ given by
\begin{equation}
\label{VecPot}
A_\phi \equiv A_\mathrm{b} \varpi^2 {\rm max} \left( p - p_\mathrm{cut}, 0 \right)^{n_s} \ ,
\end{equation}
where $\varpi$ is the cylindrical radius, $A_\mathrm{b}$ is a constant, $p_\mathrm{cut}=0.04p_\mathrm{max}$ determines the cutoff when the magnetic field goes to zero inside the NS, with $p_\mathrm{max}$ corresponding to the initial maximum gas pressure, and $n_s=2$ sets the degree of differentiability of the magnetic field strength \cite{giacomazzo2011accurate}. The value of $A_b$ is chosen such that the maximum value of the initial magnetic field strength is set to $\approx1\times 10^{16} \ \mathrm{G}$. This generates a dipole-like magnetic field confined inside the NS and zero magnetic field outside.
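In practice, the Cartesian components of the vector potential are what is needed on the (staggered) grid. A minimal Python sketch of this step (our own illustration, using the standard coordinate-basis relation $A_x = -A_\phi y/\varpi^2$, $A_y = A_\phi x/\varpi^2$) is:
\begin{verbatim}
import numpy as np

def vector_potential(x, y, p, p_max, A_b, n_s=2):
    """Cartesian components of the purely poloidal vector potential
    A_phi = A_b * varpi^2 * max(p - p_cut, 0)**n_s defined above,
    with p_cut = 0.04 * p_max confining the field inside the star."""
    p_cut = 0.04 * p_max
    core = A_b * np.maximum(p - p_cut, 0.0)**n_s   # A_phi / varpi^2
    # The varpi^2 factor cancels in A_x and A_y, so the components
    # are regular on the rotation axis without special treatment.
    return -core * y, core * x, np.zeros_like(p)
\end{verbatim}
The magnetic field then follows from the curl of $\mathbf{A}$, evaluated with the same staggered operators used during the evolution.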
The non-magnetised tests are run on a uniform grid with $x$--, $y$-- and $z$--coordinates spanning over the range [0, 20], with low, medium and high resolution having $(32)^3$, $(64)^3$ and $(128)^3$ grid--cells respectively, and considering reflection symmetry with respect to every direction, i.e., the so--called octant symmetry. Furthermore, we perform two more tests for the non-magnetised TOV NS in high resolution: (i) employing the Cowling approximation (i.e., considering a fixed spacetime) \cite{cowling1941non,lindblom1990accuracy,1969ApJ...158..997T} to check the accuracy of our code by evolving just the hydrodynamical equations on a static spacetime background, and (ii) implementing a mesh refinement composed of two nested boxes centered at the origin and extending up to $x,y,z=20$ and $40$, respectively, both having $(128)^3$ grid--cells in each direction (the inner box therefore corresponds to the domain evolved in the unigrid run at high resolution, while the outer box allows for a more distant outer boundary). As the \texttt{EinsteinToolkit} does not provide a way to handle reflection symmetry for staggered variables, we perform the magnetised TOV tests in low, medium and high resolution covering the entire domain, with $x$--, $y$-- and $z$--coordinates lying in the range [-20, 20] (considering no reflection symmetries) and with the same respective grid-spacing as that of the non-magnetised simulations.
All the test cases are simulated for $10$~ms using the PPM reconstruction method, the HLLE flux solver, and the RK4 method for time stepping with a CFL factor of $0.25$.
\begin{figure}[t!]
\centering
\begin{minipage}{0.8\textwidth}
\centering
\includegraphics[width=1\linewidth]{TOV_NoB_rho1D.eps}
\end{minipage}
\caption{Results of the non--magnetised TOV simulations. Top: Time evolution of the normalised central rest--mass density $\rho_\mathrm{c}/\rho_\mathrm{c,0}$ for the different resolution simulations, including the cases with the Cowling approximation and AMR. Bottom: Comparison of results on the $\rho_\mathrm{c}/\rho_\mathrm{c,0}$ evolution with those obtained with \texttt{GRHydro} for low, medium, and high resolution, showing an exact match.}
\label{Fig13}
\end{figure}
The top panel of \Fref{Fig13} shows the central rest--mass density $\rho_\mathrm{c}$ evolution for all three resolutions, the high--resolution run in the Cowling approximation, and the high--resolution run with AMR, all for the non--magnetised TOV case. It is worth noting that the AMR case (orange curve) reproduces perfectly the high--resolution result (green curve), thus proving once again the correctness of the AMR implementation within the \texttt{Spritz} code. Periodic oscillations are initiated as a result of the truncation errors in the initial data, while the damping is primarily due to the numerical viscosity of the finite differencing (FD) scheme \cite{baiotti2005three,font2000non}. The results converge well with increasing resolution, and the additional tests with the Cowling approximation and AMR are also fully satisfactory. In order to further investigate the accuracy of our code, we compare the low, medium, and high resolution results on the $\rho_\mathrm{c}$ evolution with those obtained with \texttt{GRHydro}. As shown in the bottom panel of \Fref{Fig13}, we observe an exact match.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.5\linewidth]{MagTOV_Bfield.eps}
\caption{Initial internal magnetic field configuration of the magnetised TOV. The colormap indicates the strength of the magnetic field, while the contours (in white) trace a number of representative isosurfaces of the $\phi$--component of the vector potential, $A_\phi$. The latter contours also correspond to poloidal magnetic field lines. The red line is an approximate representation of the TOV surface, showing the iso-density contour of $5\times10^5$ times the assumed atmospheric floor density.}
{\label{Fig14} }
\end{center}
\end{figure}
The initial magnetic field configuration for the magnetised TOV test is illustrated in \Fref{Fig14}. Here, the magnetic field strength is shown along with representative magnetic field lines.
\begin{figure}[t!]
\centering
\begin{minipage}{0.7\textwidth}
\centering
\includegraphics[width=1\linewidth]{MagTOV_Rho_Bmax.eps}
\end{minipage}%
\caption{Results of the magnetised TOV simulation. Top: Time evolution of the normalised central rest--mass density $\rho_\mathrm{c}/\rho_\mathrm{c,0}$ for the different resolution simulations; this gives a nearly exact match with the non-magnetised TOV case results (cf. \Fref{Fig13}). Bottom: Time evolution of the maximum value of the magnetic field strength for all three resolutions.}
\label{Fig15}
\end{figure}
The top panel of \Fref{Fig15} shows the evolution of the maximum of the rest-mass density $\rho_\mathrm{c}$ for the magnetised TOV case, which matches almost exactly the one for the non--magnetised case (see the top panel of \Fref{Fig13}).
This should be expected, since the imposed magnetic field represents only a small perturbation compared to the gravitational binding energy of the system.
In addition, the time evolution of the maximum value of the magnetic field strength $B_\mathrm{max}$ is depicted in the bottom panel of \Fref{Fig15}. While $B_\mathrm{max}$ is highly damped for the lowest resolution test with a decrease by a factor of roughly $14.75$ in $10$~ms, its value stabilizes with increasing resolution, as observed for $\rho_\mathrm{c}$. We note again that here the damping is a numerical viscosity effect of the FD scheme.
\begin{figure}[t!]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{TOV_NoB_OscFreqRhoMax_PostCactus_IF.eps}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{TOV_B_OscFreqRhoMax_PostCactus_IF.eps}
\end{minipage}%
\caption{Power spectrum of the central rest--mass density evolution, normalized to maximum amplitude of the peaks of oscillations' frequencies. Left--panel shows the results from the runs without magnetic field, while right--panel shows the results where also magnetic field is considered.}
\label{Fig17}
\end{figure}
To conclude this section, we report in \Fref{Fig17} the oscillation peak frequencies for the evolution of the TOV star models simulated with our code, in order to validate our models against literature results. In particular, we show the results of the high--resolution simulations in pure hydrodynamics with dynamical spacetime, both adopting a uniform grid and AMR, and with the Cowling approximation (see \Fref{Fig17}, left panel), as well as of the high--resolution run with magnetic field (see \Fref{Fig17}, right panel). The power spectrum of each simulation is computed via fast Fourier transform (FFT) in order to extract the amplitudes and frequencies of the oscillations of the central rest-mass density, and the amplitudes are then normalized to the maximum one of each simulation. \Fref{Fig17} also shows the peak frequencies of the oscillations from the literature, taken from \cite{font2002three}, that were obtained with an independent 2D code for fixed spacetime and with a perturbative code in the case of hydrodynamics coupled to spacetime evolution. An interesting point to note is that, although the results of \cite{font2002three} were obtained with a polytropic EOS, our ideal fluid simulations match the peak frequencies almost perfectly. The ideal fluid EOS indeed produces results different from a polytropic one only in the presence of shocks, which in this case appear only near the low-density surface and therefore do not affect the oscillations of the core. Finally, it is worth noting that the peak frequencies of our non--magnetised and magnetised models are in perfect agreement, as shown by comparing the two panels of \Fref{Fig17}, proving the correctness of the magnetic field implementation.
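For reference, a minimal Python sketch of this spectral analysis (our own illustration, not the post-processing script actually used; the windowing choice is an assumption) is:
\begin{verbatim}
import numpy as np

def oscillation_spectrum(t, rho_c):
    """Normalized power spectrum of the central rest-mass density
    oscillations; t must be uniformly sampled."""
    drho = rho_c - np.mean(rho_c)        # remove the secular mean
    drho = drho * np.hanning(len(drho))  # reduce spectral leakage
    power = np.abs(np.fft.rfft(drho))**2
    freq = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    return freq, power / power.max()     # normalize to the main peak

# The peak frequencies are then read off as the local maxima of the
# returned power, to be compared with the eigenmode frequencies of
# Font et al. (2002).
\end{verbatim}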
\section{Conclusion and future developments}
\label{sec5}
We have presented a new fully general relativistic code, named \texttt{Spritz}, able to evolve the GRMHD equations in three spatial dimensions, in Cartesian coordinates, and on dynamical backgrounds. The code is based on, and considerably improves over, our previous \texttt{WhiskyMHD} code~\cite{giacomazzo2007whiskymhd, giacomazzo2011accurate,GiacomazzoPerna2013}. The \texttt{Spritz} code also benefits from
the publicly available \texttt{GRHydro}~\cite{mosta2013grhydro} and
\texttt{IllinoisGRMHD}~\cite{etienne2015illinoisgrmhd} codes, in
particular in the handling of different EOSs and in the use of a
staggered formulation of the vector potential equations.
In this paper, we presented in detail the equations and the numerical
methods implemented in the code. We have adopted a conservative formulation of the GRMHD equations and high-resolution shock-capturing schemes, and we guarantee the divergence-free character of the magnetic field by evolving the vector potential. We also presented a series of tests
in special and general relativity. We started by showing the code capability of accurately solving 1D Riemann problems by comparing the numerical results with exact solutions~\cite{giacomazzo2006exact}. We also showed, for the first time, a comparison between a
non-staggered and a staggered formulation of the vector potential,
demonstrating that the latter prevents the formation of spurious
post-shock oscillations (see~\Fref{Fig1}) and therefore does not
require to apply dissipation to the vector
potential~\cite{giacomazzo2011accurate}. We then performed a series of special relativistic MHD tests in 2D, including the cylindrical explosion, the magnetic rotor, and the loop advection tests. All tests showed very good agreement with the exact solution (loop advection) or with other GRMHD codes (cylindrical explosion and magnetic rotor). In the cylindrical explosion case we also tested the code capability of dealing with mesh refinement boundaries and demonstrated that they have no effect on the correct evolution of the MHD quantities. We also performed, for the first time for a fully GRMHD code, a demanding 3D spherical explosion test with different levels of magnetization. The code produced results in very good agreement with those produced by other codes. We concluded our series of tests with a standard 3D evolution of a stable TOV configuration (both with and without magnetic field) in order to show the code
ability to handle fully general relativistic regimes. In particular we checked the frequency of TOV oscillations and compared them with results available in the literature.
While the \texttt{Spritz} code can handle any equation of state, in this paper we focused on tests using simple gamma-law EOSs in order to check the robustness of our basic GRMHD routines. In a second paper we will also present tests involving the evolution of isolated NSs with finite temperature EOSs and neutrino emission, with and without magnetic fields (Sala et al., in preparation).
Once this second family of tests has been performed successfully, the \texttt{Spritz} code will be one of the very few codes worldwide able to evolve magnetised neutron stars with finite temperature EOSs and neutrino emission~\cite{most2019beyond,palenzuela2015effects}. In the
multimessenger era it is indeed crucial to take into account
different aspects of the microphysics in order to be able not only to
compute a more accurate merger and post-merger GW signal, but also
to provide reliable estimates of the EM emission, including both kilonova
and short GRBs. The former indeed requires an accurate estimate of the electron fraction and temperature in the post-merger remnant as well as in the ejected material, while the latter needs a precise description of the magnetic field evolution.
Once the \texttt{Spritz} code has been used for a first set of binary NS merger simulations, we plan to release it to the public and to ask for its inclusion in future releases of the Einstein
Toolkit~\cite{ETKpaper, EinsteinToolkit:2019_10, ET}.
\hfill\\
\section*{Acknowledgments}
Numerical calculations have been made possible through a CINECA-INFN agreement, providing access to resources on MARCONI at CINECA. F.C. acknowledges financial support from the INFN HPC\_HTC project. F.C. acknowledges the CCRG at RIT for the computational resources provided there on the \textit{Green Prairies} local cluster. F.C. also received access to the NCSA \textit{Blue Waters} cluster via the NSF AST--1516150 grant and to the TACC \textit{Frontera} cluster via the NSF PHI--1707946 grant. F.C. has been partially supported by the NASA TCAN 80NSSC18K1488 grant for a three--month visiting period at RIT. F.C. also acknowledges Dr. V. Mewes, Prof. Y. Zlochower, Prof. M. Campanelli and Prof. C. Lousto for interesting scientific discussions. J.\,V.\,K.~kindly acknowledges the CARIPARO Foundation (https://www.fondazionecariparo.it) for funding his PhD fellowship within the PhD School in Physics at the University of Padova.
\section*{References}
\nocite{*}
\bibliographystyle{unsrt}
\section{Introduction}
Cosmological mergers of galaxy clusters, AGN jets, galactic winds, and galaxy interactions drive
turbulence in the plasma filling the
intracluster medium (ICM), and this turbulence would be able to amplify weak seeds of
magnetic fields up to intensities of $\sim \mu G$, according to cosmological magneto-hydrodynamical
(MHD) simulations (e.g. \citealt{kotarba_etal_2011, beresnyak_miniati_2016, egan_etal_2016}).
This amplification mechanism could explain
the magnetic fields detected in the diffuse ICM through synchrotron emission of relativistic
electrons in radio halos, and also through the Faraday rotation of polarized emission from
radio sources embedded or behind the galaxy clusters (see \citealt{brunetti_jones_2014} and references
therein). In fact, two-point statistics of the Faraday rotation maps of the ICM
reveal a magnetic field power spectrum consistent with a Kolmogorov-like power law $\propto k^{-5/3}$
\citep{ensslin_etal_2005}.
However, the use of the standard MHD approximation to describe the dynamics of the ICM and the development of the small-scale turbulent dynamo is, in principle, not well justified, since it assumes a collisionality of the plasma particles high enough to ensure local thermodynamical equilibrium.
Considering that the mean-free-path for ion-ion Coulomb collisions is typically $\lambda_{ii} \sim 30$ kpc in the ICM (assuming a density $n = 10^{-3}$~cm$^{-3}$ and temperature $T=10^8$~K; see also the ion mean-free-path distribution inferred from cosmological simulations in \citealt{egan_etal_2016}),
collisionless
effects should be taken into account at least for scales $\lesssim \lambda_{ii}$ (see \citealt{schekochihin_cowley_2006}).
The most obvious effect is the natural development of pressure (or temperature) anisotropy with respect to the local magnetic field. As a consequence, electromagnetic plasma instabilities are triggered (such as the firehose, the ion-cyclotron, and the mirror instabilities; see, e.g., \citealt{gary_1993}).
These instabilities are known to constrain the anisotropy itself
(see \citealt{santos-lima_etal_2014} and references therein; \citealt{kunz_etal_2014, riquelme_etal_2015,
sironi_narayan_2015, sironi_2015, rincon_etal_2015, melville_etal_2016, santos-lima_etal_2016}).
\citet[\citetalias{santos-lima_etal_2014} hereafter]{santos-lima_etal_2014} took into account some collisionless effects in numerical simulations of turbulence
and small-scale dynamo in MHD numerical simulations considering the conditions typical of the ICM.
They found that, under forced turbulence, the temperature perpendicular to the local magnetic field dominates the parallel one in most of the system (the parallel temperature dominates only in narrow regions of high compression or magnetic field reversals), leading to strong modifications in the turbulence statistics
(see also \citealt{kowal_etal_2011, falceta15} for studies on collisionless turbulence with constant pressure anisotropy)
and the complete failure of the dynamo.
On the other hand, including the relaxation of the temperature anisotropy resulting from
the microscale (scales below those resolved in the simulation, down to the ions kinetic scales)
plasma instabilities, the system
gradually converges to a similar behaviour to that
obtained by collisional MHD, depending on the anisotropy relaxing rate~\footnote{It should be
made clear that the anisotropy relaxation employed in \citetalias{santos-lima_etal_2014}
does not drive the pressure components to the isotropic state, but to the instabilities thresholds.
Therefore, the similarity to the collisional MHD results is not trivial.}.
In \citetalias{santos-lima_etal_2014} it is argued that this relaxing rate is much
faster than the MHD time-scales, and the model
that better represents the ICM constrains the maximum anisotropy levels to values very
close to the plasma stable regime.
The imprints of the large scale (of the order of turbulence injection scale)
temperature anisotropy on the Faraday rotation
maps were first studied in \citet[\citetalias{nakwacki_etal_2016} hereafter]{nakwacki_etal_2016}.
In that work, we employed a collisionless MHD formalism
with a double-isothermal closure (as implemented in \citealt{kowal_etal_2011}) to analyse the statistical
properties of the RM maps for several models of turbulence considering different values
of the fixed temperature anisotropy and different regimes of sub/super-Alfv\'enic and
trans/supersonic turbulence. The effects of the temperature anisotropy on the magnetic field structure and the RM maps were found to be significant, evidencing smaller correlation lengths when compared to collisional MHD models.
That study neglected the feedback of the microscale instabilities on the plasma, which may reduce the thermal anisotropy as described in \citetalias{santos-lima_etal_2014} (see also \citealt{schekochihin_cowley_2006}).
In this work, we will extend the analysis of \citetalias{nakwacki_etal_2016} by including this effect.
We will explore the collisionless effects on the Faraday rotation maps focusing on the turbulence
models of the intracluster medium presented in
\citetalias{santos-lima_etal_2014}, in which the anisotropy in temperature evolves according to the CGL closure
\citep{chew_etal_1956} modified to include an anisotropy relaxing term.
The important advantage of this new approach is that it does not rely on the double-isothermal closure, in which the temperature anisotropy is a fixed constant.
We will compare the RM maps and related statistical properties of two collisionless MHD models,
one similar to the models of \citetalias{nakwacki_etal_2016} (i.e. without any anisotropy relaxation),
and another including bounds in the anisotropy.
We will also compare these with the Faraday rotation maps obtained from a standard collisional MHD model.
In Section 2 we describe the numerical simulations of the collisionless MHD models used for building the synthetic
Faraday rotation maps, which are analysed in Section 3. In Section 4 we summarize our results and draw our conclusions.
\section{Numerical simulations}
\begin{table*}
\centering
\caption{Parameters and statistics of the simulations used to build the synthetic RM maps.}
\label{tab:models}
\begin{tabular}{lcccccccc}
\hline
run &
$B_0^2$ &
$c_{S0}^2$ &
$\langle u^2 \rangle$ &
$\langle B^2 \rangle$ &
$\langle \beta \rangle$ &
$\langle M_A \rangle$ &
res. &
snapshots \\
\hline
CGL1 &
$0.09$ &
$1$ &
$0.59 (0.54)$ &
$0.25 (0.35)$ &
$17 (4.8 \times 10^2)$ &
$1.8 (1.3)$ &
$512^3$ &
$4$ \\
BA1 &
$0.09$ &
$1$ &
$0.48 (0.40)$ &
$0.51 (0.33)$ &
$16 (4.8 \times 10^2)$ &
$1.2 (1.5)$ &
$512^3$ &
$4$ \\
MHD1 &
$0.09$ &
$1$ &
$0.55 (0.48)$ &
$0.58 (0.47)$ &
$17 (9.4 \times 10^2)$ &
$1.3 (1.5)$ &
$512^3$ &
$4$ \\
\hline
CGL2 &
$10^{-6}$ &
$1$ &
$0.70 (0.73)$ &
$1.2 \times 10^{-5} (7.1 \times 10^{-5})$ &
$2.0 \times 10^6 (4.7 \times 10^7)$ &
$5.9 \times 10^2 (5.7 \times 10^2)$ &
$256^3$ &
$11$ \\
CGL3 &
$10^{-6}$ &
$0.09$ &
$0.79 (0.64)$ &
$2.0 \times 10^{-4} (7.5 \times 10^{-4})$ &
$1.2 \times 10^5 (1.1 \times 10^7)$ &
$2.8 \times 10^2 (4.2 \times 10^2)$ &
$256^3$ &
$11$ \\
BA2 &
$10^{-6}$ &
$1$ &
$0.78 (0.63)$ &
$0.12 (0.15)$ &
$1.7 \times 10^2 (6.2 \times 10^3)$ &
$4.6 (6.7)$ &
$256^3$ &
$11$ \\
MHD2 &
$10^{-6}$ &
$1$ &
$0.79 (0.63)$ &
$0.18 (0.22)$ &
$1.1 \times 10^2 (2.5 \times 10^3)$ &
$3.8 (5.3)$ &
$256^3$ &
$11$ \\
\hline
\end{tabular}
\end{table*}
Table~\ref{tab:models} shows the most relevant parameters of the simulated models used to
build the synthetic RM maps.
The brackets $\langle \cdot \rangle$ denote an average over the domain and time (using the
available snapshots of the simulations, considering time intervals larger than
$\tau_{turb}$, where $\tau_{turb} = L_{turb} / U_{turb}$ is the turbulence
turn-over time, with $L_{turb}$ and $U_{turb}$ the scale and velocity of injection, respectively),
when the turbulence has reached a statistically
stationary state. The values listed in parentheses are the statistical standard deviations, which give an approximate idea of the spatial and temporal fluctuations of these
quantities.~\footnote{The
spatial/temporal statistical distributions of the fields listed in Table~\ref{tab:models} are
not Gaussian around the mean values. This becomes obvious
from the fact that all the quantities are positive and
some of the standard deviation values are larger
than the mean values.}
The first three models ({\it CGL1, BA1,} and {\it MHD1}) have the same initial
uniform magnetic field with intensity $B_0$ and thermal speed $c_{s0}$
(both shown in dimensionless code units; see below). The
rms turbulent velocity is also similar in these models (column $\langle u^2 \rangle$).
The initial thermal speed is kept approximately constant
by the use of a fast thermal relaxation (which represents the action of both radiative cooling and heat conduction;
see more details below).
The regimes of turbulence achieved for these three models are similar, being slightly subsonic ($u_{rms} \lesssim c_{S0}$) and mildly super-Alfvenic ($\langle M_A \rangle \gtrsim 1$).
From all the models studied in \citetalias{santos-lima_etal_2014}, only these three have simulation parameters which are similar
to those of the super-Alfvenic simulations analysed in \citetalias{nakwacki_etal_2016}
and thus can be more easily compared with this previous work.
However, the ICM is observed to have very tangled magnetic fields (e.g. \citealt{ferreti_etal_1995}),
which is indicative of a turbulence regime strongly super-Alfvenic.
The remaining models presented in Table~\ref{tab:models} ({\it CGL2, CGL3, BA2}, and {\it MHD2}) are
simulations in which the
initial uniform magnetic field is very weak,
making the intensity of the ordered component of the magnetic field relatively small
after the amplification of the tangled component via small-scale turbulent dynamo
(see $B_0^2$ and $\langle B^2 \rangle$ for these models in Table~\ref{tab:models}).
These four models have the same initial seed magnetic field and thermal speed, except for
model {\it CGL3}, where a smaller thermal speed is used in order to test the dependence of the results with
the plasma $\beta = p_{th} / p_{mag}$ parameter
(where $p_{th} = (2p_{\perp} + p_{\parallel})/3$ is the total thermal pressure and $p_{mag} = B^2/8\pi$
is the magnetic pressure).
The models MHD ({\it MHD1} and {\it MHD2}) have
a single scalar thermal pressure and correspond to the standard collisional MHD model where the
distribution of the thermal velocities is assumed to be isotropic. The models named
CGL ({\it CGL1, CGL2}, and {\it CGL3}) have
a thermal pressure tensor with two independent components related to two temperatures:
one associated to the thermal velocity component parallel to the local magnetic field lines $T_{\parallel}$
and another to the thermal velocity component related to the gyromotions of the particles around
the field $T_{\perp}$; these two temperatures evolve according to the CGL closure
\citep{chew_etal_1956}
which is based on the conservation of the magnetic moment of the charged particles
$d \left( T_{\perp} / B \right) / dt = 0$
and the assumption of conservation of the
entropy (no heat exchange between the fluid elements)
$d \left( T_{\perp}^{2} T_{\parallel} / n^{2} \right) / dt = 0$ (where $n$ is the density of particles)~\footnote{In fact
the CGL closure was not rigorously adopted in \citetalias{santos-lima_etal_2014}.
Instead, a conservative scheme for evolving the internal energy was used, while the
evolution of the temperatures ratio followed the CGL prescription (see Eq.~\ref{eqn:collisionless_mhd}).
This approach gives results nearly
identical to those obtained using the CGL equations of state, but is numerically more robust. Besides,
it allows the straight inclusion of the anisotropy relaxation term.}.
Finally, the models named BA
(Bounded Anisotropy: {\it BA1} and {\it BA2})
differ from models CGL by the addition of a boundary in the temperature anisotropy.
This boundary limits the temperature anisotropy by the threshold values of the
firehose (for $A<1$) and mirror (for $A>1$) instabilities, where $A$ is the temperatures
ratio $A = T_{\perp}/T_{\parallel}$, mimicking the effect of an ``instantaneous'' relaxing
of the anisotropy to the marginal values by the action of the microscale instabilities
(see \citealt{sharma_etal_2006}; \citetalias{santos-lima_etal_2014} and references therein).
An extended discussion on the applicability and limitations of this model to represent the
ICM turbulence is presented in \citet{santos-lima_etal_2016}.
The equations describing the evolution of the models presented in Table~\ref{tab:models} are
(see also \citetalias{santos-lima_etal_2014}):
{\small
\begin{equation}
\frac{\partial }{\partial t}
\begin{bmatrix}
\rho \\[6pt]
\rho \mathbf{u} \\[6pt]
\mathbf{B} \\[6pt]
e \\[6pt]
A (\rho^{3}/B^{3})
\end{bmatrix}
+ \nabla \cdot
\begin{bmatrix}
\rho \mathbf{u} \\[6pt]
\rho \mathbf{uu} + \Pi_{P} + \Pi_{B} \\[6pt]
\mathbf{uB - Bu} \\[6pt]
e \mathbf{u} + \mathbf{u} \cdot \left( \Pi_{P} + \Pi_{B} \right) \\[6pt]
A (\rho^{3}/B^{3}) \mathbf{u}
\end{bmatrix}
=
\begin{bmatrix}
0 \\[6pt]
\mathbf{f} \\[6pt]
0 \\[6pt]
\mathbf{f} \cdot \mathbf{u} + \dot{w} \\[6pt]
\dot{A}_{S} (\rho^{3}/B^{3})
\end{bmatrix}
\rm{,}
\label{eqn:collisionless_mhd}
\end{equation} }
\noindent
where $\rho$, $\mathbf{u}$, $\mathbf{B}$, $p_{\perp,\parallel}$ are the
macroscopic variables
density, velocity, magnetic field, and thermal pressures perpendicular/parallel to the local magnetic field, respectively;
$e = p_{\perp} + p_{\parallel}/2 + \rho u^{2}/2 + B^{2}/8\pi$
(for the two-temperature models CGL and BA)
is the total energy density.
For the {\it MHD} model, $e = 3p/2 + \rho u^{2}/2 + B^{2}/8\pi$.
$\Pi_P$ and $\Pi_B$ are the thermal pressure and magnetic stress tensors, respectively, defined by
$\Pi_{P} = p_{\perp} \mathbf{I} + (p_{\parallel} - p_{\perp}) \mathbf{bb}$ for the two-temperature models
and
simply $\Pi_{P} = p \mathbf{I}$ for the {\it MHD} model,
$\Pi_{B} = (B^{2}/8 \pi) \mathbf{I} - \mathbf{BB} /4 \pi$,
where $\mathbf{I}$ is the unitary dyadic tensor and $\mathbf{b} = \mathbf{B} / B$.
An ideal equation of state relates each temperature with its respective pressure component, and an adiabatic exponent
$\gamma = 5/3$ is used for the MHD models.
In the source terms, $\mathbf{f}$ represents an external bulk force responsible for driving the turbulence,
$\dot{w}$ gives the rate of change of the internal energy $w = (p_{\perp}+p_{\parallel}/2)$ of
the gas due to heat conduction and radiative cooling, and $\dot{A}_{S}$ gives the rate of change of
$A$ due to the microscale instabilities.\footnote{Though the physical process relaxing the macroscopic
temperature anisotropy is
attributed to the ions anomalous scattering in this approach, it can also
represent (with some limitations) the situation when the relaxation is not mediated by the
instantaneous break of magnetic momentum, as it is the case of the mirror instability development under
continuous driving of temperature anisotropy
(\citealt{kunz_etal_2014, riquelme_etal_2015, rincon_etal_2015, melville_etal_2016}). See discussion in
\citet{santos-lima_etal_2016}.}
The turbulence is injected by adding a random (but solenoidal) velocity field
(delta correlated in time)
to the gas at the end of
each time-step. This velocity field is concentrated inside a spherical shell
in Fourier space, of radius $k=2.5$ (i.e., with characteristic wavelength
$L_{turb} = L/2.5$, where $L$ is the side of the cubic domain).
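For illustration, a generic spectral recipe for such a driving field (our own sketch; the shell width and the rms normalization are assumptions, and this does not reproduce the actual implementation) is:
\begin{verbatim}
import numpy as np

def solenoidal_forcing(N, k0=2.5, dk=0.5, u_rms=1.0, rng=None):
    """Random solenoidal velocity field concentrated in the Fourier
    shell |k| ~ k0 (k in units of 2*pi/L); a new, independent
    realization per call gives a delta-correlated-in-time driving."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.fft.fftfreq(N) * N                   # integer wavenumbers
    KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
    K2 = KX**2 + KY**2 + KZ**2
    shell = np.abs(np.sqrt(K2) - k0) < dk

    # Random complex amplitudes inside the shell, zero elsewhere.
    u_hat = (rng.normal(size=(3, N, N, N))
             + 1j * rng.normal(size=(3, N, N, N))) * shell

    # Project out the compressive part: u_hat -> u_hat - k (k.u)/k^2.
    K2[K2 == 0.0] = 1.0                         # avoid 0/0 at k = 0
    kdotu = (KX*u_hat[0] + KY*u_hat[1] + KZ*u_hat[2]) / K2
    u_hat[0] -= KX * kdotu
    u_hat[1] -= KY * kdotu
    u_hat[2] -= KZ * kdotu

    u = np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))
    return u_rms * u / np.sqrt(np.mean(u**2))   # fix the rms amplitude
\end{verbatim}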
We employed an artificial but simple thermal relaxation prescription,
which brings the specific internal energy $w^*$
to its initial value $w_0^*$ at a rate $\nu_{th} = 5$
(in code units, which gives a characteristic time approximately 20 times faster than
the turbulence turn-over time $\tau_{turb}$)
for the models presented in Table~\ref{tab:models}:
\begin{equation}
\dot{w} = - \nu_{th} (w^* - w_0^*) \rho.
\end{equation}
The instantaneous anisotropy relaxation (represented by the source term $\dot{A}_{S}$) is implemented as follows:
after the numerical integration of the equations at each time-step,
the anisotropy is replaced, at each grid cell,
by the marginally stable value,
whenever this evolves to an unstable value
beyond the threshold for the firehose
or mirror instability
(see more details regarding the source terms and the numerical methods employed in the simulations in
\citetalias{santos-lima_etal_2014}). As the results presented in this work
are dimensionless and the above equations do not
carry any physical constant, it is not necessary to attribute physical dimensions to the models.
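A minimal sketch of this bounding step (our own illustration; the thresholds below are the commonly quoted firehose and mirror marginal-stability conditions, and the exact coefficients used in the simulations may differ) is:
\begin{verbatim}
import numpy as np

def bound_anisotropy(p_perp, p_par, B2):
    """Clip A = p_perp/p_par to the marginal-stability values after
    each time-step, as in the BA models. B2 = B^2 in code units
    (factors of 4*pi absorbed into B)."""
    beta_par = 2.0 * p_par / B2        # parallel plasma beta
    beta_perp = 2.0 * p_perp / B2      # perpendicular plasma beta

    A = p_perp / p_par
    A_min = 1.0 - 2.0 / beta_par       # firehose threshold (assumed)
    A_max = 1.0 + 1.0 / beta_perp      # mirror threshold (assumed)
    A_new = np.clip(A, A_min, A_max)

    # One simple choice: redistribute the pressures at fixed internal
    # energy w = p_perp + p_par/2, so the clipping conserves energy.
    w = p_perp + 0.5 * p_par
    p_par_new = w / (A_new + 0.5)
    return A_new * p_par_new, p_par_new   # new (p_perp, p_par)
\end{verbatim}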
We did not use explicit viscous or resistive terms in the numerical
simulations (except for a small resistivity that provides a dissipation very close to the numerical one in the {\it CGL1} model for numerical stability purposes)
aiming at reducing the dissipation to the minimum value provided by the numerical scheme,
in order to maximize the inertial range of the turbulence. For the methods used in these simulations,
the dissipation range starts at scales of approximately 16 cells (inferred from the magnetic and velocity
power spectra of the MHD simulations).
Therefore, we cannot assess the dependence of the results with the Reynolds $R$ and/or the magnetic Prandtl
number $Pm$ ($R \equiv L_{turb} U_{turb} / \nu$ and $Pm \equiv \eta/\nu$, where $\nu$ and $\eta$ are the
viscous and magnetic diffusivities, respectively).
We estimate the $Pm$ number to be approximately equal to unity in all our simulations.
\section{Synthetic Faraday rotation maps}
\begin{figure*}
\begin{tabular}{c c c}
\input{./figs/map_rm_CGL1} &
\input{./figs/map_rm_BA1} &
\input{./figs/map_rm_MHD1}
\end{tabular}
\caption{Normalized RM maps calculated from the simulated cubes: {\it CGL1} model with no anisotropy relaxing by the
microscale instabilities (left),
{\it BA1} with fast anisotropy relaxing by the instabilities (middle), and collisional {\it MHD1}
model (right).
The angle $\theta$ between the line-of-sight and the direction of the uniform magnetic field is $45\degree$
for all the maps.}
\label{fig:maps}
\end{figure*}
\begin{figure}
\begin{tabular}{c}
\input{./figs/mean_rm1} \\
\input{./figs/dispersion_rm1}
\end{tabular}
\caption{Normalized values of average (top) and dispersion (bottom) of the RM as a function of the angle $\theta$
between the line-of-sight and the uniform magnetic field.}
\label{fig:statistics}
\end{figure}
\begin{figure}
\begin{tabular}{c}
\input{./figs/ps_rm1}
\end{tabular}
\caption{Power spectrum of the RM maps. The multiple lines presented for each model correspond to the power spectrum calculated
for different angles $\theta$ between the line-of-sight and the uniform magnetic field,
from $\theta = 0$ to $\theta = 90\degree$. A thin grey straight line with slope $-8/3$ is drawn for comparison.}
\label{fig:ps}
\end{figure}
\begin{table*}
\centering
\caption{Statistical moments of the synthetic RM maps built from the strongly super-Alfvenic models.}
\label{tab:moments}
\begin{tabular}{lccc}
\hline
run &
$\langle \delta RM^2 \rangle / \left( n_{e0} B_{rms}^2 L \right)^2$ &
$\langle \delta RM^3 \rangle / \langle \delta RM^2 \rangle^{3/2}$ &
$\langle \delta RM^4 \rangle / \langle \delta RM^2 \rangle^{2}$ \\
\hline
CGL2 &
$1.6 \times 10^{-2}$ &
$0.93$ &
$7.1$ \\
CGL3 &
$2.5 \times 10^{-2}$ &
$0.42$ &
$8.2$ \\
BA2 &
$4.0 \times 10^{-2}$ &
$3.6 \times 10^{-2}$ &
$4.0$ \\
MHD2 &
$3.6 \times 10^{-2}$ &
$-8.5 \times 10^{-3}$ &
$4.0$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\begin{tabular}{c}
\input{./figs/ps_rm2}
\end{tabular}
\caption{Same as in Figure~\ref{fig:ps}, but for models $CGL2$, $CGL3$, $BA2$, and $MHD2$.}
\label{fig:models2}
\end{figure}
The statistics of the turbulence (that is, one- and two-point statistics) of the models described in the previous Section were studied in detail in \citetalias{santos-lima_etal_2014}, where models $Amhd$, $A2$, $A1$, $Cmhd$, $C2$, $C3$, and $C1$ correspond to {\it MHD1, CGL1, BA1, MHD2, CGL2, CGL3}, and {\it BA2}, respectively.
Figure~\ref{fig:maps} presents the maps of the dimensionless Faraday rotation measurement ($RM$):
\begin{equation}
RM = \int_0^L n_e B_{LOS} dl,
\end{equation}
normalized by $n_{e0} B_0 L$ (where $B_0$ is the intensity of the mean magnetic field, $n_{e0}$ is the
average density of electrons and $L$ is the length of the Faraday screen) for
the models {\it CGL1, BA1}, and {\it MHD1},
calculated for an arbitrary line-of-sight (LOS)
whose direction makes an angle $\theta = 45 \degree$ with the mean magnetic field. The last snapshot of each simulation (at $\approx 10 \tau_{turb}$) was used for the calculations.
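A minimal Python sketch of this integration (our own illustration; for simplicity it integrates along a grid axis after projecting $\mathbf{B}$ on the LOS, whereas the inclined lines-of-sight used for the maps require rotating or resampling the cube first) is:
\begin{verbatim}
import numpy as np

def rm_map(ne, B, los, L=1.0):
    """Dimensionless rotation measure RM = int n_e B_LOS dl.
    ne: electron density, shape (N, N, N); B: magnetic field,
    shape (3, N, N, N); los: line-of-sight direction vector."""
    los = np.asarray(los, dtype=float)
    los = los / np.linalg.norm(los)
    B_los = los[0]*B[0] + los[1]*B[1] + los[2]*B[2]
    dl = L / ne.shape[2]
    return np.sum(ne * B_los, axis=2) * dl   # integrate along z

# Maps such as those of Fig. 1 are then normalized by n_e0 * B0 * L.
\end{verbatim}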
A visual inspection shows that the RM map of model {\it CGL1} presents fluctuations of smaller amplitude compared to the {\it MHD1} model. Model {\it BA1}, on the other hand, has an RM map appearance similar to that of model {\it MHD1}.
Figure~\ref{fig:statistics} shows the normalized values of the average (top)
and dispersion (bottom) of RM as a function of the angle $\theta$, for the mildly super-Alfvenic models. The maps were built using 20 values of $\theta$ equally
spaced between $\theta = 0$ and $90\degree$, and the statistical moments were averaged over maps built
from the different
snapshots available for each model.
The two-temperature
models develop excess of perpendicular pressure
in most of the domain ($A>1$), and larger anti-correlation between
the magnetic and density fluctuations (when compared to the one-temperature collisional MHD model;
see Figure 10 in \citetalias{santos-lima_etal_2014}). This enhanced anti-correlation is expected to lead to a net
reduction of the rotation measure in the case of the {\it CGL1} model when compared to the
{\it MHD1} model. However,
this reduction is found to be small (only a few percent) for small angles $\theta$ in the top plot
of Figure~\ref{fig:statistics}. For increasing angles $\theta$, the mean RM for the {\it CGL1}
model
converges to values close to those of the MHD model. Model {\it BA1} has a mean RM value similar to that of the {\it MHD1} model for all angles.
Due to the dominance of the perpendicular temperature component in the {\it CGL1} model, the thermal stresses offer resistance to motions perpendicular to the local field lines, thus reducing the fluctuations of the magnetic field.
In consequence, the fluctuations
of the RM produced by the magnetic field turbulence are also affected. The bottom panel
of Figure~\ref{fig:statistics} compares the normalized dispersion of RM for the three models. This relative dispersion of the {\it CGL1} model is about two times smaller (for small $\theta$) than for the {\it MHD1} model,
and this difference is smaller for larger values of $\theta$. The inclusion of the fast anisotropy relaxation by the
microscale instabilities
(model {\it BA1}) makes this relative dispersion in RM very similar to the MHD
model.
Figure~\ref{fig:ps} compares the power spectrum of the RM maps for the
mildly super-Alfvenic models. For each model,
the power spectrum is shown for different values of $\theta$ (from $\theta = 0$ to $90\degree$).
For wavenumbers approximately in the estimated inertial range ($5<k<30$) the slopes of the power
spectrum for the different lines of sight are nearly the same.
While the {\it BA1} model has a power spectrum almost indistinguishable from that of the {\it MHD1} model,
model {\it CGL1} has less power on all scales and a slightly flatter slope.
We also note that the RM spectrum of the {\it CGL1} model has a power law close to $k^{-8/3}$
(which is expected
when only the magnetic field fluctuates, that is, when the density fluctuations are negligible, and the
one-dimensional magnetic power spectrum is $|B_k|^2 \propto k^{-5/3}$),
while model {\it MHD1} has a slightly flatter slope at small $k$ values and then becomes slightly steeper at larger $k$.
The corresponding magnetic power spectrum
is slightly flatter than the Kolmogorov power law
$k^{-5/3}$ in our simulation as shown in Fig. 6 of \citetalias{santos-lima_etal_2014};
the same can be observed in a similar simulation presented in \citetalias{nakwacki_etal_2016}
(Fig. 6 left panel, model Bext=1, cs=1).
Compared to model {\it MHD1}, the slow decay of the dissipation range of model {\it CGL1}
points to an accumulation of power at the small scales
(which are not properly resolved by our grid resolution) caused by the kinetic instabilities (\citetalias{nakwacki_etal_2016}).
We repeated the analysis above for the strongly super-Alfvenic models
{\it CGL2, CGL3, BA2}, and {\it MHD2}.
Naturally, for these models the turbulent component of the magnetic field dominates the uniform one. This implies that
the statistics of the RM maps built from these models
are generally independent of the adopted LOS (except for the value corresponding to the mean field). In fact, the statistics
of the RM maps of the CGL models keep a marginal dependence on the LOS, as the turbulent magnetic field is not as amplified here as in the MHD case.
Table~\ref{tab:moments} shows the statistical moments for RM averaged over maps with different LOS
(using 20 values of $\theta$ uniformly spaced in the interval between $0$ and $90 \degree$).
The dispersion values $\langle \delta RM^2\rangle$ are compared to $(n_{e0} B_{rms} L)^2$, in order
to check how precisely we can track the intensity of the turbulent component of the magnetic field.
Compared to the MHD model, the CGL models give a smaller value (by a factor of two), but this also depends
on the compressibility of the turbulence, being slightly higher for the more compressible model $CGL3$.
The model with bounds on the anisotropy $BA2$ shows a dispersion similar to the MHD case.
The skewness and kurtosis of the distribution of the RM are also shown in Table~\ref{tab:moments}
(in columns $\langle \delta RM^3 \rangle / \langle \delta RM^2 \rangle^{3/2}$ and
$\langle \delta RM^4 \rangle / \langle \delta RM^2 \rangle^2$, respectively). While model {\it BA2} presents results very
similar to {\it MHD2}, with nearly zero skewness and the same values for the kurtosis, the
CGL models show a positive skewness (which means a longer tail of large values) and a kurtosis
approximately twice that of the MHD model, so that the distribution is more peaked.
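A minimal sketch (ours) of the normalized moments listed in Table~\ref{tab:moments} is given below, where the input \texttt{norm} stands for $n_{e0} B_{rms} L$:
\begin{verbatim}
import numpy as np

def rm_moments(rm, norm):
    d = rm - rm.mean()            # RM fluctuations
    m2 = np.mean(d**2)
    disp = m2 / norm**2           # <dRM^2> / (n_e0 B_rms L)^2
    skew = np.mean(d**3) / m2**1.5
    kurt = np.mean(d**4) / m2**2
    return disp, skew, kurt
\end{verbatim}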
Figure~\ref{fig:models2} shows the power spectrum of RM for the highly super-Alfvenic models (averaged over the
different LOS).
The curves for the CGL models are displaced along the vertical axis, with the values multiplied by
a factor of $100$, in order to make the difference in slopes between the models easier to observe.
Similar to the mildly super-Alfvenic case, the CGL models present
a flatter spectrum at large $k$ values of the inertial range compared to the standard MHD model. The slope for model $CGL2$ is even flatter than $k^{-8/3}$
(due to the increase of the small scale magnetic fluctuations caused by the instabilities which are stronger
in this high beta plasma regime compared to the previous mildly super-Alfvenic case), while the {\it MHD2}
model exhibits a power spectrum similar to that of the mildly super-Alfvenic case.
\section{Summary and conclusions}
In this work we explored the role of plasma collisionless effects on simulated Faraday rotation maps resembling the conditions of
the intracluster medium (ICM) of galaxy clusters. We presented a
statistical analysis of the Faraday rotation maps obtained from simulations of forced turbulence
in a three-dimensional domain with periodic boundaries, considering three different models of the ICM plasma. The first,
a one-temperature collisional MHD
model, assumes isotropy of the thermal velocity distribution of the particles,
an assumption that is not suitable {\it a priori} for the weakly collisional ICM,
where the mean free path for ion-ion Coulomb collisions typically lies in the range $2-100$~kpc
(see \citealt{egan_etal_2016}). The second model (CGL) allows for
the development of anisotropy in the velocity thermal distribution (two-temperature approach),
according to the conservation of the first adiabatic invariant (the magnetic moment) of charged
particles and the absence of heat conduction. The third model
(BA) differs from the second by the inclusion of
a phenomenological constraint on the temperature anisotropy due to the fast development
of the firehose and mirror instabilities at the microscales (much smaller than the typical turbulence scales,
reaching the ion kinetic scales).
These instabilities are triggered by the
temperature anisotropy itself.
Compared to the Faraday rotation maps resulting from the one-temperature collisional model (MHD),
those from the collisionless CGL model
present a smaller relative dispersion, with a steeper and less intense power
spectrum at all scales.
On the other hand, the statistical properties of the RM maps resulting from the collisionless BA model,
which bounds the anisotropy to the firehose and mirror stable thresholds,
are very similar to those of the MHD model.
As stressed in Section 1, in \citetalias{nakwacki_etal_2016} we performed a similar RM analysis of collisionless
two-temperature (with fixed values) models for the intracluster medium, but without considering the effects of the
thermal relaxation by the kinetic instabilities. In this case the results were
similar to those of the CGL model above, i.e., with significant differences in the RM maps and
their statistical properties with regard to the collisional MHD model. Specifically, important
imprints of the pressure anisotropy were found to prevail in the magnetic field structure resulting
in Faraday rotation maps with smaller correlation lengths.
It has been demonstrated in \citetalias{santos-lima_etal_2014} that the inclusion of the
anisotropy relaxation by the kinetic mirror and firehose instabilities in collisionless
two-temperature systems makes the statistical properties of the turbulence (in high $\beta$ plasmas)
as well as the amplification of the magnetic fields via the small-scale turbulent dynamo very similar
to those of collisional MHD systems. The latter approach is in fact used in most numerical
simulations of the intracluster medium.
Therefore, the present result, {\it in principle}, reinforces the justification for the use of the collisional MHD approximation at least in studies of the large-scale properties of the ICM.
Nevertheless, this study has limitations and several questions still remain open, as we briefly address below.
Recently \cite{santos-lima_etal_2016} have reviewed the limitations of the
anisotropy relaxation approach employed, e.g., in \citetalias{santos-lima_etal_2014}.
For instance, this approach neglects the effects of the
microscale magnetic fields generated by microinstabilities
on the stretching rate of the large scale component (near the injection scale of the turbulence) (see also \citealt{schekochihin_cowley_2006, mogavero_schekochihin_2014, melville_etal_2016}).
Furthermore, the present study has focussed
only
on the subsonic regime of the turbulence
driven by purely solenoidal forcing,
which explains the dominance of incompressible motions.
On the other hand, the turbulence generated by the merging processes in the ICM
is expected to be
partially compressional (at the injection scales)
and at least mildly supersonic
(\citealt{brunetti_jones_2014, bruggen_vazza_2015, bykov_etal_2015})
and, in fact, a compressible cascade in the ICM can reach small scales ($0.1-1$ kpc)
before being dissipated.
This implies that the magnetic fields can be entangled and/or advected also by compressive motions. In addition, weak shocks and collisionless effects will also affect the microphysics of processes like heat transport and thermal conduction (e.g., \citealt{santos-lima_etal_2016}), and
may be important to the re-acceleration of particles in the ICM (see for example \citealt{brunetti_lazarian_2007, brunetti_lazarian_2011}).
The complex interplay between compressible modes (and shocks) and collisionless effects
(such as collisionless damping), which has been neglected in the present collisionless MHD approach, makes it inadequate for treating the compressible turbulent regime of the ICM
(see further discussion on this subject in \citealt{santos-lima_etal_2016}).
\section*{Acknowledgements}
RSL acknowledges support from a grant of the Brazilian Agency FAPESP (2013/15115-8),
EMGDP partial support from FAPESP (2013/10559-5) and CNPq (306598/2009-4) grants.
G.K. acknowledges support from FAPESP (grants no. 2013/04073-2 and 2013/18815-0) and PNPD/CAPES (grant no. 1475088) through a Postdoctoral Fellowship at University Cruzeiro do Sul.
The numerical simulations in this work were carried out in the supercluster of the Astrophysical Informatics Laboratory (LAi) of IAG-USP and UnicSul whose purchase was made possible by FAPESP.
The authors would also like to acknowledge the anonymous referee
for the criticism and suggestions which helped to improve this work.
\section{Introduction}
\label{sec:intro}
Automatic speech recognition (ASR) systems are frequently used to transcribe meetings, calls, or dictated notes.
Most ASR models are trained to predict all lowercase or all uppercase transcripts without any punctuation.
Lack of text formatting makes it more difficult to read and comprehend text, even when it is free of speech recognition errors~\cite{grindlay2002missing}.
In addition to improving the legibility of speech transcripts, punctuation and capitalization restoration also facilitates other natural language processing (NLP) tasks, such as named entity recognition (NER)~\cite{lita2003truecasing}, part-of-speech tagging, syntactic parsing, and discourse segmentation.
Recently, \cite{zelasko2021whathelpstransformers} have shown that the presence of punctuation and truecasing is the single largest factor affecting the accuracy of various dialog act segmentation and classification systems. Furthermore, they found that punctuation is highly correlated with the presence of dialog act classes that correspond to conversational cues, such as incomplete utterances, restarts, repairs, and backchannels.
It is clear that the presence of punctuation is important for automated transcript processing -- but does there exist a single, correct way of inserting the punctuation symbols?
Truecasing and punctuation are abundantly available in written text resources. In some languages (such as Polish) the use of punctuation is well-defined by a set of rules. Other languages, such as English, permit -- to some extent -- arbitrary use of punctuation (e.g., the insertion of commas).
However, the use of truecasing and punctuation in speech transcripts is more complex due to the increased syntactic complexity of speech~\cite{kempson2000dynamic}.
Annotation of speech transcripts is a difficult and time-consuming task, which requires expertise and complex annotation guidelines.
We expect that the level of inter-annotator agreement for inserting punctuation is likely to be lower for conversational data, due to its spontaneity, disfluencies, and often unfinished sentences~\cite{kempson2000dynamic,charniak2001edit,purver-etal-2009-split}.
We hypothesize that there is a distribution mismatch between truecasing and punctuation present in written text resources, such as books, and in conversational transcripts.
In addition, conversational transcripts are typically much more difficult to come by -- except for abundantly-resourced languages such as English.
To investigate our hypothesis, we prepare an experimental setup with two text corpora that are representative of their respective domains -- the Gutenberg project books~\cite{gerlach2020standardized} and Fisher conversational transcripts~\cite{cieri2004fisher}. We consider a multi-task approach for joint punctuation and truecasing prediction to leverage the label correlations and dependencies between these tasks.
We specifically investigate the following research questions:
\begin{itemize}
\item How mismatched are book text and conversational transcripts domains in terms of punctuation and truecasing use?
\item Can this mismatch be mitigated by leveraging cross-domain transfer learning, and to what extent?
\item How much conversational data is needed to effectively leverage transfer learning from books? Note that we specifically choose the Fisher corpus due to its large size to answer this question.
\item How much improvement can we expect from multi-task learning and is it consistent in both the books domain and the conversational domain?
\item Does seeing truecased text in pretraining improve the performance of further fine-tuning for truecasing prediction?
\end{itemize}
The rest of the paper is organized as follows: in section~\ref{sec:related_work} we briefly highlight related literature; sections~\ref{sec:methods} and~\ref{sec:experiments} describe our methods and experimental setup. We present our results in section~\ref{sec:results} and conclude our findings in section~\ref{sec:conclusions}.
\section{Related work}
\label{sec:related_work}
Early efforts tackled truecasing and punctuation prediction using n-gram language models~\cite{lita2003truecasing,gravano2009restoring}.
However, the performance of simple n-gram language models suffers when long-range lexical information is required to disambiguate between punctuation classes~\cite{beeferman1998cyberpunc}.
Joint modelling of truecasing and punctuation tasks is considered in~\cite{sunkara2020robust,pahuja2017joint} using deep learning models in a classification framework.
The authors of~\cite{sunkara2020robust} treat punctuation as an independent task and truecasing as conditionally dependent on punctuation given a latent representation of the input.
In contrast, it is treated as a multi-task problem in~\cite{pahuja2017joint}, where both truecasing and punctuation are independent given the input latent representation.
Recently, \cite{o2021spgispeech} proposed training ASR models to directly transcribe text with truecasing and punctuation, which is enabled by the release of a large speech corpus with rich-format transcriptions. Unfortunately, their data use is not permissible for commercial applications.
The speech signal holds cues such as pauses and intonation patterns for predicting punctuation marks~\cite{levy2012effect}.
The incorporation of speech cues into text-based models has been explored in~\cite{zelasko2018punctuation, sunkara2020multimodal} and has shown improvements in punctuation prediction.
The distribution mismatch between text and conversational domains can be mitigated by retrofitting word embeddings to the target domain~\cite{augustyniak2020punctuation} when GloVe~\cite{pennington2014glove} embeddings are used in the model.
\section{Our approach}
\label{sec:methods}
Recent works have shown significant improvements by fine-tuning pre-trained language models for NLP-related downstream tasks~\cite{devlin2018bert, dai2019transformer, liu2019roberta}.
Inspired by this research, we choose to follow a similar method for truecasing and punctuation prediction.
In this work, we fine-tune base versions of BERT~\cite{devlin2018bert} pre-trained models for truecasing and punctuation prediction.
The BERT base model consists of a sequence of 12 blocks, each containing a self-attention layer and a fully connected layer.
It is trained to optimize the masked language model (MLM) objective and next sentence prediction (NSP).
For fine-tuning, we remove the MLM and NSP heads from the model and only consider the encoder part for further modelling.
Below, we explain the procedure followed in this work to adapt the BERT model for joint prediction of truecasing and punctuation.
\subsection{Multi-tasking of truecasing and punctuation}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figs/punctuation_diagram_updated.png}
\caption{Flow chart for joint prediction of truecasing and punctuation}
\label{fig:flowchart}
\end{figure}
In this work, we optimize truecasing and punctuation loss functions together in a multi-tasking manner to exploit relations between truecasing and punctuation.
Flow chart for the multi-task model with an example is shown in Fig.~\ref{fig:flowchart}.
As shown in the figure, we process the BERT encoder output using task-specific layers to obtain model predictions for our tasks.
Task-specific layers include a dropout layer and a fully connected layer.
As the BERT encoder is common to both tasks, it has to encode information related to both tasks, and the task-specific layers then retain the relevant information for each task.
Through optimization of the loss computed for each task, we adapt the pre-trained weights and also learn the newly introduced parameters.
The loss for each task is computed by comparing the output of the fully connected layer against the corresponding ground-truth labels.
We use the cross-entropy loss function to compute the losses as the targets for both tasks are categorical.
For joint optimization, we minimize a weighted combination of both losses as shown in~\eqref{eq:multi_task_obj}.
We use a hyper-parameter $\lambda$ to balance/control the relative learning of the tasks.
We show experiments with various values of $\lambda$.
\begin{equation}
CE_{joint}=\lambda CE_{c} + (1-\lambda) CE_{p} \label{eq:multi_task_obj}
\end{equation}
where $CE_{c}$ and $CE_{p}$ are the cross-entropy loss functions for truecasing and punctuation respectively.
We use truecasing and punctuation models trained without multi-tasking as our baselines.
We obtain baseline models for truecasing and punctuation by setting $\lambda$ to one and zero respectively.
To achieve truecasing and punctuation, we assign labels for every word in the input document and force the model to make a prediction for every word.
However, a word could be divided into several sub-word tokens as the BERT models use sub-word tokens at the input.
We compute the loss only for the first sub-word token of each word and do not consider the losses of the other sub-word tokens.
We do not discard the other sub-word tokens from training as they could be helpful to disambiguate the first sub-word token from other similar sub-word tokens.
In our experiments, we found that computing the loss for the first or the last sub-word token did not matter.
The loss for an input document is computed by summing the per-word losses, which are further used to calculate the batch loss.
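For concreteness, a minimal PyTorch-style sketch of the task-specific layers and of the joint loss in~\eqref{eq:multi_task_obj} with first-sub-word masking is given below; this is our illustration rather than the exact training code, so all names and the dropout value are assumptions:
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class JointHead(nn.Module):
    def __init__(self, hidden, n_case=3, n_punct=8, p_drop=0.1):
        super().__init__()
        self.drop = nn.Dropout(p_drop)
        self.case_fc = nn.Linear(hidden, n_case)    # truecasing head
        self.punct_fc = nn.Linear(hidden, n_punct)  # punctuation head

    def forward(self, enc):          # enc: (batch, seq, hidden)
        h = self.drop(enc)
        return self.case_fc(h), self.punct_fc(h)

def joint_loss(case_logits, punct_logits, case_y, punct_y,
               first_mask, lam):
    # first_mask selects the first sub-word token of every word;
    # the remaining sub-word tokens contribute no loss.
    ce_c = F.cross_entropy(case_logits[first_mask], case_y[first_mask])
    ce_p = F.cross_entropy(punct_logits[first_mask], punct_y[first_mask])
    return lam * ce_c + (1.0 - lam) * ce_p
\end{verbatim}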
The efficacy of fine-tuned models for downstream tasks could depend on the training data used for pre-training.
The training data used for the BERT base model include BooksCorpus~\cite{zhu2015aligning} and English Wikipedia.
Considering one of the target tasks in this paper, truecasing, it is interesting to see how the casing used during pre-training affects the target task performance.
In this work, we compare BERT models trained with and without casing when adapted for truecasing and punctuation tasks.
\subsection{Exploration of model performance on conversational text in low-resource scenarios}
Given the effort required, minimizing the amount of annotated conversational data needed to achieve strong model performance is desirable in practical applications.
For that reason, we investigate the performance on various subsets of the Fisher dataset, most of which can be considered low-resource scenarios.
Fig.~\ref{fig:finetuning_lowresource} shows the pipeline we follow in our experiments to achieve truecasing and punctuation prediction in low-resource scenarios.
We explore knowledge transfer from models trained on written text resources to conversational transcripts.
For this purpose, we use truecasing and punctuation prediction on Gutenberg dataset as an intermediate task.
That is, we first fine-tune the BERT-uncased model on the Gutenberg dataset to learn the truecasing and punctuation patterns in that dataset, and then fine-tune again on small amounts of Fisher training data.
We compare it with a model obtained without any intermediate task, i.e., fine-tuning the BERT-uncased model directly on the Fisher dataset.
We hypothesize that intermediate task optimization boosts performance on the Fisher dataset when only a few documents of Fisher are available. Ideally, a small number of conversations would be sufficient to bridge the gap between the two domains.
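Schematically, the pipeline of Fig.~\ref{fig:finetuning_lowresource} amounts to the following two-stage procedure, where \texttt{build\_model} and \texttt{train} are hypothetical placeholders for the model construction and the fine-tuning loop described above:
\begin{verbatim}
model = build_model()                    # BERT-uncased encoder + joint head
train(model, gutenberg_docs, lam=0.5)    # stage 1: intermediate task
train(model, fisher_docs[:50], lam=0.5)  # stage 2: low-resource target task
\end{verbatim}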
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/low_resource_punct_TC.jpg}
\caption{Fine-tuning process for low-resource scenarios. TC denotes truecasing; Punct. denotes punctuation; conv. denotes conversational}
\label{fig:finetuning_lowresource}
\end{figure}
\begin{table}
\centering
\caption{Label count for each class in our datasets. Perc. denotes percentage of the label}
\label{tab:label_count_dataset}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}l|l|ll|ll@{}}
\toprule
 & & \multicolumn{2}{c|}{Fisher} & \multicolumn{2}{c}{Gutenberg} \\
\midrule
& & Count & Perc. (\%) & Count & Perc. (\%) \\
\midrule
\multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}Punctuation\\ marks\end{tabular}} &
Blank &
12 425 398 &
72.05 &
11 019 746 &
86.44 \\
& Comma & 2 047 050 & 11.87 & 955 422 & 7.49 \\
& Ellipsis & 44 077 & 0.26 & 2 928 & 0.02 \\
& Exclamation & 9 922 & 0.06 & 31 731 & 0.25 \\
& FullStop & 1 454 839 & 8.44 & 559 424 & 4.39 \\
& Question & 220 998 & 1.28 & 32 677 & 0.26 \\
& SemiColon & 2 214 & 0.01 & 89 262 & 0.7 \\
& DoubleDash & 1 041 119 & 6.04 & 57 198 & 0.45 \\
\midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Truecasing\\ classes\end{tabular}} &
AUC &
841 697 &
4.88 &
184 877 &
1.45 \\
& LC & 14 259 782 & 82.69 & 11 400 740 & 89.43 \\
& UC & 2 144 138 & 12.43 & 1 162 771 & 9.12 \\
\bottomrule
\end{tabular}%
}
\end{table}
\section{Experimental setup}
\label{sec:experiments}
\subsection{Corpora}
\label{sec:experiments:corpora}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/datasets_plot_2.png}
\caption{Distribution of truecasing classes after a punctuation mark occurs in the Fisher and Gutenberg datasets. For example, more than 90\% of the words after Blank have LowerCase; around 60\% of the words after Exclamation have UpperCase in Fisher.}
\label{fig:datasets_punct_vs_TC}
\end{figure}
\textbf{The Fisher corpus}~\cite{cieri2004fisher} includes telephone call recordings along with rich manual transcriptions including capitalization and punctuation.
Participants across a variety of demographic categories had conversations about randomly pre-assigned topics to maximize inter-speaker variation and vocabulary breadth.
Transcriptions along with capitalization and punctuation marks were manually introduced by annotators based on automatically segmented recordings, where punctuation marks denote the pauses and intonations.
The Fisher dataset consists of 9168 transcripts for training, 509 for development and 510 for testing.
\textbf{The Gutenberg corpus} is a collection of over 50k fictional, multilingual books~\cite{gerlach2020standardized}.
It contains copyright-free books including biographies, fantasy, art, fiction, poetry and philosophy. The dataset has rich and high-quality punctuation annotation that was double-checked by the editor's team.
However, due to license restrictions, most of the books are dated between 1800 and 1930, making the language possibly outdated, especially when considering truecasing prediction.
In this work, we use 75\%, 5\% and 25\% splits for model training, development and testing.
\subsection{Data preparation for truecasing and punctuation}
We assign one truecasing and one punctuation label for every word.
For truecasing, we consider three labels, namely all upper casing (AUC), lower casing (LC) and upper casing (UC).
AUC is assigned for words with all upper case characters; LC is assigned for words with all lower case characters; UC is assigned for words starting with an upper case character and also for words with mixed casing.
For punctuation, we assign one punctuation label out of 8 possible labels to each word.
The punctuation marks considered in this work are Blank, Full-stop (.), Comma (,), Question mark (?), Exclamation mark (!), Semicolon (;), Double-dash (--), and Ellipsis (...).
When a punctuation mark appears after a word, we assign that mark to the word; otherwise we assign Blank.
Note that this setup diverges from previous works on the Fisher corpus~\cite{zelasko2018punctuation,augustyniak2020punctuation,sunkara2020multimodal} -- we included more punctuation classes specifically to investigate how well the models deal with less frequent punctuation marks that exist in both domains.
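A minimal sketch (ours, not the actual preprocessing script) of this per-word label assignment is:
\begin{verbatim}
PUNCT = {".": "FullStop", ",": "Comma", "?": "Question",
         "!": "Exclamation", ";": "SemiColon",
         "--": "DoubleDash", "...": "Ellipsis"}

def labels(word, trailing=""):
    if word.isupper():
        case = "AUC"   # all characters upper case
    elif word.islower():
        case = "LC"    # all characters lower case
    else:
        case = "UC"    # leading capital or mixed casing
    return case, PUNCT.get(trailing, "Blank")
\end{verbatim}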
The class counts and their percentages in the corresponding datasets are shown in Table~\ref{tab:label_count_dataset}.
The distributions of the token counts over the punctuation and truecasing classes are highly skewed.
As expected, the LC class frequency is much higher than that of AUC and UC in both datasets.
The truecasing class distribution is more similar between the datasets than the punctuation class distribution.
Blank dominates in both the datasets followed by Comma and FullStop.
One noticeable difference between the datasets is the use of DoubleDash: it occurs for 6.04\% of the tokens in Fisher whereas only for 0.45\% of the tokens in the Gutenberg dataset.
We observed that DoubleDash is mainly used at turn changes in the Fisher conversations.
Question mark usage is more frequent in Fisher compared to the Gutenberg dataset, which could be expected -- questions are the main tool for eliciting information from the other parties in conversations.
Ellipsis, although rare, is significantly more frequent in conversations, reflecting a larger portion of unfinished utterances.
Finally, semicolons are practically non-existent in conversation transcripts.
\section{Results}
\label{sec:results}
In this section, we present the results of our experiments on truecasing and punctuation.
First, we show the impact of casing and punctuation in the input document on each other's task, followed by a multi-task model to improve the performance of both tasks.
Then, we explore the model performance on conversational text when only limited data is available.
\subsection{Correlation between truecasing and punctuation}
\label{subsec:correlation_truecasing_punctuation}
\begin{table}
\caption{Effect of punctuation and truecasing on each other's prediction. We adapt BERT-cased model for this study. All numbers denote Macro F1 scores.} \label{table:effect_punct_casing}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
\multirow{2}{*}{Task} & \multicolumn{2}{c|}{Input documents with} & \multirow{2}{*}{Gutenberg } & \multirow{2}{*}{Fisher} \\
& Punctuation & Casing & & \\
\midrule
\multirow{2}{*}{Truecasing} & Yes & No & 97.23 & 97.54 \\
& No & No & 94.33 & 92.67 \\
\midrule
\multirow{2}{*}{Punctuation} & No & Yes & 80.16 & 50.49 \\
& No & No & 75.10 & 47.61 \\
\bottomrule
\end{tabular}
}
\end{table}
Fig.~\ref{fig:datasets_punct_vs_TC} presents the statistics of word casing for the next word after punctuation for Fisher and Gutenberg datasets.
As is the case with some of the basic rules of the English language, we can observe that the most frequent casing after a full stop, question mark, or exclamation mark is upper casing.
Similarly, lowercased words follow a comma, semicolon, or Blank the majority of the time.
We can observe that the most frequent casing after the double dash is different in the two datasets: lower casing (57.41\%) in Fisher and upper casing (65.38\%) in Gutenberg.
Calculating the Macro F1 when predicting the most frequent casing after each punctuation mark, we obtain 54.85\% for Fisher and 52.7\% for the Gutenberg dataset.
In this case, we almost never predict the AUC class and miss many UC words, which are likely the most important classes for applications like named entity recognition.
Table~\ref{table:effect_punct_casing} shows the impact of truecasing and punctuation on each other's prediction.
To evaluate the effect of punctuation on truecasing prediction, we experiment with and without including punctuation in the input documents.
Similarly, to quantify the effect of casing on punctuation prediction, we experiment with and without including casing in the input documents.
We fine-tune the BERT-cased model for this experiment as the tokenization process removes word casing in the BERT-uncased model.
We can observe that the truecasing performance drops by 2.9\% and 4.87\% absolute on Gutenberg and Fisher, respectively, without punctuation in the input.
The punctuation performance drops by 5.06\% and 2.88\% absolute on Gutenberg and Fisher, respectively, without casing in the input.
The performance drop can be explained by the data statistics presented in Fig.~\ref{fig:datasets_punct_vs_TC}.
Results of this experiment along with Fig.~\ref{fig:datasets_punct_vs_TC} strongly suggest that truecasing and punctuation are correlated.
Hence, we expect that their joint modelling will improve the performance of both tasks.
\begin{table}
\centering
\caption{Results with multi-tasking using \textbf{BERT-uncased} pre-trained model for fine-tuning. TC -- Truecasing task; Punct. -- Punctuation task. Best results (Macro F1 scores) in each column are in bold.}
\label{tab:multi_tasking_BERT_uncased}
\resizebox{0.8\columnwidth}{!}{%
\begin{tabular}{@{}c||c|c||c|c@{}}
\toprule
& \multicolumn{2}{c||}{Fisher} & \multicolumn{2}{c}{Gutenberg } \\
\cmidrule{2-5}
Lambda & TC & Punct. & TC & Punct. \\
\midrule
1 & 92.92 & - & 95.45 & - \\
0.9 & \textbf{93.06} & 46.86 & \textbf{95.85} & 73.45 \\
0.75 & \textbf{93.06} & 47.56 & 95.80 & 75.64 \\
0.5 & 93.02 & \textbf{48.06} & 95.71 & 76.23 \\
0.25 & 92.89 & 47.70 & 95.51 & 76.61 \\
0.1 & 92.66 & 47.65 & 95.24 & 76.76 \\
0 & - & 47.08 & - & \textbf{77.58} \\
\bottomrule
\end{tabular}%
}
\end{table}
\subsection{Experiments with multi-tasking}
Tables~\ref{tab:multi_tasking_BERT_uncased} and~\ref{tab:multi_tasking_BERT_cased} present the results of the multi-task models when fine-tuned from the BERT-uncased and BERT-cased pre-trained models, respectively, along with the corresponding baselines.
The multi-task model is trained with the objective function shown in~\eqref{eq:multi_task_obj}, where setting $\lambda$ to 1 provides a baseline for the truecasing task and setting it to 0 provides a baseline for the punctuation task.
We show experiments with $\lambda=0.1, 0.25, 0.5, 0.75, 0.9$ for the multi-task model to find the optimal value of $\lambda$.
For the truecasing task, multi-tasking provides improvements with $\lambda=0.9, 0.75, 0.5$ and sometimes even when $\lambda$ is less than 0.5.
We can observe improved performance for the punctuation task too, with $\lambda=0.1$ in all cases except when BERT-uncased is fine-tuned on Gutenberg.
This supports our hypothesis that joint modelling helps both the truecasing and punctuation tasks.
Comparing the BERT-cased (Table~\ref{tab:multi_tasking_BERT_cased}) and BERT-uncased (Table~\ref{tab:multi_tasking_BERT_uncased}) pre-trained models, the latter is better suited for the truecasing task.
The BERT-cased model was trained on the cased text and the BERT-uncased model on uncased text.
For the punctuation task, the BERT-uncased model provided better results on both Gutenberg and Fisher compared to the BERT-cased model, except for the Fisher baseline.
From these results, we can conclude that pre-training with uncased text is more suitable for both truecasing and punctuation tasks compared to training with cased text.
We can observe that the punctuation performance ranges are quite different for Fisher (around 47\%) and the Gutenberg dataset (around 75\%).
We suspect two reasons for the difference: 1) Gutenberg is a collection of books, which are usually well proof-read, so we can expect more consistent punctuation compared to the punctuation in conversations, and 2) the BERT models are pre-trained on written text.
Note that this is merely a comparison demonstrating the difficulty of recognizing punctuation in written and conversational text; it is not to be taken in a strict sense as the corresponding models are trained and tested on different data.
\begin{table}
\centering
\caption{Results with multi-tasking using \textbf{BERT-cased} pre-trained model for fine-tuning. TC -- Truecasing task; Punct. -- Punctuation task. Best results (Macro F1 scores) in each column are in bold.}
\label{tab:multi_tasking_BERT_cased}
\resizebox{0.8\columnwidth}{!}{%
\begin{tabular}{@{}c||c|c||c|c@{}}
\toprule
& \multicolumn{2}{c||}{Fisher} & \multicolumn{2}{c}{Gutenberg } \\
\cmidrule{2-5}
Lambda & TC & Punct. & TC & Punct. \\
\midrule
1 & 92.67 & - & 94.33 & - \\
0.9 & \textbf{93.00} & 46.56 & \textbf{95.47} & 71.93 \\
0.75 & 92.96 & 47.37 & 95.39 & 72.84 \\
0.5 & 92.93 & 47.58 & 95.21 & 75.40 \\
0.25 & 92.77 & 47.49 & 94.98 & 75.09 \\
0.1 & 92.51 & \textbf{47.63} & 94.85 & \textbf{75.86} \\
0 & - & 47.61 & \textbf{-} & 75.10 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Experiments on Fisher in low-resource scenarios}
\begin{table}
\centering
\caption{Results on the Fisher test set with various training dataset sizes. With Interm. Task denotes fine-tuning from the intermediate task model; W/O Interm. Task denotes fine-tuning the BERT-uncased model directly without an intermediate task. * denotes that the corresponding model is evaluated without fine-tuning.}
\label{tab:fisher_lowresource_setting}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}c|c|c|c|c@{}}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#Fisher \\ documents\end{tabular}} & \multicolumn{2}{c|}{Truecasing} & \multicolumn{2}{c}{Punctuation} \\
\cmidrule{2-5}
& With Interm. Task & W/O Interm. Task & With Interm. Task & W/O Interm. Task \\
\midrule
0 & \, \, 73.95* & - & \, \, 21.62* & - \\
50 & 72.76 & 32.17 & 26.22 & 9.74 \\
100 & 88.69 & 76.33 & 37.93 & 22.65 \\
250 & 90.98 & 89.29 & 42.39 & 35.09 \\
500 & 91.68 & 91.07 & 43.64 & 42.91 \\
1000 & 92.08 & 91.76 & 45.01 & 45.13 \\
5000 & 92.76 & 92.78 & 47.4 & 46.81 \\
9168 & \textbf{92.99} & \textbf{93.02} & \textbf{47.83} & \textbf{48.06} \\
\bottomrule
\end{tabular}%
}
\end{table}
\begin{table*}[ht]
\centering
\caption{Comparison of punctuation class-wise f1-scores of models fine-tuned with different amounts of Fisher data.}
\label{tab:classwise_f1_lowresource}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}c|cccccccc|cccccccc@{}}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#Fisher \\ Documents\end{tabular}}& \multicolumn{8}{c|}{With Intermediate Task} & \multicolumn{8}{c}{Without Intermediate Task} \\
\cmidrule{2-17}
& Blank & Comma & Ellipsis & Exclamation & FullStop & Question & SemiColon & DoubleDash & Blank & Comma & Ellipsis & Exclamation & FullStop & Question & SemiColon & DoubleDash \\
\midrule
0 & 91.81 & 46.93 & 0 & 0 & 7.01 & 27.21 & 0 & 0 & - & - & - & - & - & - & - & - \\
50 & 91.48 & 42.92 & 0 & 1.66 & 42.93 & 30.79 & 0 & 0.01 & 53.37 & 7.71 & 0.31 & 0.07 & 9.63 & 0 & 0 & 6.80 \\
100 & 93.65 & 55.67 & 0 & 0 & 63.50 & 49.69 & 0 & 40.94 & 91.74 & 44.49 & 0 & 0 & 44.97 & 0 & 0 & 0.02 \\
250 & 94.50 & 60.55 & 0 & 0 & 67.53 & 58.00 & 0 & 58.55 & 93.91 & 57.05 & 0 & 0 & 62.43 & 12.12 & 0 & 55.23 \\
500 & 94.68 & 61.78 & 0 & 0.63 & 68.91 & 61.61 & 0 & 61.49 & 94.42 & 59.91 & 6.14 & 0 & 66.97 & 55.72 & 0 & 60.10 \\
1000 & 94.87 & 62.89 & 3.98 & 0.64 & 69.94 & 63.96 & 0 & 63.81 & 94.73 & 62.15 & 10.28 & 0 & 69.01 & 61.90 & 0 & 62.97 \\
5000 & 95.17 & 64.56 & 12.23 & 0.65 & 71.72 & 67.67 & 0 & 67.24 & 95.12 & 64.21 & 7.49 & 0.70 & 71.65 & 67.75 & 0 & 67.56 \\
9168 & 95.25 & 64.88 & 10.83 & 3.14 & 72.22 & 68.61 & 0 & 67.74 & 95.27 & 65.06 & 10.58 & 3.28 & 72.4 & 69.34 & 0 & 68.53 \\
\bottomrule
\end{tabular}%
}
\end{table*}
We evaluate the re-usability of truecasing and punctuation models trained on written text (Gutenberg) for conversational text (Fisher).
For this purpose, we fine-tune the BERT-uncased model on the Gutenberg dataset with $\lambda$ equal to 0.5 as an intermediate task (see Fig.~\ref{fig:finetuning_lowresource}).
Then, we fine-tune again on a limited number of transcriptions from Fisher and compare it with the BERT-uncased model fine-tuned on Fisher directly.
Table~\ref{tab:fisher_lowresource_setting} shows the results on the Fisher dataset when only a limited number of annotated conversational documents are available.
Each row, for example the 2$^{nd}$ row, can be interpreted as follows: we achieve 72.76\% truecasing performance on the Fisher test set when using an intermediate task and only 50 Fisher documents are available for training.
Similarly, 32.17\% in the same row can be read as: we achieve 32.17\% truecasing performance when we fine-tune the BERT-uncased model on only 50 Fisher documents directly without any intermediate task.
For both tasks, the performance improved with larger training data sizes.
We can observe significant performance improvements when training with 100 documents compared to training with 50 documents.
Adding more documents to the training beyond 250 documents has little effect on the truecasing task.
For punctuation, on the other hand, the performance steadily improved with more training samples, although with relatively smaller improvements after 500 documents.
Table~\ref{tab:classwise_f1_lowresource} shows class-wise f1-scores for punctuation corresponding to the Macro F1 scores presented in Table~\ref{tab:fisher_lowresource_setting}.
We can observe 91.81\%, 46.93\% and 27.21\% for Blank, Comma and Question mark when the intermediate task model (trained with Gutenberg) is evaluated on Fisher documents.
Significant improvements for the punctuation marks FullStop, Question and DoubleDash come at lower training data sizes (50-250 documents), followed by steady improvement with more training documents.
We can observe that fine-tuning the Gutenberg dataset model provided better results compared to fine-tuning the BERT-uncased model in most cases.
For the Question mark, fine-tuning from the BERT-uncased model yielded only a 12.12\% f1-score even with 250 Fisher documents, whereas fine-tuning the written text model provided 30.79\% with just 50 documents, suggesting that the usage of the Question mark is similar in both the Gutenberg and Fisher datasets.
A recall of 45.32\% for FullStop (f1-score of 42.93\% in Table~\ref{tab:classwise_f1_lowresource}) implies that reasonable sentence segmentation can be achieved with 50 spoken documents for training.
The most frequently used punctuation marks Blank, Comma, FullStop and Question are recognized decently with just 50 Fisher documents when the Gutenberg dataset model is fine-tuned.
From a practical point of view, it implies that most of the spoken documents can be enriched with punctuation even with very little annotated conversational data.
Other punctuation marks, Ellipsis, Exclamation and SemiColon are not recognized even with the larger set of training documents.
We suspect that their inconsistent usage and low frequency in the training data could have contributed to their poor performance.
\section{Conclusions and future work}
\label{sec:conclusions}
In this paper, we presented a multi-tasking model for truecasing and punctuation to take advantage of the correlations between them.
Our experiments have shown that multi-task modelling improves the performance of both tasks.
Experiments with fine-tuning BERT-based pre-trained models revealed that pre-training with casing provides inferior results compared to pre-training with uncased text on truecasing and punctuation prediction.
Through knowledge transfer from written text to conversational text models, we found that as few as 50 annotated spoken documents can provide decent performance for the most frequently used punctuation marks such as Comma, FullStop, and Question mark.
Adding more data provided larger improvements up to a training set size of 500 documents and smaller improvements thereafter.
In future work, we plan to investigate the transfer of knowledge from other datasets that are similar to conversational text to improve truecasing and punctuation task performance on conversational text.
We also plan to explore cross-lingual knowledge transfer to minimize annotation costs in low-resource languages.
\bibliographystyle{IEEEbib}
\section{Introduction}
\vspace{-2mm}
In the last decade, distributed optimisation \cite{Boyd11ADMM} has drawn increasing attention due to the demand for massive-data processing and easy remote access to ubiquitous computing units (e.g., a computer or a mobile phone) over a network. Its basic principle is to allocate the data over a set of computing units instead of one server and then allow the computing units to collaborate with each other in a distributed manner to iteratively obtain a global solution (e.g., a machine learning (ML) model) of an optimisation problem which is formulated via the data. In general, the typical challenges faced by distributed optimisation include, for instance, data-heterogeneity across the network, expensive communication, data-privacy requirements, massive scalability, and heterogeneous local computational resources \cite{Dimakis10GossipAlg, Li19Fed}. Depending on the applications, various methods have been developed for addressing one or more challenges in the considered network (e.g., \cite{Boyd06gossip, Richtarik16Dist, Li14Dist}).
Considering the application of distributed optimisation for learning an ML model, distributed learning \cite{Zhang16PDMM, Kenta20KDD} over a decentralised (i.e., peer-to-peer (P2P)) network and federated learning \cite{Kairouz19Fed} over a centralised (i.e., server-client topology) network have been two of the most active research topics in recent years. In a P2P network, network nodes can be connected arbitrarily in an equal relationship. In this situation, distributed optimisation methods are designed to be node-independent w.r.t. local computation and communication to enable network scalability. The algorithms in the literature can be roughly classified as either average-consensus based or primal-dual based.
In brief, the average-consensus approach \cite{Li20gradientTrack, Li19GradTrack, Blot16Gossip} allows the network nodes to share and average (or fuse) the estimated models to be learned among neighbours iteratively until reaching global consensus. On the other hand, the primal-dual approach \cite{Zhang16PDMM, Kenta20KDD, Rajawat20PDMM} intends to explicitly represent the neighbouring consensus requirements via linear equality constraints in terms of neighbouring model variables and then iteratively solve the reformulated optimisation problem via either Peaceman-Rachford (PR) splitting or Douglas-Rachford (DR) splitting (e.g., \cite{OksendalBook03, Ryu16Mono}). In particular, the alternating direction method of multipliers (ADMM) \cite{Giselsson17ADMM} and the primal-dual method of multipliers (PDMM) \cite{Zhang16PDMM,Sherson17PDMM} are two known algorithms based on DR splitting and PR splitting, respectively. One major advantage of the second approach is that it is able to handle heterogeneous\footnote{Alternatively referred to as non i.i.d. data across different network nodes. } data implicitly by imposing linear equality constraints w.r.t. model variables.
Federated learning focuses on networks with server-client topologies \cite{Kairouz19Fed}. In the learning procedure, the server is responsible for collecting, fusing, and broadcasting information from/to all the clients while each client only needs to communicate with the server directly, which makes it easily implementable. In general, federated learning is more time-effective through global information collection and spread at the cost of limited scalability than distributed learning over a P2P network \cite{Li19Fed}. The algorithms developed for a P2P network (e.g., \cite{Kenta20KDD}) can often be utilised for federated learning by viewing the server-client structure as a special type of P2P network. Recent developed algorithms for federated learning include, for example, FEDAC \cite{Yuan20Fed}, FedSplit \cite{Pathak2021}, and SCAFFOLD \cite{Karimireddy20SCAFFOLD}. SCAFFOLD can be viewed as belonging to the primal-dual approach due to the introduced covariates in its update expressions for compensating the functional heterogeneity.
In this paper, we revisit the primal-dual method of multipliers (PDMM) proposed in \cite{Zhang16PDMM, Connor17PDMM}. The method was originally designed to solve a decomposable optimisation problem over a graphical model $\mathcal{G}=(\mathcal{V},\mathcal{E})$:
\begin{align}
\hspace{-3mm}\min_{\{\boldsymbol{x}_i\}}\hspace{-0.6mm}\Big(\hspace{-0.3mm} \sum_{i\in \mathcal{V}} f_i(\boldsymbol{x}_i) \hspace{-0.3mm}\Big) \; \textrm{s. t.} \; \boldsymbol{B}_{i|j} \boldsymbol{x}_i \hspace{-0.6mm}=\hspace{-0.6mm} \boldsymbol{B}_{j|i} \boldsymbol{x}_j \; \forall (i,j)\in \mathcal{E},
\label{equ:optiGeneral}
\end{align}
where the notation $\textrm{s.~t.}$ stands for ``subject to", $\mathcal{V}$ and $\mathcal{E}$ represent the sets of nodes and undirected edges respectively, and $f_i(\cdot)$ denotes the local function at node $i\in \mathcal{V}$. The two constant matrices $\boldsymbol{B}_{i|j}$ and $\boldsymbol{B}_{j|i}$ specify the linear equality constraint for $(i,j)\in \mathcal{E}$. As PDMM belongs to PR splitting, it enjoys the benefit that PR splitting gives the best convergence bounds with proper parameter setups for a certain class of functions \cite[Remark 4]{Giselsson17ADMM}. The recent work \cite{Kenta20KDD} has successfully applied Inexact PDMM (or gradient based PDMM) for training deep neural networks (DNNs) over P2P networks to the case of heterogeneous data. In \cite{Rajawat20PDMM}, the authors successfully extend PDMM by incorporating SAGA, L-SVRG, and SVRG++ over P2P networks. The performance of PDMM for centralised networks remains to be explored.
This paper studies the relationship between PDMM and the two methods FedSplit and SCAFFOLD from the literature for optimisation over centralised networks. Our contributions are three-fold. Firstly, it is found that PDMM reduces to FedSplit when applied to a centralised network. We identify the cause of the poor reported performance of Inexact FedSplit (i.e., gradient based FedSplit) in \cite{Pathak2021} as being the improper parameter initialisation at the client side per iteration.
Secondly, to correct the issue of Inexact FedSplit, we propose two versions of inexact PDMM, which are referred to as gradient-based PDMM (GPDMM) and accelerated GPDMM (AGPDMM), respectively. It is noted that GPDMM only needs to transmit one variable (a combination of a primal variable and a dual variable) per iteration between the server and clients. To accelerate the convergence speed of GPDMM, AGPDMM is designed to transmit two variables (a primal variable and a dual variable) per iteration from the server to the clients. Linear convergence rates for strongly convex cases and sublinear convergence rates for general convex cases are then established for GPDMM, which lead to tighter convergence bounds than those in \cite{Pathak2021}. We note that, in principle, the analysis results in \cite{Connor17PDMM, Rajawat20PDMM} for GPDMM over a decentralised network also hold for centralised networks. However, \cite{Connor17PDMM} only shows the convergence of GPDMM while the recent work \cite{Rajawat20PDMM} only shows the sublinear convergence rates.
Thirdly, it is found that both AGPDMM and SCAFFOLD reduce to the vanilla gradient descent operation under proper parameter setup when the number $K$ of gradient steps at the client side per iteration is set to $K=1$. Experimental results show that GPDMM produces slightly worse performance than SCAFFOLD which transmits two variables between the server and clients per iteration. On the other hand, AGPDMM converges faster than SCAFFOLD when $K>1$.
\vspace{-3mm}
\section{Problem Description}
\vspace{-1mm}
\noindent\textbf{Notation and definition of a convex conjugate function}: We use bold small letters to denote vectors and bold capital letters to denote matrices. In particular, $\boldsymbol{I}$ denotes the identity matrix. The superscript $(\cdot)^T$ represents the transpose operator. Given a vector $\boldsymbol{y}$, we use $\|\boldsymbol{y}\|$ to denote its $l_2$ norm. Given a graphical model $\mathcal{G}=(\mathcal{V}, \mathcal{E})$, we use $\mathcal{N}_i$ to denote the set of neighbours for node $i$. Suppose $h:\mathbb{R}^n\rightarrow \mathbb{R}\cup \{+\infty\}$ is a closed, proper and convex function. Then the conjugate of $h(\cdot)$ is defined as \cite{SawaragiBook85}[Definition 2.1.20]
\begin{align}
h^{\ast}(\boldsymbol{\delta})\stackrel{\Delta}{=} \max_{\boldsymbol{y}} \boldsymbol{\delta}^T\boldsymbol{y}-h(\boldsymbol{y}), \label{equ:conj_def}
\end{align}
where the conjugate function $h^{\ast}$ is again a closed, proper and convex function.
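For instance, for the quadratic function $h(\boldsymbol{y})=\frac{1}{2}\|\boldsymbol{y}\|^2$, the maximisation in (\ref{equ:conj_def}) is attained at $\boldsymbol{y}=\boldsymbol{\delta}$, which gives $h^{\ast}(\boldsymbol{\delta})=\frac{1}{2}\|\boldsymbol{\delta}\|^2$, i.e., the quadratic function is self-conjugate.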
\noindent\textbf{Problem settings}: As a special case of (\ref{equ:optiGeneral}), we focus on a network of one server responsible for coordinating the learning process of $m$ clients, which can be represented as
\begin{align}
\hspace{-3mm}\min_{\{\boldsymbol{x}_s, \boldsymbol{x}_i\in\mathbb{R}^d \}}\hspace{-0.6mm}\left(\hspace{-0.3mm} \sum_{i=1}^m f_i(\boldsymbol{x}_i) \hspace{-0.3mm}\right) \; \textrm{s. t.} \; \boldsymbol{x}_s \hspace{-0.6mm}=\hspace{-0.6mm} \boldsymbol{x}_i \; i=1,\ldots, m,
\label{equ:optiFed}
\end{align}
where the edge set $\mathcal{E}$ in the graph is $\mathcal{E}=\{(i,s)\}_{i=1}^m$, the server function $f_s(\boldsymbol{x}_s)=0$, and each client function $f_i: \mathbb{R}^{d}\rightarrow \mathbb{R}$ is both continuously differentiable with an $L$-Lipschitz continuous gradient, $L>0$ \cite{Zhou18Duality},
\begin{align}
&\hspace{-3mm}f_i(\boldsymbol{y}_i) \geq f_i(\boldsymbol{x}_i) \hspace{-0.6mm}+\hspace{-0.6mm} \nabla f_i(\boldsymbol{x}_i)^T(\boldsymbol{y}_i \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i) \hspace{-0.6mm}\nonumber \\
&\hspace{8mm}+ \frac{1}{2L}\| \nabla f_i(\boldsymbol{x}_i) \hspace{-0.7mm}- \hspace{-0.7mm} \nabla f_i(\boldsymbol{y}_i) \|^2, \label{equ:gradLips2}
\end{align}
and (strongly) convex
\begin{align}
&\hspace{-3mm}f_i(\boldsymbol{y}_i) \geq f_i(\boldsymbol{x}_i) \hspace{-0.6mm}+\hspace{-0.6mm} \nabla f_i(\boldsymbol{x}_i)^T(\boldsymbol{y}_i \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i) \hspace{-0.6mm}+ \hspace{-0.6mm} \frac{\mu}{2}\| \boldsymbol{x}_i \hspace{-0.7mm}- \hspace{-0.7mm} \boldsymbol{y}_i \|^2, \label{equ:muStrong}
\end{align}
for all $ \boldsymbol{y}_i\in \mathbb{R}^d, \boldsymbol{x}_i \in \mathbb{R}^d$. It is noted that convergence analysis for GPDMM will be conducted for both strong convexity ($\mu>0$) and general convexity ($\mu=0$) later on.
It is worth noting that (\ref{equ:gradLips2}) is essential to prove the linear convergence speed of GPDMM later on. In principle, the gradient difference $\|\nabla f_i(\boldsymbol{x}_i) - \nabla f_i(\boldsymbol{y}_i)\|^2$ is able to capture how the estimates of the dual variables of the method evolve over iterations.
The Lagrangian function for (\ref{equ:optiFed}) can be constructed as
\begin{align}
\mathcal{L}(\boldsymbol{x}_s, \{\boldsymbol{x}_i, \boldsymbol{\delta}_i\})= \sum_{i=1}^m f_i(\boldsymbol{x}_i) + \sum_{i=1}^m\boldsymbol{\delta}_{i}^T(\boldsymbol{x}_s -\boldsymbol{x}_i), \label{equ:fed_Lag}
\end{align}
where $\{\boldsymbol{\delta}_i\}$ are the Lagrangian multipliers, and can also be viewed as the \emph{dual} variables as opposed to the primal variables $\boldsymbol{x}_s$ and $ \{\boldsymbol{x}_i\}$. We assume there exists a saddle point $\boldsymbol{x}_s^{\star}, \{\boldsymbol{x}_i^{\star}, \boldsymbol{\delta}_i^{\star}\}$ for (\ref{equ:fed_Lag}). The corresponding KKT conditions are given by
\begin{align}
\nabla f_i(\boldsymbol{x}_i^{\star}) = \boldsymbol{\delta}_i^{\star}\; \forall i, \quad \boldsymbol{x}_i^{\star} = \boldsymbol{x}_s^{\star} \; \forall i, \quad \sum_{i=1}^m \boldsymbol{\delta}_i^{\star} = 0. \label{equ:KKT3}
\end{align}
The research goal is to obtain a good estimate of $\boldsymbol{x}_s^{\star}$ via local computation and communication between the server and the $m$ clients after a reasonable number of iterations. We will propose two versions of Inexact PDMM by inspection of the update expressions of PDMM later on to reduce the computational complexity of PDMM per iteration.
\vspace{-2mm}
\section{Relationship between PDMM and FedSplit}
\vspace{-1mm}
\label{sec:PDMM_Fedsplit}
In this section, we first briefly describe the updating procedure of PDMM for both the general problem (\ref{equ:optiGeneral}) and the special case (\ref{equ:optiFed}). We will then explain that the recently developed method \emph{FedSplit} is identical to PDMM for solving the special problem (\ref{equ:optiFed}). After that, the poor performance of Inexact FedSplit in \cite{Pathak2021} will be studied.
\vspace{-2mm}
\subsection{ PDMM}
\vspace{-1mm}
\noindent \textbf{Iterates over a general graph}: Before introducing the method, we first present the dual problem for (\ref{equ:optiGeneral}), which can be obtained by constructing and optimising the so-called (primal) Lagrangian function
\begin{align}
\hspace{-1mm}&\hspace{0mm}\max_{ \{\boldsymbol{\delta}_{ij}\} }\min_{\{\boldsymbol{x}_i\}} \Big( \sum_{i\in \mathcal{V}}\hspace{-0.6mm} f_i(\boldsymbol{x}_i)\hspace{-0.6mm}-\hspace{-2mm}\sum_{(i,j)\in \mathcal{E}}\hspace{-1.5mm}\boldsymbol{\delta}_{ij}^{T}(\boldsymbol{B}_{i|j}\boldsymbol{x}_i\hspace{-0.6mm}-\hspace{-0.8mm}\boldsymbol{B}_{j|i}\boldsymbol{x}_j\hspace{-0.3mm}) \Big) \nonumber \\
\hspace{-1mm}&\stackrel{(a)}{\footnotesize \Longleftrightarrow} \hspace{-2mm} \max_{ \{\boldsymbol{\lambda}_{i|j}, \boldsymbol{\lambda}_{j|i}\} } \min_{\{\boldsymbol{x}_i\}} \hspace{-1mm} \sum_{i\in \mathcal{V}}\hspace{-1.3mm} \Big( f_i(\boldsymbol{x}_i) \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{x}_i^T \hspace{-1.2mm}\sum_{j\in \mathcal{N}_i}\hspace{-1.5mm} \boldsymbol{B}_{i|j}^T\boldsymbol{\lambda}_{i|j}\hspace{-0.6mm} \Big), \hspace{-1.2mm}\; \left\{\hspace{-2mm} \begin{array}{l} \boldsymbol{\lambda}_{i|j} \hspace{-0.7mm}=\hspace{-0.7mm} -\hspace{-0.7mm} \boldsymbol{\lambda}_{j|i} \\ \forall (i,j)\in \mathcal{E} \end{array}\right.
\nonumber \\
\hspace{-1mm}&\stackrel{(b)}{\footnotesize \Longleftrightarrow} \hspace{-1.5mm} \max_{ \{\boldsymbol{\lambda}_{i|j}, \boldsymbol{\lambda}_{j|i}\} } \sum_{i\in \mathcal{V}}\hspace{-0.6mm} -f_i^{\ast} \Big( \sum_{j\in \mathcal{N}_i}\boldsymbol{B}_{i|j}^T\boldsymbol{\lambda}_{i|j} \Big), \hspace{-1.2mm}\; \left\{\hspace{-2mm} \begin{array}{l} \boldsymbol{\lambda}_{i|j} = -\boldsymbol{\lambda}_{j|i} \\ \forall (i,j)\in \mathcal{E} \end{array}\right. \hspace{-4mm} ,
\label{equ:dualGen}
\end{align}
where $\boldsymbol{\delta}_{ij}$ is the Lagrangian multiplier (or the dual variable) for each constraint $\boldsymbol{B}_{i|j}\boldsymbol{x}_i=\boldsymbol{B}_{j|i}\boldsymbol{x}_j$, which, by using the lifting technique \cite{Zhang16PDMM}, can be further replaced by two dual variables $(\boldsymbol{\lambda}_{i|j},\boldsymbol{\lambda}_{j|i})$ under the constraint $\boldsymbol{\lambda}_{i|j}=-\boldsymbol{\lambda}_{j|i}$ in step $(a)$. The variable $\boldsymbol{\lambda}_{i|j}$ is owned by node $i$ and is related to neighbour $j$. It is noted that $\mathcal{N}_i$ denotes the set of neighbours for node $i$. $f_i^{\ast}$ in step $(b)$ is the conjugate function of $f_i$ (see (\ref{equ:conj_def}) for the definition). We use $\boldsymbol{\lambda}_i$ to denote the vector obtained by concatenating all $\boldsymbol{\lambda}_{i|j}$, $j\in\mathcal{N}_i$. Finally, we let $\boldsymbol{\lambda}=[\boldsymbol{\lambda}_1^T,\ldots,\boldsymbol{\lambda}_{|\mathcal{V}|}^T]^T$ and $\boldsymbol{x}=[\boldsymbol{x}_1^T,\ldots,\boldsymbol{x}_{|\mathcal{V}|}^T]^T$, where the dimension of $\boldsymbol{\lambda}$ depends on the network topology.
Instead of solving the primal problem (\ref{equ:optiGeneral}) or the dual one (\ref{equ:dualGen}) separately, PDMM is designed to iteratively approach a saddle point of an augmented primal-dual Lagrangian function obtained by combining (\ref{equ:optiGeneral}) and (\ref{equ:dualGen}) \cite{Zhang16PDMM}:
\begin{align}
\hspace{-4mm}\mathcal{L}_{\rho}(\boldsymbol{x},\boldsymbol{\lambda}&)=\sum_{i\in \mathcal{V}} \hspace{-0.6mm}\Big[f_i(\boldsymbol{x}_i)+\hspace{-1.5mm}\sum_{j\in \mathcal{N}_i}\hspace{-0.5mm}\boldsymbol{\lambda}_{j|i}^T(\boldsymbol{B}_{i|j}\boldsymbol{x}_i) \nonumber\\
&\hspace{-3mm}-\hspace{-0.5mm}f_i^{\ast}\Big(\sum_{j\in\mathcal{N}_i}\boldsymbol{B}_{i | j}^T\boldsymbol{\lambda}_{i|j}\Big)\hspace{-0.5mm}\Big]
\hspace{-0.5mm}+h_{\rho}(\boldsymbol{x})\hspace{-0.5mm}-\hspace{-0.5mm}g_{\rho}(\boldsymbol{\lambda})
\label{equ:PDLag2}
\end{align}
where $h_{\rho}(\boldsymbol{x})$ and $g_{\rho}(\boldsymbol{\lambda})$ are defined as
\begin{align}
\hspace{-1mm}h_{\rho }(\boldsymbol{x})=&\hspace{-1mm}\sum_{(i,j)\in \mathcal{E}} \hspace{-0.5mm}\frac{\rho }{2}\left\|\boldsymbol{B}_{i | j}\boldsymbol{x}_{i}-\boldsymbol{B}_{j | i}\boldsymbol{x}_{j}\right\|^2\label{equ:quadFunP}\\
g_{\rho}(\boldsymbol{\lambda})=&\sum_{(i,j)\in \mathcal{E}}\frac{1}{2\rho }\left\|\boldsymbol{\lambda}_{i|j}+\boldsymbol{\lambda}_{j|i}\right\|^2\hspace{-1mm},
\label{equ:quadFunD}
\end{align}
where $\rho>0$. $\mathcal{L}_{\rho}$ is convex in $\boldsymbol{x}$ and concave in $\boldsymbol{\lambda}$.
Synchronous PDMM optimises $\mathcal{L}_{\rho}$ by updating $\boldsymbol{x}$ and $\boldsymbol{\lambda}$ simultaneously per iteration through node-oriented computation. At iteration $r$, each node $i$ computes a new estimate $\boldsymbol{x}_i^{r+1}$ by locally solving a small-size optimisation problem based on the neighbouring estimates $\{\boldsymbol{x}^r_j | j \in \mathcal{N}_i \}$ and $\{\boldsymbol{\lambda}^r_{j|i} | j \in \mathcal{N}_i\}$ from the last iteration:
\begin{align}
\boldsymbol{x}_i^{r+1} =&\arg\min_{\boldsymbol{x}_i} \Big[f_i(\boldsymbol{x}_i)+ \sum_{j\in \mathcal{N}_i}(\boldsymbol{\lambda}_{j|i}^{r})^T\boldsymbol{B}_{i|j}\boldsymbol{x}_i \nonumber \\
&+ \sum_{j\in \mathcal{N}_i}\frac{\rho }{2}\| \boldsymbol{B}_{i|j}\boldsymbol{x}_i \hspace{-0.6mm} - \hspace{-0.6mm} \boldsymbol{B}_{j|i}\boldsymbol{x}_j^{r} \|^2 \Big], \quad i\in \mathcal{V}. \label{equ:x_update}
\end{align}
In principle, each estimate $\boldsymbol{\lambda}_{i}^{r+1}$ can be obtained similarly by solving a small-size optimisation problem that involves the conjugate function $f_i^{\ast}$ from (\ref{equ:PDLag2}). It is shown in \cite{Zhang16PDMM} that once $\boldsymbol{x}_i^{r+1} $ is obtained, $\{\boldsymbol{\lambda}_{i}^{r+1}\}$ can be computed directly as:
\begin{align}
\hspace{-2mm}\boldsymbol{\lambda}_{i|j}^{r+1} \hspace{-1mm}=& \rho (\boldsymbol{B}_{j|i}\boldsymbol{x}_j^r \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{B}_{i|j}\boldsymbol{x}_i^{r+1} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{j|i}^r, \quad i\in \mathcal{V},\; j\in \mathcal{N}_i. \label{equ:lambda_update_2nd}
\end{align}
One can also design an asynchronous updating procedure for PDMM, where the network nodes are activated asynchronously for parameter updating at different iterations (see \cite{Zhang16PDMM} for more details).
We note that the above description of $\mathcal{L}_{\rho}$ and the update expressions (\ref{equ:x_update})-(\ref{equ:lambda_update_2nd}) for PDMM build a foundation for the convergence analysis later on. The general linear constraints $\{\boldsymbol{B}_{i|j} \boldsymbol{x}_i \hspace{-0.6mm}=\hspace{-0.6mm} \boldsymbol{B}_{j|i} \boldsymbol{x}_j\}$ in (\ref{equ:optiGeneral}) enable PDMM to cover a broader class of problems than methods that focus only on the special constraints $\{\boldsymbol{x}_i \hspace{-0.6mm}=\hspace{-0.6mm} \boldsymbol{x}_j\}$. Another nice property of PDMM is that two dual variables $(\boldsymbol{\lambda}_{i|j}, \boldsymbol{\lambda}_{j|i})$ are introduced per linear constraint, which makes the update expressions node-oriented and thus facilitates practical implementation. It is shown in \cite{Sherson17PDMM} that PDMM can alternatively be derived from Peaceman-Rachford (PR) splitting by using monotone operator theory \cite{Ryu16Mono}.
\noindent \textbf{Iterates over the server-client graph for (\ref{equ:optiFed})}: We now consider applying PDMM to the problem (\ref{equ:optiFed}) by setting $\boldsymbol{B}_{i|s} =\boldsymbol{B}_{s|i}=\boldsymbol{I}$ for all the edges $(i,s)\in \mathcal{E}$. Instead of performing synchronous updates, we let the server compute the estimates $(\boldsymbol{x}_s^{r+1},\{\boldsymbol{\lambda}_{s|i}^{r+1}\})$ only after receiving the estimates $\{\boldsymbol{x}_i^{r+1}, \boldsymbol{\lambda}_{i|s}^{r+1}\}$ from the clients at iteration $r$. That is, at iteration $r$, the server uses the most up-to-date estimates $\{\boldsymbol{x}_i^{r+1}, \boldsymbol{\lambda}_{i|s}^{r+1}\}$ from the clients instead of the old estimates $\{\boldsymbol{x}_i^{r}, \boldsymbol{\lambda}_{i|s}^{r}\}$ in computing $(\boldsymbol{x}_s^{r+1},\{\boldsymbol{\lambda}_{s|i}^{r+1}\})$. By inspection of (\ref{equ:x_update})-(\ref{equ:lambda_update_2nd}), one can then derive the following update expressions with a slight index modification:
\begin{align}
&\hspace{-2mm} \textrm{clients}\hspace{-1mm}\left\{ \hspace{-2mm}\begin{array}{l}
\hspace{-0mm}\boldsymbol{x}_i^{r+1} \hspace{-1mm}=\hspace{-1mm} \arg\min_{\boldsymbol{x}_i} \hspace{-1mm} \Big[f_i(\boldsymbol{x}_i) \hspace{-0.7mm}+\hspace{-0.7mm} \frac{\rho}{2}\|\boldsymbol{x}_i \hspace{-0.7mm} - \hspace{-0.7mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} +\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r}/\rho \|^2 \Big] \\
\hspace{0mm}\boldsymbol{\lambda}_{i|s}^{r+1} = \rho (\boldsymbol{x}_s^r \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i^{r+1} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^r \end{array}\right. \label{equ:client_update} \\
&\hspace{-1mm} \textrm{server} \hspace{-1mm}\left\{ \hspace{-2mm}\begin{array}{l}
\hspace{-0mm}\boldsymbol{x}_s^{r+1} \hspace{-0.7mm}=\hspace{-0.7mm} \frac{1}{m}\sum_{i=1}^m (\boldsymbol{x}_i^{r+1} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1}/\rho ) \\
\hspace{0mm}\boldsymbol{\lambda}_{s|i}^{r+1} = \rho (\boldsymbol{x}_i^{r+1} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_s^{r+1} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1} \end{array}\right. \hspace{-2mm}, \label{equ:server_update}
\end{align}
where the computation for $\boldsymbol{x}_{s}^{r+1}$ uses the fact that $f_s(\boldsymbol{x}_s)=0$.
Next we briefly discuss the variables that must be transmitted between the server and the clients per iteration for PDMM to work. It is noted from (\ref{equ:client_update}) that at iteration $r$, each client $i$ only needs the quantity $ \boldsymbol{x}_s^{r} \hspace{-0.7mm} -\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r}/\rho $ from the server for the computation of $(\boldsymbol{x}_i^{r+1}, \boldsymbol{\lambda}_{i|s}^{r+1})$. Similarly, the server only needs the quantity $ \boldsymbol{x}_i^{r+1} \hspace{-0.7mm} -\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1}/\rho $ from client $i$ to update $\boldsymbol{x}_s$ and $\boldsymbol{\lambda}_{s|i}$. That is, the server and each client need to transmit only one variable to each other per iteration, where the variable is a combination of the primal and dual estimates.
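To make the message-passing structure concrete, a minimal Python sketch of the iterates (\ref{equ:client_update})-(\ref{equ:server_update}) is given below. It assumes quadratic client losses $f_i(\boldsymbol{x})=\frac{1}{2}\|\boldsymbol{A}_i\boldsymbol{x}-\boldsymbol{b}_i\|^2$ so that the $\boldsymbol{x}_i$-update has a closed form; the problem sizes, random data, and variable names are illustrative only.
\begin{verbatim}
# Sketch of PDMM over a server-client graph with quadratic
# client losses f_i(x) = 0.5*||A_i x - b_i||^2 (illustrative).
import numpy as np

rng = np.random.default_rng(0)
m, d, n, rho = 5, 10, 40, 1.0
A = [rng.standard_normal((n, d)) for _ in range(m)]
b = [rng.standard_normal(n) for _ in range(m)]
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(A[i].T @ b[i] for i in range(m)))

x_s = np.zeros(d)
lam_si = [np.zeros(d) for _ in range(m)]  # lambda_{s|i}

for r in range(200):
    # server -> client i: the single variable x_s - lam_{s|i}/rho
    z_down = [x_s - lam_si[i] / rho for i in range(m)]
    msgs = []
    for i in range(m):
        # closed-form minimiser of f_i(x) + rho/2*||x - z_down||^2
        x_i = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                              A[i].T @ b[i] + rho * z_down[i])
        lam_is = rho * (z_down[i] - x_i)  # = rho(x_s - x_i) - lam_si
        msgs.append(x_i - lam_is / rho)   # client i -> server
    x_s = np.mean(msgs, axis=0)
    lam_si = [rho * (msgs[i] - x_s) for i in range(m)]

print(np.linalg.norm(x_s - x_star))  # approaches 0
\end{verbatim}
Note that the server recovers $\boldsymbol{\lambda}_{s|i}^{r+1}$ directly from the received message, since $\rho(\boldsymbol{x}_i^{r+1} - \boldsymbol{\lambda}_{i|s}^{r+1}/\rho - \boldsymbol{x}_s^{r+1}) = \rho(\boldsymbol{x}_i^{r+1} - \boldsymbol{x}_s^{r+1}) - \boldsymbol{\lambda}_{i|s}^{r+1}$, which is exactly the single-variable property discussed above.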
\vspace{-2mm}
\subsection{(Inexact) FedSplit}
\vspace{-1mm}
\noindent \textbf{Updating procedure}: Recently, the authors of \cite{Pathak2021} applied Peaceman-Rachford splitting to solve the special problem (\ref{equ:optiFed}). The resulting update expressions at iteration $r$ can be summarised as follows:
\begin{align}
&\hspace{-1mm} \textrm{clients}\hspace{-1mm}\left\{ \hspace{-2mm}\begin{array}{l}
\hspace{-0mm}\boldsymbol{x}_i^{r+1} \hspace{-1mm}=\hspace{-1mm} \arg\min_{\boldsymbol{x}_i} \hspace{-1mm} \Big[f_i(\boldsymbol{x}_i) \hspace{-0.6mm}+\hspace{-0.6mm} \frac{1}{2\gamma}\|\boldsymbol{x}_i \hspace{-0.6mm} - \hspace{-0.6mm} \boldsymbol{z}_{s|i}^r \|^2 \Big] \\
\hspace{0mm} \boldsymbol{z}_{i|s}^{r+1} = 2\boldsymbol{x}_i^{r+1} - \boldsymbol{z}_{s|i}^{r} \end{array}\right. \label{equ:client_update_split} \\
&\hspace{-1mm} \textrm{server} \hspace{-1mm}\left\{ \hspace{-2mm}\begin{array}{l}
\hspace{-0mm}\boldsymbol{x}_s^{r+1} \hspace{-0.7mm}=\hspace{-0.7mm} \frac{1}{m}\sum_{i=1}^m \boldsymbol{z}_{i|s}^{r+1} \\
\boldsymbol{z}_{s|i}^{r+1} = 2\boldsymbol{x}_s^{r+1} - \boldsymbol{z}_{i|s}^{r+1} \end{array} \right. \hspace{-2mm}, \label{equ:server_update_split}
\end{align}
where the parameter $\gamma>0$, and $\{\boldsymbol{z}_{i|s}, \boldsymbol{z}_{s|i}\}$ are the auxiliary variables introduced in FedSplit. It is noted again that the clients only need to send $\{\boldsymbol{z}_{i|s}\}$ to the server for parameter updating while the server only needs to send $\boldsymbol{z}_{s|i}$ to client $i$, which is in line with that of PDMM.
\noindent \textbf{On the equivalence between PDMM and FedSplit}: We now briefly show that the iterates (\ref{equ:client_update})-(\ref{equ:server_update}) of PDMM reduce to (\ref{equ:client_update_split})-(\ref{equ:server_update_split}) under a proper hyper-parameter setup and reformulation. Specifically, by letting $\rho={1/\gamma}$, $\boldsymbol{z}_{i|s} = \boldsymbol{x}_{i} - \gamma \boldsymbol{\lambda}_{i|s} $, and $\boldsymbol{z}_{s|i} = \boldsymbol{x}_{s} - \gamma \boldsymbol{\lambda}_{s|i} $ in (\ref{equ:client_update})-(\ref{equ:server_update}), one can easily observe that the resulting expressions are identical to (\ref{equ:client_update_split})-(\ref{equ:server_update_split}). The equivalence between PDMM and FedSplit is due to the fact that both methods are based on Peaceman-Rachford splitting (see \cite{Sherson17PDMM} for more details about PDMM). However, PDMM is more general than FedSplit since it can also be applied to decentralised networks.
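The equivalence can also be checked numerically. The sketch below (an illustration, not part of the original derivation) runs both sets of iterates side by side for quadratic client losses and verifies that the server estimates coincide up to round-off.
\begin{verbatim}
# Numerical check that PDMM and FedSplit coincide under
# rho = 1/gamma and z = x - gamma*lambda (quadratic clients).
import numpy as np

rng = np.random.default_rng(1)
m, d, n = 3, 4, 12
gamma = 0.5
rho = 1.0 / gamma
A = [rng.standard_normal((n, d)) for _ in range(m)]
b = [rng.standard_normal(n) for _ in range(m)]

def prox(i, v, t):
    # argmin_x f_i(x) + 1/(2t)*||x - v||^2, closed form
    return np.linalg.solve(A[i].T @ A[i] + np.eye(d) / t,
                           A[i].T @ b[i] + v / t)

xs_f = np.zeros(d); z_si = [np.zeros(d) for _ in range(m)]   # FedSplit
xs_p = np.zeros(d); lam_si = [np.zeros(d) for _ in range(m)] # PDMM

for r in range(50):
    # FedSplit round
    z_is = [2 * prox(i, z_si[i], gamma) - z_si[i] for i in range(m)]
    xs_f = np.mean(z_is, axis=0)
    z_si = [2 * xs_f - z_is[i] for i in range(m)]
    # PDMM round
    x_i = [prox(i, xs_p - lam_si[i] / rho, 1.0 / rho) for i in range(m)]
    lam_is = [rho * (xs_p - x_i[i]) - lam_si[i] for i in range(m)]
    xs_p = np.mean([x_i[i] - lam_is[i] / rho for i in range(m)], axis=0)
    lam_si = [rho * (x_i[i] - xs_p) - lam_is[i] for i in range(m)]

print(np.linalg.norm(xs_f - xs_p))  # ~1e-15: identical trajectories
\end{verbatim}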
\noindent \textbf{Inexact iterates}: In practice, it might be difficult or expensive to obtain a closed-form solution for $\boldsymbol{x}_i^{r+1}$ in (\ref{equ:client_update_split}) due to the complexity of $f_i(\boldsymbol{x}_i)$. A common practice is to conduct an inexact computation based on gradient descent.
The authors of \cite{Pathak2021} considered simplifying the minimisation problem in (\ref{equ:client_update_split}) by performing $K$ steps of consecutive gradient descent operations for each client $i$ at iteration $r$ to obtain a sequence of $K$ estimates: $\{\boldsymbol{x}_i^{r,k}| k=1, \ldots, K\}$. By starting with $\boldsymbol{x}_i^{r, k=0}= \boldsymbol{z}_{s|i}^r$, the estimate $\boldsymbol{x}_i^{r, k+1}$ at step $k$ of iteration $r$ is computed as
\begin{align}
&\boldsymbol{x}_i^{r, k+1} = \boldsymbol{x}_i^{r, k} - \eta \nabla h_i^{r}( \boldsymbol{x}_i^{r, k}) \;\; 0\leq k <K,
\label{equ:gradient_xi_fedsplit}
\end{align}
where $\eta$ is the stepsize, and the function $h_i^{r}(\boldsymbol{x}_i)$ at iteration $r$ is defined to be
\begin{align}
&h_i^{r}( \boldsymbol{x}_i) = f_i(\boldsymbol{x}_i) + \frac{1}{2\gamma}\| \boldsymbol{x}_i -\boldsymbol{z}_{s|i}^r \|^2. \label{equ:f_i_approximate_fedsplit}
\end{align}
We note that the initialisation $\boldsymbol{x}_i^{r, k=0}= \boldsymbol{z}_{s|i}^r$ for the set of $K$ steps within each iteration is a poor choice, especially for finite $K$ or a small $\rho$ value. From the equivalence analysis of PDMM and FedSplit above, we notice that $\boldsymbol{x}_i^{r, k=0}= \boldsymbol{z}_{s|i}^r = \boldsymbol{x}_s^{r} - \boldsymbol{\lambda}_{s|i}^{r}/\rho$. That is, $\boldsymbol{z}_{s|i}^r$ is a combination of the primal and dual variables. A good initialisation of $\boldsymbol{x}_i^{r, k=0}$ should not include the dual variable $\boldsymbol{\lambda}_{s|i}^r$: in general, the optimal solution $\boldsymbol{\lambda}_{s|i}^{\ast}$ of the dual variable $\boldsymbol{\lambda}_{s|i}$ is nonzero, and even the special initialisation $\boldsymbol{\lambda}_{s|i}^{r=0} = 0$ does not keep $\boldsymbol{\lambda}_{s|i}^r$ at zero for iterations $r>0$. The component $\boldsymbol{\lambda}_{s|i}^{r}/\rho$ therefore makes the initialisation $\boldsymbol{x}_i^{r, k=0} = \boldsymbol{x}_s^{r} - \boldsymbol{\lambda}_{s|i}^{r}/\rho$ less effective than one without the dual variable, and a small $\rho$ value amplifies the impact of $\boldsymbol{\lambda}_{s|i}^{r}$. There are different ways to correct the improper initialisation of Inexact FedSplit, depending on how the estimates $\{\boldsymbol{x}_i^{r, k=0}\}$ are chosen. See the next section for the two versions of Inexact PDMM.
\begin{figure}[t!]
\centering
\includegraphics[width=60mm]{fedsplit_correction_pro.eps}
\vspace*{-0.2cm}
\caption{\footnotesize{ Plots of the optimality gap $F(\boldsymbol{x}_s^r) - F^{\ast}$ versus the iteration number $r$ for Inexact FedSplit applied to a least-square problem over a network of 25 clients and one server, where $F(\boldsymbol{x}_s^r) =\sum_{i=1}^m f_i(\boldsymbol{x}_s^r)$ and $F^{\ast}$ denotes the minimum functional value. See Subsection~\ref{subsec:least_square} for more details about the problem. }}
\label{fig:FedSplit}
\vspace{-0.4cm}
\end{figure}
A simple evaluation of Inexact FedSplit was conducted on a least-squares problem. As shown in Fig.~\ref{fig:FedSplit}, when the step number $K$ is finite (e.g., $K=1, 3$), Inexact FedSplit does not converge to the optimal solution due to the improper initialisation $\boldsymbol{x}_i^{r, k=0}= \boldsymbol{z}_{s|i}^r$. If, on the other hand, client $i$ initialises $\boldsymbol{x}_{i}^{r, k=0}$ to be $\boldsymbol{x}_{i}^{r, k=0}= \boldsymbol{x}_s^{r}$ at each iteration $r$, the method converges for both $K=1$ and $K=3$.
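The following sketch (least-squares clients over synthetic data; the sizes and stepsize are illustrative, not those of Fig.~\ref{fig:FedSplit}) reproduces this behaviour by contrasting the two initialisations of the $K$ inner gradient steps.
\begin{verbatim}
# Inexact FedSplit with K inner gradient steps: the original
# initialisation x = z_{s|i} stalls; x = x_s converges.
import numpy as np

rng = np.random.default_rng(2)
m, d, n = 5, 8, 30
gamma, eta, K = 1.0, 1e-2, 3
A = [rng.standard_normal((n, d)) for _ in range(m)]
b = [rng.standard_normal(n) for _ in range(m)]
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(A[i].T @ b[i] for i in range(m)))
F = lambda x: sum(0.5 * np.linalg.norm(A[i] @ x - b[i]) ** 2
                  for i in range(m))

def run(init):
    x_s = np.zeros(d)
    z_si = [np.zeros(d) for _ in range(m)]
    for r in range(400):
        z_is = []
        for i in range(m):
            x = z_si[i].copy() if init == 'z' else x_s.copy()
            for _ in range(K):  # K gradient steps on h_i
                g = A[i].T @ (A[i] @ x - b[i]) + (x - z_si[i]) / gamma
                x = x - eta * g
            z_is.append(2 * x - z_si[i])
        x_s = np.mean(z_is, axis=0)
        z_si = [2 * x_s - z_is[i] for i in range(m)]
    return F(x_s) - F(x_star)

print(run('z'), run('xs'))  # 'z' typically stalls at a nonzero gap
\end{verbatim}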
\textbf{Convergence bounds of Inexact FedSplit}: We note that the convergence bounds derived in \cite{Pathak2021} for Inexact FedSplit are not tight. Suppose all the client functions are strongly convex and have Lipschitz continuous gradients. Assume that at each iteration $r$, the error $\|\boldsymbol{x}_i^{r, k=K} - \boldsymbol{x}_i^{r, k=\infty}\|$ for each client is always upper-bounded by a scalar $b$. With proper setup for $\gamma$ in (\ref{equ:client_update_split}) and (\ref{equ:gradient_xi_fedsplit}), it is shown in \cite{Pathak2021} that the error $\|\boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star} \|$, $r\geq 1$, is upper bounded by
\begin{align}
&\|\boldsymbol{x}_s^{r+1} \hspace{-0.7mm} - \hspace{-0.7mm} \boldsymbol{x}_s^{\star} \| \leq \left(1 \hspace{-0.7mm}- \hspace{-0.7mm} \frac{2}{\sqrt{\kappa} +1}\right)^r \hspace{-0.7mm} \frac{\| \boldsymbol{x}_s^{0} \hspace{-0.7mm}- \hspace{-0.7mm} \boldsymbol{x}_s^{\star} \|}{\sqrt{m}} \hspace{-0.7mm}+ \hspace{-0.7mm} (\sqrt{\kappa}+1)b, \nonumber
\end{align}
where the parameter $\kappa>0$ is determined by the properties (e.g., $L$, $\mu$ in (\ref{equ:gradLips2})-(\ref{equ:muStrong})) of the client functions. It is clear that the scalar $b$ is a loose offset for quantifying the error introduced by the gradient descent operations in Inexact FedSplit. The convergence results in Fig.~\ref{fig:FedSplit} indicate that Inexact FedSplit may not even converge for small $K$, which can be explained by a large offset $b$.
\vspace{-3mm}
\section{Inexact PDMM and its comparison to SCAFFOLD}
\vspace{-1mm}
\label{sec:GPDMM}
In this section, we first present the two versions of Inexact PDMM, namely GPDMM and AGPDMM. In particular, GPDMM is designed so that the server and the clients each transmit one variable to the other per iteration. To accelerate the convergence of GPDMM, AGPDMM requires the server to transmit two variables to each client per iteration. After that, we investigate the similarity between AGPDMM and SCAFFOLD. We show that when the number $K$ of gradient steps at the client side per iteration is set to $K=1$, both AGPDMM and SCAFFOLD reduce to vanilla gradient descent under proper parameter setups. As will be discussed later, SCAFFOLD requires both the server and the clients to transmit two variables to each other per iteration.
\vspace{-3mm}
\subsection{GPDMM by sending one variable from server to each client}
\vspace{-1mm}
\begin{algorithm}[tb]
\caption{ GPDMM for a centralised network}
\label{GCFOLD}
\begin{algorithmic}[1]
\STATE {\bfseries Init.:}$\{\boldsymbol{x}_i^{r=0, K}\hspace{-0.7mm}=\hspace{-0.7mm}\boldsymbol{x}_s^{1}\}$, $\{\hspace{-0.7mm}\boldsymbol{\lambda}_{s|i}^{1}\hspace{-0.7mm}=\hspace{-0.7mm}0\}$, $\eta$, $\rho=\frac{1}{K\eta}$
\STATE For each iteration $r=1,\ldots, R$ do
\STATE $\;$ Server $s$ transmits $\boldsymbol{x}_s^{r} - \boldsymbol{\lambda}_{s|i}^r/\rho$ to each client $i$
\STATE $\;$ On client $i$ in parallel do
\STATE $\;\;\;$ Init.: $\boldsymbol{x}_i^{r,k=0} = \boldsymbol{x}_i^{r-1, K}$
\STATE $\;\;\;$ For $k=0,\ldots, K-1$ do
\STATE $\;\;\;\;\;$ $\boldsymbol{x}_i^{r, k+1} \hspace{-0.7mm}= \hspace{-0.7mm} \boldsymbol{x}_i^{r, k} \hspace{-0.8mm}-\hspace{-0.8mm} \frac{1}{1/\eta+\rho} \big[\nabla f_i(\boldsymbol{x}_i^{r, k}) \hspace{-0.7mm}+\hspace{-0.7mm} \rho(\boldsymbol{x}_i^{r, k} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_s^{r}) \hspace{-0.7mm}+\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r}\big] $
\STATE $\;\;\;$ End for
\STATE $\;\;\;$ $\boldsymbol{\lambda}_{i|s}^{r+1} = \rho (\boldsymbol{x}_s^{r} \hspace{-0.7mm}-\hspace{-0.7mm} \bar{\boldsymbol{x}}_i^{r, K} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r} $ where $ \bar{\boldsymbol{x}}_i^{r, K} \hspace{-0.7mm}=\hspace{-0.7mm} \frac{1}{K}\sum_{k=1}^K\boldsymbol{x}_{i}^{r,k}$
\STATE $\;\;\;$ client $i$ transmits $ \bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{\lambda}_{i|s}^{r+1}/\rho $ to server $s$
\STATE $\;$ End on client
\STATE $\;$ $\boldsymbol{x}_s^{r+1} \hspace{-0.7mm}= \hspace{-0.7mm} \frac{1}{m}\sum_{i=1}^m (\bar{\boldsymbol{x}}_i^{r, K} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1}/\rho ) $
\STATE $\;$ $\boldsymbol{\lambda}_{s|i}^{r+1} = \rho ( \bar{\boldsymbol{x}}_i^{r, K} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_s^{r+1} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1} $
\STATE End for
\end{algorithmic}
\end{algorithm}
To correct the convergence issue of Inexact FedSplit, GPDMM is designed to avoid using the quantity $\boldsymbol{x}_s- \boldsymbol{\lambda}_{s|i}/\rho$ as the initialisation when conducting approximate optimisation at the client side. Specifically, at iteration $r$, client $i$ sets $\boldsymbol{x}_i^{r, k=0}=\boldsymbol{x}_i^{r-1, K}$ and then performs $K$ steps of gradient-based approximate optimisation to obtain a sequence of estimates $\{\boldsymbol{x}_i^{r,1},\ldots, \boldsymbol{x}_i^{r,K}\}$. The estimate $\boldsymbol{x}_i^{r,k+1}$ at step $k$ is computed as
\begin{align}
\hspace{-2mm}\boldsymbol{x}_i^{r, k+1} \hspace{-1mm}&=\hspace{-1mm} \arg\min_{\boldsymbol{x}_i} \hspace{-1mm} \Big[f_i^{r,k}(\boldsymbol{x}_i) \hspace{-0.6mm}+\hspace{-0.6mm} \frac{\rho}{2}\|\boldsymbol{x}_i \hspace{-0.6mm} - \hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} +\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r}/\rho \|^2 \Big] \nonumber \\
\hspace{-2mm}&\hspace{-2mm}=\boldsymbol{x}_i^{r, k} \hspace{-0.7mm} - \hspace{-0.7mm} \frac{1}{1/\eta+\rho}\big[\nabla f_i(\boldsymbol{x}_i^{r, k}) \hspace{-0.7mm}+\hspace{-0.7mm} \rho(\boldsymbol{x}_i^{r, k} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_s^{r}) \hspace{-0.7mm}+\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^r \big], \label{equ:gradient_xi}
\end{align}
where $f_i^{r,k}(\boldsymbol{x}_i)$ is a quadratic approximation of $f_i(\boldsymbol{x}_i)$ at $\boldsymbol{x}_i^{r,k}$:
\begin{align}
\hspace{-3mm}f_i^{r,k}(\boldsymbol{x}_i) \hspace{-0.1mm} =& \hspace{-0.1mm} f_i(\boldsymbol{x}_i^{r,k}) \hspace{-0.6mm}+\hspace{-0.6mm} (\boldsymbol{x}_i \hspace{-0.6mm}-\hspace{-0.6mm} \boldsymbol{x}_i^{r,k})^T \nabla f_i(\boldsymbol{x}_i^{r,k}) \hspace{-0.6mm} \nonumber \\
&+\hspace{-0.7mm} 1/(2\eta) \|\boldsymbol{x}_i \hspace{-0.6mm}-\hspace{-0.6mm} \boldsymbol{x}_i^{r,k} \|^2, \label{equ:f_i_approximate}
\end{align}
where $1/L\geq \eta>0$ is the gradient stepsize. The optimality condition for $\boldsymbol{x}_i^{r, k+1}$ in (\ref{equ:gradient_xi}) can be rewritten as
\begin{align}
\nabla f_i(\boldsymbol{x}_i^{r, k}) =& 1/\eta (\boldsymbol{x}_i^{r, k} \hspace{-0.6mm}-\hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \nonumber \\
&- \rho (\boldsymbol{x}_i^{r,k+1} \hspace{-0.6mm} - \hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} +\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r}/\rho).
\label{equ:opti_r}
\end{align}
After finishing the computation for $\boldsymbol{x}_i^{r,K}$, client $i$ then sets $\boldsymbol{\lambda}_{i|s}^{r+1}$ to be
\begin{align}
\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.6mm}=\hspace{-0.6mm} \rho\Big(\boldsymbol{x}_s^r \hspace{-0.6mm}-\hspace{-0.6mm}\frac{1}{K} \sum_{k=1}^K \boldsymbol{x}_i^{r,k}\Big)\hspace{-0.6mm}-\hspace{-0.6mm}\boldsymbol{\lambda}_{s|i}^{r}, \label{equ:lambda_update_client}
\end{align}
where, to facilitate the convergence analysis, the average estimate $\frac{1}{K} \sum_{k=1}^K \boldsymbol{x}_i^{r,k}$ is used for computing $\boldsymbol{\lambda}_{i|s}^{r+1}$ instead of the final estimate $\boldsymbol{x}_i^{r,K}$. See the remark below for the detailed motivation.
\begin{remark}
\vspace{-2mm}
We note that the computation for $\boldsymbol{\lambda}_{i|s}^{r+1}$ in (\ref{equ:lambda_update_client}) is not the optimal setup from the viewpoint of fast convergence speed. One should replace the average estimate $\frac{1}{K} \sum_{k=1}^K \boldsymbol{x}_i^{r,k}$ in (\ref{equ:lambda_update_client}) with the most recent estimate $\boldsymbol{x}_i^{r,K}$ when computing $\boldsymbol{\lambda}_{i|s}^{r+1}$, which can be represented as
\begin{align}
\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.6mm}=\hspace{-0.6mm} \rho\Big(\boldsymbol{x}_s^r \hspace{-0.6mm}-\hspace{-0.6mm} \boldsymbol{x}_i^{r,K}\Big)\hspace{-0.6mm}-\hspace{-0.6mm}\boldsymbol{\lambda}_{s|i}^{r}. \label{equ:lambda_GPDMM}
\end{align}
This is because the most recent estimate $\boldsymbol{x}_{i}^{r, K}$ provides a more accurate approximation of the optimal solution which minimises $f_i(\boldsymbol{x}_i) \hspace{-0.6mm}+\hspace{-0.6mm} \frac{\rho}{2}\|\boldsymbol{x}_i \hspace{-0.6mm} - \hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} +\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r}/\rho \|^2$ in (\ref{equ:client_update}) than the average estimate.
As will be analysed in the next section, the average estimate $\frac{1}{K} \sum_{k=1}^K \boldsymbol{x}_i^{r,k}$ in (\ref{equ:lambda_update_client}) facilitates the convergence analysis. We leave the convergence analysis of the update expression (\ref{equ:lambda_GPDMM}) for future work.
\vspace{-2mm}
\end{remark}
At the server side, once the quantities $\{\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{\lambda}_{i|s}^{r+1}/\rho \}$ are received at iteration $r$, the estimates $\boldsymbol{x}_s^{r+1}$ and $\{\boldsymbol{\lambda}_{s|i}^{r+1}\}$ can be computed by following (\ref{equ:server_update}) with $\boldsymbol{x}_i^{r+1}$ replaced by $\bar{\boldsymbol{x}}_i^{r, K}$. By inspection of (\ref{equ:server_update}), it is not difficult to show that
\begin{align}
\sum_{i=1}^m\boldsymbol{\lambda}_{s|i}^{r+1} = 0, \label{equ:s_lambda_equality}
\end{align}
which always holds no matter how Inexact PDMM is performed at the client side. It is noted that the above equation is in line with one of the KKT conditions in (\ref{equ:KKT3}). Equ.~(\ref{equ:s_lambda_equality}) will be used in the convergence analysis later on. See Alg.~1 for a brief summary of GPDMM, where $\rho$ is set to $\rho=1/(K\eta)$, inspired by the update expressions of SCAFFOLD as discussed later on.
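For completeness, (\ref{equ:s_lambda_equality}) can be verified in one line: summing the server's dual update over all clients and substituting the expression for $\boldsymbol{x}_s^{r+1}$ gives
\begin{align}
\sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1} &= \rho\sum_{i=1}^m \bar{\boldsymbol{x}}_i^{r,K} - m\rho\,\boldsymbol{x}_s^{r+1} - \sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{r+1} \nonumber \\
&= \sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{r+1} - \sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{r+1} = 0, \nonumber
\end{align}
since $m\rho\,\boldsymbol{x}_s^{r+1} = \rho\sum_{i=1}^m \bar{\boldsymbol{x}}_i^{r,K} - \sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{r+1}$ by the primal update at the server.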
There are two differences between Inexact FedSplit and GPDMM. Firstly, at each inner step, GPDMM approximates only $f_i(\boldsymbol{x}_i)$ by (\ref{equ:f_i_approximate}), while Inexact FedSplit approximates the whole sum $h_i^{r}( \boldsymbol{x}_i) = f_i(\boldsymbol{x}_i) + \frac{1}{2\gamma}\| \boldsymbol{x}_i -\boldsymbol{z}_{s|i}^r \|^2$ in (\ref{equ:f_i_approximate_fedsplit}) by a quadratic function.
Secondly, Inexact FedSplit initialises $\boldsymbol{x}_i^{r, k=0}$ with the starting point $\boldsymbol{z}_{s|i}^r = \boldsymbol{x}_s^r - \boldsymbol{\lambda}_{s|i}^r/\rho $, while GPDMM initialises $\boldsymbol{x}_i^{r, k=0}$ with the starting point $\boldsymbol{x}_i^{r-1, K}$ from the last iteration. As concluded in the last section, $\boldsymbol{z}_{s|i}$ involves both the primal and dual variables and is thus not suitable for initialisation.
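For concreteness, one GPDMM client round (the inner loop and dual update of Alg.~1) can be sketched as follows, assuming a quadratic $f_i$ so that the gradient is explicit; the function name and signature are illustrative only.
\begin{verbatim}
# One GPDMM client round for f_i(x) = 0.5*||Ai x - bi||^2.
import numpy as np

def gpdmm_client_round(Ai, bi, x_prev_K, x_s, lam_si, eta, K):
    rho = 1.0 / (K * eta)
    x = x_prev_K.copy()        # init at x_i^{r-1,K}, not z_{s|i}
    x_sum = np.zeros_like(x)
    for _ in range(K):         # K inner quadratic-approximation steps
        grad = Ai.T @ (Ai @ x - bi)
        x = x - (grad + rho * (x - x_s) + lam_si) / (1.0 / eta + rho)
        x_sum += x
    x_bar = x_sum / K          # average used in the dual update
    lam_is = rho * (x_s - x_bar) - lam_si
    return x, x_bar, x_bar - lam_is / rho  # last: message to server
\end{verbatim}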
\begin{algorithm}[tb]
\caption{ AGPDMM for a centralised network}
\label{AGPDMM}
\begin{algorithmic}[1]
\STATE {\bfseries Init.:} $\boldsymbol{x}_s^{1}$, $\{\boldsymbol{\lambda}_{s|i}^{1}\hspace{-0.7mm}=\hspace{-0.7mm}0\}$, $\eta$, $\rho=\frac{1}{K\eta}$
\STATE For each iteration $r=1,\ldots, R$ do
\STATE $\;$ Server $s$ transmits $\boldsymbol{x}_s^r$ and $\boldsymbol{\lambda}_{s|i}^r$ to each client $i$
\STATE $\;$ On client $i$ in parallel do
\STATE $\;\;\;$ Init.: $\boldsymbol{x}_i^{r,k=0} = \boldsymbol{x}_s^{r}$
\STATE $\;\;\;$ For $k=0,\ldots, K-1$ do
\STATE $\;\;\;\;\;$ $\boldsymbol{x}_i^{r, k+1} \hspace{-0.7mm}= \hspace{-0.7mm} \boldsymbol{x}_i^{r, k} \hspace{-0.8mm}-\hspace{-0.8mm} \frac{1}{1/\eta+\rho}\big[\nabla f_i(\boldsymbol{x}_i^{r, k}) \hspace{-0.7mm}+\hspace{-0.7mm} \rho(\boldsymbol{x}_i^{r, k} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_s^{r}) \hspace{-0.7mm}+\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r}\big] $
\STATE $\;\;\;$ End for
\STATE $\;\;\;$ $\boldsymbol{\lambda}_{i|s}^{r+1} = \rho (\boldsymbol{x}_s^{r} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i^{r, K} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^r$
\STATE $\;\;\;$ client $i$ transmits $\boldsymbol{x}_i^{r, K} - \boldsymbol{\lambda}_{i|s}^{r+1}/\rho $ to server $s$
\STATE $\;$ End on client
\STATE $\;$ $\boldsymbol{x}_s^{r+1} \hspace{-0.7mm}= \hspace{-0.7mm} \frac{1}{m}\sum_{i=1}^m (\boldsymbol{x}_i^{r, K} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1}/\rho ) $
\STATE $\;$ $\boldsymbol{\lambda}_{s|i}^{r+1} = \rho (\boldsymbol{x}_i^{r, K} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_s^{r+1} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1}$
\STATE End for
\end{algorithmic}
\vspace{-0.5mm}
\end{algorithm}
\subsection{AGPDMM by sending two variables from server to each client}
\label{subsec:AGPDMM}
\noindent \textbf{Updating and transmission procedure}: We note that the convergence speed of GPDMM can be accelerated by a slight modification of its update expressions. For both PDMM and GPDMM, the server aggregates information from all the clients at each iteration, so at iteration $r$ the global estimate $\boldsymbol{x}_s^{r}$ should be more accurate than each individual estimate $\boldsymbol{x}_i^{r-1, K}$. Therefore, it is preferable for each client $i$ to employ the global estimate $\boldsymbol{x}_s^{r}$ instead of $\boldsymbol{x}_i^{r-1, K}$ when conducting the $K$ steps of gradient-based approximate optimisation at iteration $r$. That is, the quantity $\boldsymbol{x}_i^{r,k=0}$ should be initialised as $\boldsymbol{x}_i^{r,k=0} = \boldsymbol{x}_s^{r}$ to achieve a fast convergence speed. The computation for $\boldsymbol{\lambda}_{i|s}^{r+1}$ follows (\ref{equ:lambda_GPDMM}) instead of (\ref{equ:lambda_update_client}) to further accelerate convergence. Alg.~2 summarises the updating procedure of AGPDMM, which is obtained by following the above guideline.
We now briefly discuss the variables that need to be transmitted from the server to the clients. At iteration $r$, AGPDMM has to send both $\boldsymbol{x}_s^{r}$ and $\boldsymbol{\lambda}_{s|i}^{r}$ to each client $i$ to allow for the parameter update, while GPDMM only needs to send the combination $\boldsymbol{x}_s^{r}-\boldsymbol{\lambda}_{s|i}^{r}/\rho$ to client $i$. The two versions of Inexact PDMM thus exhibit a trade-off between convergence speed and transmission bandwidth: AGPDMM accelerates the convergence of GPDMM at the cost of transmitting twice as many parameters from the server to each client per iteration. In practice, one can select the version of Inexact PDMM that matches the requirements of the considered application.
\noindent \textbf{Performance of AGPDMM when $K=1$}: We will show in the following that under proper parameter selection, the update expression for AGPDMM when $K=1$ reduces to the vanilla gradient descent operation. Specifically, $\boldsymbol{x}_{s}^{r+1}$ at iteration $r$ can be represented as
\begin{align}
\hspace{-0mm}\boldsymbol{x}_{s}^{r+1} &= \frac{1}{m}\sum_{i=1}^m (\boldsymbol{x}_i^{r, K=1} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1}/\rho ) \nonumber \\
&\hspace{0mm}\stackrel{(a)}{=} \frac{1}{m}\sum_{i=1}^m (\boldsymbol{x}_s^{r} - \frac{2}{1/\eta+\rho}\big(\nabla f_i(\boldsymbol{x}_s^{r}) +\boldsymbol{\lambda}_{s|i}^{r}\big) \hspace{-0.7mm} +\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^r/\rho ) \nonumber \\
&\hspace{0mm}\stackrel{(b)}{=} \boldsymbol{x}_s^{r} - \frac{2}{1/\eta+\rho} \frac{1}{m}\sum_{i=1}^m \nabla f_i(\boldsymbol{x}_s^{r}) \label{equ:xs_AGPDMM_K1_1} \\
&\hspace{0mm}\stackrel{\rho=\frac{1}{\eta}}{=} \boldsymbol{x}_s^{r} - \eta \frac{1}{m}\sum_{i=1}^m \nabla f_i(\boldsymbol{x}_s^{r}), \label{equ:xs_AGPDMM_K1}
\end{align}
where step $(a)$ utilises the expressions $\boldsymbol{\lambda}_{i|s}^{r+1} = \rho (\boldsymbol{x}_s^{r} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i^{r, K=1} )
\hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^r$ and $\boldsymbol{x}_i^{r, K=1}=\boldsymbol{x}_s^{r} - \frac{1}{1/\eta+\rho}\big[\nabla f_i(\boldsymbol{x}_s^{r}) +\boldsymbol{\lambda}_{s|i}^{r}\big]$. Step $(b)$ employs the equality (\ref{equ:s_lambda_equality}).
It is clear from (\ref{equ:xs_AGPDMM_K1_1}) that the update expression for $\boldsymbol{x}_{s}^{r+1}$ is actually the vanilla gradient descent expression over the function $\frac{1}{m}\sum_{i=1}^m f_i(\boldsymbol{x})$ at the estimate $\boldsymbol{x}_s^r$. The estimates $\{\boldsymbol{\lambda}_{s|i}^r \}$ for the dual variables have no effect on the computation of $\boldsymbol{x}_{s}^{r+1}$. The parameter $\rho$ only affects the stepsize computation. When $\rho = \frac{1}{\eta}$, the stepsize becomes $\eta$ as indicated by (\ref{equ:xs_AGPDMM_K1}).
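A quick numerical check of (\ref{equ:xs_AGPDMM_K1}) is sketched below (quadratic clients; the sizes and the random dual initialisation are illustrative, with $\sum_i\boldsymbol{\lambda}_{s|i}=0$ enforced as in (\ref{equ:s_lambda_equality})).
\begin{verbatim}
# Check: one AGPDMM round with K = 1, rho = 1/eta equals a vanilla
# gradient step on the average loss (quadratic clients).
import numpy as np

rng = np.random.default_rng(3)
m, d, n, eta = 4, 6, 20, 1e-3
rho = 1.0 / eta
A = [rng.standard_normal((n, d)) for _ in range(m)]
b = [rng.standard_normal(n) for _ in range(m)]
x_s = rng.standard_normal(d)
lam_si = [rng.standard_normal(d) for _ in range(m - 1)]
lam_si.append(-sum(lam_si))          # enforce sum_i lam_{s|i} = 0

grads = [A[i].T @ (A[i] @ x_s - b[i]) for i in range(m)]
x_i = [x_s - (grads[i] + lam_si[i]) / (1.0 / eta + rho)
       for i in range(m)]
lam_is = [rho * (x_s - x_i[i]) - lam_si[i] for i in range(m)]
x_s_new = np.mean([x_i[i] - lam_is[i] / rho for i in range(m)],
                  axis=0)
gd = x_s - eta * np.mean(grads, axis=0)
print(np.linalg.norm(x_s_new - gd))  # ~0
\end{verbatim}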
\begin{remark}
Alternatively, we can take Inexact FedSplit with the special initialisation $\{\boldsymbol{x}_{i}^{r,k=0}=\boldsymbol{x}_s^r \,|\, r\geq 0\}$ as a variant of AGPDMM. In this case, one can show that the estimate $\boldsymbol{x}_s^{r+1}$ when $K=1$ is given by
\begin{align}
\hspace{-0mm}\boldsymbol{x}_{s}^{r+1} & = \boldsymbol{x}_s^{r} - 2\eta \frac{1}{m}\sum_{i=1}^m \nabla f_i(\boldsymbol{x}_s^{r}).
\label{equ:xs_AGPDMM_var_K1}
\end{align}
It is seen that the stepsize in (\ref{equ:xs_AGPDMM_var_K1}) is $2\eta$, in comparison to the stepsize $\eta$ in (\ref{equ:xs_AGPDMM_K1}). This is because the quadratic term $\|\boldsymbol{x}_i -\boldsymbol{x}_s^{r} + \boldsymbol{\lambda}_{s|i}^{r}/\rho\|^2$ in (\ref{equ:client_update}) is treated differently in AGPDMM and its variant.
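One way to see this: with $\boldsymbol{x}_i^{r,0}=\boldsymbol{x}_s^r$ and $K=1$, the single inner step of (\ref{equ:gradient_xi_fedsplit}) gives $\boldsymbol{x}_i^{r,1} = \boldsymbol{x}_s^r - \eta[\nabla f_i(\boldsymbol{x}_s^r) + \boldsymbol{\lambda}_{s|i}^r]$ since $(\boldsymbol{x}_s^r - \boldsymbol{z}_{s|i}^r)/\gamma = \boldsymbol{\lambda}_{s|i}^r$, and therefore
\begin{align}
\boldsymbol{x}_s^{r+1} &= \frac{1}{m}\sum_{i=1}^m \big(2\boldsymbol{x}_i^{r,1} - \boldsymbol{z}_{s|i}^r\big) \nonumber \\
&= \boldsymbol{x}_s^{r} - \frac{2\eta}{m}\sum_{i=1}^m \nabla f_i(\boldsymbol{x}_s^{r}) - \Big(2\eta - \frac{1}{\rho}\Big)\frac{1}{m}\sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r}, \nonumber
\end{align}
where the last sum vanishes by (\ref{equ:s_lambda_equality}).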
\end{remark}
\vspace{-3mm}
\subsection{Comparison with SCAFFOLD}
\vspace{-1mm}
\noindent \textbf{Updating and transmission procedure of SCAFFOLD}: The recent work \cite{Karimireddy20SCAFFOLD} proposes SCAFFOLD for stochastic distributed optimisation over a centralised network. To make a fair comparison with Inexact PDMM, we present the update expressions of SCAFFOLD for solving (\ref{equ:optiFed}), which can be represented as
\begin{align}
&\hspace{-2mm} \textrm{clients}\hspace{-1mm}\left\{ \hspace{-2mm}\begin{array}{l}
\hspace{0mm}\boldsymbol{x}_{i}^{r, 0} = \boldsymbol{x}_{s}^{r} \\
\hspace{-0mm}\boldsymbol{x}_i^{r, k+1} \hspace{-1mm}=\hspace{-1mm} \boldsymbol{x}_i^{r, k} \hspace{-1mm}-\hspace{-1mm} \eta (\nabla f_i(\boldsymbol{x}_i^{r, k}) \hspace{-1mm}-\hspace{-1mm}\boldsymbol{c}_{i}^r \hspace{-1mm}+\hspace{-1mm}\boldsymbol{c}^r\hspace{-0.6mm}) \;\; k \hspace{-0.5mm} = |_{0}^{K-1} \\
\hspace{0mm}\boldsymbol{c}_{i}^{r+1} = \boldsymbol{c}_{i}^{r} - \boldsymbol{c}^{r} +\frac{1}{K\eta} (\boldsymbol{x}_s^{r} - \boldsymbol{x}_i^{r, K}) \end{array}\right. \hspace{-2.5mm} \label{equ:SCAFFOLD_client_update} \\
&\hspace{-1mm} \textrm{server} \hspace{-1mm}\left\{ \hspace{-2mm}\begin{array}{l}
\hspace{-0mm}\boldsymbol{x}_s^{r+1} \hspace{-0.7mm}=\hspace{-0.7mm} \boldsymbol{x}_s^{r} + \eta_g \frac{1}{m}\sum_{i=1}^m (\boldsymbol{x}_i^{r, K} - \boldsymbol{x}_s^{r}) \\
\hspace{0mm}\boldsymbol{c}^{r+1} = \boldsymbol{c}^{r} +\hspace{-0.7mm} \frac{1}{m} \sum_{i=1}^m (\boldsymbol{c}_{i}^{r+1}- \boldsymbol{c}_{i}^{r}) \end{array}\right., \hspace{-2mm} \label{equ:SCAFFOLD_server_update}
\end{align}
where all clients are included for information fusion at the server side per iteration, $k \hspace{-0.5mm} = |_{0}^{K-1}$ is a short notation for $k=0,\ldots, K-1$, and $(\eta, \eta_g)$ are the stepsizes. The parameters $\boldsymbol{c}$ and $\{\boldsymbol{c}_i\}$ are the so-called server and client control variates, which compensate for the functional heterogeneity across the different clients \cite{Karimireddy20SCAFFOLD}. From a high-level point of view, the control variates of SCAFFOLD play a similar role to the dual variables in (Inexact) PDMM.
We point out that in the computation of $\boldsymbol{c}_{i}^{r+1}$ in (\ref{equ:SCAFFOLD_client_update}), the variable difference $(\boldsymbol{x}_s^{r} - \boldsymbol{x}_i^{r, K})$ is scaled by the factor $\frac{1}{K\eta}$. In Alg.~1 and 2, the setup $\rho=\frac{1}{K\eta}$ is selected to ensure that the variable difference is also scaled by $\frac{1}{K\eta}$ in computing $\boldsymbol{\lambda}_{i|s}^{r+1}$.
From (\ref{equ:SCAFFOLD_client_update})-(\ref{equ:SCAFFOLD_server_update}), it is not difficult to conclude that at iteration $r$, the server needs to send the two variables $(\boldsymbol{x}_s^r, \boldsymbol{c}^r)$ to the clients to enable the parameter update. Each client $i$ needs to send the two variables $(\boldsymbol{x}_i^{r, K} - \boldsymbol{x}_s^{r}, \boldsymbol{c}_{i}^{r+1}- \boldsymbol{c}_{i}^{r})$ to the server for information fusion. In contrast, the two versions of Inexact PDMM require each client to transmit only one variable to the server per iteration. The transmission load from the server to the clients depends on which version of Inexact PDMM is used, as discussed earlier. As will be shown in the experiments, AGPDMM converges faster than SCAFFOLD when $K>1$.
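A deterministic sketch of one SCAFFOLD round in (\ref{equ:SCAFFOLD_client_update})-(\ref{equ:SCAFFOLD_server_update}) is given below for reference; quadratic clients are assumed so that the full gradient is explicit, and the function name is illustrative.
\begin{verbatim}
# One deterministic SCAFFOLD round; c, c_i are the control variates.
import numpy as np

def scaffold_round(A, b, x_s, c, c_i, eta, eta_g, K):
    m = len(A)
    x_K, c_new = [], []
    for i in range(m):
        x = x_s.copy()                      # x_i^{r,0} = x_s^r
        for _ in range(K):
            grad = A[i].T @ (A[i] @ x - b[i])
            x = x - eta * (grad - c_i[i] + c)
        x_K.append(x)
        c_new.append(c_i[i] - c + (x_s - x) / (K * eta))
    x_s_new = x_s + eta_g * np.mean([xk - x_s for xk in x_K], axis=0)
    c_next = c + np.mean([c_new[i] - c_i[i] for i in range(m)], axis=0)
    return x_s_new, c_next, c_new
\end{verbatim}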
\noindent \textbf{Performance of SCAFFOLD when $K=1$}: We now show that when $K=1$, the update expression for $\boldsymbol{x}_s^{r+1}$ in (\ref{equ:SCAFFOLD_server_update}) also reduces to vanilla gradient descent operation under proper parameter selection. Assume $\sum_{i=1}^m (\boldsymbol{c}_i^r- \boldsymbol{c}^r)\hspace{-0.6mm}=\hspace{-0.6mm}0$. It is immediate that
\begin{align}
\boldsymbol{x}_s^{r+1} \hspace{-0.7mm}= \hspace{-0.7mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} -\hspace{-0.7mm} \frac{\eta_g\eta}{m} \hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \nabla f_i(\boldsymbol{x}_s^{r}) \hspace{-0.7mm} \stackrel{\eta_g=1}{=}\hspace{-0.7mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} -\hspace{-0.7mm} \eta \frac{1}{m} \hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \nabla f_i(\boldsymbol{x}_s^{r}).\label{equ:xs_SCAFFOLD_K1_1}
\end{align}
One can also easily show that $\sum_{i=1}^m (\boldsymbol{c}_i^{r+1}\hspace{-0.6mm}-\hspace{-0.6mm} \boldsymbol{c}^{r+1}) =0 $ based on the assumption $\sum_{i=1}^m (\boldsymbol{c}_i^r- \boldsymbol{c}^r)\hspace{-0.6mm}=\hspace{-0.6mm}0$.
Note that the parameter $\eta_g$ only affects the overall stepsize of the vanilla gradient descent. When $\eta_g=1$, (\ref{equ:xs_SCAFFOLD_K1_1}) is identical to (\ref{equ:xs_AGPDMM_K1}).
To summarise, when $K=1$, both SCAFFOLD and AGPDMM reduce to the vanilla gradient descent operation under proper parameter selection. For SCAFFOLD, this additionally requires the initialisation $\sum_{i=1}^m (\boldsymbol{c}_i^0- \boldsymbol{c}^0)\hspace{-0.6mm}=\hspace{-0.6mm}0$. In this special case, the parameter $\rho$ in AGPDMM and $\eta_g$ in SCAFFOLD only affect the overall stepsize of the vanilla gradient descent, as discussed above.
\vspace{-3mm}
\section{Convergence Analysis of GPDMM}
\vspace{-1mm}
\label{sec:convergenceAnalysis}
\noindent \textbf{An inequality for each estimate $\boldsymbol{x}_i^{r, k+1}$ }: Using the fact that the client functions $\{f_i\}$ are (strongly) convex and have Lipschitz continuous gradients, we derive an inequality for $\boldsymbol{x}_i^{r, k+1}$ in (\ref{equ:gradient_xi}) at step $k$ of iteration $r$ in a lemma below:
\begin{lemma}
Let $(1/\eta) \geq L$ in the approximation function (\ref{equ:f_i_approximate}). Then for any $\boldsymbol{x}_i\in \mathbb{R}^{d}$ and $\theta \in[0,1]$, we have
\begin{align}
& \hspace{-3mm} f_i(\boldsymbol{x}_i) - f_i(\boldsymbol{x}_i^{r, k+1}) \nonumber \\
\geq & \hspace{-0.6mm} (\boldsymbol{x}_i \hspace{-0.6mm}-\hspace{-0.6mm} \boldsymbol{x}_i^{r, k+1})^T [ \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r}] \hspace{-0.6mm}+\hspace{-0.6mm} \frac{1}{2\eta}
\|\boldsymbol{x}_i - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
& - \hspace{-0.6mm} \frac{1/\eta - \theta\mu}{2}\|\boldsymbol{x}_i^{r,k}-\boldsymbol{x}_i \|^2 \hspace{-0.6mm} +\hspace{-0.6mm} \frac{1/\eta- L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \nonumber \\
&+ \frac{1-\theta}{2L}\| \nabla f_i(\boldsymbol{x}_i^{r,k})- \nabla f_i ( \boldsymbol{x}_i) \|^2, \hspace{-2mm}
\label{equ:primal_inequality_general}
\end{align}
where $\mu=0$ corresponds to the general convex case.
\label{lemma:primal_inequality_general}
\vspace{0mm}
\end{lemma}
\begin{proof}
See Appendix~\ref{appendix:lemma_ineq} for detailed derivation.
\end{proof}
\noindent\textbf{An inequality for all estimates $\{\boldsymbol{x}_i^{r, k}| k=1,\ldots, K \}_{i=1}^m$}:
Suppose $\{\boldsymbol{x}_s^{\star}=\boldsymbol{x}_i^{\star}\}_{i=1}^m$ together with $\{\boldsymbol{\lambda}_{i|s}^{\star} = - \boldsymbol{\lambda}_{s|i}^{\star}\}_{i=1}^m$ is an optimal solution satisfying (\ref{equ:KKT3}) by letting $\{\boldsymbol{\lambda}_{i|s}^{\star}=\boldsymbol{\delta}_i^{\star}\}_{i=1}^m$. We utilise Lemma~\ref{lemma:primal_inequality_general} to derive an inequality involving $\{\boldsymbol{x}_i^{r, k}| k=1,\ldots, K \}_{i=1}^m$ and the above optimal solution:
\begin{sloppypar}
\begin{lemma}
Suppose the estimates $\{\boldsymbol{x}_i^{r, k} \}$ are obtained by performing (\ref{equ:gradient_xi})-(\ref{equ:f_i_approximate}) under the condition that $1/\eta\geq L$. Let $\bar{\boldsymbol{x}}_i^{r, K}=\frac{1}{K}\sum_{k=1}^K \boldsymbol{x}_i^{r, k}$. Then
\begin{align}
& \sum_{i=1}^m \frac{1}{K}\sum_{k=0}^{K-1} \hspace{-0.6mm} \frac{1/\eta - \theta \mu}{2}\|\boldsymbol{x}_i^{r,k}-\boldsymbol{x}_i^{\star} \|^2 \hspace{-0.6mm} \nonumber \\
&+ \sum_{i=1}^m \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
\hspace{-2mm}&\geq \sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - (\bar{\boldsymbol{x}}_i^{r,K})^T \boldsymbol{\lambda}_{i|s}^{\star} - f_i(\boldsymbol{x}_i^{\star}) \nonumber \\
& \hspace{9mm} + \frac{1}{K}\sum_{k=0}^{K-1} \Big(\frac{1}{2\eta} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
\hspace{-3mm}&\hspace{9mm} +\hspace{-0.6mm} \frac{1/\eta - L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm} + \hspace{-0.7mm} \frac{1-\theta}{2L}\| \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm}\nonumber \\
\hspace{-3mm}& \hspace{9mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star} -\hspace{-0.6mm} (1/\eta)(\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{x}_i^{r, k}) \hspace{-0.6mm}\|^2 \Big) \nonumber \\
&\hspace{9mm} + \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big],
\label{equ:upper_bound_final}
\end{align}
where $1\geq \theta \geq 0$.
\label{lemma:twoBounds}
\vspace{0mm}
\end{lemma}
\end{sloppypar}
\begin{proof}
See Appendix~\ref{appendix:Lemma_upperbound} for the proof.
\end{proof}
Next we show that $\sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - (\bar{\boldsymbol{x}}_i^{r,K})^T \boldsymbol{\lambda}_{i|s}^{\star} - f_i(\boldsymbol{x}_i^{\star})\Big]$ in (\ref{equ:upper_bound_final}) is lower-bounded by zero in a lemma below:
\begin{lemma}
Suppose $\{\boldsymbol{x}_s^{\star}=\boldsymbol{x}_i^{\star}\}_{i=1}^m$ together with $\{\boldsymbol{\lambda}_{i|s}^{\star} = - \boldsymbol{\lambda}_{s|i}^{\star}\}_{i=1}^m$ is an optimal solution satisfying (\ref{equ:KKT3}) by letting $\{\boldsymbol{\lambda}_{i|s}^{\star}=\boldsymbol{\delta}_i^{\star}\}_{i=1}^m$.
For any $\{\boldsymbol{x}_i\in \mathbb{R}^d\}_{i=1}^m$,
\begin{align}
&\sum_{i=1}^m \Big[ f_i(\boldsymbol{x}_i ) - f_i(\boldsymbol{x}_i^{\star}) - \boldsymbol{x}_i^{T}\boldsymbol{\lambda}_{i|s}^{\star} \Big] \geq 0.
\label{equ:lowerbound}
\end{align}
\label{lemma:lower_bound}
\vspace{-0mm}
\end{lemma}
See Appendix~\ref{appendix:lemma_lowerbound} for the proof. Basically, (\ref{equ:lowerbound}) suggests that the RHS of (\ref{equ:upper_bound_final}) is always lower-bounded by zero. If needed, the quantity $\sum_{i=1}^m [ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - f_i(\boldsymbol{x}_i^{\star}) - (\bar{\boldsymbol{x}}_i^{r, K})^T\boldsymbol{\lambda}_{i|s}^{\star} ]$ can be ignored in (\ref{equ:upper_bound_final}) due to its nonnegativity.
\begin{figure*}[t!]
\centering
\includegraphics[width=120mm]{compare_quadratic_2m_fedave_pro.eps}
\psfrag{e}{$\eta$}
\vspace*{-0.2cm}
\caption{\footnotesize{ Performance comparison of FedAve, GPDMM, AGPDMM, and SCAFFOLD for solving a least square problem which is specified by synthetic data. }}
\label{fig:synthetic}
\vspace*{-0.0cm}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=120mm]{minist_5K_all_K1_pro.eps}
\vspace*{-0.0cm}
\caption{\footnotesize{Performance comparison for softmax regression over the MNIST and Fashion-MNIST datasets, where the five subplots in the first row are for MNIST. As classification over Fashion-MNIST is more challenging than that over MNIST, the training losses over Fashion-MNIST are larger than those over MNIST.} }
\label{fig:compare_MNIST}
\vspace*{-0.3cm}
\end{figure*}
\noindent\textbf{Linear convergence results}: With Lemmas~\ref{lemma:twoBounds} and \ref{lemma:lower_bound}, we are ready to show the linear convergence speed of GPDMM in Alg.~1. Our main objective is to show that the coefficients before $\|\boldsymbol{x}_i^{r, K} - \boldsymbol{x}_i^{\star}\|^2$ and $ \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2$ on the RHS of (\ref{equ:upper_bound_final}) are greater than the ones before $\|\boldsymbol{x}_i^{r-1, K} - \boldsymbol{x}_i^{\star}\|^2$ and $ \|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2$ on the LHS of (\ref{equ:upper_bound_final}) for each client $i$. The other quantities in (\ref{equ:upper_bound_final}) are either dropped or combined to produce the above-mentioned ones. We summarise the results in the theorem below:
\begin{theorem} Suppose the estimates $\{\boldsymbol{x}_i^{r, k} \}$ are obtained by performing (\ref{equ:gradient_xi})-(\ref{equ:f_i_approximate}) under the condition that $1/\eta > L\geq \mu>0$. Let $Q^r$, $r\geq 1$, be
\begin{align}
&Q^r = \sum_{i=1}^m \Big[\frac{1/\eta - \theta\mu}{2K} \| \boldsymbol{x}_i^{r-1, K} - \boldsymbol{x}_i^{\star} \|^2 \nonumber \\
&\hspace{2mm}+ \hspace{-0.7mm} (\frac{1}{4\rho} \hspace{-0.7mm}-\hspace{-0.7mm} \frac{\gamma_{2}}{2}) \|\rho(\bar{\boldsymbol{x}}_i^{r, K} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i^{\star}) \hspace{-0.7mm}+\hspace{-0.7mm} (\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big], \label{equ:Qr}
\end{align}
where
\begin{align}
& \gamma_{2} \hspace{-0.7mm}=\hspace{-0.7mm} \min\left( \frac{\theta\mu \phi}{2\rho^2}, \frac{\gamma_1\eta^2}{2} \right), \label{equ:gamma_2}
\end{align}
where $1 \hspace{-0.7mm}>\hspace{-0.7mm} \theta \hspace{-0.7mm}> \hspace{-0.7mm}0$, $ 1 \hspace{-0.7mm}>\hspace{-0.7mm}\phi \hspace{-0.7mm}> \hspace{-0.7mm}0$ satisfy $\frac{\theta\mu \phi}{4\rho^2} < \frac{1}{4\rho}$, and
\begin{align}
\gamma_{1} \hspace{-0.7mm}=\hspace{-0.7mm} \min\left(\frac{1-\theta}{2L \eta^2} , \frac{1/\eta \hspace{-0.7mm}-\hspace{-0.7mm} L}{2}\right). \label{equ:gamma_1}
\end{align}
Then
\begin{align}
Q^{r+1} \leq \beta Q^r, \label{equ:linear_conv}
\end{align}
where $0<\beta<1$ is computed as
\begin{align}
\beta &= \max\left( \frac{1/(4\rho)- \gamma_{2}/2 }{1/(4\rho)}, \frac{ 1/\eta - \theta\mu }{ 1/\eta - \theta\mu\phi} \right).
\nonumber
\end{align}
\label{theorem:linear_conv}
\end{theorem}
\begin{proof}
See Appendix~\ref{appendix:linear_conv} for the proof. The constraint $0<\beta<1$ is guaranteed by the fact that $ 1/\eta> L\geq\mu > \theta\mu$,
$\frac{1}{4\rho} > \frac{\theta\mu \phi}{4\rho^2} \geq \frac{\gamma_{2}}{2} $, and $1>\phi>0$.
\end{proof}
\noindent \textbf{Sublinear convergence results}: For the special case where the client functions are not strongly convex (i.e., $\mu=0$ in (\ref{equ:muStrong})), the method exhibits a sublinear convergence speed. The convergence rate can be characterised by setting $\mu=0$ and $\theta=0$ in (\ref{equ:upper_bound_final}), summing from $r=1$ to $r=R$, and applying Jensen's inequality. We summarise the results in the theorem below:
\begin{theorem} Consider the special case $\mu=0$ in (\ref{equ:muStrong}) for all clients. Suppose the estimates $\{\boldsymbol{x}_i^{r,k}\}$ are obtained by performing (\ref{equ:gradient_xi})-(\ref{equ:f_i_approximate}) under the condition that $1/\eta > L$. Let $\bar{\boldsymbol{x}}_i^{R, K} =\frac{1}{R}\sum_{r=1}^R\bar{\boldsymbol{x}}_i^{r, K}=\frac{1}{RK}\sum_{r=1}^{R}\sum_{k=1}^K \boldsymbol{x}_i^{r,k}$ and $\bar{\boldsymbol{\lambda}}_{i|s}^{R} =\frac{1}{R}\sum_{r=1}^R \boldsymbol{\lambda}_{i|s}^{r+1}$. Then
\begin{align}
\hspace{-3mm} &\lim_{R\rightarrow \infty}\sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{R,K} ) \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{\star, T}\bar{\boldsymbol{x}}_i^{R,K} \hspace{-0.7mm}-\hspace{-0.7mm} f_i(\boldsymbol{x}_i^{\star}) \Big] \hspace{-0.7mm}=\hspace{-0.7mm} \mathcal{O}(1/R) \label{equ:sublinear1}
\end{align}
\begin{align}
\hspace{-3mm} &\lim_{R\rightarrow \infty}\sum_{i=1}^m \Big[ \hspace{-0.6mm}\frac{\gamma_{1}\eta^2}{2} \|\bar{\boldsymbol{\lambda}}_{i|s}^{R} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{\star} \|^2 \Big] = \mathcal{O}(1/R), \label{equ:sublinear2}
\end{align}
where $\gamma_{1}$ is given by (\ref{equ:gamma_1}) by setting $\theta=0$.
\label{theorem:sublinear}
\end{theorem}
\begin{proof}
See Appendix~\ref{appendix:sublinear} for the proof.
\end{proof}
It is clear from Lemma~\ref{lemma:lower_bound} that the LHS of (\ref{equ:sublinear1}) is lower-bounded by zero for all $R\geq 1$. When $R$ approaches infinity, we have $\{\nabla f_i(\bar{\boldsymbol{x}}_i^{R,K} ) = \boldsymbol{\lambda}_{i|s}^{\star}\}_{i=1}^m$, showing that the limiting point $\{\bar{\boldsymbol{x}}_i^{R, K}\}$ is in fact an optimal solution.
\vspace{-2mm}
\section{Experimental Results}
\vspace{-2mm}
Two experiments were conducted to evaluate FedAve \cite{McMahan17}, GPDMM, AGPDMM, and SCAFFOLD. Inexact FedSplit is not considered because of its poor performance demonstrated in Fig.~\ref{fig:FedSplit}. The two experiments are least-squares minimisation over synthetic data and softmax regression over the MNIST and Fashion-MNIST datasets, respectively.
\vspace{-2mm}
\subsection{Least square minimisation over synthetic data}
\vspace{-2mm}
\label{subsec:least_square}
We consider solving a least-squares problem over a centralised network (see \cite{Pathak2021} for a similar experimental setup). The objective function takes the form $f_i(\boldsymbol{x}_i) = \frac{1}{2}\|\boldsymbol{A}_i \boldsymbol{x}_i - \boldsymbol{b}_i \|^2$, where each $\boldsymbol{A}_i\in \mathbb{R}^{5000\times 500}$ is generated element-wise from a normal distribution.
The vector $\boldsymbol{b}_i$ is obtained by letting $\boldsymbol{b}_i = \boldsymbol{A}_i\boldsymbol{y}_0+\boldsymbol{v}_i$, where $\boldsymbol{y}_0$ is a predefined vector and $\boldsymbol{v}_i\sim \mathcal{N}(0, 0.25\boldsymbol{I}_{5000\times 5000})$.
In all four methods, $\{\boldsymbol{x}_i\}$ and $\boldsymbol{x}_s$ were initialised to zero. In addition, the hyper-parameters $\eta\in\{5\times 10^{-5}, 10^{-4}\}$, $m\in\{25, 500\}$, and $K\in\{1,3,5,10, 20\}$ were tested. The parameter $\eta_g$ in SCAFFOLD was set to $\eta_g=1$ to be in line with the setup $\rho=\frac{1}{\eta}$ of AGPDMM in (\ref{equ:xs_AGPDMM_K1}). Finally, the control variates of SCAFFOLD were initialised to zero.
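For reference, a sketch of the synthetic data generation is given below; the exact $\boldsymbol{y}_0$ and random seed are not specified in the text, so random choices are assumed (note that the full-size problem occupies roughly 0.5\,GB across the $m=25$ clients).
\begin{verbatim}
# Synthetic least-squares data (shrink n, d for a quick test).
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 25, 5000, 500
y0 = rng.standard_normal(d)                    # assumed choice of y_0
A = [rng.standard_normal((n, d)) for _ in range(m)]
b = [A[i] @ y0 + 0.5 * rng.standard_normal(n)  # v_i ~ N(0, 0.25 I)
     for i in range(m)]
\end{verbatim}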
Fig.~\ref{fig:synthetic} displays the convergence results of the four methods. Firstly, one observes that FedAve performs poorly when $K>1$, which is due to the functional heterogeneity across the client nodes (i.e., the global optimal solution $\boldsymbol{x}_s^{\ast}$ is inconsistent with the optimal solutions of the individual client functions \cite{Pathak2021}). Secondly, it is clear that AGPDMM converges faster than GPDMM for all tested $K$ values. As explained in Section~\ref{sec:GPDMM}, the performance gain of AGPDMM is due to the fact that at each iteration $r$, the global estimate $\boldsymbol{x}_s^r$ instead of the individual estimate $\boldsymbol{x}_i^{r-1, K}$ is utilised to perform approximate optimisations at client $i$. Thirdly, one can also observe from the figure that AGPDMM converges faster than SCAFFOLD when $K>1$. This might be because the computation of $\boldsymbol{\lambda}_{s|i}^{r+1}$ in AGPDMM utilises both $\{\boldsymbol{x}_s^{r} - \boldsymbol{x}_i^{r, K} \}$ and $\{\boldsymbol{x}_s^{r+1} - \boldsymbol{x}_i^{r, K} \}$, while the computation of $\boldsymbol{c}^{r+1}$ in SCAFFOLD utilises only $\{\boldsymbol{x}_s^{r} - \boldsymbol{x}_i^{r, K} \}$. When $K=1$, both methods perform identically to FedAve. This is because all three share the same update expression for the estimate $\boldsymbol{x}_s^{r+1}$, which is in fact the vanilla gradient descent expression of FedAve.
\vspace{-2mm}
\subsection{Softmax regression over MNIST and Fashion-MNIST }
\vspace{-1mm}
In this experiment, we consider performing softmax regression (i.e., a convex optimisation problem) over the MNIST and Fashion-MNIST datasets, where each dataset has 10 classes. The number of clients is set to be $m=10$ for each dataset, where each client carries the training images of a single class. The above setup implies that the distributions of the training data are heterogeneous across the different clients.
Similarly to the first experiment, $\{\boldsymbol{x}_i\}$ and $\boldsymbol{x}_s$ were initialised to zero in all four methods. The other hyper-parameters $\eta=0.05$ and $K\in\{1, 5, 10, 30, 40\}$ were tested. The parameter $\eta_g$ and the control variates for SCAFFOLD were set as in the first experiment. At each gradient step of an iteration at a client node, a mini-batch of 300 training samples was utilised to compute the gradient and update the model parameters accordingly. It is noted that the mini-batches were taken in a pre-defined order instead of a random one to remove any effect of randomness. That is, the training procedure is deterministic.
The training results and validation accuracies are summarised in Fig.~\ref{fig:compare_MNIST} and Table~\ref{tab:val_acc}, respectively. One observes that for each dataset, the training loss of every method except FedAve improves gradually as $K$ increases from 1 to 40. In addition, it is clear that AGPDMM performs best w.r.t. the training loss. As for validation accuracy, AGPDMM outperforms the others in most scenarios except $K=10$ for Fashion-MNIST. SCAFFOLD performs slightly better than GPDMM. The above phenomenon suggests that the initialisation for each iteration at the client side is crucial for Inexact PDMM.
\begin{table}[t]
\caption{\small Validation accuracy (in percentage) of the four methods for the MNIST and Fashion-MNIST datasets}
\label{tab:val_acc}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& \hspace{-1.5mm}\scriptsize{ K } \hspace{-1.5mm} & \footnotesize{1} & \footnotesize{5}& \footnotesize{10}& \footnotesize{30} & \footnotesize{40}
\\
\hline
{\scriptsize \multirow{4}{*}{\rotatebox{90}{MNIST}}} & \hspace{-1.5mm}\scriptsize{ FedAve } \hspace{-1.5mm} & \footnotesize{90.80} & \footnotesize{91.70} & \footnotesize{91.67}& \footnotesize{91.32} & \footnotesize{91.16} \\
\cline{2-7}
& \hspace{-1.5mm}\scriptsize{ GPDMM } \hspace{-1.5mm} & \footnotesize{90.25} & \footnotesize{91.92} & \footnotesize{92.20}& \footnotesize{92.46} & \footnotesize{92.52} \\
\cline{2-7}
& \hspace{-1.5mm}\scriptsize{SCAFFOLD} \hspace{-1.5mm} & \footnotesize{90.80} & \footnotesize{92.10} & \footnotesize{92.29} & \footnotesize{92.53} & \footnotesize{92.59} \\
\cline{2-7}
& \hspace{-1.5mm}\scriptsize{AGPDMM } \hspace{-1.5mm} & \footnotesize{90.80} & \footnotesize{\textbf{92.14}} & \footnotesize{\textbf{92.37}} & \footnotesize{\textbf{92.61}} & \footnotesize{\textbf{92.64}} \\
\hline
\hline
{\scriptsize \multirow{4}{*}{\rotatebox{90}{Fashion-MNIST}}}
& \hspace{-1.5mm}\scriptsize{ FedAve } \hspace{-1.5mm} & \footnotesize{82.24} & \footnotesize{83.08} & \footnotesize{83.13}& \footnotesize{83.09} & \footnotesize{82.83} \\
\cline{2-7}
& \hspace{-1.5mm}\scriptsize{ GPDMM } \hspace{-1.5mm} & \footnotesize{81.43} & \footnotesize{83.64} & \footnotesize{84.18}& \footnotesize{84.58} & \footnotesize{84.64} \\
\cline{2-7}
& \hspace{-1.5mm}\scriptsize{SCAFFOLD} \hspace{-1.5mm} & \footnotesize{82.24} & \footnotesize{83.97} & \footnotesize{\textbf{84.49}} & \footnotesize{84.66} & \footnotesize{84.65} \\
\cline{2-7}
& \hspace{-1.5mm}\scriptsize{AGPDMM } \hspace{-1.5mm} & \footnotesize{82.24} & \footnotesize{\textbf{84.08}} & \footnotesize{{84.46}} & \footnotesize{\textbf{84.67}} & \footnotesize{\textbf{84.65}} \\
\hline
\end{tabular}
\vspace{-0mm}
\end{table}
\section{Conclusions}
In this paper, we first showed that PDMM reduces to FedSplit when applied to a centralised network. The poor performance of Inexact FedSplit reported in \cite{Pathak2021} was analysed and found to be due to improper parameter initialisation at the client side. Two versions of Inexact PDMM, GPDMM and AGPDMM, were then proposed to correct the convergence issue of Inexact FedSplit. The main difference between the two is that at each iteration $r$, AGPDMM utilises the global estimate $\boldsymbol{x}_s^r$ to conduct approximate optimisations at the client side, which is more informative than the individual estimates $\{\boldsymbol{x}_i^{r-1, K}\}$. Linear and sublinear convergence rates were established for GPDMM for any number $K>0$ of approximate optimisations conducted at the client side per iteration. It was also shown analytically that when $K=1$, both AGPDMM and SCAFFOLD reduce to the vanilla gradient descent operation under proper parameter selection. Therefore, convergence results of classical vanilla gradient descent apply directly to AGPDMM when $K=1$. Experimental results show that AGPDMM converges faster than both SCAFFOLD and GPDMM.
One future direction is to provide a convergence analysis for AGPDMM when $K>1$. Another is to extend the deterministic analysis for GPDMM to the stochastic scenario.
\appendices
\vspace{-3mm}
\section{Proof for Lemma~\ref{lemma:primal_inequality_general}}
\label{appendix:lemma_ineq}
Before presenting the proof, we first introduce two lemmas that will be needed later on:
\begin{lemma} For any $\boldsymbol{y}_i \in\mathbb{R}^d$, $i=1,\ldots, 4$, the following equality holds
\begin{align}
&(\boldsymbol{y}_1-\boldsymbol{y}_2)^T(\boldsymbol{y}_3- \boldsymbol{y}_4) \nonumber \\
&= \frac{1}{2}\left(\|\boldsymbol{y}_1 \hspace{-0.7mm}+\hspace{-0.7mm}\boldsymbol{y}_3 \|^2 \hspace{-0.7mm}-\hspace{-0.7mm} \|\boldsymbol{y}_1 \hspace{-0.7mm}+\hspace{-0.7mm} \boldsymbol{y}_4 \|^2 \hspace{-0.7mm}-\hspace{-0.7mm} \|\boldsymbol{y}_2\hspace{-0.7mm}+\hspace{-0.7mm}\boldsymbol{y}_3 \|^2 \hspace{-0.7mm}+\hspace{-0.7mm} \|\boldsymbol{y}_2\hspace{-0.7mm}+\hspace{-0.7mm}\boldsymbol{y}_4 \|^2\right). \nonumber
\end{align}
\label{lemma:identity}
\end{lemma}
\begin{lemma}
Suppose $f_i$ has an $L$-Lipschitz continuous gradient with $L>0$. Then the following inequality
\begin{align}
&\hspace{-3mm}f_i(\boldsymbol{y}_i) \hspace{-0.5mm}\leq\hspace{-0.5mm} f_i(\boldsymbol{x}_i) \hspace{-0.5mm}+\hspace{-0.5mm} \nabla f_i(\boldsymbol{x}_i)^T(\boldsymbol{y}_i\hspace{-0.5mm}-\hspace{-0.5mm}\boldsymbol{x}_i) \hspace{-0.5mm}+\hspace{-0.5mm} \frac{L}{2}\|\boldsymbol{x}_i \hspace{-0.5mm}-\hspace{-0.5mm} \boldsymbol{y}_i \|^2 \nonumber
\end{align}
holds, which is a consequence of the inequality (\ref{equ:gradLips2}) (see \cite{Zhou18Duality}).
\label{lemma:gradLips}
\end{lemma}
\vspace{-5mm}
\begin{proof}
We now describe the proof of Lemma~\ref{lemma:primal_inequality_general}. The expression $f_i(\boldsymbol{x}_i) - f_i(\boldsymbol{x}_i^{r, k+1})$ for client $i$ can be lower-bounded as
\begin{align}
& \hspace{-1mm} f_i(\boldsymbol{x}_i) - f_i(\boldsymbol{x}_i^{r, k+1}) \nonumber \\
\stackrel{(a)}{\geq}& \Big[f_i(\boldsymbol{x}_i^{r, k}) + (\boldsymbol{x}_i - \boldsymbol{x}_i^{r,k})^T \nabla f_i (\boldsymbol{x}_i^{r,k}) + \frac{\theta \mu }{2}\|\boldsymbol{x}_i^{r,k}-\boldsymbol{x}_i \|^2 \hspace{2mm}
\nonumber \\
&+ \frac{1-\theta}{2L}\| \nabla f_i(\boldsymbol{x}_i^{r,k})- \nabla f_i ( \boldsymbol{x}_i) \|^2 \Big] - \hspace{-0.7mm} \Big[ f_i(\boldsymbol{x}_i^{r,k}) \hspace{-0.7mm} \nonumber \\
& + \hspace{-0.7mm} (\boldsymbol{x}_i^{r,k+1} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i^{r,k})^T \nabla f_i(\boldsymbol{x}_i^{r,k}) \hspace{-0.7mm}+ \hspace{-0.7mm} \frac{L}{2} \|\boldsymbol{x}_i^{r,k+1} \hspace{-0.7mm}- \hspace{-0.7mm}\boldsymbol{x}_i^{r,k} \|^2 \Big] \nonumber \\
=& \hspace{-0.2mm} (\boldsymbol{x}_i \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i^{r, k+1})^T \nabla \hspace{-0.2mm} f_i(\boldsymbol{x}_i^{r,k}) \hspace{-0.7mm} + \hspace{-0.7mm} \frac{\theta\mu}{2}\|\boldsymbol{x}_i^{r,k} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i \|^2 \hspace{-0.8mm}
\nonumber \\
&- \hspace{-0.8mm} \frac{L}{2} \|\boldsymbol{x}_i^{r,k+1} \hspace{-0.8mm} - \hspace{-0.7mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm}+\hspace{-0.7mm} \frac{1-\theta}{2L}\| \nabla f_i(\boldsymbol{x}_i^{r,k})- \nabla f_i ( \boldsymbol{x}_i) \|^2 \nonumber
\end{align}
\begin{align}
\stackrel{(b)}{=}& \hspace{-0.6mm} (\boldsymbol{x}_i \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{x}_i^{r,k+1})^T \Big( \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.7mm} -\hspace{-0.7mm} \frac{1}{\eta} (\boldsymbol{x}_i^{r, k+1} \hspace{-0.7mm} -\hspace{-0.7mm} \boldsymbol{x}_i^{r, k})\Big) \nonumber\\
&+ \hspace{-0.6mm} \frac{\theta\mu}{2}\|\boldsymbol{x}_i^{r, k}-\boldsymbol{x}_i \|^2 \hspace{-0.6mm} -\hspace{-0.6mm} \frac{L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \nonumber \\
& + \frac{1-\theta}{2L}\| \nabla f_i(\boldsymbol{x}_i^{r,k})- \nabla f_i ( \boldsymbol{x}_i) \|^2 \nonumber \\
\stackrel{(c)}{=}& \hspace{-0.6mm} (\boldsymbol{x}_i \hspace{-0.6mm}-\hspace{-0.6mm} \boldsymbol{x}_i^{r, k+1})^T [ \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r}] \hspace{-0.6mm}+\hspace{-0.6mm} \frac{1}{2\eta}
\|\boldsymbol{x}_i - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
& - \hspace{-0.6mm} \frac{1/\eta - \theta\mu}{2}\|\boldsymbol{x}_i^{r,k}-\boldsymbol{x}_i \|^2 \hspace{-0.6mm} +\hspace{-0.6mm} \frac{1/\eta- L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \nonumber \\
&+ \frac{1-\theta}{2L}\| \nabla f_i(\boldsymbol{x}_i^{r,k})- \nabla f_i ( \boldsymbol{x}_i) \|^2
\end{align}
where step $(a)$ follows from (\ref{equ:gradLips2})--(\ref{equ:muStrong}) and Lemma~\ref{lemma:gradLips}, which hold because $f_i$ is $\mu$-convex ($\mu\geq 0$) and has an $L$-Lipschitz continuous gradient with $L\geq \mu$. The parameter $\theta$ satisfies $0\leq \theta\leq 1$. Step $(b)$ uses the optimality condition (\ref{equ:opti_r}). Step $(c)$ makes use of Lemma~\ref{lemma:identity}.
The proof is complete.
\end{proof}
\section{Proof for Lemma~\ref{lemma:twoBounds}}
\label{appendix:Lemma_upperbound}
\begin{proof}
Invoking Lemma~\ref{lemma:primal_inequality_general} with $\boldsymbol{x}_i=\boldsymbol{x}_i^{\star}$, summing over all clients $i=1,\ldots,m$ and all gradient steps $k=0,\ldots,K-1$ for iteration $r$, and rearranging the quantities, we obtain
\begin{align}
& \sum_{i=1}^m \frac{1}{K}\sum_{k=0}^{K-1} \hspace{-0.6mm} \frac{1/\eta - \theta \mu}{2}\|\boldsymbol{x}_i^{r,k}-\boldsymbol{x}_i^{\star} \|^2 \hspace{-0.6mm} \nonumber \\
\hspace{-2mm}&\geq \sum_{i=1}^m \hspace{-0.5mm} \frac{1}{K}\hspace{-0.5mm} \sum_{k=0}^{K-1} \Big[ f_i(\boldsymbol{x}_i^{r, k+1} ) - f_i(\boldsymbol{x}_i^{\star}) \hspace{-0.6mm} + \frac{1}{2\eta}
\|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber \\
& \hspace{10mm} - ( \boldsymbol{x}_i^{r, k+1} - \boldsymbol{x}_i^{\star} )^T [ \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r}] \nonumber \\
&+\hspace{-0.6mm} \frac{1/\eta \hspace{-0.5mm}- \hspace{-0.5mm} L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.7mm}- \hspace{-0.7mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm} + \hspace{-0.7mm} \frac{1 \hspace{-0.5mm}- \hspace{-0.5mm} \theta}{2L}\| \nabla f_i(\boldsymbol{x}_i^{r, k}) \hspace{-0.7mm} -\hspace{-0.7mm} \nabla f_i ( \boldsymbol{x}_i^{\star}) \|^2 \Big] \nonumber\\
\hspace{-2mm}&\stackrel{(a)}{=} \sum_{i=1}^m \hspace{-0.5mm} \frac{1}{K}\hspace{-0.5mm} \sum_{k=0}^{K-1} \Big[ f_i(\boldsymbol{x}_i^{r, k+1} ) - f_i(\boldsymbol{x}_i^{\star}) \hspace{-0.6mm} + \frac{1}{2\eta}
\|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
& \hspace{10mm} - ( \boldsymbol{x}_i^{r, k+1} - \boldsymbol{x}_i^{\star} )^T [ \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r}] \nonumber \\
\hspace{-3mm}&\hspace{9mm} +\hspace{-0.6mm} \frac{1/\eta - L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm} + \hspace{-0.7mm} \frac{1-\theta}{2L}\| \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm}\nonumber \\
\hspace{-3mm}& \hspace{9mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star} -\hspace{-0.6mm} (1/\eta)(\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{x}_i^{r, k}) \hspace{-0.6mm}\|^2 \Big] \nonumber \\
\hspace{-2mm}&\stackrel{(b)}{=} \sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - f_i(\boldsymbol{x}_i^{\star}) \hspace{-0.6mm} + \frac{1}{K}\sum_{k=0}^{K-1} \frac{1}{2\eta}
\|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
& \hspace{10mm} - ( \bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1} \nonumber \\
\hspace{-3mm}&\hspace{9mm} +\hspace{-0.6mm} \frac{1/\eta - L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm} + \hspace{-0.7mm} \frac{1-\theta}{2L}\| \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm}\nonumber \\
\hspace{-3mm}& \hspace{9mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star} -\hspace{-0.6mm} (1/\eta)(\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{x}_i^{r, k}) \hspace{-0.6mm}\|^2 \Big] ,
\label{equ:upper_bound1}
\end{align}
where step $(a)$ uses the optimality condition (\ref{equ:opti_r}) and $ \{\nabla f_i ( \boldsymbol{x}_i^{\star}) = \boldsymbol{\lambda}_{i|s}^{\star}\}_{i=1}^m$. Step $(b)$ is obtained by employing Jensen's inequality, $ \bar{\boldsymbol{x}}_i^{r,K} = \frac{1}{K}\sum_{k=1}^K \boldsymbol{x}_i^{r,k}$, and $ \boldsymbol{\lambda}_{i|s}^{r+1}= \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \bar{\boldsymbol{x}}_i^{r,K} ) \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r}$.
To further simplify (\ref{equ:upper_bound1}), we first present a lemma below:
\begin{lemma}
Suppose the estimates $\{\boldsymbol{x}_i^{r, k}\}_{k=1}^{K}$ are obtained by performing (\ref{equ:gradient_xi})-(\ref{equ:f_i_approximate}) under the condition that $1/\eta \geq L$. Then the expression $\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1} $ in the RHS of (\ref{equ:upper_bound1}) can be alternatively represented
as
\begin{align}
&\hspace{-0mm} 2\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1} \nonumber \\
&= 2\sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{\star} \bar{\boldsymbol{x}}_i^{r, K} + \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
& \hspace{3mm} - \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2.
\label{equ:client_ineq_final}
\end{align}
The proof of Lemma~\ref{lemma:client_ineq_final} is postponed to Appendix~\ref{appendix:proof_client_ineq}.
\label{lemma:client_ineq_final}
\end{lemma}
Plugging (\ref{equ:client_ineq_final}) into (\ref{equ:upper_bound1}) and rearranging the quantities produces (\ref{equ:upper_bound_final}). The proof is complete. \end{proof}
\vspace{-3mm}
\section{Proof for Lemma~\ref{lemma:client_ineq_final}}
\label{appendix:proof_client_ineq}
\vspace{-3mm}
\begin{proof}
In the first step, we derive two different but mathematically equivalent expressions for the quantity $\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1}$.
Firstly, by plugging the expressions $\{\boldsymbol{\lambda}_{i|s}^{r+1} = \rho (\boldsymbol{x}_s^r \hspace{-0.7mm}-\hspace{-0.7mm} \bar{\boldsymbol{x}}_i^{r,K} ) -\boldsymbol{\lambda}_{s|i}^r \}$ into $\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1}$, we have
\begin{align}
&\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1} \nonumber \\
&=\sum_{i=1}^m ( \rho ( \boldsymbol{x}_s^r - \bar{\boldsymbol{x}}_i^{r,K} )
-\boldsymbol{\lambda}_{s|i}^r)^T (\bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star}) \nonumber \\
&= \sum_{i=1}^m \Big( \rho ( \boldsymbol{x}_s^r - \bar{\boldsymbol{x}}_i^{r, K} )
+\boldsymbol{\lambda}_{s|i}^{r+1}-\boldsymbol{\lambda}_{s|i}^r\Big)^T (\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) \hspace{-0.6mm} \nonumber \\
&\hspace{3mm}-\hspace{-0.6mm} \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1} (\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) \nonumber \\
&= \sum_{i=1}^m \Big( \rho ( \boldsymbol{x}_s^r - \boldsymbol{x}_s^{r+1} )
+\boldsymbol{\lambda}_{s|i}^{r+1}-\boldsymbol{\lambda}_{s|i}^r\Big)^T (\bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star}) \nonumber \\
&\hspace{3mm} -\hspace{-0.6mm} \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1} (\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) \nonumber \\
&\hspace{3mm} +\sum_{i=1}^m \rho(\boldsymbol{x}_s^{r+1} - \bar{\boldsymbol{x}}_i^{r, K} )^T(\bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star}).
\label{equ:client_inequality_current_sum1}
\end{align}
Next, we derive the second expression for $\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1}$. To do so, we note that $\bar{\boldsymbol{x}}_i^{r,K}$ can be represented in terms of $\boldsymbol{\lambda}_{i|s}^{r+1}$ as
\begin{align}
\bar{\boldsymbol{x}}_i^{r, K} = \boldsymbol{x}_s^{r} - \frac{1}{\rho} (\boldsymbol{\lambda}_{s|i}^r + \boldsymbol{\lambda}_{i | s}^{r+1}), \quad i=1,\ldots, m. \label{equ:lambda_update_reverse}
\end{align}
Similarly to the derivation of (\ref{equ:client_inequality_current_sum1}), we plug the expression (\ref{equ:lambda_update_reverse}) for $\bar{\boldsymbol{x}}_i^{r,K}$ where appropriate, which gives
\begin{align}
&\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1} \nonumber \\
&=\hspace{-0.7mm} \sum_{i=1}^m (\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i | s }^{\star} )^T \hspace{-0.5mm} \bar{\boldsymbol{x}}_i^{r,K} \hspace{-0.7mm}-\hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \boldsymbol{\lambda}_{i | s}^{r+1,T}\hspace{-0.5mm} \boldsymbol{x}_i^{\star} + \sum_{i=1}^m \boldsymbol{\lambda}_{i | s}^{\star,T} \bar{\boldsymbol{x}}_i^{r,K} \nonumber
\end{align}
\begin{align}
&= \sum_{i=1}^m \left [ \boldsymbol{x}_s^{r} \hspace{-0.5mm}-\hspace{-0.5mm} \frac{1}{\rho} (\boldsymbol{\lambda}_{s|i}^r \hspace{-0.5mm}+\hspace{-0.5mm} \boldsymbol{\lambda}_{i | s}^{r+1}) \right]^T (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \nonumber \\
&\hspace{3mm} -\hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \boldsymbol{\lambda}_{i | s}^{r+1,T}\hspace{-0.5mm} \boldsymbol{x}_i^{\star} + \sum_{i=1}^m \boldsymbol{\lambda}_{i | s}^{\star,T} \bar{\boldsymbol{x}}_i^{r,K} \nonumber \\
&= \sum_{i=1}^m \left [(\boldsymbol{x}_s^{r} -\boldsymbol{x}_s^{r+1} ) \hspace{-0.5mm}-\hspace{-0.5mm} \frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^r \hspace{-0.5mm}+\hspace{-0.5mm} \boldsymbol{\lambda}_{i | s}^{r+1}) \right]^T (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \hspace{-0.5mm} \nonumber \\
&+\hspace{-0.5mm} \sum_{i =1}^m \boldsymbol{x}_s^{r+1,T} (\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.5mm}-\hspace{-0.5mm} \boldsymbol{\lambda}_{i|s}^{\star} ) -\hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \boldsymbol{\lambda}_{i | s}^{r+1,T}\hspace{-0.5mm} \boldsymbol{x}_i^{\star} + \sum_{i=1}^m \boldsymbol{\lambda}_{i | s}^{\star,T} \bar{\boldsymbol{x}}_i^{r,K} \nonumber \\
&= \sum_{i=1}^m \left [(\boldsymbol{x}_s^{r} -\boldsymbol{x}_s^{r+1} ) \hspace{-0.5mm}-\hspace{-0.5mm} \frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^r \hspace{-0.5mm}-\hspace{-0.5mm} \boldsymbol{\lambda}_{s|i}^{r+1}) \right]^T (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \hspace{-0.5mm} \nonumber \\
&+\hspace{-0.5mm} \sum_{i =1}^m \boldsymbol{x}_s^{r+1,T} (\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.5mm}-\hspace{-0.5mm} \boldsymbol{\lambda}_{i|s}^{\star} ) -\hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \boldsymbol{\lambda}_{i | s}^{r+1,T}\hspace{-0.5mm} \boldsymbol{x}_i^{\star} + \sum_{i=1}^m \boldsymbol{\lambda}_{i | s}^{\star,T} \bar{\boldsymbol{x}}_i^{r,K} \nonumber \\
& \hspace{-0mm} - \sum_{i=1}^m \frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^{r+1} \hspace{-0.5mm}+\hspace{-0.5mm} \boldsymbol{\lambda}_{i | s}^{r+1})^T(\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ).
\label{equ:client_inequality_current_sum2}
\end{align}
In the second step, we derive two different but mathematically equivalent expressions for $ \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1,T}( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star}) $. By using (\ref{equ:s_lambda_equality}) and the expression $\boldsymbol{\lambda}_{s|i}^{r+1} = \rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_s^{r+1} ) - \boldsymbol{\lambda}_{i|s}^{r+1} $, we have
\begin{align}
0 &= \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1,T}( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star}) \nonumber \\
& = \sum_{i=1}^m \left[\rho (\bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_s^{r+1} )
- \boldsymbol{\lambda}_{i|s}^{r+1} \right]^{T}( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star}) \nonumber \\
& = \sum_{i=1}^m \rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_s^{r+1} )^T( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star})
\nonumber \\
&- \sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{r+1,T}( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star}).
\label{equ:server_inequality_current_sum1}
\end{align}
The second expression for $ \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1,T}( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star}) $ can be derived by utilising $\boldsymbol{x}_s^{r+1} = \bar{\boldsymbol{x}}_i^{r,K}-\frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^{r+1} + \boldsymbol{\lambda}_{i|s}^{r+1})$ as:
\begin{align}
\hspace{-2mm}0 &= \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1,T} ( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star}) \nonumber \\
&= \hspace{-0.7mm} \sum_{i=1}^m \left( \boldsymbol{\lambda}_{s|i}^{r+1} \hspace{-0.7mm}- \hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{\star} \right)^T \hspace{-0.7mm}\boldsymbol{x}_s^{r+1}
\hspace{-0.7mm}+ \hspace{-0.7mm}\sum_{i=1}^m \hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{\star,T} \boldsymbol{x}_s^{r+1}
\hspace{-0.7mm}- \hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{r+1,T} \boldsymbol{x}_s^{\star} \nonumber \\
&= \sum_{i=1}^m \left( \boldsymbol{\lambda}_{s|i}^{r+1} - \boldsymbol{\lambda}_{s|i}^{\star} \right)^T \left[\bar{\boldsymbol{x}}_i^{r, K}-\frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^{r+1} + \boldsymbol{\lambda}_{i|s}^{r+1}) \right] \nonumber\\
&+\sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{\star,T} \boldsymbol{x}_s^{r+1}
- \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1,T} \boldsymbol{x}_s^{\star} \nonumber \\
&= \sum_{i=1}^m \left( \boldsymbol{\lambda}_{s|i}^{r+1} - \boldsymbol{\lambda}_{s|i}^{\star} \right)^T \bar{\boldsymbol{x}}_i^{r,K}
+\sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{\star,T} \boldsymbol{x}_s^{r+1}
\nonumber \\
&- \hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \left( \boldsymbol{\lambda}_{s|i}^{r+1} \hspace{-0.7mm}- \hspace{-0.7mm} \boldsymbol{\lambda}_{s|i}^{\star} \right)^T\frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^{r+1} \hspace{-0.7mm}+ \hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{r+1}) \hspace{-0.7mm}- \hspace{-0.7mm} \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1,T} \boldsymbol{x}_s^{\star}. \hspace{-2mm}
\label{equ:server_inequality_current_sum2}
\end{align}
Finally, combining (\ref{equ:client_inequality_current_sum1}) and (\ref{equ:client_inequality_current_sum2})-(\ref{equ:server_inequality_current_sum2}) produces
\begin{align}
&2\sum_{i=1}^m ( \bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star} )^T \boldsymbol{\lambda}_{i|s}^{r+1} \nonumber
\end{align}
\begin{align}
&= \sum_{i=1}^m \Big( \rho( \boldsymbol{x}_s^r - \boldsymbol{x}_s^{r+1} )
\hspace{-0.8mm}+\hspace{-0.8mm}\boldsymbol{\lambda}_{s|i}^{r+1}\hspace{-0.7mm}-\hspace{-0.7mm}\boldsymbol{\lambda}_{s|i}^r\Big)^T (\bar{\boldsymbol{x}}_i^{r, K} \hspace{-0.8mm}-\hspace{-0.8mm} \boldsymbol{x}_i^{\star}) \nonumber \\
&-\hspace{-0.6mm} \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1} (\bar{\boldsymbol{x}}_i^{r, K} \hspace{-0.8mm}-\hspace{-0.8mm} \boldsymbol{x}_i^{\star}) \hspace{-0.8mm}+\hspace{-0.8mm}\sum_{i=1}^m \rho(\boldsymbol{x}_s^{r+1} \hspace{-0.8mm}-\hspace{-0.8mm} \bar{\boldsymbol{x}}_i^{r,K} )^T(\bar{\boldsymbol{x}}_i^{r, K} \hspace{-0.8mm}-\hspace{-0.8mm} \boldsymbol{x}_i^{\star}) \nonumber \\
&\hspace{0mm} +\sum_{i=1}^m \left [(\boldsymbol{x}_s^{r} -\boldsymbol{x}_s^{r+1} ) \hspace{-0.5mm}-\hspace{-0.5mm} \frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^r \hspace{-0.5mm}-\hspace{-0.5mm} \boldsymbol{\lambda}_{s|i}^{r+1}) \right]^T (\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.7mm} -\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{\star} ) \hspace{-0.5mm}\nonumber \\
&+\hspace{-0.5mm} \sum_{i =1}^m \boldsymbol{x}_s^{r+1,T} (\boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.7mm}-\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{\star} ) \hspace{-0.7mm} -\hspace{-0.7mm} \sum_{i=1}^m \hspace{-0.7mm} \boldsymbol{\lambda}_{i | s}^{r+1,T}\hspace{-0.5mm} \boldsymbol{x}_i^{\star} \hspace{-0.7mm} +\hspace{-0.7mm} \sum_{i=1}^m \boldsymbol{\lambda}_{i | s}^{\star,T} \bar{\boldsymbol{x}}_i^{r, K} \nonumber \\
& - \sum_{i=1}^m \frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^{r+1} \hspace{-0.5mm}+\hspace{-0.5mm} \boldsymbol{\lambda}_{i | s}^{r+1})^T(\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \nonumber \\
& \hspace{0mm} + \sum_{i=1}^m \rho(\bar{\boldsymbol{x}}_i^{r, K} \hspace{0.7mm}- \hspace{0.7mm} \boldsymbol{x}_s^{r+1} )^T( \boldsymbol{x}_s^{r+1} \hspace{0.7mm}- \hspace{0.7mm} \boldsymbol{x}_s^{\star}) \nonumber \\
&\hspace{0.7mm} - \hspace{0.7mm} \sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{r+1,T}( \boldsymbol{x}_s^{r+1} - \boldsymbol{x}_s^{\star}) + \sum_{i=1}^m \left( \boldsymbol{\lambda}_{s|i}^{r+1} - \boldsymbol{\lambda}_{s|i}^{\star} \right)^T \bar{\boldsymbol{x}}_i^{r, K} \nonumber \\
&- \sum_{i=1}^m \left( \boldsymbol{\lambda}_{s|i}^{r+1} - \boldsymbol{\lambda}_{s|i}^{\star} \right)^T\frac{1}{\rho}(\boldsymbol{\lambda}_{s|i}^{r+1} + \boldsymbol{\lambda}_{i|s}^{r+1})
+\sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{\star,T} \boldsymbol{x}_s^{r+1} \nonumber \\
& - \sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{r+1,T} \boldsymbol{x}_s^{\star} \nonumber \\
& \stackrel{(a)}{=} \sum_{i=1}^m \frac{1}{\rho} \Big[ \rho( \boldsymbol{x}_s^r - \boldsymbol{x}_s^{r+1} )
+\boldsymbol{\lambda}_{s|i}^{r+1}-\boldsymbol{\lambda}_{s|i}^r\Big]^T \nonumber \\
& \hspace{3mm} \cdot [ \rho(\bar{\boldsymbol{x}}_i^{r,K} - \boldsymbol{x}_i^{\star}) + \boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star}] + 2\sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{\star} \bar{\boldsymbol{x}}_i^{r, K} \nonumber \\
& \hspace{3mm} - \sum_{i=1}^m \rho \| \boldsymbol{x}_s^{r+1} - \bar{\boldsymbol{x}}_i^{r,K} \|^2 - \sum_{i=1}^m\frac{1}{\rho}\|\boldsymbol{\lambda}_{s|i}^{r+1} + \boldsymbol{\lambda}_{i|s}^{r+1} \|^2 \nonumber \\
& \stackrel{(b)}{=} \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\boldsymbol{x}_s^r - \boldsymbol{x}_i^{\star}) - (\boldsymbol{\lambda}_{s|i}^r +\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
& \hspace{3mm} - \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\boldsymbol{x}_s^{r+1} - \boldsymbol{x}_i^{\star}) - (\boldsymbol{\lambda}_{s|i}^{r+1} +\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber\\
& \hspace{3mm} - \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\boldsymbol{x}_s^r - \bar{\boldsymbol{x}}_i^{r, K}) - (\boldsymbol{\lambda}_{s|i}^r +\boldsymbol{\lambda}_{i|s}^{r+1} ) \|^2 \nonumber \\
& \hspace{3mm} + \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\boldsymbol{x}_s^{r+1} - \bar{\boldsymbol{x}}_i^{r, K}) - (\boldsymbol{\lambda}_{s|i}^{r+1} +\boldsymbol{\lambda}_{i|s}^{r+1} ) \|^2 \nonumber\\
& \hspace{3mm} + 2\sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{\star} \bar{\boldsymbol{x}}_i^{r, K} - \sum_{i=1}^m \rho \| \boldsymbol{x}_s^{r+1} - \bar{\boldsymbol{x}}_i^{r, K} \|^2 \nonumber \\
& \hspace{3mm} - \sum_{i=1}^m\frac{1}{\rho}\|\boldsymbol{\lambda}_{s|i}^{r+1} + \boldsymbol{\lambda}_{i|s}^{r+1} \|^2 \nonumber \\
& \stackrel{(c)}{=} \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\boldsymbol{x}_s^r - \boldsymbol{x}_i^{\star}) - (\boldsymbol{\lambda}_{s|i}^r +\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
& \hspace{3mm} - \hspace{-0.7mm} \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\boldsymbol{x}_s^{r+1} \hspace{-0.7mm} -\hspace{-0.7mm} \boldsymbol{x}_i^{\star})
\hspace{-0.7mm} -\hspace{-0.7mm} (\boldsymbol{\lambda}_{s|i}^{r+1} \hspace{-0.7mm} +\hspace{-0.7mm} \boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \hspace{-0.7mm} +\hspace{-0.7mm} 2\sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{\star} \bar{\boldsymbol{x}}_i^{r, K} \nonumber \\
& \stackrel{(d)}{=} \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber
\end{align}
\begin{align}
& \hspace{3mm} - \sum_{i=1}^m \frac{1}{2\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
& \hspace{3mm} + 2\sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{\star} \bar{\boldsymbol{x}}_i^{r, K},
\end{align}
where step $(a)$ uses the fact that $\sum_{i=1}^m \boldsymbol{\lambda}_{s|i}^{\star} = \sum_{i=1}^m \boldsymbol{\lambda}_{i|s}^{\star} = 0$, step $(b)$ follows from Lemma~\ref{lemma:identity}, step $(c)$ uses the identities $\rho(\boldsymbol{x}_s^r-\bar{\boldsymbol{x}}_i^{r, K}) - (\boldsymbol{\lambda}_{s|i}^r + \boldsymbol{\lambda}_{i|s}^{r+1}) =0 $ and $\rho(\boldsymbol{x}_s^{r+1}-\bar{\boldsymbol{x}}_i^{r, K}) + (\boldsymbol{\lambda}_{s|i}^{r+1} + \boldsymbol{\lambda}_{i|s}^{r+1}) =0 $ from (\ref{equ:client_update_split})-(\ref{equ:server_update_split}), and step $(d)$ uses $\rho(\boldsymbol{x}_s^r-\bar{\boldsymbol{x}}_i^{r,K}) - (\boldsymbol{\lambda}_{s|i}^r + \boldsymbol{\lambda}_{i|s}^{r+1}) =0 $ and $\rho(\boldsymbol{x}_s^{r+1}-\bar{\boldsymbol{x}}_i^{r+1, K}) - (\boldsymbol{\lambda}_{s|i}^{r+1} + \boldsymbol{\lambda}_{i|s}^{r+2}) =0 $. The proof is complete.
\end{proof}
\section{Proof for Lemma~\ref{lemma:lower_bound} }
\label{appendix:lemma_lowerbound}
\begin{proof}
The lower bound in (\ref{equ:lowerbound}) can be proved as follows:
\begin{align}
& \sum_{i=1}^m \Big[ f_i(\boldsymbol{x}_i ) - f_i(\boldsymbol{x}_i^{\star}) - \boldsymbol{x}_i^{T}\boldsymbol{\lambda}_{i|s}^{\star} \Big] \nonumber \\
&\stackrel{(a)}{\geq} \sum_{i=1}^m \Big[ -f_i^{\ast}(\boldsymbol{\lambda}_{i|s}^{\star}) - f_i(\boldsymbol{x}_i^{\star}) \Big] = 0, \nonumber
\end{align}
where $f_i^{\ast}(\cdot)$ is the conjugate function of $f_i(\cdot)$ as defined in (\ref{equ:conj_def}). Step~$(a)$ uses Fenchel's inequality (see \cite{Boyd04ConvexOptimization}). It is known that for a convex function, the duality gap is 0 at the optimal solution. The proof is complete.
\end{proof}
\section{Proof for Theorem~\ref{theorem:linear_conv}}
\label{appendix:linear_conv}
\begin{proof}
The proof of Theorem~\ref{theorem:linear_conv} is mainly based on the results in Lemmas~\ref{lemma:twoBounds} and~\ref{lemma:lower_bound}. Assume that $1>\theta>0$ and $1/\eta > L\geq \mu >0$. The RHS of (\ref{equ:upper_bound_final}) in Lemma~\ref{lemma:twoBounds} can be further lower bounded by
\begin{align}
& \sum_{i=1}^m \frac{1}{K}\sum_{k=0}^{K-1} \hspace{-0.6mm} \frac{1/\eta - \theta \mu}{2}\|\boldsymbol{x}_i^{r,k}-\boldsymbol{x}_i^{\star} \|^2 \hspace{-0.6mm} \nonumber \\
&+ \sum_{i=1}^m \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
\hspace{-2mm}&\geq \sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - (\bar{\boldsymbol{x}}_i^{r,K})^T \boldsymbol{\lambda}_{i|s}^{\star} - f_i(\boldsymbol{x}_i^{\star}) \nonumber \\
& \hspace{9mm} + \frac{1}{K}\sum_{k=0}^{K-1} \Big(\frac{1}{2\eta} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
\hspace{-3mm}&\hspace{9mm} +\hspace{-0.6mm} \frac{1/\eta - L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm} + \hspace{-0.7mm} \frac{1-\theta}{2L}\| \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm}\nonumber \\
\hspace{-3mm}& \hspace{9mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star} -\hspace{-0.6mm} (1/\eta)(\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{x}_i^{r, k}) \hspace{-0.6mm}\|^2 \Big)\nonumber \\
&\hspace{9mm} + \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big] \nonumber \\
&\stackrel{(a)}{\geq} \sum_{i=1}^m \Big[ \frac{1}{K}\sum_{k=0}^{K-1} \Big(\frac{1}{2\eta} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
\hspace{-3mm}&\hspace{9mm} +\hspace{-0.6mm} \frac{1/\eta - L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm}- \hspace{-0.6mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm} + \hspace{-0.7mm} \frac{1-\theta}{2\eta^2 L}\| \eta( \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm}\nonumber
\end{align}
\begin{align}
\hspace{-3mm}& \hspace{9mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star}) -\hspace{-0.6mm} (\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{x}_i^{r, k}) \hspace{-0.6mm}\|^2 \Big) \nonumber \\
&\hspace{5mm} + \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big] \nonumber \\
&\stackrel{(b)}{\geq} \sum_{i=1}^m \Big[ \frac{1}{K}\sum_{k=0}^{K-1} \Big( \frac{1/\eta -\theta\mu \phi + \theta\mu \phi}{2} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
\hspace{-3mm}&\hspace{9mm} + \hspace{-0.7mm} \frac{\gamma_1}{2} \| \eta( \rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star}) \hspace{-0.6mm}\|^2\Big) \nonumber \\
&\hspace{5mm} + \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big] \nonumber \\
&\stackrel{(c)}{\geq} \sum_{i=1}^m \Big[ \frac{1}{K}\sum_{k=0}^{K-1} \frac{1/\eta -\theta\mu \phi}{2} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
\hspace{-3mm}&\hspace{9mm} + \frac{\theta\mu \phi}{2} \|\boldsymbol{x}_i^{\star} - \bar{\boldsymbol{x}}_i^{r, K} \|^2 + \hspace{-0.7mm} \frac{\gamma_1 \eta^2}{2} \| \boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star} \hspace{-0.6mm}\|^2 \nonumber \\
&\hspace{5mm} + \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big] \nonumber \\
&\stackrel{(d)}{\geq} \sum_{i=1}^m \Big[ \frac{1}{K}\sum_{k=0}^{K-1} \frac{1/\eta -\theta\mu \phi}{2} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k+1} \|^2 \nonumber\\
\hspace{-3mm}&\hspace{9mm} +\frac{\gamma_2}{2} \|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
&\hspace{5mm} + \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big], \label{equ:proof_theoremLinear_1}
\end{align}
where step $(a)$ follows from Lemma~\ref{lemma:lower_bound}. Step $(b)$ introduces $1>\phi>0$ and utilises the inequality $\|\boldsymbol{b}\|^2+\|\boldsymbol{c}\|^2\geq \frac{1}{2}\|\boldsymbol{b}+ \boldsymbol{c}\|^2$. The parameter $\gamma_{1}$ is defined as
\begin{align}
\gamma_{1} &= \min\left(\frac{1-\theta}{2L \eta^2} , \frac{1/\eta - L}{2}\right).
\label{equ:gamma_i1}
\end{align}
Step $(c)$ employs Jensen's inequality and $\boldsymbol{\lambda}_{i|s}^{r+1} = \rho(\boldsymbol{x}_s^r - \bar{\boldsymbol{x}}_i^{r,K}) - \boldsymbol{\lambda}_{s|i}^r$. Step $(d)$ utilises the inequality $\|\boldsymbol{b}\|^2+\|\boldsymbol{c}\|^2\geq \frac{1}{2}\|\boldsymbol{b}+ \boldsymbol{c}\|^2$ again, and the parameter $\gamma_{2}$ is defined as
\begin{align}
\gamma_{2} &= \min\left( \frac{\theta\mu\phi}{2\rho^2}, \frac{\gamma_{1}\eta^2}{2} \right).
\label{equ:gamma_i2}
\end{align}
By using $\{\boldsymbol{x}_i^{r-1,K}= \boldsymbol{x}_i^{r,k=0}\}$, the inequality (\ref{equ:proof_theoremLinear_1}) can be reformulated as
\begin{align}
& \sum_{i=1}^m \hspace{-0.6mm} \frac{1/\eta - \theta \mu}{2K}\|\boldsymbol{x}_i^{r-1,K}-\boldsymbol{x}_i^{\star} \|^2 \hspace{-0.6mm} \nonumber \\
&+ \sum_{i=1}^m \Big(\frac{1}{4\rho} - \frac{\gamma_2}{2} \Big) \|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
& \geq \sum_{i=1}^m \frac{1}{K}\sum_{k=1}^{K-1} \frac{\theta\mu (1-\phi)}{2} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, k} \|^2 \nonumber\\
& + \sum_{i=1}^m \frac{1/\eta -\theta\mu \phi}{2K} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, K} \|^2 \nonumber\\
&\hspace{5mm} + \sum_{i=1}^m \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \nonumber \\
& \geq \sum_{i=1}^m \frac{1/\eta -\theta\mu \phi}{2K} \|\boldsymbol{x}_i^{\star} - \boldsymbol{x}_i^{r, K} \|^2 \nonumber
\end{align}
\begin{align}
&\hspace{5mm} + \sum_{i=1}^m \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2.
\label{equ:proof_theoremLinear_2}
\end{align}
We note that when $\phi$ is chosen to satisfy $ \frac{1}{4\rho} > \frac{\theta\mu \phi}{4\rho^2}$, we have $ \frac{1}{4\rho} > \frac{\theta\mu \phi}{4\rho^2} \geq \frac{\gamma_{2}}{2} $ based on the definition of $\gamma_{2}$ in (\ref{equ:gamma_i2}). As a result, it is clear from (\ref{equ:proof_theoremLinear_2}) that the coefficients in front of $\|\boldsymbol{x}_i^{r-1,K}-\boldsymbol{x}_i^{\star} \|^2 $ and $\|\rho(\bar{\boldsymbol{x}}_i^{r, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+1} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2$ are smaller than the coefficients in front of $\|\boldsymbol{x}_i^{r, K}-\boldsymbol{x}_i^{\star} \|^2 $ and $\|\rho(\bar{\boldsymbol{x}}_i^{r+1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{r+2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2$, respectively. Therefore, we can conclude that GPDMM has a linear convergence rate under these conditions. The expression for the parameter $\beta$ in Theorem~\ref{theorem:linear_conv} can be derived directly from (\ref{equ:proof_theoremLinear_2}). The proof is complete.
\end{proof}
\section{Proof for Theorem~\ref{theorem:sublinear}}
\label{appendix:sublinear}
\begin{proof}
Similar to Appendix~\ref{appendix:linear_conv}, the proof for Theorem~\ref{theorem:sublinear} is also based on the results in Lemma~\ref{lemma:twoBounds} and \ref{lemma:lower_bound}. Summing the inequality (\ref{equ:upper_bound_final}) in Lemma~\ref{lemma:twoBounds} from $r=1$ until $r=R$ and setting $\mu=0$ and $\theta=0$ produces
\begin{align}
&\frac{1}{R}\sum_{r=1}^R \sum_{i=1}^m \frac{1}{K} \Big[\hspace{-0.6mm} \frac{1/\eta }{2}\|\boldsymbol{x}_i^{0,K}-\boldsymbol{x}_i^{\star} \|^2 \hspace{-0.6mm} \nonumber \\
&\hspace{8mm} + \frac{1}{4\rho} \|\rho(\bar{\boldsymbol{x}}_i^{1, K} - \boldsymbol{x}_i^{\star}) + (\boldsymbol{\lambda}_{i|s}^{2} -\boldsymbol{\lambda}_{i|s}^{\star} ) \|^2 \Big] \nonumber \\
\hspace{-2mm}&\geq \frac{1}{R}\sum_{r=1}^R\sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - (\bar{\boldsymbol{x}}_i^{r,K})^T \boldsymbol{\lambda}_{i|s}^{\star} - f_i(\boldsymbol{x}_i^{\star}) \nonumber \\
\hspace{-1mm}&\hspace{1mm} +\hspace{-0.7mm}\frac{1}{K}\hspace{-0.7mm}\sum_{k=0}^{K-1}\hspace{-0.7mm}\Big(\hspace{-0.6mm} \frac{1/\eta \hspace{-0.7mm}-\hspace{-0.7mm} L}{2} \|\boldsymbol{x}_i^{r, k+1} \hspace{-0.7mm}- \hspace{-0.7mm}\boldsymbol{x}_i^{r, k} \|^2 \hspace{-0.7mm} + \hspace{-0.7mm} \frac{1}{2L\eta^2}\| \eta(\rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm}\nonumber \\
\hspace{-1mm}& \hspace{9mm} -\hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star}) -\hspace{-0.6mm} (\boldsymbol{x}_i^{r, k+1} \hspace{-0.6mm} -\hspace{-0.6mm} \boldsymbol{x}_i^{r, k}) \hspace{-0.6mm}\|^2 \Big) \Big] \nonumber \\
\hspace{-2mm}&\stackrel{(a)}{\geq} \frac{1}{R}\sum_{r=1}^R\sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - (\bar{\boldsymbol{x}}_i^{r,K})^T \boldsymbol{\lambda}_{i|s}^{\star} - f_i(\boldsymbol{x}_i^{\star}) \nonumber \\
\hspace{-1mm}&\hspace{1mm} +\hspace{-0.7mm}\frac{1}{K}\hspace{-0.7mm}\sum_{k=0}^{K-1}\hspace{-0.7mm}\Big(\hspace{-0.6mm} \frac{\gamma_1}{2}\| \eta(\rho(\hspace{-0.6mm} \boldsymbol{x}_s^{r} \hspace{-0.7mm} - \hspace{-0.6mm} \boldsymbol{x}_i^{r,k+1} ) \hspace{-0.6mm} - \hspace{-0.6mm} \boldsymbol{\lambda}_{s|i}^{r} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star}) \|^2 \Big) \Big] \nonumber \\
\hspace{-2mm}&\stackrel{(b)}{\geq} \frac{1}{R}\sum_{r=1}^R\sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{r, K} ) - (\bar{\boldsymbol{x}}_i^{r,K})^T \boldsymbol{\lambda}_{i|s}^{\star} - f_i(\boldsymbol{x}_i^{\star}) \nonumber \\
\hspace{-1mm}&\hspace{15mm} +\hspace{-0.7mm}\Big(\hspace{-0.6mm} \frac{\gamma_1\eta^2}{2}\| \boldsymbol{\lambda}_{i|s}^{r+1} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star} \|^2 \Big) \Big] \nonumber \\
\hspace{-2mm}&\stackrel{(c)}{\geq} \sum_{i=1}^m \Big[ f_i(\bar{\boldsymbol{x}}_i^{R, K} ) - (\bar{\boldsymbol{x}}_i^{R,K})^T \boldsymbol{\lambda}_{i|s}^{\star} - f_i(\boldsymbol{x}_i^{\star}) \nonumber \\
\hspace{-1mm}&\hspace{15mm} +\hspace{-0.7mm}\Big(\hspace{-0.6mm} \frac{\gamma_1\eta^2}{2}\| \bar{\boldsymbol{\lambda}}_{i|s}^{R} \hspace{-0.6mm} - \boldsymbol{\lambda}_{i|s}^{\star} \|^2 \Big) \Big],
\label{appendix:theorem_sublinear_proof_1}
\end{align}
where step $(a)$ utilises the inequality $\|\boldsymbol{b}\|^2+\|\boldsymbol{c}\|^2\geq \frac{1}{2}\|\boldsymbol{b}+ \boldsymbol{c}\|^2$, and the parameter $\gamma_{1}$ is given by (\ref{equ:gamma_i1}) with $\theta=0$. Step $(b)$ employs Jensen's inequality and $\boldsymbol{\lambda}_{i|s}^{r+1} = \rho(\boldsymbol{x}_s^r - \bar{\boldsymbol{x}}_i^{r,K}) - \boldsymbol{\lambda}_{s|i}^r$. Step $(c)$ employs Jensen's inequality again. The results in Theorem~\ref{theorem:sublinear} follow directly using the property that the LHS of (\ref{appendix:theorem_sublinear_proof_1}) decays in the order of $O(1/R)$. The proof is complete. \end{proof}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Predictive analytics has become an increasingly hot topic in higher education.
In particular,
predictive-analytics tools have been used
to predict various measures of student success (e.g., course completion, retention, and degree attainment) by mapping the input set of attributes of individuals (e.g., the student’s high school GPA and demographic features) to their outcomes (e.g., college credits accumulated) \cite{ekowo2016promise}.
Campus officials have used these predictions to guide decisions surrounding college admissions and student-support interventions, such as providing more intensive advising to certain students \cite{ekowo2016promise}.
Despite this potential, there is a critical disconnect between predictive analytics in higher education research and their accessibility in practice. Two major barriers to existing uses of predictive analytics in higher education that cause this disconnect are the lack of democratization in deployment and the potential to exacerbate inequalities.
First, education researchers and policy makers face many challenges in deploying predictive and statistical techniques in practice. These challenges arise at different steps of modeling, including data cleaning (e.g., imputation), identifying the most important attributes associated with success, selecting the correct predictive modeling technique, and calibrating the hyperparameters of the selected model.
Nevertheless, each of these steps can introduce additional bias to the system if not appropriately performed \cite{barocas2016big}. Missing values are a frequent latent cause behind many data analysis challenges. Most large-scale and nationally representative education data sets suffer from a significant number of incomplete responses from the research participants. While many education-related studies have addressed the challenges of missing data \cite{missing-review1,Missing3-MI,dataprep3}, little is known about the impact of handling missing values on the fairness of predictive outcomes in practice.
To date, only a few works have studied the impact of data preparation on the unfairness of predictive outcomes, either in a limited setting \cite{valentim2019impact} or using merely a single notion of fairness \cite{missing2021}.
Second, predictive models rely on historical data and have the potential to exacerbate social inequalities \cite{ekowo2016promise,kizilcec2020algorithmic}. Over the last decade, researchers have realized that disregarding the consequences, and especially the societal impact, of algorithmic decision making might negatively affect individuals' lives. COMPAS, a criminal justice support tool, was found to be decidedly biased against Black people \cite{propublica}.
Colleges and universities have been using risk algorithms to evaluate their students. Recently, The Markup investigated four major public universities and found that EAB's Navigate software is racially biased \cite{markup}.
Mitigating such biases, however, is complex, and it requires that education researchers and practitioners undergo a comprehensive algorithm audit to ensure the technical correctness and social accountability of their algorithms.
It is imperative that predictive models are designed with careful attention to their potential social consequences. A wave of fair decision-making algorithms, and more particularly fair machine learning models for prediction, has been proposed in recent years \cite{Fair-accurate-education,AIunfairness-education}. Nevertheless, most of the proposed research either deals with inequality in the pre-processing or post-processing steps, or considers a model-based in-processing approach. To take any of the aforementioned routes for bias mitigation, it is critical to first audit the unfairness of the predictive algorithm's outcome and identify the most severe unfairness issues to address.
Following these concerns, fairness audits of algorithmic decision
systems have been pioneered in a variety of fields \cite{kondmann2021under,kearns2018preventing}. The auditing process for detecting model unfairness provides a comprehensive guideline for education researchers and officials to evaluate the inequalities of predictive modeling algorithms from different perspectives before deploying them in practice. To the best of our knowledge, no work in ML for higher education has transparently audited ML performance and unfairness using a real dataset.
In this paper, we first study whether predictive modeling techniques for student success show inequalities for or against a sample of marginalized communities. We use a real, national-level education dataset to analyze the case of discrimination. We consider a wide range of machine learning models for student-success prediction. Then, we audit whether prediction outcomes discriminate against certain subgroups, considering different notions of fairness to identify potential bias in predictions. Furthermore, we investigate the impact of imputing missing values using various techniques on model performance and fairness, providing key insights for educational practitioners toward a responsible ML pipeline.
This study has the potential to significantly impact the practice of data-driven decision-making in higher education by investigating the impact of a critical pre-processing step on predictive inequalities.
In particular, we examine how different imputation techniques compare to one another, and to what extent they impact the performance and fairness of student-success prediction outcomes.
We predict the most common proxy attribute of student success, \emph{graduation completion}, and examine equal treatment of different demographic groups through different notions of fairness. The comprehensive study of the real, large-scale ELS:2002 and IPEDS datasets allows us to validate the performance of different ML techniques for predictive analytics in higher education in a realistic setting.
To the best of our knowledge, none of the existing fair machine learning (ML) studies have examined existing large-scale datasets for student-success modeling. Most of the extant applications of fair ML demonstrate results using small datasets with a limited number of attributes (e.g., \cite{kleinberg2018algorithmic,yu2020towards}) or in a specific context, such as law school admission \cite{kusner2017counterfactual,cole2019avoiding}.
\section{Experiments}
\input{plots}
\stitle{Data Preparation.} As previously stated, we use the ELS dataset in this study to audit the fairness of ML models in the development pipeline for predicting student success.
The ELS dataset includes many categorical variables. Therefore, we begin by creating appropriate labeling and converting categorical attributes to numeric ones (dummy variables) following the NCES dataset documentation\footnote{\url{https://nces.ed.gov/surveys/els2002/avail_data.asp}}.
Next, we perform a transformation on the considered response variable, \emph{highest level of degree}, to construct a binary classification problem. That is, we label students with a college degree (BS degree and higher) as the favorable outcome (label$=$1), and others as the unfavorable outcome (label$=$0). Data cleaning is then performed to identify and relabel the missing values (based on the documentation) and to remove the observations that have many missing attributes ($>75\%$ of the attributes are missing).
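For illustration, a minimal pandas sketch of the labeling and cleaning steps is given below. The file name, the attainment threshold code, and the missing-value codes are hypothetical placeholders for the actual ELS:2002 coding:
\begin{verbatim}
import pandas as pd

df = pd.read_csv("els_2002.csv")  # assumed local extract
# Favorable outcome: BS degree or higher (threshold code assumed).
df["Y"] = (df["highest_degree"] >= 6).astype(int)
# Map documented missing-value codes to NaN (codes assumed).
df = df.replace({-4: None, -8: None, -9: None})
# Drop observations with more than 75% of attributes missing.
df = df[df.isna().mean(axis=1) <= 0.75]
\end{verbatim}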
The final, and significantly important, task is to handle the remaining missing values in the dataset.
We consider different imputation techniques: Simple Imputation ({\bf SI}), Multiple Imputation ({\bf MI}), and KNN Imputation ({\bf KNN-I}). We also consider a baseline where we remove observations with missing attributes, referred to as {\bf Remove NA}. For {\bf KNN-I}, we consider an additional scenario in which we do not impute the response variable; instead, we remove observations with missing $Y$ and apply {\bf KNN-I} only to the set of attributes $X$.
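A minimal scikit-learn sketch of the three imputers is shown below; it is our own illustrative code, and settings such as the mean strategy for {\bf SI} and the number of neighbours for {\bf KNN-I} are assumptions. Note that \texttt{IterativeImputer} is one common way to realise chained-equation (MICE-style) imputation; the exact {\bf MI} implementation may differ:
\begin{verbatim}
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer

# Toy attribute matrix with missing entries encoded as np.nan.
X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan]])

X_si  = SimpleImputer(strategy="mean").fit_transform(X)    # SI
X_mi  = IterativeImputer(random_state=0).fit_transform(X)  # MI
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)         # KNN-I
\end{verbatim}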
\stitle{Model Training.} The model training procedure follows the data preparation step, once clean-format datasets are obtained. We aim to analyze the performance of different machine learning (ML) models under each imputation technique and to audit the inequalities in the prediction outcome. We consider the Decision Tree ({\bf DT}), Random Forest ({\bf RF}), Support Vector Classifier ({\bf SVC}), Linear Discriminant Analysis ({\bf LDA}), Logistic Regression ({\bf Log}), and K-Nearest Neighbor ({\bf KNN}) ML models.
For each model, we perform hyperparameter tuning to find the best configuration under each imputed dataset. For example, Table~\ref{tab:hyperparam} reports the best hyperparameters obtained for each model when fitting on the KNN-imputed dataset.
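For illustration, a hedged scikit-learn sketch of this tuning step for the decision tree is given below; the candidate grids, the cross-validation setting, and the synthetic data are placeholders rather than the exact configuration used in our experiments:
\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for an imputed training split.
X_train, y_train = make_classification(n_samples=300, random_state=0)
grid = {"max_depth": [4, 6, 8, 10],
        "min_samples_split": [10, 30, 50],
        "max_features": [10, 20, None]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print(search.best_params_)  # e.g. max_depth=6, min_samples_split=30
\end{verbatim}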
\vspace{-1mm}
\begin{table}[!tb]
\centering
\tiny
\caption{Hyperparameter settings of different ML Models}
\vspace{-2mm}
\begin{tabular}{rrll}
\multicolumn{1}{l}{Model} & \multicolumn{3}{l}{Hyperparameters} \\
\midrule
\multicolumn{1}{l}{\textbf{Decision Tree Classifier}:} & \multicolumn{1}{l}{maxdepth=6} & minsplit=30 & maxfeatures=30 \\
\multicolumn{1}{l}{\textbf{Random Forest Classifier}:} & \multicolumn{1}{l}{maxdepth=10} & minsplit=10 & maxfeatures=10 \\
\multicolumn{1}{l}{\textbf{Support Vector Classifier}:}&
\multicolumn{1}{l}{$C=0.5$} & kernel=Linear \\
\multicolumn{1}{l}{\textbf{LDA Classifier}:} & & Shrinkage=None & Solver=svd \\
\multicolumn{1}{l}{\textbf{Logit Classifier}:} & \multicolumn{1}{l}{$C=1$} & penalty=$l_1$ & Solver=liblinear\\
\multicolumn{1}{l}{\textbf{K-nearest neighbor Classifier}:} & \multicolumn{1}{l}{K=34} & weight=distance \\
\end{tabular}%
\vspace{-7mm}
\label{tab:hyperparam}%
\end{table}%
\stitle{Results.}
In this section, we summarize our findings in three key discussions. First, we compare the performance of the considered ML models on the different imputed datasets. Then, we analyze the impact of imputation on the unfairness gaps among protected racial groups. Lastly, we compare the correlation results before and after imputation to identify the sources of the differences in fairness performance.
One noticeable fact is that both testing and training accuracy increase after imputation across nearly all models (a detailed report on accuracy is provided in the appendix). In fact, models fitted on the imputed datasets have higher generalization power.
{\bf KNN}, however, is the exception: it has the largest performance gap before and after imputation (about an 8\% decrease in testing and a 20\% decrease in training accuracy), whereas for the other models the accuracy levels (training and testing) increase by 2\%--6\% on average.
Figures \ref{exp:bars_sp}--\ref{exp:bars_ae} represent the unfairness of the different ML models under the different imputation techniques. Each group of bars corresponds to one racial subgroup. The results for \emph{gender} as the sensitive attribute are provided in the appendix.
As shown in Table~\ref{tab:notions}, Statistical Parity ({\bf SP}) is a fairness metric that compares the positive (or success) prediction outcome ($\hat{Y}=1$) across students from different demographic groups without considering their true label $Y$ (real outcome). Based on Figure~\ref{exp:bars_sp}, we can observe that statistical parity is model independent, and observable changes only occur under different imputation techniques. However, the other fairness metrics are both model and imputation dependent, as the unfairness gaps are considerably different from one model to the other across imputation scenarios. Note that SP increases with imputation for the majority of subgroups; the \emph{More than one race} and \emph{White} subgroups are exceptions.
Predictive Parity ({\bf PP}) considers the students who are predicted to be successful given their racial subgroup, and measures whether they are correctly classified. The {\bf PP} unfairness gaps increase after imputation in the majority of cases; however, the effect also depends on the model type. For example, the {\bf KNN} classifier tends to perform worst across all racial groups after imputation. The {\bf PP} gaps also increase for Black students after imputation with most of the models.
Predictive Equality ({\bf PE}) focuses mainly on unsuccessful students who are incorrectly predicted as successful given their racial subgroup. This type of unfairness could lead to considerably unfavorable results in higher education, where policymakers fail to identify those in need. Figure~\ref{exp:bars_pe} shows that imputation mainly decreases the {\bf PE} gaps across racial subgroups, although some models are exceptions. For instance, the {\bf Log} and {\bf SVC} classifiers increase the unfairness for Asian students after imputation, which leads to unfavorable prediction outcomes for Asian students who are less likely to succeed.
Equal Opportunity ({\bf EoP}) also emphasizes the positive prediction outcome and measures how well the model correctly classifies successful students given their racial subgroup. Based on Figure~\ref{exp:bars_eop}, we can observe that imputation decreases the unfairness gaps for all racial groups except the Black subgroup. In fact, as a marginalized group, Black students may encounter even more discrimination: successful Black students are less likely to be predicted as such after imputation.
Equalized Odds ({\bf EO}) measures the true positive and false positive rates among different racial subgroups. It emphasizes the positive prediction outcome $\hat{y}=1$, which is \emph{$>$BS degree attainment}. Figure~\ref{exp:bars_eo} shows that imputation drastically decreases the unfairness gaps across the different racial groups. That is, the models tend to predict equal positive outcomes across the different racial subgroups.
Accuracy Equality ({\bf AE}) measures the overall prediction accuracy for students given their racial subgroup. Figure~\ref{exp:bars_ae} shows that imputation drastically decreases the {\bf AE} gaps across the different racial groups. That is, the models tend to be more equally accurate across racial subgroups after imputation. In contrast, the models perform in a discriminatory manner for each racial group when we remove all observations with missing values.
Note that the fairness metrics are model dependent. For example, comparing {\bf KNN} with {\bf DT} and {\bf RF} under the notion of predictive parity ({\bf PP}), we can observe that the {\bf KNN} gaps mostly increase after imputation, unlike the others, which mostly achieve lower unfairness. A comprehensive analysis and plots are provided in the appendix.
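For concreteness, the per-group gaps underlying these figures can be computed as in the following illustrative sketch (our own code). Here each subgroup is compared against the rest of the sample; the reference-group convention in the figures may differ, and the {\bf EO} gap can be formed by combining the TPR and FPR gaps:
\begin{verbatim}
import numpy as np

def rate(y, yhat, metric):
    if metric == "SP":  return yhat.mean()            # P(Yhat=1)
    if metric == "PP":  return y[yhat == 1].mean()    # P(Y=1 | Yhat=1)
    if metric == "PE":  return yhat[y == 0].mean()    # false positive rate
    if metric == "EOp": return yhat[y == 1].mean()    # true positive rate
    if metric == "AE":  return (y == yhat).mean()     # accuracy

def gap(y, yhat, g, metric):
    """Absolute gap between subgroup g and the rest of the sample."""
    return abs(rate(y[g], yhat[g], metric) -
               rate(y[~g], yhat[~g], metric))

y    = np.array([1, 0, 1, 1, 0, 0])   # true outcomes
yhat = np.array([1, 0, 0, 1, 1, 0])   # predicted outcomes
g    = np.array([True, True, True, False, False, False])
print(gap(y, yhat, g, "SP"), gap(y, yhat, g, "EOp"))
\end{verbatim}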
Figures~\ref{corr:remove1} to~\ref{corr:remove2} demonstrate heatmaps of the correlations between attributes. We considered two subsets of unprotected attributes: school-related attributes and grade/credit-based attributes. The plots indicate the impact of imputation on the correlations of both sensitive and unprotected attributes. Correlations enable us to identify changes in the distribution of the unprotected attributes, and how their correlations with sensitive attributes (which indicate behavioral bias) change and affect the unfairness of the model.
For example, comparing Figure~\ref{corr:remove1} with Figure~\ref{corr:knn1}, the correlation between the college entrance exam and White increases after {\bf KNN-I}, which can amplify the unfairness of the outcome accordingly. Moreover, considering the second subset of attributes and comparing Figure~\ref{corr:remove2} with Figure~\ref{corr:knn2}, we can observe that the correlation between all credits taken and parent education (which is highly correlated with White) increases after {\bf KNN-I}. Similarly, the correlations of most of the other score- and GPA-related attributes with White increase after {\bf KNN-I}. The reason is that White students form the majority group in the data, and imputation is biased towards the majority; this can cause inequalities in the predictive outcome.
While increasing the correlation further exacerbates the bias in prediction, decreasing the correlation can induce an unfairness reduction. In fact, this is the main reason behind the change in unfairness across the different imputation techniques and ML models. As a result, we observe that for some fairness metrics (e.g., {\bf PP}) the gaps are enlarged after imputation, while others (e.g., {\bf EO}) benefit from imputation, as it enables the models to decrease the inequalities in prediction. The correlation heatmaps for the other imputation techniques and further discussion are provided in the appendix.
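The correlation heatmaps themselves can be reproduced with a few lines of pandas and seaborn, as sketched below; the attribute names and values are hypothetical toy data:
\begin{verbatim}
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.DataFrame({"white": [1, 0, 1, 0, 1],
                   "entrance_exam": [1250, 980, 1100, 940, 1210],
                   "credits_taken": [32, 14, 28, 12, 30]})
sns.heatmap(df.corr(), annot=True, vmin=-1, vmax=1, cmap="coolwarm")
plt.tight_layout()
plt.show()
\end{verbatim}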
\subsection{Correlation Analysis}
\vspace{-8mm}
\begin{table*}[!h]
\centering
\tiny
\caption{Performance and unfairness measurements for different models and imputation techniques}
\vspace{-5mm}
\begin{tabular}{|c|c|c|l|c|c|c|c|c|c|c|}
\cmidrule{5-11} \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & & \multicolumn{5}{c|}{\textbf{S=Race }} & \multicolumn{2}{c|}{\textbf{S=Gender }} \\
\midrule
\textbf{ML Models } & \textbf{Accuracy } & \textbf{Missing values } & \multicolumn{1}{c|}{\textbf{Fairness Metric }} & \textbf{Asian} & \textbf{Black } & \textbf{Hispanic } & \textbf{More than one race } & \textbf{White} & \textbf{Female } & \textbf{Male } \\
\midrule
\multicolumn{1}{|c|}{\multirow{30}[60]{*}{DT }} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.84, Test=0.83 }} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Remove-NA }} & Statistical Parity & 0.08 & 0.069 & 0.064 & 0.4 & 0.168 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Predictive parity & 0.116 & 0.015 & 0.213 & 0.045 & 0.085 & 0.014 & 0.014 \\
\cmidrule{4-11} & & & Predictive Equality & 0.208 & 0.011 & 0.171 & 0.095 & 0.064 & 0.048 & 0.048 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.203 & 0.002 & 0.061 & 0.471 & 0.127 & 0.085 & 0.085 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.205 & 0.015 & 0.031 & 0.233 & 0.086 & 0.079 & 0.079 \\
\cmidrule{4-11} & & & Equalized odds & 0.205 & 0.006 & 0.116 & 0.283 & 0.095 & 0.066 & 0.066 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.87, Test=0.85 }} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I }} & Statistical Parity & 0.136 & 0.14 & 0.068 & 0.041 & 0.09 & 0.043 & 0.043 \\
\cmidrule{4-11} & & & Predictive parity & 0.029 & 0.043 & 0.042 & 0.114 & 0.047 & 0.021 & 0.021 \\
\cmidrule{4-11} & & & Predictive Equality & 0.039 & 0.03 & 0.005 & 0.049 & 0.007 & 0.026 & 0.026 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.025 & 0.022 & 0.004 & 0.052 & 0.012 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.019 & 0.031 & 0.012 & 0.042 & 0.012 & 0.002 & 0.002 \\
\cmidrule{4-11} & & & Equalized odds & 0.032 & 0.026 & 0.004 & 0.05 & 0.01 & 0.029 & 0.029 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.89, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Simple-I}} & Statistical Parity & 0.125 & 0.129 & 0.069 & 0.048 & 0.087 & 0.037 & 0.037 \\
\cmidrule{4-11} & & & Predictive parity & 0.016 & 0.095 & 0.043 & 0.047 & 0.045 & 0.003 & 0.003 \\
\cmidrule{4-11} & & & Predictive Equality & 0.028 & 0.006 & 0.005 & 0.011 & 0.001 & 0.015 & 0.015 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.036 & 0.019 & 0.047 & 0.001 & 0.004 & 0.011 & 0.011 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.01 & 0.014 & 0.011 & 0.004 & 0.014 & 0.020 & 0.020 \\
\cmidrule{4-11} & & & Equalized odds & 0.032 & 0.013 & 0.026 & 0.006 & 0.002 & 0.013 & 0.013 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.90, Test=0.87}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Multiple-I }} & Statistical Parity & 0.145 & 0.143 & 0.085 & 0.036 & 0.092 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Predictive parity & 0.002 & 0.098 & 0.163 & 0.027 & 0.096 & 0.020 & 0.020 \\
\cmidrule{4-11} & & & Predictive Equality & 0.07 & 0.005 & 0.047 & 0.001 & 0.025 & 0.025 & 0.025 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.011 & 0.033 & 0.023 & 0.039 & 0.01 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.035 & 0.006 & 0.029 & 0.02 & 0.009 & 0.013 & 0.013 \\
\cmidrule{4-11} & & & Equalized odds & 0.04 & 0.019 & 0.035 & 0.02 & 0.017 & 0.016 & 0.016 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.86, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I (no NA Y) }} & Statistical Parity & 0.117 & 0.138 & 0.092 & 0.055 & 0.12 & 0.067 & 0.067 \\
\cmidrule{4-11} & & & Predictive parity & 0.004 & 0.022 & 0.087 & 0.035 & 0.051 & 0.038 & 0.038 \\
\cmidrule{4-11} & & & Predictive Equality & 0.071 & 0.024 & 0.022 & 0.01 & 0.005 & 0.050 & 0.050 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.071 & 0.006 & 0.007 & 0.041 & 0.063 & 0.018 & 0.018 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.095 & 0.034 & 0.008 & 0.016 & 0.016 & 0.022 & 0.022 \\
\cmidrule{4-11} & & & Equalized odds & 0.071 & 0.015 & 0.014 & 0.025 & 0.034 & 0.034 & 0.034 \\
\midrule
\midrule
\multicolumn{1}{|c|}{\multirow{30}[60]{*}{RF }} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.88, Test=0.82}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Remove-NA }} & Statistical Parity & 0.08 & 0.069 & 0.064 & 0.4 & 0.168 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Predictive parity & 0.105 & 0.004 & 0.203 & 0.056 & 0.101 & 0.015 & 0.015 \\
\cmidrule{4-11} & & & Predictive Equality & 0.151 & 0.032 & 0.151 & 0.112 & 0.071 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.161 & 0.004 & 0.062 & 0.556 & 0.131 & 0.083 & 0.083 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.155 & 0.022 & 0.025 & 0.43 & 0.092 & 0.063 & 0.063 \\
\cmidrule{4-11} & & & Equalized odds & 0.156 & 0.018 & 0.107 & 0.334 & 0.101 & 0.045 & 0.045 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.92, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I }} & Statistical Parity & 0.136 & 0.14 & 0.068 & 0.041 & 0.09 & 0.043 & 0.043 \\
\cmidrule{4-11} & & & Predictive parity & 0.021 & 0.129 & 0.116 & 0.066 & 0.093 & 0.011 & 0.011 \\
\cmidrule{4-11} & & & Predictive Equality & 0.035 & 0.017 & 0.042 & 0.026 & 0.033 & 0.021 & 0.021 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.013 & 0.038 & 0.052 & 0.01 & 0.029 & 0.02 & 0.02 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.028 & 0.003 & 0.031 & 0.01 & 0.012 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Equalized odds & 0.024 & 0.028 & 0.047 & 0.018 & 0.031 & 0.021 & 0.021 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.91, Test=0.87}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Simple-I}} & Statistical Parity & 0.125 & 0.129 & 0.069 & 0.048 & 0.087 & 0.037 & 0.037 \\
\cmidrule{4-11} & & & Predictive parity & 0.026 & 0.12 & 0.063 & 0.018 & 0.054 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Predictive Equality & 0.018 & 0.017 & 0.015 & 0.004 & 0.007 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.031 & 0.028 & 0.05 & 0.006 & 0.01 & 0.011 & 0.011 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.008 & 0.003 & 0.018 & 0.007 & 0.007 & 0.003 & 0.003 \\
\cmidrule{4-11} & & & Equalized odds & 0.024 & 0.022 & 0.033 & 0.005 & 0.009 & 0.009 & 0.009 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.93, Test=0.88}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Multiple-I }} & Statistical Parity & 0.145 & 0.143 & 0.085 & 0.036 & 0.092 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Predictive parity & 0.018 & 0.062 & 0.07 & 0.071 & 0.073 & 0.005 & 0.005 \\
\cmidrule{4-11} & & & Predictive Equality & 0.074 & 0.004 & 0.017 & 0.032 & 0.025 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.008 & 0.044 & 0.043 & 0.017 & 0.029 & 0.01 & 0.01 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.04 & 0.004 & 0.016 & 0.005 & 0.014 & 0.012 & 0.012 \\
\cmidrule{4-11} & & & Equalized odds & 0.041 & 0.024 & 0.03 & 0.024 & 0.027 & 0.008 & 0.008 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.91, Test=0.88}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I (no NA Y) }} & Statistical Parity & 0.117 & 0.138 & 0.092 & 0.055 & 0.12 & 0.067 & 0.067 \\
\cmidrule{4-11} & & & Predictive parity & 0.018 & 0.043 & 0.116 & 0.042 & 0.057 & 0.01 & 0.01 \\
\cmidrule{4-11} & & & Predictive Equality & 0.031 & 0.01 & 0.04 & 0.015 & 0.013 & 0.026 & 0.026 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.072 & 0.017 & 0.003 & 0.024 & 0.053 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.08 & 0.033 & 0.009 & 0.011 & 0.015 & 0.026 & 0.026 \\
\cmidrule{4-11} & & & Equalized odds & 0.051 & 0.014 & 0.022 & 0.019 & 0.033 & 0.017 & 0.017 \\
\bottomrule
\end{tabular}%
\label{tab:perf-dt-rf}%
\end{table*}%
\vspace{-6mm}
\begin{table*}[htbp]
\centering
\tiny
\begin{tabular}{|c|c|c|l|c|c|c|c|c|c|c|}
\cmidrule{5-11} \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & & \multicolumn{5}{c|}{\textbf{S=Race }} & \multicolumn{2}{c|}{\textbf{S=Gender }} \\
\midrule
\textbf{ML Models } & \textbf{Accuracy } & \textbf{Missing values } & \multicolumn{1}{c|}{\textbf{Fairness Metric }} & \textbf{Asian} & \textbf{Black } & \textbf{Hispanic } & \textbf{More race } & \textbf{White} & \textbf{Female } & \textbf{Male } \\
\cmidrule{1-4}\cmidrule{10-11} \multicolumn{1}{|c|}{\multirow{30}[60]{*}{Logit}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.83, Test=0.80}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Remove-NA }} & Statistical Parity & 0.08 & 0.069 & 0.064 & 0.4 & 0.168 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Predictive parity & 0.009 & 0.035 & 0.089 & 0.086 & 0.024 & 0.047 & 0.047 \\
\cmidrule{4-11} & & & Predictive Equality & 0.022 & 0.024 & 0.173 & 0.167 & 0.133 & 0.032 & 0.032 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.131 & 0.049 & 0.105 & 0.461 & 0.068 & 0.096 & 0.096 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.09 & 0.045 & 0.136 & 0.206 & 0.011 & 0.059 & 0.059 \\
\cmidrule{4-11} & & & Equalized odds & 0.077 & 0.037 & 0.139 & 0.314 & 0.1 & 0.064 & 0.064 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.85, Test=0.85}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I }} & Statistical Parity & 0.136 & 0.14 & 0.068 & 0.041 & 0.09 & 0.043 & 0.043 \\
\cmidrule{4-11} & & & Predictive parity & 0.008 & 0.155 & 0.054 & 0.178 & 0.136 & 0.02 & 0.02 \\
\cmidrule{4-11} & & & Predictive Equality & 0.1 & 0.001 & 0.003 & 0.055 & 0.033 & 0.009 & 0.009 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.03 & 0.043 & 0.039 & 0.114 & 0.021 & 0.014 & 0.014 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.072 & 0 & 0.007 & 0.001 & 0.018 & 0.002 & 0.002 \\
\cmidrule{4-11} & & & Equalized odds & 0.065 & 0.022 & 0.021 & 0.085 & 0.027 & 0.011 & 0.011 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.86, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Simple-I}} & Statistical Parity & 0.125 & 0.129 & 0.069 & 0.048 & 0.087 & 0.037 & 0.037 \\
\cmidrule{4-11} & & & Predictive parity & 0.001 & 0.153 & 0.017 & 0.134 & 0.116 & 0.07 & 0.07 \\
\cmidrule{4-11} & & & Predictive Equality & 0.072 & 0.004 & 0.021 & 0.029 & 0.02 & 0.021 & 0.021 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.005 & 0.048 & 0.02 & 0.139 & 0.004 & 0 & 0 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.041 & 0.006 & 0.012 & 0.023 & 0.006 & 0.009 & 0.009 \\
\cmidrule{4-11} & & & Equalized odds & 0.038 & 0.026 & 0.021 & 0.084 & 0.012 & 0.011 & 0.011 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.87, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Multiple-I }} & Statistical Parity & 0.145 & 0.143 & 0.085 & 0.036 & 0.092 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Predictive parity & 0.009 & 0.093 & 0.069 & 0.168 & 0.104 & 0.025 & 0.025 \\
\cmidrule{4-11} & & & Predictive Equality & 0.09 & 0.012 & 0.003 & 0.063 & 0.024 & 0.002 & 0.002 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.001 & 0.034 & 0.039 & 0.103 & 0.006 & 0 & 0 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.048 & 0.009 & 0.009 & 0.004 & 0.007 & 0.002 & 0.002 \\
\cmidrule{4-11} & & & Equalized odds & 0.045 & 0.023 & 0.021 & 0.083 & 0.015 & 0.001 & 0.001 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.86, Test=0.85}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I (no NA Y) }} & Statistical Parity & 0.117 & 0.138 & 0.092 & 0.055 & 0.12 & 0.067 & 0.067 \\
\cmidrule{4-11} & & & Predictive parity & 0.012 & 0.115 & 0.136 & 0.013 & 0.118 & 0.002 & 0.002 \\
\cmidrule{4-11} & & & Predictive Equality & 0.109 & 0.01 & 0.021 & 0.034 & 0.017 & 0.033 & 0.033 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.089 & 0.041 & 0.046 & 0.028 & 0.029 & 0.012 & 0.012 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.104 & 0.028 & 0.007 & 0.035 & 0.014 & 0.027 & 0.027 \\
\cmidrule{4-11} & & & Equalized odds & 0.099 & 0.025 & 0.033 & 0.031 & 0.023 & 0.022 & 0.022 \\
\midrule
\midrule
\multicolumn{1}{|c|}{\multirow{30}[60]{*}{LDA}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.84, Test=0.79}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Remove-NA }} & Statistical Parity & 0.08 & 0.069 & 0.064 & 0.4 & 0.168 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Predictive parity & 0.089 & 0.013 & 0.057 & 0.071 & 0.024 & 0.021 & 0.021 \\
\cmidrule{4-11} & & & Predictive Equality & 0.084 & 0.062 & 0.022 & 0.149 & 0.061 & 0 & 0 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.093 & 0.013 & 0.004 & 0.446 & 0.089 & 0.092 & 0.092 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.083 & 0.017 & 0.003 & 0.2 & 0.025 & 0.071 & 0.071 \\
\cmidrule{4-11} & & & Equalized odds & 0.089 & 0.038 & 0.013 & 0.298 & 0.075 & 0.046 & 0.046 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.85, Test=0.84}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I }} & Statistical Parity & 0.136 & 0.14 & 0.068 & 0.041 & 0.09 & 0.043 & 0.043 \\
\cmidrule{4-11} & & & Predictive parity & 0.011 & 0.189 & 0.036 & 0.171 & 0.143 & 0.034 & 0.034 \\
\cmidrule{4-11} & & & Predictive Equality & 0.095 & 0.02 & 0.01 & 0.061 & 0.043 & 0.004 & 0.004 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.01 & 0.065 & 0.024 & 0.046 & 0.027 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.062 & 0.013 & 0.005 & 0.014 & 0.021 & 0.011 & 0.011 \\
\cmidrule{4-11} & & & Equalized odds & 0.053 & 0.042 & 0.017 & 0.053 & 0.035 & 0.017 & 0.017 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.85, Test=0.85}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Simple-I}} & Statistical Parity & 0.125 & 0.129 & 0.069 & 0.048 & 0.087 & 0.037 & 0.037 \\
\cmidrule{4-11} & & & Predictive parity & 0.01 & 0.147 & 0.055 & 0.21 & 0.133 & 0.057 & 0.057 \\
\cmidrule{4-11} & & & Predictive Equality & 0.081 & 0.005 & 0.002 & 0.065 & 0.033 & 0.017 & 0.017 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.005 & 0.035 & 0.029 & 0.086 & 0.011 & 0.005 & 0.005 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.053 & 0.003 & 0.001 & 0.01 & 0.011 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Equalized odds & 0.043 & 0.02 & 0.015 & 0.075 & 0.022 & 0.011 & 0.011 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.87, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Multiple-I }} & Statistical Parity & 0.145 & 0.143 & 0.085 & 0.036 & 0.092 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Predictive parity & 0 & 0.121 & 0.017 & 0.194 & 0.107 & 0.026 & 0.026 \\
\cmidrule{4-11} & & & Predictive Equality & 0.072 & 0.009 & 0.019 & 0.085 & 0.035 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.009 & 0.063 & 0.023 & 0.058 & 0.017 & 0.007 & 0.007 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.039 & 0.01 & 0.011 & 0.025 & 0.013 & 0.003 & 0.003 \\
\cmidrule{4-11} & & & Equalized odds & 0.04 & 0.036 & 0.021 & 0.072 & 0.026 & 0.007 & 0.007 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.86, Test=0.85}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I (no NA Y) }} & Statistical Parity & 0.117 & 0.138 & 0.092 & 0.055 & 0.12 & 0.067 & 0.067 \\
\cmidrule{4-11} & & & Predictive parity & 0.001 & 0.09 & 0.121 & 0.014 & 0.095 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Predictive Equality & 0.088 & 0.012 & 0.023 & 0.016 & 0.013 & 0.035 & 0.035 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.077 & 0.039 & 0.027 & 0.044 & 0.03 & 0.003 & 0.003 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.095 & 0.036 & 0.006 & 0.034 & 0.007 & 0.022 & 0.022 \\
\cmidrule{4-11} & & & Equalized odds & 0.083 & 0.025 & 0.025 & 0.03 & 0.022 & 0.019 & 0.019 \\
\bottomrule
\end{tabular}%
\caption{Performance and unfairness measurements for the Logit and LDA models under different imputation techniques}
\label{tab:perf-logit-lda}%
\end{table*}%
\begin{table*}[htbp]
\centering
\tiny
\begin{tabular}{|c|c|c|l|c|c|c|c|c|c|c|}
\cmidrule{5-11} \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & & \multicolumn{5}{c|}{\textbf{S=Race }} & \multicolumn{2}{c|}{\textbf{S=Gender }} \\
\midrule
\textbf{ML Models } & \textbf{Accuracy } & \textbf{Missing values } & \multicolumn{1}{c|}{\textbf{Fairness Metric }} & \textbf{Asian} & \textbf{Black } & \textbf{Hispanic } & \textbf{More race } & \textbf{White} & \textbf{Female } & \textbf{Male } \\
\multicolumn{1}{|c|}{\multirow{30}[59]{*}{KNN }} & \multicolumn{1}{c|}{\multirow{6}[11]{*}{Train: 0.76, Test=0.73}} & \multicolumn{1}{c|}{\multirow{6}[11]{*}{Remove-NA }} & Statistical Parity & 0.08 & 0.069 & 0.064 & 0.4 & 0.168 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Predictive parity & 0.054 & 0.138 & 0.208 & 0.051 & 0.079 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Predictive Equality & 0.149 & 0.085 & 0.204 & 0.152 & 0.017 & 0.007 & 0.007 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.183 & 0.075 & 0.043 & 0.379 & 0.095 & 0.005 & 0.005 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.192 & 0.049 & 0.074 & 0.139 & 0.045 & 0.004 & 0.004 \\
\cmidrule{4-11} & & & Equalized odds & 0.166 & 0.08 & 0.123 & 0.265 & 0.056 & 0.006 & 0.006 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 1, Test=0.81}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I }} & Statistical Parity & 0.136 & 0.14 & 0.068 & 0.041 & 0.09 & 0.043 & 0.043 \\
\cmidrule{4-11} & & & Predictive parity & 0.107 & 0.345 & 0.355 & 0.209 & 0.265 & 0.018 & 0.018 \\
\cmidrule{4-11} & & & Predictive Equality & 0.031 & 0.037 & 0.095 & 0.069 & 0.08 & 0.018 & 0.018 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.038 & 0.042 & 0.019 & 0.036 & 0.011 & 0.005 & 0.005 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.053 & 0.001 & 0.06 & 0.049 & 0.035 & 0.017 & 0.017 \\
\cmidrule{4-11} & & & Equalized odds & 0.035 & 0.039 & 0.057 & 0.053 & 0.045 & 0.012 & 0.012 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 1, Test=0.81}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Simple-I}} & Statistical Parity & 0.125 & 0.129 & 0.069 & 0.048 & 0.087 & 0.037 & 0.037 \\
\cmidrule{4-11} & & & Predictive parity & 0.023 & 0.284 & 0.261 & 0.066 & 0.256 & 0.026 & 0.026 \\
\cmidrule{4-11} & & & Predictive Equality & 0.079 & 0.03 & 0.063 & 0.003 & 0.076 & 0.01 & 0.01 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.005 & 0.035 & 0.02 & 0.014 & 0.016 & 0.013 & 0.013 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.054 & 0.007 & 0.034 & 0.001 & 0.031 & 0.017 & 0.017 \\
\cmidrule{4-11} & & & Equalized odds & 0.042 & 0.033 & 0.041 & 0.008 & 0.046 & 0.012 & 0.012 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 1, Test=0.81}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Multiple-I }} & Statistical Parity & 0.145 & 0.143 & 0.085 & 0.036 & 0.092 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Predictive parity & 0.1 & 0.338 & 0.265 & 0.137 & 0.24 & 0.057 & 0.057 \\
\cmidrule{4-11} & & & Predictive Equality & 0.026 & 0.047 & 0.064 & 0.039 & 0.078 & 0.014 & 0.014 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.014 & 0.003 & 0.03 & 0.059 & 0 & 0.007 & 0.007 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.04 & 0.009 & 0.036 & 0.005 & 0.024 & 0.002 & 0.002 \\
\cmidrule{4-11} & & & Equalized odds & 0.02 & 0.025 & 0.047 & 0.049 & 0.039 & 0.011 & 0.011 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 1, Test=0.82}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I (no NA Y) }} & Statistical Parity & 0.117 & 0.138 & 0.092 & 0.055 & 0.12 & 0.067 & 0.067 \\
\cmidrule{4-11} & & & Predictive parity & 0.094 & 0.375 & 0.441 & 0.013 & 0.24 & 0 & 0 \\
\cmidrule{4-11} & & & Predictive Equality & 0.026 & 0.064 & 0.119 & 0.011 & 0.072 & 0.034 & 0.034 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.083 & 0.127 & 0.049 & 0.099 & 0.032 & 0.042 & 0.042 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.08 & 0.004 & 0.066 & 0.038 & 0.037 & 0.003 & 0.003 \\
\cmidrule{4-11} & & & Equalized odds & 0.054 & 0.096 & 0.084 & 0.055 & 0.052 & 0.038 & 0.038 \\
\midrule
\midrule
\multicolumn{1}{|c|}{\multirow{30}[59]{*}{SVC}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.85, Test=0.79}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Remove-NA }} & Statistical Parity & 0.08 & 0.069 & 0.064 & 0.4 & 0.168 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Predictive parity & 0.007 & 0.013 & 0.057 & 0.071 & 0.005 & 0 & 0 \\
\cmidrule{4-11} & & & Predictive Equality & 0.003 & 0.062 & 0.022 & 0.149 & 0.095 & 0.024 & 0.024 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.116 & 0.013 & 0.004 & 0.446 & 0.099 & 0.068 & 0.068 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.083 & 0.017 & 0.003 & 0.2 & 0.025 & 0.059 & 0.059 \\
\cmidrule{4-11} & & & Equalized odds & 0.06 & 0.038 & 0.013 & 0.298 & 0.097 & 0.046 & 0.046 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.86, Test=0.85}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{KNN-I }} & Statistical Parity & 0.136 & 0.14 & 0.068 & 0.041 & 0.09 & 0.043 & 0.043 \\
\cmidrule{4-11} & & & Predictive parity & 0.008 & 0.149 & 0.084 & 0.207 & 0.119 & 0.036 & 0.036 \\
\cmidrule{4-11} & & & Predictive Equality & 0.075 & 0.007 & 0.015 & 0.079 & 0.032 & 0.005 & 0.005 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.015 & 0.045 & 0.036 & 0.037 & 0.027 & 0.015 & 0.015 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.056 & 0.001 & 0.013 & 0.029 & 0.016 & 0.004 & 0.004 \\
\cmidrule{4-11} & & & Equalized odds & 0.045 & 0.026 & 0.025 & 0.058 & 0.03 & 0.01 & 0.01 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.87, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Simple-I}} & Statistical Parity & 0.125 & 0.129 & 0.069 & 0.048 & 0.087 & 0.037 & 0.037 \\
\cmidrule{4-11} & & & Predictive parity & 0.007 & 0.123 & 0.06 & 0.119 & 0.103 & 0.044 & 0.044 \\
\cmidrule{4-11} & & & Predictive Equality & 0.071 & 0.006 & 0.003 & 0.034 & 0.025 & 0.011 & 0.011 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.013 & 0.057 & 0.006 & 0.06 & 0.024 & 0.006 & 0.006 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.053 & 0.005 & 0.005 & 0.006 & 0.012 & 0.002 & 0.002 \\
\cmidrule{4-11} & & & Equalized odds & 0.042 & 0.031 & 0.004 & 0.047 & 0.024 & 0.009 & 0.009 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Train: 0.89, Test=0.87}} & \multicolumn{1}{c|}{\multirow{6}[12]{*}{Multiple-I }} & Statistical Parity & 0.145 & 0.143 & 0.085 & 0.036 & 0.092 & 0.031 & 0.031 \\
\cmidrule{4-11} & & & Predictive parity & 0.039 & 0.055 & 0.091 & 0.105 & 0.077 & 0.017 & 0.017 \\
\cmidrule{4-11} & & & Predictive Equality & 0.019 & 0.014 & 0.024 & 0.042 & 0.021 & 0.001 & 0.001 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.008 & 0.056 & 0.051 & 0.097 & 0.017 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.021 & 0.005 & 0.022 & 0.02 & 0.006 & 0.008 & 0.008 \\
\cmidrule{4-11} & & & Equalized odds & 0.014 & 0.035 & 0.037 & 0.07 & 0.019 & 0.004 & 0.004 \\
\cmidrule{2-11} & \multicolumn{1}{c|}{\multirow{6}[11]{*}{Train: 0.86, Test=0.86}} & \multicolumn{1}{c|}{\multirow{6}[11]{*}{KNN-I (no NA Y) }} & Statistical Parity & 0.117 & 0.138 & 0.092 & 0.055 & 0.12 & 0.067 & 0.067 \\
\cmidrule{4-11} & & & Predictive parity & 0.003 & 0.072 & 0.121 & 0.006 & 0.082 & 0.016 & 0.016 \\
\cmidrule{4-11} & & & Predictive Equality & 0.083 & 0.012 & 0.03 & 0.018 & 0.013 & 0.038 & 0.038 \\
\cmidrule{4-11} & & & Equal Opportunity & 0.081 & 0.043 & 0.009 & 0.046 & 0.036 & 0.011 & 0.011 \\
\cmidrule{4-11} & & & Accuracy Equality & 0.098 & 0.04 & 0.002 & 0.037 & 0.009 & 0.032 & 0.032 \\
\cmidrule{4-11} & & & Equalized odds & 0.082 & 0.028 & 0.02 & 0.032 & 0.025 & 0.024 & 0.024 \\
\bottomrule
\end{tabular}%
\caption{Performance and unfairness measurements for the KNN and SVC models under different imputation techniques}
\label{tab:perf-knn-svc}%
\end{table*}%
\section{Fairness Audits}
Notwithstanding the awareness of biases and unfairness in machine learning, the actual challenges of ML practitioners have been discussed in only a few previous studies \cite{veale2018fairness,holstein2019improving}, with a focus on specific contexts, such as predictive policing \cite{propublica} and
child mistreatment detection \cite{chouldechova2018case}.
ML practitioners often struggle to apply existing auditing and de-biasing methods in their contexts \cite{holstein2019improving}. The concept of auditing algorithms and ethics-based auditing in various contexts has lately gained considerable traction \cite{mokander2021ethics,raji2020closing,wilson2021building,kondmann2021under}. The final goal of the fairness auditing process is to determine whether the ML model's results are fair. As a result, the auditing process aids in determining the appropriate actions to take, the best bias mitigation method to employ, and the most suitable technique to use throughout the ML development pipeline \cite{raji2020closing}.
A designated user of predictive modeling in higher education needs support to audit the ML model performance and inequalities before adopting and deploying it in practice.
To support education practitioners and policymakers in assessing the inequalities of the predictive outcome, in this paper we audit the unfairness of ML models for student success prediction using the major notions of fairness listed in Table \ref{tab:notions}. We also audit unfairness to ensure an ethical pre-processing approach, and conduct a comprehensive analysis of the performance of different ML models and their inequalities across racial and gender subgroups throughout the data preparation (imputation) and model training steps using the ELS dataset.
\section{Bias in Education}
\emph{``Bias in, bias out''.}
The first step toward auditing and addressing disparity in student success prediction is to understand and identify the different sources of bias in the dataset. Most social data, including education data, is almost always biased, as it inherently reflects historical biases and stereotypes \cite{olteanu2019social}. Data collection and representation methods often introduce additional bias, and disregarding the societal impact of modeling with biased data further exacerbates the discrimination in the predictive outcome. The term bias refers to demographic disparities in the sampled data that compromise its representativeness \cite{olteanu2019social,fairmlbook}. \emph{Population bias} in the data prevents a model from being accurate for minorities \cite{asudeh2019assessing}. Figure~\ref{fig:pie} shows the racial population bias in the ELS dataset based on our preliminary analysis.
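Such representation bias can be surfaced directly from the raw data; a minimal check in Python (column name hypothetical):
\begin{verbatim}
# df: ELS-style dataframe loaded with pandas
share = df["race"].value_counts(normalize=True)
print(share)  # a ~0.69 share for White signals population bias
\end{verbatim}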
\begin{figure}
\centering
\vspace{-5mm}
\includegraphics[width=0.7\linewidth]{figs/pie_Race.pdf}
\vspace{-2mm}\caption{ELS population bias}
\label{fig:pie}
\vspace{-6mm}
\end{figure}
On the other hand, bias in the value distribution of attributes across different demographic groups is referred to as \emph{behavioral bias}. It stems from the high correlation of sensitive attributes with other attributes in the dataset, and biased values in the data directly yield bias in the algorithmic outcomes. In the ELS dataset, White students represent the majority, with about 69\% of the observations (Figure \ref{fig:pie}).
We can observe a behavioral bias in the highest degree earned by students across the top 4 representative racial groups, as shown in Figure \ref{fig:attain} (a).
It indicates that final degree attainment below the bachelor's degree is highly frequent in the Black and Hispanic communities. Figure \ref{fig:attain} (b) reveals that below-bachelor's degree attainment is most frequent among students from middle-class and low-income families (excluding social class-degree attainment groups with a frequency of less than 1\%). Using degree attainment as a student-success indicator therefore requires careful consideration and efforts to mitigate the effects of both population and behavioral bias.
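The per-group attainment distributions behind such plots can be tabulated with a short sketch (column names hypothetical):
\begin{verbatim}
import pandas as pd

# df: ELS-style dataframe; "race" and "attainment" are hypothetical
# names for the racial group and the highest degree earned
dist = pd.crosstab(df["race"], df["attainment"], normalize="index")
print(dist)  # rows: racial groups, columns: degree levels
# rows whose mass concentrates below "Bachelor" exhibit the
# behavioral bias discussed above
\end{verbatim}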
\begin{figure}[!tb]
\centering
\subfigure[Degree Attainment
]{\label{exp:attain-race}\includegraphics[width=0.41\linewidth]{figs/Race-attain.pdf}}
\subfigure[Attainment with Family Income ]{\label{exp:attain-income}\includegraphics[width=0.57\linewidth]{figs/pie_new.pdf}}
\vspace{-5mm}
\caption{ELS behavioral bias example}
\label{fig:attain}
\vspace{-6mm}
\end{figure}
Transitioning toward a more comprehensive and precisely specified model of disparities among population groups yields a greater understanding of their key drivers.
The key causes behind disparities among population groups that have been identified in previous education research include, but are not limited to, social class \cite{disparity-racial,disparity-social}, race and ethnicity \cite{disparity-racial}, gender \cite{disparity-structure,disparity-gender2,voyer2014gender}, household characteristics (e.g.\ education of adults) \cite{disparity-structure}, community characteristics (e.g.\ presence of schools) \cite{disparity-structure}, and socioeconomic status \cite{disparity-structure,disparity-socioeconomic}.
In this paper, our goal is to investigate and identify different factors causing and sustaining these biases using data science techniques to better specify relationships within and between demographic, socioeconomic, pre-college academic preparation, grades, expenditures, extra activities, and school climate, and their impacts on disparities in educational outcome and success. Bias detection sheds light on the choice of a proper imputation technique and helps control the adverse impact of imputation on the performance and fairness of the model later. To this end, we audit the unfairness of the predictive outcome before and after imputation using different techniques, and demonstrate how the correlations of the variables that are highly associated with the sensitive attributes (potential sources of behavioral bias) vary and affect the unfairness.
By examining bias in existing data and identifying the key characteristics of vulnerable populations, this paper illuminates how predictive models can produce discriminatory results if the bias is not addressed, and how we need to resolve the predictive outcome disparity.
To identify potential sources of behavioral bias, Figure \ref{fig:box} (a) and (b) illustrate racial disparities with respect to total credits and math/reading test scores, respectively. More specifically, Figure \ref{fig:box} (a) shows that the Black, Hispanic, and American Indian groups have lower median earned credits, with their first and second quartiles (50\% of observations) plotted at lower values compared to others. Similarly, Figure \ref{fig:box} (b) indicates that the student standardized combined math/reading test score has a lower median for the Black, Hispanic, and American Indian groups. In addition, the size of each boxplot (from lower quartile to upper quartile) provides insight into the distribution of each group. For example, in Figure \ref{fig:box} (a), the Hispanic subgroup has a large box plot, meaning that these students have very different outcomes in terms of total earned credits, from very low to high values. However, the box plot for the White group of students indicates more similar credit outcomes, mainly distributed around the median value. More disparity-detection plots are provided in the Appendix.
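A minimal sketch of how such disparity boxplots can be produced (column and group names are hypothetical):
\begin{verbatim}
import matplotlib.pyplot as plt

groups = ["White", "Black", "Hispanic", "Asian", "Amer. Indian"]
data = [df.loc[df["race"] == g, "credits_total"].dropna()
        for g in groups]
plt.boxplot(data)
plt.xticks(range(1, len(groups) + 1), groups)
plt.ylabel("Total earned credits")
plt.show()
\end{verbatim}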
\begin{figure}[!tb]
\centering
\subfigure[Total earned credits]{\label{exp:box-credits}\includegraphics[width=0.44\linewidth]{figs/bx_1.pdf}}
\subfigure[Std.\ math/reading score]{\label{exp:box-scores}\includegraphics[width=0.54\linewidth]{figs/bx_4.pdf}}
\vspace{-5mm}
\caption{Boxplots of two unprotected attributes $\not\!\perp\!\!\!\perp$ race}
\label{fig:box}
\vspace{-4mm}
\end{figure}
\section{Dataset}
We select a subset of influential attributes including students' demographic information, socio-economic status, family and environmental factors, grades and achievements, and extra activities. The full list of variables, together with their missing-value percentages, is given in Table \ref{tab:Missings}.
\section{Background}
\vspace{-2mm}
\subsection{Fairness In Predictive Modeling}\label{sec:fair-predictive}
\emph{Fairness-aware learning} has received considerable attention in the machine learning literature (fairness in ML) \cite{fairML3,fairML4-zafar}. More specifically, fairness in ML seeks to develop methodologies such that the predicted outcome becomes fair or non-discriminatory for individuals based on their protected attributes such as race and sex.
The goal of improving fairness in learning problems can be achieved by intervention at pre-processing, in-processing (algorithms), or post-processing strategies.
Pre-processing strategies involve the fairness measure in the data preparation step to mitigate the potential bias in the input data and produce fair outcomes~\cite{feldman2015certifying,kamiran2012data,calmon2017optimized}.
In-process approaches~\cite{zafar2015fairness, zhang2021omnifair, anahideh2020fair} incorporate fairness in the design of the algorithm to generate a fair outcome. Post-process methods~\cite{pleiss2017fairness,feldman2015certifying,zehlike2017fa}, manipulate the outcome of the algorithm to mitigate the unfairness of the outcome for the decision making process.
There are various definitions of fairness in the literature~\cite{vzliobaite2017measuring,fairmlbook,narayanan2018translation,barocas2016big,dwork2012fairness}.
The fairness definitions fall into different categories including
Statistical Parity~\cite{hardt2016equality},
Equalized Odds~\cite{hardt2016equality}, Predictive Parity \cite{chouldechova2017fair}, Predictive Equality \cite{corbett2017algorithmic}, Equal Opportunity \cite{madras2019fairness}, and Accuracy Equality \cite{berk2021fairness}. Table \ref{tab:notions} presents the mathematical definition of each of these common metrics \cite{makhlouf2021applicability}. Let $S=\{0,1\}$ be a binary sensitive attribute and, in a binary classification setting, let $Y=\{0,1\}$ be the true label and $\hat{Y}=\{0,1\}$ the predicted class label. Most of the fairness notions are derived from conditional probabilities involving these variables to reveal the inequalities of the predictive model.
Evaluating the fairness of algorithmic predictions requires a notion
of fairness, which can be difficult to choose in practice. Different metrics have been leveraged regarding different contexts, business necessities, and regulations.
A predictive modeling outcome might have inequalities under one notion of fairness and might not have any under others.
\begin{table}[!tb]
\begin{adjustbox}{width=1\columnwidth}
\centering
\begin{tabular}{|c|c|}
\toprule
\textbf{Fairness Notion } & \textbf{Formulation } \\
\midrule
Statistical Parity ({\bf SP}) & $|P(\hat{Y}=1|S=1)-P(\hat{Y}=1|S=0)|$ \\
\midrule
Equalized Odds ({\bf EO}) & $|P(\hat{Y}=1|Y=y,S=1)-P(\hat{Y}=1|Y=y,S=0)|, \forall {y \in \{0,1\}}$ \\
\midrule
Equal Opportunity ({\bf EoP})& $|P(\hat{Y}=1|Y=1,S=1)-P(\hat{Y}=1|Y=1,S=0)|$
\\
\midrule
Predictive Equality ({\bf PE})& $|P(\hat{Y}=1|Y=0,S=1)-P(\hat{Y}=1|Y=0,S=0)|$
\\
\midrule
Predictive Parity ({\bf PP})& $|P(Y=1|\hat{Y}=1,S=1)-P(Y=1|\hat{Y}=1,S=0)|$ \\
\midrule
Accuracy Equality ({\bf AE}) & $|P(\hat{Y}=Y|S=1)-P(\hat{Y}=Y|S=0)|$ \\
\bottomrule
\end{tabular}%
\end{adjustbox}
\vspace{-2mm}
\caption{Common Fairness Definitions}
\label{tab:notions}%
\vspace{-6mm}
\end{table}%
In the education context, to give some examples, a) demographic (statistical) parity refers to the discrepancy of the predicted highest level of degree (success) across different demographic groups of students, and b) equal opportunity refers to the same discrepancy restricted to students whose true outcome is success ($Y=1$). In this paper, we use a binary classification setting but across multilevel racial and gender population subgroups ($S$ is not necessarily binary).
We extend the fairness metrics described in Table \ref{tab:notions} to non-binary sensitive attributes by considering a \emph{one-versus-rest} approach for unfairness calculation. More specifically, to calculate the unfairness gaps, we consider each subgroup as $S=1$ and compare it against the rest, $S=0$ (i.e.\ all other subgroups), one at a time. In this paper, we mainly focus on racial and gender disparities; however, our proposed approach for auditing fairness and investigating the imputation impact can be extended to other sensitive attributes. For example, the decision maker can use Marital Status as a sensitive attribute.
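As a minimal illustration (not our full audit code), the one-versus-rest gaps for two of the notions of Table \ref{tab:notions} can be computed as follows:
\begin{verbatim}
import numpy as np

def sp_gap(y_hat, s, group):
    # one-vs-rest Statistical Parity gap for one subgroup
    mask = (s == group)
    return abs(y_hat[mask].mean() - y_hat[~mask].mean())

def eop_gap(y, y_hat, s, group):
    # one-vs-rest Equal Opportunity gap: TPR difference
    mask, pos = (s == group), (y == 1)
    return abs(y_hat[mask & pos].mean() - y_hat[~mask & pos].mean())

# y, y_hat: true and predicted binary labels (numpy arrays),
# s: subgroup labels; gaps for all subgroups:
# {g: sp_gap(y_hat, s, g) for g in np.unique(s)}
\end{verbatim}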
\subsection{Missing Values and Imputation}\label{sec:data-preprocessign}
\emph{Missing Values} are the frequent latent causes behind many data analysis challenges, from modeling to prediction, and from accuracy to fairness for protected (sensitive) subgroups. Therefore, \emph{handling Missing values} is a complicated problem that requires careful consideration in education research \cite{missing-review1,Missing3-MI}. In this regard, different imputation techniques have been proposed in the literature and the effectiveness of each methodology on various applications has been studied. \emph{Mean Imputation}, \emph{Multiple Imputation}, and other clustering-based imputation strategies such as \emph{KNN-imputation}, are among the well-known techniques in handling missing values which we will briefly describe here.
Most large-scale and nationally representative education data sets (e.g., ELS) suffer from a significant number of incomplete responses from the research participants. While features that contain more than 75\% missing or unknown values are usually not informative, most features suffer from less than 25\% missing values and are worth keeping. Removing all observations with missing values induces significant information loss in success prediction.
\textbf{Simple Imputation} is one of the most basic imputation strategies. The process involves replacing the missing value of an observation with the mean (or median) of the observations with available values for the same variable. The mean imputation method is known to decrease the standard error of the mean, which exacerbates the risk of failing to capture the reality through statistical tests
\cite{missing-book,missing-review1}.
\textbf{Multiple Imputation (MI) \cite{rubin1996multiple}} is a more advanced imputation strategy that aims to estimate the natural variation in the data by performing several missing-data imputations. In fact, MI produces sets of estimates from various imputed datasets and combines them into a single set of estimates by averaging across the imputations. The standard errors of parameter estimates produced with this
method have been shown to be unbiased \cite{rubin1996multiple}.
\textbf{KNN Imputation} is a non-parametric imputation strategy which has been shown to be successful in different contexts \cite{missing-knn}. The KNN imputer replaces each sample's missing values with the mean value from the $K$ nearest neighbors found in the dataset; two samples are considered close neighbors if the features that neither is missing are close. KNN imputation is able to capture structure in the dataset even when the underlying data distribution is unknown \cite{somasundaram2011evaluation}. To the best of our knowledge, KNN imputation has not been used in the education context.
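For concreteness, the three strategies can be instantiated with scikit-learn as sketched below; note that \texttt{IterativeImputer} only approximates a single round of multiple imputation, whereas a full MI analysis would pool estimates over several imputed datasets:
\begin{verbatim}
from sklearn.experimental import enable_iterative_imputer  # noqa
from sklearn.impute import (SimpleImputer, KNNImputer,
                            IterativeImputer)

imputers = {
    "Simple-I":   SimpleImputer(strategy="mean"),
    "Multiple-I": IterativeImputer(sample_posterior=True,
                                   random_state=0),
    "KNN-I":      KNNImputer(n_neighbors=5),
}
# X: numeric feature matrix with missing entries
X_imputed = {name: imp.fit_transform(X)
             for name, imp in imputers.items()}
\end{verbatim}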
Overall, ignoring missing data is not an effective way of handling missing values and, more importantly, can result in predictive disparity for minorities. While many education-related studies have addressed the challenges of missing data, as discussed,
little is known about the impact of applying different imputation techniques on the fairness of the final model. In this work, we address this gap considering the three above-mentioned imputation strategies. To the best of our knowledge, none of the prior works in the education domain has considered fairness while imputing missing values in the pre-processing step.
\section{Success Prediction}
Before moving to the ML pipeline, we first discuss the prediction problem of interest. In this paper, we specifically focus on predicting the academic success of students in higher education. Student-success prediction is critical for institution performance evaluation, college admission, intervention policy design, and many more use cases in higher education \cite{yu2020towards,stephan2015will}.
Quantifying student success is a very complex topic since the true quality of any candidate is hidden and there is limited information available. There are proxy attributes such as first-year GPA or graduation completion that are typically used as the measure of success.
In this work, we are primarily interested in studying the prediction of highest level of degree (classification problem) using ELS:2002 dataset.
Numerous factors can affect student success~\cite{voyer2014gender}. Thus, identifying the most informative and significant subset of potential variables is a critical task in predictive modeling \cite{FSstudy}. To select a proper subset of attributes, we conducted a thorough literature search and combined it with domain-expert knowledge.
These factors include, but are not limited to, academic performance (SAT scores, GPA) \cite{chamorro2008personality}, student demographic attributes (e.g.\ race, gender) \cite{voyer2014gender,fairethic-education}, socio-economic status \cite{disparity-structure,disparity-socioeconomic2}, environmental factors, and extra (out-of-school) activities.
Incorporating protected attributes in the modeling procedure has raised concerns in the fair-ML domain \cite{barocas2016big}. Machine learning models are based on correlation, and any feature associated with an outcome can be used as a decision basis. However, the predictive outcome depends on the information available to the model and the specific algorithm used.
A model may leverage any feature associated with the outcome, and common measures of model performance and fairness will be essentially unaffected. In contrast, in some cases the inclusion of unprotected attributes may adversely affect the performance and fairness of a predictive model due to a latent correlation with other protected attributes. In this paper, we shall audit the unfairness of the model and the impact of imputation when we incorporate the sensitive attributes as determinants.
Decision Tree \cite{DT-corr-perform}, Random Forest \cite{RF-risk}, K-Nearest Neighbor \cite{dudani1976distance,tanner2010predicting}, LDA \cite{riffenburgh1957linear,alyahyan2020predicting}, Logistic Regression \cite{thompson2018predicting}, and SVM \cite{SVM-DT-NN} are among the well-known ML models in higher education. Table \ref{tab:Missings} presents the list of variables in this study and their corresponding missing-value percentages.
\begin{table}[!tb]
\centering
\begin{adjustbox}{width=1\columnwidth}
\begin{tabular}{|l|c|l|c|}
\toprule
\textbf{Variables } & \textbf{\% Missing} & \textbf{Variables } & \textbf{\% Missing} \\
\midrule
S-T relationship & 38.97 & Number of school activities & 1.07 \\
\midrule
F3-loan-owed & 31.23 & Std Math/Reading & 0.03 \\
\midrule
\%white teacher & 27.05 & English & 0.03 \\
\midrule
\%Black teacher & 23.76 & High school attendance & 0.02 \\
\midrule
\%Hispanic teacher & 21.94 & Family Composition & 0 \\
\midrule
F3\_GPA(first year) & 16.54 & Race\_Hispanic, race specified & 0 \\
\midrule
TV/video(h/day) & 15.68 & F3\_Separated no partner & 0 \\
\midrule
F3\_GPA(first attended) & 15.09 & F3\_Never Married w partner & 0 \\
\midrule
Work(h/week) & 14.24 & F3\_Never Married no partner & 0 \\
\midrule
F2\_College entrance & 12.74 & F3\_Married & 0 \\
\midrule
credits (first year) & 11.4 & F3\_Divorced/Widowed w partner & 0 \\
\midrule
F3\_GPA(all) & 10.58 & F3\_Divorced/Widowed no partner & 0 \\
\midrule
F1\_TV/video(h/day) & 10.52 & Race\_White & 0 \\
\midrule
Credits (total) & 10.2 & Race\_More than one race & 0 \\
\midrule
Generation & 9.76 & Race\_Hispanic, no race specified & 0 \\
\midrule
F3\_Credits\_ math & 9.22 & School Urbanicity & 0 \\
\midrule
F3\_Credits\_Science & 9.21 & Race\_Black or African Amer & 0 \\
\midrule
F3\_Highest level of education & 7.58 & Race\_Asian, Hawaii/Pac. Isl & 0 \\
\midrule
F3\_Employment & 7.58 & Race\_Amer. Indian/Alaska & 0 \\
\midrule
F1\_Std Math & 7.54 & Gender\_Male & 0 \\
\midrule
F1\_frequency of computer use & 6.64 & Gender\_Female & 0 \\
\midrule
F1\_units in math & 6.15 & Parents education & 0 \\
\midrule
Athletic level & 5.59 & Income & 0 \\
\midrule
F1\_Work(h/week) & 4.4 & F1\_Drop out & 0 \\
\midrule
Homework(h/week) & 2.37 & F3\_Separated w partner & 0 \\
\bottomrule
\end{tabular}%
\end{adjustbox}
\caption{List of Variables}
\label{tab:Missings}%
\vspace{-7mm}
\end{table}%
\section{Fairness-Accuracy Trade-off}
Imagine a situation where one of the protected attributes, e.g.\ race, is highly correlated with the target variable (e.g., highest degree attained). As a result, a biased yet accurate model predicts a low target value (e.g., a lower degree) for individuals in specific racial groups.
In this case, enforcing a fairness constraint during training prevents the model from relating race to the predicted highest level of education.
This procedure might decrease the model accuracy but achieves higher fairness for protected groups. Indeed, the ideal goal of fairness in ML may not be possible in all problems as some existing works have pointed out that an inherent trade-off exists between fairness and accuracy \cite{fair-trade-off,fair-trade-off2,fair-trade-off3}. In other words, prediction accuracy needs to be sacrificed to some extent in order to lower prediction bias \cite{fair-trade-off3,fair-trade-off4,fair-tradeoff-main}.
To balance the fairness--accuracy trade-off, some recent works have considered a Pareto frontier construction (a multi-objective optimization technique) over fairness and accuracy \cite{fair-trade-off-pareto1,fair-trade-off-pareto2}.
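A minimal sketch of such a frontier construction over candidate models, given their measured accuracy and unfairness (an illustration, not the method of the cited works):
\begin{verbatim}
def pareto_front(acc, unfair):
    # indices of models not dominated in
    # (higher accuracy, lower unfairness)
    front = []
    for i in range(len(acc)):
        dominated = any(
            acc[j] >= acc[i] and unfair[j] <= unfair[i]
            and (acc[j] > acc[i] or unfair[j] < unfair[i])
            for j in range(len(acc)))
        if not dominated:
            front.append(i)
    return front

# acc, unfair: lists of accuracy and unfairness gaps per model
\end{verbatim}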
In this paper, we consider a classification (categorical response, e.g.\ attainment)
problem to predict student success. While some ML models can only be used for either regression or classification problems, others can be applied to both after necessary modifications. Decision Tree \cite{DT-corr-perform,DT-admission,DT-impfactor,SVM-DT-RF-NN}, Random Forest \cite{RF-Progression,RF-risk,SVM-DT-RF-NN}, Neural Network \cite{SVM-NN,SVM-DT-NN,SVM-DT-RF-NN,NN}, and SVM \cite{SVM-DT-NN,SVM-DT-RF-NN,SVM-KNN,SVM-NN} are among the well-known models in higher education that can be applied to both regression and classification problems. Moreover, some models are interpretable (e.g.\ Decision Tree~\cite{DT-admission}) while others require extra effort for interpretation (e.g.\ Neural Network~\cite{explainableAI}). The ability to explain the rationale behind each prediction outcome is a necessity in human-related fields such as education. As a result, we prefer interpretable models due to their explainability for higher education researchers. However, we investigate the non-interpretable ones as well and provide insights on how to extract useful information from them using explainable AI techniques \cite{explainableAI,explainableAI2}.
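Putting the pieces together, a skeletal version of the impute--train--audit loop used throughout this paper reads as follows (the classifier and its hyperparameters are placeholders; \texttt{sp\_gap} is the helper sketched in Section~\ref{sec:fair-predictive}):
\begin{verbatim}
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# X, y: imputed features and binary success label; s: racial labels
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, s, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=8).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

print("test accuracy:", clf.score(X_te, y_te))
for g in set(s_te):
    print(g, "SP gap:", sp_gap(y_hat, s_te, g))
\end{verbatim}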
In 1987 LeBrun\footnote{As discussed in the paper by A.\,C.\ Ferreira in this volume, the same result
was established in 1953 by A.\,Blanchard \cite{B} using the ideas anticipating twistors.
The proofs of \cite{B,LeB} are reviewed there.}
\cite{LeB} proved the following restricted non-existence result for the 6-sphere.
Let $(M,g)$ be a connected oriented Riemannian 6-manifold.
Denote by $\mathcal{J}_g(M)$ the space of almost complex structures $J$ on $M$ that
are compatible with the metric (i.e.\ $J^*g=g$) and with the orientation.
This is the space of sections of an $SO(6)/U(3)$ fiber bundle, so whenever non-empty it
is infinite-dimensional. Associating to $J\in\mathcal{J}_g(M)$ the almost symplectic structure
$\oo(X,Y)=g(JX,Y)$, $X,Y\in TM$, we get a bijection between $\mathcal{J}_g(M)$ and
the space of almost Hermitian triples $(g,J,\oo)$ on $M$ with fixed $g$.
\begin{theorem}\label{thm1}
No $J\in\mathcal{J}_{g_0}(\Ss)$ is integrable (is a complex structure) for the standard (round) metric
$g_0$. In other words, there are no Hermitian structures on $\Ss$ associated to the metric $g_0$.
\end{theorem}
There are several proofs of this statement, we are going to review some of those.
The method of proof of Theorem \ref{thm1} by Salamon \cite{Sal} uses the fact that the twistor space of $(\Ss,g_0)$ is
$\mathcal{Z}(\Ss)=SO(8)/U(4)$ which is a K\"ahler manifold (it has a complex structure
because $\Ss$ is conformally flat, and the metric is induced by $g_0$), and so the holomorphic embedding
$s_J:\Ss\to\mathcal{Z}(\Ss)$ would induce a K\"ahler structure on $\Ss$.
Here the symmetry of $g_0$ is used (homogeneity), so this proof is not applicable for $g\approx g_0$
(but as mentioned in \cite{BHL}, a modification of the original approach of \cite{LeB}, based on an
isometric embedding of $(\Ss,g)$ into a higher-dimensional Euclidean space, is possible).
A generalization of Theorem \ref{thm1} obtained in \cite{BHL} is as follows.
\begin{theorem}\label{thm2}
Let $g$ be a Riemannian metric on $\Ss$. Denote by $R_g$ its Riemannian curvature,
considered as a $(3,1)$-tensor,
and by $\tilde{R}_g:\La^2T^*\Ss\to\La^2T^*\Ss$ the associated $(2,2)$ tensor (curvature operator).
Assume that its spectrum (15 functions $\lambda_i$ on $\Ss$ counted with multiplicities)
$\op{Sp}(\tilde{R}_g)=\{\lambda_{\op{min}}\leq\dots\leq\lambda_{\op{max}}\}$ is positive,
$\lambda_{\op{min}}>0$, and satisfies $5\lambda_{\op{max}}<7\lambda_{\op{min}}$. Then
no $J\in\mathcal{J}_{g}(\Ss)$ is integrable.
\end{theorem}
This theorem will be proven in Section \ref{S4} after we introduce the notations and recall
the required knowledge in Sections \ref{S2} and \ref{S3}. Then we will give another proof
of Theorem \ref{thm1} due to Sekigawa and Vanhecke \cite{SV} in Section \ref{S5}. Then in Section \ref{S6}
we generalize it in the spirit of Theorem \ref{thm2}. Section \ref{S7} will be a short
summary and an outlook.
\smallskip
Let us start with an alternative proof of Theorem \ref{thm1} following Bor and Hern\'andez-Lamoneda \cite{BHL}.
\smallskip
\begin{Proof}{Sketch of the proof of Theorem \ref{thm1}}
Let $K=\La^{3,0}(\Ss)$ be the canonical line bundle of the hypothetical complex structure $J$.
Equip it with the Levi-Civita connection $\nabla$ that is induced from $\La^3_\C(\Ss)$ by the
orthogonal projection. The curvature of $K$ with respect to $\nabla$ is
\begin{equation}\label{NF}
\Omega=R_\nabla|_{\Lambda^{3,0}}+\Phi^*\we\Phi =i\tilde{R}_g(\omega)+\Phi^*\we\Phi,
\end{equation}
where $\Phi$ is the second fundamental form (see \S\ref{S31}). It has type $(1,0)$ and so $i\Phi^*\we\Phi\leq0$ (see \S\ref{S32}).
For the round metric $g=g_0$ we have $\tilde{R}_g=\op{Id}$, so
$$
i\Omega=-\omega+ i\Phi^*\we\Phi<0.
$$
Thus $-i\Omega$ is a non-degenerate (positive) scalar valued 2-form which is closed by
the Bianchi identity. This implies that $\Ss$ is symplectic which is impossible due to
$H^2_{\text{dR}}(\Ss)=0$.
\end{Proof}
It is clear from the proof that for $g\approx g_0$ the operator $\tilde{R}_g\approx\op{Id}$ is still positive,
so the conclusion holds for a small ball around $g_0$ in $\Gamma(\odot^2_+T^*\Ss)$.
It only remains to justify the quantitative claim.
\section{Background I: connections on Hermitian bundles}\label{S2}
Let $M$ be a complex $n$-dimensional manifold.
In this section we collect the facts about calculus on $M$ important for the proof.
A hurried reader should proceed to the next section returning here for reference.
Let $\pi:E\to M$ be a Hermitian vector bundle, that is a holomorphic bundle over $M$
equipped with the Riemannian structure $\langle,\rangle$ in fibers for which the
complex structure $J$ in the fibers is orthogonal. Examples are the tangent bundle $TM$ and
the canonical line bundle $K=\Lambda^{n,0}(M)$.
Note that a Hermitian structure is given via
a $\C$-bilinear symmetric product $\odot^2(E\otimes\C)\to\C$ as follows: the restriction
$(,):E'\otimes E''\to\C$, where $E\otimes\C=E'\oplus E''=E_{(1,0)}\oplus E_{(0,1)}$
is the canonical decomposition into $+i$ and $-i$ eigenspaces of the operator $J$,
gives the Hermitian metric $\langle,\rangle:E\otimes E\to\C$, $\langle\xi,\eta\rangle=(\xi,\bar\eta)$.
There are several canonical connections on $E$.
\subsection{The Chern connection}
This is also referred to as the canonical metric \cite{GH} or characteristic \cite{GBNV} connection and
is constructed as follows. Recall that the Dolbeault complex of a holomorphic vector bundle is
$$
0\to\Gamma(E)\stackrel{\bar\partial}\to\Omega^{0,1}(M;E)=\Gamma(E)\otimes\Omega^{0,1}(M)\to
\Omega^{0,2}(M;E)\stackrel{\bar\partial}\to\dots
$$
where the first Dolbeault differential $\bar\partial$, generating all the other differentials
in the complex, is given by localization as follows (for simplicity of notations, everywhere below we keep
using $M$ for the localization).
If $e_1,\dots,e_m$ is a basis of holomorphic sections
and $\xi=\sum f^ie_i\in\Gamma(E)$ a general section, $f^i\in C^\infty(M,\C)$,
then $\bar\partial\xi=\sum\bar\partial(f^i)e_i$. It is easy to check by passing to another
holomorphic frame that this operator $\bar\partial$ is well-defined, and that its extension by
the Leibniz rule $\bar\partial(\xi\otimes\alpha)=\bar\partial(\xi)\we\alpha+\xi\cdot d\alpha$
yields a complex, $\bar\partial^2=0$.
\begin{theorem}
There exists a unique linear connection on the vector bundle $E$, i.e.\ a map $D:\Gamma(E)\to \Omega^1(M,E)=\Omega^{1}(M)\otimes\Gamma(E)$, that is
\begin{itemize}
\item compatible with the metric:
$d\langle\xi,\eta\rangle=\langle D\xi,\eta\rangle+\langle\xi,D\eta\rangle$,
\item compatible with the complex structure:
$D''=\bar{\partial}$.
\end{itemize}
The first condition is $Dg=0$ and the second implies $DJ=0$.
\end{theorem}
Above, $D''$ is the $(0,1)$-part of $D$, i.e.\ the composition of $D$ with the projection $\Omega^1(M,E)\to\Omega^{0,1}(M,E)$.
\begin{proof}
The statement is local, so we can use a local holomorphic frame $e_i$ to compute.
Thus, a linear connection $D$ is given by a connection form
$\theta=[\theta_a^b]\in\Omega^1(M;gl(n,\C))$: $De_a=\theta_a^be_b$. We use the notations
$e_{\bar a}=\bar{e_a}$, $\theta_{\bar a}^{\bar b}=\bar{\theta_a^b}$, etc, cf.\ \cite{GH}.
Let $g_{a\bar b}=\langle e_a,e_b\rangle=(e_a,e_{\bar b})$ be the components of the Hermitian metric.
The first condition on $D$ writes
$$
dg_{a\bar b}=\langle De_a,e_b\rangle+\langle e_a,D e_b\rangle=
\theta_a^c g_{c\bar{b}}+\theta_{\bar b}^{\bar c} g_{a\bar{c}}.
$$
The second condition means that all $\theta_a^b$ are $(1,0)$-forms, so the above formula splits:
$\partial g_{a\bar{b}}=\theta_a^c g_{c\bar{b}}$ $\Leftrightarrow$
$\bar\partial g_{a\bar{b}}=\theta_{\bar b}^{\bar c} g_{a\bar{c}}$.
Consequently, the connection form satisfying the two conditions is uniquely given by
$\theta=g^{-1}\cdot\partial g$, or
in components $\theta_a^b=g^{b\bar{c}}\partial g_{a\bar{c}}$.
\end{proof}
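For instance (a standard special case, recorded here for later use), for a Hermitian line bundle $(L,h)$ with a local holomorphic frame $e$ and $h=\langle e,e\rangle$ the formula reduces to
$$
\theta=h^{-1}\partial h=\partial\log h.
$$
In particular, for the tautological bundle $\mathcal{O}(-1)\subset\underline{\C}^2$ over $\C P^1$, with $e=(1,z)$ in an affine chart and $h=1+|z|^2$, we get $\theta=\frac{\bar{z}\,dz}{1+|z|^2}$.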
We will denote the Chern connection, so obtained, by $\DD$.
In particular, there is a canonical connection $\DD$ on the tangent bundle of a Hermitian manifold.
Its torsion is equal to
$$
T_\DD=\pi_{2,0}(d^c\omega)^\sharp,
$$
where $d^c\oo(\xi,\eta,\zeta)=-d\omega(J\xi,J\eta,J\zeta)$,
$\sharp:\Lambda^3T^*M\hookrightarrow\Lambda^2T^*M\otimes T^*M\to\Lambda^2T^*M\otimes TM$ is the index lift operator
and $\pi_{2,0}:\Lambda^2T^*M\otimes TM\to\Lambda^{2,0}T^*M\otimes TM=\{B:B(J\xi,\eta)=B(\xi,J\eta)=JB(\xi,\eta)\}$ is the projection, cf.\ \cite{GBNV}.
This implies that the Chern connection $\DD$ on $TM$
has a non-trivial torsion unless $(g,J,\oo)$ is K\"ahler.
\subsection{The Levi-Civita connection}
A Hermitian metric induces a canonical
torsionless metric connection $\nabla$ on $TM$: $\nabla g=0$, $T_\nabla=0$.
Due to computation of the torsion $T_\DD$ above,
the Levi-Civita connection $\nabla$ does not preserve $J$ unless $M$ is K\"ahler.
In other words, $\DD=\nabla$ only in this case.
Choosing the frame $e_a=\p_{z^a}$, $e_{\bar{a}}=\p_{\bar{z}^a}$, for a holomorphic
coordinate system $(z^a)$ on $M$, we get
$$
\nabla e_a=\Gamma^c_{ab}e^b\otimes e_c+ \Gamma^c_{a\bar{b}}e^{\bar{b}}\otimes e_c+
\Gamma^{\bar{c}}_{ab}e^b\otimes e_{\bar{c}}+\Gamma^{\bar{c}}_{a\bar{b}}e^{\bar{b}}\otimes e_{\bar{c}}
$$
and (because $g_{ab}=0=g_{\bar{a}\bar{b}}$) the Christoffel coefficients have the standard but shorter
form, e.g.
$$
\Gamma^c_{ab}=
\tfrac12g^{c\bar{d}}\Bigl(\frac{\p g_{a\bar{d}}}{\p z^b}+\frac{\p g_{b\bar{d}}}{\p z^a}\Bigr),\
\Gamma^c_{a\bar{b}}=
\tfrac12g^{c\bar{d}}\Bigl(\frac{\p g_{a\bar{d}}}{\p\bar{z}^b}-\frac{\p g_{a\bar{b}}}{\p\bar{z}^d}\Bigr),\
\text{etc.}
$$
Introducing the 1-forms $\vartheta_a^c=\Gamma_{ab}^ce^b+\Gamma_{a\bar{b}}^ce^{\bar{b}}$
(not necessarily of $(1,0)$-type) we obtain the induced connection on the
holomorphic bundles $T_{(1,0)}M$, $T^{(1,0)}M$ (and their conjugate):
$$
\nabla e_a=\vartheta_a^c e_c,\ \nabla e^c=-\vartheta_a^c e^a,\ \text{etc.}
$$
\subsection{The canonical connection}
Though we will almost not use it, let us mention also the canonical connection \cite{GBNV,T}
\begin{equation}\label{can}
\D=\tfrac12(\nabla-J\nabla J)=\nabla-\tfrac12J\nabla(J).
\end{equation}
This connection is both metric $\D(g)=0$ and complex $\D(J)=0$.
The price of this additional (second) property is the emergence of torsion: $T_{\D}(X,Y)=\frac12(\nabla_X(J)JY-\nabla_Y(J)JX)$.
Clearly $\nabla$ is the canonical connection iff the structure $(g,J,\omega)$ is K\"ahler.
Also, if the Chern connection $\DD$ is canonical, then $(\nabla_XJ)Y=(\nabla_YJ)X$,
and this implies that the structure $(g,J,\omega)$ is almost K\"ahler. A Hermitian
almost K\"ahler structure is necessarily K\"ahler \cite{GrH}.
\subsection{Induced connections}
The above connections naturally induce canonical connections on the canonical bundle $K$.
For the Chern connection $\DD e_a=\theta_a^ce_c$ this is given via the section
$\Omega=e^1\we\dots\we e^n\in\Gamma(K)$ by $\DD\Omega=-\op{tr}(\theta)\ot\Omega$,
where $\op{tr}(\theta)=\theta_a^a$. Due to
$\langle\Omega,\Omega\rangle=\op{det}(g^{a\bar{b}})=\det(g_{a\bar{b}})^{-1}$ we also have
$\DD\Omega=\frac{-1}2\p\log g\ot\Omega$, where $g=\det(g_{ij})$ is the determinant of the underlying Riemannian metric (so that $\frac12\p\log g=\p\log\det(g_{a\bar{b}})=\op{tr}(\theta)$).
Similarly, for the Levi-Civita connection $\nabla e_a=\vartheta_a^ce_c$ we get
$\nabla\Omega=-\op{tr}(\vartheta)\ot\Omega$, and in general the connection form
$\op{tr}(\vartheta)=\vartheta_a^a$ differs from that for the Chern connection
(but coincides with it in the K\"ahler case).
\subsection{Curvature and the second fundamental form}
Pick a linear connection $D$
on a vector bundle $E$ over $M$. Denote $\Omega^k(M,E)=\Gamma(\La^kT^*M\ot E)$.
Then $D$ can be uniquely extended to a sequence of maps
$D:\Omega^k(M,E)\to\Omega^{k+1}(M,E)$ by the Leibniz super-rule:
for $\alpha\in\Omega^\bullet(M)$ and $s\in\Gamma(E)$ let
$D(\alpha\ot s)=d\alpha\ot s+(-1)^{|\alpha|}\a\we Ds$.
The curvature is the obstruction for $(\Omega^\bullet(M,E),D)$ to be a complex:
identify $D^2:\Omega^0(M,E)\to\Omega^2(M,E)$ with
$R_D\in\Gamma(\La^2T^*M\ot\op{End}(E))$,
$R_D(\xi,\eta)=[D_\xi,D_\eta]-D_{[\xi,\eta]}\in\Gamma(\op{End}(E))$,
$\xi,\eta\in\Gamma(TM)$.
Here it is important which sign convention we choose. In terms of the connection matrix
$\theta_E=(\theta_a^b)$ we get:
$$
D^2e_a=D(\theta_a^be_b)=(d\theta_a^b-\theta_a^c\we\theta_c^b)e_b.
$$
This yields the Maurer--Cartan structure equation for the curvature:
$\Theta_a^b=d\theta_a^b-\theta_a^c\we\theta_c^b$,
or in coordinate-free notation $\Theta_E=d\theta_E-\frac12[\theta_E,\theta_E]$.
Note that if $D$ is the Chern connection (on a Hermitian bundle), then $\Theta_E$ is a matrix of $(1,1)$-forms,
but for the Levi-Civita connection in general this is not the case.
Let $E_0\subset E$ be a holomorphic subbundle and $E_1=E_0^\perp$ its ortho-complement.
Since $E_0$ is a Hermitian bundle in its own right, we have two first-order differential operators
$$
D_E|_{\Gamma(E_0)}:\Gamma(E_0)\to\Omega^1(M,E)\ \text{ and }\
D_{E_0}:\Gamma(E_0)\to\Omega^1(M,E_0).
$$
The second fundamental form of the subbundle $E_0$ in $E$ with normal bundle $E_1$
is the tensor $\Phi\in\Omega^1(M)\ot\Gamma(\op{Hom}(E_0,E_1))$ given by
$$
\Phi=D_E|_{\Gamma(E_0)}-D_{E_0}:\Gamma(E_0)\to\Omega^1(M,E_1).
$$
Note that for $D=\DD$ the Chern connection, $\Phi\in\Omega^{(1,0)}(M,\op{Hom}(E_0,E_1))$.
The connection matrix in the splitting $E=E_0\oplus E_1$ writes
$$
\theta_E=\begin{bmatrix}\theta_{E_0} & \Phi^*\\ \Phi & \theta_{E_1}\end{bmatrix},
$$
where $\Phi^*=\bar\Phi^t$. Hence the curvature is
$$
\Theta_E= d\theta_E-\theta_E\we\theta_E=
\begin{bmatrix}
d\theta_{E_0}-\theta_{E_0}\we\theta_{E_0}-\Phi^*\we\Phi & \star\ \\
\star & \star\
\end{bmatrix}
$$
so that
\begin{equation}\label{SFF}
\Theta_E|_{E_0}=\Theta_{E_0}-\Phi^*\we\Phi,
\end{equation}
where for vector spaces $V,W$ and elements $\a,\b\in V^*\ot\op{End}(W)$, $X,Y\in V$, we let
$(\a\we\b)(X,Y)=\a(X)\b(Y)-\a(Y)\b(X)$, $\a^*(X)=(\a(X))^*$.
\section{Background II: positivity}\label{S3}
In this section preliminary computations are made, following \cite{BHL}.
The first subsection is just linear algebra and so is applicable to a complex vector space
$(V,J)$ ($=T_xM$), equipped with a complex-valued symmetric $\C$-bilinear non-degenerate form $(\cdot,\cdot)$
with $(X,\bar X)\ge0$. The corresponding Hermitian metric is $\langle X,Y\rangle=(X,\bar{Y})$.
Let us call a 2-form $\omega\in\La^2V^*$ {\em positive\/} (resp.\ non-negative $\omega\ge0$)
if the corresponding bilinear form $b(X,Y)=\omega(X,JY)$ is symmetric positive definite (resp.\ positive semidefinite).
In other words, this 2-form is $J$-invariant, i.e.\ $\omega\in\La^{1,1}V^*$, and
$\frac1i\omega(X',X'')>0$ (resp.\ $\ge0$) for $X\neq0$.
Here $X'=\frac12(X-iJX)$ is the projection of $X\in V$ to $V_{1,0}$
and $X''=\bar{X'}=\frac12(X+iJX)$ is the projection of $X$ to $V_{0,1}$.
Next, a 2-form $\Omega\in\La^2V^*\otimes\op{End}(W)$
with values in Hermitian endomorphisms of a complex space $W$
is {\em positive\/} (resp.\ non-negative $\Omega\ge0$)
if the scalar valued 2-form $\langle \Omega w,w\rangle$ is positive (resp.\ $\ge0$) $\forall w\neq0$.
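For example, in a unitary co-frame $\{e^a\}$ of $V^*$ the standard form $\oo_0=i\sum_a e^a\we e^{\bar{a}}$ is positive: for $X'=\sum X^ae_a$ one has $e^a(X'')=0=e^{\bar{a}}(X')$, whence
$$
\tfrac1i\oo_0(X',X'')=\sum_a|X^a|^2>0\quad\text{for }X\neq0.
$$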
\subsection{Projection on the canonical bundle}\label{S31}
For an anti-symmetric endomorphism $A:V\to V$ denote $\hat{A}\in\La^2V^*$ the element given
by lowering indices: $\hat{A}(v\we w)=(v,Aw)=-(Av,w)$, $v,w\in V$.
Note that $(A^*\a,\b)=(\hat{A},\a\we\b)$ for arbitrary $\a,\b\in V^*$. Indeed, using
the operator $\sharp:V^*\to V$ of raising indices, we get
$$
(\hat{A},\a\we\b)=\hat{A}(\a^\sharp\we\b^\sharp)=(\a^\sharp,A\b^\sharp)=\a(A\b^\sharp)=(A^*\a)(\b^\sharp)=(A^*\a,\b).
$$
Here and below star denotes the usual pull-back $A^*:\La^kV^*\to\La^kV^*$.
\begin{lem}
Let $n=\dim_\C V$ and $\omega(X,Y)=\langle JX,Y\rangle$ be the symplectic form on $V$. Then,
denoting $\pi_0:\La^nV^*\ot\C\to\La^{n,0}V^*$ the orthogonal projection, we have
$$
\pi_0A^*\pi_0^*=-i(\hat{A},\omega)\in\op{End}(\La^{n,0}V^*).
$$
\end{lem}
\begin{proof}
In a unitary (holomorphic) basis $\{e_a\}$ of $V$ with the dual basis $\{e^a\}$ of $V^*$ we get
$\omega=i(e^1\we e^{\bar1}+{\dots}+e^n\we e^{\bar n})\in\La^{1,1}V^*$. The holomorphic volume form is
$\Omega=e^1\we\dots\we e^n\in\La^{n,0}V^*$. Hence
$\pi_0A^*\pi_0^*:\La^{n,0}V^*\to\La^{n,0}V^*$ is equal to
$$
\langle A^*\Omega,\Omega\rangle=
\langle A^*e^1,e^1\rangle+{\dots}+\langle A^*e^n,e^n\rangle
=(\hat{A},e^1\we e^{\bar1}+{\dots}+e^n\we e^{\bar n})
$$
and the last expression is $(\hat{A},-i\omega)$.
\end{proof}
For $R\in\La^2V^*\ot\op{End}(V)$ with values in anti-symmetric endomorphisms
define $\tilde{R}\in\op{End}(\La^2V^*)$ as the composition (where $\flat=\sharp^{-1}$)
$$
\La^2V^*\ot\op{End}_{\text{skew}}(V)\stackrel{\flat}\to
\La^2V^*\ot\La^2V^*\stackrel{\1\ot\sharp^{\we2}}\longrightarrow\La^2V^*\ot\La^2V=\op{End}(\La^2V^*).
$$
In other words, if $R=\sum\a_k\ot A_k$, then for $\b\in\La^2V^*$ the action is
$\tilde{R}(\b)=\sum(\hat{A}_k,\b)\a_k$. Now the previous lemma implies
\begin{cor}
Denote by $R_\nabla$ the curvature of the connection induced from the Levi-Civita connection
on $\La^nTM^*$. Then
$$
R_\nabla|_K=i\tilde{R}(\omega)\in\Omega^2(M).
$$
\end{cor}
\begin{proof}
If $R=\sum\a_k\ot A_k$, then $R_\nabla=-\sum\a_k\ot A_k^*$ and therefore
$\pi_0R_\nabla^*\pi_0^*=i\sum(\hat{A}_k,\omega)\a_k=i\tilde{R}(\omega)$.
\end{proof}
This corollary and decomposition \eqref{SFF} yield formula \eqref{NF} from the introduction.
\subsection{Type of the second fundamental form}\label{S32}
Recall \cite{GH} that for the Chern connection the second fundamental form is always of type $(1,0)$.
For the Levi-Civita connection this is not always so; however, for the subbundle
$K\subset\La^nT^*M\ot\C$ this property holds.
\begin{lem}
The second fundamental form of the canonical bundle $K$ satisfies
$\Phi\in\Omega^{1,0}(M)\ot\Gamma(\op{Hom}(K,\La^nT^*M\ot\C/K))$.
\end{lem}
\begin{proof}
Let us first prove the same property for the subbundle $\La^{1,0}(M)\subset T^*M\ot\C$.
Let $e_a$ be a (local) unitary frame and $e^a$ the dual co-frame.
Since $\nabla e^a=-\vartheta_c^ae^c-\vartheta_{\bar c}^ae^{\bar c}$, the claim means
$\vartheta_{\bar c}^a\in\Omega^{1,0}(M)$.
Decompose the $(0,1)$-part of this connection form in components:
$\vartheta''{}_{\bar b}^a=\sum\beta_{abc}e^{\bar c}$.
The Leibniz rule applied to $d(e_a,e_b)=0$ implies that $\beta_{abc}$ is skew-symmetric in $ab$.
On the other hand, $\nabla$ is torsion-free and so
$$
de^a=\op{alt}[\nabla e^a]=-\sum(\vartheta^a_c\we e^c+\vartheta_{\bar c}^a\we e^{\bar c}).
$$
Since $J$ is integrable, the Nijenhuis tensor vanishes, i.e.\ the component $d^{-1,2}:\Omega^{1,0}(M)\to\Omega^{0,2}(M)$ of the de Rham differential is zero,
and this implies $\sum\vartheta''{}_{\bar c}^a\we e^{\bar c}=0$. Therefore
$\beta_{abc}$ is symmetric in $bc$, and the $S_3$-lemma
$(V\wedge V\otimes V)\cap(V\otimes V\odot V)=0$
yields $\beta_{abc}=0$, i.e.\ $\vartheta''{}_{\bar c}^a=0$.
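(For completeness we recall the elementary proof of the $S_3$-lemma: if $\beta_{abc}$ is skew-symmetric in $ab$ and symmetric in $bc$, then
$$
\beta_{abc}=-\beta_{bac}=-\beta_{bca}=\beta_{cba}=\beta_{cab}=-\beta_{acb}=-\beta_{abc},
$$
whence $\beta=0$.)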
Now we pass to the subbundle $K=\La^{n,0}(M)\subset\La^nT^*M\ot\C$. Since
$$
\nabla(e^1\we\dots\we e^n)=-\sum\vartheta^a_{\bar c}\ot(e^1\we\dots\we e^{a-1}\we
e^{\bar c}\we e^{a+1}\we\dots\we e^n)\,\op{mod}K
$$
the claim follows.
\end{proof}
\begin{cor}
We have: $-i\Phi^*\we\Phi\ge0$.
\end{cor}
\begin{proof}
Since $\Phi$ has type $(1,0)$ we conclude
$$
\tfrac1i(-i\Phi^*\we\Phi)(X',X'')=\Phi^*(X'')\Phi(X')=(\Phi(X'))^*\Phi(X')\ge0.
$$
\vspace{-19pt}
\end{proof}
\section{Proof of Theorem \ref{thm2}}\label{S4}
This section contains the proof of Theorem \ref{thm2}, following the approach of \cite{BHL}.
Theorem \ref{thm1} is an immediate corollary.
We first prove a quantitative assertion that a small perturbation of a positive form is nondegenerate.
Let $(g,J,\omega)$ be a (linear) Hermitian structure on a vector space $V$,
$n=\dim_\C V=\tfrac12\dim_\R V$. The Euclidean structure on $V$ induces the following
norm on $\op{End}(V)$: $\|A\|_E=\sqrt{\sum|A_i^j|^2}$, where $A=[A_i^j]$ is the matrix representation
in some unitary basis. It also yields the norm $\z\mapsto\|\z\|_{\La^2}$ on $\La^2V^*$.
Note that the embedding $\La^2V^*\simeq\op{End}_\text{skew}(V)\subset\op{End}(V)$,
$\hat{A}\mapsto A$, scales the norm: $\|\z\|^2_E=2\|\z\|_{\La^2}^2$.
Below we identify $\hat{A}$ with $A$.
\begin{lem}\label{LL}
Let $\z_0$ be a real $(1,1)$-form and $\z_0\geq\omega$. Then any real $(1,1)$-form
$\z$, such that $\|\z-\z_0\|_{\La^2}<\frac1{2\sqrt{n}}$, is nondegenerate.
\end{lem}
\begin{proof}
Recall that if $\|A\|_E<1$ for $A\in\op{End}(V)$, then $\1-A\in\op{End}(V)$ is invertible. Indeed,
$(\1-A)^{-1}=\sum_{k=0}^\infty A^k$.
Diagonalize $\oo$ and $\z_0$ simultaneously: in some unitary co-frame $e^a$
$$
\oo=i\sum e^a\we e^{\bar a},\qquad \z_0= i\sum \l_a e^a\we e^{\bar a},
$$
and $\l_a\geq1$ by the assumptions. Then $\z_0^{-1}=-i\sum \l_a^{-1} e^a\we e^{\bar a}$
and $\|\z_0^{-1}\|_E^2=2\sum\l_a^{-2}\leq 2n$.
Decompose $\z=\z_0+(\z-\z_0)=\z_0\cdot(\1+\z_0^{-1}(\z-\z_0))$. The claim follows from
$\|\z_0^{-1}(\z-\z_0)\|_E\leq\|\z_0^{-1}\|_E\cdot \|\z-\z_0\|_E<
\sqrt{2n}\cdot\frac{\sqrt{2}}{2\sqrt{n}}=1$.
\end{proof}
Now the proof of Theorem \ref{thm2} is concluded as follows. Let $n=3$.
Normalize $g$ by the requirement $\op{Sp}(\tilde{R})\in(\frac56,\frac76)$.
Then $\op{Sp}(\1-\tilde{R})\in(-\frac16,\frac16)$,
and so $\|\tilde{R}(\omega)-\omega\|<\frac16\|\omega\|=\frac16\sqrt{3}=\frac1{2\sqrt{3}}$.
By \eqref{NF} we get
$$
-i\Omega=\tilde{R}(\omega)-i\Phi^*\we\Phi=(\tilde{R}(\omega)-\omega)+(\oo-i\Phi^*\we\Phi).
$$
Since $\z_0=\oo-i\Phi^*\we\Phi\ge\oo$, by Lemma \ref{LL} we conclude that $i\Omega$,
and hence $\Omega$ are nondegenerate. Since $\Omega$ is closed by Bianchi's identity,
it is symplectic on $\Ss$, which is a contradiction. \qed
\section{Another approach}\label{S5}
In this section we give yet another proof of Theorem \ref{thm1} due to K. Sekigawa and
L. Vanhecke \cite{SV}. We should warn the reader of some unspecified sign choices
in their paper, which we amend here.
Our sign conventions in this respect are in agreement with \cite{Gr,GBNV},
though in these sources the curvature is defined as minus that of ours.
Since there are several differences in sign conventions, for instance in passing from $(g,J)$ to
$\omega$, in the Ricci contraction, etc., this is reflected in sign differences in our formulae,
which are otherwise fully equivalent.
\subsection{The first Chern class}
Given a connection $D$ and its curvature tensor $R_D$ on an almost Hermitian manifold $(M,g,J)$ of dimension $2n$
define its holomorphic Ricci curvature by
$$
\op{Ric}^*_D(X,Y)=-\op{Tr}\Bigl(R(X,J\cdot)JY\Bigr)=\frac12\sum_{i=1}^{2n} R_D(X,JY,e_i,Je_i),
$$
where $e_1,e_2=Je_1,\dots,e_{2n-1},e_{2n}=Je_{2n-1}$ is a $J$-adapted orthonormal basis.
For the characteristic (Chern) connection $\mathbb{D}$ the 2-form
$$
\gamma_1(X,Y)=\frac{-1}{2\pi}\op{Ric}^*_{\mathbb{D}}(X,JY)=\frac1{2\pi}\op{Ric}^*_{\mathbb{D}}(JX,Y)
$$
represents the first Chern class $c_1=[\gamma_1]$, see \cite{GBNV}.
When passing to the Levi-Civita connection $\nabla$, this simple formula is modified.
A relation between the two connections is given by \cite[(6.2)]{GBNV}
that, in the case of integrable $J$, states
$$
g(\mathbb{D}_XY,Z)= g(\D_XY,Z)-\tfrac12g(JX,\nabla_Y(J)Z-\nabla_Z(J)Y),
$$
with the canonical connection $\D$ given by (\ref{can}). This allows us to express
the first Chern form in terms of $\nabla$ (the curvature of $\D$ is expressed through that
of $\nabla$ in \cite{T}, and the curvature of $\mathbb{D}$ -- in \cite{GBNV}).
Define the 2-forms $\psi(X,Y)=-2\op{Ric}^*_{\nabla}(X,JY)=\sum R_\nabla(X,Y,e_i,Je_i)$ and
$\vp(X,Y)=\op{Tr}\Bigl(J(\nabla_XJ)(\nabla_YJ)\Bigr)=-\sum(\nabla_XJ)^a_b(\nabla_{JY}J)^b_a$.
With these choices (cf.\ \cite{GBNV,SV}) the first Chern form is given by
\begin{equation}\label{Chern}
8\pi\gamma_1=2\psi+\vp.
\end{equation}
\subsection{Alternative proof of Theorem \ref{thm1} }
Now suppose that $g$ has constant sectional curvature $k>0$, i.e.\
$$
R_\nabla(X,Y,Z,T)=g(R_\nabla(X,Y)Z,T)=k\cdot(g\varowedge g)(X,Y,Z,T),
$$
where $(g\varowedge g)(X,Y,Z,T)=g(X,Z)g(Y,T)-g(X,T)g(Y,Z)$ is the Kulkarni-Nomizu product, whence
\begin{gather*}
\op{Ric}_\nabla(X,Y)=\sum R_\nabla(X,e_i,Y,e_i)=(2n-1)\,k\,g(X,Y),\\
\op{Ric}^*_\nabla(X,Y)=\sum R_\nabla(X,e_i,JY,Je_i)=k\,g(X,Y).
\end{gather*}
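For the reader's convenience, here is a short verification of the second identity: using $g(e_i,Je_i)=0$ and $\sum_ig(X,Je_i)e_i=-JX$,
$$
\op{Ric}^*_\nabla(X,Y)=k\sum_i\bigl(g(X,JY)\,g(e_i,Je_i)-g(X,Je_i)\,g(e_i,JY)\bigr)=k\,g(JX,JY)=k\,g(X,Y).
$$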
In other words, this metric $g$ is both Einstein and $*$-Einstein, and the scalar and $*$-scalar
curvatures are both positive.
Thus both $\op{Ric}_\nabla$ and $\op{Ric}^*_\nabla$ are positive definite, and hence $\psi>0$.
Now since $\vp(X,JX)=\|\nabla_{JX}J\|^2=\|\nabla_XJ\|^2\ge0$, we have $\vp\ge0$, and
consequently $\gamma_1>0$. Integrating $\gamma_1^n$ yields $c_1^n(M)\neq0$.
Returning to the case $M=\Ss$, $n=3$, and the standard round metric $g=g_0$
of constant sectional curvature 1,
we obtain a contradiction because $c_1\in H^2(\Ss)=0$,
and so $c_1^3=0$ as well. \qed
\section{Generalization of the idea of Section \ref{S5}}\label{S6}
If we perturb the metric $g$ starting from $g_0$, it is no longer $*$-Einstein, and
the argument of the previous section no longer applies verbatim.
However, since the space of $g$-orthogonal almost complex structures on each tangent space,
$\mathcal{J}_g\simeq O(2n)/U(n)$,
is compact, the image of the map
$$
\mathcal{J}_g\ni J\mapsto\op{sym}[\psi(\cdot,J\cdot)]\in\Gamma(\odot^2T^*M)
$$
is close to the one-point set $\{g_0\}$ (because $g_0$ is $*$-Einstein)
and so is positive for $g$ sufficiently close to $g_0$. Thus we still get
the inequality $8\pi\gamma_1=2\psi+\vp>0$ as in the previous section, and so
conclude non-existence of $g$-orthogonal complex structures $J$ on $\Ss$ for an open set
of metrics $g\in\Gamma(\odot^2_+T^*\Ss)$ in $C^2$-topology.
A quantitative version of this idea is a novel result given below.
\subsection{Bounds in the space of curvature tensors}
Fix a Euclidean space $V$ of even dimension $2n$ with metric $g=\langle\cdot,\cdot\rangle$,
and consider the space $\mathcal{R}$ of algebraic curvature tensors on it.
Identifying $(3,1)$ and $(4,0)$ tensors via the metric,
$\mathcal{R}=\op{Ker}[\wedge:\odot^2\Lambda^2V^*\to\Lambda^4V^*]$.
In this subsection we restrict to linear tensors in $V$. Denote by $\mathcal{P}$ the space
$\{R\in\mathcal{R}:\op{Ric}^*_R(X,X)\ge0\ \forall X\in V,\forall J\in\mathcal{J}_g\}$,
where $\op{Ric}^*_R$ is computed via $R$ and $J$ as in the previous section.
This can be expressed in index terms as follows.
Denote by $\mathcal{F}_g$ the space of $g$-orthonormal frames $e=\{e_1,\dots,e_{2n}\}$ on $V$.
Each such frame yields an orthogonal complex structure on $V$ by
$Je_i=(-1)^{i-1}e_{i^\#}$, where $i^\#=i-(-1)^i$. For every $e\in\mathcal{F}_g$ and $R\in\mathcal{R}$
compute $\alpha_{ij}=\sum_{k=1}^{2n}R(e_i,e_k,e_{j^\#},e_{k^\#})=(-1)^{i+j}\alpha_{j^\#i^\#}$
and form the symmetric matrix $A$ with entries $a_{ij}=\frac12(\alpha_{ij}+\alpha_{ji})$.
Then $R\in\mathcal{P}$ iff $A$ is positive semidefinite for every $e\in\mathcal{F}_g$,
and this can be determined by finite-dimensional optimization via the Sylvester criterion.
A simple sufficient criterion for this is the following. Introduce the following $L^\infty$-norm
on $\mathcal{R}$: $\|R\|_\infty=\max_{\{|v_i|=1\}}|R(v_1,v_2,v_3,v_4)|$.
\begin{lem}
If $\|R-g\varowedge g\|_\infty\leq\frac1{2n}$, then $R\in\mathcal{P}$.
\end{lem}
\begin{proof}
Denote $\check{R}= R-g\varowedge g$. Then for any $J\in\mathcal{J}_g$ and $X\in V$ with $\|X\|=1$
we get
\begin{multline*}
\op{Ric}^*_R(X,X)=\sum R(X,e_i,JX,Je_i)=\|X\|^2+\sum\check{R}(X,e_i,JX,Je_i)\\
\ge\|X\|^2-2n\|X\|^2\max_{\|u\|=\|v\|=1}|\check{R}(u,v,Ju,Jv)|\ge0.
\end{multline*}
Thus $R\in\mathcal{P}$.
\end{proof}
\subsection{A non-existence alternative to Theorem \ref{thm2}}
Write $g\in\mathcal{P}$ if the curvature tensor of $g$ satisfies this positivity property on every
tangent space $V=T_xM$, $x\in M$.
The set $\mathcal{P}$ is a neighborhood of
the round metric $g_0$ on $M=\Ss$ in the space of all metrics in $C^2$-topology.
\begin{theorem}\label{Thlast}
$\Ss$ possesses no Hermitian structure $(g,J,\omega)$ with $g\in\mathcal{P}$.
\end{theorem}
\begin{proof}
In formula \eqref{Chern} $\vp\ge0$, and if $g\in\mathcal{P}$ then $\psi\ge0$ as well.
Thus $\gamma_1\ge0$ and we conclude $0=c_1^3[\Ss]=\int_{\Ss}\gamma_1^3\ge0$.
This integral is the sum of several non-negative summands, the last of which is $\int_{\Ss}\vp^3$.
Since all of these summands have to vanish, we conclude $\vp=0$ implying $\|\nabla J\|^2=0$.
Thus $\nabla J=0$ meaning that $(g,J,\omega)$ is a K\"ahler structure on $\Ss$ and this is a
contradiction.
\end{proof}
\begin{cor}
If $\|R_\nabla-g\varowedge g\|_\infty\leq\frac16$ for the curvature $R_\nabla$ of a metric $g$ on $\Ss$,
then no $g$-orthogonal almost complex structure $J$ is integrable.
\end{cor}
Note that this can be again considered as a perturbation result for the metric $g_0$,
for which the curvature tensor is $R_{\nabla_0}=g_0\varowedge g_0$:
If $\|R_\nabla-R_{\nabla_0}\|_\infty\leq\epsilon_1$, $\|g-g_0\|_\infty\leq\epsilon_2$ and
$\epsilon_1+4\epsilon_2+2\epsilon_2^2\leq\frac16$ (one can check that this follows from the linear constraint
$\epsilon_1+\bigl(2+\sqrt{13/3}\bigr)\epsilon_2\leq\frac16$),
the claim follows. Indeed (note that $\|v\|^2=g_0(v,v)$) we have:
\begin{gather*}
\tfrac12\|g\varowedge g-g_0\varowedge g_0\|_\infty\leq
\max_{\|v_i\|=1}|g(v_1,v_3)g(v_2,v_4)-g_0(v_1,v_3)g_0(v_2,v_4)|\leq\\
\max_{\|v_i\|=1}|g(v_1,v_3)-g_0(v_1,v_3)|\,|g(v_2,v_4)|+|g(v_2,v_4)-g_0(v_2,v_4)|\,|g_0(v_1,v_3)|\\
\leq\|g-g_0\|_{\infty}(2+\|g-g_0\|_{\infty}).
\end{gather*}
Thus $\|R_\nabla-g\varowedge g\|_\infty\leq\|R_\nabla-g_0\varowedge g_0\|_\infty+\|g_0\varowedge g_0-g\varowedge g\|_\infty\leq
\epsilon_1+2\epsilon_2(2+\epsilon_2)$.
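As a numerical illustration: with $\epsilon_2=0.02$ the linear constraint allows $\epsilon_1\leq\frac16-\bigl(2+\sqrt{13/3}\bigr)\cdot0.02\approx0.085$, and then indeed $\epsilon_1+4\epsilon_2+2\epsilon_2^2\approx0.166\leq\frac16$. The linear constraint is sharp in the sense that it forces $\epsilon_2\leq\frac{1/6}{2+\sqrt{13/3}}\approx0.041$, which is exactly the range where $4\epsilon_2+2\epsilon_2^2\leq\bigl(2+\sqrt{13/3}\bigr)\epsilon_2$.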
\section{Concluding remarks}\label{S7}
The non-existence results of this paper are not sharp. Indeed, both Theorem \ref{thm2} and
the Corollary to Theorem \ref{Thlast} rely on rough upper bounds and could be further improved.
Note also that the property of an almost complex structure $J$ being $g$-orthogonal depends
only on the conformal class of the metric\footnote{This has the following corollary generalizing Theorem \ref{thm1}:
If $g$ is conformally equivalent to $g_0$ on $\Ss$, then the space $\mathcal{J}_g(\Ss)$ contains no complex structures.}, while the Riemann and holomorphic Ricci tensors, used in
the proofs, are not conformally invariant. It is a challenge to further elaborate the
results to get better bounds, implying non-existence of orthogonal complex structures
in a larger neighborhood of the round metric on $\Ss$.
Every almost complex structure $J$ on $\Ss$ is orthogonal with respect to some metric $g$, but
as this (or any conformally equivalent metric) can be far from $g_0$, the positivity argument will not work.
Note that all known proofs of non-existence of Hermitian structures for certain $g$ use only one property of the 6-sphere,
namely that $H^2(\Ss)=0$. It would be interesting to find a proof of non-existence of orthogonal complex
structures based on some other ideas.
\bigskip
\textsc{Acknowledgement.} I am grateful to Oliver Goertsches for a careful reading of the first draft,
and useful suggestions on the exposition.
|
1,108,101,566,711 | arxiv | \section{Introduction}
Star formation is a violent process, even in low-mass protostars ($L_{\rm bol}$ $<$ 100 L$_\odot$). X-rays and UV radiation from the accreting star-disk system illuminate the inner, dense envelope while the protostellar jet and wind impinge on the same inner envelope, driving shocks into dense gas. The implication is that the physical and chemical conditions along the outflow cavities are significantly different from the conditions in the bulk of the collapsing envelope. Very little is known about the hot ($T$ $>$ 500 K) gas in low-mass protostars, first because the mass of the hot gas is at most a few \% of that of the envelope, and second because few unique, abundant tracers of the hot gas in the inner envelope exist, save H$_2$ and CO observed at near-infrared wavelengths \citep{herczeg11}, both of which are very difficult to detect towards the deeply embedded protostars where the $A_{\rm V}$ is $\gtrsim$100 \citep[e.g.,][]{maret09}.
The hot gas is most prominently seen in high-$J$ CO observations with the Photodetector Array Camera and Spectrometer (PACS) on \textit{Herschel} \citep{poglitsch10, pilbratt10}, where CO emission up to $J$ = 49--48 is detected towards low-mass protostars \citep[$E_{\rm up}$ $\sim$ 5000 K;][]{herczeg12, gecco12}. The high-$J$ CO emission ($J_{\rm up}$ $>$ 14) traces two components with rotational temperatures of 300 and 700--800~K (a warm and hot component, respectively) seen towards several tens of low-mass protostars \citep{green13, karska13, manoj12}. At present it is unclear whether the two temperature components correspond to separate physical components, or whether they are part of a distribution of temperatures, or even just a single temperature and density \citep{visser12, neufeld12, manoj12, karska13}. Moreover, depending on the excitation conditions, the rotational temperature may or may not be identical to the kinetic gas temperature. \citet{santangelo13} and \citet{dionatos13} relate the two rotational-temperature components seen in CO to two rotational-temperature components seen in H$_2$ rotational diagrams; with its lower critical density ($\sim$ 10$^3$ cm$^{-3}$ versus $>$ 10$^5$~cm$^{-3}$ for high-$J$ CO transitions) H$_2$ is more likely to be thermally excited suggesting that the excitation is thermal.
The PACS lines provide little information, beyond the rotational temperature, as they are all velocity-unresolved and no information is therefore available on the kinematics of this gas. If the very high-$J$ CO emission is caused by shocks in the inner dense envelope ($n$ $\gtrsim$ 10$^6$ cm$^{-3}$) the rotational temperature is similar to the kinetic gas temperature, as proposed by, e.g., \citet{vankempen10} and \citet{visser12}. Further support for this hypothesis and the existence of shocks in the dense inner envelope comes from observations of [\ion{O}{i}] at 63 $\mu$m and OH, also done with PACS. Towards the low-mass protostar HH46, the inferred column densities of these species indicate the presence of fast ($\varv$ $>$ 60 km s$^{-1}$) dissociative shocks close to the protostar \citep{vankempen10, wampfler10, wampfler13}.
As part of the `Water in star-forming regions with \textit{Herschel}' programme \citep[WISH\footnote{\url{http://www.strw.leidenuniv.nl/WISH}};][]{vandishoeck11}, \citet{kristensen10, kristensen12} detected a distinct velocity component towards six low-mass protostars in the H$_2$O 1$_{10}$--1$_{01}$ 557 GHz transition with the Heterodyne Instrument for the Far-Infrared on \textit{Herschel} \citep[HIFI;][]{degraauw10}. The component is typically blue-shifted from the source velocity by $\sim$2--10 km s$^{-1}$ and the width is in the same range as the offset, $\sim$5--10 km s$^{-1}$. \citet{kristensen12} referred to this component as the ``medium component'' and associated it with shocks on the inner envelope/cavity wall based on the coincidence of H$_2$O masers and this velocity component. The maser association suggests excitation conditions where the density is $>$ 10$^7$ cm$^{-3}$ and $T$ $>$ 500 K \citep{elitzur92}, conditions similar to those inferred for the high-$J$ CO emission. Yet, so far little is known concerning this velocity component, its origin in the protostellar system and the local conditions, primarily because the component is not seen in ground-based observations of lower-excited lines towards these same sources.
In this paper, we combine observations of the H$_2$O offset component presented above with observations of light hydrides \citep[OH, OH$^+$, CH$^+$;][Benz et al. in prep.]{wampfler13} and highly excited velocity-resolved CO (up to $J$ = 16--15, for one source) to constrain the physical and chemical conditions in this velocity component. The component considered in this paper is the same as presented in \citet{kristensen12}, except towards one source, NGC1333-IRAS2A. The observations and data reduction are described in Sect. 2. The results are presented in Sect. 3. In Sect. 4 we discuss the derived excitation conditions and the interpretation of the velocity component in the context of irradiated shocks. Finally, Sect. 5 contains the conclusions.
\section{Observations}
\label{sec:obs}
\subsection{Source sample}
Six low-mass protostars in NGC1333 and Serpens clearly show the presence of an offset component in the H$_2$O 557 GHz line \citep{kristensen12}. The sources are NGC1333-IRAS2A, IRAS3A, IRAS4A, IRAS4B and Serpens SMM1, SMM3. Two sources, IRAS3A and SMM3, show the offset component in absorption against the outflow and/or continuum. The sources are all part of the WISH sample of low-mass protostars \citep{vandishoeck11, kristensen12}. IRAS3A was only observed in the H$_2$O 557 GHz line and is therefore excluded from further analysis. The component is not seen towards any other source, and the possible reasons will be discussed in Sect. \ref{sec:origin}.
\citet{kristensen10, kristensen12} identified three characteristic components from the line profiles of H$_2$O lines in low-mass protostars: a narrow ($FWHM$ $<$ 5 km s$^{-1}$), medium (5 $<$ $FWHM$ $<$ 20 km s$^{-1}$), and broad ($FWHM$ $>$ 20 km s$^{-1}$) component. The component in this study is characterised by its offset rather than by its line width (as in our previous work), and is thus named the ``offset'' component. It is the same as the medium component for all sources except IRAS 2A. Towards this source the ``offset'' component is broader than the ``medium'' component (40~km~s$^{-1}$ vs. 10 km s$^{-1}$) but is clearly offset from the source velocity. The offset component is not seen in low-$J$ CO transitions observed from the ground but is detected in H$_2$O emission \citep{kristensen10}, which is not the case for the ``medium'' component; hence the redefinition.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=.98\columnwidth]{h2o_broadmedium_557.eps}
\includegraphics[width=.98\columnwidth]{h2o_broadmedium_987.eps}
\end{center}
\caption{\textit{Top:} Continuum-subtracted HIFI spectra of the H$_2$O 1$_{10}$--1$_{01}$ ground-state transition at 557 GHz ($E_{\rm up}$ = 60 K). The profiles have been decomposed into Gaussian components and the best-fit profile is shown in blue. The offset component is highlighted in magenta for clarity. The baseline is shown in green and the source velocity with a red dashed line. \textit{Bottom:} Same as the top figure, except that the line shown is the excited H$_2$O 2$_{02}$--1$_{11}$ line at 988 GHz ($E_{\rm up}$ = 100 K). More details on the decomposition are found in \citet{kristensen10, kristensen12} for the NGC1333 sources and Ser SMM3.}
\label{fig:spectra_all}
\end{figure}
\subsection{Herschel observations}
The central positions of the six low-mass protostars were observed with HIFI on \textit{Herschel} in twelve different settings covering six H$_2^{16}$O and two CO transitions, and HCO$^+$, OH, CH$^+$, OH$^+$ and C$^+$ ($E_{\rm u}/k_{\rm B}\approx50-300$ K; see Table \ref{tab:settings} for an overview). Only Ser-SMM1 was observed in all settings; the other sources were observed in a subset (Tables \ref{tab:settings} and \ref{tab:obsid}).
Data were obtained using the dual beam-switch mode with a nod of 3\arcmin\ and a fast chop and continuum optimisation, except for the ground-state ortho-H$_2$O line at 557 GHz, where a position switch was used \citep[see][ for details]{kristensen12}. The diffraction-limited beam size ranges from 12\arcsec\ to 39\arcsec\ (2800--9200 AU for a distance of 235 pc). Data were reduced using HIPE ver. 8. The calibration uncertainty is taken to be 10\% for lines observed in Bands 1, 2, and 5 while it is 30\% in Band 4 \citep{roelfsema12}. The pointing accuracy is $\sim$2\arcsec. A main-beam efficiency of 0.65--0.75 is adopted (Table \ref{tab:settings}). Subsequent analysis of the data is performed in CLASS\footnote{\url{http://www.iram.fr/IRAMFR/GILDAS/}} including subtraction of linear baselines. H- and V-polarisations are co-added after inspection; no significant differences are found between the two data sets.
To compare observations done with different beam sizes, all components are assumed to arise in an unresolved physical component even in the smallest beams (12\arcsec). The emission is scaled to a common beam-size of 20\arcsec\ (the beam at 1 THz) using a simple geometrical scaling for a point source. We argue \textit{a posteriori} that this is an appropriate scaling (Sect. \ref{sec:excitation}).
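For a point source the main-beam temperature scales inversely with the beam solid angle, so intensities are multiplied by $(\theta_{\rm beam}/20'')^2$ when scaled to the common beam; for example, emission measured in the 38\farcs1 beam at 557 GHz is scaled up by $(38.1/20)^2\approx3.6$. This simple scaling neglects any error-beam contribution and only holds as long as the emitting region is compact compared with the smallest beam.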
\begin{table*}
\caption{Species and transitions observed with \textit{Herschel}-HIFI containing the offset component\tablefootmark{a}.}
\tiny
\begin{center}
\begin{tabular}{r c c c c c c c l}
\hline \hline
\multicolumn{1}{c}{Transition} & $\nu$ & $\lambda$ & $E_{\rm u}/k_{\rm B}$ & $A$ & Beam\tablefootmark{b} & $t_{\rm int}$\tablefootmark{c} & $\eta_{\rm MB}$\tablefootmark{b} & Sources \\
& (GHz) & ($\mu$m) & (K) & (s$^{-1}$) & ($''$) & (min.) \\
\hline
H$_2$O ~~~1$_{10}$--1$_{01}$ & \phantom{1}556.94 & 538.29 & \phantom{1}61.0 & 3.46(--3) & 38.1 & 13.0 & 0.75 & IRAS2A, IRAS3A, IRAS4A, IRAS4B, SMM1, SMM3 \\
2$_{12}$--1$_{01}$ & 1669.90 & 179.53 & 114.4 & 5.59(--2) & 12.7 & 23.7 & 0.71 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
1$_{11}$--0$_{00}$ & 1113.34 & 269.27 & \phantom{1}53.4 & 1.84(--2) & 19.0 & 43.5 & 0.74 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
2$_{02}$--1$_{11}$ & \phantom{1}987.93 & 303.46 & 100.8 & 5.84(--3) & 21.5 & 23.3 & 0.74 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
2$_{11}$--2$_{02}$ & \phantom{1}752.03 & 398.64 & 136.9 & 7.06(--3) & 28.2 & 18.4 & 0.75 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
3$_{12}$--3$_{03}$ & 1097.37 & 273.19 & 249.4 & 1.65(--2) & 19.7 & 32.5 & 0.74 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
3$_{12}$--2$_{21}$ & 1153.13 & 259.98 & 249.4 & 2.63(--3) & 18.4 & 13.0 & 0.64 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
\hline
CO ~~~ 10--9 & 1151.99 & 260.24 & 304.1 & 1.00(--4) & 18.4 & 13.0 & 0.64 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
16--15 & 1841.35 & 162.81 & 751.7 & 4.05(--4) & 11.5 & 44.6 & 0.70 & SMM1 \\
\hline
CH$^+$ ~~~ 1--0 & \phantom{1}835.14 & 358.97 & \phantom{1}40.1 & 2.3(--3) & 25.4 & 15.6 & 0.75 & IRAS2A, IRAS4A, IRAS4B, SMM1 \\
OH$^+$ ~~~ 1--0\tablefootmark{d} & 1033.12 & 290.18 & \phantom{1}49.6 & 1.8(--2) & 20.5 & 30.1 & 0.74 & IRAS2A, IRAS4A, IRAS4B, SMM1 \\
C$^+$ ~~~ 2--1 & 1900.54 & 157.74 & \phantom{1}91.2 & 2.30(--6) & 11.2 & 15.8 & 0.69 & IRAS2A, IRAS4A, IRAS4B, SMM1 \\
HCO$^+$ ~~~ 6--5 & \phantom{1}535.06 & 560.30 & \phantom{1}89.9 & 1.27(--2) & 39.6 & 43.7 & 0.75 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
CH ~~~ \phantom{6--5}\tablefootmark{d} & \phantom{1}536.76 & 558.52 & \phantom{1}25.8 & 6.80(--4) & 39.6 & 43.7 & 0.75 & IRAS2A, IRAS4A, IRAS4B, SMM1, SMM3 \\
OH ~~~ \phantom{6--5}\tablefootmark{d} & 1834.75 & 163.40 & 269.8 & 2.12(--2) & 11.6 & 44.6 & 0.70 & SMM1 \\
\hline
\end{tabular}
\tablefoot{For lines with hyperfine splitting (OH and OH$^+$) only the strongest component is shown here.
\tablefoottext{a}{From the JPL database of molecular spectroscopy \citep{pickett98}.}
\tablefoottext{b}{Half-power beam width, from \citet{roelfsema12}.}
\tablefoottext{c}{Total on $+$ off integration time incl. overheads.}
\tablefoottext{d}{Transitions with hyperfine splitting.}}
\end{center}
\label{tab:settings}
\end{table*}
\begin{table*}
\caption{H$_2$O emission in the offset component.}
\tiny
\begin{center}
\begin{tabular}{l c c c @{} c c c @{} c c c @{} c c c @{} c c c @{} c c c @{} c c c @{} c c c}
\hline \hline
& & \multicolumn{2}{c}{IRAS2A} && \multicolumn{2}{c}{IRAS3A} && \multicolumn{2}{c}{IRAS4A} && \multicolumn{2}{c}{IRAS4B} && \multicolumn{2}{c}{Ser-SMM1} && \multicolumn{2}{c}{Ser-SMM3} \vspace{1pt} \\
\cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13} \cline{15-16} \cline{18-19}
Transition & rms\tablefootmark{a} & $\int T_{\rm MB}$ d$\varv$ & $T_{\rm peak}$ && $\int T_{\rm MB}$ d$\varv$ & $T_{\rm peak}$ && $\int T_{\rm MB}$ d$\varv$ & $T_{\rm peak}$ && $\int T_{\rm MB}$ d$\varv$ & $T_{\rm peak}$ && $\int T_{\rm MB}$ d$\varv$ & $T_{\rm peak}$ && $\int T_{\rm MB}$ d$\varv$ & $T_{\rm peak}$ \\
& (mK) & (K\,km\,s$^{-1}$) & (K) && (K\,km\,s$^{-1}$) & (K) && (K\,km\,s$^{-1}$) & (K) && (K\,km\,s$^{-1}$) & (K) && (K\,km\,s$^{-1}$) & (K) && (K\,km\,s$^{-1}$) & (K) \\ \hline
1$_{10}$--1$_{01}$ & \phantom{1}7 & 2.55 & 0.06 && --1.25 & --0.19 && 4.23 & 0.40 && \phantom{1}3.78 & 0.89 && 0.71 & 0.17 && --1.59 & --0.11 \\
2$_{12}$--1$_{01}$ & 80 & 7.73 & 0.17 && \ldots & \ldots && 2.40 & 0.23 && 16.35 & 3.84 && 4.51 & 1.06 && --2.08 & --0.14 \\
1$_{11}$--0$_{00}$ & 16 & 3.96 & 0.09 && \ldots & \ldots && 3.33 & 0.31 && \phantom{1}5.78 & 1.36 && 1.51 & 0.36 && --1.56 & --0.10 \\
2$_{02}$--1$_{11}$ & 16 & 4.65 & 0.11 && \ldots & \ldots && 3.67 & 0.35 && \phantom{1}2.17 & 0.51 && 2.54 & 0.60 && \phantom{--}0.69 & \phantom{--}0.05 \\
2$_{11}$--2$_{02}$ & 18 & 2.42 & 0.06 && \ldots & \ldots && 1.96 & 0.18 && \phantom{1}1.44 & 0.34 && 1.37 & 0.32 && \phantom{--}0.52 & \phantom{--}0.04 \\
3$_{12}$--3$_{03}$ & 56 & 3.14 & 0.07 && \ldots & \ldots && 1.17 & 0.11 && \phantom{1}1.11 & 0.26 && 1.66 & 0.39 && \phantom{--}0.25 & \phantom{--}0.02 \\
3$_{12}$--2$_{21}$ & 70 & 3.16 & 0.07 && \ldots & \ldots && 2.09 & 0.20 && \phantom{1}1.16 & 0.27 && 2.66 & 0.63 && $<$0.05 & $<$0.01 \\
\hline
$\Delta\varv$ (km\,s$^{-1}$) & & \multicolumn{2}{c}{40} && \multicolumn{2}{c}{6} && \multicolumn{2}{c}{10} && \multicolumn{2}{c}{4} && \multicolumn{2}{c}{4} && \multicolumn{2}{c}{14} \\
$\varv_{\rm LSR}$ (km\,s$^{-1}$) & & \multicolumn{2}{c}{--5} && \multicolumn{2}{c}{5} && \multicolumn{2}{c}{--1} && \multicolumn{2}{c}{8} && \multicolumn{2}{c}{4} && \multicolumn{2}{c}{\phantom{1}2} \\
$\varv_{\rm offset}$\tablefootmark{b} (km\,s$^{-1}$) & & \multicolumn{2}{c}{--12.7} && \multicolumn{2}{c}{--3.3} && \multicolumn{2}{c}{--8.0} && \multicolumn{2}{c}{0.9} && \multicolumn{2}{c}{--4.5} && \multicolumn{2}{c}{--5.6} \\ \hline
\end{tabular}
\tablefoot{Obtained from Gaussian fits to each component; negative values are for absorption. Upper limits are 1$\sigma$.
\tablefoottext{a}{Measured in 1 km\,s$^{-1}$ channels.}
\tablefoottext{b}{Offset velocity with respect to the source velocity as reported by \citet{yildiz13}.}}
\end{center}
\label{tab:intensity}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth, angle=0]{vlsr_fwhm.eps}
\end{center}
\caption{Velocity of the offset component with respect to the source velocity as a function of $FWHM$ for all observed transitions. The plus signs show the results of the decomposition of each line, whereas the circles show the values of the offset and width chosen as fixed.}
\label{fig:vlsr_fwhm}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.75\columnwidth, angle=0]{smm1_medium.eps}
\end{center}
\caption{Continuum-subtracted HIFI spectra of H$_2$O, CO, OH, CH$^+$, OH$^+$ obtained towards the central position of Ser SMM1 ($\varv_{\rm source}$ = 8.5 km s$^{-1}$). The red vertical line indicates $\varv_{\rm source}$, while the blue dashed line shows the position of the offset component. The blended OH triplet is centred on the strongest hyperfine component as indicated by the three vertical black lines situated directly beneath the OH spectrum. The same is the case for the OH$^+$ spectrum, with the location of the hyperfine components and their relative strengths indicated by the black lines above the spectrum. The spectra have been shifted vertically for clarity and in some cases scaled by a factor, indicated on the figure. CH$^+$ and OH$^+$ are fitted by a single Gaussian, OH by two, and CO and H$_2$O by three; the offset components are shown in magenta.}
\label{fig:spectra}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{Quantifying emission from the offset component}\label{sect:gauss}
Figure \ref{fig:spectra_all} shows the H$_2$O 1$_{10}$--1$_{01}$ (557 GHz) and 2$_{02}$--1$_{11}$ (988 GHz) spectra towards all sources with an offset component; the offset component is marked in the figure. The remaining parts of each profile will be presented and analysed in a forthcoming paper (Mottram et al. in prep.). The component is characterised by being significantly blue-shifted ($\varv_{\rm source} - \varv_{\rm LSR} > 2$ km s$^{-1}$), or, for the isolated case of IRAS4B, by being located close to the source velocity ($\varv_{\rm source} - \varv_{\rm LSR} < 1$ km s$^{-1}$). Furthermore, the $FWHM$ ($\Delta\varv$) is $\gtrsim$ 4 km s$^{-1}$, extending up to 40 km s$^{-1}$. These ranges may be caused by inclination effects, which will be discussed further in Sect. \ref{sect:incl}. To quantify the emission, each profile is decomposed into Gaussian components, with the deep absorptions seen in the ground-state lines masked out. The resulting offsets and $FWHM$ are shown in Fig. \ref{fig:vlsr_fwhm}. For each source, a characteristic offset velocity and $FWHM$ were chosen based on the decomposition of high-$S/N$ data without self-absorption, typically the 3$_{12}$--3$_{03}$ (1097 GHz) and 2$_{02}$--1$_{11}$ (988 GHz) transitions (Fig. \ref{fig:vlsr_fwhm}). These parameters were fixed and the decomposition redone for all spectra, letting only the intensity and the parameters of the secondary component be free. The secondary component is typically the broader component (except for the case of IRAS2A) associated with the outflow \citep{kristensen10, kristensen12}.
The main uncertainty in the listed intensities comes from the fitting and the uniqueness of the fit, particularly for lines with low signal-to-noise ratios. For strong and strongly offset components, the fit is unique and the corresponding uncertainty low, as illustrated by the scatter of the width and offset shown in Fig. \ref{fig:vlsr_fwhm}. By fixing the width and offset we have removed this scatter by assuming that the shape of the component is independent of excitation. The decompositions obtained by fixing the width and offset are as good as those where all parameters are left free, with the quality of the fit measured by the rms of the residual. Typically, the rms of the residual is \mbox{$<$ 1.5} times the rms in a line-free region.
CO 10--9 spectra were also examined for the presence of an offset component by using the same kinematic parameters as in the H$_2$O decomposition. The strongest CO 10--9 line is observed towards SMM1 \citep{yildiz13} where no direct evidence is found for an offset component; the line profile can be decomposed without the need for an additional offset component. SMM1 does show a clear offset component in CO 16--15, and the question therefore arises: how much emission from the offset component can be hidden in the CO 10--9 profile? Figure \ref{fig:smm1_decomp} shows the CO 16--15, 10--9 and H$_2$O 3$_{12}$--3$_{03}$ spectra obtained towards SMM1. The offset component was first fitted using CO 16--15 and subsequently the $\Delta\varv$ and $\varv_{\rm LSR}$ were fixed and used to quantify emission in the CO 10--9 profile. Second, the same exercise was done, but with the offset parameters fixed from the H$_2$O decomposition. There is little difference in integrated intensity whether the CO or H$_2$O parameters are chosen (5.5 vs. 4.9 K km s$^{-1}$). The quality of the fits is the same as if the offset component is not included, and we therefore treat the inferred intensities as upper limits. Similarly for the other sources, only upper limits are available and these are based on the H$_2$O kinematic parameters (Table~\ref{tab:colimit}).
CH$^+$ and OH$^+$ spectra show the offset component in absorption (Benz et al. in prep.). OH$^+$ is detected towards all sources whereas CH$^+$ is detected towards two out of four sources. IRAS4B shows the CH$^+$ profile to be shifted both with respect to the source velocity and the offset component seen in H$_2$O; it is centred on 5.7--6.0 km s$^{-1}$ and is thus offset by more than 1 km s$^{-1}$ towards the blue. The CH$^+$ feature towards SMM1 is centred on the velocity of the offset component and has a larger $FWHM$ than the offset component seen in H$_2$O and CO (Fig.~\ref{fig:spectra}). C$^+$ is detected towards IRAS2A and SMM1, in both cases blue-shifted with respect to the source velocity by $\sim$ 4--5 km s$^{-1}$.
OH is detected towards Ser SMM1 with HIFI at 1835 GHz (Wampfler et al. in prep.). A Gaussian decomposition is complicated by the fact that the hyperfine transitions are very closely spaced (2.4 km s$^{-1}$). Nevertheless, by fixing the intensity ratios of the hyperfine components, i.e., assuming the emission is optically thin and that the intensity ratios scale with $A_{\rm ul} g_{\rm u}$, a fit is obtained; the OH emission likely contains a mixture of the offset and broad components. Because of the shape of the profile, further decomposition is not performed; instead, we assume that 50\% of the emission can be attributed to the offset component. HIFI OH spectra are not available towards the other sources as part of WISH.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth, angle=0]{smm1_decomposition.eps}
\end{center}
\caption{Decomposition of the CO 10--9 profile towards SMM1 using either the best-fit parameters obtained from CO 16--15 (top) or H$_2$O (bottom). The Gaussian profiles illustrate the maximum amount of emission that can be hidden in the CO 10--9 profile.}
\label{fig:smm1_decomp}
\end{figure}
\subsection{Time variability}
IRAS4A was re-observed in the H$_2$O 3$_{12}$--3$_{03}$ transition at 1097 GHz as part of an OT2 programme (PI: Visser) on Aug. 2, 2012, two years after the original observations (July 31, 2010). During that period, the offset component doubled in intensity (Fig. \ref{fig:i4a_time}). IRAS2A, IRAS4B and SMM1 were also re-observed as part of the same OT2 programme, but show no signs of variability. All H$_2$O observations towards IRAS4A presented here were performed over a period of two months, and we assume that no significant variability took place over that time period.
To verify that the change in emission is not caused by a pointing offset when the data were obtained, the pointing offsets were checked using HIPE 9.1. The recorded pointing offset towards IRAS4A was 3\farcs8 in July 2010 and less than 0\farcs5 in 2012. The pointing offsets were similar towards IRAS4B at both epochs, and $\lesssim$ 2\farcs5 at both epochs for the other sources. For the pointing offset to cause a doubling in intensity, the offset component would need to be located at the edge of the HIFI beam, i.e., at a distance of more than 10$''$ from the pointing centre; otherwise the pointing alone cannot account for the change. Below in Sect. \ref{sect:offset} we argue why this origin is unlikely, based on the hydride absorption.
HIFI has an inherent calibration uncertainty of $\sim$ 10\% \citep{roelfsema12}. However, only the offset component shows a noticeable difference in intensity; the broader underlying outflow component appears unchanged between the two epochs. Towards SMM1, on the other hand, both the broad and offset components change intensity slightly, a change which can be attributed to calibration uncertainties; the difference in intensity is 10\% across the spectrum. In conclusion, the most likely explanation is that the offset component seen towards IRAS4A changed in intensity over the past two years.
Spectra of species other than H$_2$O towards IRAS4A were obtained over a period of 6 months from March 3, 2010 to September 3, 2010, and we assume that little or no change took place in that time frame.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth, angle=0]{spectra-312-303.eps}
\end{center}
\caption{H$_2$O 3$_{12}$--3$_{03}$ spectra at 1097 GHz towards SMM1, IRAS2A, IRAS4A and IRAS4B observed at two different epochs. The offset component towards IRAS4A has doubled in intensity, as shown by the two Gaussian fits in magenta. All spectra are shifted such that the source velocity is at 0 km s$^{-1}$.}
\label{fig:i4a_time}
\end{figure}
\subsection{CO and limits on kinetic temperature}
The offset component is seen in CO $J$=16--15 towards SMM1, but not in the lower-excited $J$=10--9 line. Thus, the upper limit from $J$=10--9 can be used to provide a lower limit on the rotational temperature and a corresponding upper limit on the column density, assuming the level populations are in local thermodynamic equilibrium and the emission is optically thin. The results are illustrated in Fig. \ref{fig:co_rotdiag}. The upper limit on the column density is $\sim$7$\times$10$^{13}$ cm$^{-2}$ in the 11\farcs5 beam of the CO 16--15 transition.
Assuming LTE and that both lines are optically thin, the limit on the rotational temperature is $T_{\rm rot}$ $\gtrsim$ 270 K. \citet{gecco12} find that the CO ladder towards Ser SMM1 from $J$ = \mbox{4--3} to 49--48 consists of three rotational-temperature components with $T_{\rm rot}$ = 100, 350 and 600 K, respectively, corresponding to low-$J$, mid-$J$ and high-$J$ CO emission ($J$ $\lesssim$ 14, 26 and 42, respectively). The offset component is clearly not associated with the 100-K temperature component, but based on the limits on the rotational temperature it is not possible to conclude whether it is associated with the warm or hot component. Indeed, if the distribution is continuous it is possible that the rotational temperatures do not correspond to discrete temperature regimes \citep[e.g.,][]{neufeld12}.
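The limit quoted above follows from the standard rotational-diagram relations: assuming optically thin, beam-matched emission, the upper-level column density of each line is
\begin{equation*}
N_{\rm u} = \frac{8\pi k_{\rm B}\nu^2}{hc^3A_{\rm ul}}\int T_{\rm MB}\,{\rm d}\varv ,
\end{equation*}
and the two-line rotational temperature is
\begin{equation*}
T_{\rm rot} = \frac{(E_{\rm u,16}-E_{\rm u,10})/k_{\rm B}}{\ln\bigl[(N_{\rm u,10}/g_{\rm u,10})\,/\,(N_{\rm u,16}/g_{\rm u,16})\bigr]} ,
\end{equation*}
where $E_{\rm u,16}-E_{\rm u,10}$ corresponds to 447.6 K and $g_{\rm u}=2J_{\rm u}+1$ (Table \ref{tab:settings}). Inserting the CO 16--15 detection and the CO 10--9 upper limit from Table \ref{tab:colimit}, scaled to a common beam, yields the lower limits shown in Fig. \ref{fig:co_rotdiag}.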
For the specific case of SMM1 it is clear that the offset component is a distinct physical component based on the line profile, and this component must be present in the observed CO ladder. Moreover, its contribution is embedded in the integrated emission for $J_{\rm up}$ $>$ 10, which further illustrates the need for high spectral resolution observations to isolate emission from the separate dynamical components, as opposed to observations with, e.g., SPIRE and PACS on \textit{Herschel}. The same is likely the case for the other sources, although a future analysis will show to what extent the CO $J$ = 10--9 data can be used to constrain the CO rotational temperature. If the emission is strongly beam-diluted (see next section) it is likely that high angular resolution observations with a facility such as ALMA using CO $J$ = 6--5 will be able to further constrain the rotational temperature as well.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth, angle=0]{co_trot_limit.eps}
\end{center}
\caption{CO rotational diagram for Ser SMM1 based on the upper limit of CO $J$ = 10--9 emission obtained from two different decompositions, and the detection in CO $J$ = 16--15. The lower limits on the rotational temperatures are shown for each upper limit on the CO $J$ = 10--9 emission.}
\label{fig:co_rotdiag}
\end{figure}
\begin{table}
\caption{CO 16--15 and limits on CO 10--9 integrated intensities in the offset components.}
\tiny
\begin{center}
\begin{tabular}{l c @{} c c c c}
\hline \hline
& CO 10--9 && \multicolumn{3}{c}{CO 16--15} \vspace{1pt} \\
\cline{2-2} \cline{4-6}
& $\int$ $T_{\rm MB}$ d$\varv$ && $\int$ $T_{\rm MB}$ d$\varv$ & $\varv_{\rm LSR}$ & $\Delta\varv$ \\
Source & (K km s$^{-1}$) && (K km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) \\
\hline
IRAS2A & $<$ 0.9 && \ldots \\
IRAS4A & $<$ 1.2 && \ldots \\
IRAS4B & $<$ 6.6 && \ldots \\
SMM1(H$_2$O)\tablefootmark{a} & $<$ 4.9 && \ldots \\
SMM1(CO)\tablefootmark{b} & $<$ 5.5 && 6.2 & 5.5 & 3.4 \\
SMM3 & $<$ 2.3 && \ldots \\
\hline
\end{tabular}
\tablefoot{Upper limits are 1$\sigma$ and are obtained by fixing the position and width of the offset component from the H$_2$O data.
\tablefoottext{a}{$\varv_{\rm LSR}$ and $FWHM$ from the H$_2$O profiles.}
\tablefoottext{b}{$\varv_{\rm LSR}$ and $FWHM$ from the CO 16--15 profile.}}
\end{center}
\label{tab:colimit}
\end{table}
\subsection{H$_2$O and CO excitation conditions}\label{sec:excitation}
To determine the H$_2$O excitation conditions, $n$(H$_2$), $T$ and $N$(H$_2$O), specific H$_2$O line ratios are examined. The H$_2$O \mbox{3$_{12}$--3$_{03}$ / 3$_{12}$--2$_{21}$} ratio is particularly useful in providing initial constraints on the column density. Because the two transitions share the same upper level, the ratio is straightforward to calculate in the optically thin limit and is equal to 6.7. However, observations of these two transitions at 1153 and 1097 GHz, i.e., in similar beams, reveal an intensity ratio in the offset component ranging from 0.6 (IRAS4A) to 1.7 (SMM3). Such a ratio can only be explained if the 3$_{12}$--3$_{03}$ transition is optically thick ($\tau$ $>$ a few) and the column density is greater than $\sim$ 10$^{16}$ cm$^{-2}$ for any given emitting area.
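The diagnostic power of this ratio can be seen directly in the optically thin limit: since the two transitions share the upper level, their intensity ratio (in K km s$^{-1}$) reduces to
\begin{equation*}
\frac{\int T_{\rm MB}(3_{12}\mbox{--}3_{03})\,{\rm d}\varv}{\int T_{\rm MB}(3_{12}\mbox{--}2_{21})\,{\rm d}\varv}\bigg|_{\rm thin} = \frac{A(3_{12}\mbox{--}3_{03})\,/\,\nu^2(3_{12}\mbox{--}3_{03})}{A(3_{12}\mbox{--}2_{21})\,/\,\nu^2(3_{12}\mbox{--}2_{21})} ,
\end{equation*}
independent of temperature and density. Observed ratios far below this value therefore immediately signal a large optical depth in the stronger line.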
Figure \ref{fig:h2o_diagnostic} shows the H$_2$O 2$_{02}$--1$_{11}$ / 2$_{11}$--2$_{02}$ versus H$_2$O 3$_{12}$--3$_{03}$ / 3$_{12}$--2$_{21}$ line ratios for various H$_2$O column densities (4$\times$10$^{15}$--10$^{17}$ cm$^{-2}$), H$_2$ densities (10$^6$--10$^9$ cm$^{-3}$) and a temperature of 750 K. The ratios are calculated using the non-LTE statistical equilibrium code RADEX \citep{vandertak07} for line widths of $\Delta\varv$ = 4, 10, 14 or 40 km s$^{-1}$ corresponding to the $FWHM$ of the different offset components. The H$_2$O-H$_2$ collisional rate coefficients from \citet{daniel11} are used and the H$_2$ and H$_2$O o/p ratios are set to 3, the high-temperature equilibrium value. Observed line ratios are also shown and these have been scaled to the same 20$''$ beam assuming that the emitting region is much smaller than the beam.
The resulting line ratios typically change by less than 10\% for temperatures in the range of 500--1000 K, i.e. they only weakly depend on the assumed temperature. \citet{gecco12} find from an excitation analysis of more H$_2$O lines that a kinetic temperature of $\sim$800 K reproduces the H$_2$O line ratios as well as the high-$J$ part of the CO ladder observed towards SMM1. We choose to fix the temperature to 750 K, the halfway point between 500 and 1000 K but note that this value is not constrained by the H$_2$O data.
For most model results there is a degeneracy between a (relatively) low H$_2$ density ($\sim$ 5 $\times$ 10$^6$ cm$^{-3}$), low H$_2$O column density (a few times 10$^{16}$ cm$^{-2}$) and a high H$_2$ density ($>$ 10$^8$ cm$^{-3}$), high H$_2$O column density ($>$ 10$^{17}$ cm$^{-2}$) (Fig. \ref{fig:h2o_diagnostic}). This degeneracy is most evident when comparing model results to the observations of IRAS4A, IRAS4B and SMM1, and corresponds to whether the line emission is sub-thermally or thermally excited. For the high column density case, the ground-state H$_2$O lines are very optically thick, $\tau$ $>$ 100, which may affect the accuracy of the radiative transfer. In the following, the results with the lowest column density and thereby lowest opacity will be analysed. The model results are summarised in Table \ref{tab:exc_con}.
The offset component was linked with 22 GHz H$_2$O maser emission in \citet{kristensen12}. For H$_2$O to mase, a density of $\sim$10$^7$ cm$^{-3}$ is required \citep{elitzur92}, which is typically a factor of two higher than what is inferred here. The maser density is based on H$_2$O-H$_2$ collisional rate coefficients which are more than twenty years old. Furthermore, the error bar on the observed line ratios is such that the best-fit densities span an order of magnitude and a density of 10$^7$ cm$^{-3}$ cannot be excluded. Thus the conclusion that the offset component is coincident with masers remains unchanged.
The absolute H$_2$O 2$_{02}$--1$_{11}$ intensity from the RADEX models are compared to the observed intensity in the offset component to estimate the beam filling factor, or, alternatively, the radius of the emitting region, $r$. Finally, the CO column density is varied until the CO 16--15 intensity towards SMM1 is recovered for the same conditions as for H$_2$O. Typically, the radius of the emitting region is of the order of 100 AU (Table \ref{tab:exc_con}), or about $\sim$~0\farcs5 at a distance of 235 pc.
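In this conversion the beam filling factor and the size of the emitting region are related through
\begin{equation*}
\eta_{\rm bf} = \frac{(\int T_{\rm MB}\,{\rm d}\varv)_{\rm obs}}{(\int T_{\rm MB}\,{\rm d}\varv)_{\rm model}}, \qquad r \simeq \sqrt{\eta_{\rm bf}}\;\frac{\theta_{\rm beam}}{2}\,d ,
\end{equation*}
a geometric sketch that treats the beam as a top-hat and the emitting region as a face-on circular patch. For the 20$''$ reference beam and $d$ = 235 pc, $r$ $\simeq$ 110 AU corresponds to $\eta_{\rm bf}$ $\simeq$ 2$\times$10$^{-3}$.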
Towards SMM1, \citet{gecco12} obtain CO column densities of the warm ($T$ $\sim$ 375 K) and hot ($\sim$ 800 K) components of 10$^{18}$ cm$^{-2}$ and 5$\times$10$^{16}$ cm$^{-2}$, respectively, over a region with a radius of 500 AU. Note that these temperatures are inferred from modelling the CO ladder and H$_2$O emission, and are not identical to the measured CO rotational temperature \citep[600~K,][]{gecco12}. If our inferred CO column density of 10$^{18}$ cm$^{-2}$ over a 110 AU emitting radius is scaled to a radius of 500 AU, the CO column density becomes $\sim$ 5$\times$10$^{16}$ cm$^{-2}$. It is therefore likely that the hot component observed in the high-$J$ CO data is identical to the offset component identified in the HIFI H$_2$O and CO data; the column density of the warm CO component is too high to be hidden in the HIFI offset component. This analysis shows that the profiles are necessary for disentangling the different kinematical components in each spectrum, and that the hot CO detected with PACS is likely a distinct physical component towards this source.
This work assumes that the temperature of the H$_2$O emitting gas is the same as that of CO, close to what has been determined by, e.g., \citet{gecco12}. What if this is not the case? Furthermore, how much column density can be hidden in the CO 10--9 profiles, where the offset component is not detected? The CO $J$=10--9 contribution of the warm and hot components is estimated from the rotational diagrams in \citet{karska13}, \citet{herczeg12} and \citet{gecco12}. In all cases, the integrated CO $J$=10--9 intensity is of the order of $\sim$30--40 K km s$^{-1}$ for the warm component and $\sim$ 2.5--3 K km s$^{-1}$ for the hot component when extrapolating the linear fits from \mbox{$J$ $\sim$ 15--40} down to $J$ = 10 and assuming emission is optically thin. The opacity can be estimated from optically thin $^{13}$CO 10--9 emission and the $J$ = 10--9 line is optically thin away from the line centre \citep{sanjosegarcia13, yildiz13}. The upper limit for the offset component in the CO 10--9 data is \mbox{$\sim$ 1--6 K km s$^{-1}$} (Table \ref{tab:colimit}). Thus, the offset component would have easily been detected in the HIFI spectra if it were associated with the warm PACS component, but not if it is associated with the hot component.
The H$_2$O/CO abundance ratio towards SMM1 is $\sim$0.04. Our analysis assumes that CO and H$_2$O share excitation conditions. This may not be the case, as CO is shifted with respect to H$_2$O by $\sim$ 1.5 km s$^{-1}$ towards SMM1, although they have identical line widths. \citet{gecco12} find a higher H$_2$O/CO abundance ratio of 0.4 for the same H$_2$ density of 5$\times$10$^6$ cm$^{-3}$, but for a different emitting region. Since little H$_2$ data exist towards the central source position, and certainly no H$_2$ data with the velocity resolution required to isolate the offset component, the H$_2$O/CO ratio serves as a proxy for the H$_2$O abundance with respect to H$_2$. For a canonical CO abundance of 10$^{-4}$ the H$_2$O abundance is 4$\times$10$^{-6}$ and thus lower by several orders of magnitude compared to what would be expected if all oxygen were locked up in H$_2$O ($\sim$ 3$\times$10$^{-4}$).
\begin{figure*}
\sidecaption
\includegraphics[width=12cm, angle=0]{diagnostic_ratio.eps}
\caption{H$_2$O 2$_{02}$--1$_{11}$ / 2$_{11}$--2$_{02}$ versus H$_2$O 3$_{12}$--3$_{03}$ / 3$_{12}$--2$_{21}$ line ratios for various H$_2$O column densities and H$_2$ densities from RADEX models. The four panels are for different line widths corresponding to the width of each of the offset components. The temperature is fixed at 750~K. The different line styles correspond to different H$_2$O column densities and the dots are for different H$_2$ densities. The observed ratios are scaled to the same beam and marked with 15\% error bars in both ratios.}
\label{fig:h2o_diagnostic}
\end{figure*}
\begin{table}
\caption{H$_2$O and CO excitation conditions.}
\begin{center}
\begin{tabular}{l c c c c c}
\hline \hline
Source & $n$(H$_2$) & $N$(H$_2$O)\tablefootmark{a} & $N$(CO)\tablefootmark{a} & $T_{\rm kin}$\tablefootmark{b} & $r$ \\
& (cm$^{-3}$) & (cm$^{-2}$) & (cm$^{-2}$) & (K) & (AU) \\
\hline
IRAS2A & 5$\times$10$^6$ & 4$\times$10$^{16}$ & & 750 & 80 \\
IRAS4A & 5$\times$10$^6$ & 1$\times$10$^{16}$ & & 750 & 140 \\
IRAS4B & 1$\times$10$^7$ & 4$\times$10$^{15}$ & & 750 & 160 \\
SMM1 & 5$\times$10$^6$ & 4$\times$10$^{16}$ & 1$\times$10$^{18}$ & 750 & 110 \\
SMM3 & 5$\times$10$^7$ & 1$\times$10$^{16}$ & & 750 & \phantom{1}50 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Column density over the emitting region with radius $r$.}
\tablefoottext{b}{Kinetic temperature fixed in the model.}
}
\end{center}
\label{tab:exc_con}
\end{table}
\subsection{OH$^+$, C$^+$ and CH$^+$}
The hydride observations and their characteristics are reported in Benz et al. (in prep.). Here we summarise the main results concerning the offset component and repeat the hydride column densities for completeness.
The offset component is uniquely identified in absorption in both OH$^+$ and CH$^+$ towards IRAS4B and SMM1, and in OH$^+$ towards all sources for which it was observed. The velocity offsets are consistent with those seen in H$_2$O and, for the case of SMM1, in CO $J$ = 16--15. From the absorption features it is possible to measure the absorbing column directly through
\begin{equation}
N_{\rm low} = \frac{8\pi}{c^3} \frac{\nu^3 g_{\rm low}}{A_{\rm ul} g_{\rm up}} \int \tau\, {\rm d}\varv\ ,
\end{equation}
where $\nu$ is the line frequency, $A_{\rm ul}$ the Einstein $A$-coefficient, and $g_{\rm low}$ and $g_{\rm up}$ the statistical weights of the lower and upper levels. The opacity, $\tau$, is determined as $\tau$ = ln($T_{\rm cont}$ / $T_{\rm line}$). In determining the column density, it is implicitly assumed that there is no re-emission of the absorbed photons. The measured values are given in Table \ref{tab:hydrides} along with 3$\sigma$ upper limits (Benz et al. in prep.).
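A minimal numerical implementation of this measurement is sketched below; the molecular constants are approximate values for the CH$^+$ $J$=1--0 line at 835.1 GHz, and the absorption profile is synthetic rather than observed:
\begin{verbatim}
import numpy as np

c = 2.998e10   # speed of light [cm s^-1]

def column_from_absorption(v_kms, T_line, T_cont,
                           nu, A_ul, g_low, g_up):
    """N_low from an absorption feature, assuming no re-emission."""
    tau = np.clip(np.log(T_cont / T_line), 0.0, None)
    int_tau = np.trapz(tau, v_kms * 1e5)   # integral of tau dv [cm s^-1]
    return 8.0 * np.pi / c**3 * nu**3 * g_low / (A_ul * g_up) * int_tau

# Synthetic Gaussian absorption feature against the continuum
v = np.linspace(-30.0, 10.0, 400)        # velocity [km s^-1]
T_cont = 1.0                             # continuum level [K]
tau_v = 0.8 * np.exp(-4 * np.log(2) * (v + 8.0)**2 / 4.0**2)
T_line = T_cont * np.exp(-tau_v)

# Approximate CH+ J=1-0 parameters: 835.1 GHz, A ~ 6.4e-3 s^-1
N = column_from_absorption(v, T_line, T_cont,
                           nu=835.1e9, A_ul=6.4e-3, g_low=1.0, g_up=3.0)
print(f"N(CH+) ~ {N:.1e} cm^-2")
\end{verbatim}
For an offset-component-like feature ($\tau_{\rm peak}$ $\sim$ 1, $FWHM$ = 4 km s$^{-1}$), this yields $N$ $\sim$ 10$^{13}$ cm$^{-2}$, of the same order as the detections in Table \ref{tab:hydrides}.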
The CH$^+$ column densities range from upper limits of a few times 10$^{11}$ cm$^{-2}$ to detections of 2 $\times$ 10$^{13}$ cm$^{-2}$. This range is similar to that found in diffuse interstellar clouds \citep[$\sim$ 10$^{12}$--10$^{14}$ cm$^{-2}$;][]{gredel97}. The OH$^+$ column densities are of the order of 10$^{13}$ cm$^{-2}$, similar to what \citet{bruderer10} measured towards the high-mass star-forming region AFGL2591, $N$(OH$^+$) $\sim$ 1.6 $\times$ 10$^{13}$ cm$^{-2}$, whereas the CH$^+$ column densities towards the low-mass objects are 1--2 orders of magnitude lower than towards AFGL2591, $N$(CH$^+$) $\sim$ 1.8 $\times$ 10$^{14}$ cm$^{-2}$.
C$^+$ is not uniquely identified with the offset component, although an absorption feature is seen towards SMM1 at the velocity of the offset component, and towards IRAS2A closer to the source velocity. The measured column densities are 3.9$\times$10$^{17}$~cm$^{-2}$ (IRAS2A) and 4.2$\times$10$^{16}$~cm$^{-2}$ (SMM1; Table \ref{tab:hydrides}). The observations were performed in dual-beam-switch mode and it cannot be ruled out that some emission is missing because it is chopped out. For example, \citet{larsson02} observed extended [\ion{C}{ii}] emission over the entire Serpens core with ISO-LWS. However, the chopping is expected to mainly affect any kinematical component at or close to the source velocity; the offset component is blue-shifted by several km s$^{-1}$, which suggests that the [\ion{C}{ii}] absorption is not caused by the ambient cloud or Galactic foreground but is intrinsic to the sources.
\begin{table}
\caption{Column densities of OH$^+$, C$^+$ and CH$^+$ where detected, and 3$\sigma$ upper limits on CH and HCO$^+$ column densities.}
\begin{center}
\scriptsize
\begin{tabular}{l c c c c c}
\hline \hline
Source & $N$(OH$^+$) & $N$(CH$^+$) & $N$(C$^+$) & $N$(HCO$^+$) & $N$(CH) \\
& (cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) \\
\hline
IRAS2A & 2.2$\times$10$^{13}$ & $<$4.2$\times$10$^{11}$ & 3.9$\times$10$^{17}$ & $<$8.0$\times$10$^{13}$ & $<$3.7$\times$10$^{14}$ \\
IRAS4A & 1.5$\times$10$^{13}$ & $<$6.4$\times$10$^{11}$ & $<$9.6$\times$10$^{16}$ & $<$6.6$\times$10$^{13}$ & $<$3.1$\times$10$^{14}$ \\
IRAS4B & 7.5$\times$10$^{12}$ & 1.8$\times$10$^{12}$ & $<$3.9$\times$10$^{17}$ & $<$2.0$\times$10$^{14}$ & $<$9.4$\times$10$^{14}$ \\
SMM1 & 3.3$\times$10$^{13}$ & 2.4$\times$10$^{13}$ & 4.2$\times$10$^{16}$ & $<$3.3$\times$10$^{14}$ & $<$1.5$\times$10$^{15}$ \\
SMM3 & \ldots & \ldots & \ldots & $<$1.1$\times$10$^{15}$ & $<$4.9$\times$10$^{15}$ \\
\hline
\end{tabular}
\tablefoot{OH$^+$ and CH$^+$ are from Benz et al. (in prep.). SMM3 was not targeted for observations of OH$^+$, CH$^+$ and C$^+$.
}
\end{center}
\label{tab:hydrides}
\end{table}
\subsection{OH, CH and HCO$^+$}
The offset component is not uniquely identified in the spectra of OH, CH and the very deep HCO$^+$ $J$ = 6--5 line obtained serendipitously in our observations of H$_2^{18}$O 1$_{10}$--1$_{01}$ \citep{kristensen10}. Nevertheless, for the OH HIFI spectrum towards SMM1, we assume that 50\% of the emission and therefore 50\% of the column density can be assigned to the offset component (see discussion above, Sect. \ref{sect:gauss}). An OH column density of 5$\times$10$^{15}$ cm$^{-2}$ is adopted, a value which is probably accurate to within a factor of a few \citep{wampfler13}.
For CH and HCO$^+$, the emitting region sizes, temperatures, H$_2$ densities and line widths are taken to be the same as for H$_2$O and CO. For the case of CH, only one transition is observed and we therefore adopt the upper limit on the rotational temperature from \citet{bruderer10} of 25 K to estimate the total column density. If CH is in LTE and is optically thin with a rotational temperature of $>$ 500 K, the upper limit is only a factor of $\sim$ 5 higher than what is given in Table \ref{tab:hydrides}. The same procedure is adopted for HCO$^+$ 6--5, with the exception that the rotational temperature is taken to be $>$ 500 K: the critical density of the HCO$^+$ 6--5 transition ($\sim$2$\times$10$^7$ cm$^{-3}$) is comparable to the inferred H$_2$ densities, so the emission may be close to LTE. If HCO$^+$ is instead strongly subthermally excited and the rotational temperature is only 25 K, the column densities are overestimated by $\sim$ 25\%. Typical upper limits are $\sim$ 10$^{14}$ cm$^{-2}$ and 10$^{15}$ cm$^{-2}$ for HCO$^+$ and CH, respectively.
\section{Discussion}
\label{sec:disc}
\subsection{Location of the offset component}\label{sect:offset}
Because the offset component is typically blue-shifted and because it sometimes appears in absorption in certain species against the continuum, it is possible to constrain the physical location of the component in the protostellar system. First, we assume that the offset component consists of both a red- and blue-shifted component, but that the red-shifted component is hidden from view by some obscuring agent. This obscuring agent may consist of either gas or dust (or both), and both possibilities are discussed below.
The CO 16--15 emission is optically thin \citep{gecco12}, and the red-shifted counterpart is therefore the most difficult to hide. Is it possible to have a layer of CO gas between the blue- and red-shifted offset components that could shield the red-shifted component, and if so, what would the conditions need to be? At densities of 5$\times$10$^6$ cm$^{-3}$ and above, the optical depth of the CO 16--15 line is nearly independent of density, and thus only depends on temperature and column density. For temperatures greater than $\sim$ 100 K, the optical depth scales almost linearly with temperature. Thus, for $N$(CO) = 3$\times$10$^{18}$ cm$^{-2}$, $T$ = 750 K, and $\Delta\varv$ = 4 km s$^{-1}$ (the width appropriate for SMM1), the CO 16--15 transition is optically thick with $\tau$ = 3. However, such conditions yield significant emission in the lower-$J$ transitions, and although the component would remain hidden in higher-$J$ lines, it would appear in lower-$J$ lines. Only for low temperatures and high column densities, e.g., $N$(CO) = 3$\times$10$^{19}$ cm$^{-2}$ and $T$ = 100 K, is emission in both the high-$J$ and low-$J$ lines optically thick, similar to the conditions observed in a Herbig Ae/Be disk \citep{bruderer12} where even CO 16--15 is marginally optically thick. For H$_2$ densities of 5$\times$10$^6$ cm$^{-3}$, a column length of $\gtrsim$ 400 AU is required to obscure this component from view. If the density of the gas is higher, 10$^8$ cm$^{-3}$, the column length is correspondingly lower, 100 AU. Such length scales correspond to the inferred sizes of embedded disks around Class 0 objects \citep[e.g.,][]{jorgensen07, jorgensen09} and are also similar to the inferred size of the emitting region (Table \ref{tab:exc_con}). Gas densities in excess of 10$^8$ cm$^{-3}$ are only found close to the protostar in the disk, where the temperature is low. Thus it is possible that the red-shifted counterpart is hidden by large amounts of high-density, cold CO gas.
The red counterpart of the offset component is also obscured in H$_2$O emission. However, for conditions similar to those discussed above ($n$ = 5$\times$10$^6$ cm$^{-3}$, $T$ = 100 K), an H$_2$O column of only $\sim$ 10$^{16}$ cm$^{-2}$ is required to obscure the red-shifted emission. If the gas and dust temperatures are equal and close to 100 K, the water abundance is expected to be within a factor of a few of the CO abundance, and thus it is straightforward to hide any water emission appearing on the red side of the spectrum by colder H$_2$O gas.
The shielding molecular gas can be associated with the entrained gas in the molecular outflow. If that is the case, then a significant fraction of the outflow is entrained on very small scales ($<$ a few hundred AU) in order to hide the red offset component, which also resides in the inner few hundred AU. Alternatively, the obscuring gas originates in the infalling envelope. Free-falling gas towards a protostar with a mass of 0.5 $M_\odot$ has a velocity of 3 km s$^{-1}$ at a distance of 100 AU, and therefore it is possible that the gas flowing towards the protostar (red-shifted and located on the side facing us) shields the red offset component located on the far side of the system in the sources where the offset and width are low. For the case of IRAS2A, where the offset is 13 km s$^{-1}$ and the width is 40 km s$^{-1}$, the infalling gas cannot shield the red-shifted offset component.
Shielding by the dust is another possibility. The lowest frequency at which the offset component is detected is at 557 GHz in the H$_2$O 1$_{10}$--1$_{01}$ transition. Adopting a dust opacity of 5 cm$^2$ g$^{-1}$, the opacity from Table 1, column 5 of \citet{ossenkopf94} at 500 $\mu$m, a gas/dust ratio of 100 and a mean molecular weight of 2.8 $m_{\rm H}$, an H$_2$ column density of $>$ 10$^{24}$ cm$^{-2}$ is required for a dust optical depth of 1. At shorter wavelengths, the dust opacities increase and thus a lower dust column density is required to shield the offset component. Figure \ref{fig:tau1} illustrates the dust $\tau$ = 1 surface as a function of wavelength from the inside of the envelope of SMM1 using the spherical envelope model of \citet{kristensen12}. In this representation an observer is able to see the other side of the envelope if located in the ``optically thin'' zone; in the ``optically thick'' zone the dust blocks emission from the other side. At 162 $\mu$m, the wavelength of the CO $J$=16--15 transition, an H$_2$ column density of more than 10$^{23}$~cm$^{-2}$ is required for the dust to be optically thick, which is obtained on scales of $\sim$ 220 AU in this model, i.e., comparable to the size of the emitting region (Fig. \ref{fig:tau1}). Therefore it is possible that dust obscures the red-shifted offset component at 162 $\mu$m, but this is not the case at 500 $\mu$m.
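For reference, the column required for $\tau$ = 1 follows directly from the adopted dust parameters,
\begin{equation}
N({\rm H_2})_{\tau = 1} = \frac{\delta_{\rm gd}}{\kappa_\nu\, \mu\, m_{\rm H}} \approx \frac{100}{5\,{\rm cm^2\,g^{-1}} \times 2.8 \times 1.67\times10^{-24}\,{\rm g}} \approx 4\times10^{24}\,{\rm cm^{-2}} ,
\end{equation}
where $\delta_{\rm gd}$ is the gas-to-dust mass ratio, consistent with the $>$ 10$^{24}$ cm$^{-2}$ quoted above.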
\citet{jorgensen05a} showed that on scales of a few hundred AU, an additional component is required to reproduce interferometric observations. This component is likely the protostellar disk, and the column density is sufficient to provide enough shielding from the far side \citep[see][for the example of SMM1]{enoch09}. In conclusion, the red-shifted component is likely hidden either by high-density molecular gas in the disk or the inner, dense envelope.
The fact that several species tracing the offset component (e.g., OH$^+$, CH$^+$ and H$_2$O towards IRAS4A and SMM3) only appear in absorption against the continuum from the disk/envelope places the offset component in front of the disk. The continuum-emitting region at the wavelengths where these absorptions appear (300 $\mu$m for OH$^+$ to 540 $\mu$m for H$_2$O) is typically up to $\sim$~500~AU in size \citep[$\sim$ 2\arcsec;][]{jorgensen09, gecco12}. These considerations all point to a physical origin of the offset component within the inner few hundred AU of the protostar, located between us and the protostar itself.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth, angle=0]{tau1_surface.eps}
\end{center}
\caption{Thickness of the optically thick dust zone as a function of wavelength from the inside of the envelope of SMM1 towards the outside using the model envelope of \citet{kristensen12}. The diameter of the emitting region is marked. }
\label{fig:tau1}
\end{figure}
\subsection{Inclination}\label{sect:incl}
The three NGC1333 sources have well-constrained outflow inclination angles, $i$, here measured with respect to the line of sight. The outflow from IRAS2A lies close to the plane of the sky ($i$ $\sim$ 90\degr), IRAS4A is at $i$ $\sim$ 45\degr, and IRAS4B is seen nearly pole-on ($i$ $\sim$ 0\degr). Fitting the spectral energy distribution of SMM1, \citet{enoch09} find that SMM1 has an inclination of 30\degr. IRAS2A has the largest offset and $FWHM$ whereas IRAS4B shows the smallest offset and $FWHM$, with IRAS4A and SMM1 falling between these two extremes. This four-point correlation suggests that the offset and width depend on inclination, and that the offset component is moving nearly perpendicular to the large-scale outflow, as explained below.
If the origin of the offset component is a shock as suggested by the width and offset of the profile, how will the shape of the profile depend on the inclination? The offset velocity follows the inclination as sin($i$); when the inclination is 0\degr\ (the case of IRAS4B) the offset is 0 km s$^{-1}$ whereas it reaches its maximum value at $i$ = 90\degr\ (IRAS2A). The width will be narrow when observing the shock from an orientation close to face-on (IRAS4B) because only the velocity component inherent to the shock is probed. For an edge-on orientation the profile will appear broader because the shock is now observed at inclinations ranging from the plane of the sky to the line of sight.
To illustrate how the velocity offset and width change with inclination in the proposed scenario, we construct a geometrical toy model. The model consists of a half annulus which expands from the plane of the sky towards the observer. The intensity from any given point along the annulus corresponds to a Gaussian with a predefined offset (expansion velocity) and width (internal velocity dispersion), which both stay constant along the annulus. The projected offset, however, varies along the annulus: expansion into the plane of the sky corresponds to an offset velocity of 0 km s$^{-1}$, i.e., the offset velocity scales with the angle $\theta$ going from 90\degr\ (the line of sight) to 0 and 180\degr\ (the plane of the sky). The annulus is furthermore given an inclination to the plane of the sky, $i$, which ranges from 0\degr\ (the plane of the sky) to 90\degr\ (the line of sight). The resulting profiles from the expanding half annuli are shown in Fig. \ref{fig:vel_inc}, which also shows the evolution of the width with inclination. No single set of parameters (offset and width) is able to reproduce all observations, either because no such single set exists, or because the toy model is too simplistic to capture the geometry and excitation. For example, here only a single shock or expansion velocity is considered; a range of velocities may be more appropriate, as in the case of bow shocks \citep[e.g.][]{kristensen07}. Nevertheless, the behaviour of the offset components is captured qualitatively, which suggests that the wind scenario is a geometrically possible solution, and that a shock velocity of 15~km~s$^{-1}$ is reasonable from the line profile perspective.
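A minimal numerical realisation of this toy model is the following Python sketch; the angular sampling and normalisation are arbitrary choices, and the $FWHM$ is measured directly from the resulting profile, as in Fig. \ref{fig:vel_inc}:
\begin{verbatim}
import numpy as np

def annulus_profile(v_grid, incl_deg, v_exp=15.0, fwhm=9.0, n_theta=181):
    """Profile from a half annulus expanding towards the observer.
    theta = 0/180 deg is the plane of the sky, 90 deg the line of
    sight; the annulus is inclined by incl_deg to the plane of the sky."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    theta = np.linspace(0.0, np.pi, n_theta)
    # Blue-shifted projected velocity of each annulus element
    v_los = -v_exp * np.sin(theta) * np.sin(np.radians(incl_deg))
    gauss = np.exp(-0.5 * ((v_grid[:, None] - v_los[None, :]) / sigma)**2)
    return gauss.sum(axis=1) / n_theta

v = np.linspace(-40.0, 20.0, 600)   # km/s relative to source velocity
for incl in range(0, 91, 10):       # 0 deg (IRAS4B) to 90 deg (IRAS2A)
    p = annulus_profile(v, incl)
    half = v[p > 0.5 * p.max()]
    print(f"i = {incl:2d} deg: peak offset = {v[np.argmax(p)]:6.1f} km/s,"
          f" FWHM ~ {half[-1] - half[0]:5.1f} km/s")
\end{verbatim}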
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth, angle=0]{velocity_inclination.eps}
\end{center}
\caption{Toy model of line profiles originating in an expanding half annulus as a function of inclination to the observer. An inclination of 0\degr\ corresponds to the annulus expanding into the plane of the sky (IRAS4B) and 90\degr\ corresponds to expansion along the line of sight (IRAS2A). The inclination increases in steps of 10\degr. The inset shows the measured $FWHM$ of the different profiles as a function of inclination (measured directly, not obtained from a Gauss fit). The annulus expands at 15 km s$^{-1}$ and the internal velocity dispersion ($FWHM$) is 9 km s$^{-1}$.}
\label{fig:vel_inc}
\end{figure}
\subsection{Physical origin}\label{sec:origin}
Both the offset (2--15 km s$^{-1}$) and the width ($\sim$ 4--40 km s$^{-1}$) are indicative of a shock origin, a shock appearing close to the protostar. \citet{neufeld89} modelled fast, dissociative shocks and included the effects of UV radiation generated in the shock itself through Ly$\alpha$ emission. After initial heating to $>$ 5$\times$10$^4$ K, the compressed, shocked gas cools to $T$ $\sim$ 5000~K where it reaches a plateau. During this phase, the electron abundance is high ($\gtrsim$10$^{-2}$) and molecular ions are abundant, e.g., OH$^+$ and CH$^+$. The gas is compressed by a factor of $\sim$ 400 in this stage to $\sim$ 10$^8$ cm$^{-3}$. Eventually the OH formation rate exceeds the destruction rate, and OH brings the temperature down to $\sim$~500 K, at which point the temperature reaches another plateau while H$_2$ forms. Once H$_2$ is formed, the temperature quickly drops to $\sim$ 100 K when CO and H$_2$O take over as dominant coolants.
The predicted column densities \citep{neufeld89} are in good agreement with the inferred observational column densities (Fig. \ref{fig:nd89_comp}) for a dense, dissociative shock (10$^6$ cm$^{-3}$) with a velocity of 80 km s$^{-1}$. There is a trend in the model predictions for higher column densities of C$^+$ with higher velocity, but column densities are not reported for $\varv$ $>$ 60 km s$^{-1}$ and \mbox{$n$ = 10$^6$ cm$^{-3}$} (C$^+$ and CH column densities are taken from the model with $\varv$ = 60 km s$^{-1}$). In general, the agreement is remarkable and shows that a fast, dense shock is a possible explanation for the observed column densities.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth, angle=0]{nd89_comp.eps}
\end{center}
\caption{Comparison of inferred column densities over the size of the emitting region and upper limits with shock model results from \citet{neufeld89} for a pre-shock density of 10$^6$ cm$^{-3}$ and shock velocity of 80 km s$^{-1}$. Observations are marked with black dots and arrows are for upper limits. For the cases where both a black dot and arrow are present (OH$^+$ and C$^+$) the dot marks the detection and the arrow the upper limit towards the other sources; when two arrows are present (CH and HCO$^+$) they illustrate the range of upper limits. Model results are shown as red circles and are normalised to the inferred CO column density.}
\label{fig:nd89_comp}
\end{figure}
The high density required by the model is easily found in the inner parts of the molecular envelope. The high velocity required is probably attained in either the jet or the strong wind from the protostar, although no direct observations exist of the wind in Class 0 objects. In the following, the ``jet'' refers to the highly collimated and fast component observed as extremely high-velocity features in molecular species, and the ``wind'' refers to the wide-angle, slower component seen towards Class I and II sources \citep[e.g.,][and references therein]{arce07}. The slower wind is primarily observed in forbidden atomic and ionic transitions at near-infrared and shorter wavelengths \citep{arce07, ray07} where the velocity is $\lesssim$ 10--20 km s$^{-1}$.
If the shock is moving at 80 km s$^{-1}$ perpendicular to the outflow direction (see above, Sect. \ref{sect:incl}) the envelope will quickly dissipate on timescales of $<$ 10$^3$ years \citep[e.g.,][]{shang06} and the wind would be much faster than what is observed at later evolutionary stages. The key ingredients that a successful model should reproduce are: \textit{(i)} the excitation conditions and \textit{(ii)} the hydride column densities, while \textit{(iii)} not dissipating the envelope on very short timescales. The reason the fast dissociative shock is successful in reproducing the first two ingredients is the layered structure where the hottest gas is atomic (dissociated) and as the gas cools, molecules reform. Two alternatives, which would also reproduce the third ingredient, are possible: \textit{(a)} either the dissociating UV photons are not intrinsic to the shock in which case the shock velocity can be lower, possibly as low as the wind velocity in Class I/II sources (10--20 km s$^{-1}$), see Fig. \ref{fig:cartoon} for a schematic of the scenario; or \textit{(b)} the wind is faster in Class 0 sources but only interacts with the envelope in very small regions and over short enough timescales that no irreparable damage is done to the large-scale envelope, i.e., the wind is anisotropic in time and direction. In the following we argue why a combination of the two is a likely solution.
Accreting protostars generate copious amounts of UV photons \citep[e.g.,][]{ingleby11} and thus provide a natural source of external UV illumination \citep{spaans95, vankempen09a, visser12, yildiz12}. If the UV field is strong and hard enough, the ion chemistry naturally evolves in a similar fashion to high-mass star-forming objects along the outflow cavity walls \citep[e.g.,][]{stauber07, bruderer09, bruderer10, benz10}, and the high shock velocity is not required to dissociate the molecules. \citet{visser12} argue that the cavity density is low enough and that the distance to the cavity wall from the protostar is short enough that UV photons reach the cavity wall mostly unattenuated. Depending on the actual shock conditions and the characteristics of the UV field, it is possible that the combination of a dissociating UV field and a lower shock velocity similar to the wind velocities observed towards Class I and II sources ($\varv$ $\sim$ 10--20 km s$^{-1}$) can reproduce the chemical and physical conditions. However, models of dense irradiated shocks are required to test this hypothesis further.
The variability observed towards IRAS4A demonstrates that heating and cooling in the offset component take place on timescales of years. Such variability is only possible in J-type shocks \citep{flower03}, where the cooling lengths are short enough, of the order of a few tens of AU or less. This is consistent with the dynamical distance, i.e., the distance a shock with a velocity of 10 km s$^{-1}$ traverses over 2 years ($\sim$~5~AU). Furthermore, the variability illustrates that the offset components are transient phenomena, and that the driving agent is not constant over time, i.e., the envelope may have time to settle between outburst events. The combination of a low shock velocity \mbox{(10--15 km s$^{-1}$)}, dense medium (10$^6$--10$^8$ cm$^{-3}$), UV-driven chemistry and time variability points to a scenario in which shocks from the protostellar wind impinge on the very inner dense envelope at angles that are close to perpendicular to the large-scale outflow. Whether the envelope has enough time to relax between outburst events remains to be determined by further observations.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth, angle=0]{cartoon.eps}
\end{center}
\caption{Cartoon illustrating the inner few hundred AU of a low-mass protostar (not to scale). Where the wind and UV photons interact directly with the envelope, an irradiated shock occurs which compresses the envelope by several orders of magnitude over short distances. On the side of the cavity walls facing the protostar, ions (C$^+$, CH$^+$, OH$^+$) dominate emission and cooling. Neutral species (OH, H$_2$O, CO) dominate further into the envelope and on larger size scales. The disk hides the innermost parts of the red-shifted outflow lobe. Typical densities and temperatures are provided where relevant.}
\label{fig:cartoon}
\end{figure}
\subsection{Tracing winds and shocks in other sources}
Most of the neutral species re-form at a temperature plateau of $\sim$ 500 K in the shock. The heating of this plateau region is dominated by H$_2$ formation on grains, and CO, H$_2$O, OH and other neutral species dominate the cooling. The temperature of this plateau is coincident with that of the hot component seen in the PACS high-$J$ CO data towards low-mass protostars \citep{herczeg12, gecco12, manoj12, karska13, green13}, and it is possible that the CO emission arises in the dense post-shock gas. This would imply that almost all protostars contain dense dissociative shocks, and that the ``universality'' of this temperature component is due to the energy release of H$_2$ reformation.
If this interpretation is correct, it provides geometrical constraints on the type of system where the offset component can be observed. First, the offset component needs to move out of the plane of the sky; otherwise the radial velocity is zero. If the component is moving entirely in the plane of the sky, the component is coincident with the source velocity and there is little or no offset, as in the case of IRAS4B. Second, the infall rate needs to be large enough that the red offset component is shielded by the infalling gas or a denser inner envelope or disk. If it is not shielded, the profile becomes symmetric around the source velocity, making identification more difficult. The offset component is only detected towards the more massive envelopes in the WISH low-mass sample with the higher infall rates and larger disks, which corroborates this interpretation.
If the infall rate is very low, the corresponding outflow rate is low, and the offset component will therefore be weak because the shock is weaker. The sources showing the offset component have the highest mass of hot gas as measured by the very high-$J$ CO emission observed with PACS \citep{herczeg12,karska13}, suggesting that another reason the offset component is not observed towards more sources may simply be limited signal-to-noise. Alternatively, since the variability observed towards IRAS4A takes place over timescales of a few years, the lack of an offset component may indicate that the current shock activity is low.
The offset component is primarily seen in H$_2$O, and as illustrated above, H$_2$O is a better tracer of the kinematics than CO when it comes to hot shocked gas. However, two reasons in addition to those mentioned above may play a role in why an offset component is not seen towards all sources in H$_2$O. First, when the envelope is more diluted, UV photons may penetrate further into the envelope and photodissociate H$_2$O even in the post-shock gas, thereby lowering the H$_2$O column density \citep{nisini02, visser12, karska13}. Second, as the H$_2$ density becomes lower, the excitation of H$_2$O becomes less efficient, and thus the $S/N$ decreases significantly \citep{nisini02, kristensen12}. Thus, the reason the offset component is not detected towards all sources is likely a combination of geometrical effects and low signal from the emitting gas.
\section{Summary and conclusions}
\begin{itemize}
\item Water emission profiles trace components not seen in low-$J$ CO ($J$ $<$ 10--9) observed with ground-based facilities and space telescopes such as \textit{Herschel}. These components are also uniquely identified in the ionised hydrides OH$^+$ and CH$^+$, and OH, CO 16--15 and C$^+$ show the same component in absorption and emission. The component is not detected in deep spectra of HCO$^+$ 6--5 nor CH, thus providing valuable upper limits on the column densities.
\item The component is observed at two epochs towards most sources, and towards IRAS4A the intensity of the H$_2$O 3$_{12}$--3$_{03}$ transition at 1097 GHz doubled over a period of two years. The variability points to an origin in gas that is rapidly heating or cooling.
\item H$_2$O column densities are estimated to be $\gtrsim$ 10$^{16}$ cm$^{-2}$ while the H$_2$ density is 5$\times$10$^6$ -- 5$\times$10$^7$ cm$^{-3}$. The emitting regions are small, $\sim$ 100 AU. From the detection of the offset component in CO 16--15 the CO column density over the emitting region is estimated to be 10$^{18}$ cm$^{-2}$. The C$^+$, CH$^+$ and OH$^+$ column densities are measured and upper limits on CH and HCO$^+$ are provided.
\item The inferred CO column density implies that the offset component is identical to the hot CO component \mbox{($T$ $\sim$ 700--800~K)} seen in \textit{Herschel}-PACS data towards these low-mass protostars. This temperature is set by the energy released during the reformation of H$_2$ in the post-shock gas.
\item Several of the light hydrides are observed in absorption against the continuum, directly pointing to an origin close to the protostar. The most likely origin of this component is in dissociative shocks where the dissociation may come from the UV light from the accreting protostar rather than the shock itself, thus lowering the required shock velocity. The shocks possibly arise in the interaction between the protostellar wind and the inner, dense envelope. The UV field is required to account for the large column densities of the ionised hydrides, and models of irradiated shocks are necessary to further test and quantify this hypothesis.
\end{itemize}
These observations highlight the unique capability of, in particular, H$_2$O as a tracer of dynamical components in protostars even on the smallest spatial scales. Specifically, the data unequivocally reveal that dissociative shocks in the inner $\sim$~100~AU of low-mass protostars are common. These observations thus open a window towards understanding how low-mass protostars energetically interact with their parental material. Future facilities such as ALMA will undoubtedly shed more light on these energetic processes for a larger number of sources and at the angular resolution required to spatially resolve these processes.
\begin{acknowledgements}
The authors would like to thank the entire WISH team for many, many stimulating discussions. HIFI has been designed and built by a consortium of institutes and university departments from across Europe, Canada and the US under the leadership of SRON Netherlands Institute for Space Research, Groningen, The Netherlands with major contributions from Germany, France and the US. Consortium members are: Canada: CSA, U.Waterloo; France: CESR, LAB, LERMA, IRAM; Germany: KOSMA, MPIfR, MPS; Ireland, NUI Maynooth; Italy: ASI, IFSI-INAF, Arcetri-INAF; Netherlands: SRON, TUD; Poland: CAMK, CBK; Spain: Observatorio Astronomico Nacional (IGN), Centro de Astrobiolog{\'i}a (CSIC-INTA); Sweden: Chalmers University of Technology - MC2, RSS \& GARD, Onsala Space Observatory, Swedish National Space Board, Stockholm University - Stockholm Observatory; Switzerland: ETH Z{\"u}rich, FHNW; USA: Caltech, JPL, NHSC. Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA), by a Spinoza grant and grant 614.001.008 from the Netherlands Organisation for Scientific Research (NWO), and by the European Community's Seventh Framework Programme FP7/2007-2013 under grant agreement 238258 (LASSIE).
\end{acknowledgements}
\bibliographystyle{aa}
|
1,108,101,566,712 | arxiv |
\section{Introduction}\label{sec:intro}
Stars more massive than $\sim 8$ M$_\odot$ end their lives in a core-collapse supernova (CCSN). Some of these massive stars lose their hydrogen and/or helium envelopes before explosion, either through winds or interaction with a binary companion (e.g., \citealt{Heger03, Smith14_ARAA}). These stripped stars can lead to either Type Ib SNe (lacking hydrogen), or Type Ic SNe (lacking hydrogen and helium) \citep{Woosley95,Filippenko97}. Based on their light curve evolution and spectral properties, we know SNe Ib/c are powered by the radioactive decay of $^{56}$Ni synthesized during the explosion \citep{Arnett82, Gal-Yam17}. SNe Ib/c are relatively dim and fast-evolving, reaching typical peak magnitudes of $M_r= -17.7\pm 0.9$ within about $20\pm 10$ days after explosion \citep{Barbarino20}. Spectroscopically, SNe Ib/c exhibit strong suppression blueward of $\sim 4000$ \AA\ due to line blanketing from Fe-peak elements. In terms of their environments, SNe Ib/c tend to occur in galaxies with relatively high metallicities of \mbox{$12 + \log($O/H$) = 8.8\pm 0.3$} \citep{Modjaz20}.
More recently, a new class of stripped-envelope CCSN was discovered and designated Type-I superluminous supernovae (hereafter, SLSNe; \citealt{Chomiuk11,Quimby11}). SLSNe can be up to 100 times more luminous than SNe Ib/c and reach typical peak magnitudes of $M_r = -21.7\pm 0.7$ (e.g., \citealt{Gal-Yam19,Gomez20,Chen22}) with longer rise times of $\sim 20-80$ days \citep{Nicholl17_mosfit}. The spectra of SLSNe are marked by distinctive W-shaped \ion{O}{2} absorption features around $\sim 3500-5000$ \AA\ at early times \citep{Chomiuk11,Quimby11}; and while the spectra of SLSNe tend to be much bluer than those of SNe Ib/c before peak, as they evolve and cool they begin to more closely resemble normal SNe Ic (e.g., \citealt{Pastorello10, Quimby18, Blanchard19, Nicholl19_nebular}). Unlike SNe Ib/c, SLSNe generally occur in low-metallicity galaxies with typical values of \mbox{$12 + \log($O/H$) = 8.4\pm 0.3$} \citep{Lunnan14}. Their blue spectra, bright and slowly evolving light curves, and low metallicity environments all point to an energy source distinct from radioactive decay \citep{Angus16,Nicholl17_mosfit,Margalit18}, most likely the spin-down energy of a millisecond magnetar born in the explosion \citep{Kasen10,Woosley10}. Additionally, it has been shown that SLSNe progenitors are generally more massive before explosion ($\approx 3-40$ M$_\odot$; \citealt{Blanchard20}) than those of SNe Ib/c ($4.5\pm 0.8$ M$_\odot$; \citealt{Barbarino20}). SLSNe are also rare, representing $\lesssim 1\%$ of the SNe Ib/c volumetric rate \citep{Frohmaier2020}.
Given the distinct energy sources of SNe Ib/c and SLSNe, we may expect SNe in the intermediate regime to exist. These intermediate SNe could either have weaker magnetar engines than those of normal SLSNe, or an over-abundant production of $^{56}$Ni compared to SNe Ib/c. Recent studies have begun exploring these intermediate SNe, suggesting they are powered by more than just radioactive decay (e.g., SN\,2012aa from \citealt{Roy16} and SNe 2019dwa, 2019cri, 2019hge, and 2019unb from \citealt{Prentice21}). Here, we report the first systematic search and analysis of such intermediate events, their properties, and power sources. We present a list of 40 SNe with intermediate luminosities between SLSNe and SNe Ib/c, compiled either through our own observational program, or publicly available transients from the literature. We define the sample of \textit{luminous supernovae} (LSNe) as SNe with spectra consistent with a stripped-envelope CCSN and a peak absolute $r$-band magnitude of $M_r = -19$ to $M_r = -20$ mag, bound by SLSNe on the bright end and by SNe Ib/c on the dim end. We caution that the LSN label does not necessarily imply a physical connection between these objects, but is rather a phenomenological grouping based solely on their peak luminosity. We use these selection criteria to explore their physical properties, connections to both SLSNe and SNe Ib/c, and sub-groupings within the LSNe sample.
This paper is structured as follows: In \S\ref{sec:sample} we present the samples of LSNe, SNe Ib/c and SLSNe used in our analysis, as well as the sources of their photometry and spectroscopy. In \S\ref{sec:modeling} we describe the light curve modeling, and in \S\ref{sec:results} we discuss the results of these models. In \S\ref{sec:grouping} we outline the possible sub-groupings for LSNe. In \S\ref{sec:properties} we discuss the observational features and rates of the LSNe population, and finally conclude in \S\ref{sec:conclusions}. Throughout this paper we assume a flat $\Lambda$CDM cosmology with \mbox{$H_{0} = 69.3$ km s$^{-1}$ Mpc$^{-1}$}, $\Omega_{m} = 0.286$, and $\Omega_{\Lambda} = 0.712$ \citep{Hinshaw13}.
\section{Sample of Luminous Supernovae}\label{sec:sample}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{KLSNe_LCs.pdf}
\caption{Light curves of all the Gold and Silver LSNe in our sample. The individual data points are $r$-band magnitudes of the SNe, and the lines are the corresponding best-fit models described in \S\ref{sec:modeling}. The shaded regions represent the 1, 2, and $3\sigma$ intervals for typical light curves of SLSNe (\textit{blue}) and SNe Ic (\textit{red}) obtained from averaging their light curve models. \label{fig:KLSNe_LCs}}
\end{center}
\end{figure*}
We begin our analysis by gathering a sample of 315 stripped-envelope CCSNe, including a list of all known SLSNe in addition to SNe Ib/c and SNe Ic-BL with above-average luminosities, and a sample of SNe Ic/Ic-BL for comparison. The SNe in this master list were obtained either from the Open Supernova Catalog\footnote{\label{ref:osc}\url{https://sne.space/}; frontend now defunct, but backend available.} (OSC; \citealt{Guillochon17}), the Transient Name Server (TNS)\footnote{\label{ref:tns}\url{https://www.wis-tns.org/}}, the Weizmann Interactive Supernova Data Repository (WISeREP; \citealt{Yaron12})\footnote{\url{https://wiserep.weizmann.ac.il/}}, a literature search, or from our own FLEET transient follow-up program \citep{Gomez20_FLEET}. We include comparison samples of SNe Ic-BL from \cite{Taddia19_broadlined}, SNe Ic from \cite{Barbarino20}, and SLSNe from Gomez et al. (in prep.), where a full description of the SLSNe sample will be presented. In total, we include 149 SLSNe and 61 SNe Ic/Ic-BL for our comparative analysis. The individual SNe used for this work are all listed in the Appendix.
We select the LSNe from our master sample by focusing on the objects that have a peak absolute magnitude of $M_r = -19$ to $-20$ mag. For SNe that either do not have $r$-band observations available, or that were not observed during peak, we estimate their peak absolute magnitude using the light curve models discussed in \S\ref{sec:modeling}. This range is motivated by the fact that SNe brighter than $M_r \approx -20$ mag tend to show relatively uniform spectroscopic and photometric features that allow us to classify them as SLSNe, and SNe dimmer than $M_r \approx -19$ mag can usually be confidently classified as SNe Ib/c.
Our final sample is made up of 59 LSNe, in addition to the 149 SLSNe and 61 SNe Ic/Ic-BL used for comparison. The LSNe are not easily classified into either the SLSNe or SNe Ib/c categories, but lie somewhere in an intermediate regime in terms of both their light curves and spectra. Of these 59 LSNe, we designate 25 as ``Gold'' LSNe, when we have both enough photometry to be able to model their light curves, and spectroscopic observations from which we can verify their classification as stripped-envelope CCSNe. We designate 15 objects as ``Silver'' when either their spectroscopic data are of poor quality but still consistent with stripped-envelope CCSNe, or they lack good photometric coverage before peak, but we are still able to constrain their peak using light curve models. Lastly, we label 19 objects as ``Bronze'' LSNe when either they do not have photometry available near peak, have fewer than four epochs of photometry, or have no public spectra available, any of which prevents us from producing trustworthy light curve models and/or confident spectroscopic classifications. We do not include the Bronze LSNe in our analysis. The full list of 59 LSNe, along with their individual data sources, notes, and peculiarities, is presented in the Appendix. The final working sample of 40 Gold and Silver LSNe is listed in Table~\ref{tab:classes}.
\begin{deluxetable*}{cccccc}
\tablecaption{Luminous Supernovae \label{tab:classes}}
\tablehead{\colhead{Name} & \colhead{Redshift} & \colhead{Literature Class.} & \colhead{Spectral Group} & \colhead{Light curve} & \colhead{Quality} }
\startdata
DES14C1rhg & 0.481 & SLSN-I & Superluminous & Fast & Gold \\
DES15C3hav & 0.392 & SLSN-I & Superluminous & Fast & Gold \\
DES16C3cv & 0.727 & SLSN-I & Normal & Slow & Gold \\
iPTF13dnt & 0.137 & Ic-BL & Normal & Fast & Silver \\
iPTF16asu & 0.187 & SLSN-I & Superluminous & Fast & Gold \\
iPTF17cw & 0.093 & Ic-BL & Normal & Fast & Gold \\
OGLE15xl & 0.198 & SLSN-I & Superluminous & Slow & Silver \\
PS15cvn & 0.058 & Ic-BL & Normal & Fast & Gold \\
PTF10gvb & 0.098 & Ic-BL & Normal & Fast & Gold \\
PTF10iam & 0.109 & SLSN-I / Ic & Superluminous & Medium & Silver \\
PTF11img & 0.158 & Ic-BL & Normal & Fast & Gold \\
PTF12gty & 0.177 & SLSN-I / Ic & Superluminous & Slow & Gold \\
PTF12hni & 0.106 & SLSN-I & Ambiguous & Medium & Gold \\
SN\,1991D & 0.042 & Ib & Ambiguous & Fast & Silver \\
SN\,2003L & 0.021 & Ib/c & Normal & Slow & Silver \\
SN\,2007ce & 0.046 & Ic-BL & Normal & Fast & Silver \\
SN\,2009cb & 0.187 & SLSN-I & Superluminous & Fast & Silver \\
SN\,2010ay & 0.067 & Ic-BL & Normal & Fast & Silver \\
SN\,2011kl & 0.677 & SLSN-I / GRB & Superluminous & Fast & Gold \\
SN\,2012aa & 0.083 & SLSN-I / Ibc & Ambiguous & Slow & Gold \\
SN\,2013hy & 0.663 & SLSN-I & Superluminous & Medium & Silver \\
SN\,2018beh & 0.060 & Ib/Ic & Superluminous & Slow & Gold \\
SN\,2018don & 0.073 & SLSN-I / Ic & Superluminous & Slow & Silver \\
SN\,2018fcg & 0.101 & SLSN-I & Normal & Fast & Gold \\
SN\,2019cri & 0.050 & Ic & Normal & Slow & Gold \\
SN\,2019dwa & 0.082 & Ic & Ambiguous & Medium & Gold \\
SN\,2019gam & 0.124 & SLSN-Ib/IIb & Superluminous & Slow & Silver \\
SN\,2019hge & 0.086 & SLSN-Ib & Superluminous & Slow & Gold \\
SN\,2019J & 0.120 & SLSN-I & Superluminous & Slow & Gold \\
SN\,2019moc & 0.056 & Ic & Normal & Fast & Gold \\
SN\,2019obk & 0.166 & SLSN-Ib & Superluminous & Slow & Silver \\
SN\,2019pvs & 0.167 & SLSN-I & Superluminous & Slow & Gold \\
SN\,2019stc & 0.117 & Ic & Normal & Slow & Gold \\
SN\,2019unb & 0.064 & SLSN-Ib & Superluminous & Slow & Gold \\
SN\,2019uq & 0.100 & Ic & Normal & Fast & Silver \\
SN\,2019wpb & 0.068 & Ic & Normal & Fast & Silver \\
SN\,2021lei & 0.112 & Ic & Normal & Fast & Silver \\
SN\,2021lwz & 0.065 & SLSN-I & Superluminous & Fast & Gold \\
SN\,2021uvy & 0.095 & SLSN-I / Ib/c & Normal & Slow & Gold \\
SN\,2021ybf & 0.130 & SLSN-I & Normal & Slow & Gold \\
\enddata
\tablecomments{List of all the Gold and Silver LSNe used for this work, sorted alphabetically. We include classifications from the literature for each object from references listed in the Appendix, in addition to our own label based solely on their spectral features as either ``Superluminous'' for SLSNe-like events, ``Normal'' for SNe Ic/Ic-BL-like events, or ``Ambiguous'' for events whose spectra do not clearly favour either group. We add a light curve classification based on the duration of the light curve rise and whether it is fast like SNe Ic ($\lesssim 25$ days), slow like SLSNe ($\gtrsim 35$ days), or intermediate. Additional Bronze objects are listed in the Appendix but are otherwise excluded from this work.}
\end{deluxetable*}
\subsection{Photometry}\label{sec:photometry}
We collect all available photometry for the 315 SNe in our sample. We obtain publicly available photometry from the OSC, TNS, and WISeREP for the SNe that have these data available. In addition, we include photometry from the Zwicky Transient Facility (ZTF; \citealt{Bellm19}) taken from the Automatic Learning for the Rapid Classification of Events (ALeRCE) broker \citep{Forster20}, the Asteroid Terrestrial-impact Last Alert System (ATLAS; \citealt{Tonry18}), the All Sky Automated Survey for SuperNovae (ASAS-SN; \citealt{Kochanek17}), the \textit{Gaia} Science Alerts (GSA; \citealt{Wyrzykowski16}), the Optical Gravitational Lensing Experiment (OGLE; \citealt{Wyrzykowski14}), the Catalina Real Time Transient Survey (CRTS; \citealt{Drake09}), and the Pan-STARRS Survey for Transients (PSST; \citealt{Huber15}). Photometry from ZTF, ATLAS, PSST, and ASAS-SN is reported from difference images and therefore already has the host flux subtracted. Photometry from the GSA, CRTS, and OGLE does not have the host contribution subtracted, so we subtract the corresponding host flux whenever necessary and possible.
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{Peak_spectra.pdf}
\caption{Representative spectra for all the Gold and Silver LSNe that have spectra available within $\pm10$ days of peak, sorted by color and spectral features. We find that LSNe appear to form a continuous distribution, from blue SLSN-like spectra to red SN Ic-like spectra. Individual references for each spectrum are listed in the Appendix. \label{fig:spectra}}
\end{center}
\end{figure*}
In addition to publicly available photometry, we perform our own photometry on either public images or images from our own FLEET transient follow-up program \citep{Gomez20}. A few SNe observed by ZTF had sparsely sampled light curves reported by the automatic photometry pipeline. For these SNe, we download the raw ZTF images from the NASA/IPAC Infrared Science Archive\footnote{\url{https://irsa.ipac.caltech.edu/Missions/ztf.html}} to redo the photometry and recover any sub-threshold detections that were missed by the automated pipeline. Additionally, we include $gri$ images of SNe that were observed by the Global Supernova Project (GSP) with the Las Cumbres Observatory Global Telescope Network (LCO; \citealt{Brown13}). Finally, we include images taken as part of FLEET with either KeplerCam on the 1.2-m telescope at the Fred Lawrence Whipple Observatory (FLWO), the Low Dispersion Survey Spectrograph (LDSS3c; \citealt{stevenson16}) or Inamori-Magellan Areal Camera and Spectrograph (IMACS; \citealt{dressler11}) both on the Magellan 6.5-m telescopes at Las Campanas Observatory, or Binospec \citep{Fabricant19} on the MMT 6.5-m telescope.
We perform photometry on all images from ZTF, LCO, and FLEET in the same manner. Instrumental magnitudes are measured by modeling the point-spread function (PSF) of each image using field stars and subtracting the model PSF from the target. The magnitudes are then calibrated to AB magnitudes from the PS1/$3\pi$ catalog \citep{Chambers16}. For the majority of sources, we separate the flux of the SN from its host galaxy by performing difference imaging using a pre-explosion PS1/$3\pi$ template for comparison. We subtract the template from the science images using {\tt HOTPANTS} \citep{Becker15}. For sources where there is no host galaxy detected above the PS1/$3\pi$ detection limit of $\approx 23$ mag, we report PSF photometry taken directly from the science images without subtracting a template. The details for the data reduction of each SN are listed in the Appendix.
Finally, we verify that all the photometry is either already corrected for Galactic extinction, or correct it directly. We use the dust maps from \cite{Schlafly11} to obtain an estimate of $E(B-V)$ and the \cite{Barbary16} implementation of the \cite{Cardelli89} extinction law to calculate the corresponding extinction in each band.
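As an illustrative sketch of this step (assuming the {\tt sfdmap} and {\tt extinction} Python packages; the dust-map path, coordinates, effective wavelengths and magnitudes below are placeholders, not values from our sample):
\begin{verbatim}
import numpy as np
import extinction   # Barbary (2016) extinction-law implementations
import sfdmap       # interface to the Schlafly & Finkbeiner dust maps

m = sfdmap.SFDMap('/path/to/sfddata')    # local map files (placeholder)
ebv = m.ebv(52.2656, 31.2677)            # E(B-V) at example coordinates
r_v = 3.1

# Approximate effective wavelengths of the g, r, i bands [Angstrom]
waves = np.array([4810.0, 6170.0, 7520.0])
a_lambda = extinction.ccm89(waves, r_v * ebv, r_v)   # A_lambda [mag]

mags_obs = np.array([19.42, 19.10, 18.95])   # placeholder photometry
mags_corr = mags_obs - a_lambda              # corrected for MW extinction
print(ebv, a_lambda, mags_corr)
\end{verbatim}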
The $r$-band light curves of all 40 Gold and Silver LSNe are shown in Figure~\ref{fig:KLSNe_LCs}. The blue and red shaded regions in Figure~\ref{fig:KLSNe_LCs} represent the 1, 2, and 3$\sigma$ intervals for the light curves of SLSNe and SNe Ic, respectively. These regions were estimated by averaging the light curve models of all the SLSNe and SNe Ic/Ic-BL used for our comparative analysis; the full list of SNe is given in \S\ref{sec:compare} of the Appendix.
\subsection{Spectra}\label{sec:spectra}
In this work, we focus on the photometric properties of LSNe, but for comparison and to verify their classification as stripped-envelope CCSNe, we make use of either publicly available optical spectra, or our own newly collected spectra. The public spectra were obtained from WISeREP, the OSC, or the TNS. Some spectra were also obtained from published papers, either from journal databases or private communication with the authors. We use these public spectra to verify the classification given to each SN. Special care is taken for objects that have only been classified in an Astronomer's Telegram or TNS report, but not in a refereed publication. We then verify the redshift of each SN and update it if a better estimate was found from newer or higher quality spectra. Finally, we correct these spectra for Galactic extinction using the extinction maps from \cite{Schlafly11} and the \cite{Barbary16} implementation of the \cite{Cardelli89} extinction law. The individual data sources and any notes regarding the spectra of each SN are listed in the Appendix.
The newly collected spectra presented here are part of our FLEET observational program. These spectra were taken with either LDSS3c, Binospec, IMACS, or the Blue Channel spectrographs \citep{schmidt89} on the MMT 6.5-m telescope. We reduced these spectra using standard IRAF routines with the {\tt twodspec} package. The spectra were bias-subtracted and flat-fielded, the sky background was modeled and subtracted from each image, and the one-dimensional spectra were optimally extracted, weighted by the inverse variance of the data. Wavelength calibration was applied using an arc lamp spectrum taken near the time of each science image. Relative flux calibration was applied to each spectrum using a standard star taken close to the time of observation. Lastly, the spectra were corrected for Galactic extinction in the same way as the public spectra described above.
Spectroscopically, LSNe are not a uniform sample but span a wide range of features. Some LSNe are blue like SLSNe, while others are red and closely resemble SNe Ic. Representative spectra of LSNe are shown in Figure~\ref{fig:spectra}, where we include the closest spectrum to peak for each LSN that has a spectrum taken within $\pm 10$ days of peak. We find that LSNe appear to form a smooth continuum, from blue and SLSN-like to red and SN Ic-like, without a clear threshold or distinction that allows us to separate them neatly into either class. An in-depth study of the spectral properties of LSNe will be presented in a future paper.
\section{Light Curve Modeling}\label{sec:modeling}
\begin{deluxetable*}{llll}
\tablecaption{{\tt MOSFiT} Parameter Definitions \label{tab:parameters}}
\tablehead{\colhead{Parameter} & \colhead{Prior} & \colhead{Units} & \colhead{Definition}}
\startdata
$M_{\text{ej}}$ & $[0.1, 100]$ & M$_\odot$ & Ejecta mass \\
$f_{\text{Ni}}$ & $\log((0, 0.5])$ & & Nickel mass as a fraction of the ejecta mass \\
$v_{\text{ej}}$ & $\log([10^3, 10^5])$ & km s$^{-1}$ & Ejecta velocity \\
$M_{\text{NS}}$ & $1.7 \pm 0.2$ & M$_\odot$ & Neutron star mass \\
$P_{\text{spin}}$ & $[0.7, 30]$ & ms & Magnetar spin \\
$B_{\perp}$ & $\log((0, 15])$ & $10^{14}$ G & Magnetar magnetic field strength \\
$\theta_{\text{BP}}$ & $[0, \pi/2]$ & rad & Angle of the dipole moment \\
$t_{\text{exp}}$ & $[0, 200]$ & days & Explosion time relative to first data point \\
$T_{\text{min}}$ & $[3000, 10000]$ & K & Photosphere temperature floor \\
$\lambda$ & $[2000, 6000]$ & \AA & Flux below this wavelength is suppressed \\
$\alpha$ & $[0, 5]$ & & Slope of the wavelength suppression \\
$n_{H,\text{host}}$ & $\log([10^{16},10^{23}])$ & cm$^{-2}$ & Column density in the host galaxy \\
$\kappa$ & $[0.01, 0.34]$ & cm$^2$ g$^{-1}$ & Optical opacity \\
$\kappa_{\gamma}$ & $\log([0.01, 0.5])$ & cm$^2$g$^{-1}$& Gamma-ray opacity \\
$\sigma$ & $[10^{-3}, 10^2]$ & & Uncertainty required for $\chi^2_r=1$ \\
\enddata
\tablecomments{Parameters used in the {\tt MOSFiT} model, their priors, units, and definitions. Priors noted in $\log$ have a log-flat prior, priors without it are flat in linear space, and priors with a center and error bars have a Gaussian distribution.}
\end{deluxetable*}
To explore the properties of LSNe, and to enable a robust comparison to SLSNe and SNe Ic/Ic-BL, we model the light curves of all the LSNe, SLSNe, and SNe Ic/Ic-BL in our sample in a uniform way. To achieve this we use the Modular Open-Source Fitter for Transients ({\tt MOSFiT}) package, a flexible Python code that uses the {\tt emcee} \citep{Foreman13} Markov chain Monte Carlo (MCMC) implementation to fit the light curves of transients using a variety of different power sources \citep{guillochon18}. Since SNe Ic/Ic-BL are known to be powered by radioactive decay \citep{Filippenko97, Taddia19_broadlined}, and SLSNe are likely powered by a magnetar central engine \citep{Kasen10,Woosley10,Nicholl17_mosfit}, we model all light curves using a combined magnetar central engine plus radioactive decay model (designated {\tt slsnni}). By fitting all SNe with the same model we can evaluate which power source best reproduces their light curves, without imposing our own assumptions based on properties such as peak magnitude or spectral classification.
The {\tt MOSFiT} setup for the magnetar model \citep{Kasen10,Woosley10} is described in detail in \citet{Nicholl17_mosfit}, while the radioactive component implementation is taken from \cite{Nadyozhin94}. The magnetar model imposes a constraint that penalizes models in which the total kinetic energy is higher than the magnetar energy plus neutrino energy, minus radiative losses. In the {\tt slsnni} implementation we use here, we relax this constraint to allow for additional energy from the radioactive decay component, effectively allowing the total kinetic energy to be higher. The extra energy budget corresponds to the maximum energy released by burning pure helium into nickel, or $\sim 10^{52} \times (M_{\rm Ni} / {\rm M}_\odot)$ erg, where $M_{\rm Ni}$ is the total nickel mass synthesized during the explosion.
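As an illustration of the two energy inputs combined in {\tt slsnni}, the following sketch evaluates the raw magnetar spin-down and $^{56}$Ni/$^{56}$Co decay luminosities using standard analytic approximations; it omits the diffusion, leakage, and SED treatment of the full {\tt MOSFiT} model, and the engine parameters are arbitrary examples rather than fits:
\begin{verbatim}
import numpy as np

DAY = 86400.0

def magnetar_lum(t, P_ms, B14):
    """Vacuum-dipole spin-down input [erg/s]; standard approximations
    (I = 1e45 g cm^2, R = 10 km): E_p ~ 2e52 P^-2 erg,
    t_p ~ 4.7 d P^2 B^-2."""
    E_p = 2.0e52 / P_ms**2
    t_p = 4.7 * DAY * P_ms**2 / B14**2
    return (E_p / t_p) / (1.0 + t / t_p)**2

def nickel_lum(t, M_ni):
    """56Ni + 56Co decay input [erg/s] for M_ni in solar masses
    (standard Nadyozhin 1994 coefficients)."""
    return M_ni * (6.45e43 * np.exp(-t / (8.8 * DAY)) +
                   1.45e43 * np.exp(-t / (111.3 * DAY)))

t = np.linspace(1.0, 200.0, 400) * DAY
L_mag = magnetar_lum(t, P_ms=4.0, B14=1.0)   # illustrative engine
L_ni = nickel_lum(t, M_ni=0.3)               # illustrative nickel mass
frac = L_mag / (L_mag + L_ni)                # magnetar fraction vs time
print(f"Magnetar fraction at 30 d: {np.interp(30 * DAY, t, frac):.2f}")
\end{verbatim}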
The magnetar model in {\tt MOSFiT} has a modified blackbody SED, where flux bluewards of a cutoff wavelength $\lambda_0$ is suppressed by a factor proportional to $(\lambda / \lambda_0)^{\alpha}$ with a fixed $\alpha = 1$ and a variable $\lambda_0$, in order to account for the UV absorption seen in SLSNe (e.g., \citealt{Yan18}). In our modified {\tt slsnni} model we allow the power law index $\alpha$ of the suppression to vary in addition to $\lambda_0$, so that we can fit all SNe with the same uniform model, regardless of the location or steepness of the suppression. We are thus also able to fit the light curves of SNe that are reddened due to line-blanketing from radioactive decay.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{triangle.pdf}
\caption{Best-fit {\tt MOSFiT} parameters for the SLSNe, SNe Ic, SNe Ic-BL, and LSNe populations. The green circles represent the Gold LSNe sample, while the diamonds represent the Silver sample. Other SNe types are shown as faded circles for comparison. We see that SLSNe and SNe Ic/Ic-BL separate very well in terms of most parameters, while LSNe span almost the entire range of allowable models, further emphasizing their intermediate nature. \label{fig:triangle}}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Pspin_Bfield.pdf}
\caption{Best-fit Magnetar spin period and magnetic field values for the LSNe, SLSNe, SNe Ic, and SNe Ic-BL populations. The upper left corner, where most SLSNe lie, is dominated by powerful fast-spinning magnetars, whereas in the bottom right corner, populated by SNe Ic/Ic-BL, the magnetar power is low and the observed optical emission is dominated by radioactive decay. Some LSNe appear powered by magnetars similar to those in SLSNe, some have intermediate power magnetars, and some have weak or no magnetar contribution. \label{fig:magnetar}}
\end{center}
\end{figure}
We retain similar model priors to those \cite{Nicholl17_mosfit} used to model SLSNe, with two modifications to accommodate the wider range of SNe modeled here. First, we impose a conservative upper limit on the total nickel mass fraction of $f_{\rm Ni} < 0.5$, higher than the typical ranges that SNe Ic/Ic-BL reach \citep{Barbarino20, Taddia19_broadlined}. And second, since we are unable to constrain the value for the neutron star mass $M_{\rm NS}$ in any model, we impose a Gaussian prior of $M_{\rm NS} = 1.7 \pm 0.2 $ M$_\odot$, similar to previous studies \citep{Blanchard20}, and motivated by the typical masses of neutron stars \citep{Ozel16}. The actual choice of prior for $M_{\rm NS}$ has no effect on the output parameters since the mass of the neutron star has a negligible effect on the output light curves. In Table~\ref{tab:parameters} we list all the model parameters, their priors, units, and definitions.
We fit the multi-band light curves of all LSNe and list the best-fit values and uncertainties in Table~\ref{tab:mosfit}. The uncertainties presented here represent only the statistical errors on the fits. In Table~\ref{tab:derived} we list additional parameters calculated from the posteriors of the fitted parameters. We measure the peak $r$-band magnitude of each SN from its light curve model, even for the SNe that do not have $r$-band observations available. Table~\ref{tab:derived} also lists an estimated explosion date in MJD and a rise time, defined as the time from explosion date to maximum $r$-band brightness. In the same table, we list estimates for the total kinetic energy, $E_k = (3/10) M_{\rm ej} V_{\rm ej}^2$, and the total nickel mass synthesized in the explosion, $M_{\rm Ni} = f_{\rm Ni} \times M_{\rm ej}$.
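For concreteness, these derived quantities follow directly from the fitted parameters; a minimal Python sketch with hypothetical input values is:
\begin{verbatim}
import numpy as np

# Sketch of the derived quantities; inputs use the units of the
# fitted-parameter table, and the example values are hypothetical.
def derived_parameters(m_ej_msun, v_ej_kms, f_ni):
    MSUN, KM = 1.989e33, 1e5                             # g, cm
    e_k = 0.3 * (m_ej_msun * MSUN) * (v_ej_kms * KM)**2  # E_k = (3/10) M v^2
    m_ni = f_ni * m_ej_msun                              # M_Ni = f_Ni * M_ej
    return e_k, m_ni                                     # erg, Msun

print(derived_parameters(4.1, 6390.0, 0.01))  # ~1e51 erg, 0.04 Msun
\end{verbatim}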
We explore which areas of parameter space LSNe could exist in by generating 10,000 LSNe input light curves based on the priors listed in Table~\ref{tab:parameters} and then selecting the output objects with a peak magnitude $M_r$ between $-19$ and $-20$ mag. We find the input and output distributions to be similar at the $\sim 90$\% level for almost every parameter, meaning that if LSNe exist within our defined priors, we would be able to recover them. The one exception is SNe that have both $P_{\text{spin}}> 9$ ms and $f_{\text{Ni}} < 0.03$, as these all fall below our lower luminosity threshold of $M_r = -19$ and would likely appear to be normal SNe Ic.
\section{Modeling Results}
\label{sec:results}
\begin{figure*}[]
\begin{center}
\includegraphics[width=\columnwidth]{Magnetar_power.pdf}
\includegraphics[width=\columnwidth]{Magnetar_Radioactive_Hist.pdf}
\caption{\textit{Left}: Fractional contribution of the magnetar component to the total output luminosity of the light curve (magnetar plus radioactive decay) for all the SNe in our sample as a function of days since explosion. \textit{Right}: Posterior model distribution for the same fractional magnetar contribution, but integrated over the first 200 days of the SN. We include 150 samples for each SN and normalize the populations by their respective sample sizes. SLSNe appear magnetar dominated and mostly stay as such throughout their evolution, whereas SNe Ic are mostly radioactively dominated. While some LSNe appear magnetar dominated, some seem to be powered entirely by radioactive decay. \label{fig:power}}
\end{center}
\end{figure*}
In Figure~\ref{fig:triangle} we show the distribution of the most relevant physical parameters $P_{\text{spin}}$, $B_{\perp}$, $M_{\text{ej}}$, $v_{\text{ej}}$, and $f_{\text{Ni}}$ for the LSNe, SLSNe, SNe Ic, and SNe Ic-BL populations. In general, we find that SLSNe and SNe Ic/Ic-BL separate well in terms of most parameters, while LSNe span the whole range of allowed parameter space. Some LSNe have magnetar parameters ($P_{\text{spin}}$, $B_{\perp}$) that overlap the SLSNe population, consistent with powerful central engines, while some LSNe overlap the SNe Ic/Ic-BL population. The latter have weak or no evidence for magnetars, but instead appear powered by radioactive decay, as evidenced by their high $f_{\text{Ni}}$ values. The ejecta masses of LSNe span a wide range, from $\sim 1.5$ M$_\odot$ up to $\sim 30$ M$_\odot$. We find the ejecta velocity estimates among all types of SNe to be very similar.
\subsection{Magnetar Parameters}
In order to quantify how different the parameter distributions of LSNe are from those of SLSNe and SNe Ic/Ic-BL, we implement a two-sample Kolmogorov-Smirnov (KS) test. A KS metric of $D = 0.0$ indicates the two populations are drawn from the same distribution, while $D = 1.0$ means there is no overlap between the distributions. We find a KS metric (and $p$ value) for the LSN distribution of $P_{\text{spin}}$ of $D = 0.63\ (< 10^{-3})$ and $D = 0.62\ (< 10^{-3})$ when compared to the SLSNe and SNe Ic/Ic-BL distributions, respectively. This indicates that the spin period distribution of LSNe is not similar to that of either SLSNe or SNe Ic/Ic-BL, as LSNe span a wider range of $P_{\text{spin}}$ values than either population. A similar result is found for the magnetic field strength, where we find values of $D = 0.34\ (< 10^{-3})$ and $D = 0.75\ (< 10^{-3})$ when LSNe are compared to SLSNe and SNe Ic/Ic-BL, respectively. This suggests that the LSNe $B_{\perp}$ distribution is very different from that of SNe Ic/Ic-BL, and still distinct (but less so) from the SLSN population.
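For reference, the statistic can be computed with a standard two-sample KS routine; a minimal sketch with placeholder (not our actual) spin-period arrays is:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

# Sketch of the two-sample KS comparison; the arrays below are
# placeholder spin periods (ms), not the actual fitted values.
p_spin_lsne  = np.array([2.4, 4.2, 7.4, 10.0, 13.2, 20.0, 22.5])
p_spin_slsne = np.array([1.5, 2.1, 2.8, 3.4, 4.6, 5.9, 6.8])
D, p_value = ks_2samp(p_spin_lsne, p_spin_slsne)
print(f"D = {D:.2f}, p = {p_value:.3f}")  # D=0: identical; D=1: disjoint
\end{verbatim}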
In Figure~\ref{fig:magnetar} we focus on the distribution of magnetar parameters ($P_{\text{spin}}$ and $B_{\perp}$). While 90\% of SLSNe have $P_{\text{spin}}$ values $\lesssim 6.9$ ms, and 90\% of SNe Ic/Ic-BL have values of $P_{\text{spin}} \gtrsim 17$ ms (i.e., there is no evidence that they have a rapidly spinning magnetar engine), LSNe span the whole range of $P_{\text{spin}} \approx 2 - 23$ ms. Similarly, while only $\sim 10$\% of SLSNe have spin periods $P_{\text{spin}} \approx 7 - 17$ ms (and 3\% of SNe Ic/Ic-BL), 38\% of LSNe lie in this intermediate range. We note that some LSNe are best fit by magnetars with very strong magnetic fields and slow spin periods; for spin periods this slow the magnetar contribution is negligible, and the magnetic field strength becomes irrelevant to the output light curve.
In Figure~\ref{fig:power} we explore the relative contribution of the magnetar engine component to the total luminosity of the SNe. The left panel of Figure~\ref{fig:power} shows how this magnetar contribution evolves as a function of time. We find that 88\% of SLSNe have a significant magnetar contribution $\gtrsim 80$\% soon after explosion, and all SLSNe have at least some magnetar contribution above $10$\%. On the other hand, 60\% of SNe Ic/Ic-BL have a magnetar contribution $\lesssim 10$\%, and only 31\% have a magnetar contribution $\gtrsim 50$\% in the few days after explosion. LSNe span a wide range of magnetar contributions that overlap with both SLSNe and SNe Ic/Ic-BL: 80\% of LSNe have a magnetar contribution $\gtrsim 10$\%, and 75\% of them have at least a 50\% magnetar contribution.
Along the same lines, in the right panel of Figure~\ref{fig:power} we show the same fractional magnetar contribution but integrated over the first 200 days after explosion for every SN. This histogram includes 150 samples for each SN, one for each realization (or walker) of the model light curves. We mark the threshold that corresponds to $N = 1$ SN, above which we can consider the measurements to be significant. Above this threshold, 92\% of SLSNe samples have a magnetar contribution $> 80$\%, and only 8\% have a contribution $< 10$\%. Conversely, only 5\% of SNe Ic-BL (and no SNe Ic) have a magnetar contribution $> 80$\%; and 45\% and 71\% of SNe Ic-BL and SNe Ic have a magnetar contribution $< 10$\%, respectively. LSNe show a more noticeable bifurcation, where 74\% of samples have a magnetar contribution $> 80$\% and 27\% have a magnetar contribution $< 10$\%.
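The integrated fraction is simply the ratio of the time-integrated magnetar luminosity to the total output; a toy Python sketch with illustrative luminosity curves (not our model outputs) is:
\begin{verbatim}
import numpy as np

# Toy sketch of the time-integrated magnetar fraction over the
# first 200 days; both luminosity curves are illustrative only.
t = np.linspace(0.1, 200.0, 2000)            # days since explosion
L_mag = 1e44 / (1.0 + t / 30.0)**2           # magnetar spin-down (erg/s)
L_rad = 8e42 * np.exp(-t / 111.3)            # Co-56-like decay tail (erg/s)
frac = L_mag.sum() / (L_mag + L_rad).sum()   # uniform grid: sums suffice
\end{verbatim}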
\subsection{Ejecta Parameters}\label{sec:results_mass}
The parameter distributions for nickel mass fraction $f_{\text{Ni}}$ and ejecta mass $M_{\text{ej}}$ show less stark differences between LSNe and the other populations than the magnetar parameters. We find a KS metric (and $p$ value) for the distribution of $f_{\text{Ni}}$ values of LSNe of $D = 0.23\ (0.06)$ and $D = 0.25\ (0.09)$ compared to the SLSNe and SNe Ic/Ic-BL distributions, respectively. In terms of $M_{\text{ej}}$, we find a metric of $D = 0.39\ (< 10^{-3})$ and $D = 0.19\ (0.34)$ for the same populations. With the exception of the distribution of $M_{\text{ej}}$ between LSNe and SLSNe ($p < 10^{-3}$), we cannot rule out the null hypothesis that these populations are drawn from the same distribution based on the KS metric. Finally, we measure values of $D = 0.14\ (0.5)$ and $D = 0.14\ (0.74)$ for the distribution of $v_{\text{ej}}$ values for LSNe compared to the SLSNe and SNe Ic/Ic-BL populations, suggesting that all populations of SNe are consistent with being drawn from the same distribution of $v_{\text{ej}}$ values.
In Figure~\ref{fig:mejecta} we show the values of nickel mass as a function of ejecta mass. The main difference between SLSNe and SNe Ic is not necessarily the nickel mass or even the nickel mass fraction, but rather how well constrained this parameter is. Effectively all SNe Ic/Ic-BL have well-constrained nickel mass fractions, usually $f_{\rm Ni}\lesssim 0.1$, which translates to a total nickel mass of $M_{\rm Ni} \lesssim 0.5$ M$_\odot$. On the other hand, $\sim 50$\% of SLSNe and LSNe have unconstrained nickel mass fractions, with uncertainties on $\log(f_{\rm Ni}) \gtrsim 0.5$. For some SLSNe, the posterior reaches the $f_{\rm Ni} = 0.5$ limit imposed by the prior. In these situations, the light curves are dominated by the magnetar component, and the value of $f_{\rm Ni}$ can therefore not be constrained. As expected, LSNe have nickel masses that span a wide range of possibilities: while some appear magnetar-dominated, others are best fit by a radioactively powered light curve. We find that the values of $f_{\rm Ni}$ for the radioactively dominated LSNe span the same range as SNe Ic/Ic-BL, $f_{\rm Ni} \approx 0.01 - 0.1$.
Figure~\ref{fig:peak_magnitudes} shows how the rise time of LSNe compares to those of SLSNe and SNe Ic/Ic-BL. LSNe occupy a very distinct space in terms of peak magnitude (by definition), but also tend to have intermediate rise times between SLSNe and SNe Ic. The rise times of LSNe span the range from $\sim 20$ to $\sim 65$ days, similar but slightly shorter than the $\sim 20$ to 90 days for SLSNe, but significantly wider than the $\lesssim 30$ days of SNe Ic/Ic-BL. We explore the possibility that this ``broadening'' of allowed rise times is caused by the underlying power source but conclude this does not appear to be the case, since we find no strong correlation between the dominant power source and the rise time of the transient. Instead, the parameter controlling the rise-time difference appears to be the ejecta mass.
The rise times of CCSNe have been shown to correlate with ejecta mass, as higher ejecta masses lead to longer rise times (e.g., \citealt{Dessart16}). This correlation is expected, since the diffusion timescale scales as $(M_{\rm ej} / v_{\rm ej})^{1/2}$, and it is also evident in SLSNe (e.g., \citealt{Nicholl15, Konyves21}). In Figure~\ref{fig:rise_mejecta} we show this trend for all SNe in our sample and see that it appears to also apply to LSNe. None of the fast-evolving LSNe with rise times $< 25$ days have high ejecta masses above $20$ M$_\odot$, and most of the slowly evolving LSNe with rise times $> 30$ days have ejecta masses between $10-40$ M$_\odot$. Out of the 40 LSNe, four (SN\,2019dwa, SN\,2019stc, SN\,2021uvy, and DES16C3cv) appear to deviate from this trend, since they have rise times between 30 and 70 days but ejecta masses $< 6$ M$_\odot$.
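This scaling can be made explicit with an order-of-magnitude estimate of the diffusion time; in the sketch below the numerical prefactor is dropped, so only relative scalings are meaningful:
\begin{verbatim}
import numpy as np

# Order-of-magnitude sketch of t_d ~ sqrt(kappa * M_ej / (v_ej * c));
# the prefactor is dropped and the opacity value is illustrative.
def diffusion_time_days(m_ej_msun, v_ej_kms, kappa=0.1):
    MSUN, C, KM, DAY = 1.989e33, 2.998e10, 1e5, 86400.0
    t_d = np.sqrt(kappa * m_ej_msun * MSUN / (v_ej_kms * KM * C))
    return t_d / DAY

# Doubling M_ej lengthens the rise by sqrt(2) at fixed velocity:
print(diffusion_time_days(5, 8000), diffusion_time_days(10, 8000))
\end{verbatim}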
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{mejecta_nickel_unconstrained.pdf}
\caption{Best-fit nickel mass as a function of ejecta mass for the LSNe, SLSNe, SNe Ic, and SNe Ic-BL populations. For clarity, we exclude the SLSNe with unconstrained nickel mass fractions (i.e., those for which even $f_{\rm Ni}=0.5$ still provides a sub-dominant contribution to the light curve). The dashed lines indicate $f_{\rm Ni}=0.01, 0.1, 1$. LSNe mostly occupy a similar range to SNe Ic/Ic-BL, but also overlap the parameter space occupied by SLSNe. The higher luminosities of LSNe compared to SNe Ic/Ic-BL indicate an additional contribution from magnetar engines. \label{fig:mejecta}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{peak_magnitudes.pdf}
\caption{Rise time as a function of peak $r$-band absolute magnitude for the LSNe, SLSNe, SNe Ic, and SNe Ic-BL populations. We find that, by definition, LSNe occupy a very clear space in terms of peak magnitude, but they also have intermediate rise times between those of SLSNe and SNe Ic/Ic-BL. \label{fig:peak_magnitudes}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{rise_mejecta.pdf}
\caption{Ejecta mass as a function of rise time for the LSNe, SLSNe, SNe Ic, and SNe Ic-BL populations. We see that, as previously established for SLSNe (e.g., \citealt{Nicholl15, Konyves21}) and SNe Ic/Ic-BL (e.g., \citealt{Dessart16}), LSNe with longer rise times tend to have larger ejecta masses. \label{fig:rise_mejecta}}
\end{center}
\end{figure}
In Figure~\ref{fig:mass_distribution} we show the pre-explosion mass distribution for the LSNe, SLSNe, SNe Ic, and SNe Ic-BL populations, calculated by summing the posteriors for the ejecta mass and neutron star mass of each SN. The histogram includes 150 samples for each SN, one for each model realization (or walker). We indicate with a shaded region the threshold where the number of samples is equivalent to one SN, below which the measurements are not significant. The distribution of LSNe pre-explosion masses is intermediate to those of SLSNe and SNe Ic/Ic-BL. This is particularly evident at the high mass end, where the populations appear clearly distinct. LSNe extend to masses of $\sim 30$ M$_\odot$, higher than the $\sim 20$ M$_\odot$ limit for SNe Ic/Ic-BL, but not as high as SLSNe, which extend up to $\sim 40$ M$_\odot$. While SLSNe have a sharp drop off at the low-mass end below $\sim 2$ M$_\odot$, the mass distribution for LSNe extends as low as those of SNe Ic/Ic-BL, down to $\sim 1.5$ M$_\odot$. To quantify this distinction, we fit a power law of the form $d N / d \log M \propto M ^ \alpha$ to the mass distribution above $10$ M$_\odot$ and up to where the distributions reach the $1$ SN threshold, and find best-fit values of $\alpha$ for the different populations of: $\alpha = -1.26 \pm 0.04$ (SLSNe), $\alpha = -1.83 \pm 0.07$ (LSNe), $\alpha = -3.21 \pm 0.23$ (SN Ic), $\alpha = -3.02 \pm 0.26$ (SN Ic-BL). LSNe have a steeper slope than SLSNe, but shallower than SNe Ic/Ic-BL, suggesting their progenitors might be a mix of SLSN-like and SN Ic-like. The value we obtain for SLSNe is fully consistent with the $\alpha = -1.26 \pm 0.06$ found by \cite{Blanchard20}. We find the peaks of the distributions to be $\sim 3.5$ M$_\odot$ (SNe Ic-BL), $\sim 4.0$ M$_\odot$ (SNe Ic), $\sim 5.6$ M$_\odot$ (LSNe), and $\sim 6.6$ M$_\odot$ (SLSNe). Our estimate for the peak of the SLSNe distribution is higher than the $\sim 4$ M$_\odot$ found by \cite{Blanchard20} as a result of our models allowing for an additional energy component from radioactive decay that is not present in their models.
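The slope measurement reduces to a linear regression in log-log space; a minimal sketch on a synthetic mass sample is:
\begin{verbatim}
import numpy as np

# Sketch of fitting dN/dlogM ~ M**alpha above 10 Msun; the mass
# sample here is synthetic and used only to illustrate the procedure.
masses = 10**np.random.default_rng(0).uniform(0.3, 1.6, 1000)  # Msun
counts, edges = np.histogram(np.log10(masses), bins=10)
centers = 0.5 * (edges[:-1] + edges[1:])           # log10(M) bin centers
keep = (10**centers > 10) & (counts > 1)           # >10 Msun, above 1-SN level
alpha, norm = np.polyfit(centers[keep], np.log10(counts[keep]), 1)
\end{verbatim}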
\subsection{Comparison To Other Studies}
We compare our best fit $M_{\rm ej}$ values obtained from {\tt MOSFiT} to the results from several independent studies. \cite{Taddia19_broadlined} and \cite{Barbarino20} fit the bolometric light curves of SNe Ic-BL and SNe Ic, respectively, with an Arnett model \citep{Arnett82} to measure the ejecta masses of these SNe. \cite{Konyves21} used a combination of bolometric light curve models and ejecta velocity measurements to estimate the ejecta mass of SLSNe. \cite{Jerkstrand17_slsn} modeled the spectra of SLSNe with the SUMO spectral synthesis code \citep{Jerkstrand12} to select the best ejecta mass estimates from a grid of models. And finally, \cite{Mazzali16} modeled the spectra of SLSNe using a Monte Carlo spectral synthesis code \citep{Mazzali93} to infer the ejecta masses of SLSNe.
We find our estimates of $M_{\rm ej}$ for SLSNe and SNe Ic/Ic-BL to be in good agreement with values estimated by these studies. Five of the LSNe in this work are included in these studies. We find an ejecta mass for SN\,2011kl of $M_{\rm ej} = 3.9 \pm 1.8$ M$_\odot$, consistent with the estimate of $M_{\rm ej} = 2 - 3$ M$_\odot$ from \cite{Mazzali16}. We estimate $M_{\rm ej} = 9.6^{+7.8}_{-4.0}$ M$_\odot$ for PTF12gty, within $2\sigma$ of the average ejecta mass estimate of $M_{\rm ej} = 20.7 \pm 6.0$ M$_\odot$ found by \cite{Konyves21}, but consistent with their ``Equation 8'' estimate of $M_{\rm ej} = 14.68$ M$_\odot$. The ejecta masses we find for iPTF13dnt ($9.5^{+6.8}_{-4.2}$ M$_\odot$), iPTF16asu ($0.4^{+0.6}_{-0.2}$ M$_\odot$), and iPTF17cw ($4.3^{+2.8}_{-1.7}$ M$_\odot$) are all within $1\sigma$ of the estimates from \cite{Taddia19_broadlined} of $7.0 \pm 2.6$ M$_\odot$, $0.9 \pm 0.1$ M$_\odot$, and $4.5 \pm 1.8$ M$_\odot$, respectively.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{distribution_mass_100.pdf}
\caption{Pre-explosion mass distribution for the LSNe, SLSNe, SNe Ic, and SNe Ic-BL populations, normalized by their respective sample sizes. We estimate the progenitor mass by summing the best fit value of the ejecta mass with the mass of the remnant neutron star. SLSNe have a sharp drop-off at the low-mass end, where SNe Ic peak, and LSNe are intermediate between these two populations. The same trend is seen in the high-mass end: SLSNe extend to very high masses of $\sim 40$ M$_\odot$, LSNe appear to max out at $\sim 30$ M$_\odot$, and SNe Ic/Ic-BL have almost no objects above $\sim 20$ M$_\odot$. Dashed lines are linear fits to the distributions above $10$ M$_\odot$ and below the $N = 1$ SN threshold. The peak in the LSNe distribution at $\sim 2$ M$_\odot$ is driven entirely by two objects with tightly constrained $M_{\text{ej}}$ values (iPTF16asu and SN\,2021lwz). \label{fig:mass_distribution}}
\end{center}
\end{figure}
\begin{longrotatetable}
\begin{deluxetable}{cccccccccccc}
\tablecaption{Fitted Parameters \label{tab:mosfit}}
\tablehead{\colhead{Name} & \colhead{$B_{\perp}$} & \colhead{$\alpha$} & \colhead{$\lambda_0$} & \colhead{$f_{\text{Ni}}$} & \colhead{$\kappa$} & \colhead{$\log{(\kappa_{\gamma})}$} & \colhead{$M_{\text{ej}}$} & \colhead{$P_{\text{spin}}$} & \colhead{$T_{\text{min}}$} & \colhead{$\theta_{\text{PB}}$} & \colhead{$V_{\text{ej}}$} \\
& ($10^{14}$ G) & & (1000 \AA) & & (cm$^2$g$^{-1}$) & (cm$^2$g$^{-1}$) & (M$_\odot$) & (ms) & (1000 K) & (rad) & (1000 km s$^{-1}$)
\startdata
DES14C1rhg & $8.5 \pm 3.2$ & $2.0 \pm 1.7$ & $2.73 \pm 0.55$ & $0.01^{+0.03}_{-0.01}$ & $0.05^{+0.05}_{-0.03}$ & $0.07^{+0.19}_{-0.05}$ & $4.1^{+3.7}_{-2.0}$ & $10.0^{+3.1}_{-5.3}$ & $5.1 \pm 1.54$ & $1.06 \pm 0.34$ & $6.39 \pm 0.55$ \\
DES15C3hav & $6.1^{+3.4}_{-1.6}$ & $0.2 \pm 0.2$ & $4.01 \pm 1.31$ & $0.01^{+0.02}_{-0.0}$ & $0.04^{+0.03}_{-0.02}$ & $0.09^{+0.19}_{-0.06}$ & $8.1^{+8.8}_{-3.6}$ & $2.4^{+2.3}_{-1.3}$ & $7.56 \pm 0.76$ & $0.96 \pm 0.42$ & $6.07 \pm 0.64$ \\
DES16C3cv & $0.8^{+1.0}_{-0.5}$ & $3.2^{+1.0}_{-0.6}$ & $3.43 \pm 0.19$ & $0.01^{+0.05}_{-0.01}$ & $0.28 \pm 0.05$ & $0.01^{+0.0}_{-0.0}$ & $5.2 \pm 1.1$ & $4.2 \pm 1.4$ & $5.04 \pm 1.44$ & $0.29 \pm 0.15$ & $6.52 \pm 0.52$ \\
iPTF13dnt & $0.1^{+2.9}_{-0.1}$ & $2.8 \pm 1.7$ & $4.05 \pm 1.35$ & $0.15^{+0.11}_{-0.07}$ & $0.09^{+0.08}_{-0.05}$ & $0.05^{+0.21}_{-0.03}$ & $9.5^{+6.8}_{-4.2}$ & $20.0^{+7.0}_{-10.8}$ & $3.79^{+0.83}_{-0.51}$ & $0.77 \pm 0.56$ & $14.08 \pm 4.52$ \\
iPTF16asu & $7.9^{+3.0}_{-1.8}$ & $2.5 \pm 0.3$ & $5.49 \pm 0.13$ & $0.01^{+0.02}_{-0.0}$ & $0.05^{+0.06}_{-0.02}$ & $0.03^{+0.04}_{-0.01}$ & $0.4^{+0.6}_{-0.2}$ & $13.2 \pm 1.5$ & $7.78^{+1.45}_{-3.49}$ & $1.02 \pm 0.37$ & $4.02 \pm 0.96$ \\
iPTF17cw & $11.6 \pm 2.9$ & $2.7^{+1.2}_{-0.5}$ & $5.17 \pm 0.31$ & $0.03^{+0.03}_{-0.01}$ & $0.03^{+0.02}_{-0.01}$ & $0.09^{+0.19}_{-0.06}$ & $4.3^{+2.8}_{-1.7}$ & $22.5 \pm 2.9$ & $6.11 \pm 0.53$ & $1.21 \pm 0.3$ & $8.34 \pm 1.24$ \\
OGLE15xl & $0.4^{+1.2}_{-0.3}$ & $2.2 \pm 1.7$ & $3.93 \pm 1.32$ & $0.14^{+0.13}_{-0.07}$ & $0.1^{+0.14}_{-0.06}$ & $0.23 \pm 0.15$ & $10.5 \pm 6.5$ & $10.9^{+11.1}_{-6.0}$ & $5.35^{+2.53}_{-1.37}$ & $0.8 \pm 0.55$ & $12.04 \pm 4.89$ \\
PS15cvn & $0.1^{+0.5}_{-0.1}$ & $3.9 \pm 0.7$ & $4.79 \pm 0.11$ & $0.03^{+0.04}_{-0.02}$ & $0.07 \pm 0.01$ & $0.01^{+0.0}_{-0.0}$ & $3.7^{+2.6}_{-1.6}$ & $19.1 \pm 8.5$ & $6.34 \pm 0.22$ & $0.64 \pm 0.56$ & $9.07 \pm 0.66$ \\
PTF10gvb & $12.5^{+1.8}_{-2.9}$ & $1.5^{+2.3}_{-1.1}$ & $3.33^{+1.43}_{-0.89}$ & $0.07^{+0.06}_{-0.02}$ & $0.08^{+0.05}_{-0.03}$ & $0.21^{+0.18}_{-0.11}$ & $4.2 \pm 1.9$ & $23.2 \pm 4.2$ & $4.85 \pm 0.2$ & $1.22 \pm 0.25$ & $14.11 \pm 1.09$ \\
PTF10iam & $2.6^{+1.0}_{-0.6}$ & $3.4 \pm 0.9$ & $5.83 \pm 0.16$ & $<0.01$ & $0.02 \pm 0.01$ & $0.1^{+0.2}_{-0.07}$ & $0.9^{+1.0}_{-0.4}$ & $13.7 \pm 1.5$ & $6.61 \pm 2.0$ & $1.06 \pm 0.37$ & $3.02 \pm 0.86$ \\
PTF11img & $0.1^{+0.8}_{-0.1}$ & $1.9^{+2.0}_{-0.9}$ & $4.65 \pm 0.67$ & $0.22^{+0.11}_{-0.07}$ & $0.12^{+0.07}_{-0.04}$ & $0.02^{+0.01}_{-0.0}$ & $3.9^{+2.3}_{-1.4}$ & $18.5 \pm 8.7$ & $5.52 \pm 0.44$ & $0.71 \pm 0.55$ & $16.54 \pm 3.12$ \\
PTF12gty & $1.6^{+1.0}_{-0.5}$ & $1.3^{+2.1}_{-1.0}$ & $2.95^{+1.2}_{-0.67}$ & $0.02^{+0.04}_{-0.01}$ & $0.16^{+0.11}_{-0.07}$ & $0.15^{+0.21}_{-0.09}$ & $9.6^{+7.8}_{-4.0}$ & $7.4 \pm 0.8$ & $5.26 \pm 0.33$ & $0.96 \pm 0.41$ & $5.79 \pm 0.49$ \\
PTF12hni & $7.0^{+4.1}_{-2.4}$ & $1.6^{+1.7}_{-0.9}$ & $4.36^{+0.99}_{-1.76}$ & $0.02^{+0.02}_{-0.01}$ & $0.05^{+0.03}_{-0.02}$ & $0.1^{+0.2}_{-0.08}$ & $14.1 \pm 7.8$ & $5.0 \pm 3.2$ & $5.48^{+0.55}_{-0.3}$ & $1.08 \pm 0.41$ & $7.58 \pm 1.63$ \\
SN\,1991D & $9.7 \pm 3.2$ & $2.1 \pm 1.7$ & $3.54 \pm 1.13$ & $0.03^{+0.05}_{-0.02}$ & $0.05 \pm 0.04$ & $0.04^{+0.05}_{-0.03}$ & $5.3^{+7.4}_{-2.3}$ & $3.1^{+4.8}_{-1.9}$ & $4.57^{+0.38}_{-0.25}$ & $1.07 \pm 0.34$ & $10.45 \pm 1.8$ \\
SN\,2003L & $0.2^{+4.2}_{-0.2}$ & $2.2 \pm 1.6$ & $4.15 \pm 1.35$ & $0.12^{+0.14}_{-0.08}$ & $0.09^{+0.13}_{-0.06}$ & $0.08^{+0.22}_{-0.06}$ & $17.2^{+23.9}_{-9.0}$ & $16.6 \pm 9.0$ & $6.58 \pm 2.41$ & $0.8 \pm 0.53$ & $9.24^{+6.22}_{-3.85}$\\
SN\,2007ce & $10.0 \pm 2.7$ & $2.6 \pm 0.2$ & $5.43 \pm 0.07$ & $0.02 \pm 0.01$ & $0.02^{+0.01}_{-0.0}$ & $0.21 \pm 0.15$ & $10.1 \pm 4.7$ & $20.9 \pm 5.6$ & $7.26 \pm 0.17$ & $1.16 \pm 0.31$ & $5.84 \pm 1.0$ \\
SN\,2009cb & $7.2^{+4.2}_{-2.5}$ & $2.2 \pm 1.8$ & $2.99 \pm 0.64$ & $0.03^{+0.04}_{-0.02}$ & $0.04 \pm 0.02$ & $0.06^{+0.2}_{-0.05}$ & $7.1^{+8.4}_{-3.3}$ & $2.8^{+3.3}_{-1.5}$ & $4.74 \pm 1.5$ & $1.07 \pm 0.42$ & $9.59 \pm 1.88$ \\
SN\,2010ay & $0.1^{+3.2}_{-0.1}$ & $3.3 \pm 1.1$ & $4.7 \pm 0.25$ & $0.15^{+0.12}_{-0.07}$ & $0.02^{+0.02}_{-0.01}$ & $0.03^{+0.07}_{-0.02}$ & $8.1^{+6.8}_{-3.6}$ & $20.7^{+6.6}_{-10.8}$ & $5.14 \pm 1.46$ & $0.73 \pm 0.57$ & $8.93 \pm 2.31$ \\
SN\,2011kl & $5.5^{+4.0}_{-2.5}$ & $2.1 \pm 1.9$ & $2.41 \pm 0.34$ & $0.27 \pm 0.14$ & $0.26^{+0.05}_{-0.09}$ & $0.01^{+0.0}_{-0.0}$ & $3.9 \pm 1.8$ & $11.0^{+3.2}_{-2.0}$ & $9.35 \pm 0.49$ & $0.93 \pm 0.44$ & $22.8 \pm 5.69$ \\
SN\,2012aa & $4.7^{+2.2}_{-1.2}$ & $4.1 \pm 0.7$ & $4.84 \pm 0.16$ & $<0.0$ & $0.03^{+0.03}_{-0.01}$ & $0.07^{+0.22}_{-0.05}$ & $26.7^{+19.0}_{-11.1}$ & $6.2 \pm 1.9$ & $6.45 \pm 0.49$ & $1.05 \pm 0.39$ & $2.69 \pm 0.41$ \\
SN\,2013hy & $5.6^{+4.3}_{-2.7}$ & $2.6^{+1.5}_{-0.9}$ & $3.25^{+0.27}_{-0.17}$ & $0.09^{+0.11}_{-0.04}$ & $0.02^{+0.02}_{-0.01}$ & $0.07^{+0.19}_{-0.05}$ & $17.0 \pm 9.6$ & $14.0 \pm 2.9$ & $7.88 \pm 0.42$ & $0.97 \pm 0.44$ & $6.16 \pm 0.71$ \\
SN\,2018beh & $6.1^{+1.8}_{-1.2}$ & $0.1^{+0.2}_{-0.1}$ & $3.07^{+1.7}_{-0.85}$ & $<0.0$ & $0.07^{+0.05}_{-0.03}$ & $0.09^{+0.2}_{-0.06}$ & $10.6^{+8.1}_{-4.1}$ & $1.5^{+1.1}_{-0.5}$ & $4.4 \pm 1.06$ & $1.19 \pm 0.32$ & $5.79 \pm 0.18$ \\
SN\,2018don & $0.7 \pm 0.3$ & $4.6^{+0.3}_{-0.5}$ & $4.59 \pm 0.06$ & $<0.01$ & $0.26 \pm 0.05$ & $0.01^{+0.0}_{-0.0}$ & $6.7 \pm 1.4$ & $7.8 \pm 1.2$ & $3.91^{+0.37}_{-0.15}$ & $0.83 \pm 0.27$ & $5.45 \pm 0.33$ \\
SN\,2018fcg & $4.6^{+1.4}_{-0.9}$ & $0.5^{+1.5}_{-0.3}$ & $3.78 \pm 1.52$ & $<0.01$ & $0.09^{+0.05}_{-0.03}$ & $0.22 \pm 0.13$ & $2.1 \pm 0.9$ & $6.3^{+1.6}_{-1.0}$ & $4.72 \pm 0.13$ & $1.13 \pm 0.33$ & $9.17^{+0.53}_{-0.82}$\\
SN\,2019cri & $6.5^{+4.3}_{-2.7}$ & $2.4 \pm 1.3$ & $5.17 \pm 0.44$ & $0.02^{+0.02}_{-0.01}$ & $0.03^{+0.03}_{-0.02}$ & $0.07^{+0.21}_{-0.05}$ & $25.6^{+19.9}_{-10.3}$ & $12.9 \pm 4.0$ & $6.05 \pm 0.88$ & $0.99 \pm 0.44$ & $3.77 \pm 1.11$ \\
SN\,2019dwa & $2.3^{+2.2}_{-0.9}$ & $1.7^{+2.3}_{-1.4}$ & $3.04^{+1.11}_{-0.74}$ & $0.01^{+0.07}_{-0.01}$ & $0.26 \pm 0.07$ & $0.01^{+0.01}_{-0.0}$ & $2.7 \pm 0.9$ & $12.5 \pm 2.0$ & $4.67 \pm 0.4$ & $0.94 \pm 0.42$ & $8.59 \pm 0.79$ \\
SN\,2019gam & $5.0^{+3.7}_{-2.1}$ & $2.4 \pm 1.7$ & $4.46^{+0.68}_{-0.3}$ & $0.07 \pm 0.04$ & $0.28^{+0.05}_{-0.08}$ & $0.02^{+0.02}_{-0.01}$ & $29.5 \pm 8.4$ & $3.1^{+3.5}_{-2.0}$ & $9.28^{+0.53}_{-1.13}$ & $1.09 \pm 0.36$ & $14.23 \pm 3.68$ \\
SN\,2019hge & $2.3^{+1.0}_{-0.5}$ & $0.2^{+0.2}_{-0.1}$ & $3.69 \pm 1.28$ & $0.01^{+0.01}_{-0.0}$ & $0.05^{+0.04}_{-0.02}$ & $0.06^{+0.21}_{-0.04}$ & $16.7^{+16.0}_{-6.8}$ & $1.7^{+1.4}_{-0.7}$ & $5.42 \pm 1.71$ & $1.04 \pm 0.39$ & $3.61 \pm 0.32$ \\
SN\,2019J & $5.2^{+2.3}_{-1.2}$ & $1.1^{+1.9}_{-0.8}$ & $2.63^{+1.31}_{-0.48}$ & $<0.0$ & $0.06^{+0.06}_{-0.03}$ & $0.06^{+0.19}_{-0.05}$ & $13.7^{+11.7}_{-5.7}$ & $1.2^{+0.7}_{-0.4}$ & $9.29^{+0.5}_{-0.79}$ & $1.01 \pm 0.4$ & $4.06 \pm 0.95$ \\
SN\,2019moc & $0.1^{+0.4}_{-0.0}$ & $4.2 \pm 0.7$ & $4.61 \pm 0.13$ & $0.1^{+0.06}_{-0.04}$ & $0.02 \pm 0.01$ & $0.05^{+0.18}_{-0.04}$ & $4.4^{+3.0}_{-1.8}$ & $19.9 \pm 8.0$ & $4.26^{+0.58}_{-0.28}$ & $0.63 \pm 0.54$ & $8.08 \pm 1.11$ \\
SN\,2019obk & $3.4^{+2.7}_{-1.3}$ & $0.8^{+2.1}_{-0.7}$ & $3.28^{+1.46}_{-0.92}$ & $0.03^{+0.04}_{-0.01}$ & $0.03^{+0.04}_{-0.02}$ & $0.09^{+0.19}_{-0.06}$ & $14.5^{+20.3}_{-7.9}$ & $8.8 \pm 1.5$ & $6.49^{+1.32}_{-0.71}$ & $0.92 \pm 0.46$ & $6.92 \pm 1.23$ \\
SN\,2019pvs & $1.5^{+3.7}_{-0.8}$ & $1.8^{+2.1}_{-1.4}$ & $3.2 \pm 0.94$ & $0.03^{+0.07}_{-0.02}$ & $0.03^{+0.05}_{-0.02}$ & $0.16^{+0.2}_{-0.11}$ & $16.2^{+16.7}_{-9.0}$ & $9.1^{+3.3}_{-1.5}$ & $8.71 \pm 1.03$ & $0.8 \pm 0.53$ & $4.48^{+2.17}_{-1.04}$\\
SN\,2019stc & $1.5^{+1.2}_{-0.6}$ & $1.1^{+1.9}_{-0.7}$ & $4.34^{+0.89}_{-1.64}$ & $0.02^{+0.1}_{-0.01}$ & $0.24 \pm 0.07$ & $0.01^{+0.01}_{-0.0}$ & $4.0^{+2.1}_{-1.1}$ & $7.7 \pm 1.0$ & $5.51^{+0.52}_{-0.96}$ & $0.92 \pm 0.44$ & $6.71 \pm 1.09$ \\
SN\,2019unb & $2.2^{+1.8}_{-0.7}$ & $0.2^{+0.4}_{-0.1}$ & $3.58 \pm 1.51$ & $0.01^{+0.03}_{-0.01}$ & $0.06^{+0.06}_{-0.03}$ & $0.1^{+0.19}_{-0.07}$ & $15.2^{+12.0}_{-6.5}$ & $1.9 \pm 0.8$ & $8.41 \pm 1.33$ & $0.98 \pm 0.47$ & $4.22 \pm 0.5$ \\
SN\,2019uq & $10.2 \pm 3.5$ & $2.0 \pm 1.8$ & $3.07 \pm 0.85$ & $0.02^{+0.03}_{-0.01}$ & $0.03^{+0.03}_{-0.01}$ & $0.05^{+0.21}_{-0.04}$ & $7.9^{+6.6}_{-3.8}$ & $17.0 \pm 5.3$ & $5.86^{+1.13}_{-1.83}$ & $1.11 \pm 0.36$ & $6.99 \pm 1.26$ \\
SN\,2019wpb & $0.1^{+0.5}_{-0.1}$ & $3.0 \pm 1.5$ & $4.3^{+0.52}_{-0.93}$ & $0.08^{+0.06}_{-0.03}$ & $0.02^{+0.02}_{-0.01}$ & $0.07^{+0.22}_{-0.05}$ & $7.4^{+5.0}_{-3.2}$ & $21.5^{+6.5}_{-9.9}$ & $4.08 \pm 0.81$ & $0.74 \pm 0.57$ & $9.88 \pm 2.0$ \\
SN\,2021lei & $0.1^{+1.0}_{-0.1}$ & $2.7 \pm 1.4$ & $4.79 \pm 0.47$ & $0.1 \pm 0.05$ & $0.04^{+0.04}_{-0.02}$ & $0.08^{+0.19}_{-0.06}$ & $7.8^{+4.6}_{-2.8}$ & $20.1 \pm 8.4$ & $4.31 \pm 1.0$ & $0.73 \pm 0.58$ & $12.8 \pm 4.19$ \\
SN\,2021lwz & $4.0^{+1.7}_{-0.8}$ & $3.8 \pm 1.0$ & $3.59 \pm 0.36$ & $0.01^{+0.02}_{-0.0}$ & $0.03^{+0.02}_{-0.01}$ & $0.1^{+0.2}_{-0.08}$ & $0.4^{+0.6}_{-0.2}$ & $11.0 \pm 2.0$ & $6.55 \pm 2.45$ & $1.04 \pm 0.37$ & $2.98 \pm 0.31$ \\
SN\,2021uvy & $1.5^{+1.3}_{-0.5}$ & $0.2^{+0.6}_{-0.2}$ & $2.89^{+1.67}_{-0.71}$ & $0.01^{+0.06}_{-0.0}$ & $0.25 \pm 0.07$ & $0.01^{+0.0}_{-0.0}$ & $3.8 \pm 1.0$ & $5.9 \pm 1.0$ & $9.64^{+0.27}_{-0.78}$ & $0.91 \pm 0.48$ & $5.5 \pm 0.46$ \\
SN\,2021ybf & $0.2^{+2.3}_{-0.2}$ & $1.8^{+2.1}_{-1.4}$ & $3.44 \pm 1.01$ & $0.08^{+0.08}_{-0.04}$ & $0.05^{+0.05}_{-0.03}$ & $0.07^{+0.2}_{-0.05}$ & $20.1^{+16.9}_{-10.4}$ & $18.5 \pm 8.2$ & $4.39 \pm 1.02$ & $0.74 \pm 0.54$ & $6.15 \pm 1.01$ \\
\enddata
\tablecomments{Full list of the best-fit parameters from the {\tt MOSFiT} model to all the LSNe in our sample. The definitions and priors of all parameters are given in Table~\ref{tab:parameters}. Additional parameters derived from these posteriors are listed in Table~\ref{tab:derived}. The only parameter excluded from this table is the mass of the neutron star $M_{\text{NS}}$, since it is effectively equal to the prior for all objects.}
\end{deluxetable}
\end{longrotatetable}
\subsection{Summary}
We determined that LSNe have ejecta masses in between those of SLSNe and SNe Ic/Ic-BL, and magnetar parameters ($P_{\text{spin}}$, $B_{\perp}$) that span the entire range of allowed parameter space, emphasizing their intermediate nature and the contribution to their luminosity from both magnetar engines and radioactive decay. While SLSNe appear to have fast spins and strong magnetic fields, SNe Ic/Ic-BL have weak or no magnetars. This agrees with the idea that SLSNe are powered by a magnetar central engine, whereas there is no evidence for a significant magnetar contribution in SNe Ic/Ic-BL. In terms of their pre-explosion masses, LSNe extend to higher masses than SNe Ic/Ic-BL, but not as massive as SLSNe, and while SLSNe have a sharp drop off at the low-mass end, the ejecta masses of LSNe extend as low as those of SNe Ic/Ic-BL. We find that LSNe tend to be powered either by an over-abundant production of $^{56}$Ni or by weak magnetar engines.
\begin{deluxetable*}{cccccccc}
\tablecaption{Additional Parameters \label{tab:derived}}
\tablehead{\colhead{Name} & \colhead{Absolute $r$-Mag} & \colhead{Explosion Date} & \colhead{Rise Time}& $A_{V, \text{host}}$ & $E_{k}$ & $M_{\text{Ni}}$ & WAIC \\
& & (MJD) & (Days) & (mag) & ($10^{51}$ erg) & (M$_\odot$) & }
\startdata
DES14C1rhg & $-19.40^{+0.03}_{-0.02}$ & $56983.2 \pm 2.2$ & $21.0 \pm 1.6$ & $<0.05$ & $1.3^{+1.4}_{-0.6}$ & $0.04^{+0.15}_{-0.03}$ & 51 \\
DES15C3hav & $-19.35 \pm 0.04$ & $57303.1 \pm 1.0$ & $24.6 \pm 1.0$ & $0.71 \pm 0.23$ & $1.7^{+1.9}_{-0.8}$ & $0.08^{+0.13}_{-0.06}$ & 64 \\
DES16C3cv & $-19.62 \pm 0.03$ & $57578.1 \pm 2.9$ & $57.2 \pm 1.9$ & $<0.11$ & $1.3 \pm 0.4$ & $0.06^{+0.27}_{-0.05}$ & 105 \\
iPTF13dnt & $-19.40^{+0.07}_{-0.28}$ & $56529.3^{+9.9}_{-27.2}$ & $26.0 \pm 19.3$ & $<0.06$ & $11.8^{+16.7}_{-7.4}$ & $1.28^{+1.09}_{-0.43}$ & 16 \\
iPTF16asu & $-20.46 \pm 0.10$ & $57513.4 \pm 1.1$ & $10.1 \pm 1.2$ & $<0.02$ & $0.04^{+0.09}_{-0.02}$ & $0.003^{+0.007}_{-0.002}$ & 135 \\
iPTF17cw & $-19.40^{+0.03}_{-0.02}$ & $57754.1 \pm 0.7$ & $13.9 \pm 1.1$ & $<0.02$ & $1.8^{+1.5}_{-0.8}$ & $0.14^{+0.06}_{-0.09}$ & 32 \\
OGLE15xl & $-19.36 \pm 0.25$ & $57320.8 \pm 2.9$ & $36.3 \pm 5.2$ & $<0.21$ & $8.7^{+17.4}_{-6.1}$ & $1.65^{+0.55}_{-0.88}$ & 34 \\
PS15cvn & $-19.64 \pm 0.04$ & $57326.6 \pm 0.4$ & $15.8 \pm 0.3$ & $<0.01$ & $0.8^{+0.2}_{-0.1}$ & $0.14 \pm 0.09$ & 116 \\
PTF10gvb & $-19.10 \pm 0.04$ & $55314.7 \pm 0.9$ & $17.9 \pm 0.9$ & $<0.04$ & $5.0 \pm 2.4$ & $0.31 \pm 0.06$ & 70 \\
PTF10iam & $-20.10 \pm 0.03$ & $55338.3^{+1.0}_{-1.7}$ & $13.7 \pm 1.8$ & $<0.04$ & $0.05^{+0.12}_{-0.03}$ & $0.005^{+0.011}_{-0.003}$ & 56 \\
PTF11img & $-19.27 \pm 0.03$ & $55743.9 \pm 1.2$ & $19.0 \pm 1.1$ & $<0.04$ & $6.5^{+6.9}_{-3.4}$ & $0.84^{+0.2}_{-0.11}$ & 52 \\
PTF12gty & $-19.86 \pm 0.03$ & $56071.0 \pm 2.1$ & $61.3 \pm 2.2$ & $<0.09$ & $1.9^{+1.9}_{-0.8}$ & $0.18^{+0.46}_{-0.15}$ & 92 \\
PTF12hni & $-19.94 \pm 0.04$ & $56126.1 \pm 5.3$ & $26.3 \pm 5.4$ & $<0.08$ & $4.8^{+4.1}_{-2.7}$ & $0.35^{+0.09}_{-0.16}$ & 71 \\
SN\,1991D & $-20.10^{+0.34}_{-0.58}$ & $48263.7 \pm 6.8$ & $15.3 \pm 9.0$ & $<0.06$ & $3.6^{+4.5}_{-1.8}$ & $0.2 \pm 0.11$ & 21 \\
SN\,2003L & $-19.50 \pm 0.20$ & $52635.9^{+3.2}_{-9.9}$ & $47.4 \pm 15.2$ & $<0.33$ & $10.4^{+21.5}_{-8.3}$ & $1.97^{+1.64}_{-0.63}$ & -3 \\
SN\,2007ce & $-19.33 \pm 0.10$ & $54200.1 \pm 5.2$ & $18.2 \pm 5.8$ & $<0.02$ & $2.0 \pm 1.0$ & $0.21 \pm 0.05$ & 140 \\
SN\,2009cb & $-20.33 \pm 0.32$ & $54881.7^{+6.3}_{-10.0}$ & $16.4 \pm 8.8$ & $0.04^{+0.62}_{-0.04}$ & $4.3^{+4.5}_{-2.4}$ & $0.21^{+0.28}_{-0.18}$ & 33 \\
SN\,2010ay & $-19.87 \pm 0.08$ & $55249.8 \pm 0.5$ & $21.2 \pm 1.7$ & $<0.05$ & $3.7^{+6.5}_{-2.2}$ & $1.19 \pm 0.13$ & 14 \\
SN\,2011kl & $-19.73 \pm 0.10$ & $55903.8 \pm 2.9$ & $16.0 \pm 1.8$ & $<0.06$ & $12.1^{+17.3}_{-7.2}$ & $1.06^{+1.05}_{-0.68}$ & 11 \\
SN\,2012aa & $-19.86 \pm 0.06$ & $55921.0 \pm 3.1$ & $37.9 \pm 3.9$ & $<0.04$ & $1.1^{+1.0}_{-0.4}$ & $0.09^{+0.1}_{-0.05}$ & 65 \\
SN\,2013hy & $-20.00 \pm 0.04$ & $56515.9 \pm 2.2$ & $26.8 \pm 1.6$ & $<0.03$ & $3.8 \pm 2.5$ & $1.76^{+0.25}_{-0.51}$ & 109 \\
SN\,2018beh & $-19.78 \pm 0.01$ & $58212.9 \pm 0.4$ & $37.8 \pm 0.4$ & $<0.03$ & $2.1^{+1.7}_{-0.9}$ & $0.03^{+0.03}_{-0.01}$ & 211 \\
SN\,2018don & $-19.06 \pm 0.01$ & $58198.6 \pm 1.0$ & $74.1 \pm 1.0$ & $0.01^{+0.33}_{-0.01}$ & $1.2 \pm 0.3$ & $0.02^{+0.06}_{-0.01}$ & 967 \\
SN\,2018fcg & $-20.28 \pm 0.03$ & $58336.8 \pm 0.8$ & $20.3 \pm 0.7$ & $<0.02$ & $1.0^{+0.6}_{-0.3}$ & $0.01^{+0.02}_{-0.0}$ & 235 \\
SN\,2019cri & $-19.10^{+0.08}_{-0.05}$ & $58560.4 \pm 2.5$ & $47.0 \pm 2.9$ & $<0.08$ & $2.1^{+2.8}_{-1.2}$ & $0.52 \pm 0.23$ & 54 \\
SN\,2019dwa & $-19.21 \pm 0.03$ & $58576.2 \pm 1.1$ & $34.7 \pm 1.5$ & $<0.03$ & $1.2 \pm 0.5$ & $0.04^{+0.21}_{-0.04}$ & 79 \\
SN\,2019gam & $-19.90^{+0.13}_{-0.08}$ & $58607.1^{+4.1}_{-7.7}$ & $55.6 \pm 6.7$ & $<0.09$ & $36.3 \pm 21.3$ & $2.15 \pm 1.3$ & 52 \\
SN\,2019hge & $-19.87 \pm 0.02$ & $58625.6 \pm 1.1$ & $58.9 \pm 1.1$ & $0.51 \pm 0.19$ & $1.3^{+1.2}_{-0.5}$ & $0.1^{+0.2}_{-0.07}$ & 145 \\
SN\,2019J & $-19.87 \pm 0.11$ & $58487.1 \pm 3.0$ & $41.5 \pm 3.0$ & $<0.08$ & $1.3^{+1.4}_{-0.5}$ & $0.03^{+0.04}_{-0.02}$ & 28 \\
SN\,2019moc & $-19.06 \pm 0.04$ & $58691.9^{+0.1}_{-0.2}$ & $15.4 \pm 0.6$ & $<0.02$ & $1.7^{+1.6}_{-0.8}$ & $0.43 \pm 0.03$ & 81 \\
SN\,2019obk & $-20.06 \pm 0.05$ & $58689.1 \pm 1.0$ & $37.3 \pm 0.9$ & $<0.04$ & $4.2^{+6.6}_{-2.5}$ & $0.55 \pm 0.51$ & 45 \\
SN\,2019pvs & $-19.58 \pm 0.10$ & $58678.6^{+4.7}_{-7.7}$ & $53.8 \pm 7.6$ & $<0.05$ & $1.9^{+5.5}_{-1.2}$ & $0.58^{+1.33}_{-0.55}$ & 10 \\
SN\,2019stc & $-20.05 \pm 0.04$ & $58735.6 \pm 1.5$ & $48.6 \pm 1.4$ & $<0.04$ & $1.0^{+0.9}_{-0.4}$ & $0.08^{+0.44}_{-0.07}$ & 64 \\
SN\,2019unb & $-20.09 \pm 0.05$ & $58763.7 \pm 1.7$ & $59.9 \pm 2.5$ & $0.53 \pm 0.26$ & $1.5^{+1.5}_{-0.6}$ & $0.17^{+0.47}_{-0.14}$ & 63 \\
SN\,2019uq & $-19.00 \pm 0.06$ & $58482.8 \pm 2.0$ & $22.2 \pm 2.2$ & $<0.05$ & $2.3^{+2.3}_{-1.3}$ & $0.23 \pm 0.15$ & 31 \\
SN\,2019wpb & $-19.14 \pm 0.06$ & $58808.9 \pm 0.9$ & $18.5 \pm 1.6$ & $<0.02$ & $4.1^{+4.5}_{-2.3}$ & $0.63 \pm 0.08$ & 43 \\
SN\,2021lei & $-19.33 \pm 0.04$ & $59326.3 \pm 1.2$ & $19.1 \pm 1.4$ & $<0.05$ & $8.6 \pm 7.0$ & $0.74 \pm 0.16$ & 50 \\
SN\,2021lwz & $-19.70^{+0.02}_{-0.01}$ & $59341.4 \pm 0.1$ & $ 9.7 \pm 0.1$ & $0.50 \pm 0.07$ & $0.02^{+0.04}_{-0.01}$ & $0.002^{+0.007}_{-0.002}$ & 336 \\
SN\,2021uvy & $-19.62 \pm 0.03$ & $59391.6 \pm 1.7$ & $56.5 \pm 2.9$ & $0.62 \pm 0.17$ & $0.7 \pm 0.3$ & $0.02^{+0.26}_{-0.02}$ & 67 \\
SN\,2021ybf & $-19.26 \pm 0.04$ & $59419.1 \pm 4.7$ & $54.1 \pm 4.9$ & $<0.05$ & $4.4^{+4.7}_{-2.5}$ & $1.74^{+0.22}_{-0.33}$ & 81 \\ \enddata
\tablecomments{Parameters derived from the {\tt MOSFiT} light curve models: the peak absolute magnitude in observer $r$-band (after correcting for extinction and cosmological K-correction of $+ 2.5\times \log(1 + z)$), the average rest-frame days from explosion to peak in $r$-band, the intrinsic host extinction in $V$ band, the kinetic energy, the total nickel mass, and the Watanabe-Akaike Information Criterion (WAIC) for fit quality \citep{Watanabe10, Gelman14}.}
\end{deluxetable*}
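As an aside, the K-correction quoted in the notes to Table~\ref{tab:derived} can be sketched as follows, assuming a flat $\Lambda$CDM cosmology for the distance modulus (the cosmological parameters here are illustrative, not necessarily those used in our fits):
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Sketch of M_r = m_r - DM(z) + 2.5*log10(1+z); the cosmology
# parameters below are an assumption for illustration.
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def peak_absolute_r(m_r, z):
    return m_r - cosmo.distmod(z).value + 2.5 * np.log10(1.0 + z)
\end{verbatim}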
\section{LSN sub-groups}
\label{sec:grouping}
LSNe span a wide range of both observational properties and model parameters. Significant differences exist within the LSN population, and LSNe are unlikely to all be a product of the same physical process. In this section, we attempt to group LSNe into distinct sub-groups with uniform spectral and photometric properties, based on the labels designated in Table~\ref{tab:classes}. We label each LSN as ``Superluminous'' if its spectrum is SLSN-like, or ``Normal'' if its spectrum is Ic-like. Similarly, we use a ``Fast'' label if its rise time is $\lesssim 25$ days, like most SNe Ic, or ``Slow'' if it is $\gtrsim 35$ days, like most SLSNe. Given their intermediate nature, we resort to labeling some LSNe as having ``Ambiguous'' spectra, consistent with either a SLSN or a SN Ib/c, or a light curve with a ``Medium'' rise time between 25 and 35 days. This breakdown leads to four main groups: \textit{Slow SLSN-like}, \textit{Fast SLSN-like}, \textit{Slow Ic-like}, and \textit{Fast Ic-like}. We include an \textit{Other} group for the LSNe that do not clearly fit into any of the previous four groups. Of the 40 Gold and Silver LSNe presented here, Slow SLSN-like make up 23\% (N = 9), Fast SLSN-like 18\% (N = 7), Slow Ic-like 15\% (N = 6), Fast Ic-like 30\% (N = 12), and 15\% (N = 6) are in the Other group.
\begin{figure*}[]
\begin{center}
\includegraphics[width=0.9\textwidth]{Slow_SLSNe.pdf}
\caption{Light curves and spectra of the Slow SLSN-like LSNe group. Only $r$-band light curves and their respective {\tt MOSFiT} models are shown. Spectra are arbitrarily scaled. Individual references are listed in the Appendix. \label{fig:slow_slsn}}
\end{center}
\end{figure*}
\subsection{Slow SLSN-like}
This is the group most similar to normal SLSNe (Figure~\ref{fig:slow_slsn}). The objects we group here are: SN\,2013hy, SN\,2018beh, SN\,2019gam, SN\,2019hge, SN\,2019J, SN\,2019obk, SN\,2019pvs, SN\,2019unb, and PTF12gty. These LSNe have spectra that closely resemble those of SLSNe, and broad light curves reminiscent of SLSNe. Nevertheless, they are dimmer than typical SLSNe. This group of LSNe has the most energetic magnetars of the LSNe population, with typical spin periods $\lesssim 10$ ms and magnetar magnetic fields $\gtrsim 10^{14}$ G. Their magnetic fields are similar to those of normal SLSNe, which span the $\sim 0.4-60\times10^{14}$ G range, but their spin periods extend to higher values than the $P_{\rm spin} \lesssim 6$ ms found in most SLSNe, likely the reason for their lower luminosity. The fact that some of these LSNe have magnetars with stronger magnetic fields than some normal SLSNe suggests magnetic field strength might not be the dominant parameter powering SLSNe, particularly for slow spin periods. Compared to the rest of the LSN population, this group has the largest ejecta masses, all $\gtrsim 10$ M$_\odot$. The power behind their light curves is also greatly dominated by the magnetar component, where all but two SNe have a magnetar contribution $> 90$\%; SN\,2013hy and SN\,2019pvs are the exceptions, with a magnetar contribution of $\sim 50$\%.
Almost all LSNe in this group have relatively low ejecta velocities $V_{\rm ej}\lesssim 6000$ km s$^{-1}$, which is likely contributing to their low peak luminosity, as low ejecta velocities increase the diffusion time, distributing the output luminosity over a larger stretch of time. Two exceptions are SN\,2019gam and SN\,2019obk, which have ejecta velocities of $\approx 14000$ km s$^{-1}$ and $\approx 8200$ km s$^{-1}$, respectively. It is possible these two objects are just normal SLSNe but are marginally under-luminous due to either the slow spin period of $\approx 9$ ms for SN\,2019obk, or low magnetic field of $\approx 0.7\times10^{14}$ G for SN\,2019gam.
This group of LSNe appear powered by magnetar engines, have light curve durations similar to those of SLSNe, and have spectra consistent with SLSNe. Therefore, these LSNe can be considered to be the faintest SLSNe known, extending down to $M_r \sim -19.5$ mag, and likely under-luminous due to their slower spin periods than typical SLSNe.
\subsection{Fast SLSN-like}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{Fast_SLSNe.pdf}
\caption{Light curves and spectra for the Fast SLSN-like LSNe group. Only $r$-band light curves and their respective {\tt MOSFiT} models are shown. Spectra are arbitrarily scaled. Individual references are listed in the Appendix. \label{fig:fast_slsn}}
\end{center}
\end{figure*}
We find seven LSNe that appear spectroscopically consistent with being SLSNe, but have light curves that are much faster than normal SLSNe (Figure~\ref{fig:fast_slsn}). These objects are: iPTF16asu, PTF10iam, DES14C1rhg, DES15C3hav, SN\,2011kl, SN\,2021lwz, and SN\,2009cb. Even though the decline time of PTF10iam is relatively long, we include it in this group given that it has a rise time of $\sim 14$ days, among the fastest of all LSNe. All objects in this group have very strong magnetic fields $\gtrsim 4\times10^{14}$ G (higher than normal SLSNe), but slow spin periods $\gtrsim 5$ ms, leading to their low luminosities and fast time-scales.
The light curves of Fast SLSN-like LSNe also appear largely dominated by a magnetar engine, where all have magnetar contributions $> 90$\%, except for SN\,2011kl, which has a $\sim 75$\% magnetar contribution. Almost all the SNe in this group have low ejecta masses $M_{\rm ej} \lesssim 4$ M$_\odot$ (except for SN\,2009cb with $M_{\rm ej} \approx 7$ M$_\odot$), which lies in stark contrast to the LSNe in the Slow SLSN-like group, which have ejecta masses $\gtrsim 10 $ M$_\odot$. This is to be expected, since higher ejecta masses correlate to longer diffusion times, as discussed in \S\ref{sec:results_mass}.
LSNe in this group appear to be an extension of the SLSN population, given their spectra and physical parameters, but they have relatively fast-evolving light curves due to their low ejecta masses.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{Slow_Ic.pdf}
\caption{Light curves and spectra of the Slow Ic-like LSNe group. Only $r$-band light curves and their respective {\tt MOSFiT} models are shown. Spectra are arbitrarily scaled. Individual references are listed in the Appendix. \label{fig:slow_Ic}}
\end{center}
\end{figure*}
\subsection{Slow Ic-like}
The LSNe in this group have red spectra like normal SNe Ic, but light curves that are as broad as those of normal SLSNe (Figure~\ref{fig:slow_Ic}). Six objects fall in this category: SN\,2021ybf, DES16C3cv, SN\,2021uvy, SN\,2003L, SN\,2019stc, and SN\,2019cri. Unlike the previous two groups of Slow and Fast SLSN-like LSNe, the parameters of Slow Ic-like LSNe appear to bifurcate rather than cluster in a particular region of parameter space.
While the light curves of DES16C3cv, SN\,2019cri, SN\,2019stc, and SN\,2021uvy are best fit by an almost pure magnetar model, SN\,2003L and SN\,2021ybf appear entirely radioactively powered. The only distinguishing feature is that the four magnetar powered LSNe all show either a prominent secondary light curve peak, or in the case of SN\,2019cri, a late time flattening that could be indicative of the start of a secondary peak. Neither of the radioactively powered SNe show evidence for a secondary peak.
SN\,2003L and SN\,2021ybf have respective nickel mass fractions of $f_{\rm Ni} \approx 0.12$ and $f_{\rm Ni} \approx 0.08$, which explains why they are brighter than normal SNe Ic, which tend to have values of $f_{\rm Ni} \lesssim 0.04$. Additionally, their respective high ejecta masses of $M_{\rm ej} \approx 17$ M$_\odot$ and $M_{\rm ej} \approx 20$ M$_\odot$ explain their slow evolution.
Conversely, DES16C3cv, SN\,2019stc, SN\,2021uvy, and SN\,2019cri all appear magnetar dominated, but powered by weaker magnetar engines than normal SLSNe. The first three have spin periods between $4-8$ ms and magnetic fields between $4-8\times10^{14}$ G, and SN\,2019cri has a relatively high magnetic field of $\approx 6.5\times10^{14}$ G but a spin period of 13 ms, leading to its relatively low luminosity. We presented an in-depth analysis of SN\,2019stc in \cite{Gomez21_2019stc}, where we found the source to have a SLSN-like light curve, but a spectrum that is identical to those of normal SNe Ic. We concluded that a combination of radioactive decay and a magnetar central engine was required to power the luminous first peak while preserving a red Ic-like spectra. A similar interplay of power sources could be responsible for the luminous nature of these objects, while still preserving Ic-like spectra.
The LSNe in this group divide into two sets. The SNe in one set (SN\,2003L and SN\,2021ybf) appear to be radioactively powered and are more luminous than normal SNe Ic due to their high nickel fractions, while their high ejecta masses lead to their slow evolution. The second set (DES16C3cv, SN\,2019stc, SN\,2021uvy, and SN\,2019cri) are dominated by magnetars, but retain SNe Ic-like spectra. This could be a consequence of an interplay of power sources: while the magnetar component makes the SNe more luminous, the radioactive component makes the spectra appear Ic-like.
\begin{figure*}[]
\begin{center}
\includegraphics[width=0.9\textwidth]{Fast_Ic.pdf}
\caption{Light curves and spectra of the Fast Ic-like LSNe group. Only $r$-band light curves and their respective {\tt MOSFiT} models are shown. Spectra are arbitrarily scaled. Individual references are listed in the Appendix. \label{fig:fast_Ic}}
\end{center}
\end{figure*}
\subsection{Fast Ic-like}
This group is defined as LSNe with SN Ic-like spectra that also evolve rapidly, like SNe Ic, yet are significantly brighter than normal SNe Ic (Figure~\ref{fig:fast_Ic}). This is the most populous group, with twelve SNe: iPTF13dnt, PTF11img, SN\,2019uq, PS15cvn, SN\,2019moc, SN\,2021lei, SN\,2019wpb, SN\,2007ce, SN\,2010ay, SN\,2018fcg, PTF10gvb, and iPTF17cw. We note that SN\,2018fcg peaked at $M_r \sim -20.3$ mag and is technically outside the LSN definition, but we include it in the sample due to its strong spectral resemblance to normal SNe Ic and intermediate nature. These SNe have the slowest spin periods of all LSNe, with $P_{\rm spin} \gtrsim 15$ ms; for spin periods this slow, any magnetar present would contribute little to nothing to the light curves. The main difference in terms of parameters between these SNe and normal SNe Ic is that the former have significantly higher nickel mass fractions ($f_{\rm Ni} \gtrsim 0.07$) compared to the normal SNe Ic population ($f_{\rm Ni} \lesssim 0.04$). Two exceptions are PS15cvn and SN\,2019uq, which both have nickel mass fractions $f_{\rm Ni} \lesssim 0.04$. SN\,2019uq is a Silver LSN with poor photometric coverage, the dimmest object in this group, and possibly just a normal SN Ic. On the other hand, PS15cvn is the third brightest object in this group, likely due to its relatively high nickel mass of $M_{\rm Ni} \approx 0.15$ M$_\odot$.
These LSNe are the most similar to normal SNe Ic in terms of power sources, light curve durations, and spectral properties. These SNe can therefore be considered to be the brightest SNe Ic, where their high luminosity is due to an over-abundance of nickel.
\subsection{Other Objects}
Some LSNe do not neatly fit into any of the previous four groups, and we include them here. There are six SNe in this group: SN\,2019dwa, SN\,2018don, SN\,1991D, SN\,2012aa, OGLE15xl, and PTF12hni (Figure~\ref{fig:other_sne}). The classification of these SNe is uncertain, either due to a lack of data, or because their nature is intermediate to two of the four groups presented here.
First is OGLE15xl: its light curve fits are of poor quality because only one photometric band is available, and its spectrum lies in between those of SLSNe and SNe Ic.
SN\,2019dwa and PTF12hni are intermediate in every sense. Both objects have spectra intermediate to SLSNe and SNe Ic: bluer than those of normal SNe Ic, but not quite as blue as those of normal SLSNe. Both SNe also have an intermediate light curve evolution. Therefore, these do not easily fit into any of the defined groups. \cite{Quimby18} reached the same conclusion regarding the classification of PTF12hni.
SN\,1991D is not spectroscopically similar to any other LSN, since it has clear evidence for helium in its spectra and was previously classified as a Type Ib SN by \cite{Benetti02}. SN\,2012aa is another peculiar object in this category; although it lacks helium, it was previously identified by \cite{Yan17} as having late-time signatures of hydrogen.
Finally, SN\,2018don was presented in \cite{Lunnan20_four} as a SLSN with possibly substantial host galaxy extinction of $A_V \approx 0.4$ mag. Correcting for this amount of extinction would place the SN well into the normal SLSN regime. Due to the uncertainty in the extinction value, we avoid grouping this object into one of the four groups.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{Other.pdf}
\caption{Light curves and spectra of the Other group. Only $r$-band light curves and their respective {\tt MOSFiT} models are shown. Spectra are arbitrarily scaled. Individual references are listed in the Appendix. \label{fig:other_sne}}
\end{center}
\end{figure*}
\subsection{Summary}
In Figure~\ref{fig:magnetar_class} we show how the LSNe, now labeled in terms of their groupings, lie in the $B_{\perp}$ vs $P_{\text{spin}}$ parameter space and how they compare to the SLSNe and SNe Ic/Ic-BL populations. We find that after grouping LSNe into distinct classes, some separation does begin to appear. Mainly, Slow SLSN-like LSNe seem to occupy mostly the same parameter space as SLSNe, having powerful magnetars; whereas Fast Ic-like LSNe overlap mostly with the existing SNe Ic population, with weak or no evidence for magnetars. Slow Ic-like LSNe, on the other hand, still span a wide range of parameter space. And Fast SLSN-like LSNe have strong magnetic fields, but spin periods slower than those of typical SLSNe, placing them outside the parameter space occupied by either population. In conclusion, brighter objects tend to have SLSN-like spectra, while dimmer objects more closely resemble SNe Ic.
The different groups of LSNe separate well in terms of their rise time and peak luminosity. In Figure~\ref{fig:luminosity} we show that Fast SLSN-like LSNe have short rise times and high peak luminosities, Fast Ic-like LSNe also have short rise times but with low peak luminosities, Slow SLSN-like LSNe have long rise times and high peak luminosities, and finally, Slow Ic-like LSNe have long rise times but low peak luminosities. In the same plot, we can see that all objects with a double-peaked light curve lie in the quadrant with long rise times and high peak luminosities, and that all the objects that show possible helium are among the brighter LSNe, most of them also having long rise times.
\section{Observational Properties}\label{sec:properties}
\subsection{Spectral Features}
We explore the presence of helium in LSNe and find that SN\,1991D, SN\,2003L, SN\,2019gam, SN\,2019hge, SN\,2019obk, SN\,2019unb, SN\,2018beh, and SN\,2018fcg show tentative evidence for helium in their spectra. These SNe are all brighter than $M_r \sim -19.5$ mag and have rise times spanning $\sim 15$ to $\sim 60$ days. \cite{Yan20} presented a sample of seven SLSNe with possible detections of helium; three of these had peak magnitudes $M_r \lesssim -20.5$ mag and are therefore outside the LSNe definition, while the remaining four (SN\,2019gam, SN\,2019hge, SN\,2019unb, SN\,2019obk) all lie in the Slow SLSN-like LSNe group. The excitation of helium in SLSNe requires either non-thermal radiation, potentially from a magnetar central engine \citep{Dessart12}, or interaction with helium-rich circumstellar material \citep{Yan15}.
SLSNe tend to show a broad W-shaped absorption feature around 4200\,\AA\ and 4450\,\AA\ that is not seen in SNe Ic \citep{Quimby18}. We find that no LSN shows this distinctive \ion{O}{2} W-shaped feature, which could simply be a result of these SNe not reaching sufficiently high temperatures at early times to excite these ions, or of very rapid cooling of the ejecta.
\subsection{Double-peaked Light Curves}
Four LSNe (SN\,2019stc, DES16C3cv, SN\,2021uvy, and SN\,2019hge) show a double-peaked light curve structure. All four are relatively bright, with magnitudes $M_r \lesssim -19.5$ mag, and have rise times greater than $\sim 45$ days. SN\,2019hge is the only LSN that shows both a double-peaked structure and the presence of helium. The interplay between the distinct power sources of radioactive decay and a magnetar engine may be responsible for the double-peaked structure observed in some LSNe.
\subsection{Relative Rates}
Given the fact that we selected the population of LSNe presented here from non-uniform surveys, we cannot draw definitive conclusions regarding their absolute rate. Nevertheless, we can estimate their relative rates. We apply a volumetric correction to the population of LSNe, SLSNe, and SNe Ic/Ic-BL to account for Malmquist bias following the method of \cite{Cia18} and find that LSNe are more common than SLSNe, but less common than either SNe Ic or SNe Ic-BL.
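A simple $1/V_{\rm max}$-style weighting illustrates the idea behind such a volumetric correction; the sketch below is a schematic stand-in, not the \cite{Cia18} method itself, and the limiting magnitude is illustrative:
\begin{verbatim}
import numpy as np

# Schematic 1/V_max weight for a magnitude-limited survey;
# redshift evolution and K-corrections are ignored here.
M_LIM = 19.0   # illustrative limiting apparent magnitude

def vmax_weight(M_peak):
    d_max_mpc = 10**(0.2 * (M_LIM - M_peak) + 1) / 1e6   # pc -> Mpc
    return 1.0 / ((4.0 / 3.0) * np.pi * d_max_mpc**3)    # Mpc^-3
\end{verbatim}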
We compare our sample of LSNe to the ZTF Bright Transient Survey (BTS; \citealt{Perley20_BTS}) to estimate their observational rates. The BTS aims to spectroscopically classify every bright transient found by ZTF, with a completeness of $\sim 93$\% down to 18.5 mag. At the time of comparison the BTS\footnote{\url{https://sites.astro.caltech.edu/ztf/bts/bts.php}} list of SN-like events has 3636 objects brighter than $m_r = 19$ mag, which have already been pruned to only include SN-like objects with well-sampled light curves and low extinction $A_V < 1$ mag. We find that LSNe are rare, making up only $0.3 \pm 0.1$\% of SN-like transients from the BTS survey, or $0.4 \pm 0.1$\% of all CCSNe observed by this magnitude-limited survey. For comparison, SLSNe make up $\sim 1$\% of all the SN-like transients in the BTS survey.
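The quoted fraction and its uncertainty follow from simple Poisson counting; for example, assuming $\sim 11$ LSNe among the 3636 BTS transients (an illustrative count chosen only to reproduce the quoted rate):
\begin{verbatim}
import numpy as np

# Poisson estimate of the relative rate; n_lsn is an assumed count
# chosen only to reproduce the quoted 0.3 +/- 0.1 per cent.
n_lsn, n_total = 11, 3636
rate, err = n_lsn / n_total, np.sqrt(n_lsn) / n_total
print(f"{100*rate:.1f} +/- {100*err:.1f} %")   # 0.3 +/- 0.1 %
\end{verbatim}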
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Pspin_Bfield_classes_contour.pdf}
\caption{Same as Figure~\ref{fig:magnetar}, but with LSNe labeled based on their distinct groups. We see that LSNe do not perfectly separate into distinct classes, even after sub-dividing them, but some trends do start to appear: Slow SLSN-like LSNe mostly reside in the SLSNe-dominated parameter space; Fast Ic-like LSNe mostly overlap with the existing SNe Ic population; Fast SLSN-like and Slow Ic-like LSNe lie outside the typical parameter space occupied by SLSNe or SNe Ic. \label{fig:magnetar_class}}
\end{center}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{peak_subclasses.pdf}
\caption{Peak $r$-band magnitude as a function of rise time for LSNe, labeled by their distinct groupings. We see that LSNe do appear to separate well in terms of rise time and absolute peak magnitude, forming four distinct quadrants. \label{fig:luminosity}}
\end{center}
\end{figure}
We have presented the first comprehensive study of all the stripped-envelope CCSNe that lie in the intermediate regime between SLSNe and SNe Ic, allowing us to better place SLSNe in the context of CCSNe, to understand how they relate to other SNe, and to probe the nature of their progenitors. We analyzed a sample of 40 luminous supernovae (LSNe), defined as stripped-envelope core-collapse SNe with a peak $r$-band magnitude between $M_r = -19$ and $-20$ mag, bounded by SLSNe on the bright end and by SNe Ic/Ic-BL on the dim end. Observationally, we find that:
\begin{itemize}
\item LSNe have intermediate rise times between $\approx 20 - 65$ days.
\item The spectra of LSNe span a continuum, from blue and SLSN-like to red and SNe Ic-like.
\item Brighter LSNe tend to have SLSN-like spectra, while dimmer LSNe resemble SNe Ic.
\item No LSN shows the distinctive W-shaped \ion{O}{2} absorption feature found in some SLSNe.
\item LSNe are rare and make up $\sim 0.3$\% of all SN-like transients from a magnitude-limited survey, or $\sim 0.4$\% of all observed CCSNe.
\item In absolute terms, LSNe are likely more common than SLSNe, but less common than SNe Ic/Ic-BL.
\item LSNe with possible helium are brighter than $\sim -19.5$ mag.
\item LSNe with a double-peaked light curve are brighter than $\sim -19.5$ mag and have long rise times $\gtrsim 45$ days.
\end{itemize}
We modeled the light curves of all 40 LSNe, as well as a sample of 149 SLSNe and 61 SNe Ic/Ic-BL in a uniform way with a combined magnetar plus radioactive decay model to compare their physical parameters. From our models we find that:
\begin{itemize}
\item Around 25\% of LSNe appear to be radioactively powered, while the rest have at least a 50\% contribution from a magnetar engine.
\item The nickel fractions for the radioactively dominated LSNe span a range of $f_{\rm Ni} \approx 0.01 - 0.1$, similar to SNe Ic/Ic-BL.
\item The pre-explosion masses of LSNe extend to $\sim 30$ M$_\odot$, higher than SNe Ic/Ic-BL, but not as high as SLSNe. The slope of the high-end mass distribution of LSNe is also intermediate to SLSNe and SNe Ic/Ic-BL, as is the peak of their distribution.
\item The pre-explosion masses of LSNe can be as low as those of SNe Ic/Ic-BL, $\sim 1.5$ M$_\odot$.
\item Like SLSNe and SNe Ic/Ic-BL, LSNe with larger ejecta masses have longer rise times.
\end{itemize}
We attempt to separate LSNe into distinct groups and find a natural breakdown in terms of their spectral similarity to either SLSNe or SNe Ic, and whether their light curves evolve fast like SNe Ic or slowly like SLSNe. We present four main groups of LSNe: \textit{Slow SLSN-like}, \textit{Fast SLSN-like}, \textit{Slow Ic-like}, and \textit{Fast Ic-like}. From these sub-groups, we find that:
\begin{itemize}
\item Slow SLSN-like LSNe are the most similar to normal SLSNe. They are less luminous due to their slow spin periods, and long-lasting due to their large ejecta masses. These are effectively the lowest luminosity SLSNe known.
\item Fast SLSN-like LSNe evolve rapidly due to their low ejecta masses, but their strong magnetars make them more luminous than normal SNe Ic.
\item Slow Ic-like LSNe bifurcate into two groups: a population of radioactively powered SNe with higher nickel fractions than typical SNe Ic/Ic-BL; and a magnetar-powered population with magnetars weaker than normal SLSNe, and spectra that still resemble SNe Ic likely due to the presence of radioactive decay.
\item Fast Ic-like LSNe are the most similar to normal SNe Ic but are more luminous due to their relatively high nickel fractions and masses. These can be considered the most luminous SNe Ic known.
\end{itemize}
We have shown that some LSNe are an extension towards either the dimmest SLSNe (Slow SLSN-like LSNe) or the brightest SNe Ic known (Fast Ic-like LSNe), while other LSNe appear to have a more complex nature, borrowing a combination of properties from SLSNe and SNe Ic. We have analyzed in a systematic way all the SNe that occupy the up-to-now mostly unexplored link between SLSNe and SNe Ic. This work opens the door for subsequent studies that will focus on the details of the spectroscopic features of LSNe and how they relate to SLSNe and SNe Ic, as well as an in-depth study of their host galaxies and environments. Looking further ahead, the Legacy Survey of Space and Time \citep{Ivezic19}, scheduled to commence in 2023, will increase the transient discovery rate by about 2 orders of magnitude and will allow us to explore the cutting edges of parameter space, providing a more comprehensive view of the relation between various types of CCSNe.
\acknowledgments
We thank Y.~Beletsky for carrying out some of the Magellan observations. S.G. is partly supported by an STScI Postdoctoral Fellowship. The Berger Time-Domain Group at Harvard is supported by NSF and NASA grants. MN is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No.~948381) and by a Fellowship from the Alan Turing Institute. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. Observations reported here were obtained at the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution. This research has made use of NASA’s Astrophysics Data System. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. IRAF is written and supported by the National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation. Operation of the Pan-STARRS1 telescope is supported by the National Aeronautics and Space Administration under grant No. NNX12AR65G and grant No. NNX14AM74G issued through the NEO Observation Program. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This work makes use of observations from Las Cumbres Observatory global telescope network.
\facilities{ADS, TNS}
\software{Astropy \citep{Astropy18}, extinction \citep{Barbary16}, Matplotlib \citep{matplotlib}, emcee \citep{Foreman13}, NumPy \citep{Numpy}, FLEET \citep{Gomez20_FLEET}, MOSFiT \citep{guillochon18}, PyRAF \citep{science12}, SAOImage DS9 \citep{Smithsonian00}, corner \citep{foreman16}, HOTPANTS \citep{Becker15}, SciPy \citep{Walt11}, PYPHOT (\url{https://github.com/mfouesneau/pyphot}).}
\clearpage
\newpage
\section{Introduction}
To guarantee the security of large projects, companies usually deploy various bug checking tools in the development process. Parfait~\cite{cifuentes2008parfait} is one such static code analysis tool, designed for large-scale codebases to find security and quality defects in C/C++, Java, Python, and PL/SQL languages. In particular, Parfait focuses on defects from the lists of CWE Top 25~\cite{cwetop25} and OWASP Top 10~\cite{owasptop10}.
Cryptographic vulnerabilities caused by misusing Java cryptographic APIs are attracting increasing attention~\cite{acar2016you,meng2018secure,georgiev2012most,egele2013empirical,zuo2019does}. A survey shows that cryptographic API misuses dominate cryptographic vulnerabilities, accounting for 83\% of the ``cryptography issues'' category of the Common Vulnerabilities and Exposures (CVE) database~\cite{lazar2014does}. Cryptographic failures rank as the second risk in the OWASP Top 10 for 2021~\cite{owasptop10}. Java provides basic cryptographic objects (e.g., \lstinline|Cipher|, \lstinline|MessageDigest|) in the Java Cryptography Architecture (JCA) and Java Cryptography Extension (JCE) libraries. Due to complex documentation and the lack of security expertise, developers may not know how to use these APIs correctly~\cite{nadi2016jumping,acar2017comparing}. Parfait supports the detection of simple cryptographic vulnerabilities, such as using broken cipher or hash algorithms.
However, many studies show that cryptographic API misuses are more complicated and involve more security rules~\cite{fahl2012eve,egele2013empirical,nguyen2017stitch,meng2018secure,DBLP:conf/ccs/BosuLYW17,DBLP:journals/tdsc/TianYRTP20,patnaik2019usability}.
Software developers struggle to understand and comply with the implicit and explicit requirements of using cryptographic APIs securely. Violating these requirements may cause various vulnerabilities, including exposing sensitive information, bypassing necessary authentication, etc. Egele et al.~\cite{egele2013empirical} identified six types of cryptographic API misuses that violate different security rules. Nguyen et al.~\cite{nguyen2017stitch} identified thirteen security pitfalls common in Android development, nine of which are Java cryptographic API misuses. Recently, Rahaman et al.~\cite{rahaman2019cryptoguard} summarized sixteen common types of cryptographic API misuses in Java and developed the CryptoGuard tool to detect them. It relies on backward and forward program slicing and introduces several refinement insights to achieve high precision and scalability in large projects.
We extended Parfait with a precise and scalable dataflow analysis to detect Java cryptographic API misuse vulnerabilities. Parfait offers a proprietary compilation process to transform Java source code into the Low Level Virtual Machine (LLVM) intermediate representation (IR).
In particular, we need to develop a precise and scalable cryptographic API misuse detection on top of LLVM IR with Parfait's support. In this work, we identify eleven cryptographic vulnerability types (see Table~\ref{tab:crypto_api_usage}) that can be mapped to backward dataflow analysis problems. By monitoring their different vulnerable usages, we designed corresponding alarm criteria. For example, the alarm criterion for the vulnerability ``Use of a Broken or Risky Cryptographic Algorithm'' is a constant matching given weak algorithm names (e.g., ``DES''), and the alarm criterion for the vulnerability ``Use of Password Hash With Insufficient Computational Effort'' is an iteration count less than 1000.
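For illustration, the snippet below shows two call sites that would trigger these two criteria (the \lstinline|salt| byte array is assumed to be defined elsewhere):
\begin{lstlisting}[language=Java]
// Flagged by constant matching: "MD5" is a broken hash algorithm
MessageDigest md = MessageDigest.getInstance("MD5");
// Flagged by the numeric criterion: iteration count 20 < 1000
PBEParameterSpec spec = new PBEParameterSpec(salt, 20);
\end{lstlisting}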
Cryptographic vulnerabilities are difficult to identify precisely.
Most of these vulnerabilities are caused by assigning inappropriate values (e.g., hard-coded values) to sensitive information (e.g., keys, passwords) that are required to be secret or unpredictable. To detect them, the backward dataflow analysis is used to trace all the sources influencing these security-critical variables in a program. Sources that are constants are treated as hard-coded values and they may be reported as vulnerabilities. However, this technique can cause many false alarms. There are many cases that involve constants in constructing a non-constant value~\cite{afrose2019cryptoapi}. For example, a constant string can represent a file location where the secret key is loaded. Those constants that do not impact security are called \textit{pseudo-influences} in the work of CryptoGuard~\cite{rahaman2019cryptoguard}, which has identified five types of pseudo-influences (e.g., state indicator) and refinement insights to reduce them. In our work, these refinement insights are further adjusted to improve detection precision.
We built our cryptographic vulnerability detection using the Parfait framework and its many built-in program analysis techniques. In particular, we specialize the IFDS analysis, a dataflow analysis framework for interprocedural, finite, distributive subset (IFDS) problems~\cite{reps1995precise}, for cryptographic vulnerability detection. It allows program analysis designers to configure API methods as taint sources or sinks, and then check whether there is a dataflow from a source to a sink. In this work, we first identify the sensitive variables by setting eighteen error-prone Java cryptographic API methods (see Table~\ref{tab:crypto_api_usage}) as sinks. Because ordinary taint analysis does not track constants, we further modify the taint analysis to track all constant sources. Moreover, we refine the taint analysis by eliminating the tracing of pseudo-influences identified by the refinement rules of CryptoGuard. This refinement significantly reduces false alarms and improves efficiency by eliminating unnecessary dataflows. Finally, we improve the scalability by leveraging Parfait's layered framework to break down the interprocedural analysis into method-level pieces and schedule them adaptively.
Our contributions are summarized as follows:
\begin{itemize}
\item We realized the detection for complex Java cryptographic vulnerabilities in \anonymous{Oracle's}{} Parfait static analysis platform. Specifically, we implemented analyses for eleven CWE types caused by misusing eighteen associated Java cryptographic API methods. The detection relies on a backward inter-procedural, flow-, context-, and field-sensitive dataflow analysis with Parfait and LLVM support. We designed different alarm criteria for identifying these cryptographic vulnerabilities.
\item We specialized the backward IFDS taint analysis provided by Parfait to overcome the precision challenge caused by pseudo-influences, security-irrelevant constants used in constructing security-critical values. Inspired by the refinement insights in CryptoGuard~\cite{rahaman2019cryptoguard}, we defined the refinement rules in the form of IFDS dataflow analysis. Significantly, the refined analysis not only reduces false alarms but also improves scalability.
\item We evaluated the precision and scalability of Parfait cryptographic vulnerability detection on a comprehensive cryptographic vulnerability benchmark CryptoAPI-Bench~\cite{afrose2019cryptoapi} and several large-scale industrial applications. The results demonstrate that our detection achieves a high precision (86.62\%) and recall (98.40\%) overall. The precision excluding the path-sensitivity test cases reaches 100\%. Parfait-based cryptographic vulnerability detection achieves 100\% precision on the eleven large-scale applications. The runtime for analyzing the codebases with sizes from 2K to 1321K lines of code ranges from 2 seconds to 36 minutes, with the majority of the codebases analyzed within ten minutes. We further show some noteworthy examples to help readers better understand the practices.
\end{itemize}
In summary, we have developed a precise and scalable analysis to detect cryptographic vulnerabilities. Our work incorporates the false positive reduction refinements of CryptoGuard, the scalable framework of Parfait, and the IFDS analysis on top of LLVM IR. The evaluation results show that our tool works well in an industrial setting.
\section{Background}
This section describes the Java cryptographic API misuses that are the targets of our detection and provides background on CryptoGuard and the \anonymous{Oracle}{industrial} Parfait static analysis framework.
\subsection{Java Cryptographic API misuses}
Table~\ref{tab:crypto_api_usage} lists the targeted Java cryptographic API misuses from the developer's perspective, with the involved API classes and methods and their vulnerable usages. We summarize these Java cryptographic API misuses, which can be detected by backward dataflow analysis, from the existing studies~\cite{rahaman2019cryptoguard,egele2013empirical,nguyen2017stitch}. Compared with CryptoGuard, our detection does not cover a few vulnerability types that require combining forward analysis with backward analysis to detect.
\input{table_crypto_api_usage.tex}
\noindent
The involved error-prone Java classes include:
\smallskip
\noindent
\verb|SecureRandom| \textit{Class}. Any nonce used in cryptography operations should be generated with \verb|SecureRandom| instead of \verb|Random|. Furthermore, setting a static or predictable seed via the constructors or \verb|setSeed| methods\footnote{This API has two different method signatures (setSeed(long seed) and setSeed(byte[] seed)), we skip them for simplicity.} is also considered vulnerable.
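For example, both usages below would be reported (the literal seed is illustrative):
\begin{lstlisting}[language=Java]
Random weak = new Random();            // not cryptographically strong
SecureRandom rnd = new SecureRandom();
rnd.setSeed(123456789L);               // static, predictable seed
\end{lstlisting}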
\smallskip
\noindent
\verb|MessageDigest| \textit{Class}. Passing a broken hash algorithm (e.g., MD5) to \verb|getInstance| method of \verb|MessageDigest| class is vulnerable.
\smallskip
\noindent
\verb|Cipher| \textit{Class}. The method \verb|getInstance| of \verb|Cipher| class is error-prone of using broken ciphers or insecure mode. The specific vulnerable usages include 1) passing a weak cipher algorithm (e.g., \verb|"DES"|); 2) specifying \verb|"ECB"| mode for a block cipher (e.g., \verb|"AES/ECB/NoPadding"|); 3) a block cipher without explicitly specifying a mode (e.g., \verb|"AES"|) because the vulnerable mode \verb|ECB| is used by default.
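The following illustrative calls cover the three vulnerable usages, plus one that would not be flagged by these criteria:
\begin{lstlisting}[language=Java]
Cipher c1 = Cipher.getInstance("DES");               // weak cipher
Cipher c2 = Cipher.getInstance("AES/ECB/NoPadding"); // explicit ECB mode
Cipher c3 = Cipher.getInstance("AES");               // ECB used by default
Cipher ok = Cipher.getInstance("AES/GCM/NoPadding"); // not flagged
\end{lstlisting}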
\smallskip
\noindent
\verb|KeyStore| and \textit{Key Specification Classes}. Many API methods of \verb|KeyStore| and various key specification classes (e.g., \verb|SecretKeySpec|, \verb|PBEKeySpec|) accept secrets (e.g., passwords, key materials) by passing them through the method arguments. Any method call accepting a hard-coded or predictable secret is vulnerable.
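For example, the following illustrative calls pass hard-coded secrets and would be flagged (the \lstinline|in| input stream is assumed to be opened elsewhere):
\begin{lstlisting}[language=Java]
SecretKeySpec keySpec =
    new SecretKeySpec("0123456789abcdef".getBytes(), "AES"); // hard-coded key
KeyStore ks = KeyStore.getInstance("JKS");
ks.load(in, "storePass123".toCharArray());                   // hard-coded password
\end{lstlisting}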
\smallskip
\noindent
\textit{Algorithm Parameter Classes}. Algorithm parameter classes, such as \verb|IvParameterSpec| and \verb|PBEParameterSpec|, work with the initial vector (IV), salt, and PBE iteration count. IVs and salts that are static or predictable can cause vulnerabilities. Besides, the iteration count is required to be not fewer than 1000.
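For example (with illustrative values):
\begin{lstlisting}[language=Java]
IvParameterSpec ivSpec = new IvParameterSpec(new byte[16]); // static, all-zero IV
byte[] salt = "constantsalt".getBytes();                    // constant salt
PBEParameterSpec pbe = new PBEParameterSpec(salt, 100);     // count < 1000
\end{lstlisting}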
\smallskip
\noindent
\verb|javax.net.ssl| \textit{Classes}. The methods of Java classes \verb|TrustManager|, \verb|HostnameVerifier| and \verb|SSLSocketFactory| in \verb|javax.net.ssl| package provide the SSL/TLS services. Issues usually happen when developers override the default methods or skip necessary steps to bypass the proper verification.
\subsection{CryptoGuard}
CryptoGuard~\cite{rahaman2019cryptoguard} applies backward and forward program slicing to discover constant sources and configurations causing Java cryptographic API misuses. It has implemented a set of refined slicing algorithms to achieve high precision.
\noindent
\textbf{False Positive Reduction.}
CryptoGuard adopts five refinement insights to remove the language-specific irrelevant elements that cause false positives. During the analysis process, the state indicators (e.g., \verb|getBytes("UTF-8")|), resource identifiers (e.g., keys of a map), bookkeeping indices (e.g., size parameters of an array), contextually incompatible constants, and constants in infeasible paths are removed by refinements conditioned on Jimple, which is an intermediate representation of Soot~\cite{vallee2010soot}.
\noindent
\textbf{Runtime Improvement.}
The most costly parts of the inter-procedural analysis are usually the iterative orthogonal explorations.
CryptoGuard improves the runtime by limiting the orthogonal explorations to depth 1, whereas deeper orthogonal method calls are handled by the refinement insights.
\subsection{Dataflow Analysis in CryptoGuard and Parfait.}
Parfait supports various static program analyses. An important feature of Parfait that is not present in CryptoGuard~\cite{rahaman2019cryptoguard} is the IFDS analysis framework\footnote{The project Heros~\cite{bodden2012inter} implements the IFDS framework on top of Soot, however, CryptoGuard only uses the FlowAnalysis library in Soot, which does not provide IFDS.}.
\noindent
\textbf{Dataflow Analysis in CryptoGuard.}
CryptoGuard achieves dataflow analysis based on Soot's \verb|FlowAnalysis| library. \verb|FlowAnalysis| includes the intra-procedural dataflow analysis that maintains a flow set and updates it along the dataflow traces.
CryptoGuard iteratively runs its intra-procedural analysis for callee and caller methods on the call graph. However, this design can result in re-exploring callee methods multiple times. To reduce complexity, its implementation clips callee-method exploration at a default depth of 1.
\noindent
\textbf{IFDS in Parfait.}
Parfait contains both a classic dataflow analysis and analysis using the IFDS algorithm.
The IFDS framework reduces the dataflow analysis into a graph reachability problem and performs the analysis by building edges among the data facts (i.e., variables) of certain program points. The reachability can be summarized and queried for the future usage to avoid unnecessary re-analysis as much as possible.
\noindent
\textbf{Parfait Framework.}
To improve scalability, Parfait offers a layered framework to optimize the ensemble of static program analyses. According to the time cost, the analyses are scheduled from the quickest to the slowest. In this way, more bugs can be found with a lower time overhead. Specifically, in cryptographic vulnerability detection, we dynamically schedule the analyses into different layers according to the depth of callers. More details are in Section~\ref{sec:implementation}.
\section{Conclusion and Future Work}
We have implemented a precise and scalable cryptographic vulnerability detection in the framework of Parfait. Leveraging the refinement insights from CryptoGuard, our detection reproduced the high-precision results (few or no false positives) achieved by CryptoGuard. Experiments show 100\% precision for eleven real-world large-scale projects and for CryptoAPI-Bench, excluding the path-sensitivity cases. Our cryptographic vulnerability detection benefits from the IFDS and layered framework of Parfait to achieve good runtime performance for large-scale codebases. The runtime for these eleven large-scale codebases ranges from 2 seconds to 36 minutes, and ten of them can be screened within 10 minutes.
We leverage the backward dataflow analysis for our cryptographic vulnerability detection in Parfait. For future improvement, there remain some cases that require other techniques, such as forward dataflow analysis and symbolic execution, to handle.
Besides, how to improve the detection accuracy of Weak PRNG vulnerabilities by identifying their context is also an interesting future direction.
\section{Acknowledgement}
The Virginia Tech authors have been supported by the National Science Foundation under Grant No. CNS-1929701.
\section{Accuracy Analysis and Real-world Findings}
We have tested our cryptographic vulnerability detection on a comprehensive cryptographic vulnerability benchmark (CryptoAPI-Bench~\cite{afrose2019cryptoapi}) to evaluate its precision and recall. To evaluate its scalability, we further scan eleven large real-world codebases and measure the runtime performance.
\subsection{Accuracy Analysis on CryptoAPI-Bench}
\begin{table}
\centering
\caption{Parfait's evaluation results on 158 test cases from CryptoAPI-Bench. We show the numbers of insecure cases, secure cases, reported cases, false positives (FPs), and false negatives (FNs). The 158 test cases include basic (intra-procedural) cases and inter-procedural cases that require analysis across methods, across classes, field sensitivity, path sensitivity, and heuristics to handle.}
\label{tab:crypto_bench_results}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\textbf{Type} & Test Cases & Insecure & Secure & Reported & FPs & FNs & Precision & Recall \\ \hline
Basic Cases & 27 & 24 & 3 & 24 & 0 & 0 & 100\% & 100\% \\ \hline
Multiple methods & 57 & 56 & 1 & 54 & 0 & 2 & 100\% & 96.43\% \\ \hline
Multiple Classes & 23 & 18 & 5 & 18 & 0 & 0 & 100\% & 100\% \\ \hline
Field Sensitivity & 19 & 18 & 1 & 18 & 0 & 0 & 100\% & 100\% \\ \hline
Path Sensitivity & 19 & 0 & 19 & 19 & 19 & 0 & 0 \% & 0 \% \\ \hline
Heuristics & 13 & 9 & 4 & 9 & 0 & 0 & 100\% & 100\% \\ \hline
Total & \textbf{158} & \textbf{125} & \textbf{33} & \textbf{142} & \textbf{19} & \textbf{2} & \textbf{86.62\%} & \textbf{98.40\%} \\ \hline
\end{tabular}
\end{table}
We have tested Parfait on 158 test cases from CryptoAPI-Bench~\cite{afrose2019cryptoapi}. CryptoAPI-Bench includes various kinds of test units from basic ones to more advanced cases. The basic test cases only require intra-procedural analysis to handle. The advanced cases are inter-procedural ones that require analyses across multiple methods, multiple classes, achieving field sensitivity, and path sensitivity.
The breakdown numbers are shown in Table~\ref{tab:crypto_bench_results}. The overall precision and recall are 86.62\% and 98.40\%, respectively. All the false positive cases come from path sensitivity cases, which verifies that our tool has achieved high precision for the cases excluding path-sensitive ones. We analyzed several examples to further reveal the details of Parfait cryptographic vulnerability detection and discuss possible improvements.
\noindent
\textbf{Impact of Refinement Insights.} We demonstrate the impact of our refinement insights by comparing the Parfait cryptographic vulnerability detection with an intermediate version that lacks the refinement strategies. Table~\ref{tab:impact_heuristics} shows the comparison. Without the refinements, there are 38 false positive cases. Based on our manual analysis, most of the false positives are caused by the pseudo-influences introduced in Section~\ref{sec:refinement}. The refinement insights successfully remove all the false positive cases except for the path-sensitive ones.
\begin{table}[]
\centering
\caption{False positive reduction derived from applying the refinement insights (RIs). We compare Parfait cryptographic vulnerability detection with its intermediate version without the refinement insights.}
\label{tab:impact_heuristics}
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Type} & \# of Vulnerabilities & FPs (w/o RIs) & FPs (with RIs) & Reduction \\ \hline
Basic Cases & 24 & 1 & 0 & 100\% \\ \hline
Multiple Methods & 56 & 3 & 0 & 100\% \\ \hline
Multiple Classes & 18 & 1 & 0 & 100\% \\ \hline
Field Sensitivity & 18 & 2 & 0 & 100\% \\ \hline
Path Sensitivity & 0 & 19 & 19 & 0 \\ \hline
Heuristics & 9 & 12 & 0 & 100\% \\ \hline
Total & \textbf{125} & \textbf{38} & \textbf{19} & \textbf{50\%} \\ \hline
\end{tabular}
\end{table}
\subsection{Evaluation on Real World Projects}\label{sec:real-world-findings}
We evaluated our tool on eleven real-world codebases. Nine of them are \anonymous{Oracle }{}internal products\anonymous{}{ of a large software company}, while two are the open-source projects Spring-Security\footnote{https://github.com/spring-projects/spring-security} and Orchid\footnote{https://github.com/OrchidTechnologies/orchid}. We select these projects because they are security-relevant and use Java cryptographic APIs.
\subsubsection{Runtime and Precision}
Scalability is always one of the most important concerns. We list the runtime performance and the size of the scanned projects in Fig.~\ref{fig:runtime}. The project sizes vary from 2K to 1321K lines of code. The detection is run on a machine with an Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz, 128G memory, and the Oracle Linux Server release 6.9 operating system. The results show that Parfait achieves excellent scalability.
The analysis finishes within 10 minutes for the majority of these projects, including the one with over a million lines of code (Project 10).
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figures/runtime.pdf}
\caption{Runtime performance of Parfait for screening the eleven real-world codebases. The size shows how many lines of code these codebases have. }
\label{fig:runtime}
\end{figure}
Fig.~\ref{fig:real_world_acc} demonstrates the precision results of Parfait and CryptoGuard on the eleven real-world projects. Compared with CryptoGuard, Parfait successfully identified more true positive cases with fewer false positives. Parfait reported 42 vulnerabilities, and all of them are manually verified as true positives; the precision reaches 100\%. We show several real-world vulnerabilities found by Parfait in Section~\ref{sec:findings}. CryptoGuard reported 69 vulnerabilities; however, 47 of them are false positives, so its precision is 31.88\%. We noticed that all the false positives of CryptoGuard are caused by the same issue, namely how CryptoGuard identifies weak Pseudo-random Number Generator (PRNG) cases. We discuss it in the comparison between CryptoGuard and Parfait below.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figures/parfait_cryptoGuard_acc_comparison.pdf}
\caption{The number of vulnerabilities reported by Parfait and CryptoGuard in the eleven real-world industrial applications. The upper area of the x axis shows the true positive alerts, while the bottom area shows the false positive alerts. Nine of them are \anonymous{Oracle }{}internal codebases\anonymous{}{ of a large software company}; two of them are open-source projects.}
\label{fig:real_world_acc}
\end{figure}
\smallskip
\noindent
\textbf{Comparison with CryptoGuard.} As we introduced, CryptoGuard and Parfait leverage the same refined dataflow analysis at a high level to detect cryptographic vulnerabilities. Here, we analyze the differences between their detection results.
\smallskip
\noindent
\textit{Detection for Weak PRNG.}
A major difference between Parfait and CryptoGuard is the way they identify weak PRNG vulnerabilities. After manual analysis, we noticed that all the false positives of CryptoGuard shown in Fig.~\ref{fig:real_world_acc} are weak PRNG cases. To make this clearer, we break down the reported cases into weak PRNG cases and other types of vulnerabilities, as shown in Fig.~\ref{fig:cryptoguard}. Overall, there are 48 weak PRNG vulnerabilities and 21 other types of vulnerabilities reported by CryptoGuard. Among the 48 weak PRNG cases, only 1 is verified as a true positive. By contrast, Parfait reported 0 weak PRNG cases, which indicates that Parfait missed at least 1 weak PRNG vulnerability. This suggests that CryptoGuard uses a more conservative over-approximation for weak PRNG detection, while Parfait reports this type of vulnerability with a more precise approximation.
Listing~\ref{lst:weak_prng} shows a false positive weak PRNG case identified by CryptoGuard. The Java class \verb|Random| is not strong enough; therefore, the cryptographically strong alternative \verb|SecureRandom| is recommended instead. However, our manual verification confirmed that the \verb|Random| instance is not used in a security or cryptographic context. Hence, we consider it a false positive, as there is no impact on security.
CryptoGuard performs an exhaustive search in the codebase to report every \verb|Random| usage regardless of the context. Hence, there are many false positives. In contrast, Parfait applies a stricter criterion for alerting on this type of vulnerability: only when the \verb|Random| instance is passed to the cryptographic APIs covered in Table~\ref{tab:crypto_api_usage} will it be reported as a weak PRNG case. However, this may miss some cases due to the limited coverage of the cryptographic APIs.
It is difficult to accurately determine whether a \verb|Random| instance is used for cryptographic purposes. Identifying more vulnerable usage patterns and involved cryptographic APIs for this type of vulnerability can be future work. To extend the current detection criteria, Parfait provides the flexibility for users to change the sinks, sources, sanitizers, and verifiers of the dataflow analysis through configuration, which makes customizing the vulnerability detection rules easy.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figures/parfait_cryptoGuard_breakdown_2.pdf}
\caption{The number of vulnerabilities reported by Parfait and CryptoGuard in the eleven real-world industrial applications. We break them down into the weak PRNG vulnerabilities and the other vulnerability types. Parfait reported 0 weak PRNG vulnerabilities. CryptoGuard reported 48 weak PRNG vulnerabilities, while only 1 of them is a true positive case. Parfait and CryptoGuard both achieve 100\% precision on the other vulnerability types, excluding the weak PRNG cases.}
\label{fig:cryptoguard}
\end{figure}
\begin{lstlisting}[language=Java,escapechar=\%,caption={A reported weak PRNG vulnerability that is a false positive},captionpos=b,label={lst:weak_prng}]
Random random = new Random();
int rnumber = random.nextInt();
\end{lstlisting}
\noindent
\textit{Exploration Depth for Callee Methods.} Another difference between Parfait and CryptoGuard is the exploration depth for callee methods when performing interprocedural analysis. The interprocedural dataflow analysis requires exploring the encountered callee methods. When meeting recursive callee methods or when the callee stack is too deep, the analysis needs to clip the call graph. CryptoGuard allows users to configure the explored callee stack depth. To make the analysis fast, CryptoGuard sets the default callee stack depth to 1. Parfait deals with this problem by a summarization mechanism (see details in Section~\ref{sec:implementation}). This design avoids clipping the callee stack; the price is that the summarization becomes the most costly part. To make the summarization a one-time cost, it is performed separately in advance and stored for queries when a callee method is encountered in the dataflow analysis.
In Fig.~\ref{fig:real_world_acc}, we observe that CryptoGuard missed 21 cases that have been reported by Parfait. This might be attributed to the limited default callee stack depth that CryptoGuard explores. It can be improved by setting a larger value of the callee stack depth.
\noindent
\textit{Application Perspective vs. Library Perspective.}
Parfait differs from CryptoGuard in the vulnerability definitions in some situations. An example is given in Listing~\ref{lst:no_caller} in the Appendix. If the potentially vulnerable method is not called in the scanned codebase, the concerned field variable is left undetermined and Parfait considers it a non-vulnerable case. However, CryptoGuard applies a forward slicing for this field variable to find out the possible assignments in the initialization. If a constant is assigned in the initialization, CryptoGuard still considers it a vulnerability. If the detected issues are in applications, Parfait's design is superior because it avoids overestimating the vulnerabilities. If they are in libraries, CryptoGuard's design is better because it discovers the potentially buggy methods even though they are not called yet.
\subsubsection{Real-world Findings}\label{sec:findings}
We have reported the detected vulnerabilities to corresponding developers. In terms of the open-source projects, we further find that the vulnerabilities are either in their non-production (development) mode or fixed in their latest versions.
We show several real-world detected cases below.
\begin{lstlisting}[language=Java,escapechar=\%,caption={A real-world vulnerability about using constant salt and insufficient iteration count (We modified the code to make the codebase unidentifiable.)},captionpos=b,label={lst:real_case2}]
public class DesEncrypter{
  private static final byte[] SALT = {      // hard-coded salt (illustrative values)
      (byte)0xde, (byte)0x33, (byte)0x10, (byte)0x12};
  public DesEncrypter(final String passPhrase){
    initDesEncrypter(passPhrase);}
  private void initDesEncrypter(final String passPhrase){
    ...
    AlgorithmParameterSpec paramSpec = new PBEParameterSpec(SALT, 20); // 20 < 1000
    ...
  }
}
\end{lstlisting}
Listing~\ref{lst:real_case2} shows vulnerabilities of using a constant salt and an insufficient iteration count as PBE parameters. This case represents the most common vulnerable pattern, in which sensitive cryptographic materials (e.g., passwords, salts, IVs) are hard-coded in the initialization.
\begin{lstlisting}[language=Java,escapechar=\%,caption={A real-world vulnerability about insufficient entropy salts},captionpos=b,label={lst:real_iterative_salt}]
public String padding_salts(String salts){
  StringBuffer sb = new StringBuffer();
  for (int i = 0; i < 16; i++) {       // constant 16 reported at this line
    sb.append(salts);                  // the same variable appended repeatedly
  }
  String padded_salts = sb.toString();
  return padded_salts;
}
\end{lstlisting}
Listing~\ref{lst:real_iterative_salt} is a noteworthy real-world example. It introduces a vulnerability of using salts with insufficient entropy. When a random salt is iteratively assembled from the same variable, its value space is reduced significantly, which makes an exhaustive search attack feasible. Our analysis reports a constant number 16 at Line 3 involved in the construction of the salts. However, to accurately capture the insufficient entropy issue, symbolic execution is required.
\begin{lstlisting}[language=Java,escapechar=\%,caption={An example from CVE-2019-3795},captionpos=b,label={lst:real_casel}]
public SecureRandom getObject() throws Exception{
  SecureRandom rnd = SecureRandom.getInstance(algorithm);
  if(seed != null){
    rnd.setSeed(FileCopyUtils.copyToByteArray(seed.getInputStream()));
  }else{
    rnd.nextBytes(new byte[1]); //self-seeding
  }
  return rnd;
}
\end{lstlisting}
Listing~\ref{lst:real_casel} shows a detected vulnerability in the open-source project Spring Security, disclosed as CVE-2019-3795~\cite{cve_case}. This vulnerability appears in Spring Security versions 4.2.x before 4.2.12, 5.0.x before 5.0.12, and 5.1.x before 5.1.5. Although not involving a hard-coded seed, the \verb|SecureRandom| instance relies on an unreliable \verb|InputStream| at Line 4 as the \verb|seed|. Inspired by this real-world vulnerability, we apply a stricter rule for \verb|SecureRandom.setSeed| to avoid unreliable seeding. Only self-seeding and manual seeding by the method \verb|SecureRandom.generateSeed()| are considered secure. Self-seeding (secure) is automatically enforced if the API \verb|nextBytes| is called immediately after the \verb|SecureRandom| instantiation~\cite{securerandom}.
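For reference, the two seeding patterns we accept as secure can be sketched as follows:
\begin{lstlisting}[language=Java]
SecureRandom r1 = new SecureRandom();
r1.nextBytes(new byte[16]);       // self-seeding right after instantiation
SecureRandom r2 = new SecureRandom();
r2.setSeed(r2.generateSeed(16));  // manual seeding via generateSeed()
\end{lstlisting}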
\begin{lstlisting}[language=Java,escapechar=\%,caption={A real-world false positive case about TrustManager},captionpos=b,label={lst:false_positive1_real}]
public void checkClientTrusted(X509Certificate[] certs, String authType) throws CertificateException{
  throw new UnsupportedOperationException("Not supported yet.");
}
\end{lstlisting}
Listing~\ref{lst:false_positive1_real} shows a reported case of bypassing certificate verification. This case disables the certificate verification by simply throwing \verb|UnsupportedOperationException| for all certificates. This misuse, matching a vulnerable pattern, was reported; however, it is not enabled in the production code path, and hence is neither exploitable nor in need of remediation.
\subsection{Discussion}
We discuss the potential improvement and limitations of Parfait.
\smallskip
\noindent
\textbf{Potential Improvement.}
There are two potential improvements to fix the false-negative cases. First, a false negative could be caused by missing the summarization for the \verb|clinit| method. An example is shown in Listing~\ref{lst:false_negative1} in the Appendix. This deficiency derives from the fact that \verb|clinit| does not appear in Parfait's call graph. A fix for this issue could be updating the call graph construction to cover the \verb|clinit| of every class. Second, the false-negative case shown in Listing~\ref{lst:false_negative2} is caused by incompatible types between the captured source (i.e., \verb|String|) and the sensitive argument (i.e., \verb|int|). This corner case can be improved by checking the type compatibility through type casting in the Java language.
\smallskip
\noindent
\textbf{Limitations.}
Our cryptographic vulnerability detection still has limitations with handling path-sensitive cases and pointer issues. We show a path-sensitive false-positive case in Listing~\ref{lst:false_positive1} in the Appendix. Furthermore, another potential cause for false positives could be pointer issues. Due to the limitation of static analysis, there may be over-approximation in our call graph construction, which leads to potential false positives. However, path-sensitivity and pointer precision are too costly, in our experience, for large codebases. Our analysis is designed to scan large-scale industrial projects, therefore we accept the trade-off for better overall performance.
\section{Detection Methods and Implementation}
Our detection covers all the misuses shown in Table~\ref{tab:crypto_api_usage}. Its two scalability enablers are the layered framework of Parfait and the summarization mechanism in IFDS for handling callee methods.
\subsection{Detection Methods}\label{sec:dmethod}
The detection logic is similar to that of CryptoGuard, which maps the cryptographic API misuses to dataflow analysis problems. In terms of the specific detection methods, there are three groups.
\noindent
\textbf{Group 1: Inter-procedural Backward Dataflow Analysis.}
This group includes the API misuses determined by constant sources. Specifically, these are the APIs in Table~\ref{tab:crypto_api_usage} of the Java classes \verb|SecureRandom|, \verb|MessageDigest|, \verb|Cipher|, \verb|KeyStore|, \verb|SecretKeySpec|, \verb|PBEKeySpec|, \verb|PBEParameterSpec|, and \verb|IvParameterSpec|. We require an inter-procedural backward dataflow analysis to capture the constant sources of the API arguments. We apply different verifying rules to the collected constant sources according to the vulnerability types. The verifying rules include whether the source is a constant, whether it is a number less than 1000, and whether it matches a weak algorithm name (e.g., ``DES'').
\noindent
\textbf{Group 2: Intra-procedural Pattern Matching.}
The vulnerabilities related to \verb|TrustManager|, \verb|HostnameVerifier|, and \verb|SSLSocketFactory| in Table~\ref{tab:crypto_api_usage} belong to this group. These vulnerabilities often happen within one method that is responsible for authentication operations. We find them by intra-procedural pattern matching. Specifically, for \verb|HostnameVerifier|, we detect whether the return value of the method \verb|verify| is always ``True'' regardless of the verification. For \verb|TrustManager|, we detect three vulnerable patterns in the \verb|checkClientTrusted| and \verb|checkServerTrusted| methods: 1) missing verification behavior; 2) catching the verification exception without throwing it; 3) missing verification under a certain path. For \verb|SSLSocketFactory|, we perform intra-procedural pattern matching to check whether the \verb|HostNameVerifier.verify| method is called after the \verb|SSLSocketFactory| instance creation.
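As an illustration of the \verb|HostnameVerifier| pattern, the following hypothetical implementation returns ``True'' regardless of the verification and would be flagged:
\begin{lstlisting}[language=Java]
HostnameVerifier allowAll = new HostnameVerifier() {
    @Override
    public boolean verify(String hostname, SSLSession session) {
        return true; // accepts every host
    }
};
\end{lstlisting}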
\noindent
\textbf{Group 3: Sanitizer vs. Verifier.}
In cryptography operations, \verb|Random| is not strong enough~\cite{random}. However, it is unreasonable to report every \verb|Random| used in a program as a vulnerability. Therefore, we regard \verb|Random| as a verifier and \verb|SecureRandom| as a sanitizer for the traced arguments in group 1. Accordingly, we only report \verb|Random| instances that reach these cryptographic usages.
\subsection{Cryptographic Vulnerability Detection Implementation}\label{sec:implementation}
Supported by Parfait, we implement the inter-procedural flow-, context-, and field-sensitive backward dataflow analysis for cryptographic vulnerability detection. Next, we introduce several specific features of Parfait for scalability and good precision.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{figures/parfait_framework.png}
\caption{The inter-procedural analysis under Parfait's layered framework. This design is important to achieve the scalability of Parfait.}
\label{fig:parfait_framework}
\end{figure}
\noindent
\textbf{Layered Scheduler for Caller Methods.}
Parfait optimizes the analysis ensemble to improve scalability.
Figure~\ref{fig:parfait_framework} demonstrates how the backward analyses are broken down and assigned to different layers. The analyses are scheduled layer by layer. At each layer, a backward analysis ends at the entry point of the current method in one of three situations. First, a real bug is verified. Second, the potential bug is sanitized as no bug. Third, further analyses are required in its caller methods; these are scheduled at the next layer. In this way, the analyses requiring less time are performed first. It also avoids re-analyzing the parts shared by the detection traces of two potential vulnerabilities.
This layered framework effectively improves the efficiency of finding bugs.
\noindent
\textbf{Flow Functions in IFDS.}
There are several flow functions used to define the analysis. In our cryptographic vulnerability detection, they are:
\begin{itemize}
\item \verb|flow|: This function specifies the dataflow edges through ordinary non-call instructions. Specifically, it applies to the LLVM instructions \verb|ReturnInst|, \verb|LoadInst|, \verb|StoreInst|, and \verb|BitCastInst|.
\item \verb|phiFlow|: This function specifies the dataflow edges through the LLVM \verb|phi| instruction.
\item \verb|returnVal|: The function specifies the dataflow edges between the \verb|ReturnInst| of the callee method and its callsite. The summary edges of the callee method are queried at this point to handle the callee method.
\item \verb|passArgs|: The function specifies the dataflow edges between the arguments of the callee method and the parameters passed in its callsite.
\item \verb|callFlow|: The function handles the dataflow edges across a callsite without descending into the callee method. Most of the refinements happen here to handle callee methods whose implementations are unavailable.
\end{itemize}
The major difference between these flow functions and those of an ordinary taint analysis lies in the dataflow edges from constants. The cryptographic vulnerability detection discovers the edges flowing out from constants and refines them according to the five refinement insights, which does not happen in taint analysis. Furthermore, cryptographic vulnerability detection redefines the default dataflow edges in \verb|callFlow|. More details are in Section~\ref{sec:refinement}.
\noindent
\textbf{Summarization for Callee Methods.}
Another design improving the scalability is the summarization mechanism for the callee methods. After a method is explored, the summary edges for it are stored for future usage. Parfait exhaustively summarizes all methods in advance and queries the summary edges of the callee methods on demand. All the methods are summarized in a bottom-up manner according to the call graph, beginning from leaf methods to their callers. This design guarantees every method is only explored once. Hence, the re-exploration of callee methods is eliminated to avoid a complexity explosion.
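A minimal sketch of this bottom-up order is given below, assuming a hypothetical \lstinline|CallGraph|/\lstinline|Method| API; recursive call cycles would additionally require an SCC-based ordering in practice:
\begin{lstlisting}[language=Java]
void summarizeAll(CallGraph cg) {
    Set<Method> done = new HashSet<>();
    for (Method root : cg.roots()) {
        postOrder(cg, root, done);
    }
}
void postOrder(CallGraph cg, Method m, Set<Method> done) {
    if (!done.add(m)) return;             // each method is summarized once
    for (Method callee : cg.calleesOf(m)) {
        postOrder(cg, callee, done);      // leaf methods are summarized first
    }
    storeSummaryEdges(m);                 // queried later at callsites
}
\end{lstlisting}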
\begin{figure}
\centering
\includegraphics[width=.95\linewidth]{figures/refined_dataflow.png}
\caption{The false-positive reduction refinements represented in IFDS. It shows the dataflow propagation edges for three different types of method calls: 1) a virtual method call with a return value, 2) a static method call with a return value, and 3) a virtual method call without a return value. The top row shows the default propagation edges; the bottom row shows the refined ones.}
\label{fig:heuristics}
\end{figure}
\subsection{Pseudo-influences and Refined Analysis}\label{sec:refinement}
\textbf{Pseudo-influences.}
We use the backward dataflow analysis to capture the constants involved in constructing a security-critical value. When a constant is used to hard-code a security-critical value (e.g., a secret key or password), it may cause a vulnerability by exposing sensitive information. However, some constants have no security impact on the value; these are referred to as pseudo-influences, and static analysis alone cannot distinguish them from genuine influences. Reporting every captured constant as a dangerous source would lead to an extremely high false-positive rate. In CryptoGuard~\cite{rahaman2019cryptoguard}, the authors summarize five language-specific scenarios that use constants without resulting in hard-coded values. These scenarios include using constants as state indicators, resource identifiers, and bookkeeping indices to retrieve values. Contextually incompatible constants and constants in infeasible paths are also regarded as pseudo-influences.
\smallskip
\noindent
\textbf{Refined Dataflow Analysis.}
We refine our dataflow analysis to exclude these pseudo-influences and thus achieve good precision.
According to the refinement insights from CryptoGuard, we define our pseudo-influence exclusion rules in the context of IFDS algorithms and LLVM IR instructions. We apply the refinement rules in the \verb|callFlow| function of our IFDS dataflow analysis because most pseudo-influences appear as the arguments of a method call. For example, the pseudo-influence \verb|"UTF-8"| is the argument of the method \verb|<String: byte[] getBytes(String)>|.
In the form of IFDS, we describe the rules as graph reachability between the data variables given an LLVM instruction. As shown in Fig.~\ref{fig:heuristics}, the dataflow edges are refined according to the method signature we obtain from the LLVM instruction. Specifically, there are three types of call instructions, and we apply different dataflow propagation rules to them. First, if the call instruction has a return value and invokes an instance method that belongs to an object, we change the default dataflow propagation edges as described in Fig.~\ref{fig:heuristics} (a). The edge from the argument to the return value is eliminated because the argument is likely to be a pseudo-influence. Second, if the call instruction has a return value and invokes a static method without an associated object, we also eliminate the edge from its argument to the return value to avoid pseudo-influences, as shown in Fig.~\ref{fig:heuristics} (b). Finally, if the call instruction does not have a return value and belongs to an object, we add a dataflow edge from its argument to the object holder. Meanwhile, we remove the edge connecting the object holder before and after this call instruction. This allows us to stop tracing the object and instead trace the argument that influences it; an example is given in Fig.~\ref{fig:heuristics} (c).
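To make the three cases concrete, each rule is paired below with a typical Java call; the variable names are illustrative:
\begin{lstlisting}[language=Java]
// (a) Virtual call with a return value: "UTF-8" is a state indicator,
// so its edge to the return value is dropped; pwd still propagates.
byte[] bytes = pwd.getBytes("UTF-8");
// (b) Static call with a return value: "key.path" is a resource
// identifier, so its edge to the return value is dropped.
String path = System.getProperty("key.path");
// (c) Virtual call without a return value: tracing switches from the
// builder object sb to the argument flowing into it.
sb.append(secretPart);
\end{lstlisting}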
\section{A step-by-step Illustration of Our IFDS analysis}
We give a step-by-step breakdown to show how a vulnerability is captured by our IFDS analysis implementation in Fig.~\ref{fig:steps}. Fig.~\ref{fig:steps} (a) gives a simple example of a detected vulnerability. The analysis starts from Line 8 in the code snippet. A constant ``defaultkey'' at Line 6 is captured by our analysis. The right part shows how the constant ``defaultkey'' is connected to the variable \verb|keyBytes| by dataflow. Fig.~\ref{fig:steps} (b) is the step-by-step process to illustrate how the dataflow propagation is handled by the flow functions (see Section~\ref{sec:implementation}) of our IFDS analysis implementation.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/step-by-step-ifds.png}
\caption{A step-by-step breakdown of a vulnerability detected by our IFDS analysis implementation. (a) shows a vulnerable code snippet with the dataflow propagation graph captured by the IFDS analysis. The right side of (a) is the dataflow propagation graph obtained by the IFDS analysis. At each program line, several dots represent the data facts (variables) at that program point. An edge from dot \lstinline|v1| to dot \lstinline|v2| means there is a dataflow edge from \lstinline|v1| to \lstinline|v2|. The numbering in circles corresponds to the steps in (b), which process the dataflow and draw an edge in the graph. The red dots form a detected dataflow path from the insecure constant to the targeted cryptographic API. (b) shows the steps of our IFDS analysis of (a) from the implementation perspective. \lstinline|flow|, \lstinline|retVal|, \lstinline|callFlow|, etc. are the flow functions defined in Section~\ref{sec:implementation}.}
\label{fig:steps}
\end{figure}
\section{Ordinary Iterative Analysis vs. IFDS analysis}
Fig.~\ref{fig:ifds} shows the difference between an ordinary iterative analysis and an IFDS analysis. Fig.~\ref{fig:ifds} (a) is a code snippet. Fig.~\ref{fig:ifds} (b) and (c) are the diagrams of an ordinary dataflow analysis and an IFDS analysis, respectively. As shown in Figure~\ref{fig:ifds}, the ordinary analysis maintains a flow set during the iterative analysis. The IFDS framework reduces the analysis to a graph reachability problem and guarantees that the analysis can be finished in polynomial time.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{figures/IFDS_ordinary.png}
\caption{The comparison of the ordinary iterative analysis and IFDS analysis. (a) is a code snippet. (b) shows the ordinary flowset-based analysis, which collects and updates a flow set. The code blocks with black edges in (b) represent the control flow graph of (a). The bracket (i.e., \{ \}) between the code blocks represents the flowset at that program point. The flowset keeps track of all the data facts (variables) that can propagate to the entry point of our backward dataflow analysis. (c) shows the dataflow propagation graph obtained by IFDS analysis, which builds edges and then summarizes edges during the analysis. $\Lambda$ represents the empty set the backward analysis starts from. Here, we use it as an alert identifier: if a dangerous source (e.g., a hardcoded key) is connected to $\Lambda$, we identify it as a vulnerability.}
\label{fig:ifds}
\end{figure}
\section{False positive cases in CryptoAPI-Bench}
\begin{lstlisting}[language=Java,escapechar=\%,caption={A false positive caused by path sensitivity},captionpos=b,label={lst:false_positive1}]
String defaultKey = "defaultkey";
int choice = 2;
byte[] keyBytes = defaultKey.getBytes();
//keyBytes-->key material after phiFlow
if(choice>1){
  //nothing-->key material
  SecureRandom random = new SecureRandom();
  keyBytes = String.valueOf(random.ints()).getBytes();
}
SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES");
\end{lstlisting}
\begin{lstlisting}[language=Java,escapechar=\%,caption={A false negative case caused by incompatible types},captionpos=b,label={lst:false_negative2}]
public class LessThan1000IterationPBEABICase2 {
  public static final String DEFAULT_COUNT = "20";
  private static char[] COUNT;
  private static char[] count;
  public static void main(String[] args){ //Bug condition: "20"<1000?
LessThan1000IterationPBEABICase2 lt = new LessThan1000IterationPBEABICase2();
go2(); //"20"-->PBE iteration
go3(); //this.COUNT-->PBE iteration
lt.key2(); //this.count-->PBE iteration
}
private static void go2(){
COUNT = DEFAULT_COUNT.toCharArray();
}
private static void go3(){
count = COUNT;
}
public void key2(){ //this.count-->PBE iteration
...
pbeParamSpec = new PBEParameterSpec(salt, Integer.parseInt(String.valueOf(count)));
}
}
\end{lstlisting}
\begin{lstlisting}[language=Java,escapechar=\%,caption={A test case considered non-vulnerable by Parfait but vulnerable by CryptoGuard. The backward analysis in Parfait terminates at Line 11 and leaves this.crypto.defaultKey as a variable because this method has no caller.},captionpos=b,label={lst:no_caller}]
public class PredictableCryptographicKeyABSCase1 {
Crypto crypto;
public PredictableCryptographicKeyABSCase1() throws Exception {
String passKey = PredictableCryptographicKeyABSCase1.getKey("pass.key");
if(passKey == null) {
crypto = new Crypto("defaultkey");
}
}
//this.crypto.defaultKey-->secret key; no caller for encryptPass, terminate
public byte[] encryptPass(String pass, String src) throws Exception {
String keyStr = getKey(src);
return crypto.method1(pass, keyStr);
//keyStr-->secret key; this.crypto.defaultKey-->secret key
}
public static String getKey(String s) {
return System.getProperty(s);
}
}
class Crypto {
Cipher cipher;
String algoSpec = "AES/CBC/PKCS5Padding";
String algo = "AES";
String defaultKey;
public Crypto(String defkey) throws NoSuchPaddingException, NoSuchAlgorithmException {
cipher = Cipher.getInstance(algoSpec);
defaultKey = defkey;
}
//key-->secret key; this.defaultKey-->secret key
public byte[] method1(String txt, String key) throws Exception {
  if(key.isEmpty()){
    key = defaultKey;
  }
  byte[] keyBytes = key.getBytes("UTF-8");
  byte[] txtBytes = txt.getBytes();
  SecretKeySpec keySpec = new SecretKeySpec(keyBytes, algo);
cipher.init(Cipher.ENCRYPT_MODE,keySpec);
return cipher.doFinal(txtBytes);
}
}
\end{lstlisting}
\begin{lstlisting}[language=Java,escapechar=\%,caption={A false negative case caused by the missing clinit summarization},captionpos=b,label={lst:false_negative1}]
public class PredictablePBEPasswordABICase2 {
public static char[] DEFAULT_ENCRYPT_KEY = "hardcoded.password".toCharArray(); // illustrative value; assigned in <clinit>
private static char[] encryptKey;
...
public static void main(String [] args) { //this.DEFAULT_ENCRYPT_KEY-->PBE password
...
}
}
\end{lstlisting}
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
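For illustration, the generated markup has the following form; the
concept entry, significance value, and keywords below are
placeholders, so use the commands produced by the CCS tool for your
own work:
\begin{verbatim}
\begin{CCSXML}
<ccs2012>
 <concept>
  <concept_id>10010520.10010553.10010562</concept_id>
  <concept_desc>Computer systems organization~Embedded
   systems</concept_desc>
  <concept_significance>500</concept_significance>
 </concept>
</ccs2012>
\end{CCSXML}

\ccsdesc[500]{Computer systems organization~Embedded systems}

\keywords{embedded systems, templates}
\end{verbatim}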
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what's in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption -- their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
\section{Introduction}
Despite early discoveries of OB stars and molecular gas in the
outer Milky Way (MW; e.g., \citealt{fich84, brand88}), not much attention
had been paid to molecular gas
in galaxy outskirts primarily because there was a notion that
virtually no star formation occurs there. This notion was altered
entirely by the {\it Galaxy Evolution Explorer} ({\it GALEX})\index{{\it GALEX}}, which
revealed that ultraviolet emission often extends far beyond the edges of
optical disks (namely, extended ultraviolet disks, or XUV
disks\index{XUV disk}; \citealt{Thilker:2005ff, Gil-de-Paz:2007eu}). The UV
emission suggests the presence of massive stars, at least B stars, and
hence that there was recent star formation within the lifetime of B stars
($\sim 100$\,Myr). These young stars must have been born nearby,
perhaps requiring unnoticed molecular gas and clouds somewhere in the
extended galaxy outskirts. Average gas densities there are extremely
low compared to typical star-forming regions within the MW. Understanding the
conditions of parental molecular gas in such an extreme condition
is vital to expand our knowledge of the physics of star formation.
We need to understand the internal properties of molecular clouds,
including the atomic-to-molecular gas phase transition, the
distribution of molecular clouds, and the external environment in
galaxy outskirts.
A blind search for molecular gas across the large outskirts of nearby galaxies
has been difficult due to the limited capabilities of existing facilities.
The Atacama Large Millimeter/submillimeter Array (ALMA) improved the sensitivity remarkably,
but even ALMA would need to invest hours to days to carry out a large areal search for molecular gas over extended disks.
This review summarizes the current knowledge on molecular gas and star formation
in the outskirts, but this research field is still in a phase of discovery.
The space to explore is large, and more systematic understanding will become possible
with future observations.
Studies of molecular gas in the outskirts will also reveal the yet
unknown physical properties of the interstellar medium (ISM) in the
outskirts. Most observational tools were developed and calibrated in
the inner parts of galactic disks and may not be applicable as they
are to the outskirts. Many studies are subject to {\it
systematic biases}, especially when molecular gas in the outskirts
is compared with inner disks. For example, the rotational
transition of carbon monoxide (CO) is often used to measure the mass
of molecular gas in normal galaxies; however, its presence and
excitation conditions depend on the metal abundance, stellar radiation
field, internal volume and column densities, and kinetic temperature,
all of which may change in the outskirts.
In this review, we start from a summary of how the ISM evolves in the
inner parts of the MW and nearby galaxies with an emphasis on
molecular gas (Sect.~\ref{sec:inout}). We then discuss the observational methods, including
the equations needed to plan for a future observational search of
molecular gas with a radio telescope (Sect.~\ref{sec:clouds}). We explain the
potential effects of applying these equations under the extreme
conditions in galaxy outskirts, which may cause systematic biases when the ISM is
compared between galaxies' inner parts and outskirts (Sect.~\ref{sec:ISMextreme}).
Although not many observations have been carried out in galaxy outskirts,
we summarize the current state of molecular gas observations in spiral (Sect.~\ref{sec:disks})
and elliptical galaxies (Sect.~\ref{sec:ellipticals}) and in galaxy groups and clusters (Sect.~\ref{sec:groups}).
We finish the review with possible future directions
(Sect.~\ref{sec:future}). The term ``outskirts'' is abstract and has
been used differently in different contexts. In this review we use
this term for the area beyond the optical radius of a galaxy, e.g.,
beyond $r_{25}$, which is the radius where the $B$-band surface
brightness of a galaxy falls to $25\,\rm mag\, arcsec^{-2}$. We should,
however, note that in some circumstances $r_{25}$ is not defined well,
and we have to rely on a loose definition of ``outskirts''.
The measurements of gas properties, such as molecular mass, often depend on
assumptions about the gas properties themselves.
However, galaxy outskirts are an extreme environment, and the
assumptions based on previous measurements in inner disks may not be appropriate.
This problem needs to be resolved iteratively by adjusting the
assumptions to match future observations. We therefore spend a
number of pages on the methods of basic measurements
(Sect.~\ref{sec:clouds}), so that the equations and assumptions can be
revisited easily in future studies. Readers who already understand the
basic methods and assumptions may skip Sect.~\ref{sec:clouds}
entirely and move from Sect.~\ref{sec:inout} to Sect.~\ref{sec:disks}.
\section{Molecular Gas from the Inner to the Outer Regions of Galaxies}\label{sec:inout}
The most abundant molecule H$_2$\index{H$_{2}$ molecule} does not have
significant emission at the cold temperatures that are typical in
molecular clouds ($<30$\,K). Hence, the emission from CO\index{CO molecule}, the
second-most abundant molecule, is commonly used to trace molecular gas.
Molecular gas is typically concentrated toward the centres of galaxies and
its surface density decreases with galactic radius (\citealt{Young:1991aa, Wong:2002lr}).
The gas phase changes from mostly molecular in the central regions
to more atomic in the outer regions (\citealt{Sofue:1995fk, Koda:2016aa, Sofue:2016aa}).
These trends apparently continue into the outskirts, as H{\sc i} disks often
extend beyond the edges of optical disks (\citealt{Bosma:1981aa}).
We may infer the properties of gas in the outskirts by extending our knowledge from
the inner disks.
Recently, \citet{Koda:2016aa} concluded that the H{\sc i}-H$_2$ gas phase
transition\index{H{\sc i}-H$_2$ gas phase transition} between spiral arm and
interarm regions changes as a function of radius in the MW
and other nearby galaxies. In the molecule-dominant inner parts, the
gas remains highly molecular as it moves from an interarm region into
a spiral arm and back into the next interarm region. Stellar feedback
does not dissociate molecules much, and perhaps the coagulation and
fragmentation of molecular clouds dominate the evolution of the ISM at
these radii. The trend differs in the outer regions where the gas
phase is atomic on average. The H{\sc i} gas is converted to H$_2$ in spiral
arm compression and goes back into the H{\sc i} phase after passing spiral
arms. These different regimes of ISM evolution are also seen in the
LMC, M33, and M51, depending on the dominant gas phase there
(\citealt{Heyer:1998kx, Engargiola:2003jo, Koda:2009wd, Fukui:2009lr, Tosaki:2011fk, Colombo:2014uq}).
Even in regions of relatively low gas densities, a natural fluctuation
may occasionally lead to gravitational collapse into molecular gas and clouds.
For example, many low-density dwarf galaxies show some molecular gas and star formation.
However, some stimulus, such as spiral arm compression, seems necessary to accelerate
the H{\sc i} to H$_2$ phase transition. In addition to such internal stimuli, there are
external stimuli, such as interactions with satellite galaxies,
which may also trigger the phase transition into molecular gas in the outskirts.
\section{Molecular ISM Masses: Basic Equations} \label{sec:clouds}
The molecular ISM is typically cold and is observed at radio wavelengths.
To search for the molecular ISM in galaxy outskirts one needs to be familiar
with conventional notations in radio astronomy.
Here we summarize the basic equations and assumptions that
have been used in studies of the molecular ISM in traditional environments, such as
in the MW's inner disk. In particular, we focus on the $J=1-0, 2-1$ rotational transitions
of CO molecules and dust continuum emission at
millimetre/sub-millimetre wavelengths.
The molecular ISM in galaxy outskirts may have different properties from those in the inner disks.
We discuss how expected differences could affect the measurements with
CO $J=1-0, 2-1$, and dust continuum emission.
\subsection{Brightness Temperature, Flux Density and Luminosity}
The definitions of brightness temperature $T_{\nu}$\index{brightness
temperature}, brightness $I_{\nu}$\index{brightness}, flux density
$S_{\nu}$\index{flux density}, and luminosity
$L_{\nu}$\index{luminosity} are often confusing.
It is useful to go back to the amount of energy ($dE$) that passes through an aperture (e.g., detector, or sometimes the $4\pi$ sky area),
\begin{equation}
dE = I_{\nu} d\Omega_{\rm B} dA dt d\nu = \left\{ \left[ I_{\nu} d\Omega_{\rm B} \right] dA \right\} dt d\nu = \left\{ S_{\nu} dA \right\} dt d\nu = L_{\nu} dt d\nu, \label{eq:energy}
\end{equation}
where $S_{\nu} = \int I_{\nu} d\Omega_{\rm B}$ and $L_{\nu} = \int \int I_{\nu} d\Omega_{\rm B} dA$ (see Fig.~\ref{fig:rad}).
The $dt$ and $d\nu$ denote unit time and frequency, respectively.
The $d\Omega_{\rm B}$ is the solid angle of the source; it is related to the physical area of the source by $dB=D^2 d\Omega_{\rm B}$, where $D$ is the distance.
Similarly, $dA=D^2 d\Omega_{\rm A}$ using the solid angle of the aperture area seen from the source $d\Omega_{\rm A}$.
The aperture $dA$ can be a portion of the $4\pi$ sky sphere as it is seen from the source and is $4\pi D^2$
when integrated over the entire sphere to calculate luminosity.
The $dA$ could also represent an area of a detector (or a pixel of a detector).
The flux density $S_{\nu}$ is often expressed in the unit of ``Jansky (Jy)'', which is equivalent to
``$10^{-23}\,\rm erg\, s^{-1}\,cm^{-2}\,Hz^{-1}$''.
An integration of $I_{\nu}$ over a solid angle $d\Omega_{\rm B}$
(e.g., telescope beam area or synthesized beam area)
provides $S_{\nu}$. In reverse, $I_{\nu}$ is $S_{\nu}$ divided by the solid angle $\Omega_{\rm B}$ [$= \int d\Omega_{\rm B}$].
Therefore, the brightness $I_{\nu}$ [$= S_{\nu}/ \Omega_{\rm B}$] is expressed in the unit of ``Jy/beam''.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.7]{fig1.eps}
\end{center}
\caption{Definitions of parameters. The rays emitted from the source with the area $dB = D^2 d\Omega_{\rm B}$
pass through the solid angle $d\Omega_{\rm A}$ (or the area $D^2 d\Omega_{\rm A}$) at the distance of $D$}
\label{fig:rad}
\end{figure}
The brightness temperature $T_{\nu}$ is the temperature that makes the black body function
$B_{\nu}(T_{\nu})$ have the same brightness as the observed $I_{\nu}$
at a frequency $\nu$ (i.e., $I_{\nu}=B_{\nu}(T_{\nu})$), even when
$I_{\nu}$ does not follow the black body law! In the Rayleigh-Jeans
regime ($h\nu \ll kT$),
\begin{equation}
T_{\nu}= \frac{c^2 }{2 \nu^2 k} I_{\nu}= \frac{c^2 }{2 \nu^2 k} \left( \frac{S_{\nu}}{\Omega_{\rm B}} \right). \label{eq:t_s}
\end{equation}
The $T_{\nu}$ characterizes radiation and is {\it not necessarily} a physical temperature of an emitting body.
However, if the emitting body is an optically thick black body and is filling the beam $\Omega_{\rm B}$,
$T_{\nu}$ is equivalent to the physical temperature of the emitting
body when the Rayleigh-Jeans criterion is satisfied.
The $T_{\nu}$ is measured in ``Kelvin''. This unit is convenient in radio astronomy since radio single-dish observations
calibrate a flux scale in the Kelvin unit using hot and cold loads of known temperatures.
Giant molecular clouds (GMCs) in the MW have a typical temperature of
$\sim$10\,K (\citealt{Scoville:1987vo}), and the black body radiation $B_{\nu}(T)$ at this temperature
peaks at $\nu \sim 588\,\rm GHz$ ($\sim 510\mu\rm m$).
Therefore, most radio observations of molecular gas are in the Rayleigh-Jeans range.
A numerical expression of Eq.~(\ref{eq:t_s}) is useful in practice,
\begin{equation}
\left( \frac{T_{\nu}}{\rm K} \right)= 13.6 \left( \frac{\lambda}{\rm mm} \right)^2 \left( \frac{S_{\nu}}{\rm Jy} \right) \left( \frac{b_{\rm maj} \times b_{\rm min}}{\rm 1" \times 1"} \right)^{-1}.\label{eq:numRJ}
\end{equation}
The last term corresponds to $\Omega_{\rm B}$ in Eq.~(\ref{eq:t_s}) and is calculated as
\begin{equation}
\Omega_{\rm B} = \frac{\pi b_{\rm maj} b_{\rm min}}{4\ln 2} \sim 1.133 b_{\rm maj} b_{\rm min},
\end{equation}
which represents the area of interest (e.g., source size, telescope beam) as a 2-d Gaussian with the major and minor axis FWHM diameters of $b_{\rm maj}$ and $b_{\rm min}$, respectively.
Equation~(\ref{eq:numRJ}) is sometimes written with brightness as
\begin{equation}
\left( \frac{T_{\nu}}{\rm K} \right)= 13.6 \left( \frac{\lambda}{\rm mm} \right)^2 \left( \frac{I_{\nu}}{\rm Jy/beam} \right) \left( \frac{b_{\rm maj} \times b_{\rm min}}{\rm 1" \times 1"} \right)^{-1},
\end{equation}
where in this case the last term is for the unit conversion from
``beam" into arcsec$^2$, and $b_{\rm maj}$ and $b_{\rm min}$ must refer
to the telescope beam or synthesized beam.
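As a concrete check of Eq.~(\ref{eq:numRJ}), the conversion can be
scripted in a few lines. The following is a minimal Python sketch
(not part of any standard package), with illustrative input values:
\begin{verbatim}
# Rayleigh-Jeans brightness temperature from flux density in a
# Gaussian beam, following the numerical expression above.
def brightness_temperature(s_jy, lam_mm, bmaj_arcsec, bmin_arcsec):
    """T_nu [K] for s_jy [Jy] in a bmaj x bmin [arcsec] beam
    at wavelength lam_mm [mm]."""
    return 13.6 * lam_mm**2 * s_jy / (bmaj_arcsec * bmin_arcsec)

# e.g., 0.1 Jy of CO(1-0) emission (2.6 mm) in a 2" x 2" beam
print(brightness_temperature(0.1, 2.6, 2.0, 2.0))  # ~2.3 K
\end{verbatim}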
\subsection{Observations of the Molecular ISM using CO Line Emission}
Molecular hydrogen (H$_2$) is the principal component of the ISM at a high density, $>100\rm\, cm^{-3}$.
This molecule has virtually no emission at cold temperatures.
Hence, CO emission is typically used to trace the molecular ISM.
Conventionally, the molecular ISM mass $M_{\rm mol}$\index{molecular
mass} includes the masses of helium and other elements. $M_{\rm
mol}=1.36\,M_{\rm H_2}$ is used to convert the H$_2$ mass into $M_{\rm
mol}$.
\subsubsection{CO($J=1-0$) Line Emission}\index{CO($J=1-0$) line emission}
The fundamental CO rotational transition $J=1-0$ at $\nu_{\rm CO}(1-0)=115.271208$\,GHz
has been used to measure the molecular ISM mass since the 1980s.
For simplicity we omit ``CO($1-0$)'' in subscript and instead write ``10''. Hence, $\nu_{\rm CO}(1-0)=\nu_{10}$.
The dynamical masses of GMCs and their CO($1-0$) luminosities are linearly correlated
in the MW's inner disk (\citealt{Scoville:1987lp, solomon87}).
If a great majority of molecules reside in GMCs, the CO($1-0$) luminosity $L_{\rm 10}^{\prime}$
integrated over an area (i.e., an ensemble of GMCs in the area) can be linearly translated to the molecular mass $M_{\rm mol}$,
\begin{equation}
M_{\rm mol}=\alpha_{\rm 10} L_{\rm 10}^{\prime},
\end{equation}
where $\alpha_{10}$ (or $X_{\rm CO}$\index{$X_{\rm CO}$}; see below)
is a mass-to-light ratio and is called the CO-to-H$_2$ conversion
factor\index{conversion factor} (\citealt{Bolatto:2013ys}).
By convention we define $L_{10}^{\prime}$, instead of $L_{10}$ (Eq.~\ref{eq:energy}).
With the CO($1-0$) brightness temperature $T_{10}$ (instead of $I_{\nu}$ or $I_{10}$),
velocity width $d v$ (instead of frequency width $d\nu$), and beam area in physical scale $ dB= D^2 d\Omega_{\rm B}$,
it is defined as
\begin{equation}
L_{10}^{\prime} \equiv \int \int T_{10}dv dB= \frac{c^2 }{2 \nu_{10}^2 k} \left[ \int S_{10} dv \right] D^2 ,\label{eq:lum_s10}
\end{equation}
where we used Eq.~(\ref{eq:t_s}) for $T_{10}$.
The molecular mass is
\begin{equation}
M_{\rm mol} = \alpha_{\rm 10} \frac{c^2}{2 \nu_{10}^2 k } \left[ \int S_{10} dv \right] D^2. \label{eq:mass_s10}
\end{equation}
Numerically, this can be expressed as
\begin{equation}
\left( \frac{M_{\rm mol}}{M_{\odot}} \right) = 1.1 \times 10^{4} \left( \frac{\alpha_{\rm 10}}{4.3\,M_{\odot}\,{\rm pc^{-2} [K\cdot km/s]}^{-1}} \right)\left( \frac{\int S_{10}dv}{\rm Jy \cdot km/s} \right) \left( \frac{D}{\rm Mpc}\right)^2.\label{eq:m_s10_num}
\end{equation}
Note that $S_{10}$ [$=\int I_{10}d\Omega_{\rm B}$] is an integration over an area of interest (or summation over all pixels within the area).
The $\alpha_{\rm 10}=4.3\,M_{\odot}\,{\rm pc^{-2}\,[K\cdot km/s]^{-1}}$ corresponds to the conversion factor of
$X_{\rm CO}=2.0\times 10^{20}\rm \, cm^{-2}\,[K\cdot km/s]^{-1}$ multiplied by the factor of 1.36 to account for
the masses of helium and other elements.
Note that $\alpha_{\rm 10}$ includes helium, while $X_{\rm CO}$ does not.
The calibration of $\alpha_{\rm 10}$ (or $X_{\rm CO}$) is discussed in \citet{Bolatto:2013ys}.
A typical GMC in the MW has a mass of $4\times10^5\,M_{\odot}$ and $dv=8.9\,\rm km/s$ (FWHM)
(\citealt{Scoville:1987vo}), which is $\int S_{10}dv\sim1.5\,\rm Jy\,km/s$ or $S_{10}\sim 170\,\rm mJy$ at $D=5\,\rm Mpc$.
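For reference, Eq.~(\ref{eq:m_s10_num}) and the typical-GMC example
above can be reproduced with a short Python sketch; the default
$\alpha_{10}$ is the Galactic value assumed in this section:
\begin{verbatim}
# Molecular mass from integrated CO(1-0) flux, following the
# numerical expression above.
def mmol_from_co10(sco_dv_jykms, d_mpc, alpha10=4.3):
    """M_mol [Msun] from the CO(1-0) flux [Jy km/s] at d_mpc [Mpc]."""
    return 1.1e4 * (alpha10 / 4.3) * sco_dv_jykms * d_mpc**2

# The typical Milky Way GMC of the text: 1.5 Jy km/s at 5 Mpc
print(mmol_from_co10(1.5, 5.0))  # ~4e5 Msun
\end{verbatim}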
\subsubsection{CO($J=2-1$) Line Emission}\index{CO($J=2-1$) line emission}
The CO($J=2-1$) emission (230.538\,GHz) is also useful for a rough estimation of molecular mass
though an excitation condition may play a role (see below).
We can redefine Eq.~(\ref{eq:mass_s10}) for CO($2-1$) by replacing the subscripts from 10 to 21
and using a new CO($2-1$)-to-H$_2$ conversion factor $\alpha_{21} \equiv \alpha_{10}/R_{\rm 21/10}$,
where $R_{\rm 21/10} [\equiv T_{21}/T_{10}]$ is the CO $J=2-1/1-0$ line ratio in brightness temperature.
In practice, $\alpha_{10}$ and $R_{\rm 21/10}$ are carried over in the use of CO($J=2-1$),
as these are the parameters that have been measured.
Equation~(\ref{eq:mass_s10}) is now
\begin{equation}
M_{\rm mol} = \left( \frac{\alpha_{\rm 10}}{R_{\rm 21/10}} \right) \frac{c^2}{2 \nu_{21}^2 k} \left[ \int S_{21}dv \right] D^2. \label{eq:mass_s21}
\end{equation}
A numerical evaluation gives
\begin{equation}
\left( \frac{M_{\rm mol}}{M_{\odot}} \right) = 3.8 \times 10^{3} \left( \frac{\alpha_{\rm 10}}{4.3\,M_{\odot}\,{\rm pc^{-2} [K\cdot km/s]}^{-1}} \right) \left( \frac{R_{21/10}}{0.7} \right)^{-1} \left( \frac{ \int S_{21}dv}{\rm Jy \cdot km/s} \right) \left( \frac{D}{\rm Mpc}\right)^2.\label{eq:m_s21_num}
\end{equation}
The typical GMC with $4\times10^5\,M_{\odot}$ and $dv=8.9\,\rm
km/s$ has $\int S_{21}dv\sim4.2\,\rm Jy \, km/s$ or $S_{21}\sim
470\,\rm mJy$ at $D=5\,\rm Mpc$.
Note $S_{21}>S_{10}$ for the same GMC
because $S_{21}/S_{10} = (\nu_{21}/ \nu_{10})^2 T_{21}/ T_{10}= (\nu_{21}/
\nu_{10})^2 R_{21/10}\sim2.8$ from Eq.~(\ref{eq:t_s}),
where the $(\nu_{21}/ \nu_{10})^2$ term arises from two facts:
at the higher frequency,
(a) each photon carries twice the energy, and
(b) there are two times more photons in each frequency interval $d\nu$,
which is in the denominator of the definition of flux density $S$.
Empirically, $R_{\rm 21/10}\sim 0.7 $ on average in the MW (\citealt{Sakamoto:1997ys, Hasegawa:1997lr}),
which is consistent with a theoretical explanation under the conditions of the MW disk
(\citealt{Scoville:1974yu, Goldreich:1974jh}; see Sect.~\ref{sec:ISMextreme}).
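The companion calculation for Eq.~(\ref{eq:m_s21_num}) is a short
extension of the sketch above; the flux-density ratio of the two
lines follows directly from Eq.~(\ref{eq:t_s}):
\begin{verbatim}
# Molecular mass from integrated CO(2-1) flux, carrying alpha_10
# and R_21/10 explicitly as in the text.
def mmol_from_co21(s21_dv_jykms, d_mpc, alpha10=4.3, r21=0.7):
    """M_mol [Msun] from the CO(2-1) flux [Jy km/s] at d_mpc [Mpc]."""
    return 3.8e3 * (alpha10 / 4.3) * (0.7 / r21) \
        * s21_dv_jykms * d_mpc**2

print(mmol_from_co21(4.2, 5.0))      # ~4e5 Msun, the same typical GMC
print((230.538 / 115.271)**2 * 0.7)  # S21/S10 ~ 2.8 for R_21/10 = 0.7
\end{verbatim}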
\subsection{Observations of the Molecular ISM using Dust Continuum Emission}
Continuum emission from dust provides an alternative means for ISM mass measurement.
Dust is mixed in the gas phase ISM, and its emission at millimetre/submillimetre waves
correlates well with the fluxes of both atomic gas (H{\sc i} 21\,cm emission) and molecular gas (CO emission).
\citet{Scoville:2016aa} discussed the usage and calibration of dust emission for ISM mass
measurement.
We briefly summarize the basic equations, whose normalization will be adjusted with an empirical
fitting in the end.
The radiative transfer equation gives the brightness of dust emission
\begin{equation}
I_{\nu}=(1-e^{-\tau_{\nu}}) B_{\nu}(T_{\rm d})
\end{equation}
with the black body radiation $B_{\nu}(T_{\rm d})$ at the dust temperature $T_{\rm d}$ and the optical depth $\tau_{\nu}$.
The flux density of dust is an integration:
\begin{equation}
S_{\nu} = \int (1-e^{-\tau_{\nu}}) B_{\nu}(T_{\rm d}) d\Omega_{\rm B} = (1-e^{-\tau_{\nu}}) B_{\nu}(T_{\rm d}) \Omega_{\rm B},\label{eq:dustfluxdensity}
\end{equation}
where $B_{\nu}$ and $\tau_{\nu}$ are assumed constant within $\Omega_{\rm B}$ [$= \int d\Omega_{\rm B}$].
When the integration is over the beam area, $S_{\nu}$ is the flux density within the beam,
and $(S_{\nu}/\Omega_{\rm B})$, from Eq.~(\ref{eq:dustfluxdensity}), is in Jy/beam.
An integration of $S_{\nu}$ over the entire sky area at the distance of $D$ (i.e., $\int dA=D^2 \int_{4 \pi} d\Omega_{\rm A} = 4 \pi D^2$) gives the luminosity
\begin{eqnarray}
L_{\nu} &=& \int (1-e^{-\tau_{\nu}}) B_{\nu}(T_{\rm d}) \Omega_{\rm B} dA = (1-e^{-\tau_{\nu}}) B_{\nu}(T_{\rm d}) \Omega_{\rm B} 4\pi D^2 \\
&\approx& 4 \pi \tau_{\nu} B_{\nu}(T_{\rm d}) D^2 \Omega_{\rm B} =
4 \pi \kappa_{\nu} \Sigma_{\rm d} B_{\nu}(T_{\rm d}) D^2 \Omega_{\rm B} =4 \pi \kappa_{\nu} M_{\rm d} B_{\nu}(T_{\rm d}). \label{eq:dustlum}
\end{eqnarray}
The dust is optically thin at mm/sub-mm wavelengths, and we used
$(1-e^{-\tau_{\nu}}) \sim \tau_{\nu} = \kappa_{\nu} \Sigma_{\rm d}$,
where $\kappa_{\nu}$ and $\Sigma_{\rm d}$ are the absorption coefficient and surface density of dust.
The dust mass within the beam is $M_{\rm d} = \Sigma_{\rm d} D^2 \Omega_{\rm B}$.
Obviously, the dust continuum luminosity depends on the dust properties (e.g., compositions and
size distribution; via $\kappa_{\nu}$),
amount ($M_{\rm d}$), and temperature ($T_{\rm d}$).
Equation (\ref{eq:dustlum}) gives the mass-to-light ratio for
dust\index{dust mass-to-light ratio}
\begin{equation}
\frac{M_{\rm d}}{L_{\nu}} = \frac{1}{4 \pi \kappa_{\nu} B_{\nu}(T_{\rm d})}.\label{eq:dustml1}
\end{equation}
We convert $M_{\rm d}$ into gas mass, $M_{\rm gas}=\delta_{\rm GDR}M_{\rm d}$,
with the gas-to-dust ratio $\delta_{\rm GDR}$.
By re-defining the dust absorption coefficient $\kappa_{\nu}^{\prime} \equiv \kappa_{\nu}/\delta_{\rm GDR}$
(the absorption coefficient per unit total mass of gas), the gas mass-to-dust continuum flux
ratio $\gamma_{\nu}$ at the frequency $\nu$ becomes,
\begin{equation}
\gamma_{\nu} \equiv \frac{M_{\rm gas}}{L_{\nu}} = \frac{1}{4 \pi \kappa_{\nu}^{\prime} B_{\nu}(T_{\rm d})}. \label{eq:dustml}
\end{equation}
Once $\gamma_{\nu}$ is obtained, the gas mass is estimated as $M_{\rm gas}=\gamma_{\nu} L_{\nu}$.
Here, we use the character $\gamma$, instead of the $\alpha$ that \citet{Scoville:2016aa} used,
to avoid confusion with the CO-to-H$_2$ conversion factor.
Dust continuum emission is associated with H{\sc i} and H$_2$,
and $M_{\rm gas}\sim M_{\rm mol}$ in dense, molecule-dominated regions ($\gtrsim 100\,\rm cm^{-3}$).
The $\kappa_{\nu}^{\prime}$ can be approximated as a power-law
$\kappa_{\nu}^{\prime}=\kappa_{850\mu m}^{\prime} (\lambda/850\mu m)^{-\beta}$
with the spectral index $\beta \sim 1.8$ (\citealt{Planck-Collaboration:2011qy}) and
coefficient $\kappa_{850\mu m}^{\prime}$ at $\lambda=850\rm \mu m$ (352\,GHz).
In order to show the frequency dependence explicitly,
we separate $B_{\nu}(T_{\rm d})$ into the Rayleigh-Jeans term and the correction term $\Gamma_{\nu}(T_{\rm d})$
as $B_{\nu}(T_{\rm d})=(2\nu^2k T_{\rm d}/c^2) \Gamma_{\nu}(T_{\rm d})$, where
\begin{equation}
\Gamma_{\nu}(T_{\rm d}) = \frac{ x}{e^x-1} \,\,\,\, \textrm{with} \,\,\,\,x=\frac{h\nu}{k T_{\rm d}}.
\end{equation}
Equation (\ref{eq:dustml}) has the dependence
$\gamma_{\nu} \propto \nu^{-(\beta + 2)} T_{\rm d}^{-1} \Gamma_{\nu}(T_{\rm d})^{-1}$,
and the proportionality coefficient, including $\kappa_{850\mu m}^{\prime}$ and $\delta_{\rm GDR}$,
is evaluated empirically.
\citet{Scoville:2016aa} cautioned that $T_{\rm d}$ should not be derived from a spectral energy distribution fit (which gives
a luminosity-weighted average $T_{\rm d}$ biased toward hot dust
with a peak in the infrared). Instead, they suggested to use a mass-weighted $T_{\rm d}$
for the bulk dust component where the most mass resides.
\citet{Scoville:2016aa} adopted $T_{\rm d}=25\,\rm K$ and calibrated $\gamma_{\nu}$ at $850\,\mu\rm m$
from an empirical comparison of $M_{\rm mol}$ (from CO measurements) and $L_{\nu}$,
\begin{equation}
\left( \frac{\gamma_{\nu}}{M_{\odot} {\rm [Jy\,cm^2]}^{-1}} \right) =(1.5\pm 0.4) \times 10^{-43} \left( \frac{\nu}{352\,\rm GHz} \right)^{-3.8} \left( \frac{T_{\rm d}}{25\,\rm K} \right)^{-1} \left( \frac{\Gamma_{\nu}(T_{\rm d})}{\Gamma_{\nu 850\mu m}(25\,K)} \right)^{-1}.
\end{equation}
The luminosity is calculated from the observed $S_{\nu}$ in Jy and distance $D$ in centimetre
as $L_{\nu}=4 \pi D^2 S_{\nu}$ [$\rm Jy\,cm^2$]. The gas mass is then
$M_{\rm mol} = \gamma_{\nu} L_{\nu}$.
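Putting the pieces together, the dust-based mass estimate can be
sketched as follows in Python. The normalization is the empirical
value quoted above (note the $\rm Jy\,cm^2$ luminosity unit), and
$T_{\rm d}=25$\,K is the assumed mass-weighted dust temperature:
\begin{verbatim}
import math

H_OVER_K = 4.799e-11  # h/k [K per Hz]

def gamma_corr(nu_hz, t_d):
    """Rayleigh-Jeans correction Gamma_nu(T_d) = x / (e^x - 1)."""
    x = H_OVER_K * nu_hz / t_d
    return x / math.expm1(x)

def gas_mass_from_dust(s_nu_jy, d_mpc, nu_ghz, t_d=25.0):
    """M_gas [Msun] from the dust continuum flux density [Jy]."""
    lum = 4.0 * math.pi * (d_mpc * 3.086e24)**2 * s_nu_jy  # [Jy cm^2]
    gamma = 1.5e-43 * (nu_ghz / 352.0)**-3.8 * (25.0 / t_d) \
        * gamma_corr(352.0e9, 25.0) / gamma_corr(nu_ghz * 1e9, t_d)
    return gamma * lum

# e.g., a 1 mJy source at 850 um (352 GHz) and D = 5 Mpc
print(gas_mass_from_dust(1.0e-3, 5.0, 352.0))  # ~4.5e5 Msun
\end{verbatim}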
\subsection{The ISM in Extreme Environments Such as the
Outskirts}\label{sec:ISMextreme}\index{extreme
environment}\index{molecular mass: calibration issues}
The methods for molecular ISM mass measurement that we discussed above were developed
and calibrated mainly for the inner parts of galaxies.
However, it is not guaranteed that these calibrations are valid
in extreme environments such as galaxy outskirts.
In fact, metallicities appear to be lower in the outskirts than in the inner part (see Bresolin, this volume).
On a 1\,kpc scale average, gas and stellar surface densities,
and hence stellar radiation fields, are also lower, although it is not clear if these trends
persist at smaller scales, e.g., cloud scales, where the molecular ISM typically exists.
Empirically, $\alpha_{\rm 10}$ could be larger when metallicities are lower,
and $R_{\rm 21/10}$ could be smaller when gas density and/or temperature are lower.
In order to search for the molecular ISM and to understand star formation in the outskirts,
it is important to take into account the properties and conditions of the ISM there. Here we
explain some aspects that may bias measurements if the above equations
are applied naively as they are. These potential biases should not
discourage future research, and instead, should be adjusted
continuously as we learn more about the ISM in the extreme
environment.
\subsubsection{Variations of $\alpha_{\rm 10}$ (or $X_{\rm
CO}$)}\index{$X_{\rm CO}$ variations}
The CO-to-H$_2$ conversion factor $\alpha_{\rm 10}$ (or $X_{\rm CO}$)
is a mass-to-light ratio between the CO($1-0$) luminosity and the
molecular ISM mass (\citealt{Bolatto:2013ys}).
Empirically, this factor increases with decreasing metallicity\index{metallicity} (\citealt{Arimoto:1996aa, Leroy:2011lr})
due to the decreasing abundance of CO over H$_2$.
At the low metallicity of the Small Magellanic Cloud ($\sim 1/10 \, Z_{\odot}$),
$\alpha_{\rm 10}$ appears $\sim10-20$ times larger (\citealt{Arimoto:1996aa, Leroy:2011lr}).
This trend can be understood based on the self-shielding nature of molecular clouds.
Molecules on cloud surfaces are constantly photo-dissociated by stellar UV radiation.
At high densities within clouds, the formation rate of molecules can be as fast as
the dissociation rate, and hence molecules are maintained in molecular clouds.
The depth where molecules are maintained depends on the strength of the
ambient UV radiation field and its attenuation by line absorptions by
the molecules themselves as well as by continuum absorption by dust
(\citealt{van-Dishoeck:1988br}).
H$_2$ is $\sim10^4$ times more abundant than CO. It can easily become optically thick on
the skin of cloud surfaces and be self-shielded (Fig.~\ref{fig:codark}).
On the other hand, UV photons for CO dissociation penetrate deeper into the cloud
due to its lower abundance.
This process generates the CO-dark H$_2$\index{CO-dark H$_2$} layer around molecular clouds (Fig.~\ref{fig:codark}b; \citealt{Wolfire:2010fk}).
Shielding by dust is more important for CO than H$_{2}$. Therefore, if
the metallicity or dust abundance is low, the UV photons for CO
dissociation reach deeper and deeper, and eventually destroy all CO
molecules while H$_2$ still remains (Fig.~\ref{fig:codark}c).
As the CO-dark H$_2$ layer becomes
thicker, $L_{10}$ decreases while $M_{\rm H_2}$ stays high, resulting
in a larger $\alpha_{\rm 10}$ in a low metallicity environment, such
as galaxy outskirts. Since this process depends on the depth that
photons can penetrate (through dust attenuation as well as line absorption),
the visual extinction $A_{\rm V}$ is often used
as a parameter to characterize $\alpha_{\rm 10}$ (or $X_{\rm CO}$).
\begin{figure}[h]
\sidecaption
\includegraphics[scale=.38]{fig2.eps}
\caption{Self-shielding nature of molecules in molecular clouds. The abundance of molecules is maintained
in clouds, since the destruction (photo-dissociation by UV radiation) and formation rates are in balance.
The shielding from ambient UV radiation is mainly due to line absorption by molecules themselves.
Therefore, the abundant H$_2$ molecules become optically thick at the absorption line wavelengths
on the skin of clouds, while UV photons for CO dissociation can get deeper into clouds.
This mechanism generates the CO-dark H$_2$ layer on the surface of molecular clouds.
This layer can become thicker ({\it panels a, b, c}) under several conditions: e.g.,
lower metallicity or stronger local radiation field.
The CO-to-H$_2$ conversion factor $\alpha_{\rm 10}$ (or $X_{\rm CO}$) increases
with the increasing thickness of the CO-dark H$_2$ layer, and
therefore, with lower metallicity or stronger local radiation field}
\label{fig:codark}
\end{figure}
\subsubsection{Variations of $R_{\rm
21/10}$}\index{CO($J=2-1$)/CO($J=1-0$) variations}
The CO($2-1$) line emission is useful to locate
the molecular ISM and to derive a rough estimation of its
mass. However, the higher transitions inevitably suffer from
excitation conditions\index{excitation conditions}. Indeed, $R_{\rm
21/10}$ ($\equiv T_{\rm 21}/T_{\rm 10}$) has been observed to vary by
a factor of $2-3$ in the MW and in other nearby galaxies, e.g., between
star-forming molecular clouds (typically $R_{\rm 21/10}\sim 0.7-1.0$
and occasionally up to 1.2) and dormant clouds ($\sim0.4-0.7$), and
between spiral arms ($>0.7$) and inter-arm regions ($<0.7$;
\citealt{Sakamoto:1997ys, Koda:2012lr}). The variation may be negligible
for finding molecular gas, but may cause a systematic bias, for
example, in comparing galaxy outskirts with inner disks. It is
noteworthy that $R_{\rm 21/10}$ changes systematically with star
formation activity, and varies along the direction of the
Kennicutt-Schmidt relation, which can introduce a bias.
Theoretically, $R_{\rm 21/10}$ is controlled by three parameters: the volume density $n_{\rm H_2}$
and kinetic temperature $T_{\rm k}$ -- which determine the CO excitation condition due to
collisions -- and the column density $N_{\rm CO}$, which controls radiative transfer and
photon trapping (\citealt{Scoville:1974yu, Goldreich:1974jh}).
Figure~\ref{fig:co2110} shows the variation of $R_{\rm 21/10}$ with respect to $n_{\rm H_2}$
and $T_{\rm k}$ under the large velocity gradient (LVG) approximation.
In this approximation, the Doppler shift due to a cloud's internal velocity gradient
is assumed to be large enough such that any two parcels along the line
of sight do not overlap in velocity space. The front
parcel does not block emission from the back parcel, and the optical
depth is determined only locally within the parcel (or in small
$dv$). Therefore, the column density is expressed per velocity $N_{\rm CO}/dv$.
A typical velocity range in molecular clouds is adopted for this figure.
An average GMC in the MW has $n_{\rm H_2}\sim 300 \,\rm cm^{-3}$ and $T_{\rm k}\sim10 \,\rm K$
(\citealt{Scoville:1987vo}), which results in $R_{\rm 21/10}$ of $\sim0.6-0.7$.
If the density and/or temperature is a factor of $2-3$ higher due to a contraction before star formation
or feedback from young stars, the ratio increases to $R_{\rm 21/10}>0.7$. On the contrary,
if a cloud is dormant compared to the average, the ratio is lower $R_{\rm 21/10}<0.7$.
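The full calculation behind Fig.~\ref{fig:co2110} requires an LVG
radiative transfer code, but the qualitative behaviour can be
illustrated with a much simpler, optically thick LTE sketch: if both
lines are thermalized at $T_{\rm k}$ and observed against the cosmic
microwave background, the ratio is set by the Planck corrections
alone. Sub-thermal excitation of the $J=2$ level, which the LVG
models capture, pushes the ratio below these LTE values:
\begin{verbatim}
import math

T_BG = 2.73                       # CMB temperature [K]
NU10, NU21 = 115.271e9, 230.538e9
H_OVER_K = 4.799e-11              # h/k [K per Hz]

def j_nu(nu, t):
    """Radiation temperature (h nu/k) / (exp(h nu/kT) - 1)."""
    return (H_OVER_K * nu) / math.expm1(H_OVER_K * nu / t)

def r21_lte_thick(t_k):
    """CO 2-1/1-0 brightness temperature ratio, optically thick LTE."""
    return (j_nu(NU21, t_k) - j_nu(NU21, T_BG)) / \
           (j_nu(NU10, t_k) - j_nu(NU10, T_BG))

for t_k in (10.0, 20.0, 40.0):
    print(t_k, round(r21_lte_thick(t_k), 2))  # 0.79, 0.89, 0.95
\end{verbatim}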
\begin{figure}[t]
\sidecaption
\includegraphics[scale=.6]{fig3.eps}
\caption{The CO $J=2-1/1-0$ line ratios as function of the gas kinetic temperature $T_{\rm kin}$
and H$_2$ density $n_{\rm H2}$ under the LVG approximation (from \citealt{Koda:2012lr}).
Most GMCs in the MW have CO column density in the range of $\log(N_{\rm CO}/dv)\sim16.6$ to 17.3,
assuming the CO fractional abundance to H$_2$ of $8\times 10^{-5}$.
An average GMC in the MW has $n_{\rm H_2}\sim 300 \,\rm cm^{-3}$ and $T_{\rm k}\sim10 \,\rm K$,
and therefore shows $R_{\rm 21/10}\sim$0.6-0.7. $R_{\rm 21/10}$ is $<0.7$ if the density and/or temperature
decrease by a factor of $2-3$, and $R_{\rm 21/10}$ is $>0.7$ if the
density and/or temperature increase by a factor of $2-3$.
Observationally, dormant clouds typically have $R_{\rm 21/10}=0.4-0.7$,
while actively star forming clouds have $R_{\rm 21/10}=0.7-1.0$ (and occasionally up to $\sim1.2$; \citealt{Sakamoto:1997ys, Hasegawa:1997lr}).
There is also a systematic variation between spiral arms ($R_{\rm
21/10}>0.7$) and interarm regions ($R_{\rm 21/10}<0.7$; \citealt{Koda:2012lr})}
\label{fig:co2110}
\end{figure}
In the MW, cloud properties appear to change with the galactocentric radius (\citealt{heyer15}).
If their densities or temperatures are lower in the outskirts, it would result in a lower $R_{\rm 21/10}$,
and hence, a higher H$_2$ mass at a given CO($2-1$) luminosity.
If the $R_{\rm 21/10}$ variation is not accounted for, it could result
in a bias when clouds within the inner disk and in the outskirts are compared.
\subsubsection{Variations of Dust Properties and Temperature}\index{dust
emission variations}
The gas mass-to-dust luminosity $M_{\rm gas}/L_{\nu}$ depends on the
dust properties/emissivity ($\kappa_{\nu}$), dust temperature ($T_{\rm
d}$), and gas-to-dust ratio ($\delta_{\rm GDR}$) -- see Eqs.~(\ref{eq:dustml1}) and (\ref{eq:dustml}). All of these parameters
could change in galaxy outskirts, which have low average
metallicity\index{metallicity}, density, and stellar radiation
field. Of course, the assumption of a single $T_{\rm d}$ casts a
limitation to the measurement as the ISM is
multi-phase in reality, although the key idea of using
Eqs.~(\ref{eq:dustml1}) and (\ref{eq:dustml}) is to target regions
where the cold, molecular ISM is dominant (\citealt{Scoville:2016aa}).
The $\delta_{\rm GDR}$ may increase with decreasing metallicity by about an order of magnitude
($\delta_{\rm GDR} \sim 40 \rightarrow$ 400) for the change of metallicity $12+\log({\rm O/H})$
from $\sim 9.0 \rightarrow$ 8.0 (their Fig.~6; \citealt{Leroy:2011lr}).
If this trend applies to the outskirts, Eq.~(\ref{eq:dustml}) would tend to underestimate the gas mass
by up to an order of magnitude.
Excess dust emission at millimetre/submillimetre wavelengths has been
reported in the Small and Large Magellanic Clouds (SMC and LMC) and
other dwarfs (\citealt{Bot:2010aa, Dale:2012aa}; although see also
\citealt{Kirkpatrick:2013aa}). This excess emission appears
significant when spectral energy distribution fits to infrared data
are extrapolated to millimetre/submillimetre wavelengths. Among the
possible explanations are the presence of very cold dust, a change of
the dust spectral index, and spinning dust emission (e.g., \citealt{Bot:2010aa}).
\citet{Gordon:2014aa} suggested that variations in the dust emissivity
are the most probable cause in the LMC and SMC
from their analysis of infrared data from the {\it Herschel Space Observatory}.
The environment of galaxy outskirts may be similar to those of the LMC/SMC.
The excess emission (27\% and 43\% for the LMC and SMC, respectively;
\citealt{Gordon:2014aa}) can be ignored if one only needs to locate
dust in the vast outskirts, but could cause a systematic bias when the
ISM is compared between inner disks and outskirts.
\section{Molecular Gas Observations in the Outskirts of Disk Galaxies}\label{sec:disks}
A primary motivation for molecular gas observations in the outskirts of
disk galaxies has been to study molecular clouds and star formation in
an extreme environment with lower average density and
metallicity. Many researchers highlight that these studies may teach
us about the early Universe, where these conditions were more
prevalent.
\subsection{The Milky Way}\index{Milky Way: molecular gas}
The MW is the disk galaxy with the most molecular gas detections in
the outskirts, with pioneering studies of the outer disk
molecular gas and star formation properties beginning in the 1980s
(e.g., \citealt{fich84, brand88}). The MW
can serve as a model for the types of studies that can be done in
nearby galaxies with larger and more sensitive facilities. We will
use ``outer'' MW to refer to galactocentric radii between the solar
circle ($R_{\rm Gal} > R_{\odot} = 8.5 \, {\rm kpc}$) and the edge of
the optical disk, which is estimated to be at $R_{\rm Gal} \sim 13-19
\, {\rm kpc}$ (\citealt{ruffle07,sale10} and references therein). We
will use ``outskirts'' to refer to galactocentric radii beyond the
edge of the optical disk.
Only about $2\%$ of the molecular mass of the MW is at $R_{\rm Gal}
> 14.5 \, {\rm kpc}$ (\citealt{nakagawa05} estimated the molecular mass
at $R_{\rm Gal}>14.5 \, {\rm kpc}$ to be $2 \times 10^7 \, M_{\odot}$ while
\citealt{heyer15} estimated the total molecular mass of the Galaxy to
be $(1 \pm 0.3) \times 10^{9} \, M_{\odot}$). N. Izumi (personal communication)
collected the known molecular clouds with $R_{\rm Gal}>13.5 \, {\rm
kpc}$ in the second and third quadrants (Fig.~\ref{fig:izumi}). The
molecular cloud with the largest known galactocentric radius is
probably Digel Cloud 1 with a kinematic galactocentric radius of
$R_{\rm Gal} = 22 \, {\rm kpc}$, dynamical mass of $\sim 6 \times
10^{4} \, M_{\odot}$, and radius of $36 \, {\rm pc}$ (Digel
Cloud 2 has a larger kinematic distance of $R_{\rm Gal} = 24 \, {\rm
kpc}$, but the photometric distance is $R_{\rm Gal} = 15-19 \, {\rm
kpc}$ based on optical spectroscopy of an associated B
star; \citealt{digel94,yasui06,yasui08,izumi14}). Digel Cloud 1 is beyond
the edge of the optical disk but well within the H{\sc i} disk, which extends to
$R_{\rm Gal} \sim 30 \, {\rm kpc}$ (\citealt{digel94,ruffle07} and references
therein).
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig4.eps}
\caption{Figure from N. Izumi (personal communication) showing the known
molecular clouds at $R_{\rm Gal} > 13.5 \, {\rm kpc}$ in the second and third
quadrants overlaid on an artist's conception of the MW (R. Hurt:
NASA/JPL-Caltech/SSC). The colours correspond to the following surveys:
orange: \citet{brunt03}, magenta: \citet{sun15}, red: \citet{digel94},
cyan: \citet{brand94}, blue: \citet{may97}, green:
\citet{nakagawa05}, yellow: \citet{vazquez08}. The points represent
molecular clouds and the fan-shaped regions represent the survey
area. The distances were derived assuming $R_{\odot} = 8.5 \, {\rm
kpc}$ and a solar orbital speed of $V_{\odot} = 220 \, {\rm km \, s^{-1}}$}
\label{fig:izumi}
\end{center}
\end{figure}
Extremely tenuous H$_2$ gas is mixed with the H{\sc i} gas in the Galactic
halo, with an H$_2$-to-H{\sc i} fraction of only $10^{-5}$ to $10^{-4}$
(\citealt{lehner02}). Such tenuous H$_2$ is observed via UV
absorption, e.g., toward the Magellanic stream (\citealt{lehner02}) and high
velocity clouds (HVCs; \citealt{bluhm01}). This component is important
for understanding the complex physics of the ISM, but is not a major
molecular component in galaxy outskirts. We therefore do not discuss
this component further in this review.
\subsubsection{Properties of Molecular Clouds in the Outer Milky
Way}\index{molecular cloud properties}
In this Section we highlight studies that have compared the mass, size,
and mass surface density of molecular clouds in the outer MW to clouds
in the inner MW. Molecular clouds are the site of star formation, and hence,
comparisons of their properties between the inner and outer MW is important.
In general, molecular clouds in the outer MW have lower mass and mass
surface density than clouds in the inner disk. We also describe how
molecular clouds have been used to trace spiral arms into the
outskirts and to study relatively high-mass star formation.
\citet{heyer15} combined published data on the CO surface brightness
out to $R_{\rm Gal} \sim 20 \, {\rm kpc}$. The clouds in the outer
MW and outskirts are $\sim 7$ times fainter than clouds in the
inner MW (and even fainter relative to the Galactic
centre). Assuming a constant $X_{\rm CO}$, this corresponds to a
factor of $\sim 7$ decrease in the mass surface density of molecular
clouds. \citet{heyer15} argued that there is a real decrease in the
mass surface density of the molecular clouds, perhaps caused by the
lower mid-plane pressure or stronger local FUV radiation field in the
outer Galaxy. However, there is also evidence that the outer MW
requires a larger $X_{\rm CO}$ to convert the CO surface
brightness into the mass surface density (see Sect.~\ref{sec:ISMextreme}). Therefore the mass surface
density likely decreases by somewhat less than a factor of $\sim 7$.
The mass function of molecular clouds in the outer MW ($9.5 \,
{\rm kpc} \lesssim R_{\rm Gal} \lesssim 13.5 \, {\rm kpc}$ in this
study) has a steeper power law index than that in the inner MW, such
that the outer disk hosts more of its molecular mass in lower-mass
clouds (\citealt{rosolowsky05}, based on the $330 \, {\rm deg^2}$
\citealt{heyer98} catalogue and analysis in \citealt{heyer01} and
\citealt{brunt03}), although this conclusion may at some level be a
result of variable angular resolution (\citealt{heyer15}). The mass
function of the outer MW shows no clear evidence for a
truncation at the high-mass end, but under some assumptions
\citet{rosolowsky05} estimated that the maximum molecular cloud mass
is $\sim2-3 \times 10^{5} \, M_{\odot}$. In contrast,
\citet{rosolowsky05} concluded that the inner MW shows a clear
truncation with maximum molecular cloud mass of $\sim 3 \times 10^{6}
\, M_{\odot}$. Because of the small number of known clouds, the
apparent lack of massive clouds in the outer MW might be due to a
sampling effect. This possibility should be addressed in future
studies, as a truncation, if it exists, would be an important clue to
understanding cloud physics in the outskirts.
\citet{heyer01} concluded that the size distribution of molecular
clouds in the outer MW is similar to the distribution in the
inner MW from \citet{solomon87}, but note that surveys
with fewer clouds and different galactocentric distance ranges reached
different conclusions. \citet{may97} concluded that outer MW
clouds have smaller sizes than the inner MW while
\citet{brand95} concluded that the outer MW clouds have larger
sizes than inner MW clouds at the same mass. While there are
conflicting results in the literature, it seems natural to conclude
that an outer MW cloud must have a larger radius than an inner MW
cloud at the same mass because it appears that the mass surface
density of clouds is lower in the outer MW (see above and
\citealt{heyer15}).
Molecular gas observations in the outskirts of the MW have been
used to identify spiral arms\index{spiral arms}. \citet{dame11}
discovered a spiral arm in the first quadrant at $R_{\rm Gal} \sim 15
\, {\rm kpc}$, based on H{\sc i} and CO data. Their new arm is consistent
with being an extension of the Scutum-Centaurus arm. \citet{sun15}
also used H{\sc i} and CO data to discover an arm in the second quadrant at
$R_{\rm Gal} = 15 - 19 \, {\rm kpc}$. This arm could be a further
continuation of the Scutum-Centaurus arm and the \citet{dame11}
arm. These kinds of studies are important not only to map the spiral
structure of the MW, but also to help understand the observation that
star formation in the outskirts of other galaxies often follows spiral
arms.
Another important goal of molecular gas studies in the outskirts of
the MW has been to understand the connection with star
formation\index{star formation: Milky Way} under low density and
metallicity conditions. For example, \citet{brand07} studied an
IRAS-selected molecular cloud with a mass of $4.5 - 6.6 \times 10^{3}
\, M_{\odot}$ at $R_{\rm Gal} \sim 20.2 \, {\rm kpc}$. They discovered
an embedded cluster of 60 stars and the lack of radio continuum
emission limits the most massive star to be later than B0.5. In
addition, \citet{kobayashi08} studied Digel Cloud 2, which is really
two clouds each with a mass of $\sim 5 \times 10^{3} \,
M_{\odot}$. They discovered embedded clusters in each of the
clouds. One cluster likely contains a Herbig Ae/Be star and there are
also several Herbig Ae/Be star candidates, a B0-B1 star, and an H{\sc ii}
region nearby. Therefore, high-mass star formation has occurred near
this low-mass molecular cloud. We encourage more study on the
relationship between cloud mass and the most massive star
present, as extragalactic studies can trace O and B stars relatively
easily, but have difficulty detecting the parent molecular clouds (see
Sect.~\ref{sec:disk_galaxies_mol_gas}).
In the outskirts of the MW and other galaxies, it is important
to ask what triggers molecular cloud and star formation. In Digel
Cloud 2, star formation may have been triggered by the expanding H{\sc i}
shell of a nearby supernova remnant
(\citealt{kobayashi00,yasui06,kobayashi08}) while \citet{izumi14}
hypothesized that the star formation in Digel Cloud 1 may have been
triggered by interaction with a nearby HVC.
\subsection{Extragalactic Disk Galaxies}\index{disk galaxies}
We can study molecular gas in more varied environments by moving from
the MW to extragalactic disk galaxies. In this Section, we use
``outskirts'' to refer to galactocentric radii greater than the
optical radius ($R_{\rm Gal}>r_{25}$).
\subsubsection{Molecular Gas Detections}\index{molecular clouds: extragalactic}
\label{sec:disk_galaxies_mol_gas}
Numerous attempts to detect CO beyond the optical radius in the disks
of spiral galaxies have failed, although many of the non-detections
are unpublished (\citealt{watson16,morokuma16}; J.\ Braine, F.\ Combes,
J.\ Donovan Meyer, and A.\ Gil de Paz, personal communications). To our
knowledge, there are only four isolated spiral galaxies
with published CO detections beyond the optical radius
(\citealt{braine04,braine07,braine10,braine12,dessauges14}). Table~\ref{tab:disk_galaxies}
summarizes the number of detected regions and their range of
galactocentric radii and molecular gas masses. Extragalactic studies
have not yet reached the molecular gas masses that are typical in the
outskirts of the MW ($2-20 \times 10^{3} \, M_{\odot}$
for the eleven Digel clouds at $R_{\rm Gal} = 18-22 \, {\rm kpc}$;
\citealt{digel94,kobayashi08}; see also \citealt{braine07}).
\begin{table}
\caption{Extragalactic disk galaxies in relative isolation with
CO detections beyond the optical radius
(\citealt{braine04,braine07,braine10,braine12,dessauges14}).
For M33, the molecular gas mass is for one of the detected clouds. For M63,
the molecular gas mass is based on a sum of the CO line intensities in
twelve pointings, two of which are detections. The NGC~4414,
NGC~6946, and M63 masses were computed assuming $X_{\rm
CO} = 2 \times 10^{20} \, {\rm cm^{-2} (K \, km \, s^{-1})^{-1}}$.}
\label{tab:disk_galaxies}
\begin{tabular}{p{1.5cm}p{1.5cm}p{2cm}p{1.5cm}p{4.5cm}}
\hline\noalign{\smallskip}
Galaxy & Detected & Galactocentric & Molecular & Method used for Mass \\
& Regions & Radius & Gas Mass & \\
& (\#) & ($r_{25}$) & ($10^{5} \, M_{\odot}$) & \\
\noalign{\smallskip}\svhline\noalign{\smallskip}
NGC~4414 & 4 & $1.1 - 1.5$ & $10-20$ & Within $21''$ IRAM $30 \, {\rm m}$ beam \\
NGC~6946 & 4 & $1.0 - 1.4$ & $1.7-3.3$ & Within $21''$ IRAM $30 \, {\rm m}$ beam \\
M33 & 6 & $1.0 - 1.1$ & $0.43$ & Virial mass using resolved PdBI data \\
M63 & 2 & $1.36$ & $7.1$ & Sum of 12 IRAM $30 \, {\rm m}$ pointings \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
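For reference, the two mass methods used in
Table~\ref{tab:disk_galaxies} can be sketched as follows (exact
coefficients and helium corrections vary between studies). The
$X_{\rm CO}$-based estimate converts the integrated CO intensity
$I_{\rm CO}$ (in ${\rm K \, km \, s^{-1}}$) into an H$_{2}$ column
density and then into a mass over the physical area $A$ covered by the
beam,
\begin{equation}
N_{\rm H_{2}} = X_{\rm CO} \, I_{\rm CO} , \qquad
M_{\rm H_{2}} = 2 m_{\rm H} \, N_{\rm H_{2}} \, A ,
\end{equation}
while the virial estimate for a resolved cloud of radius $R$ and
line-of-sight velocity dispersion $\sigma_{v}$ is of order
\begin{equation}
M_{\rm vir} \approx \frac{5 \, \sigma_{v}^{2} R}{G} ,
\end{equation}
with the numerical coefficient depending on the assumed density
profile.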
It would be useful to be able to predict where CO will be detected in
the outskirts of disk galaxies, both as a test of our understanding of
the physics of CO formation and destruction in extreme conditions (see
Sect.~3.4) and to help us efficiently collect more detections. Most of
the published CO studies selected high H{\sc i} column density regions or
regions near young stars traced by H$\alpha$, FUV, or FIR
emission. None of these selection methods is completely
reliable. \citet{braine10} concluded that CO is often associated with
large H{\sc i} and FIR structures, but it is not necessarily located at H{\sc i},
FIR, or H$\alpha$ peaks. Many factors might affect the association
between H{\sc i}, CO and star formation tracers. For example, the star forming
regions may drift away from their birthplaces over the $10-100 \, {\rm
Myr}$ timescales traced by H$\alpha$, FUV, and FIR emission. In
addition, feedback from massive stars might destroy molecular clouds
more easily in the low-density outskirt environment. Finally, higher-resolution H{\sc i} maps may show a better correlation with CO
emission. Sensitive, large-scale ($> {\rm kpc}$) maps of the outskirts
of disk galaxies may allow for a more impartial study of the
conditions that maximize the CO detection rate.
\subsubsection{Star Formation in Extragalactic Disk
Galaxies}\index{star formation: extragalactic disk galaxies}
It is generally accepted that stars form from molecular gas
(e.g., \citealt{fukui10}) and that an important stage before star formation is
the conversion of H{\sc i} to H$_{2}$ (e.g., \citealt{leroy08}). A main tool to
study the connection between gas and star formation is the
Kennicutt-Schmidt law\index{Kennicutt-Schmidt law}
(\citealt{schmidt59,kennicutt98}), which is an empirical relationship
between the star formation rate (SFR) surface density ($\Sigma_{\rm
SFR}$) and the gas surface density. Within the optical
disk of spiral galaxies, there is an approximately linear correlation
between $\Sigma_{\rm SFR}$ and the molecular hydrogen surface density
($\Sigma_{\rm H_{2}}$) but no correlation between $\Sigma_{\rm SFR}$
and the atomic hydrogen surface density ($\Sigma_{\rm HI}$; e.g.,
\citealt{bigiel08,schruba11}).
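Schematically (the fitted coefficients differ between samples and
tracers), the disk-averaged relation has the power-law form
\begin{equation}
\Sigma_{\rm SFR} \propto \Sigma_{\rm gas}^{N} , \qquad N \approx 1.4
\end{equation}
(\citealt{kennicutt98}), while the approximately linear molecular
relation is often recast as a molecular gas depletion time,
\begin{equation}
t_{\rm dep} \equiv \frac{\Sigma_{\rm H_{2}}}{\Sigma_{\rm SFR}} \approx 2 \, {\rm Gyr}
\end{equation}
in the optical disks of nearby spirals (\citealt{bigiel08,leroy13});
the same definition, with $\Sigma_{\rm HI}$ in place of
$\Sigma_{\rm H_{2}}$, underlies the H{\sc i} depletion times discussed
below.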
The majority of the published work connecting the SFR and gas density
in the outskirts of disk galaxies has focused on the atomic gas
because molecular gas is difficult to detect
(Sect.~\ref{sec:disk_galaxies_mol_gas}) and because the ISM is
dominantly atomic in the outskirts, at least on $\gtrsim \, {\rm kpc}$
scales. \citet{bigiel10} concluded that there is a correlation between
the FUV-based $\Sigma_{\rm SFR}$ and $\Sigma_{\rm HI}$ in the
outskirts of 17 disk galaxies and 5 dwarf galaxies. They measured a
longer depletion time in the outskirts, such that it will take on average
$10^{11}$ years to deplete the H{\sc i} gas reservoir in the outskirts
versus $10^{9}$ years to deplete the ${\rm H_{2}}$ gas reservoir
within the optical disk. \citet{roychowdhury15} reached a similar
conclusion using H{\sc i}-dominated regions in disks and dwarfs, including
some regions in the outskirts, although they concluded that the depletion
time is somewhat shorter than in the outskirts of the \citet{bigiel10}
sample (see also \citealt{boissier07,dong08,barnes12}). The correlation
between $\Sigma_{\rm SFR}$ and $\Sigma_{\rm HI}$ is surprising because
there is no correlation within the optical disk. \citet{bigiel10}
suggested that high H{\sc i} column density is important for determining
where stars will form in the outskirts.
The study of the connection between molecular gas and star formation
in the outskirts has been limited by the few molecular gas
detections. Figures~5 and 6 show the relationship between $\Sigma_{\rm
SFR}$ and $\Sigma_{\rm H_{2}}$ for the molecular gas detections from
Table~\ref{tab:disk_galaxies} plus a number of deep CO upper
limits. In both panels the SFR was computed based on FUV and $24 \,
{\rm \mu m}$ data to account for the star formation that is unobscured
and obscured by dust.
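The precise calibration varies between studies, but a widely used
hybrid of this kind (e.g., that of \citealt{leroy08}) combines the FUV
and $24 \, {\rm \mu m}$ intensities, both in ${\rm MJy \, sr^{-1}}$,
linearly:
\begin{equation}
\Sigma_{\rm SFR} \, [M_{\odot} \, {\rm yr^{-1} \, kpc^{-2}}] =
\left( 8.1 \times 10^{-2} \, I_{\rm FUV} + 3.2 \times 10^{-3} \, I_{24} \right) \cos i ,
\end{equation}
where the $\cos i$ factor corrects the surface density to its face-on
value.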
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig5.eps}
\end{center}
\caption{Figure 7 from \citet{dessauges14} showing
the molecular-hydrogen Kennicutt-Schmidt relation for the star forming
regions in the UV-complex at $r=1.36 \, r_{25}$ in M63 (red points)
compared to regions within the optical disk (blue points). The blue
line shows the fit for the optical disk. The black lines represent
constant star formation efficiency, assuming a timescale of $10^{8}$
years. Credit: \citet{dessauges14}, reproduced with permission \copyright\ ESO}
\end{figure}
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig6.eps}
\end{center}
\caption{The molecular hydrogen Kennicutt-Schmidt relation
for the remaining star
forming regions that are beyond the optical radius in isolated
extragalactic disk galaxies and have published CO detections or
deep upper limits. The solid line shows the fit for the optical disk
of normal spiral galaxies at $\sim$kpc resolution, with the $1\sigma$
scatter shown by the dotted lines (\citealt{leroy13}). This figure was
originally presented in Fig.~4 in \citet{watson16}}
\end{figure}
\citet{dessauges14} studied a UV-bright region at $r=1.36 \, r_{25}$
in the XUV disk of M63 (Fig.~5). They detected CO in two out of
twelve pointings and concluded that the molecular gas has a low star
formation efficiency\index{star formation efficiency} (or,
equivalently, the molecular gas has a long depletion time) compared to
regions within the optical disk. They suggested that the low star
formation efficiency may be caused by a warp or by high
turbulence. \citet{watson16} measured a deep CO upper
limit in a region at $r=3.4 \, r_{25}$ in the XUV disk of NGC~4625 and
compiled published CO measurements and upper limits for 15 regions in
the XUV disk or outskirts of NGC~4414, NGC~6946, and M33 from
\citet{braine04} and \citet{braine07,braine10} (see Table~\ref{tab:disk_galaxies}
and Fig.~6). They concluded that star-forming regions in the
outskirts are in general consistent with the same $\Sigma_{\rm
SFR}$-$\Sigma_{\rm H_{2}}$ relationship that exists in the optical
disk. However, some points are offset to high star formation
efficiency (short depletion time), which may be because the authors selected
H$\alpha$- or FUV-bright regions that could have already exhausted
some of the molecular gas supply (as in \citealt{schruba10,kruijssen14}).
We should ask what stimulates the formation of molecular gas and stars
in the outskirts of disk galaxies. \citet{thilker07} suggested that
interactions may trigger the extended star formation in XUV disks
while \citet{holwerda12} suggested that cold accretion may be more
important. \citet{bush08,bush10} carried out hydrodynamic simulations
and concluded that spiral density waves can raise the density in an
extended gas disk to induce star formation (see also Sect.~4.1.1 of Debattista et al., this volume).
The state-of-the-art data from SINGS (\citealt{kennicutt03}), the
{\it GALEX} Nearby Galaxy Survey (\citealt{gildepaz07}), THINGS
(\citealt{walter08}), and HERACLES (\citealt{leroy09}) brought new insight
into the Kennicutt-Schmidt law within the optical disk of
spirals. Deeper CO surveys over wider areas in the outskirts could
bring a similar increase in our understanding of star formation at the
onset of the H{\sc i}-to-H$_{2}$ transition. In such wide-area studies, one
should keep in mind that the ``standard'' physical conditions of gas in
inner disks may not hold in the outskirts, which could affect the
measurements (Sect.~\ref{sec:ISMextreme}).
\subsubsection{Theory}\index{theory}
This Chapter focuses on observations, but here we briefly highlight
theoretical works that are related to molecular gas in the
outskirts. The majority of the relevant theoretical studies have
concentrated on the origin of gas in the outskirts
(e.g., \citealt{dekel06,sancisi08,sanchez14,mitra15}) and star
formation in the outskirts
(\citealt{bush08,bush10,ostriker10,krumholz13,sanchez14};
see also \citealt{roskar10,khoperskov15}). \citet{krumholz13} is
particularly relevant because he extended earlier work to develop an
analytic model for the atomic and molecular ISM and star formation in
outer disks. Krumholz assumed that hydrostatic equilibrium sets the
density of cold neutral gas in the outskirts and was able to match
the \citet{bigiel10} observations that show a correlation between
$\Sigma_{\rm SFR}$ and $\Sigma_{\rm HI}$ (see also Sect.~7 of Elmegreen and Hunter, this volume).
\section{Molecular Gas Observations in the Outskirts of Early-Type
Galaxies}\index{early-type galaxies}\label{sec:ellipticals}
\label{sec:early_types}
Early-type galaxies were historically viewed as ``red and dead,'' with
little gas to form new stars. However, more recent surveys have found
reservoirs of cold gas both at galaxy centres and in the
outskirts. Molecular gas in the centres of early-type galaxies can
have an internal and/or external origin while the molecular gas in the
outskirts often originated in a gas-rich companion that has interacted
or merged with the early-type. As in all of the environments we have
explored, stimuli can also trigger new molecule formation in the
outskirts of early-types.
We start with a review of H{\sc i} in the inner and outer regions of
early-type galaxies to put the molecular gas observations in
context. The ATLAS$^{\rm 3D}$ survey detected H{\sc i} in 32\% of 166
early-type galaxies in a volume-limited sample, down to a $3\sigma$
upper limit of $M_{\rm HI} = 5 \times 10^{6} - 5 \times 10^{7} \,
{M_{\odot}}$. Atomic gas in the outskirts of early-type galaxies is
also relatively common: 14\% of the ATLAS$^{\rm 3D}$ sample have H{\sc i}
that extends out to more than 3.5 times the optical effective radius
(\citealt{serra12}).
Most surveys of molecular gas in early-type galaxies have focused on
the inner regions. In the ATLAS$^{\rm 3D}$ sample, 22\% of 260
early-type galaxies were detected in CO, down to a $3\sigma$ upper limit of
$M_{\rm H_{2}} \sim 10^{7} - 10^{8} \, {M_{\odot}}$ (\citealt{young11};
see also \citealt{sage89,knapp96,welch03,combes07,welch10}). Within
the areas searched, the molecular gas is generally confined to the
central few kpc and is distributed in disks, bars plus rings, spiral
arms, or with a disrupted morphology (\citealt{young02, welch03, young08,
davis13, alatalo13}).
One important motivation for studies of molecular gas in early-type
galaxies has been to determine whether the gas is of internal or
external origin. Some of the molecular gas has likely either
been present since the galaxies transitioned to being early-type or
has accumulated from stellar mass loss (\citealt{faber76, young02,
young08, mathews03, ciotti10}). In contrast, some molecular gas has
likely been accreted more recently through minor mergers and/or cold
accretion. This external origin is most clearly exhibited by galaxies
that display a misalignment between the kinematic axes of the
molecular/ionized gas and the stars (\citealt{young08, crocker08,
davis11, alatalo13}). In particular, \citet{alatalo13} concluded that 15
galaxies out of a sample of 40 show a kinematic misalignment of at
least 30 degrees, which is consistent with gas accretion via minor
mergers.
Most of the accreting gas is probably in the atomic form, but
the outskirts of early-type galaxies also offer the opportunity to study
recently accreted molecular gas, which has mainly been detected in
polar rings of elliptical and S0 galaxies\index{polar-ring galaxies}
(see Fig.~\ref{fig:polar_ring} for an example). These polar rings
are present in about $0.5\%$ of nearby S0 galaxies
(\citealt{whitmore90}). CO has been detected in polar rings at
galactocentric radii of $12 \, {\rm kpc}$ in NGC~660 (\citealt{combes92})
and $2 \, {\rm kpc}$ in NGC~2685 (\citealt{schinnerer02}; see also
\citealt{watson94, galletta97, combes13}). Published values
for the mass of molecular hydrogen in the polar rings range from $8-11
\times 10^{6} \, {M_{\odot}}$ in NGC~2685 (\citealt{schinnerer02}) to
$10^{9} \, {M_{\odot}}$ in NGC~660 (\citealt{combes92}), although the
handful of polar rings with CO detections are likely biased towards
high $M_{\rm H_{2}}$.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth, angle=-90]{fig7.eps}
\caption{Figure~2 from \citet{watson94} showing the Caltech
Submillimeter Observatory CO($2-1$) spectra ({\it left}) at three
pointings, which are indicated by circles in the B-band image of the
polar-ring galaxy NGC~4650A (\citealt{whitmore87}) on the
{\it right}. \citet{watson94} estimated the mass of molecular hydrogen in
the polar ring of NGC 4650A to be $M_{\rm H_{2}} = 8-16 \times 10^{8}
\, M_{\odot}$. \copyright\ AAS. Reproduced with permission}
\label{fig:polar_ring}
\end{center}
\end{figure}
Polar rings are likely caused by tidal accretion from, or a
merger\index{galaxy merger} with, a gas-rich companion and are stable on
timescales of a few Gyr as a result of self-gravity
(\citealt{bournaud03}). The molecular gas observations generally support
this hypothesis because the molecular gas masses are consistent with
those of a dwarf or spiral galaxy (\citealt{watson94, galletta97, schinnerer02}).
Mergers between an early-type galaxy and a gas-rich companion can
manifest in non-polar ring systems as well. \citet{buta95} studied the
spheroid-dominated spiral galaxy NGC~7217 and concluded that most of
the molecular mass is in an outer star-forming ring at $R_{\rm Gal}
\sim 0.6 \, r_{25}$ that could have an H$_{2}$ mass that is equal to
or greater than the H{\sc i} mass. More recent work by
\citet{silchenko11} indicates that minor mergers may be responsible
for the outer ring structures.
Molecular gas has also been detected in shells at a galactocentric
radius of $15 \, {\rm kpc}$ ($1.16 \, r_{25}$) in the elliptical
galaxy Centaurus A (\citealt{charmandaris00}). \citet{charmandaris00}
calculated the mass of molecular hydrogen in the CenA shells to be
$M_{\rm H_{2}} = 4.3 \times 10^{7} \, M_{\odot}$. Like polar rings,
shells are likely caused by galaxy interactions\index{galaxy interactions} and
\citet{charmandaris00} concluded that CenA interacted with a
massive spiral galaxy rather than a low-mass dwarf galaxy because of
the large total gas mass and large ratio of molecular to atomic gas in
CenA. Additional molecular cloud formation may have been triggered
by the interaction between the shells and the CenA radio jet (see
also \citealt{salome16}).
\section{Molecular Gas Observations in Galaxy Groups and
Clusters}\index{group environment}\index{cluster environment}\label{sec:groups}
Extended H{\sc i} gas disks beyond optical edges are common around spiral galaxies,
and as already discussed, some stimulus seems necessary to accelerate molecule formation there.
In the group/cluster environment, galaxy interactions and interactions
with the intergalactic medium (IGM)\index{intergalactic medium (IGM)}
are triggers for the H{\sc i} to H$_2$ phase transition. In the nearby M81
triplet (M82, M81, and NGC 3077), tidal interactions stretch the
atomic gas in the outskirts into tidal spiral arms, leading to
gravitational collapse to form molecular gas and stars
(\citealt{Brouillet:1992aa, Walter:2006aa}). Even an interaction with a
minor partner can be a trigger, e.g., in the M51 system, CO emission
is detected along the tidal arm/bridge between the main galaxy NGC
5194 and its companion NGC 5195 (\citealt{Koda:2009wd}).
Interaction with the IGM in clusters is also important for the gas phase transition.
Most H{\sc i} gas in galaxy outskirts is stripped away by the ram pressure from the IGM
(\citealt{van-Gorkom:2004aa}), while the molecular gas, which resides
mostly in inner disks, remains less affected (\citealt{Kenney:1989aa, Boselli:1997aa}).
Some compression appears to act on the molecular gas near the
transition from the molecular-dominated inner disk to the
atomic-dominated outer disk, since the molecular disks are more compact
in galaxies whose outer H{\sc i} has been stripped (\citealt{Boselli:2014ab}).
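The radial dependence of stripping follows from the classic Gunn \&
Gott (1972) criterion: in sketch form, gas is removed where the ram
pressure exceeds the gravitational restoring force per unit area,
\begin{equation}
\rho_{\rm IGM} \, v^{2} \gtrsim 2 \pi G \, \Sigma_{\star} \, \Sigma_{\rm gas} ,
\end{equation}
where $v$ is the galaxy's velocity through the IGM. Because both
$\Sigma_{\star}$ and $\Sigma_{\rm gas}$ decline outwards, the diffuse
H{\sc i} in the outskirts is removed first, while the dense inner
molecular disk resists.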
The stripped gas in the outskirts is multiphase and has been
detected in H{\sc i} (e.g., \citealt{Chung:2009aa}), H$\alpha$ (e.g., \citealt{Yagi:2010ve}),
and X-rays (e.g., \citealt{Wang:2004aa, Sun:2010aa}).
Stripped molecular gas is found in NGC 4438 and NGC 4435, which are
interacting galaxies in the Virgo cluster (\citealt{Vollmer:2005aa}).
CO emission has also been discovered in the trailing tails of the
stripped gas from the disk galaxies ESO137-001 and NGC~4388 in the
Norma and Virgo clusters, respectively
(\citealt{Jachym:2014aa,verdugo15}).
The ram pressure from the IGM can also heat up and excite H$_2$
molecules, and H$_2$ rotational emission lines\index{H$_2$ emission}
are detected in the mid-infrared in spiral galaxies in the Virgo cluster
(\citealt{Wong:2014aa}). The emission from warm H$_2$ is also detected
over large scales in the intergalactic space of Stephan's Quintet
galaxy group with the {\it Spitzer Space Telescope} (\citealt{Appleton:2006aa}).
An analysis of the rotational transition ladder of the ground
vibrational state suggests two molecular gas components with
temperatures of $185\pm 30$\,K and $675\pm 80$\,K. This H$_{2}$
emission coincides with and extends along
the X-ray-emitting shock front that is generated by the galaxy NGC
7318b passing through the IGM at a high velocity.
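Temperatures like these are typically derived from an excitation
diagram: if the rotational levels are thermalized at a single
excitation temperature $T_{\rm ex}$, the column density $N_{u}$ in an
upper level with energy $E_{u}$ and degeneracy $g_{u}$ obeys the
Boltzmann relation
\begin{equation}
\ln \frac{N_{u}}{g_{u}} = \ln \frac{N_{\rm tot}}{Z(T_{\rm ex})} - \frac{E_{u}}{k_{\rm B} T_{\rm ex}} ,
\end{equation}
where $Z$ is the partition function, so that $\ln (N_{u}/g_{u})$
versus $E_{u}/k_{\rm B}$ is a straight line of slope $-1/T_{\rm ex}$;
two distinct slopes in the observed ladder indicate the two temperature
components found here.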
A final example of the cluster environment affecting molecular
gas formation is that CO has been detected in cooling flows in the
outskirts of galaxies in cluster cores (e.g., \citealt{salome06}).
Clearly, the group and cluster environments produce some triggers for
the formation of molecular gas in galaxy outskirts and therefore
represent another extreme environment where we can test our
understanding of the physics of the ISM and star formation.
\section{Conclusions and Future Directions}\index{future}\label{sec:future}
Throughout the Chapter, we have highlighted that some stimuli seem
necessary to accelerate the formation of molecular gas in galaxy
outskirts. In the outskirts of the MW, stimuli include spiral arm
compression, expanding shells from supernova remnants, and
interactions with HVCs (\citealt{yasui06,izumi14,Koda:2016aa}). These same
processes are likely at play in the outskirts of
extragalactic disk galaxies. In particular, spiral density waves,
interactions, and/or cold accretion may stimulate molecule formation and the
subsequent star formation activity in XUV disks
(\citealt{thilker07,bush08,holwerda12}). Interactions and mergers likely
cause the polar rings in the outskirts of S0 galaxies, although it may
be more likely that the molecules form in the gas-rich companion
before the merger (\citealt{bournaud03}). Finally, in groups and clusters,
interactions and ram pressure stripping may accelerate molecular gas
formation in some localized areas of galaxies even as the overall
effect is to remove the star-forming fuel from the galaxies
(\citealt{Vollmer:2005aa,Jachym:2014aa}). Galaxy outskirts offer
opportunities to study the formation of molecular gas over a variety
of conditions and will be key to understanding whether there are
different modes of star formation.
Fundamental questions remain about the physical conditions of the
ISM in the outskirts. Where is the molecular gas? What
are the basic properties of the molecular clouds, e.g., the H$_{2}$
volume density, H$_{2}$ column density, temperature, mass, and size?
How do these properties differ from the properties of molecular clouds
in the inner regions of galaxies? Is the transition from H{\sc i} to H$_{2}$
and the transition from H$_{2}$ to stars more or less efficient in the
outskirts? Are these phase transitions affected by different
large-scale processes, stimuli, or environmental conditions compared
to inner regions? Measurements of molecular gas properties often
depend on assumptions about the gas properties themselves. At present,
those assumptions are based on our knowledge of molecular gas in inner
disks; they will need to be revisited and adjusted continuously as we
learn more about molecular gas in the outskirts. This iterative
improvement of our knowledge is now starting in the field of galaxy
outskirts.
Building on the research that has already been done, we have
identified a number of specific studies that would begin to address
the fundamental questions above. In the outskirts of the MW, we can
study whether the relationship between the mass of the molecular cloud
and the most massive associated star is different than in the inner
MW. In the outskirts of extragalactic disk galaxies, we need to
measure the mass and size functions of molecular clouds and compare
them to the MW results. In addition, theoretical studies can work towards
predicting where and how molecular gas will form in the outskirts. To
test these predictions, we encourage sensitive and wide-area mapping
of CO and/or dust continuum emission. Higher resolution (cloud-scale)
maps of H{\sc i} may also be required to accurately locate potential sites
of molecular gas formation. After each discovery of molecular gas,
subsequent multi-wavelength studies including excitation ladders of
molecular line emission are necessary to refine our knowledge of the
physical conditions of molecular gas there. In early-type galaxies, we
should search for molecular gas in XUV disks, as XUV emission could be
even more common in early-type galaxies than late-type galaxies
(\citealt{moffett12}). We hope that researchers undertaking such
searches will take note of, and learn from, the high failure rate of
previous (published and unpublished) searches for molecular gas in the
outskirts of disk galaxies.
\begin{acknowledgement}
We are grateful to Fran\c coise Combes, Jennifer Donovan Meyer, Natsuko
Izumi, and Hiroyuki Nakanishi for their advice, reading suggestions,
and comments. We also thank Natsuko Izumi for allowing us to use her
figure for the distribution of molecular clouds in the outer MW
(Fig.~\ref{fig:izumi}). JK thanks the NAOJ Chile observatory, a
branch of the National Astronomical Observatory of Japan, and the
Joint ALMA Observatory for hospitality during his sabbatical visit.
JK acknowledges the support from NASA through grant NNX14AF74G.
\end{acknowledgement}